# Hessian-aware Quantized Node Embeddings for Recommendation

Huiyuan Chen (Visa Research, Palo Alto, CA, USA), Kaixiong Zhou (Rice University, Houston, TX, USA), Kwei-Herng Lai (Rice University, Houston, TX, USA), Chin-Chia Michael Yeh (Visa Research, Palo Alto, CA, USA), Yan Zheng (Visa Research, Palo Alto, CA, USA), Xia Hu (Rice University, Houston, TX, USA), and Hao Yang (Visa Research, Palo Alto, CA, USA)

###### Abstract.

Graph Neural Networks (GNNs) have achieved state-of-the-art performance in recommender systems. Nevertheless, searching and ranking over a large item corpus usually incurs high latency, which limits the widespread deployment of GNNs in industry-scale applications. To address this issue, many methods compress user/item representations into a binary embedding space to reduce space requirements and accelerate inference. They also use the Straight-through Estimator (STE) to prevent vanishing gradients during back-propagation. However, the STE often causes a gradient mismatch problem, leading to sub-optimal results. In this work, we present the Hessian-aware Quantized GNN (HQ-GNN) as an effective solution for learning discrete representations of users/items that enable fast retrieval. HQ-GNN is composed of two components: a GNN encoder for learning continuous node embeddings and a quantized module for compressing full-precision embeddings into low-bit ones. Consequently, HQ-GNN benefits from both lower memory requirements and faster inference speed compared to vanilla GNNs. To address the gradient mismatch problem in STE, we further consider the quantization errors and their second-order derivatives for better stability. Experimental results on several large-scale datasets show that HQ-GNN achieves a good balance between latency and performance.

Collaborative Filtering, Graph Neural Networks, Low-bit Quantization, Generalized Straight-Through Estimator

RecSys ’23, September 18–22, 2023, Singapore, Singapore. DOI: 10.1145/3604915.3608826. ISBN: 979-8-4007-0241-9/23/09.

## 1\. Introduction

Recommender systems play an important role in e-commerce, for tasks such as display advertising and product ranking (Huang et al., 2020; Chen et al., 2021). Among different recommender models, Graph Neural Networks (GNNs) have achieved cutting-edge performance on top-$k$ recommendation (Ying et al., 2018; Wang et al., 2019; He et al., 2020; Huang et al., 2021). For instance, Pinterest deploys a GNN model trained on a graph with $3$ billion nodes and $18$ billion edges, which has delivered state-of-the-art performance (Ying et al., 2018). Despite the superior ability of GNNs, node representations are often stored in a continuous embedding space (e.g., 32-bit floating point (FP32)), which requires huge memory consumption (Lian et al., 2020). For example, the FP32 embeddings of 10 million items with a dimension of 256 take up over 9.5 GB of storage, which is hard to deploy on devices with limited memory, especially under federated learning settings (Reisizadeh et al., 2020; Yuan et al., 2023).
Therefore, searching and ranking over a large item corpus to generate top-$k$ recommendations becomes intractable at scale due to the high latency (Shi et al., 2020; Tan et al., 2020; Chen et al., 2022b; Wang et al., 2023; Xu et al., 2023). Low-bit quantization (Gong et al., 2019; Jacob et al., 2018; Lee et al., 2021; Kim et al., 2021; Cao et al., 2017) is a promising method to reduce the memory footprint and accelerate model inference in large-scale systems. By replacing FP32 values with lower-precision ones, e.g., 8-bit integers (INT8), quantization can shrink the size of embeddings without modifying the original network architecture. Also, quantized operators are widely supported by modern hardware, which makes it possible to deploy very large networks on resource-limited devices (Jacob et al., 2018; Chen et al., 2022b). For example, the NVIDIA Turing GPU architecture (https://www.nvidia.com/en-us/geforce/turing/) supports INT8 arithmetic operations.

Recently, several studies have adopted quantization in large-scale recommender systems (Cao et al., 2017; Tan et al., 2020; Wu et al., 2021; Kang and McAuley, 2019). However, existing methods suffer from two drawbacks: 1) Most of them employ binary hashing techniques to compress user/item embeddings into 1-bit quantized representations. Nevertheless, recent studies show that ultra low-bit quantization (e.g., 1 or 2 bits) is much more challenging due to its significant degradation in accuracy (Zhou et al., 2016; Gong et al., 2019). 2) They often use the Straight-through Estimator (STE) (Bengio et al., 2013) to avoid zero gradients during back-propagation. Specifically, the non-differentiable quantization function is replaced with a surrogate: the identity function (Tan et al., 2020) or the scaled tanh function (Cao et al., 2017; Kang and McAuley, 2019). However, the use of different forward and backward functions results in a gradient mismatch problem, i.e., the modified gradient is no longer the gradient of the loss function, which makes network training unstable (Yin et al., 2019; Chen et al., 2022a).

In this work, we propose the Hessian-aware Quantized GNN (HQ-GNN) for effective discrete representations of users and items that enable fast retrieval. Specifically, HQ-GNN consists of two components: a GNN encoder for learning continuous user/item embeddings, and a quantized module for compressing the full-precision embeddings into low-bit ones. Instead of 1-bit codes, HQ-GNN allows arbitrary-bit quantization for better trade-offs between latency and performance. To address the gradient mismatch problem, we tailor the STE by further considering the quantization errors and second-order derivatives (i.e., the Hessian) for better stability and accuracy. As such, HQ-GNN benefits from both a lower memory footprint and faster inference speed compared to vanilla GNNs. Experimental results on several large-scale datasets show the superiority of our HQ-GNN.

## 2\. Related Work

#### GNN-based Recommenders

GNNs have received a lot of attention in graph domains. GNNs learn to aggregate messages from local neighbors using neural networks and have been successfully applied to user-item bipartite graphs (Ying et al., 2018; Wang et al., 2019; He et al., 2020; Chen et al., 2022c, d; Wang et al., 2022). Representative models include PinSage (Ying et al., 2018), NGCF (Wang et al., 2019), and LightGCN (He et al., 2020).
Although GNNs are highly capable of capturing high-order collaborative signals between users and items, their node embeddings are stored in continuous space (e.g., FP32), which is the major bottleneck for searching and ranking (e.g., the high computational cost of similarity calculation between continuous embeddings). It is thus essential to improve the efficiency of generating top-$k$ recommendations at scale (Shi et al., 2020; Tan et al., 2020).

#### Network Quantizations

Quantization is a hardware-friendly approach that approximates real values with low-bit ones (Gong et al., 2019; Jacob et al., 2018; Lee et al., 2021; Kim et al., 2021; Cao et al., 2017; Jing et al., 2021; Jiang et al., 2021; Yeh et al., 2022). Network inference can then be performed using cheaper fixed-point multiply-accumulate operations. As a result, quantization reduces both the storage overhead and the inference latency of networks (Zhou et al., 2016; Gong et al., 2019; Zhu et al., 2020; Lee et al., 2021; Lian et al., 2020). In recommender systems, HashNet (Cao et al., 2017) binarizes the embeddings via a continuation method for multimedia retrieval. Similarly, CIGAR (Kang and McAuley, 2019) learns binary codes to build a hash table for retrieving top-$k$ item candidates. Recently, HashGNN (Tan et al., 2020) learns hash functions and graph representations in an end-to-end fashion. Our HQ-GNN builds on HashGNN: we extend the 1-bit quantization of HashGNN to arbitrary bit-widths and address the gradient mismatch issue of STE, resulting in better performance.

## 3\. Methodology

### 3.1. Task Description

Generally, the input of a recommender system includes a set of users $\mathcal{U}=\\{u\\}$, items $\mathcal{I}=\\{i\\}$, and users’ implicit feedback $\mathcal{O}^{+}=\left\\{(u,i)\mid u\in\mathcal{U},i\in\mathcal{I},y_{ui}=1\right\\}$, where $y_{ui}=1$ indicates that user $u$ has adopted item $i$ before, and $y_{ui}=0$ otherwise. One can construct a corresponding bipartite graph $\mathcal{G}=(\mathcal{V}=\mathcal{U}\cup\mathcal{I},\mathcal{E}=\mathcal{O}^{+})$. The goal is to estimate the user preference towards unobserved items. We next introduce our HQ-GNN, which consists of two parts: a GNN encoder and a quantized module.

### 3.2. GNN-based Recommenders

Most GNNs fit under the message-passing schema (Wang et al., 2019; He et al., 2020), where the representation of each node is updated by collecting messages from its neighbors via an aggregation operation $\text{Agg}(\cdot)$ followed by an $\text{Update}(\cdot)$ operation:

(1) $\mathbf{e}_{u}^{(l)}=\text{Update}\left(\mathbf{e}_{u}^{(l-1)},\text{Agg}(\\{\mathbf{e}_{i}^{(l-1)}\mid i\in\mathcal{N}_{u}\\})\right),\quad\mathbf{e}_{i}^{(l)}=\text{Update}\left(\mathbf{e}_{i}^{(l-1)},\text{Agg}(\\{\mathbf{e}_{u}^{(l-1)}\mid u\in\mathcal{N}_{i}\\})\right),$

where $\\{\mathbf{e}^{(l)}_{u},\mathbf{e}^{(l)}_{i}\\}\in\mathbb{R}^{d}$ denote the embeddings of user and item in the $l$-th layer; $\mathcal{N}_{u}$ and $\mathcal{N}_{i}$ denote the neighbors of user $u$ and item $i$, respectively. After propagating through $L$ layers, a pooling operator is used to obtain the final representations:

(2) $\mathbf{e}_{u}=\text{Pool}(\mathbf{e}_{u}^{(0)},\ldots,\mathbf{e}_{u}^{(L)}),\quad\mathbf{e}_{i}=\text{Pool}(\mathbf{e}_{i}^{(0)},\ldots,\mathbf{e}_{i}^{(L)}),$

where the final representations $\mathbf{e}_{u}\in\mathbb{R}^{d}$ and $\mathbf{e}_{i}\in\mathbb{R}^{d}$ can be used for downstream tasks.
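To make Eqs. (1)–(2) concrete, the sketch below implements LightGCN-style propagation, where $\text{Agg}(\cdot)$ is a normalized sparse matrix product, $\text{Update}(\cdot)$ is the identity, and $\text{Pool}(\cdot)$ is a mean over layers. The normalized bipartite adjacency `A_hat` and the layer-0 embedding table `E0` are assumed inputs; names are illustrative, not taken from any released implementation.

```python
import torch

def lightgcn_propagate(A_hat: torch.Tensor, E0: torch.Tensor, L: int) -> torch.Tensor:
    """LightGCN-style message passing (Eq. (1)) with mean pooling (Eq. (2)).

    A_hat: sparse, symmetrically normalized (|U|+|I|) x (|U|+|I|) adjacency
           of the user-item bipartite graph.
    E0:    layer-0 embeddings, one row per user/item node.
    """
    layers = [E0]
    for _ in range(L):
        # Agg: every node averages its neighbors' embeddings (one sparse matmul);
        # Update: LightGCN keeps just the aggregated message, with no transform.
        layers.append(torch.sparse.mm(A_hat, layers[-1]))
    # Pool: average the L+1 layer-wise embeddings into the final representation.
    return torch.stack(layers, dim=0).mean(dim=0)
```

The rows of the returned matrix are the $\mathbf{e}_{u}$ and $\mathbf{e}_{i}$ that are quantized in the next subsection.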
However, the full-precision embeddings, e.g., FP$32$, usually incur high memory cost and power consumption when generating top-$k$ recommendations for billion-scale graphs.

### 3.3. Low-bit Quantization

Quantization is a hardware-friendly technique to reduce memory footprint and energy consumption (Han et al., 2016; Sun et al., 2020; Zhu et al., 2020). For a uniform $b$-bit quantization, one can clip and normalize a floating-point number $x$ into a quantization interval, parameterized by an upper bound $u$ and a lower bound $l$, as:

(3) $x_{n}=\frac{\text{clip}(x,l,u)-l}{\Delta},$

where $x_{n}$ is the normalized output, $\text{clip}(x,l,u)=\min(\max(x,l),u)$, $\Delta=\frac{u-l}{2^{b}-1}$ is the interval length, and $b$ denotes the bit-width, e.g., $b=8$ for $8$-bit quantization with $2^{b}$ quantization levels. During training, the clipping interval $(l,u)$ is often unknown beforehand; two strategies are commonly used to determine the upper/lower thresholds: exponential moving averages (Jacob et al., 2018) and treating the thresholds as learnable parameters (Choi et al., 2018). The normalized output $x_{n}$ can then be converted to a discrete value $x_{b}$ using a round function with post-scaling as (Zhou et al., 2016; Gong et al., 2019; Zhu et al., 2020):

(4) $x_{b}=x_{q}\cdot\Delta,\quad x_{q}=\text{round}(x_{n}),$

where $\text{round}(\cdot)$ maps a full-precision value to its nearest integer. The quantized tensor $x_{b}$ can then be used for efficient computation on emergent accelerators (e.g., NVIDIA TensorRT) that handle $\Delta$ efficiently. By combining Eq. (3) and Eq. (4), we can define a quantization function $Q_{b}(\cdot)$ as $x_{b}=Q_{b}(x)$. If the input is a vector/matrix, $Q_{b}(\cdot)$ applies to each element of the vector/matrix. To this end, we can quantize the GNN embeddings $\mathbf{e}_{u}$ and $\mathbf{e}_{i}$ in Eq. (2) into:

(5) $\mathbf{q}_{u}=Q_{b}(\mathbf{e}_{u}),\quad\mathbf{q}_{i}=Q_{b}(\mathbf{e}_{i}),$

where $\\{\mathbf{q}_{u},\mathbf{q}_{i}\\}\in\mathbb{R}^{d}$ are the $b$-bit representations of user $u$ and item $i$, respectively. Our model follows a mixed-precision quantization policy (Micikevicius et al., 2018): we only compress the activations of GNNs for faster inference, and leave the weights of GNNs at full precision. Since GNNs often contain fewer than three layers and have few weights, the mixed-precision scheme achieves good trade-offs between performance and memory size (Dong et al., 2019). Mixed-precision quantization has also become increasingly common in deep learning frameworks (https://www.tensorflow.org/guide/mixed_precision). However, the non-differentiable quantization process is undesirable for standard back-propagation: the quantization function is intrinsically a discontinuous step function with zero gradients almost everywhere, which significantly affects the training of HQ-GNN.
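Concretely, Eqs. (3)–(5) amount to a handful of tensor operations; a minimal sketch, with the clipping bounds $l$ and $u$ supplied externally (e.g., tracked by the exponential moving averages mentioned above):

```python
import torch

def quantize(x: torch.Tensor, l: float, u: float, b: int):
    """Uniform b-bit quantizer Q_b of Eqs. (3)-(5).

    Returns the integer code x_q in {0, ..., 2^b - 1} and the
    post-scaled value x_b = x_q * delta used at inference time.
    """
    delta = (u - l) / (2 ** b - 1)                    # interval length
    x_n = (torch.clamp(x, min=l, max=u) - l) / delta  # clip and normalize, Eq. (3)
    x_q = torch.round(x_n)                            # nearest-integer code
    x_b = x_q * delta                                 # post-scaling, Eq. (4)
    return x_q, x_b
```

Applied element-wise to $\mathbf{e}_{u}$ and $\mathbf{e}_{i}$, this yields the $b$-bit representations $\mathbf{q}_{u}$ and $\mathbf{q}_{i}$ of Eq. (5).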
We next present a Generalized Straight-Through Estimator to address this problem.

### 3.4. Generalized Straight-Through Estimator

The main challenge of training our HQ-GNN arises from the discretized round function in Eq. (4), whose derivative is either infinite or zero almost everywhere. One popular family of estimators is the so-called Straight-Through Estimators (STE) (Bengio et al., 2013; Yin et al., 2019). In STE, the forward computation of $\text{round}(\cdot)$ is unchanged, but back-propagation is computed through a surrogate (Tan et al., 2020; Cao et al., 2017; Zhou et al., 2016): replacing $\text{round}(\cdot)$ with an identity function, i.e., $\mathcal{G}_{\mathbf{x_{n}}}=\mathcal{G}_{\mathbf{x_{q}}}$, where $\mathcal{G}$ denotes the gradient operator. However, STE runs the risk of converging to poor minima and unstable training (Yin et al., 2019). For example, the values $0.51$ and $1.49$ both round to the same integer $1$ but with different quantization errors, yet STE updates both values with the same gradient at integer $1$, which is likely to be biased by accumulated quantization errors. Moreover, a small decrement (e.g., $-0.2$) to the value $0.51$ changes the quantized integer from $1$ to $0$, while the same decrement to $1.49$ does not. To mitigate the impact of quantization errors, we generalize the STE as (Lee et al., 2021):

(6) $\mathcal{G}_{\mathbf{x_{n}}}=\mathcal{G}_{\mathbf{x_{q}}}\odot\left(1+\delta\cdot\text{sign}(\mathcal{G}_{\mathbf{x_{q}}})\odot(\mathbf{x_{n}}-\mathbf{x_{q}})\right),$

where $\odot$ denotes the element-wise product; $\text{sign}(\cdot)$ is a sign function such that $\text{sign}(x)=+1$ if $x\geq 0$, and $-1$ otherwise; and $\delta$ is a scaling factor. Eq. (6) scales the gradient $\mathcal{G}_{\mathbf{x_{q}}}$ up or down when $\mathbf{x_{n}}$ requires a larger or smaller update magnitude, and it reduces to the vanilla STE when $\delta=0$. It is thus crucial to determine the scaling factor $\delta$ during training.

Inspired by Hessian-aware quantized networks (Dong et al., 2019, 2020), we use second-order information to guide the selection of $\delta$. Let $\mathbf{\epsilon}=\mathbf{x_{n}}-\mathbf{x_{q}}$ denote the quantization error of the round function, where each element of $\mathbf{\epsilon}$ is bounded by a small number, i.e., $|\epsilon_{i}|\leq\frac{0.5}{2^{b}-1}$. With an element-wise Taylor expansion, we have:

$\mathcal{G}_{\mathbf{x_{n}}}=\mathcal{G}_{\mathbf{x_{q}}}+\frac{\mathcal{G}_{\mathbf{x_{n}}}-\mathcal{G}_{\mathbf{x_{q}}}}{\mathbf{x_{n}}-\mathbf{x_{q}}}\odot(\mathbf{x_{n}}-\mathbf{x_{q}})=\mathcal{G}_{\mathbf{x_{q}}}+\frac{\mathcal{G}_{\mathbf{x_{q}}+\mathbf{\epsilon}}-\mathcal{G}_{\mathbf{x_{q}}}}{\mathbf{\epsilon}}\odot(\mathbf{x_{n}}-\mathbf{x_{q}})\approx\mathcal{G}_{\mathbf{x_{q}}}+\mathcal{G}^{\prime}_{\mathbf{x_{q}}}\odot(\mathbf{x_{n}}-\mathbf{x_{q}}),$

where $\frac{[\cdot]}{[\cdot]}$ is the element-wise division and $\mathcal{G}^{\prime}_{\mathbf{x_{q}}}=\frac{\partial\mathcal{G}_{\mathbf{x_{q}}}}{\partial\mathbf{x_{q}}}$ denotes the second-order derivative of the task loss with respect to $\mathbf{x_{q}}$. The above equation can be rewritten as:

(7) $\mathcal{G}_{\mathbf{x_{n}}}\approx\mathcal{G}_{\mathbf{x_{q}}}\odot\left(1+\frac{\mathcal{G}^{\prime}_{\mathbf{x_{q}}}}{|\mathcal{G}_{\mathbf{x_{q}}}|}\odot\text{sign}(\mathcal{G}_{\mathbf{x_{q}}})\odot(\mathbf{x_{n}}-\mathbf{x_{q}})\right),$

where $|\cdot|$ denotes the absolute value. Comparing Eq. (6) and Eq. (7) suggests that we can connect $\delta$ with $\frac{\mathcal{G}^{\prime}_{\mathbf{x_{q}}}}{|\mathcal{G}_{\mathbf{x_{q}}}|}$, but explicitly forming the Hessian matrix $\mathbf{H}$ (containing all $\mathcal{G}^{\prime}_{\mathbf{x_{q}}}$) is computationally infeasible in practice.
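Since Eq. (6) changes only the backward pass of the round function, it can be implemented as a custom autograd operation. A minimal sketch, treating the scaling factor $\delta$ as a given scalar (how HQ-GNN sets $\delta$ from second-order information is described next):

```python
import torch

class GSTERound(torch.autograd.Function):
    """round() with the generalized straight-through backward pass of Eq. (6)."""

    @staticmethod
    def forward(ctx, x_n: torch.Tensor, delta: float):
        x_q = torch.round(x_n)
        ctx.save_for_backward(x_n, x_q)
        ctx.delta = delta
        return x_q

    @staticmethod
    def backward(ctx, grad_xq: torch.Tensor):
        x_n, x_q = ctx.saved_tensors
        # Eq. (6): rescale the incoming gradient by the signed quantization
        # error (x_n - x_q). Note torch.sign returns 0 at exactly 0, a
        # measure-zero deviation from the sign convention in the text.
        grad_xn = grad_xq * (1.0 + ctx.delta * torch.sign(grad_xq) * (x_n - x_q))
        return grad_xn, None  # no gradient w.r.t. delta
```

Calling `GSTERound.apply(x_n, 0.0)` recovers the vanilla STE, consistent with the $\delta=0$ limit of Eq. (6).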
Instead, recent quantized networks approximate the second-order information by the average Hessian trace (Dong et al., 2020) or the top Hessian eigenvalues (Dong et al., 2019). In this work, we combine the average trace of the Hessian and $\frac{\mathcal{G}^{\prime}_{\mathbf{x_{q}}}}{|\mathcal{G}_{\mathbf{x_{q}}}|}$ into the scaling factor:

(8) $\delta=\frac{\text{Tr}(\mathbf{H})/N}{G},$

where $N$ is the number of diagonal elements in $\mathbf{H}$ and $G$ is the average of the absolute values of the gradients, i.e., $\mathbb{E}[|\mathcal{G}_{\mathbf{x_{q}}}|]$.

Algorithm 1: HQ-GNN

Input: A GNN $f_{gnn}$, bipartite graph $\mathbf{A}$, bit-width $b$, regularizer $\alpha$. Output: Model parameters $\mathbf{\Theta}$ of $f_{gnn}$.

1. Initialize $\mathbf{\Theta}$;
2. For each mini-batch:
   - /* Forward pass */
   - Compute node embeddings $\mathbf{e}_{u}$ and $\mathbf{e}_{i}$ by Eq. (2);
   - Normalize outputs $\hat{\mathbf{e}}_{u}=\frac{\text{clip}(\mathbf{e}_{u},l,u)-l}{\Delta}$ (same for $\hat{\mathbf{e}}_{i}$);
   - Quantize values $\bar{\mathbf{e}}_{u}=\text{round}(\hat{\mathbf{e}}_{u})$ (same for $\bar{\mathbf{e}}_{i}$);
   - Post-scale quantized values $\mathbf{q}_{u}=\bar{\mathbf{e}}_{u}\odot\Delta$ (same for $\mathbf{q}_{i}$);
   - Compute the BPR loss by Eq. (9);
   - /* Backward propagation */
   - Compute the gradients $\mathcal{G}_{\bar{\mathbf{e}}_{u}}$ and $\mathcal{G}_{\bar{\mathbf{e}}_{i}}$ via standard SGD;
   - Adjust the gradients $\mathcal{G}_{\hat{\mathbf{e}}_{u}}$ and $\mathcal{G}_{\hat{\mathbf{e}}_{i}}$ by Eq. (6): $\mathcal{G}_{\hat{\mathbf{e}}_{u}}=\mathcal{G}_{\bar{\mathbf{e}}_{u}}\odot\left(1+\delta\cdot\text{sign}(\mathcal{G}_{\bar{\mathbf{e}}_{u}})\odot({\hat{\mathbf{e}}_{u}}-{\bar{\mathbf{e}}_{u}})\right)$ and $\mathcal{G}_{\hat{\mathbf{e}}_{i}}=\mathcal{G}_{\bar{\mathbf{e}}_{i}}\odot\left(1+\delta\cdot\text{sign}(\mathcal{G}_{\bar{\mathbf{e}}_{i}})\odot({\hat{\mathbf{e}}_{i}}-{\bar{\mathbf{e}}_{i}})\right)$;
   - Compute the trace of the Hessian by Hutchinson's method (Avron and Toledo, 2011);
   - Update the GNN parameters $\mathbf{\Theta}$ and the scaling factor $\delta$ by Eq. (8);
3. Return $\mathbf{\Theta}$.

We compute the trace of the Hessian via Hutchinson's method (Avron and Toledo, 2011). Given a random vector $\mathbf{v}$ whose elements are i.i.d. sampled from a Rademacher distribution such that $\mathbb{E}[\mathbf{v}\mathbf{v}^{\top}]=\mathbf{I}$, we have:

$\text{Tr}(\mathbf{H})=\text{Tr}(\mathbf{H}\mathbb{E}[\mathbf{v}\mathbf{v}^{\top}])=\mathbb{E}[\text{Tr}(\mathbf{H}\mathbf{v}\mathbf{v}^{\top})]=\mathbb{E}[\mathbf{v}^{\top}\mathbf{H}\mathbf{v}]\approx\frac{1}{m}\sum_{i=1}^{m}{\mathbf{v}^{(i)}}^{\top}\mathbf{H}\mathbf{v}^{(i)},$

where $\mathbf{I}$ is the identity matrix. The trace of $\mathbf{H}$ can thus be estimated from $\mathbb{E}[\mathbf{v}^{\top}\mathbf{H}\mathbf{v}]$, where the expectation is obtained by drawing $m$ random vectors. Note that we can first compute $\mathbf{H}\mathbf{v}$; then $\mathbf{v}^{\top}\mathbf{H}\mathbf{v}$ is a simple inner product between $\mathbf{v}$ and $\mathbf{H}\mathbf{v}$. Moreover, we can obtain $\mathbf{H}\mathbf{v}$ efficiently without computing the exact Hessian matrix as follows:

$\frac{\partial(\mathcal{G}^{\top}_{\mathbf{x_{q}}}\mathbf{v})}{\partial\mathbf{x_{q}}}=\frac{\partial\mathcal{G}^{\top}_{\mathbf{x_{q}}}}{\partial\mathbf{x_{q}}}\mathbf{v}+\mathcal{G}^{\top}_{\mathbf{x_{q}}}\frac{\partial\mathbf{v}}{\partial\mathbf{x_{q}}}=\frac{\partial\mathcal{G}^{\top}_{\mathbf{x_{q}}}}{\partial\mathbf{x_{q}}}\mathbf{v}=\mathbf{H}\mathbf{v},$

where the first equality follows from the product rule, and the second from the independence of $\mathbf{v}$ and $\mathbf{x_{q}}$. As such, the cost of a Hessian matrix-vector product is the same as one gradient back-propagation.
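Putting the two ingredients together, a minimal sketch of the Hutchinson estimator with the Hessian-vector-product trick above; here `x_q` is a tensor through which the loss is differentiable (as arranged by the GSTE backward pass), and the function name is illustrative:

```python
import torch

def hutchinson_trace(loss: torch.Tensor, x_q: torch.Tensor, m: int = 10) -> torch.Tensor:
    """Estimate Tr(H), H = d^2(loss)/d(x_q)^2, without forming H explicitly."""
    # First backward pass, keeping the graph so we can differentiate again.
    g = torch.autograd.grad(loss, x_q, create_graph=True)[0]
    trace = 0.0
    for _ in range(m):
        # Rademacher probe: i.i.d. entries +/-1, so E[v v^T] = I.
        v = torch.randint_like(x_q, high=2) * 2.0 - 1.0
        # Hv from one extra backward pass on g^T v (H is symmetric).
        Hv = torch.autograd.grad(g, x_q, grad_outputs=v, retain_graph=True)[0]
        trace = trace + torch.dot(v.flatten(), Hv.flatten())
    return trace / m
```

Dividing the estimate by $N$ and by the mean absolute gradient $G$ then gives the scaling factor $\delta$ of Eq. (8).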
### 3.5. Model Optimization

#### 3.5.1. Loss function

Based on the $b$-bit representations $\mathbf{q}_{u}$ and $\mathbf{q}_{i}$ from Eq. (5), we adopt the inner product to estimate the user's preference towards the target item: $\hat{y}_{ui}=\langle\mathbf{q}_{u},\mathbf{q}_{i}\rangle$. We use the Bayesian Personalized Ranking (BPR) loss to optimize the model (Kang and McAuley, 2019):

(9) $\mathcal{L}_{BPR}(\mathbf{\Theta})=\sum_{\begin{subarray}{c}(u,i)\in\mathcal{O}^{+},(u,j)\notin\mathcal{O}^{+}\end{subarray}}-\ln\sigma\left(\hat{y}_{ui}-\hat{y}_{uj}\right)+\alpha\|\mathbf{\Theta}\|_{F}^{2},$

where $\sigma(\cdot)$ denotes the sigmoid function, $\mathbf{\Theta}$ denotes the model parameters of the GNN, and $\alpha$ controls the $L_{2}$ regularization strength. We summarize our HQ-GNN in Algorithm 1.

#### 3.5.2. Complexity

Compared to a vanilla GNN, HQ-GNN incurs extra time to perform the gradient adjustments in Eq. (6). The computation of the Hessian trace only requires one additional gradient back-propagation, which is significantly faster than training the GNN encoder itself (Dong et al., 2020). Thus, HQ-GNN has the same training complexity as its GNN encoder. During inference, however, we can use integer-only node embeddings (without post-scaling) to generate the top-$k$ candidates, which gives both a lower memory footprint and faster inference speed compared to the vanilla GNN.

Table 1. Dataset statistics.

| Dataset | Gowalla | Yelp2018 | Amazon-Book | Alibaba |
| --- | --- | --- | --- | --- |
| #Users | 29,858 | 31,668 | 52,643 | 106,042 |
| #Items | 40,981 | 38,048 | 91,599 | 53,591 |
| #Interactions | 1,027,370 | 1,561,406 | 2,984,108 | 907,407 |

## 4\. Experiments

### 4.1. Experimental Settings

Table 2. Performance comparison (bold and underline represent the best full-precision and 1-bit quantized models).

| Methods | Gowalla Recall@50 | Gowalla NDCG@50 | Yelp-2018 Recall@50 | Yelp-2018 NDCG@50 | Amazon-Book Recall@50 | Amazon-Book NDCG@50 | Alibaba Recall@50 | Alibaba NDCG@50 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NGCF | 0.159 | 0.130 | 0.114 | 0.054 | 0.092 | 0.065 | 0.071 | 0.033 |
| +HashNet | 0.104 | 0.082 | 0.071 | 0.030 | 0.057 | 0.038 | 0.047 | 0.021 |
| +HashGNN | 0.122 | 0.098 | 0.091 | 0.042 | 0.073 | 0.043 | 0.054 | 0.023 |
| +HQ-GNN | 0.145 | 0.112 | 0.101 | 0.048 | 0.081 | 0.054 | 0.065 | 0.029 |
| LightGCN | 0.163 | 0.134 | 0.118 | 0.059 | 0.098 | 0.072 | 0.076 | 0.036 |
| +HashNet | 0.113 | 0.088 | 0.074 | 0.036 | 0.064 | 0.041 | 0.052 | 0.024 |
| +HashGNN | 0.128 | 0.112 | 0.094 | 0.047 | 0.075 | 0.053 | 0.062 | 0.029 |
| +HQ-GNN | 0.152 | 0.122 | 0.108 | 0.051 | 0.089 | 0.062 | 0.070 | 0.032 |

#### 4.1.1. Datasets.
We evaluate our method on four public datasets (Wang et al., 2019; He et al., 2020; Huang et al., 2021): Gowalla (https://snap.stanford.edu/data/loc-gowalla.html), Yelp-2018 (https://www.yelp.com/dataset), Amazon-Book (https://jmcauley.ucsd.edu/data/amazon/), and Alibaba (https://github.com/huangtinglin/MixGCF/tree/main/data/ali). Their statistics are summarized in Table 1. For each dataset, we randomly select $80\%$ of the historical interactions of each user to construct the training set and treat the remainder as the test set. From the training set, we randomly select $10\%$ of the interactions as the validation set to tune the hyper-parameters.

#### 4.1.2. Baselines and Evaluations.

To verify the effectiveness of HQ-GNN, we mainly compare with graph-based models: NGCF (Wang et al., 2019), LightGCN (He et al., 2020), HashNet (Cao et al., 2017), and HashGNN (Tan et al., 2020). For HashNet, HashGNN, and HQ-GNN, we can choose any GNN encoder to compute the continuous node embeddings in Eq. (2). The comparison against other methods (e.g., factorization machines) is omitted, since most of them are outperformed by LightGCN. We choose the widely used Recall$@k$ and NDCG$@k$ as the evaluation metrics (Wang et al., 2019; He et al., 2020; Huang et al., 2021), and set $k=50$ in all experiments (Tan et al., 2020).

#### 4.1.3. Implementation Details.

For all baselines, the embedding size of users/items is searched among $\\{16,32,64,128\\}$. The hyper-parameters (e.g., batch size, learning rate) of the baselines are initialized to their original settings and then carefully tuned to achieve optimal performance. For HQ-GNN, we search the $L_{2}$ regularizer $\alpha$ within $\\{10^{-5},10^{-4},10^{-3},10^{-2},10^{-1}\\}$. In addition, we determine the upper/lower thresholds (Eq. (3)) by exponential moving averages (Jacob et al., 2018), and set the number of bits to $b=1$ in Eq. (5) for fair comparison with the binary hash methods HashNet (Cao et al., 2017) and HashGNN (Tan et al., 2020).

### 4.2. Experimental Results

#### 4.2.1. Overall Performance.

We present a comprehensive performance comparison between full-precision GNNs and quantization-aware GNNs, summarized in terms of Recall$@50$ and NDCG$@50$ in Table 2. From the table, we have two major observations: 1) Among all 1-bit GNNs, our proposed HQ-GNN consistently outperforms both HashNet and HashGNN by a large margin on all four datasets. This reveals that HQ-GNN provides meaningful gradient adjustments for the non-differentiable quantization function. For example, with the LightGCN encoder, HQ-GNN achieves on average a $15.80\%$ improvement in Recall$@50$ and over a $15.63\%$ improvement in NDCG$@50$ compared to the state-of-the-art HashGNN. 2) It is not surprising that full-precision GNNs perform better than quantization-aware GNNs in all cases. However, quantization-aware GNNs benefit from both a lower memory footprint and faster inference speed compared to vanilla GNNs. In terms of memory and inference speed, we observe results similar to those reported for HashNet (Cao et al., 2017) and HashGNN (Tan et al., 2020). This is because our HQ-GNN, with $b=1$, inherits all the benefits of HashGNN. For instance, binarized (1-bit) embeddings significantly reduce memory usage compared to FP32 embeddings.
Moreover, the inference speed of HQ-GNN is approximately 3.6 times faster than that of full-precision GNNs, because the Hamming distance between two binary embeddings can be calculated efficiently (Tan et al., 2020). These features make HQ-GNN desirable for large-scale retrieval applications in industry.

#### 4.2.2. Comparison with STE

The STE method propagates the same gradient from the output to the input of the discretizer, assuming that the derivative of the discretizer is equal to 1. In contrast, our GSTE method uses the Hessian to refine the gradients. To evaluate the effectiveness of GSTE, we choose LightGCN as the backbone and quantize its embeddings into 1 bit. The performance on the different datasets is summarized in Table 3. From the table, it is clear that GSTE performs better than STE for 1-bit quantization, with improvements ranging from $14.7\%$ to $24.5\%$. Regarding running time, during the training stage GSTE requires computing the trace of the Hessian using Hutchinson's method, which is nonetheless fast: Table 3 shows that GSTE is only slightly slower than STE, a difference that is negligible in practice. During inference, GSTE and STE have the same speed, as both use 1-bit quantized embeddings for retrieval and the trace of the Hessian is not needed at inference time.

The left panel of Figure 1 displays the training curves of GSTE and STE; training quantized LightGCN with GSTE is clearly more stable than with STE, which highlights the effectiveness of utilizing Hessian information in the training process. The right panel of Figure 1 shows the impact of the quantization level by varying $b$ within $\\{1,2,3,4\\}$ for both GSTE and STE. As can be seen, aggressive quantization (less than 2-bit precision) can lead to significant degradation in accuracy. When $b=4$, HQ-GNN recovers $98.5\%$ of LightGCN's performance. Comparing STE and GSTE, GSTE consistently performs better than STE in all cases. In summary, HQ-GNN strikes a good balance between latency and performance.

Table 3. The performance and running time of 1-bit quantized LightGCN with STE and GSTE.

| LightGCN | Gowalla Recall@50 | Gowalla Time(sec) | Yelp-2018 Recall@50 | Yelp-2018 Time(sec) | Amazon-Book Recall@50 | Amazon-Book Time(sec) | Alibaba Recall@50 | Alibaba Time(sec) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| +STE | 0.122 | 30.4 | 0.092 | 41.7 | 0.074 | 103.6 | 0.061 | 22.2 |
| +GSTE | 0.152 | 32.9 | 0.108 | 45.1 | 0.089 | 110.7 | 0.070 | 23.9 |
| Improv(%) | +24.5% | - | +17.3% | - | +20.2% | - | +14.7% | - |

Figure 1. Left: GSTE vs. STE training loss. Right: the impact of the number of bits in HQ-GNN.

## 5\. Conclusion

Training graph neural networks on large-scale user-item bipartite graphs is challenging due to their extensive memory requirements. To address this problem, we propose HQ-GNN, which explores low-bit quantization of graph neural networks for large-scale recommendation. Additionally, we introduce a Generalized Straight-Through Estimator to solve the gradient mismatch problem that arises during the training of quantized networks. HQ-GNN is flexible and can be applied to various graph neural networks. The effectiveness of our proposed method is demonstrated through extensive experiments on real-world datasets.

## References

* Avron and Toledo (2011) Haim Avron and Sivan Toledo. 2011. Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix. _J. ACM_ (2011), 1–34.
* Bengio et al. (2013) Yoshua Bengio, Nicholas Léonard, and Aaron Courville. 2013\. Estimating or propagating gradients through stochastic neurons for conditional computation. _arXiv preprint arXiv:1308.3432_ (2013). * Cao et al. (2017) Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Philip S Yu. 2017\. Hashnet: Deep learning to hash by continuation. In _Proceedings of the IEEE international conference on computer vision_. 5608–5617. * Chen et al. (2022b) Huiyuan Chen, Xiaoting Li, Kaixiong Zhou, Xia Hu, Chin-Chia Michael Yeh, Yan Zheng, and Hao Yang. 2022b. TinyKG: Memory-Efficient Training Framework for Knowledge Graph Neural Recommender Systems. In _Proceedings of the 16th ACM Conference on Recommender Systems_. 257–267. * Chen et al. (2021) Huiyuan Chen, Yusan Lin, Fei Wang, and Hao Yang. 2021\. Tops, bottoms, and shoes: building capsule wardrobes via cross-attention tensor network. In _Proceedings of the 15th ACM Conference on Recommender Systems_. 453–462. * Chen et al. (2022c) Huiyuan Chen, Chin-Chia Michael Yeh, Fei Wang, and Hao Yang. 2022c. Graph neural transport networks with non-local attentions for recommender systems. In _Proceedings of the ACM Web Conference 2022_. 1955–1964. * Chen et al. (2022d) Huiyuan Chen, Kaixiong Zhou, Kwei-Herng Lai, Xia Hu, Fei Wang, and Hao Yang. 2022d. Adversarial graph perturbations for recommendations at scale. In _Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval_. 1854–1858. * Chen et al. (2022a) Yankai Chen, Huifeng Guo, Yingxue Zhang, Chen Ma, Ruiming Tang, Jingjie Li, and Irwin King. 2022a. Learning binarized graph representations with multi-faceted quantization reinforcement for top-k recommendation. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_. 168–178. * Choi et al. (2018) Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. 2018. Pact: Parameterized clipping activation for quantized neural networks. _arXiv preprint arXiv:1805.06085_ (2018). * Dong et al. (2020) Zhen Dong, Zhewei Yao, Daiyaan Arfeen, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. 2020\. Hawq-v2: Hessian aware trace-weighted quantization of neural networks. _Advances in neural information processing systems_. * Dong et al. (2019) Zhen Dong, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. 2019. Hawq: Hessian aware quantization of neural networks with mixed-precision. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 293–302. * Gong et al. (2019) Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, and Junjie Yan. 2019\. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 4852–4861. * Han et al. (2016) Song Han, Huizi Mao, and William J Dally. 2016. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. _International Conference on Learning Representations_. * He et al. (2020) Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. 2020\. Lightgcn: Simplifying and powering graph convolution network for recommendation. In _Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval_. 639–648. * Huang et al. 
(2020) Jui-Ting Huang, Ashish Sharma, Shuying Sun, Li Xia, David Zhang, Philip Pronin, Janani Padmanabhan, Giuseppe Ottaviano, and Linjun Yang. 2020. Embedding-based retrieval in facebook search. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_. 2553–2561. * Huang et al. (2021) Tinglin Huang, Yuxiao Dong, Ming Ding, Zhen Yang, Wenzheng Feng, Xinyu Wang, and Jie Tang. 2021. MixGCF: An Improved Training Method for Graph Neural Network-Based Recommender systems. In _Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining_. 665–674. * Jacob et al. (2018) Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018\. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In _Proceedings of the IEEE conference on computer vision and pattern recognition_. 2704–2713. * Jiang et al. (2021) Gangwei Jiang, Hao Wang, Jin Chen, Haoyu Wang, Defu Lian, and Enhong Chen. 2021\. xLightFM: Extremely Memory-Efficient Factorization Machine. In _Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval_. 337–346. * Jing et al. (2021) Yongcheng Jing, Yiding Yang, Xinchao Wang, Mingli Song, and Dacheng Tao. 2021. Meta-Aggregator: Learning to Aggregate for 1-bit Graph Neural Networks. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 5301–5310. * Kang and McAuley (2019) Wang-Cheng Kang and Julian McAuley. 2019. Candidate generation with binary codes for large-scale top-n recommendation. In _Proceedings of the 28th ACM international conference on information and knowledge management_. 1523–1532. * Kim et al. (2021) Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W Mahoney, and Kurt Keutzer. 2021. I-bert: Integer-only bert quantization. In _International conference on machine learning_. 5506–5518. * Lee et al. (2021) Junghyup Lee, Dohyung Kim, and Bumsub Ham. 2021. Network Quantization with Element-wise Gradient Scaling. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 6448–6457. * Lian et al. (2020) Defu Lian, Haoyu Wang, Zheng Liu, Jianxun Lian, Enhong Chen, and Xing Xie. 2020\. Lightrec: A memory and search-efficient recommender system. In _Proceedings of The Web Conference 2020_. 695–705. * Micikevicius et al. (2018) Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. 2018\. Mixed Precision Training. In _International Conference on Learning Representations_. * Reisizadeh et al. (2020) Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, and Ramtin Pedarsani. 2020\. Fedpaq: A communication-efficient federated learning method with periodic averaging and quantization. In _International Conference on Artificial Intelligence and Statistics_. * Shi et al. (2020) Hao-Jun Michael Shi, Dheevatsa Mudigere, Maxim Naumov, and Jiyan Yang. 2020. Compositional embeddings using complementary partitions for memory-efficient recommendation systems. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_. 165–175. * Sun et al. (2020) Xiao Sun, Naigang Wang, Chia-Yu Chen, Jiamin Ni, Ankur Agrawal, Xiaodong Cui, Swagath Venkataramani, Kaoutar El Maghraoui, Vijayalakshmi Viji Srinivasan, and Kailash Gopalakrishnan. 2020. 
Ultra-low precision 4-bit training of deep neural networks. _Advances in Neural Information Processing Systems_. * Tan et al. (2020) Qiaoyu Tan, Ninghao Liu, Xing Zhao, Hongxia Yang, Jingren Zhou, and Xia Hu. 2020\. Learning to Hash with Graph Neural Networks for Recommender Systems. In _Proceedings of The Web Conference 2020_. 1988–1998. * Wang et al. (2023) Song Wang, Xingbo Fu, Kaize Ding, Chen Chen, Huiyuan Chen, and Jundong Li. 2023\. Federated Few-shot Learning. In _Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_. * Wang et al. (2019) Xiang Wang, Xiangnan He, Meng Wang, Fuli Feng, and Tat-Seng Chua. 2019. Neural graph collaborative filtering. In _Proceedings of the 42nd international ACM SIGIR conference on Research and development in Information Retrieval_. 165–174. * Wang et al. (2022) Yu Wang, Yuying Zhao, Yushun Dong, Huiyuan Chen, Jundong Li, and Tyler Derr. 2022\. Improving fairness in graph neural networks via mitigating sensitive attribute leakage. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_. 1938–1948. * Wu et al. (2021) Wei Wu, Bin Li, Chuan Luo, and Wolfgang Nejdl. 2021\. Hashing-accelerated graph neural networks for link prediction. In _Proceedings of the Web Conference 2021_. 2910–2920. * Xu et al. (2023) Zhe Xu, Yuzhong Chen, Menghai Pan, Huiyuan Chen, Mahashweta Das, and Hao Yang. 2023\. Kernel Ridge Regression-Based Graph Dataset Distillation. In _Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_. * Yeh et al. (2022) Chin-Chia Michael Yeh, Mengting Gu, Yan Zheng, Huiyuan Chen, Javid Ebrahimi, Zhongfang Zhuang, Junpeng Wang, Liang Wang, and Wei Zhang. 2022\. Embedding Compression with Hashing for Efficient Representation Learning in Large-Scale Graph. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_. 4391–4401. * Yin et al. (2019) Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley J. Osher, Yingyong Qi, and Jack Xin. 2019\. Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets. In _International Conference on Learning Representations_. * Ying et al. (2018) Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. 2018\. Graph convolutional neural networks for web-scale recommender systems. In _Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_. 974–983. * Yuan et al. (2023) Wei Yuan, Hongzhi Yin, Fangzhao Wu, Shijie Zhang, Tieke He, and Hao Wang. 2023\. Federated unlearning for on-device recommendation. In _Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining_. 393–401. * Zhou et al. (2016) Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. 2016. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. _arXiv preprint arXiv:1606.06160_ (2016). * Zhu et al. (2020) Feng Zhu, Ruihao Gong, Fengwei Yu, Xianglong Liu, Yanfei Wang, Zhelong Li, Xiuqi Yang, and Junjie Yan. 2020\. Towards unified int8 training for convolutional neural network. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 1969–1979.
# Unified Theory of the Anomalous and Topological Hall Effects with Phase Space Berry Curvatures

Nishchhal Verma, Zachariah Addison, and Mohit Randeria (Department of Physics, The Ohio State University, Columbus, OH 43210, USA)

###### Abstract

Hall experiments in chiral magnets are often analyzed as the sum of an anomalous Hall effect, dominated by momentum-space Berry curvature, and a topological Hall effect, arising from the real-space Berry curvature in the presence of skyrmions, in addition to the ordinary Hall resistivity. This raises the questions of how one can incorporate, on an equal footing, the effects of the anomalous velocity and the real-space winding of the magnetization, and when such a decomposition of the resistivity is justified. We provide definitive answers to these questions by including the effects of all phase-space Berry curvatures in a semi-classical approach and by solving the Boltzmann equation in a weak spin-orbit coupling regime when the magnetization texture varies slowly on the scale of the mean free path. We show that the Hall resistivity is then just the sum of the anomalous and topological contributions, with negligible corrections from Berry-curvature-independent and mixed-curvature terms. We also use an exact Kubo formalism to numerically investigate the opposite limit of infinite mean free path, and show that the results are similar to the semi-classical results.

The Hall effect in magnetic materials has a long history Hall (1879). One might be tempted to think that the primary explanation for the effect is that the magnetization ${\bf M}$ exerts a Lorentz force on the material's electrons; however, this effect is negligible for the non-relativistic charge carriers in metals Wannier (1947). Karplus and Luttinger Karplus and Luttinger (1954) correctly identified the importance of spin-orbit coupling (SOC) in the anomalous Hall effect (AHE); their analysis is now best understood in terms of the momentum-space Berry curvature of the electron Nagaosa (2006); Nagaosa _et al._ (2010); Xiao _et al._ (2010). Scattering in the presence of SOC also makes extrinsic contributions Smit (1955); Berger (1964) to the AHE, but the anomalous velocity that arises from Berry curvature effects is an intrinsic effect that dominates in many experiments Nagaosa _et al._ (2010); Yao _et al._ (2004), and will be our focus here.

Understanding the Hall effect with spatially varying magnetic textures poses further challenges. In addition to the AHE, experiments see a topological Hall effect (THE) in a variety of chiral magnetic materials that harbor skyrmions, including B20 crystals Lee _et al._ (2009); Neubauer _et al._ (2009); Kanazawa _et al._ (2011) and thin films Li _et al._ (2013); Gallagher _et al._ (2017); Ahmed _et al._ (2018), and heavy-metal/magnetic-insulator bilayers Ahmed _et al._ (2019); Shao _et al._ (2019). Skyrmions give rise to an emergent magnetic field that derives from the real-space Berry curvature, resulting in a THE proportional to their topological charge density $n_{\text{sk}}$ Ye _et al._ (1999); Tatara and Kawamura (2002); Bruno _et al._ (2004); Onoda _et al._ (2004); Nagaosa _et al._ (2012); Nagaosa and Tokura (2013); Hamamoto _et al._ (2015); Nakazawa _et al._ (2018); Ishizuka and Nagaosa (2018).
Theories of the anomalous and topological Hall effects have for the most part been distinct and, despite important recent progress Kim _et al._ (2013); Akosa _et al._ (2018); Lux _et al._ (2018); Akosa _et al._ (2019); Zhang _et al._ (2020); Lux _et al._ (2020); Bouaziz _et al._ (2021) on electrons with SOC interacting with skyrmions, a single theory that incorporates both real- and momentum-space Berry curvature effects on an equal footing to calculate electronic transport has remained elusive. The experiments Lee _et al._ (2009); Neubauer _et al._ (2009); Kanazawa _et al._ (2011); Li _et al._ (2013); Gallagher _et al._ (2017); Ahmed _et al._ (2018, 2019); Shao _et al._ (2019), on the other hand, are routinely interpreted as a sum of an anomalous and a topological Hall resistivity, in addition to the ordinary Hall effect proportional to the magnetic field.

Figure 1: The semi-classical wave-packet follows the texture and is influenced by the real-space Berry curvature arising from the presence of skyrmions, in addition to the anomalous velocity that it acquires from an external electric field and the momentum-space Berry curvature. Our results are obtained in the regime where the spin-texture length scale $L_{s}\gg$ mean free path $\ell\gg a$, the lattice spacing, and weak spin-orbit coupling $\lambda\ll E_{F}$, the Fermi energy. The table summarizes the three contributions to $\rho_{xy}$, their scaling with these parameters, their dependence on the magnetic texture $\hat{\bf m}({\bf r})$, and their relation to Berry curvatures. Mixed momentum- and real-space curvatures contribute to the Hall resistivity at higher order in $(\lambda/E_{F})$ and $(a/L_{s})$.

In this paper, we demonstrate within a semi-classical theory that the Hall response (excluding the ordinary Hall effect) is just the sum of two terms, the AHE and the THE. The semi-classical approach Xiao _et al._ (2010); Freimuth _et al._ (2013) is a natural avenue to study the effects of all phase-space Berry curvatures, ${\bf r}$-space, ${\bf k}$-space, and mixed, on an equal footing in the regime where the length scale $L_{s}$ on which the spin texture varies and the mean free path $\ell$ from impurity scattering are both much larger than the microscopic scales of the average inter-particle spacing $k_{F}^{-1}$ or the lattice spacing $a$. We are also interested in the regime of weak SOC, $\lambda\ll E_{F}$, the Fermi energy. To determine the Hall resistivity, we solve the Boltzmann equation to linear order in the electric field in the presence of all phase-space curvatures and real- and momentum-space derivatives of the semi-classical energy eigenvalues. Systematically classifying the resulting array of terms in powers of the small parameters $\lambda/E_{F}$ and $\ell/L_{s}$, and extracting the leading contributions in the regime $L_{s}\gg\ell\gg k_{F}^{-1}\simeq a$ and $\lambda\ll E_{F}$, we find that

$\rho_{xy}=\rho_{xy}^{\text{AHE}}+\rho_{xy}^{\text{THE}}+\delta\rho_{xy}.$ (1)

Our results are summarized in the table in Fig. 1, where we show how each term depends (i) on the small parameters that control our calculation, (ii) on the spatially varying magnetization ${\bf M}=M_{s}\,\hat{\bf m}({\bf r})$, and (iii) on the Berry curvatures. While the first two terms represent the AHE and the THE respectively, the correction term $\delta\rho_{xy}$ is a curvature-independent boundary contribution proportional to the vorticity of the local electronic velocity field.
It vanishes when the spin texture is periodic, e.g., a skyrmion crystal, and is negligible for a disordered skyrmion array in the thermodynamic limit. We show that the mixed curvatures contribute to the Hall resistivity at higher order in the small parameters $(\lambda/E_{F})$ and $(a/L_{s})$ than the terms shown in Fig. 1. Finally, we also present results using the Kubo formula in the opposite regime, where $\ell\gg L_{s}\gtrsim k_{F}^{-1}\simeq a$. We focus on a disorder-free system with $\ell=\infty$, use exact diagonalization in the magnetic unit cell of a skyrmion crystal, and compute the total Hall conductivity using the TKNN formula Thouless _et al._ (1982) in the magnetic Brillouin zone, which includes the effects of both the anomalous velocity and the skyrmion topological charge density. We show how the semi-classical results allow us to qualitatively understand all of the non-trivial parameter dependencies of the Hall response, including the dependence on the density, the SOC, and the exchange coupling between the charge carriers and the spins.

Model: We analyze a minimal Hamiltonian for studying the confluence of the anomalous and topological Hall effects. It can arise either from an "s-d model" of itinerant electrons interacting with local moments in a metallic magnet with Rashba SOC, or, alternatively, it can be used to model the conduction electrons in a metal proximate to a magnetic insulator where broken inversion symmetry at the interface induces a Rashba SOC. We consider a 2D Hamiltonian

$\widehat{\mathcal{H}}=\dfrac{\widehat{{\bf p}}^{2}}{2m}+\dfrac{a\lambda}{\hbar}\left(\widehat{{\bf p}}\times\hat{{\bf z}}\right)\cdot\boldsymbol{\sigma}-J\;\hat{{\bf m}}(\widehat{{\bf r}})\cdot\boldsymbol{\sigma}+\widehat{\mathcal{H}}_{\text{imp}}$ (2)

which describes itinerant electrons of mass $m$ and Rashba SOC $\lambda$ whose spin $\boldsymbol{\sigma}$ is coupled to a magnetic texture ${\bf M}=M_{s}\,\hat{{\bf m}}({\bf r})$ via an exchange interaction $J$. Elastic scattering of electrons off a disorder potential is described by $\widehat{\mathcal{H}}_{\text{imp}}$ and leads to a mean free path $\ell\gg k_{F}^{-1}$. The small hats denote unit vectors and the wide hats denote quantum mechanical operators. Based on the separation of time scales between the itinerant electrons and the dynamics of the spins in the texture, we assume that the texture is static. The model has three energy scales: the Fermi energy $E_{F}$, the SOC $\lambda$, and the exchange coupling $J$, and three length scales: the inter-particle spacing $k_{F}^{-1}$ ($\approx a$, the lattice spacing), the mean free path $\ell$, and the length scale $L_{s}$ associated with the spatial variations of the magnetic texture. We will focus on the weak SOC regime $\lambda\ll J,E_{F}$, relevant for experiments.

Semi-classical Equations of Motion: Let us focus on the semi-classical regime $L_{s}\gg k_{F}^{-1}$. To analyze the dynamics of electron wave packets in phase space $\boldsymbol{\xi}=(x,y,k_{x},k_{y})$, we follow the standard prescription Xiao _et al._ (2010) to construct the semi-classical Hamiltonian

$\mathcal{H}(\boldsymbol{\xi})=\dfrac{\hbar^{2}{\bf k}^{2}}{2m}+{\bf d}(\boldsymbol{\xi})\cdot{\boldsymbol{\sigma}}$ (3)

where ${\bf d}(\boldsymbol{\xi})=a\lambda({\bf k}\times\hat{{\bf z}})-J\hat{{\bf m}}({\bf r})$ captures the quantum mechanical nature of the spin. The semi-classical eigenenergies are $\mathcal{E}_{\pm}(\boldsymbol{\xi})=\hbar^{2}{\bf k}^{2}/2m\pm|{\bf d}(\boldsymbol{\xi})|$.
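As a quick check on these eigenenergies, note the standard two-level identity: since the Pauli matrices satisfy $\sigma_{\alpha}\sigma_{\beta}=\delta_{\alpha\beta}\mathds{1}+i\epsilon_{\alpha\beta\gamma}\sigma_{\gamma}$ and $d_{\alpha}d_{\beta}\epsilon_{\alpha\beta\gamma}=0$ by symmetry, one has $\left(\mathbf{d}(\boldsymbol{\xi})\cdot\boldsymbol{\sigma}\right)^{2}=|\mathbf{d}(\boldsymbol{\xi})|^{2}\,\mathds{1}$, so the eigenvalues of $\mathbf{d}\cdot\boldsymbol{\sigma}$ are $\pm|\mathbf{d}(\boldsymbol{\xi})|$, which added to the kinetic term gives $\mathcal{E}_{\pm}(\boldsymbol{\xi})$ above.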
The corresponding wavefunctions possess non-trivial phase space geometry encoded in the Berry curvatures

$\Omega^{\pm}_{\alpha,\beta}(\boldsymbol{\xi})=\pm\dfrac{1}{2}\hat{{\bf d}}(\boldsymbol{\xi})\cdot\left(\partial_{\alpha}\hat{\bf d}(\boldsymbol{\xi})\times\partial_{\beta}\hat{{\bf d}}(\boldsymbol{\xi})\right)$ (4)

each corresponding to one of the six orthogonal planes in the 4D phase space spanned by $\boldsymbol{\xi}$. The dynamics of the semi-classical theory describe intra-band processes, such that each electronic band may be treated independently, and we will suppress the band index unless necessary. The curvatures modify the equations of motion as well as the invariant measure in phase space. To simplify notation, we introduce a $4\times 4$ matrix,

$[\Gamma(\boldsymbol{\xi})]_{\alpha,\beta}=\Omega_{\alpha,\beta}(\boldsymbol{\xi})-[i\sigma_{y}\otimes\mathds{1}]_{\alpha,\beta}$ (5)

to write the equations of motion

$\dot{\xi}_{\alpha}(\boldsymbol{\xi})=[\Gamma^{-1}(\boldsymbol{\xi})]_{\alpha\beta}\;\big{(}\partial_{\beta}\widetilde{\mathcal{E}}(\boldsymbol{\xi})+eE\;\delta_{\beta,y}\big{)}/\hbar$ (6)

where $E$ is the external electric field along the $\hat{{\bf y}}$ direction and the electron charge is $(-e)$. Here $\widetilde{\mathcal{E}}(\boldsymbol{\xi})\simeq\mathcal{E}(\boldsymbol{\xi})$ up to corrections of order $(\lambda/E_{F})(a/L_{s})$ that can be ignored in the regime of interest. Our compact notation hides all the familiar terms, including the anomalous velocity, inside $\Gamma^{-1}$; see appendix A for more details.

The combination of a spatially varying magnetic texture and SOC leads to finite real-space, momentum-space, and mixed real-momentum-space curvatures. The electrons acquire an anomalous velocity proportional to the momentum-space Berry curvature $\Omega_{k_{x},k_{y}}$, an "anomalous force" proportional to the real-space Berry curvature $\Omega_{x,y}$, and corrections to the group velocity and generalized force proportional to the mixed real-momentum-space Berry curvatures. Crucially, in addition to the equations of motion, the curvatures also modify the volume element that remains invariant under phase-space flows. Thus, to satisfy Liouville's theorem, one must use the integration measure Xiao _et al._ (2010); Addison _et al._ $dV_{\boldsymbol{\xi}}=\sqrt{\det[\Gamma(\boldsymbol{\xi})]}\,d^{4}\boldsymbol{\xi}/(2\pi)^{2}V$, where $V$ is the volume of the system. We note that in the presence of an external magnetic field $B_{z}\hat{{\bf z}}$, $\sqrt{\det[\Gamma(\boldsymbol{\xi})]}$ reduces to the well-known factor of $(1+e\Omega_{k_{x},k_{y}}B_{z}/\hbar)$ when only the momentum-space curvature is present; however, we will need the more general result here.

Hall Conductivity: With the electric field applied along $\hat{y}$, we must calculate the transverse current along $\hat{x}$:

$j_{x}=-e\int dV_{\boldsymbol{\xi}}\;\dot{x}(\boldsymbol{\xi})\;f(\boldsymbol{\xi})$ (7)

where $f(\boldsymbol{\xi})$ is the electronic distribution function, which reduces to the equilibrium Fermi-Dirac function $f^{0}[\mathcal{E}(\boldsymbol{\xi})]$ in the absence of the external electric field. The goal is to find the contributions linear in $E$ to obtain the electrical conductivity. The anomalous Hall contribution to the current derives from the intrinsic anomalous velocity and couples to the equilibrium distribution function $f^{0}[\mathcal{E}(\boldsymbol{\xi})]$.
We isolate the terms in $\dot{x}$ linear in $E$ to find

$\sigma^{\text{AHE}}_{xy}=-\dfrac{e^{2}}{\hbar}\sum_{l=\pm}\int\dfrac{d^{2}{\bf r}\;d^{2}{\bf k}}{(2\pi)^{2}V}\;\Omega^{l}_{k_{x},k_{y}}(\boldsymbol{\xi})\;f^{0}_{l}[\mathcal{E}_{l}(\boldsymbol{\xi})]$ (8)

where $l=\pm$ indexes the two bands. We emphasize that $\sqrt{\det[\Gamma(\boldsymbol{\xi})]}$ in the measure exactly cancels the determinant factor in $\Gamma^{-1}(\boldsymbol{\xi})$, so that the final answer depends only on the momentum-space Berry curvature. We further expand $\Omega_{k_{x},k_{y}}(\boldsymbol{\xi})$ to lowest order in $\lambda/J$ to find

$\sigma_{xy}^{\text{AHE}}\approx-\dfrac{e^{2}a^{2}}{2\hbar}\,\overline{m}_{z}\left({\lambda}/{J}\right)^{2}\sum_{l=\pm}\,l\,n_{l}$ (9)

where $\overline{m}_{z}=\int d^{2}{\bf r}\;\hat{{\bf m}}_{z}({\bf r})/V$ is the average out-of-plane magnetization and the band-resolved density is $n_{l}=\int d^{2}{\bf k}\;f^{0}[\mathcal{E}_{l}({\bf k})]/(2\pi)^{2}$ with $\mathcal{E}_{l}({\bf k})=\mathcal{E}_{l}({\boldsymbol{\xi}};\lambda=0)$. The corresponding resistivity is found from the conductivity via $\rho_{xy}=-\sigma_{xy}/(\sigma^{2}_{xx}+\sigma^{2}_{xy})$, where $\sigma_{xy}\ll\sigma_{xx}=(e^{2}/h)k_{F}\ell$. This relationship will be used to convert conductivities to resistivities for each contribution to the Hall effect. For the AHE this leads to the scaling relation $\rho_{xy}^{\text{AHE}}\sim(\lambda/E_{F})^{2}(a/\ell)^{2}$.

Figure 2: The Hall conductivity calculated from the Kubo formula. (a) The blue curve is the THE calculated at $\lambda=0$ with $J/t=10$. The red curve shows $\mathcal{K}(n)=\mathcal{K}_{+}+\mathcal{K}_{-}$ [eq. (14)], which describes how the band structure controls the semi-classical THE result (13). Despite their different regimes of validity, both the Kubo $\sigma_{xy}$ and $\mathcal{K}(\mu)$ have the same sign for all densities and vanish at the Van Hove filling where the Fermi surface undergoes a Lifshitz transition. (b) The THE conductivity ($\lambda=0$) shows a crossover from a linear regime at small $J$ to saturation at large $J$. The slope is independent of density $n$, while the saturating value increases with $n$. Both these behaviors can be qualitatively explained by analyzing $\mathcal{K}(n)$ as a function of $J$ (see appendix B). (c) Variation of $\sigma_{xy}$ with spin-orbit coupling $\lambda$ at fixed $J/t=10$ and $n=0.2$. With increasing $L_{s}/a$, the results rapidly converge to a finite value that is very weakly $\lambda$-dependent. Thus there is no linear-in-$\lambda$ contribution in the Kubo result, in agreement with the semi-classical analysis.

All other contributions to the Hall response involve the electric-field-induced perturbations to the distribution function, determined by solving the Boltzmann equation. We expand the distribution function to linear order in the electric field, $f=f^{0}+g+\mathcal{O}(E^{2})$, and substitute it into the Boltzmann equation with a relaxation time $\tau=\ell/v_{F}$ to find the equation for $g$:

$\left(1+\tau\dot{\boldsymbol{\xi}}^{(I)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}\right)g(\boldsymbol{\xi})=-\tau\;\dot{\boldsymbol{\xi}}^{(D)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}f^{0}[\mathcal{E}(\boldsymbol{\xi})]$ (10)

where $\dot{\boldsymbol{\xi}}^{(I)}$ and $\dot{\boldsymbol{\xi}}^{(D)}$ are the electric-field-independent and -dependent parts of $\dot{\boldsymbol{\xi}}$ in eq. (6).
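Since the operator $\tau\dot{\boldsymbol{\xi}}^{(I)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}$ will turn out to be small in the regime of interest, Eq. (10) can be solved by a Neumann (geometric) series that organizes $g$ in powers of $\tau$: $g(\boldsymbol{\xi})=\sum_{n\geq 0}\left(-\tau\,\dot{\boldsymbol{\xi}}^{(I)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}\right)^{n}\left(-\tau\,\dot{\boldsymbol{\xi}}^{(D)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}f^{0}[\mathcal{E}(\boldsymbol{\xi})]\right)$, whose $n=0$ term is the $g^{(1)}$ linear in $\tau$, and whose $n=1$ term is the $g^{(2)}$ of Eq. (11) below.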
We now take advantage of the fact that $\tau\dot{\boldsymbol{\xi}}^{(I)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}\sim(\ell/L_{s})(a/L_{s})\ll 1$ when $\ell/L_{s}\ll 1$ to invert the operator on the left hand side and solve for $g(\boldsymbol{\xi})$. This is analogous to the Zener-Jones calculation Ziman (2007) of the Hall conductivity in the weak field regime $\omega_{c}\tau\ll 1$. Solving the Boltzmann equation for $L_{s}\ll\ell$ is technically much harder. We will investigate aspects of this regime using the Kubo formalism below. The term $g^{(1)}(\boldsymbol{\xi})$ linear in $\tau$ does not contribute to the Hall conductivity, and the leading order contribution proportional to $\tau^{2}$ is $g^{(2)}(\boldsymbol{\xi})=\tau^{2}\;\dot{\boldsymbol{\xi}}^{(I)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}\Big{(}\dot{\boldsymbol{\xi}}^{(D)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}f^{0}[\mathcal{E}(\boldsymbol{\xi})]\Big{)}.$ (11) We emphasize that this equation involves all six curvatures along with mixed derivatives of the semi-classical eigenenergies. Combining $g^{(2)}(\boldsymbol{\xi})$ with eq. (7) we calculate the current which is linear in $E$: $\displaystyle j_{x}^{(2)}$ $\displaystyle=-e\tau^{2}\int dV_{\boldsymbol{\xi}}\;\dot{x}^{(I)}(\boldsymbol{\xi})\;\dot{\boldsymbol{\xi}}^{(I)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}\Big{(}\dot{\boldsymbol{\xi}}^{(D)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}f^{0}[\mathcal{E}(\boldsymbol{\xi})]\Big{)}.$ (12) We organize the calculation of the conductivity by classifying the various terms in eq. (12) in powers of the small parameters $\lambda/E_{F}$ and $a/L_{s}$; see appendix B for details. Here we discuss the leading order contributions. We first focus on the zeroth order term in $(\lambda/E_{F})$. Without SOC, all curvatures vanish except the real-space curvature, which leads to the topological Hall contribution $\sigma_{xy}^{\text{THE}}=\dfrac{e^{2}\tau^{2}}{\hbar^{3}}\,n_{\text{sk}}\;\sum_{l=\pm}\mathcal{K}_{l}(\mu)\bigg{|}_{\lambda=0}.$ (13) Here $n_{\text{sk}}=\int d^{2}{\bf r}\,\hat{{\bf m}}\cdot(\partial_{x}\hat{\bf m}\times\partial_{y}\hat{{\bf m}})/(4\pi V)$ is the skyrmion density and $\mathcal{K}_{\pm}(\mu)=\mp\hbar^{4}\int\dfrac{d^{2}{\bf k}}{(4\pi)}\left(\dfrac{\partial f^{0}_{\pm}}{\partial\mathcal{E}}\right){\bf v}^{T}(\mathbb{M}^{-1}-\text{Tr}\mathbb{M}^{-1}){\bf v}$ (14) is a Fermi surface integral that depends on the chemical potential $\mu$ (or filling $n$) and the band index. Here ${\bf v}=\boldsymbol{\nabla}_{{\bf k}}\mathcal{E}(\boldsymbol{\xi})/\hbar$ is the band velocity vector and $\mathbb{M}^{-1}_{\mu\nu}=\partial_{k_{\mu},k_{\nu}}\mathcal{E}(\boldsymbol{\xi})/\hbar^{2}$ is the inverse mass tensor. The semi-classical theory illuminates the relationship between the real-space Berry curvature, which is a property of the spatial evolution of the semi-classical Bloch eigenstates, and the skyrmion density, which is a property of the spatial evolution of the magnetization vector. In the absence of spin-orbit coupling $\Omega_{x,y}^{\pm}=\mp\hat{{\bf m}}\cdot(\partial_{x}\hat{\bf m}\times\partial_{y}\hat{{\bf m}})/2$. The result of eq. (13) bears a striking resemblance to the canonical solution Ziman (2007) for the semi-classical Hall conductivity, with the real space Berry curvature $\Omega_{x,y}$ playing the role of an external magnetic field, in agreement with the intuitive picture behind the THE. The corresponding resistivity is independent of $\tau$ and scales as $\rho_{xy}^{\text{THE}}\sim(a/L_{s})^{2}$.
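To make the Fermi-surface integral of eq. (14) concrete, the sketch below evaluates $\mathcal{K}_{\pm}(\mu)$ for the $\lambda=0$ parabolic bands $\mathcal{E}_{\pm}(k)=\hbar^{2}k^{2}/2m\pm J$. Two assumptions are ours: we read $(\mathbb{M}^{-1}-\text{Tr}\mathbb{M}^{-1})$ as $\mathbb{M}^{-1}-(\text{Tr}\,\mathbb{M}^{-1})\mathds{1}$, and we replace $\partial f^{0}/\partial\mathcal{E}$ at $T\to 0$ by a narrow Lorentzian. Units and parameter values are illustrative.

```python
import numpy as np

hbar, m, J = 1.0, 1.0, 1.0   # illustrative units (e = 1)

def K_band(l, mu, kmax=20.0, N=4001, gamma=1e-2):
    """K_l(mu) of eq. (14) for the parabolic bands E_l(k) = ħ²k²/2m + l*J."""
    k = np.linspace(1e-6, kmax, N)
    dk = k[1] - k[0]
    E = hbar**2 * k**2 / (2*m) + l*J
    # ∂f⁰/∂E ≈ -δ(E - μ) at T = 0, smeared into a Lorentzian of width gamma
    dfdE = -(gamma/np.pi) / ((E - mu)**2 + gamma**2)
    v2 = (hbar * k / m)**2
    # Parabolic band: M⁻¹ = 𝟙/m, so v^T (M⁻¹ - (Tr M⁻¹)𝟙) v = -v²/m
    integrand = dfdE * (-v2 / m)
    # d²k/(4π) = 2πk dk / (4π); the ∓ of eq. (14) becomes the factor -l
    return -l * hbar**4 * np.sum(integrand * 2*np.pi*k) * dk / (4*np.pi)

mu = 1.5
print(K_band(+1, mu), K_band(-1, mu))   # band-resolved Fermi-surface integrals
```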
Next we focus on terms linear in $(\lambda/E_{F})$. Even though there are several terms, there is only one that is linear in $(a/L_{s})$. It originates from mixed spatial and momentum space derivatives of the semi-classical energies $\mathcal{E}(\boldsymbol{\xi})$ and is independent of all Berry curvatures: $\delta\sigma_{xy}=-\dfrac{e^{2}\tau^{2}}{2m}\sum_{l=\pm}\,\omega_{l}\,n_{l}$ (15) where $\omega_{l}=1/V\int d^{2}{\bf r}\,\,\hat{{\bf z}}\cdot(\boldsymbol{\nabla}_{r}\times{\bf v}_{l}({\bf r}))$ is the average “vorticity” of electrons in band $l$ with velocity ${\bf v}_{l}({\bf r})$ that is linear in $\lambda$ (see appendix C for details), and $n_{l}$ is the band-resolved density defined below eq. (9). The intuition behind this term is that real-space gradients of the magnetic texture can lead to orbital electronic motion akin to the dynamics induced by an external magnetic field. For the Rashba SOC considered here, the vorticity simplifies to $\sim\int d^{2}{\bf r}\,\,\boldsymbol{\nabla}_{r}\cdot\hat{{\bf m}}({\bf r})$. This term has been discussed in the literature Kim _et al._ (2013); Akosa _et al._ (2018, 2019); Zhang _et al._ (2020) as an ${\cal O}(\lambda)$ correction to the emergent magnetic field arising from skyrmions. Here this contribution arises not from SOC corrections to the real-space Berry curvature, but instead from mixed momentum and real space derivatives of the semi-classical eigenvalues. Like the THE, the corresponding resistivity is independent of $\tau$, but instead scales as $\delta\rho_{xy}\sim(a/L_{s})(\lambda/E_{F})$. We note, however, that $\delta\rho_{xy}$ vanishes identically for any periodic spin texture, like a skyrmion crystal. More generally, for any smooth texture for which ${\bf v}({\bf r})$ has continuous first order partial derivatives, we can use Stokes’ theorem to show that the vorticity leads only to a boundary term that is negligible in the thermodynamic limit. All other contributions to $\rho_{xy}$, including the mixed curvature terms, are higher order in either $(\lambda/E_{F})$ (which is not relevant for experiments) or in $(a/L_{s})$, at which point the semi-classical analysis presented here is itself not applicable. Thus we have used the semi-classical approach, which treats all curvatures on an equal footing, to conclude that the AHE and THE resistivities are indeed additive and give the largest contribution to the Hall effect for $L_{s}\gg\ell$. Kubo formula analysis: We next turn to the opposite limit of small skyrmions such that $a\approx k_{F}^{-1}\lesssim L_{s}\ll\ell$. We in fact set the mean-free path to infinity and use an exact Kubo formula to numerically calculate the Hall conductivity for a lattice model of itinerant electrons in the presence of a skyrmion crystal; see appendix D for details. The starting Hamiltonian is a tight-binding generalization of eq. (2), describing electrons on a lattice with nearest neighbor hopping $t$ and Rashba SOC $\lambda$, coupled to a background spin texture described by local moments ${\bf m}_{\bf i}$ at each lattice site ${\bf i}$. The skyrmion crystal defines an enlarged $N_{s}\times N_{s}$ unit cell, where $N_{s}=L_{s}/a$, and results in a magnetic Brillouin zone with $N_{\text{b}}=2N_{s}^{2}$ bands. We present here results for a triangle lattice but, as we show in appendix D, our results are independent of the lattice for low densities.
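Returning briefly to the vorticity term of eq. (15): a minimal numerical check, assuming a continuum periodic texture of the same form as eq. (45), that the spatial average of $\boldsymbol{\nabla}_{r}\cdot\hat{{\bf m}}$ vanishes over a unit cell, exactly as the Stokes'-theorem argument above requires. The grid size and texture are illustrative.

```python
import numpy as np

# Check that the average "vorticity" ~ ∫ d²r ∇·m̂ vanishes for a periodic texture.
Ls, N = 1.0, 256
xs = np.linspace(0, Ls, N, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing="ij")

# Skyrmion-crystal-like texture (continuum analogue of eq. (45))
mx = np.sin(2*np.pi*X/Ls)
my = np.sin(2*np.pi*Y/Ls)
mz = np.cos(2*np.pi*X/Ls) + np.cos(2*np.pi*Y/Ls) + 1.0
norm = np.sqrt(mx**2 + my**2 + mz**2)
mx, my = mx/norm, my/norm        # only the in-plane components enter ∇·m̂

# Central differences with periodic boundaries
dx = Ls / N
div_m = (np.roll(mx, -1, 0) - np.roll(mx, 1, 0)) / (2*dx) \
      + (np.roll(my, -1, 1) - np.roll(my, 1, 1)) / (2*dx)

# The integrand is a total derivative over a periodic cell, so the mean
# vanishes identically (up to floating-point error, ~1e-17 here).
print(div_m.mean())
```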
We use exact diagonalization to compute the energy eigenvalues and eigenfunctions of our lattice Hamiltonian and then use the TKNN formula Thouless _et al._ (1982) to determine the Hall conductivity in terms of the momentum-space Berry curvature in the magnetic Brillouin zone. Note that this numerically exact procedure includes all the effects of the anomalous velocity as well as the real-space Berry curvature arising from the skyrmions; however, unlike the semi-classical theory, it is hard to decompose the final result into AHE and THE contributions. We thus proceed as follows. We first show that in various limits one obtains just the AHE (in a ferromagnetic background), or just the THE (in a skyrmion crystal with $\lambda=0$). Finally, we consider the full problem and gain qualitative insights into the numerical results by comparing them with the semi-classical results described above. First, consider the simplest ferromagnetic case with uniform magnetization $\hat{{\bf m}}_{\bf i}=\hat{\bf z}$ (independent of ${\bf i}$), which is just the lattice version of the continuum model analyzed in ref. Xiao _et al._ (2010) with their $\Delta\sigma_{z}$ corresponding to our $J\sigma_{z}$. An AHE is seen in this case provided both $\lambda$ and $\Delta$ are non-zero. The SOC $\lambda$ breaks the two-fold spin degeneracy of the bands everywhere except at the time-reversal invariant momenta (TRIM), where time reversal (TR) enforces a Kramers degeneracy. A non-zero $\Delta$ destroys TR symmetry, causes band inversion, and creates Berry curvature hotspots at TRIMs, which then lead to an enhancement of the AHE conductivity whenever the Fermi level falls near the TRIM points. We next look at a skyrmion crystal, but set $\lambda=0$ so that there is no AHE (even though the net $M_{z}$ is non-zero). The Fourier modes of the periodic texture cause scattering between momentum eigenstates and lead to band folding. At strong coupling $J/t\gg 1$, the bands separate into two sectors with the spins aligned/anti-aligned with the local magnetic texture. The corresponding Hall conductivity is the THE arising from the non-zero skyrmion number. It shows a non-trivial dependence on the band filling, as seen in Fig. 2(a) (blue curve). Comparing this with the semi-classical THE prediction of eq. (13) (red curve), we see that these results, though obtained in very different regimes, share some qualitative features. Both have the same sign at each density and vanish at the van Hove filling where the Fermi surface undergoes a Lifshitz transition. Next consider the $J/t$ dependence of the numerical results for the THE shown in Fig. 2(b). We see a linear regime at small $J$ crossing over to saturation at large $J$. We can gain insight into these results by analyzing the $J$ dependence of the semi-classical THE of eq. (13), which predicts an initial slope independent of density $n$ and a saturation value that increases with $n$ (see appendix B). Finally, we turn to the Kubo results for a skyrmion crystal with non-zero SOC. In Fig. 2(c) we plot these results at strong coupling $J/t=10$ and find that in general the Hall response depends on the SOC $\lambda$. We see that with increasing $L_{s}/a$, the results converge to a non-zero value which is very weakly dependent on $\lambda$. The large $L_{s}/a$ limit allows us to make contact with the semi-classical results, where we showed above that there is no linear-in-$\lambda$ contribution to the Hall conductivity.
For the parameters considered here, the AHE contribution that scales like $(\lambda/J)^{2}\sim 10^{-4}$ is also negligible. Discussion: We have presented a complete semi-classical analysis in the weak SOC $\lambda\ll E_{F}$ regime for $a\ll\ell\ll L_{s}$ and demonstrated that the Hall resistivity is the sum of an anomalous Hall contribution, arising from the momentum-space Berry curvature and proportional to the average out-of-plane magnetization, and a topological Hall contribution, arising from the real-space Berry curvature and proportional to the skyrmion density. All corrections were explicitly shown to be higher order in the small parameters. The semi-classical results are valid for any spin texture without any assumption about its periodicity. In the opposite limit $L_{s}\ll\ell=\infty$ (zero disorder) we have presented exact Kubo formula results for skyrmion crystals. We conclude by noting effects that we have not included and questions for further study. We focussed on the intrinsic anomalous Hall effect, arising from the momentum-space Berry curvature, often the dominant contribution Nagaosa _et al._ (2010) to the AHE, but did not consider extrinsic effects such as skew and side-jump scattering. We have also not analyzed non-periodic spin textures which vary on a length scale $L_{s}\lesssim\ell$. Such a regime has been analyzed Bouaziz _et al._ (2021) in the context of electrons scattering off a single skyrmion, with the prediction of a novel non-collinear Hall effect linear in the SOC. It would be interesting to extend our semi-classical analysis to this regime. Finally, in the semiclassical regime that we have examined in detail, with $L_{s}\gg\ell$, there is a novel vorticity term [eq. (15)] that is linear in $\lambda$, but we were able to use Stokes’ theorem to reduce it to a boundary term that vanishes for periodic textures. An interesting question Addison _et al._ for further study is the fate of this term in the presence of singularities, such as Bloch points, that may act as obstructions to the use of Stokes’ theorem. Acknowledgements: This work was supported by NSF Materials Research Science and Engineering Center Grant DMR-2011876. Z.A. was also supported by the Ohio State University President’s Postdoctoral Scholars Program. We gratefully acknowledge Roland Kawakami, Siddharth Seetharaman, Po-Kuan Wu, and Fengyuan Yang for insightful discussions. ## References * Hall (1879) E. H. Hall, American Journal of Mathematics 2, 287 (1879). * Wannier (1947) G. H. Wannier, Phys. Rev. 72, 304 (1947). * Karplus and Luttinger (1954) R. Karplus and J. M. Luttinger, Phys. Rev. 95, 1154 (1954). * Nagaosa (2006) N. Nagaosa, Journal of the Physical Society of Japan 75, 042001 (2006), https://doi.org/10.1143/JPSJ.75.042001. * Nagaosa _et al._ (2010) N. Nagaosa, J. Sinova, S. Onoda, A. H. MacDonald, and N. P. Ong, Rev. Mod. Phys. 82, 1539 (2010). * Xiao _et al._ (2010) D. Xiao, M.-C. Chang, and Q. Niu, Reviews of Modern Physics 82, 1959 (2010). * Smit (1955) J. Smit, Physica 21, 877 (1955). * Berger (1964) L. Berger, Physica 30, 1141 (1964). * Yao _et al._ (2004) Y. Yao, L. Kleinman, A. H. MacDonald, J. Sinova, T. Jungwirth, D.-s. Wang, E. Wang, and Q. Niu, Phys. Rev. Lett. 92, 037204 (2004). * Lee _et al._ (2009) M. Lee, W. Kang, Y. Onose, Y. Tokura, and N. P. Ong, Phys. Rev. Lett. 102, 186601 (2009). * Neubauer _et al._ (2009) A. Neubauer, C. Pfleiderer, B. Binz, A. Rosch, R. Ritz, P. G. Niklowitz, and P. Böni, Phys. Rev. Lett. 102, 186602 (2009). * Kanazawa _et al._ (2011) N. Kanazawa, Y.
Onose, T. Arima, D. Okuyama, K. Ohoyama, S. Wakimoto, K. Kakurai, S. Ishiwata, and Y. Tokura, Phys. Rev. Lett. 106, 156603 (2011). * Li _et al._ (2013) Y. Li, N. Kanazawa, X. Z. Yu, A. Tsukazaki, M. Kawasaki, M. Ichikawa, X. F. Jin, F. Kagawa, and Y. Tokura, Phys. Rev. Lett. 110, 117202 (2013). * Gallagher _et al._ (2017) J. C. Gallagher, K. Y. Meng, J. T. Brangham, H. L. Wang, B. D. Esser, D. W. McComb, and F. Y. Yang, Phys. Rev. Lett. 118, 027201 (2017). * Ahmed _et al._ (2018) A. S. Ahmed, J. Rowland, B. D. Esser, S. R. Dunsiger, D. W. McComb, M. Randeria, and R. K. Kawakami, Phys. Rev. Materials 2, 041401 (2018). * Ahmed _et al._ (2019) A. S. Ahmed, A. J. Lee, N. Bagués, B. A. McCullian, A. M. Thabt, A. Perrine, P. K. Wu, J. R. Rowland, M. Randeria, P. C. Hammel, D. W. McComb, and F. Yang, Nano Letters 19, 5683 (2019), arXiv:1905.03650 . * Shao _et al._ (2019) Q. Shao, Y. Liu, G. Yu, S. K. Kim, X. Che, C. Tang, Q. L. He, Y. Tserkovnyak, J. Shi, and K. L. Wang, Nature Electronics 2, 182 (2019), arXiv:1904.07107 . * Ye _et al._ (1999) J. Ye, Y. B. Kim, A. J. Millis, B. I. Shraiman, P. Majumdar, and Z. Tešanović, Phys. Rev. Lett. 83, 3737 (1999). * Tatara and Kawamura (2002) G. Tatara and H. Kawamura, Journal of the Physical Society of Japan 71, 2613 (2002). * Bruno _et al._ (2004) P. Bruno, V. K. Dugaev, and M. Taillefumier, Physical Review Letters 93, 096806 (2004). * Onoda _et al._ (2004) M. Onoda, G. Tatara, and N. Nagaosa, Journal of the Physical Society of Japan 73, 2624 (2004). * Nagaosa _et al._ (2012) N. Nagaosa, X. Z. Yu, and Y. Tokura, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 370, 5806 (2012). * Nagaosa and Tokura (2013) N. Nagaosa and Y. Tokura, Nature Nanotechnology 8, 899 (2013). * Hamamoto _et al._ (2015) K. Hamamoto, M. Ezawa, and N. Nagaosa, Phys. Rev. B 92, 115417 (2015). * Nakazawa _et al._ (2018) K. Nakazawa, M. Bibes, and H. Kohno, Journal of the Physical Society of Japan 87, 033705 (2018). * Ishizuka and Nagaosa (2018) H. Ishizuka and N. Nagaosa, Science Advances 4, eaap9962 (2018). * Kim _et al._ (2013) K. W. Kim, H. W. Lee, K. J. Lee, and M. D. Stiles, Physical Review Letters 111 (2013), 10.1103/PhysRevLett.111.216601. * Akosa _et al._ (2018) C. A. Akosa, A. Takeuchi, Z. Yuan, and G. Tatara, Physical Review B 98, 184424 (2018). * Lux _et al._ (2018) F. R. Lux, F. Freimuth, S. Blügel, and Y. Mokrousov, Communications Physics 1, 60 (2018). * Akosa _et al._ (2019) C. A. Akosa, H. Li, G. Tatara, and O. A. Tretiakov, Physical Review Applied 12, 54032 (2019). * Zhang _et al._ (2020) S. S. Zhang, H. Ishizuka, H. Zhang, G. B. Halász, and C. D. Batista, Physical Review B 101, 024420 (2020). * Lux _et al._ (2020) F. R. Lux, F. Freimuth, S. Blügel, and Y. Mokrousov, Phys. Rev. Lett. 124, 096602 (2020). * Bouaziz _et al._ (2021) J. Bouaziz, H. Ishida, S. Lounis, and S. Blügel, Phys. Rev. Lett. 126, 147203 (2021). * Freimuth _et al._ (2013) F. Freimuth, R. Bamler, Y. Mokrousov, and A. Rosch, Phys. Rev. B 88, 214409 (2013). * Thouless _et al._ (1982) D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Phys. Rev. Lett. 49, 405 (1982). * (36) Z. Addison, N. Verma, S. Seetharaman, and M. Randeria, In Preparation . * Ziman (2007) J. Ziman, _Electrons and Phonons_ (Oxford University Press, 2007). 
APPENDICES ## Appendix A Semi-classical Equations of Motion with Phase Space Berry Curvatures Semi-classical theory describes transport in terms of electron wave-packets whose width is larger than the microscopic lattice scale $a$ but much smaller than the mean-free path $\ell$, so that the average position ${\bf r}$ and average momentum ${\bf k}$ of the wavepacket are well-defined simultaneously. Their time evolution is governed by a semi-classical Hamiltonian. The magnetic texture presents a new length-scale related to its size, $L_{s}$. The construction now requires a gradient expansion, which introduces the additional constraint that the width is smaller than $L_{s}$. The Hamiltonian thus obtained is a function of the phase-space variables $\boldsymbol{\xi}=(x,y,k_{x},k_{y})$: $\mathcal{H}(\boldsymbol{\xi})=\dfrac{\hbar^{2}{\bf k}^{2}}{2m}+{\bf d}(\boldsymbol{\xi})\cdot\boldsymbol{\sigma},\quad{\bf d}(\boldsymbol{\xi})=a\lambda({\bf k}\times\hat{{\bf z}})-J\hat{{\bf m}}({\bf r}),\quad\mathcal{E}_{\pm}(\boldsymbol{\xi})=\dfrac{\hbar^{2}{\bf k}^{2}}{2m}\pm|{\bf d}(\boldsymbol{\xi})|$ (16) and hosts six types of Berry curvatures, each corresponding to a plane in the 4D phase space $\displaystyle\Omega^{\pm}_{\alpha,\beta}(\boldsymbol{\xi})$ $\displaystyle=$ $\displaystyle\pm\dfrac{1}{2}\hat{{\bf d}}(\boldsymbol{\xi})\cdot(\partial_{\alpha}\hat{\bf d}(\boldsymbol{\xi})\times\partial_{\beta}\hat{{\bf d}}(\boldsymbol{\xi}))$ (17) where $\pm$ label the two bands. The curvatures introduce non-trivial Poisson bracket relations between the phase space variables that lead to corrections in the equations of motion and the invariant measure. Both these quantities are captured by the completely anti-symmetric matrix $[\Gamma(\boldsymbol{\xi})]_{\alpha,\beta}=\Omega_{\alpha,\beta}(\boldsymbol{\xi})-[i\sigma_{y}\otimes\mathds{1}]_{\alpha,\beta}$ as defined in the main text. Here we explicitly write the expression for completeness: $\displaystyle\hbar\begin{pmatrix}\dot{x}\\\ \dot{y}\\\ \dot{k_{x}}\\\ \dot{k_{y}}\end{pmatrix}$ $\displaystyle=$ $\displaystyle\dfrac{1}{\sqrt{\text{det}[\Gamma(\boldsymbol{\xi})]}}\left[\begin{pmatrix}0&\Omega_{k_{x},k_{y}}&-\Omega_{y,k_{y}}&\Omega_{y,k_{x}}\\\ -\Omega_{k_{x},k_{y}}&0&\Omega_{x,k_{y}}&-\Omega_{x,k_{x}}\\\ \Omega_{y,k_{y}}&-\Omega_{x,k_{y}}&0&\Omega_{x,y}\\\ -\Omega_{y,k_{x}}&\Omega_{x,k_{x}}&-\Omega_{x,y}&0\end{pmatrix}-\begin{pmatrix}0&0&-1&0\\\ 0&0&0&-1\\\ 1&0&0&0\\\ 0&1&0&0\\\ \end{pmatrix}\right]\begin{pmatrix}\partial_{x}\widetilde{\mathcal{E}}(\boldsymbol{\xi})\\\ \partial_{y}\widetilde{\mathcal{E}}(\boldsymbol{\xi})+eE\\\ \partial_{k_{x}}\widetilde{\mathcal{E}}(\boldsymbol{\xi})\\\ \partial_{k_{y}}\widetilde{\mathcal{E}}(\boldsymbol{\xi})\end{pmatrix}$ (18) $\displaystyle dV_{\boldsymbol{\xi}}$ $\displaystyle=$ $\displaystyle\dfrac{d^{2}{\bf r}d^{2}{\bf k}}{(2\pi)^{2}V}\sqrt{\text{det}[\Gamma(\boldsymbol{\xi})]}$ (19) (note that anti-symmetry of the matrix in eq. (18) fixes the sign of its $(4,2)$ entry to $+\Omega_{x,k_{x}}$). There is one additional change in the equations.
The non-trivial spatial and momentum variation of the eigenfunction of the semi-classical Bloch Hamiltonian $|u(\boldsymbol{\xi})\rangle$ leads to a shift in the energy $\widetilde{\mathcal{E}}(\boldsymbol{\xi})=\mathcal{E}(\boldsymbol{\xi})+\delta\mathcal{E}(\boldsymbol{\xi})$ with $\delta\mathcal{E}(\boldsymbol{\xi})=-\sum_{i=x,y}\text{Im}\bigg{[}\bigg{(}\partial_{r_{i}}\langle{u(\boldsymbol{\xi})}|\bigg{)}(\mathcal{E}(\boldsymbol{\xi})-\mathcal{H}(\boldsymbol{\xi}))\bigg{(}\partial_{k_{i}}|{u(\boldsymbol{\xi})}\rangle\bigg{)}\bigg{]}.$ (20) We can ignore $\delta\mathcal{E}(\boldsymbol{\xi})$ in our calculation because it scales as $(\lambda/E_{F})(a/L_{s})$ and thus leads to higher order corrections to the Hall effect not considered here. The matrix representation in eq. (18) contains all contributions of the curvatures. In particular, the anomalous velocity can be extracted from the electric field dependent part of the velocity in the $+x$ direction $\dot{x}^{(D)}(\boldsymbol{\xi})=\dfrac{e}{\hbar}\dfrac{\Omega_{k_{x},k_{y}}(\boldsymbol{\xi})}{\sqrt{\text{det}[\Gamma(\boldsymbol{\xi})]}}E.$ (21) The determinant factor in the denominator may seem unfamiliar but is absolutely crucial for calculating the correct intrinsic anomalous Hall response. There is a complete cancellation of the phase-space measure factors in the Hall current, so that the anomalous Hall conductivity only depends on the momentum-space Berry curvature: $j_{x}=e\int\dfrac{d^{2}{\bf r}d^{2}{\bf k}}{(2\pi)^{2}V}\sqrt{\text{det}[\Gamma(\boldsymbol{\xi})]}\;\dot{x}^{(D)}(\boldsymbol{\xi})\;f^{0}(\boldsymbol{\xi})=\left(\dfrac{e^{2}}{\hbar}\int\dfrac{d^{2}{\bf r}d^{2}{\bf k}}{(2\pi)^{2}V}\;\Omega_{k_{x},k_{y}}(\boldsymbol{\xi})\;f^{0}(\boldsymbol{\xi})\right)E$ (22) The quantity within brackets is $\sigma_{xy}^{\text{AHE}}$. Even though the expression contains only the momentum-space Berry curvature $\Omega_{k_{x},k_{y}}(\boldsymbol{\xi})$, we must keep in mind that $\Omega_{k_{x},k_{y}}(\boldsymbol{\xi})$ is a function of momentum and real space, and the two integrals are not separable: $\sigma_{xy}^{\text{AHE}}=-\sum_{l=\pm}l\dfrac{e^{2}}{\hbar}\int\dfrac{d^{2}{\bf r}\;d^{2}{\bf k}}{(2\pi)^{2}V}\;\left[\dfrac{a^{2}\lambda^{2}Jm_{z}({\bf r})}{2|{\bf d}(\boldsymbol{\xi})|}\right]\;f^{0}[\mathcal{E}_{l}(\boldsymbol{\xi})].$ (23) Since there is already an explicit $\lambda^{2}$ and $\lambda/E_{F}$ is a small parameter, we can set $\lambda=0$ in the rest of the expression to find the leading contribution. The spatial dependence in the semi-classical eigenenergies drops out when $\lambda=0$, that is $(\mathcal{E}_{l}(\boldsymbol{\xi};\lambda=0)=\mathcal{E}_{l}({\bf k}))$, and the spatial and momentum integrals become separable $\displaystyle\sigma_{xy}^{\text{AHE}}$ $\displaystyle\approx-\sum_{l=\pm}l\dfrac{e^{2}a^{2}}{2\hbar}\bigg{(}\dfrac{\lambda}{J}\bigg{)}^{2}\left(\int\dfrac{d^{2}{\bf r}}{V}\;m_{z}({\bf r})\right)\left(\int\dfrac{d^{2}{\bf k}}{(2\pi)^{2}}f^{0}[\mathcal{E}_{l}({\bf k})]\right).$ (24) We thus find that the intrinsic contribution only probes the net out-of-plane magnetization, even for spatially varying textures. ## Appendix B Solution to the Boltzmann Equation Focussing on contributions that come from electric-field-induced perturbations to the distribution function, we write the full distribution function in the presence of an electric field as $f=f^{0}+g$ where $g$ is linear order in the field.
We then use the relaxation time approximation to write the Boltzmann equation as $\displaystyle\dot{\boldsymbol{\xi}}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}\left(f^{0}(\boldsymbol{\xi})+g(\boldsymbol{\xi})\right)=-\dfrac{g(\boldsymbol{\xi})}{\tau}.$ (25) We write $\dot{\boldsymbol{\xi}}=\dot{\boldsymbol{\xi}}^{(I)}+\dot{\boldsymbol{\xi}}^{(D)}$, where $I$ and $D$ refer to the electric-field-independent and electric-field-dependent components respectively, to find an equation for $g(\boldsymbol{\xi})$ $\left(1+\tau\dot{\boldsymbol{\xi}}^{(I)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}\right)g(\boldsymbol{\xi})=-\tau\;\dot{\boldsymbol{\xi}}^{(D)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}f^{0}[\mathcal{E}(\boldsymbol{\xi})].$ (26) The differential operator on the left has a particular scaling. With $g(\boldsymbol{\xi})=g[\mathcal{E}(\boldsymbol{\xi})]$, we can infer that $\tau\dot{\boldsymbol{\xi}}^{(I)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}\mathcal{E}\sim(\ell/L_{s})(a/L_{s})$. Now since $a\ll\ell\ll L_{s}$, both these ratios are small and hence we can invert the operator to find $g(\boldsymbol{\xi})=-\tau\;\left(1-\tau\dot{\boldsymbol{\xi}}^{(I)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}\right)\;\dot{\boldsymbol{\xi}}^{(D)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}f^{0}[\mathcal{E}(\boldsymbol{\xi})]=g^{(1)}(\boldsymbol{\xi})+g^{(2)}(\boldsymbol{\xi})$ (27) where the superscripts label the order in $\tau$. The first order term $g^{(1)}(\boldsymbol{\xi})$ does not result in any Hall conductivity. While we find this only at the end of a long calculation, we can use time-reversal (TR) symmetry to understand why it vanishes. Onsager’s reciprocity relation forces the Hall conductivity to be odd under TR. The conductivity arising from $g^{(1)}(\boldsymbol{\xi})$ does not have this property: $\sigma_{xy}\sim\dfrac{e}{V}\int\dfrac{d^{2}{\bf r}d^{2}{\bf k}}{(2\pi)^{2}}\underbrace{\sqrt{\text{det}[\Gamma(\boldsymbol{\xi})]}}_{\text{TR even}}\;\underbrace{\dot{x}(\boldsymbol{\xi})}_{\text{TR odd}}\;\Big{(}-\tau\underbrace{\dot{\boldsymbol{\xi}}^{(D)}}_{\text{TR odd}}\cdot\underbrace{\boldsymbol{\nabla}_{\boldsymbol{\xi}}f^{0}[\mathcal{E}(\boldsymbol{\xi})]}_{\text{TR even}}\Big{)}.$ (28) It is clearly even under time-reversal and hence must vanish. The contributions from $g^{(2)}(\boldsymbol{\xi})$ survive this argument. The calculation of the Hall conductivity involves combining the distribution function with the velocity and the appropriate phase space volume factor. The algebra is tedious but there are a few simplifying factors. Products and derivatives of the curvatures can be excluded as they are all higher order in $(a/L_{s})$. There will still be many terms, and hence we need to introduce a classification scheme for bookkeeping $\prod\limits_{i=1}^{m}\prod\limits_{j=1}^{n}\partial_{k_{i}}\partial_{r_{j}}(\cdot)\longrightarrow(m,n).$ (29) Here $(m,n)$ labels expressions that have $m$ momentum derivatives and $n$ spatial derivatives. These numbers count both derivatives with respect to the semi-classical energies and the implicit derivatives hidden inside the curvatures.
With these labels, the Hall conductivity is $\sigma_{xy}\;\sim\;\int\dfrac{d^{2}{\bf r}d^{2}{\bf k}}{(2\pi)^{2}V}\;\underbrace{\phantom{\Big{(}}\text{Phase space volume}\phantom{\Big{)}}}_{(0,0)+(1,1)}\;\times\;\underbrace{\phantom{\Big{(}}\text{Velocity}\phantom{\Big{)}}}_{(1,0)+{\color[rgb]{0,0,0}(2,1)}}\;\times\;\underbrace{\phantom{\Big{(}}\dot{\boldsymbol{\xi}}^{(I)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}\phantom{\Big{)}}}_{(1,1)+{\color[rgb]{0,0,0}(2,2)}}\;\times\underbrace{\Big{(}\dot{\boldsymbol{\xi}}^{(D)}\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}f^{0}[\mathcal{E}(\boldsymbol{\xi})]\Big{)}}_{(1,0)+{\color[rgb]{0,0,0}(2,1)}}.$ (30) The first tuple indexes derivatives acting on the semi-classical eigenenergies, while the second tuple indexes derivatives deriving from the Berry curvatures. Since we are focussing on contributions that involve at most one curvature, there are only two broad categories: no curvature $(3,1)$ and one curvature $(4,2)$. The number of terms inside each category is still quite large. We now turn to energy scaling relations and take advantage of the fact that $\lambda/E_{F}$ is a small parameter. To the leading order, we find that $\partial_{k_{x}}\mathcal{E}\sim\lambda^{0}$, $\Omega_{x,y}\sim\lambda^{0}$, $\partial_{x}\mathcal{E}\sim\lambda$, $\Omega_{x,k_{y}}\sim\lambda$ and $\Omega_{k_{x},k_{y}}\sim\lambda^{2}$. We are now ready to calculate the contributions order by order in $\lambda/E_{F}$ and $a/L_{s}$: * • Zeroth order in $\lambda/E_{F}$ The equations of motion are quite simple since all spatial derivatives vanish. There are no $(3,1)$ terms and the only non-zero $(4,2)$ term has both spatial derivatives coming from the real-space Berry curvature. The resulting contribution is the Topological Hall response $\sigma_{xy}^{\text{THE}}$. Simplifying eq. (13) we find $\sigma_{xy}^{\text{THE}}=-\dfrac{e^{2}\tau^{2}}{\hbar^{3}}n_{\text{sk}}\dfrac{2h^{2}}{m}\begin{cases}\dfrac{\pi\hbar n}{m},&\text{for $0<n<\dfrac{mJ}{\pi\hbar}$}\\\ J,&\text{for $\dfrac{mJ}{\pi\hbar}<n$}\end{cases}$ (31) where $n$ is the electron density. Hence as $J/E_{F}$ is tuned, $\sigma_{xy}$ crosses over from a linear-in-$J$ regime to a saturating value that is independent of $J$ but increases with density. Here $n_{\text{sk}}$ is the skyrmion density $n_{\text{sk}}=\dfrac{1}{V}\int\dfrac{d^{2}{\bf r}}{4\pi}\,\,\hat{{\bf m}}({\bf r})\cdot(\partial_{x}\hat{{\bf m}}({\bf r})\times\partial_{y}\hat{{\bf m}}({\bf r}))$ (32) For small densities $n<\dfrac{mJ}{\pi\hbar}$ this can be written as $\sigma_{xy}^{\text{THE}}=\dfrac{ne\tau}{m}\bigg{(}\dfrac{\tau eB_{\text{eff}}}{m}\bigg{)}$ (33) with $eB_{\text{eff}}=-2\pi n_{\text{sk}}$ acting like an effective magnetic field induced by the presence of the spatially varying magnetic texture. * • First order in $\lambda/E_{F}$ There are both $(3,1)$ and $(4,2)$ types of contributions. We leave $(3,1)$ to the next section since it has a rather interesting origin, and focus on $(4,2)$, which has two possible origins. The first involves $\Omega_{x,y}$ multiplied with four momentum derivatives of energies. We will now show that the resultant Hall conductivity is even in $\lambda$ and hence is either zeroth order (discussed above) or second order (can be ignored).
It can be checked that the semi-classical eigenenergies and the real-space curvature satisfy the relations $\mathcal{E}({\bf r},{\bf k},\lambda)=\mathcal{E}({\bf r},-{\bf k},-\lambda);\quad\Omega_{x,y}({\bf r},{\bf k},\lambda)=\Omega_{x,y}({\bf r},-{\bf k},-\lambda).$ (34) As a result, the integrand in phase space will switch the sign of $\lambda$ upon flipping the momentum ${\bf k}\rightarrow-{\bf k}$. The resulting Hall conductivity changes $\sigma(\lambda)\rightarrow\sigma(-\lambda)$ under the flip. However, since ${\bf k}$ is a dummy variable that is being integrated over, the Hall conductivity must satisfy $\sigma(\lambda)=\sigma(-\lambda)$ and is hence even in $\lambda$. There can therefore be no first order corrections. The other possibility involves mixed curvatures, which, as we showed, are at least linear in $\lambda/E_{F}$. Since the overall type has to be $(4,2)$, the pre-factors that come with the mixed curvature should be of the type $(3,1)$. That is, there will be an additional spatial derivative in the full expression. It can either come from a different mixed curvature piece or from a first order spatial derivative of the semi-classical eigenenergies. It is easy to see that both these situations lead to second order contributions. In sum, the only linear order contribution in SOC is of type $(3,1)$. It is the subject of the next section. ## Appendix C Hall conductivity independent of Curvatures There are many simplifications when the curvatures are absent. We therefore find it instructive to present the full derivation, starting from the fact that the semi-classical energy is a function of both real space and momentum, $\mathcal{E}({\bf r},{\bf k})$. The derivation also highlights the generality of the result, which may apply to systems beyond the model Hamiltonian that we have considered in this paper. With an external electric field ${\bf E}$, the dynamics of the wave-packet are governed by the equations $\begin{pmatrix}\dot{{\bf r}}\\\ \dot{{\bf k}}\end{pmatrix}=\begin{pmatrix}0&\mathds{1}\\\ -\mathds{1}&0\end{pmatrix}\begin{pmatrix}\boldsymbol{\nabla}_{r}\mathcal{E}/\hbar+e{\bf E}/\hbar\\\ \boldsymbol{\nabla}_{k}\mathcal{E}/\hbar\end{pmatrix}$ (35) which lead to the following second-order shift in the distribution function $g^{(2)}(\boldsymbol{\xi})=-\dfrac{e\tau^{2}}{\hbar^{2}}\left(\dfrac{\partial f^{0}}{\partial\mathcal{E}}\right)\big{[}\boldsymbol{\nabla}_{k}\mathcal{E}\cdot\boldsymbol{\nabla}_{r}-\boldsymbol{\nabla}_{r}\mathcal{E}\cdot\boldsymbol{\nabla}_{k}\big{]}{\bf E}\cdot\boldsymbol{\nabla}_{k}\mathcal{E}$ (36) and a Hall conductivity $\sigma_{\alpha\beta}=\dfrac{e^{2}\tau^{2}}{\hbar^{3}}\dfrac{1}{V}\int\dfrac{d^{2}{\bf r}\;d^{2}{\bf k}}{(2\pi)^{2}}\;\left(\dfrac{\partial f^{0}}{\partial\mathcal{E}}\right)\;(\partial_{k_{\alpha}}\mathcal{E})\big{[}\boldsymbol{\nabla}_{k}\mathcal{E}\cdot\boldsymbol{\nabla}_{r}-\boldsymbol{\nabla}_{r}\mathcal{E}\cdot\boldsymbol{\nabla}_{k}\big{]}(\partial_{k_{\beta}}\mathcal{E}).$ (37) where we have suppressed the sum over the band index for brevity. It is not obvious from the expression as it stands that the anti-symmetric response, $\sigma_{xy}-\sigma_{yx}$, is finite.
Therefore, we next use integration by parts to rewrite the tensor as $\sigma_{\alpha\beta}=-\dfrac{e^{2}\tau^{2}}{\hbar^{3}}\dfrac{1}{V}\int d^{2}{\bf r}\;d^{2}{\bf k}\;f^{0}[\mathcal{E}]\;\big{[}\boldsymbol{\nabla}_{k}\partial_{k_{\alpha}}\mathcal{E}\cdot\boldsymbol{\nabla}_{r}-\boldsymbol{\nabla}_{r}\partial_{k_{\alpha}}\mathcal{E}\cdot\boldsymbol{\nabla}_{k}\big{]}(\partial_{k_{\beta}}\mathcal{E})+\mathcal{S}_{\alpha\beta}$ (38) where $\mathcal{S}$ is a symmetric tensor, $\mathcal{S}_{\alpha\beta}=\mathcal{S}_{\beta\alpha}$, and the integrand is explicitly anti-symmetric $\big{[}\boldsymbol{\nabla}_{k}\partial_{k_{\alpha}}\mathcal{E}\cdot\boldsymbol{\nabla}_{r}-\boldsymbol{\nabla}_{r}\partial_{k_{\alpha}}\mathcal{E}\cdot\boldsymbol{\nabla}_{k}\big{]}(\partial_{k_{\beta}}\mathcal{E})=\boldsymbol{\nabla}_{k}\partial_{k_{\alpha}}\mathcal{E}\cdot\boldsymbol{\nabla}_{r}\partial_{k_{\beta}}\mathcal{E}-\boldsymbol{\nabla}_{r}\partial_{k_{\alpha}}\mathcal{E}\cdot\boldsymbol{\nabla}_{k}\partial_{k_{\beta}}\mathcal{E}.$ (39) Thus, the net anti-symmetric part can survive. Returning to our model Hamiltonian, we see that this effect cannot be described as an anomalous or topological Hall response. It survives in the absence of both Berry curvatures. As we will show now, its origin lies in the vorticity of the local electronic velocity field. We expand the integrand $\partial_{k_{x}}^{2}\mathcal{E}\partial_{x,k_{y}}\mathcal{E}+\partial_{k_{x},k_{y}}\mathcal{E}\partial_{y,k_{y}}\mathcal{E}-\partial_{x,k_{x}}\mathcal{E}\partial_{k_{x},k_{y}}\mathcal{E}-\partial_{y,k_{x}}\mathcal{E}\partial_{k_{y}}^{2}\mathcal{E}$ (40) and use the fact that $\partial_{k_{\alpha},k_{\beta}}\mathcal{E}=\delta_{\alpha,\beta}\;\hbar^{2}/m+\mathcal{O}(\lambda^{2})$ to ignore the middle two terms when $\lambda/E_{F}$ is small. The other two terms to first order can be written as $\dfrac{\hbar^{2}}{m}\left(\partial_{x,k_{y}}\mathcal{E}-\partial_{y,k_{x}}\mathcal{E}\right)=\dfrac{\hbar^{2}}{m}\left(\hat{{\bf z}}\cdot\boldsymbol{\nabla}_{\bf r}\times(\boldsymbol{\nabla}_{\bf k}\mathcal{E})\right)=\dfrac{\hbar^{3}}{m}\left(\hat{{\bf z}}\cdot\boldsymbol{\nabla}_{\bf r}\times{\bf v}({\bf r})\right).$ (41) An intuitive picture behind the ordinary Hall effect is that electrons undertake cyclotron orbits under the action of the magnetic field, so that the electron velocity field forms vortices. This contribution, on the other hand, does not require an external magnetic field and instead uses the underlying magnetic texture to mimic vortices. The explicit connection to the texture is $\displaystyle\partial_{x,k_{y}}\mathcal{E}_{\pm}-\partial_{y,k_{x}}\mathcal{E}_{\pm}=\mp a\lambda\left(\partial_{x}m_{x}+\partial_{y}m_{y}\right)=\mp a\lambda\boldsymbol{\nabla}\cdot\hat{{\bf m}}({\bf r})$ (42) which has been reported elsewhere in the literature Freimuth _et al._ (2013); Akosa _et al._ (2018, 2019); Zhang _et al._ (2020) as a correction to the effective magnetic field in the presence of SOC. The resulting Hall conductivity for small densities $n<mJ/\pi\hbar$ is $\delta\sigma_{xy}=-\dfrac{ne^{2}\tau}{m}\left(\dfrac{\tau\lambda a}{\hbar}\int\dfrac{d^{2}{\bf r}}{V}\;\dfrac{\boldsymbol{\nabla}_{r}\cdot\hat{{\bf m}}({\bf r})}{2}\right)$ (43) and can be interpreted as arising from an effective magnetic field $\sim\lambda\boldsymbol{\nabla}_{r}\cdot\hat{{\bf m}}({\bf r})$. Lastly, we note that this integral is a boundary term. Therefore, unless there are singular features in the semi-classical velocity, the integral has to vanish. That being said, the general result in eq.
(37) may still be finite for systems with alternative kinetic dispersion relations $\mathcal{E}(\boldsymbol{\xi})$. ## Appendix D Kubo Formula Calculation Figure A1: Hall conductivity from exact diagonalization for a skyrmion crystal with (a) triangle and (b) square lattice with strong coupling $J/t=10$. The first panel shows the bands for a skyrmion unit cell with $L_{s}/a=6$. The skyrmion potential causes band folding between momentum eigenstates. The number of bands increases on increasing the lattice resolution of the skyrmion, as seen in the second panel with $L_{s}/a=14$. The resulting DoS and Hall conductivity are shown in the last two panels. While the semi-classical calculation produces intuitive results, the algebra is only controlled when $\lambda/E_{F}$, $(a/L_{s})$, and $\ell/L_{s}$ are small parameters. In this section, we will discuss the opposite limit with $a,L_{s}\ll\ell=\infty$. We use a tight-binding model with magnetic unit cell area $\sim(L_{s}/a)^{2}$ and calculate the Hall conductance using the TKNN Kubo formula. The results of the calculation are exact and contain information deriving from all the types of contributions to the Hall effect. Guided by the semi-classical calculation, here we discuss certain limiting cases. We consider a tight-binding version of the continuum model $\mathcal{H}=-t\sum\limits_{\langle{\bf i},{\bf j}\rangle,\sigma}c^{\dagger}_{{\bf i}\sigma}c^{\phantom{{\dagger}}}_{{\bf j}\sigma}-i\lambda\sum\limits_{\langle{\bf i},{\bf j}\rangle,\sigma,\sigma^{\prime}}c^{\dagger}_{{\bf i}\sigma}\left[{\bf r}_{ij}\times{\bf z}\cdot\boldsymbol{\sigma}\right]_{\sigma\sigma^{\prime}}c^{\phantom{{\dagger}}}_{{\bf j}\sigma^{\prime}}-J\sum\limits_{{\bf i},\sigma,\sigma^{\prime}}c^{\dagger}_{{\bf i}\sigma}\left[\hat{{\bf m}}_{i}\cdot\boldsymbol{\sigma}\right]_{\sigma\sigma^{\prime}}c^{\phantom{{\dagger}}}_{{\bf i}\sigma^{\prime}}$ (44) where the vector field $\hat{{\bf m}}_{\bf i}$ models a discrete version of a skyrmion ${\bf m}_{{\bf i}}=\begin{pmatrix}\sin\left(2\pi{\bf i}\cdot{\bf a}_{1}\right)\\\ \sin\left(2\pi{\bf i}\cdot{\bf a}_{2}\right)\\\ \cos\left(2\pi{\bf i}\cdot{\bf a}_{1}\right)+\cos\left(2\pi{\bf i}\cdot{\bf a}_{2}\right)+1\end{pmatrix},\quad\hat{{\bf m}}_{\bf i}=\dfrac{{\bf m}_{{\bf i}}}{\sqrt{{\bf m}_{{\bf i}}\cdot{\bf m}_{{\bf i}}}}$ (45) with winding number $+1$. Here ${\bf a}_{1}$ and ${\bf a}_{2}$ are the lattice vectors for the skyrmion lattice and ${\bf i}$ labels a position inside the skyrmion unit cell. The corresponding magnetic Brillouin Zone (MBZ) is spanned by vectors ${\bf b}_{1}$ and ${\bf b}_{2}$ that satisfy ${\bf a}_{i}\cdot{\bf b}_{j}=2\pi\delta_{i,j}$. These vectors permit a momentum representation $c^{\phantom{{\dagger}}}_{{\bf i}\sigma}=\dfrac{1}{\sqrt{N_{uc}}}\sum\limits_{{\bf k}\in\text{MBZ}}e^{i{\bf k}\cdot{\bf i}}c^{\phantom{{\dagger}}}_{{\bf k}\sigma}$ (46) with ${\bf k}$ taken from a $N_{k}\times N_{k}$ BZ mesh.
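As a concrete illustration of this construction, the sketch below assembles the Bloch Hamiltonian of eq. (44) for a square-lattice magnetic unit cell (the simpler of the two lattices in Fig. A1). Several choices are ours: we set $\lambda=0$ for brevity, read ${\bf i}\cdot{\bf a}_{1,2}$ in eq. (45) as fractional coordinates $i/N_{s}$, $j/N_{s}$, and use illustrative parameter values.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def m_hat(i, j, Ns):
    """Discrete texture of eq. (45) at site (i, j) of the Ns x Ns cell."""
    m = np.array([np.sin(2*np.pi*i/Ns),
                  np.sin(2*np.pi*j/Ns),
                  np.cos(2*np.pi*i/Ns) + np.cos(2*np.pi*j/Ns) + 1.0])
    return m / np.linalg.norm(m)

def bloch_H(kx, ky, Ns, t=1.0, J=10.0):
    """2Ns²-band Bloch Hamiltonian of eq. (44) on a square cell (λ = 0).
    k is in units of 1/a; the MBZ is k ∈ [0, 2π/Ns) per direction."""
    n = Ns * Ns
    H = np.zeros((2*n, 2*n), dtype=complex)
    idx = lambda i, j: ((i % Ns) * Ns + (j % Ns))
    for i in range(Ns):
        for j in range(Ns):
            a = idx(i, j)
            mi = m_hat(i, j, Ns)
            # exchange term  -J m̂_i · σ
            H[2*a:2*a+2, 2*a:2*a+2] += -J * (mi[0]*sx + mi[1]*sy + mi[2]*sz)
            # nearest-neighbour hopping; bonds leaving the cell pick up e^{i k Ns}
            for di, dj, k in ((1, 0, kx), (0, 1, ky)):
                b = idx(i + di, j + dj)
                phase = np.exp(1j * k * Ns) if (i + di == Ns or j + dj == Ns) else 1.0
                H[2*a:2*a+2, 2*b:2*b+2] += -t * phase * np.eye(2)
                H[2*b:2*b+2, 2*a:2*a+2] += -t * np.conj(phase) * np.eye(2)
    return H

# quick check: Hermiticity and the two J-split sectors at strong coupling
H = bloch_H(0.1, -0.2, Ns=4)
assert np.allclose(H, H.conj().T)
print(np.round(np.linalg.eigvalsh(H)[:4], 3))
```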
The Bloch Hamiltonian is then diagonalized to find the energies and wavefunctions, $\mathcal{H}({\bf k})|u_{n,{\bf k}}\rangle=\epsilon_{n}({\bf k})|u_{n,{\bf k}}\rangle.$ (47) The wavefunctions lead to the Berry curvature $\Omega_{n}({\bf k})=-2\text{Im}\langle\partial_{k_{x}}u_{n,{\bf k}}|\partial_{k_{y}}u_{n,{\bf k}}\rangle$ (48) that is then combined with the TKNN formula to calculate the Hall conductivity $\sigma_{xy}=-\dfrac{e^{2}}{\hbar}\dfrac{1}{V}\int\limits_{\text{MBZ}}\dfrac{d^{2}{\bf k}}{(2\pi)^{2}}\sum\limits_{n=1}^{N_{b}}\Omega_{n}({\bf k})\;\Theta(\mu-\epsilon_{n}({\bf k})).$ (49) Finally, we replace the integral by a discrete sum $\dfrac{1}{V}\int\limits_{\text{MBZ}}\dfrac{d^{2}{\bf k}}{(2\pi)^{2}}\longrightarrow\dfrac{1}{\mathcal{V}N_{k}^{2}}\sum\limits_{{\bf k}\in\text{MBZ}}$ (50) where $\mathcal{V}=|{\bf a}_{1}\times{\bf a}_{2}|$ is the area of the unit cell. We choose the normalization so that the density of electrons per unit cell $n=\dfrac{1}{N_{s}^{2}N_{k}^{2}}\sum\limits_{n,{\bf k}\in\text{MBZ}}\Theta(\mu-\epsilon_{n}({\bf k}))$ (51) goes from 0 (empty) to 2 (filled) as the chemical potential $\mu$ is varied across the spectrum (see Fig. A1). The resolution of the skyrmion within the unit cell is controlled by $N_{s}=L_{s}/a$. Larger $N_{s}$ leads to a finer real-space mesh but also gives rise to a larger Bloch Hamiltonian with $2N_{s}^{2}$ bands. The bottleneck in our code is the matrix diagonalization step, whose complexity is $\mathcal{O}(n^{3})$ where $n=2N_{s}^{2}$ is the size of the matrix. Since this step has to be repeated $N_{k}^{2}$ times, the overall complexity is $\mathcal{O}(N_{k}^{2}N_{s}^{6})$, and hence the continuum limit ($N_{s}\rightarrow\infty$) is more difficult to approach than the thermodynamic limit ($N_{k}\rightarrow\infty$). We find that the $N_{k}\rightarrow\infty$ and $N_{s}\rightarrow\infty$ limits can be different, especially for the vorticity correction term, which is sensitive to $N_{s}$.
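Continuing the sketch above (it reuses `bloch_H`), the conductivity of eq. (49) can be evaluated with the standard Fukui-Hatsugai-Suzuki link discretization of the Berry curvature, eq. (48). This minimal version fills the lowest `n_occ` bands, so it reproduces the TKNN sum when $\mu$ lies in a gap above them; for metallic fillings it is only a rough discretization, and the sign conventions should be checked against eqs. (48)-(49).

```python
import numpy as np  # bloch_H from the previous sketch is assumed in scope

def hall_conductivity(Ns=4, Nk=8, J=10.0, n_occ=None, t=1.0):
    """σ_xy in units of e²/h for the lowest n_occ bands, via the
    (non-Abelian) Fukui-Hatsugai-Suzuki link method on an Nk x Nk MBZ mesh."""
    if n_occ is None:
        n_occ = Ns * Ns            # fill the lower exchange-split sector
    ks = 2*np.pi/Ns * np.arange(Nk) / Nk      # MBZ mesh; bloch_H is 2π/Ns-periodic
    occ = [[np.linalg.eigh(bloch_H(kx, ky, Ns, t, J))[1][:, :n_occ]
            for ky in ks] for kx in ks]
    link = lambda U, V: np.linalg.det(U.conj().T @ V)
    C = 0.0
    for a in range(Nk):
        for b in range(Nk):
            U1, U2 = occ[a][b], occ[(a+1) % Nk][b]
            U3, U4 = occ[(a+1) % Nk][(b+1) % Nk], occ[a][(b+1) % Nk]
            # gauge-invariant plaquette flux ≈ Σ_occ Ω_n(k) dk²
            F = np.angle(link(U1, U2) * link(U2, U3) * link(U3, U4) * link(U4, U1))
            C += F / (2*np.pi)     # total Chern number of the occupied bands
    return -C                      # σ_xy / (e²/h), cf. eq. (49)

print(hall_conductivity())
```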
# Cospectral lifts of graphs F. Ramezani (corresponding author; Email address<EMAIL_ADDRESS>Department of Mathematics, Faculty of Science, K.N.Toosi University of Technology, Tehran, Iran, P.O. Box 16315-16181 ###### Abstract We prove that for a pair of cospectral graphs $G$ and $H$, there exist non-trivial lifts $G^{\prime}$ and $H^{\prime}$ of them which are cospectral. Moreover, for a pair of cospectral graphs on $6$ vertices, we find some cospectral lifts of them. AMS Subject Classification: 05C50. Keywords: lifts of graphs, eigenvalues, cospectral graphs. ## 1 Introduction Let $G=(V,E)$ be a simple graph on the vertex set $V(G)=\\{v_{1},v_{2},\ldots,v_{n}\\}$ and edge set $E$. The adjacency matrix of $G$ is an $n$ by $n$ matrix $A(G)$ whose $(i,j)$-th entry is $1$ if vertices $v_{i}$ and $v_{j}$ are adjacent and $0$ otherwise. The spectrum of $G$ is the multi-set of eigenvalues of $A(G)$. Two graphs $G$ and $G^{\prime}$ are called cospectral if they share the same spectrum. We say $G$ is determined by its spectrum (DS for short) if it has no non-isomorphic cospectral mate. The problem of constructing cospectral graphs has been investigated by several authors. For a survey of results in this area we refer the reader to [2, 3, 6]. In [4] the authors used the concept of $m$-cospectrality to construct new cospectral graphs. Haemers et al. in [1] considered the Godsil-McKay switching method to construct non-isomorphic cospectral graphs; see that paper for more details. In this article we use the concept of lifts of graphs to construct new non-isomorphic cospectral graphs from given small cospectral pairs of graphs. ## 2 Preliminaries In this section we mention some basic definitions and results which will be used throughout the paper. We denote by $\dot{E}$ the set of all ordered pairs $\\{(i,j)|\textrm{ }i<j,\textrm{ }\\{v_{i},v_{j}\\}\in E\\}$. For an Abelian group of order $k$, say $Gr$, a $k$-Abelian signature $s$ of the graph $G$ is a map $s:\dot{E}\longrightarrow Gr$. A $k$-Abelian lift of the graph $G$ associated with the signature $s$, denoted by $G(s)$, is a graph on the vertex set $V(G)\times[k]$ ($[k]=\\{0,1,\ldots,k-1\\}$, $Gr=(\\{g_{0},g_{1},\ldots,g_{k-1}\\},*)$), where for any $(i,j)\in\dot{E}$ and $a,b\in[k]$ there is an edge between $(v_{i},a)$ and $(v_{j},b)$ if and only if $s(i,j)*g_{a}=g_{b}$. Note that in the graph $G(s)$, for any $(i,j)\in\dot{E}$, there is a matching between the vertex sets $V_{i}=\\{v_{i}\\}\times[k]$ and $V_{j}=\\{v_{j}\\}\times[k]$. If a graph has $m$ edges, there may be $k^{m}$ different $k$-Abelian lifts of $G$, since the sets $V_{i},V_{j}$ can be matched in $k$ different ways. If the signature $s$ maps all pairs to the same element $g\in Gr$, then we denote the corresponding lift $G(s)$ by $G_{g}$. We illustrate the definition of $k$-lifts in the following figure, where the graph $G$ is the cycle $C_{4}$ and the corresponding signature is $s:\dot{E}\longrightarrow\mathbb{Z}_{2}$, with $s(1,3)=0,s(1,4)=0,s(2,3)=1,s(2,4)=0.$ Figure 1. The 2-lift $G(s)$ of $G=C_{4}$ corresponding to the signature $s$.
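To make the construction concrete, here is a minimal sketch, for the cyclic group $\mathbb{Z}_{k}$, that builds the adjacency matrix of $G(s)$ and computes its spectrum. The 0-indexing of vertices is ours, so the signature of Figure 1 becomes the dictionary below.

```python
import numpy as np

def cyclic_lift(n, edges, s, k):
    """Adjacency matrix of the k-Abelian lift G(s) for the cyclic group Z_k.
    `edges` is a list of ordered pairs (i, j) with i < j (the set Ė), and
    `s` maps each pair to an element of {0, ..., k-1}."""
    A = np.zeros((n*k, n*k), dtype=int)
    for (i, j) in edges:
        for a in range(k):
            b = (a + s[(i, j)]) % k          # s(i,j) * g_a = g_b in Z_k
            A[i*k + a, j*k + b] = A[j*k + b, i*k + a] = 1
    return A

# The 2-lift of C_4 from Figure 1: s(1,3)=0, s(1,4)=0, s(2,3)=1, s(2,4)=0
edges = [(0, 2), (0, 3), (1, 2), (1, 3)]     # vertices relabeled 0..3
s = {(0, 2): 0, (0, 3): 0, (1, 2): 1, (1, 3): 0}
A = cyclic_lift(4, edges, s, 2)
print(sorted(np.round(np.linalg.eigvalsh(A), 6)))   # spectrum of G(s)
```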
Let $Gr=(\\{g_{1}=1,g_{2},\ldots,g_{n}\\},*)$ be a group of order $n$. To any group element $g\in Gr$ we associate an $n\times n$ permutation matrix $P_{g}$, defined below: $P_{g}(i,j)=\begin{cases}1&\text{ {if} }g_{i}*g=g_{j},\\\ 0&\text{otherwise}.\end{cases}$ ###### Lemma 1 The function $\phi:Gr\rightarrow GL(n,\mathbb{R})$, where $GL(n,\mathbb{R})$ is the set of $n\times n$ real non-singular matrices and $\phi(g)=P_{g}$, is a group homomorphism. The eigenvalues of the graph $G(s)$ have been studied in the literature. For instance, in the following theorem from [5], the authors obtained the eigenvalues of Abelian $t$-lifts; see [5] for more details and the notation. ###### Theorem 1 Let $G$ be a multigraph and $\phi$ be a signature assignment to an Abelian group. Let $\beta$ be a common basis of eigenvectors of the permutation matrices in the image of $\phi$. For every $\mathbf{x}\in\beta$, let $A_{\mathbf{x}}$ be the matrix obtained from the adjacency matrix of $G$ by replacing any $(u,v)$-entry of $A(G)$ by $\sum_{(e,u,v)\in\overrightarrow{E}(G)}\lambda_{\mathbf{x}}(\phi(e,u,v))$. Then the spectrum of the $t$-lift $G(\phi)$ of $G$ is the multiset union of the spectra of the matrices $A_{\mathbf{x}}(\mathbf{x}\in\beta)$. ## 3 Main result Our main problem here is: “for a given pair of cospectral graphs $G$ and $H$, are there $k$-Abelian signatures $s,s^{\prime}$ for which $G(s)$ and $H(s^{\prime})$ are cospectral?”. We look for general answers to this question. It is known that for $l,l^{\prime}\in Gr$ the permutation matrices $P_{l},P_{l^{\prime}}$ commute, so they have a common basis of eigenvectors. The following theorem is a straightforward consequence of Theorem 1. ###### Theorem 2 Let $G$ be a graph and $s$ be a $k$-cyclic signature of $G$. Let $\beta$ be a common basis of eigenvectors of the permutation matrices in the image of $s$. For every $\mathbf{x}\in\beta$, let $A_{\mathbf{x}}$ be the matrix defined below $A_{\mathbf{x}}(i,j)=\begin{cases}\lambda_{\mathbf{x}}(P_{s(i,j)})&i<j,\\\ 0&i=j,\\\ \lambda_{\mathbf{x}}^{-1}(P_{s(i,j)})&i>j.\end{cases}$ Then the spectrum of $G(s)$ is the multi-set union of the spectra of the matrices $A_{\mathbf{x}}(\mathbf{x}\in\beta)$. ###### Lemma 2 Let $Gr$ be a group of order $n$. For any $g\in Gr$, every eigenvalue of the permutation matrix $P_{g}$ is an $n$-th root of unity. ###### Proof. The assertion follows from the fact that the order of any element in the group divides the order of the group. Hence $g^{n}=1$, and therefore $P_{g}^{n}=I_{n}.$ Hence the minimal polynomial of $P_{g}$, say $m(P_{g},x)$, divides the polynomial $x^{n}-1,$ and the assertion follows. $\Box$ ###### Lemma 3 Let $G$ and $H$ be cospectral graphs on $n$ vertices and let $Gr$ be a finite group of order $t$. If for some $g\in Gr$ the matrix $P_{g}$ is symmetric, then the graphs $G_{g}$ and $H_{g}$ are cospectral. ###### Proof. Since the signature assigns the fixed element $g$ to all the edges of the graph $G$, and $P_{g}^{-1}=P_{g},$ then by Theorem 1 the eigenvalues of the graph $G_{g}$ are the multi-set union of the spectra of the matrices $\omega_{i}A(G)$, where $\omega_{i}$’s are the eigenvalues of $P_{g}$ for $i=1,2,\ldots,t.$ On the other hand the eigenvalues of $\omega_{i}A(G)$ are $\omega_{i}\lambda_{j}$ where $\lambda_{j}$ is the $j$’th eigenvalue of $G$. Hence the spectra of $G_{g}$ and $H_{g}$ are both equal to the multi-set $\\{\omega_{i}\lambda_{j}\\}_{i=1,\ldots,t}^{j=1,\ldots,n}.$ $\Box$ ### 3.1 Examples We consider two cospectral graphs $G$ and $H$, shown in Figure 2.
We try to find Abelian lifts of them, say $G(s)$ and $H(s^{\prime})$, which are also cospectral. Figure 2. Two cospectral graphs $G$ and $H$. We first consider all possible $t$-Abelian signatures of the graphs $G$ and $H$, and suppose that the matrices $A(G)_{\mathbf{x}},A(H)_{\mathbf{x}}$, corresponding to the graphs $G$, $H$ and their prescribed signatures, which are introduced in Theorem 1, are of the following general forms: $A(G)_{\mathbf{x}}=\begin{pmatrix}0&u&0&0&0&0\\\ u^{-1}&0&v&w&0&0\\\ 0&v^{-1}&0&x&y&0\\\ 0&w^{-1}&x^{-1}&0&z&0\\\ 0&0&y^{-1}&z^{-1}&0&r\\\ 0&0&0&0&r^{-1}&0\end{pmatrix}\textrm{, }A(H)_{\mathbf{x}}=\begin{pmatrix}0&u_{1}&v_{1}&0&0&0\\\ u^{-1}_{1}&0&w_{1}&0&0&0\\\ v^{-1}_{1}&w^{-1}_{1}&0&x_{1}&y_{1}&z_{1}\\\ 0&0&x^{-1}_{1}&0&0&0\\\ 0&0&y^{-1}_{1}&0&0&r_{1}\\\ 0&0&z^{-1}_{1}&0&r^{-1}_{1}&0\end{pmatrix}.$ Note that $r,u,v,\ldots,z,r_{1},u_{1},v_{1},\ldots,z_{1}$ are complex variables standing for the eigenvalues of the permutation matrices corresponding to each edge. Using Theorem 2 we find sufficient conditions on the signatures such that the corresponding lifts become cospectral. ###### Theorem 3 Let $s,s^{\prime}$ be $k$-Abelian signatures of the graphs $G,H$ respectively. If the following conditions hold, the graphs $G(s)$ and $H(s^{\prime})$ are cospectral. * • $\frac{w}{v}=\frac{y}{z}$ * • $2(\frac{xv}{w}+\frac{w}{xv})=\frac{y_{1}r_{1}}{z_{1}}+\frac{z_{1}}{y_{1}r_{1}}+\frac{u_{1}w_{1}}{v_{1}}+\frac{v_{1}}{u_{1}w_{1}}.$ ###### Proof. We consider all possibilities for the matrices $A(G)_{\mathbf{x}},A(H)_{\mathbf{x}}$ corresponding to the graphs $G,H$. Comparing the coefficients of the characteristic polynomials $\chi(A(G)_{\mathbf{x}},t)$ and $\chi(A(H)_{\mathbf{x}},t)$, equality holds if and only if $2=\frac{wz}{vy}+\frac{vy}{wz},\hskip 56.9055pt(1)$ $\frac{xz}{y}+\frac{y}{xz}+\frac{xv}{w}+\frac{w}{xv}=\frac{y_{1}r_{1}}{z_{1}}+\frac{z_{1}}{y_{1}r_{1}}+\frac{u_{1}w_{1}}{v_{1}}+\frac{v_{1}}{u_{1}w_{1}}.\hskip 56.9055pt(2)$ The first equation follows by comparing the coefficients of $t^{2}$ and the second one follows by comparing the coefficients of $t,t^{3}$. Consider the first equality above; since the variables are $n$-th roots of unity, equality $(1)$ holds if and only if $wz=vy$, and hence the first assertion of the statement follows. The second assertion follows from Equation (2) and the first equality. Hence the lifts corresponding to such signatures are cospectral. $\Box$ ###### Corollary 1 The following constraints on the variables $r,u,v,\ldots,z,r_{1},u_{1},v_{1},\ldots,z_{1}$ will give cospectral lifts of the graphs $G$ and $H$. $z=\frac{vy}{w},\textrm{ }u_{1}=y_{1}=x,\textrm{ }w_{1}=r_{1}=v,\textrm{ }z_{1}=w$ ###### Example 1 In the graphs $G$ and $H$ the following signatures satisfy the conditions stated in Theorem 3. Hence the graphs $G(s)$ and $H(s^{\prime})$ are cospectral. Group elements are written in cycle notation. $s(1,2)=(1,2,3),s(2,3)=(1,3,2),s(2,4)=\textrm{id},$ $s(3,4)=(1,3,2),s(3,5)=(1,3,2),s(4,5)=(1,2,3),s(5,6)=(1,2)$ $s^{\prime}(1,2)=(1,3,2),s^{\prime}(1,3)=\textrm{id},s^{\prime}(2,3)=(1,3,2),$ $s^{\prime}(3,4)=\textrm{id},s^{\prime}(3,5)=(1,3,2),s^{\prime}(3,6)=\textrm{id},s^{\prime}(5,6)=(1,3,2)$ The adjacency matrices of the graphs $G(s)$ and $H(s^{\prime})$ are of the following forms.
$A(G(s))=\begin{pmatrix}0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0\\\ 0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0\\\ 0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0\\\ 0,0,1,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0\\\ 1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0\\\ 0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0\\\ 0,0,0,0,1,0,0,0,0,0,0,1,0,0,1,0,0,0\\\ 0,0,0,0,0,1,0,0,0,1,0,0,1,0,0,0,0,0\\\ 0,0,0,1,0,0,0,0,0,0,1,0,0,1,0,0,0,0\\\ 0,0,0,1,0,0,0,1,0,0,0,0,0,1,0,0,0,0\\\ 0,0,0,0,1,0,0,0,1,0,0,0,0,0,1,0,0,0\\\ 0,0,0,0,0,1,1,0,0,0,0,0,1,0,0,0,0,0\\\ 0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,1,0\\\ 0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,1,0,0\\\ 0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,1\\\ 0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0\\\ 0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0\\\ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0\end{pmatrix},A(H(s^{\prime}))=\begin{pmatrix}0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0\\\ 0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0\\\ 0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0\\\ 0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0\\\ 0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0\\\ 1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0\\\ 0,1,0,0,0,1,0,0,0,0,0,1,0,0,0,0,1,1\\\ 0,0,1,1,0,0,0,0,0,0,1,0,0,1,1,0,0,0\\\ 1,0,0,0,1,0,0,0,0,1,0,0,1,0,0,1,0,0\\\ 0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0\\\ 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0\\\ 0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0\\\ 0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1\\\ 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0\\\ 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0\\\ 0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0\\\ 0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0\\\ 0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0\end{pmatrix}.$ ## References * [1] A. Abiad, A.E. Brouwer and W. H. Haemers, Godsil-McKay switching and isomorphism, Electronic Journal of Linear Algebra 28 (2015), 4–11. * [2] E. R. van Dam and W. H. Haemers, Which graphs are determined by their spectrum?, Linear Algebra Appl. 373 (2003), 241–272. * [3] E. R. van Dam and W. H. Haemers, Developments on spectral characterizations of graphs, Discrete Math., 309 (2009), 576 -586. * [4] N. Ghareghani, F. Ramezani, B. Tayfeh-Rezaei, Graphs cospectral to starlike trees, Linear Algebra Appl., 429 (2008), 2691 -2701. * [5] B. Mohar and B. Tayfeh-Rezaie, Median eigenvalues of bipartite graphs, J. Algebraic Combin. 41 (2015), 899–909. * [6] F. Ramezani, On the signed graphs with two distinct eigenvalues, arXiv:1511.03511v2.
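Returning to Example 1: rather than retyping the $18\times 18$ matrices, one can rebuild both lifts directly from the signatures and compare spectra. This is a minimal sketch; the 0-indexing of vertices and copies, and the reading of $s(i,j)=\pi$ as matching $(v_{i},a)$ to $(v_{j},\pi(a))$, are our own choices, kept consistent with the lift definition in Section 2 and with the displayed matrices.

```python
import numpy as np

def perm_matrix(pi, k=3):
    """P[a, b] = 1 iff b = pi(a); pi is a dict on {0, ..., k-1}."""
    P = np.zeros((k, k), dtype=int)
    for a in range(k):
        P[a, pi.get(a, a)] = 1
    return P

# Permutations from Example 1 in cycle notation, relabeled 0-indexed:
ident = {}
c123 = {0: 1, 1: 2, 2: 0}    # (1,2,3)
c132 = {0: 2, 2: 1, 1: 0}    # (1,3,2)
t12  = {0: 1, 1: 0}          # (1,2)

def lift(n, sig, k=3):
    """Adjacency matrix of the k-lift given a signature dict {(i, j): pi}."""
    A = np.zeros((n*k, n*k), dtype=int)
    for (i, j), pi in sig.items():
        P = perm_matrix(pi, k)
        A[i*k:(i+1)*k, j*k:(j+1)*k] = P
        A[j*k:(j+1)*k, i*k:(i+1)*k] = P.T
    return A

sG = {(0, 1): c123, (1, 2): c132, (1, 3): ident, (2, 3): c132,
      (2, 4): c132, (3, 4): c123, (4, 5): t12}
sH = {(0, 1): c132, (0, 2): ident, (1, 2): c132, (2, 3): ident,
      (2, 4): c132, (2, 5): ident, (4, 5): c132}

eG = np.sort(np.linalg.eigvalsh(lift(6, sG)))
eH = np.sort(np.linalg.eigvalsh(lift(6, sH)))
print(np.allclose(eG, eH))   # expect True if the lifts are cospectral
```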
# New Protocols and Ideas for Practical Quantum Position Verification Rene Allerstorfer (Email<EMAIL_ADDRESS>QuSoft, CWI Amsterdam, Science Park 123, 1098 XG Amsterdam, The Netherlands; Harry Buhrman (Email: <EMAIL_ADDRESS>QuSoft, CWI Amsterdam, Science Park 123, 1098 XG Amsterdam, The Netherlands and QuSoft, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands; Florian Speelman (Email: <EMAIL_ADDRESS>QuSoft, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands; Philip Verduyn Lunel (Email: <EMAIL_ADDRESS>QuSoft, CWI Amsterdam, Science Park 123, 1098 XG Amsterdam, The Netherlands ###### Abstract Loss of inputs can be detrimental to the security of quantum position verification (QPV) protocols, as it may allow attackers to not answer on all played rounds, but only on those they perform well on. In this work, we study loss-tolerant QPV protocols. We propose a new fully loss-tolerant protocol, based on the SWAP test, with several desirable properties. The task of the protocol, which can be implemented using only a single beam splitter and two detectors, is to estimate the overlap between two input states. By formulating possible attacks as a semi-definite program (SDP), we prove full loss tolerance against unentangled attackers restricted to local operations and classical communication (LOCC), and that the attack probability decays exponentially under parallel repetition of rounds. Furthermore, the role of loss and quantum communication attacks in QPV is investigated more generally. We construct the first known example of a QPV protocol that is provably secure against unentangled attackers restricted to LOCC, but can be perfectly attacked by local operations and a single round of simultaneous quantum communication. However, it is also shown that any protocol secure against classical communication can be transformed into a protocol secure against quantum communication. Finally, we observe that any multi-round QPV protocol can be attacked with a linear amount of entanglement if the loss is high enough. ## 1 Introduction Geographical position is an important contributor to trust. For example, a message which provably comes from a secure location in a government institution automatically gains credence as actually having been sent by that government. Position-based cryptography is the study of using position as a cryptographic credential. The most basic task here is to certify someone’s position, but this can be extended to messages that can only be read at a certain location, or to _authenticating_ that a message came (unaltered) from a certain location. We will focus on the task of _position verification_ , which can be used as the building block for tasks like position-based authentication. For simplicity, the focus will be on the one-dimensional case, i.e. verifying one’s position on a line, but the relevant ideas generalize readily to more dimensions. In our case, protocols will have the form of two _verifiers_ , $\mathsf{V_{0}}$ and $\mathsf{V_{1}}$, attempting to verify the location of a _prover_ $\mathsf{P}$. An adversary to a scheme will take the form of a coalition of attackers, acting while the location of $\mathsf{P}$ is empty. Notationally, we’ll use $\mathsf{A}$ (or Alice) for the attacker located between $\mathsf{V_{0}}$ and the location of $\mathsf{P}$, and $\mathsf{B}$ (or Bob) for the attacker located between the location of $\mathsf{P}$ and $\mathsf{V_{1}}$.
It was shown by Chandran, Goyal, Moriarty, and Ostrovsky [CGMO09] that without any additional assumptions, position verification is an impossible task to achieve classically. The study of quantum position verification (QPV) was first initiated by Kent, Munro, and Spiller [KMS11], followed by various proposals and ad-hoc attacks [LL11]. A general attack on quantum protocols for this task was presented by Buhrman, Chandran, Fehr, Gelles, Goyal, Ostrovsky, and Schaffner [BCF+11], requiring a doubly-exponential amount of entanglement. This attack was improved by Beigi and König [BK11] to require only an exponential amount of entanglement – much more efficient but still impractically large. (See also [GLW13, Dol19] for generalizations of such attacks to different settings, with similar entanglement scaling.) A natural question is therefore whether some QPV protocols can be proven secure against attackers that share a limited amount of entanglement. Since entanglement is hard to generate, it is already interesting to study whether protocols are secure against adversaries that are very limited in their access to pre-shared entangled states. For instance, the QPV${}_{\text{BB84}}$ protocol [KMS11], inspired by the BB84 quantum key-distribution protocol, involves only a single qubit sent by $\mathsf{V_{0}}$, in the state $\ket{0}$, $\ket{1}$, $\ket{+}$, or $\ket{-}$, and the choice of basis sent by $\mathsf{V_{1}}$. Even though this protocol is insecure against attackers sharing a single EPR pair [LL11], security can be proven against unentangled attackers [BCF+11], so that $\Theta(n)$ entanglement is required to break the $n$-fold parallel repetition [TFKW13, RG15]. At the current technological level, such protocols are very interesting to analyze, and would already give a super-classical level of security if implemented in practice. Additionally, other protocols have been proposed [KMS11, CL15, Unr14, JKPPG21, BCS21] that combine classical and quantum information in interesting ways, sometimes requiring intricate methods to attack [BFSS13, Spe16a, OCCG20]. Unfortunately, implementing any of the mentioned protocols would run into large obstacles: the quantum information involved would have to be sent at the speed of light, i.e., using photons, and in realistic experimental setups a large fraction of photons will be lost. Compensating for this in the most natural way, by ignoring rounds whenever the prover claims that a photon was lost in transmission, lets attackers break all protocols. In our contribution, we study the role of loss in QPV and present a new loss-tolerant protocol, together with several results that increase our understanding of loss-tolerant protocols for QPV.

#### Loss-tolerance in QPV.

Throughout, we will use $\eta$ for the rate of transmission, i.e., the probability that a quantum message arrives in a realistic protocol. We will distinguish two types of loss tolerance that we might require schemes to satisfy. The first, _partial loss tolerance_, refers to a protocol which is secure for some values $\eta\geq\eta_{\mathrm{threshold}}$, meaning that the honest parties have a maximum level of allowed loss. This level of loss might be inherent to a certain scheme, or a family of schemes could be designed with $\eta_{\mathrm{threshold}}$ as a parameter. Security is only guaranteed in a situation where a high enough fraction of the rounds are played. If significantly more photons than this threshold are lost, then the protocol will have to abort.
Examples of partially loss-tolerant schemes are extensions of QPV${}_{\text{BB84}}$ to more bases [QS15, Spe16b], which are secure against unentangled attackers in an environment with some loss. (This notion is satisfied to a small degree even by schemes that are not designed to be loss tolerant, simply by having some error-robustness.) The basic QPV${}_{\text{BB84}}$ scheme can directly be seen to be partially loss tolerant for loss below $\frac{1}{2}-\frac{1}{2\sqrt{2}}$, and the simplest attack that uses loss only works when the loss is above $\frac{1}{2}$. _Full loss tolerance_ is achieved when a protocol is secure irrespective of the loss rate. In particular, the protocol stays secure when conditioning on those rounds where the prover replied, fully ignoring rounds where a photon is lost. The protocol by Lim, Xu, Siopsis, Chitambar, Evans, and Qi [QLL+15, LXS+16], the first fully loss-tolerant protocol, consists of $\mathsf{V_{0}}$ and $\mathsf{V_{1}}$ both sending a qubit, and having the prover perform a Bell measurement on both, broadcasting the measurement outcome. This protocol is secure against unentangled attackers, no matter the loss rate. In the current work, we advance the study of loss-tolerant QPV with the following results:

* We present a new fully loss-tolerant protocol: QPV${}_{\textsf{SWAP}}$, which is based on the SWAP test [BCWdW01]. The new protocol compares favorably to Lim et al.'s protocol [LXS+16] in terms of ease of implementation using linear optics, by requiring only a single beam splitter – the Hong-Ou-Mandel effect can be viewed as equivalent to the SWAP test [GECP13].
* We prove fully loss-tolerant security by formulating possible attacks as a semi-definite program (SDP), and show that the protocol is secure against unentangled attackers that are allowed a single round of simultaneous (classical) communication. Additionally, we show that the attack probability decays exponentially under _parallel repetition_: when attackers respond to a size-$k$ subset out of $n$ parallel rounds, pretending photon loss on the other inputs, their probability of a successful attack still decays exponentially in $k$. Such a parallel repetition is not known for the protocol of [LXS+16], and this is the first parallel repetition theorem for fully loss-tolerant QPV. We obtain this result by constructing an SDP formulation of the $n$-fold parallel repetition of the problem, constructing a dual of this SDP for variable $n$, and then finding a point in the generalized dual problem.
* We also show that creating a fully loss-tolerant QPV protocol which requires superlinear entanglement (in the number of qubits involved) is impossible. This follows directly from a simple observation: if there is no limit to the loss, the adversaries can attempt quantum teleportation and guess the teleportation corrections, claiming 'no photon' if the guess is incorrect.

Additionally, we present several results on the role of quantum communication in QPV attacks.

#### The role of quantum communication for attacks on QPV.

Some works [BRSdW11, TFKW13, BFSS13, BCS21] attempt to lower bound the pre-shared entanglement required from attackers that are allowed a round of simultaneous _quantum_ communication, while other results, such as [BK11, RG15, QS15, QLL+15, LXS+16, GC20, OCCG20] and the security proof of QPV${}_{\textsf{SWAP}}$ in the current work, assume attackers that are restricted to communicate only classically.
Even though quantum communication can potentially be simulated by teleportation, it is not immediately clear how to compare bounds between these two settings, especially in cases where the exact size of the lower bound is of interest. (For instance, the bound of [RG15] does not fully supersede [TFKW13], and therefore finding a tight lower bound for the parallel repetition of the QPV${}_{\textsf{BB84}}$ protocol against attackers that have quantum communication remains open.) The simplest version of this question can be asked for unentangled attackers: If a (quantum-question, classical-reply) QPV protocol is secure against unentangled attackers that communicate classically, is that protocol also secure against unentangled attackers that are allowed to use quantum communication? In the current work, we answer this question in the negative: We construct a protocol that is provably secure against unentangled attackers that can use classical communication, but can be broken by a single round of simultaneous quantum communication. This shows that some care has to be taken when interpreting results that restrict to classical messages only. Interestingly, we are additionally able to show that our counter-example is in some sense artificial: Given a protocol that is secure against classical messages, but insecure when quantum communication is allowed, it is always possible to transform this protocol into one that is secure when quantum communication is allowed. This new protocol can be constructed from the given protocol by applying local maps to the messages from the verifiers $\mathsf{V_{0}}$, $\mathsf{V_{1}}$, without having to modify the output predicate. Our proof for this statement involves a recursive argument, where we view the quantum communication of a successful attack as the input messages to two new protocols. We then recursively consider an increasing number of new possible protocols, and use _emergent classicality_ [QR20] to show that a secure protocol of the required form has to exist.

### 1.1 Structure of the paper

In Section 3 we present the protocol QPV${}_{\textsf{SWAP}}$, with primary security analysis in Section 3.1, extension of the security analysis to the loss-tolerant setting in Section 3.2, and an upper bound to the entanglement required for an attack in Section 3.3. Section 4 is concerned with results applicable to QPV in general. In subsections 4.1 and 4.2 our results relating to quantum-communication attacks on QPV are presented, while subsection 4.3 is concerned with results about loss tolerance.

## 2 Preliminaries

### 2.1 Notation

We denote parties in QPV protocols by letters A, B, etc. and their quantum registers as $A_{1}\cdots A_{n}$, $B_{1}\cdots B_{n}$ and so on, respectively. Sometimes we may refer to "all registers party $\mathsf{X}$ holds" just by X, giving expressions like $\operatorname{Pos}(\mathsf{A}\otimes\mathsf{B})$, for example. Cumulative distribution functions are written as $F_{X}$, where $X$ is either a random variable or explicitly the distribution. Unless otherwise indicated, $\lVert\cdot\rVert_{p}$ is the usual $p$-norm. The diamond norm on quantum channels is denoted by $\lVert\cdot\rVert_{\diamond}$ and is defined as $\lVert\mathcal{C}\rVert_{\diamond}\coloneqq\max_{\rho}\lVert(\mathcal{C}\otimes\mathbbm{1}_{k})(\rho)\rVert_{1}$ for quantum states $\rho$. Partial transposition of an operator $P$ with respect to party $\mathsf{B}$ is denoted $P^{T_{\mathsf{B}}}$.
The set of PPT measurements (i.e., sets of positive semi-definite operators adding up to the identity, whose partial transposes are positive semi-definite as well) on two subsystems held by parties A and B, respectively, is denoted PPT$(\mathsf{A}:\mathsf{B})$. We use the term "Local Operations and Broadcast Communication" (LOBC) to describe the scenario of a single round of simultaneous classical communication with local quantum operations before and after the round of communication. Finally, the image of a function $f$ is denoted by $\operatorname{Im}(f)$ and for a set $X$ we write $|X|$ for its cardinality. All other notation is explained in the text.

### 2.2 The SWAP test

The SWAP test was first introduced in [BCWdW01] for quantum fingerprinting as a useful tool to determine if two unknown states are identical or not. Its quantum circuit is depicted in Figure 1.

Figure 1: The SWAP test, taken from [BCWdW01]. $H$ denotes the Hadamard gate.

The state to be measured in the computational basis is
$\displaystyle(H\otimes\mathbbm{1})\text{c-SWAP}(H\otimes\mathbbm{1})\ket{0}\ket{\phi}\ket{\psi}=\frac{1}{2}\ket{0}(\ket{\phi}\ket{\psi}+\ket{\psi}\ket{\phi})+\frac{1}{2}\ket{1}(\ket{\phi}\ket{\psi}-\ket{\psi}\ket{\phi}).$ (2.1)
Therefore we have the measurement statistics
$\displaystyle\mathbb{P}(0)=\frac{1+\lvert\braket{\psi}{\phi}\rvert^{2}}{2}\qquad\text{and}\qquad\mathbb{P}(1)=\frac{1-\lvert\braket{\psi}{\phi}\rvert^{2}}{2}.$ (2.2)
The output distribution only depends on the overlap $\lvert\braket{\psi}{\phi}\rvert$ between the input states. One notable special case is that for $\ket{\phi}=\ket{\psi}$ the SWAP operation has no effect and we get $\mathbb{P}(0)=1$. Another advantage of the SWAP test is that it is easily implemented experimentally with a single beam splitter and two photon detectors [GECP13]. Its flexibility concerning input states and the simplicity of its experimental realization make it a good candidate for QPV.

## 3 The QPV${}_{\textsf{SWAP}}$ protocol

We define the protocol QPV${}_{\textsf{SWAP}}(\beta_{1},\dots,\beta_{k})$, depicted in the space-time diagram in Figure 2, as follows.

1. By means of local and shared randomness or a secure private channel, verifiers $\mathsf{V_{0}}$ and $\mathsf{V_{1}}$ uniformly draw a random overlap $\beta\in\{\beta_{1},\dots,\beta_{k}\}$ and agree on two uniformly random states $\ket{\psi},\ket{\phi}$ on the Bloch sphere such that $\lvert\braket{\psi}{\phi}\rvert=\beta$. Then $\mathsf{V_{0}}$ prepares the state $\ket{\psi}$ and $\mathsf{V_{1}}$ prepares $\ket{\phi}$. Each verifier sends their state to $\mathsf{P}$ such that they arrive there simultaneously.
2. The honest party $\mathsf{P}$ applies the SWAP test on the two quantum inputs as soon as they arrive at $\mathsf{P}$. This yields an output $z\in\{0,1,\varnothing\}$, indicating $\mathsf{P}$'s measurement result or possibly a "loss" event. In particular, $\mathbb{P}(z=0\mid\beta,\text{not loss})=(1+\beta^{2})/2$ and $\mathbb{P}(z=1\mid\beta,\text{not loss})=(1-\beta^{2})/2$. Then $\mathsf{P}$ immediately sends $z$ to both verifiers $\mathsf{V_{0}}$ and $\mathsf{V_{1}}$.
3. The verifiers closely monitor if they receive an answer in time and compare what they received. If they got different bits, or if at least one of their bits arrived too early/late, they abort and reject. Otherwise both verifiers add $z$ to their (ordered) lists of answers $L_{\beta}$.
4. After having completed $R_{\beta}\geq R$ rounds with a conclusive answer $z\in\{0,1\}$, sequentially or in parallel for each $\beta$, they stop, check if the rate of $\varnothing$ symbols is close enough to what is expected from P (say a rate $1-\eta$ is expected from P; the verifiers can apply an analogous statistical test around $1-\eta$, as described below for the conclusive answers, to check for any suspicious actions), discard any rounds with answer $\varnothing$ and proceed to the statistical analysis on $C_{\beta}=L_{\beta}\setminus\{\varnothing\}$ for each $\beta$. They test if the sample $\hat{p}_{\beta}=\#\{z\in C_{\beta}:z=0\}/R_{\beta}$ of conclusive answers is contained in the $(1-\alpha)$-quantile around the expected $p_{\beta}=(1+\beta^{2})/2$.
5. Only if they have received the same answer in time in every single round and if the statistical test was passed on all $L_{\beta}$, they accept. Otherwise, they reject.

Figure 2: Space-time diagram of the QPV${}_{\textsf{SWAP}}$ protocol. We assume all information, quantum (—) and classical (- - -), travels at the speed of light. For graphical simplicity we have put $\mathsf{P}$ exactly in the middle of $\mathsf{V_{0}}$ and $\mathsf{V_{1}}$ (which is not necessary for the purposes of QPV). The attackers, not being at position $\mathsf{P}$, would like to convince the verifiers that they are at $\mathsf{P}$. Note that to have any chance of winning, attackers need to produce $a=b$.

Note that in essence the task in this protocol is to estimate the overlap $\beta$ of the input states $\ket{\psi}$ and $\ket{\phi}$. This is independent of the dimensionality/nature of the input states, making the protocol very flexible. To attack this protocol, it is evident that there need to be at least two attackers due to the timing constraint. A coalition of attackers has to position at least one party $\mathsf{A}$ between $\mathsf{V_{0}}$ and $\mathsf{P}$ and one party $\mathsf{B}$ between $\mathsf{P}$ and $\mathsf{V_{1}}$. Since the SWAP test is a joint operation on two quantum states, spatially separated attackers cannot apply the SWAP test unless they have access to pre-shared entanglement, as we will show. Since any QPV protocol can be perfectly attacked if the attackers have access to enough pre-shared entanglement [BCF+11], we assume that the attackers have no pre-shared entanglement and we also restrict them to LOCC channels only. We further assume that the attackers act non-adaptively, i.e., they do not base their strategy in a given round on results from previous rounds. This does not seem to be too restrictive, as we could simply replace $R$ in the above algorithm by $2R$ rounds and let the verifiers choose a uniformly random subset of $R$ rounds on which they perform the security analysis. That way any given round has probability $1/2$ of being "relevant". Together with the independent state preparation in each round, this seems to make adaptive strategies no more useful than non-adaptive ones. To assess the security of this protocol, we consider the following. As the individual rounds are independent, the subsets $L_{\beta}$ of answers given input $\rho_{\beta}$ will be a sample of a binomial distribution with parameters $R_{\beta}$ and some $q_{\beta}$. The verifiers can then test if what they received matches closely enough with what they expect from an honest party.
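Both the SWAP-test statistics of (2.2) and the verifiers' quantile test (step 4 above) are easy to check numerically. The following is a minimal illustrative sketch of our own, not part of the protocol specification, assuming Python with numpy and scipy:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)

def random_qubit():
    """Haar-random pure qubit state as a normalized length-2 complex vector."""
    z = rng.normal(size=2) + 1j * rng.normal(size=2)
    return z / np.linalg.norm(z)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
SWAP = np.eye(4)[[0, 2, 1, 3]]                   # two-qubit SWAP
CSWAP = np.kron(np.diag([1.0, 0.0]), np.eye(4)) \
      + np.kron(np.diag([0.0, 1.0]), SWAP)       # controlled-SWAP
U = np.kron(H, np.eye(4)) @ CSWAP @ np.kron(H, np.eye(4))

phi, psi = random_qubit(), random_qubit()
out = U @ np.kron([1.0, 0.0], np.kron(phi, psi))  # (H x 1) c-SWAP (H x 1)|0>|phi>|psi>

p0_circuit = np.linalg.norm(out[:4]) ** 2             # P(ancilla reads 0)
p0_formula = (1 + abs(np.vdot(psi, phi)) ** 2) / 2    # eq. (2.2)
assert abs(p0_circuit - p0_formula) < 1e-12

# Verifiers' acceptance interval (step 4): the (1 - alpha)-quantile of
# Bin(R, p_beta) around the honest rate p_beta = (1 + beta^2) / 2.
alpha, R, beta = 1e-6, 100_000, 0.5
p_beta = (1 + beta ** 2) / 2
lo = binom.ppf(alpha / 2, R, p_beta) / R
hi = binom.ppf(1 - alpha / 2, R, p_beta) / R
print(f"accept if p_hat in [{lo:.4f}, {hi:.4f}]  (honest p_beta = {p_beta})")
```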
We define the statistical test to be done by the verifiers as follows:

1. For each overlap $\beta$, they calculate the $(1-\alpha)$-quantile around the ideal $p_{\beta}=(1+\beta^{2})/2$ (in order to capture P with high probability, $\alpha$ can be set to a small number, e.g. $10^{-6}$), which gives a lower and an upper bound
$\displaystyle\begin{split}L_{\alpha,\beta}&\coloneqq z_{\frac{\alpha}{2}}(\beta,R_{\beta})/R_{\beta}=F^{-1}_{\text{Bin}(R_{\beta},p_{\beta})}\left(\frac{\alpha}{2}\right)/R_{\beta}\\ U_{\alpha,\beta}&\coloneqq z_{1-\frac{\alpha}{2}}(\beta,R_{\beta})/R_{\beta}=F^{-1}_{\text{Bin}(R_{\beta},p_{\beta})}\left(1-\frac{\alpha}{2}\right)/R_{\beta},\end{split}$ (3.1)
with $F^{-1}$ being the inverse cumulative distribution function. This defines an acceptance interval
$\displaystyle\mathsf{acc}_{\beta}(\alpha,R_{\beta})\coloneqq[L_{\alpha,\beta},U_{\alpha,\beta}].$ (3.2)
2. For each overlap $\beta$, they check if the sample $\hat{p}_{\beta}\in\mathsf{acc}_{\beta}(\alpha,R_{\beta})$. If this is the case for all $\beta$, they accept. Otherwise, they reject.

By definition, the honest party will return a sample $\hat{p}^{\mathsf{P}}_{\beta}\in\mathsf{acc}_{\beta}(\alpha,R_{\beta})$ with probability $1-\alpha$ and thus the test will accept P with high probability $(1-\alpha)^{k}=1-O(k\alpha)$. To optimize the overlap between their distribution and the acceptance regions, the attackers will attempt to respond as close to each $p_{\beta}$ as possible, with a binomial parameter of $p^{\mathsf{AB}}_{\beta}=p_{\beta}-\Delta_{\beta}$, defining a vector of differences
$\displaystyle\Delta=\begin{pmatrix}\Delta_{\beta_{1}}\\ \vdots\\ \Delta_{\beta_{k}}\end{pmatrix}.$ (3.3)
If rounds are run in parallel, attackers could also decide to just respond with a deterministic list containing some fraction $p^{\mathsf{AB}}$ of "0" answers. They could perfectly break the protocol if $p^{\mathsf{AB}}\in\bigcap_{\beta}\mathsf{acc}_{\beta}(\alpha,R_{\beta})$. This, however, we can always prevent by choosing the $R_{\beta}$'s large enough so that the acceptance regions for different overlaps become disjoint. For attackers responding binomially we need to evaluate
$\displaystyle\mathbb{P}(\mathsf{acc}|\mathsf{attack})\coloneqq\mathbb{P}\left(\hat{p}^{\mathsf{AB}}_{\beta}\in\mathsf{acc}_{\beta}(\alpha,R_{\beta})\quad\forall\beta\right)=\prod_{\beta}\mathbb{P}\left(\hat{p}^{\mathsf{AB}}_{\beta}\in\mathsf{acc}_{\beta}(\alpha,R_{\beta})\right)\eqqcolon\prod_{\beta}\mathbb{P}(\mathsf{acc}_{\beta}|\mathsf{attack}).$ (3.4)
Now there are several cases to consider:

1. $\mathbf{\lVert\Delta\rVert_{1}=0.}$ Then $\Delta_{\beta}=0$ for all $\beta$ and the attackers respond with the identical distribution as P, therefore $\mathbb{P}(\mathsf{acc}|\mathsf{attack})=(1-\alpha)^{k}=1-O(k\alpha)$.
2. $\mathbf{p_{\boldsymbol{\beta}}\neq 1\textbf{ and }p^{\mathsf{AB}}_{\boldsymbol{\beta}}=1}.$ Then $\mathbb{P}(\mathsf{acc}|\mathsf{attack})=0$ as the $(1-\alpha)$-quantile around $p_{\beta}$ will exclude the value 1 (for sufficiently large $R_{\beta}$).
3. $\mathbf{p_{\boldsymbol{\beta}}=1\textbf{ and }p^{\mathsf{AB}}_{\boldsymbol{\beta}}\neq 1}.$ Then $\mathbb{P}(\mathsf{acc}_{\beta}|\mathsf{attack})=\left(p^{\mathsf{AB}}_{\beta}\right)^{R_{\beta}}=O\left(2^{-R_{\beta}}\right).$
4. $\mathbf{\lVert\Delta\rVert_{1}\neq 0\textbf{ and }p_{\boldsymbol{\beta}},p^{\mathsf{AB}}_{\boldsymbol{\beta}}\in\big{[}\frac{1}{2},1\big{)}.}$ Then there exists a $\beta\in\{\beta_{1},\dots,\beta_{k}\}$ such that $\Delta_{\beta}\neq 0$.
By using the Gaussian approximation for the binomial distributions (which we may apply as we can always make the number of rounds sufficiently large), one can show (cf. appendix A.1) that
$\displaystyle\mathbb{P}(\mathsf{acc}_{\beta}|\mathsf{attack})\lesssim\frac{\sqrt{2}f_{\beta}^{\mathsf{AB}}}{\sqrt{\pi R_{\beta}}\Delta_{\beta}}e^{-\left(\sqrt{R_{\beta}}\Delta_{\beta}-f_{\beta}^{\mathsf{P}}c_{\alpha}\right)^{2}/\left(f_{\beta}^{\mathsf{AB}}\right)^{2}}=O\left(\frac{2^{-\Delta_{\beta}^{2}R_{\beta}}}{\Delta_{\beta}\sqrt{R_{\beta}}}\right)$ (3.5)
for functions $c_{\alpha},f_{\beta}$ that are independent of $R_{\beta}$ and $\Delta_{\beta}$. Hence in this scenario the success probability of attackers is also exponentially suppressed. So unless $\Delta_{\beta}=0$ for all $\beta$, we have exponential suppression of the attacker success probability $\mathbb{P}(\mathsf{acc}|\mathsf{attack})$. In the end, we can set a threshold $R$ for the number of rounds and the protocol is to be run until $R_{\beta}\geq R$ for all $\beta$. This guarantees that any desired security level can be achieved by increasing $R$ "uniformly" over all $\beta$. We end up with a protocol that accepts an honest party with high probability and rejects attackers with high probability. A sketch is depicted in Figure 3.

Figure 3: Sketch of the idea behind the statistical test. Acceptance regions around the expected honest $p_{\beta}$'s are defined such that P will be captured with high probability (blue). Attackers trying to spoof verifiers by minimizing all $\Delta_{\beta}$ as well as possible (red) have exponentially low (in $R$) probability of returning a sample contained in the acceptance regions for all $\beta$. Here $C_{\beta,0}$ denotes the number of '0' answers among the conclusive ones in $C_{\beta}$.

The analysis also suggests that attackers optimally want to minimize all $|\Delta_{\beta}|$ simultaneously (note that minimizing $|p_{\beta}-p_{\beta}^{\mathsf{AB}}|$ also minimizes the Kullback-Leibler divergence $D_{\text{KL}}(P\parallel Q)$ between the corresponding binomial distributions $P$ and $Q$). We therefore choose to minimize $\lVert\Delta\rVert_{1}$. As LOCC $\subset$ PPT [CLM+14], the following conic optimization program will provide a lower bound on $\lVert\Delta\rVert_{1}$ for LOCC-restricted attackers. To account for imperfect quantum channel transmittance, we include a parameter $\eta\in(0,1]$ and a third answer option $\varnothing$ ("loss"). Then $\Delta_{\beta}$ is to be evaluated conditioned on conclusive answers, i.e., $\Delta_{\beta}=p_{\beta}-\operatorname{Tr}[\Pi_{0}\rho_{\beta}]/\eta$, where
$\displaystyle\rho_{\beta}=\int_{\text{U}(2)}U\otimes U\ket{\psi\phi}\bra{\psi\phi}U^{\dagger}\otimes U^{\dagger}d\mu(U)=\frac{1+\beta^{2}}{6}\Pi_{\text{sym}}+\frac{1-\beta^{2}}{2}\Pi_{\text{a-sym}}$ (3.6)
is the mixed state the verifiers produce for overlap $\beta$ [Wat18]. Here $\Pi_{\text{sym}},\Pi_{\text{a-sym}}$ are the projectors onto the symmetric and antisymmetric subspace, respectively, and $\mu$ is the Haar measure on the unitary group U$(2)$.
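Equation (3.6) can be sanity-checked by Monte-Carlo twirling. The following minimal sketch is our own illustration (assuming numpy, with Haar unitaries sampled via the standard QR-of-Ginibre recipe) and compares the empirical average with the right-hand side of (3.6):

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary():
    """Haar-random 2x2 unitary via QR of a complex Ginibre matrix."""
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))   # fix column phases to get the Haar measure

# A fixed input pair with overlap beta = cos(theta).
theta = 0.7
psi = np.array([1.0, 0.0])
phi = np.array([np.cos(theta), np.sin(theta)])
beta = abs(np.vdot(psi, phi))

n_samples = 100_000
rho = np.zeros((4, 4), dtype=complex)
for _ in range(n_samples):
    U = haar_unitary()
    v = np.kron(U @ psi, U @ phi)        # U (x) U |psi phi>
    rho += np.outer(v, v.conj())
rho /= n_samples

SWAP = np.eye(4)[[0, 2, 1, 3]]
Pi_sym, Pi_asym = (np.eye(4) + SWAP) / 2, (np.eye(4) - SWAP) / 2
rho_formula = (1 + beta**2) / 6 * Pi_sym + (1 - beta**2) / 2 * Pi_asym
print(np.max(np.abs(rho - rho_formula)))  # small, shrinking as n_samples grows
```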
These considerations lead us to the optimization:
$\displaystyle\begin{split}\textbf{minimize: }&\lVert\Delta\rVert_{1}\\ \textbf{subject to: }&\Pi_{0}+\Pi_{1}+\Pi_{\varnothing}=\mathbbm{1}_{4}\\ &\Pi_{k}\in\text{PPT}(\mathsf{A}:\mathsf{B}),\qquad k\in\{0,1,\varnothing\}\\ &\operatorname{Tr}[\Pi_{\varnothing}\rho_{\beta}]=1-\eta,\qquad\beta\in\{\beta_{1},\dots,\beta_{k}\}.\end{split}$ (3.7)
The constraints involving $\eta$ stem from the fact that P will produce the same inconclusive rate on all overlaps $\beta$, and the attackers need to mimic that. An analogous statistical test with a $(1-\alpha)$-quantile around $\eta$ can be performed to check for this, and it is clear that it is optimal for attackers to reply inconclusively at the exact same rate as P would do for each $\beta$. The above program can be solved with conventional conic optimization libraries, e.g. MOSEK [ApS20], and every example $\{\beta_{1},\dots,\beta_{k}\}$ we have tried yielded an optimal $\lVert\Delta\rVert_{1}>0$ independent of $\eta$, indicating the loss-tolerance of the protocol. Unfortunately, we were not able to prove whether our protocol remains secure if attackers are allowed to use quantum communication between them. Security in that setting remains an open problem. Moreover, the analysis of QPV${}_{\textsf{SWAP}}(\beta_{1},\dots,\beta_{k})$ under realistic experimental conditions is work in progress and looks promising. We will now proceed to analyze the special case of overlaps $\{0,1\}$, i.e., sending orthogonal or identical states, in more detail and show analytically that it has desirable properties.

### 3.1 Security of QPV${}_{\textsf{SWAP}}(0,1)$ Protocol

In this setting, there is the notion of a correct answer. On equal inputs the verifiers always expect the answer '0'. This allows for an SDP formulation for maximizing the average success probability of identifying whether the input states were equal or not. In appendix A.2 it is shown that the relation between the success probability $p_{\text{succ}}$ of correctly identifying equal/orthogonal inputs and $\lVert\Delta\rVert_{1}$ is
$\displaystyle p_{\text{succ}}\leq u\implies\lVert\Delta\rVert_{1}\geq\frac{3}{2}-2u.$ (3.8)
It turns out that for overlaps $\{0,1\}$ the optimization (3.7) returns $\Delta_{0}=0$ and $\Delta_{1}=1/4$, despite the above constraint of only $\lVert\Delta\rVert_{1}\geq 1/6$ for $p_{\text{succ}}\leq 2/3$, as will be shown below. However, constraining the SDP to $\operatorname{Tr}[\Pi_{\neq}\rho_{\neq}]/\eta=x$ and placing the objective function $\max_{\{\Pi\}}\operatorname{Tr}[\Pi_{=}\rho_{=}]/\eta=y$ (with PPT constraints) allows us to see the trade-off between being correct on equal inputs versus on orthogonal inputs. This yields
$\displaystyle y(x)=\begin{cases}1-\frac{x}{2}&0\leq x\leq\frac{2}{3}\\ 2-2x&\frac{2}{3}\leq x\leq 1\end{cases}.$ (3.9)
It is clear that $x$ should be smaller than $2/3$, but then adding an error $\varepsilon$ to the success rate on unequal inputs only reduces the error on equal inputs by $\varepsilon/2$. It is therefore optimal to try to achieve $\varepsilon=0$, that is $\Delta_{0}=0$, and accept whatever error this implies on equal inputs. Indeed, then $\Delta_{1}=1/4$, consistent with the result of (3.7).
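For the special case of overlaps $\{0,1\}$, the program (3.7) is small enough to state explicitly in code. The sketch below is our own reconstruction (not the authors' MOSEK code) using the cvxpy modeling library; it restricts to real symmetric POVM elements, which is sufficient here because the states $\rho_{\beta}$ are real, and it implements the PPT condition by transposing the $2\times 2$ blocks belonging to $\mathsf{B}$:

```python
import numpy as np
import cvxpy as cp

SWAP = np.eye(4)[[0, 2, 1, 3]]
Pi_sym, Pi_asym = (np.eye(4) + SWAP) / 2, (np.eye(4) - SWAP) / 2
rho = {1.0: Pi_sym / 3,                  # beta = 1: identical inputs
       0.0: Pi_sym / 6 + Pi_asym / 2}    # beta = 0: orthogonal inputs
p = {b: (1 + b**2) / 2 for b in rho}     # honest answer-'0' rates

def pt_B(M):
    """Partial transpose on qubit B: transpose each 2x2 block in place."""
    return cp.bmat([[M[0:2, 0:2].T, M[0:2, 2:4].T],
                    [M[2:4, 0:2].T, M[2:4, 2:4].T]])

for eta in (1.0, 0.5, 0.1):
    Pi = {k: cp.Variable((4, 4), symmetric=True) for k in (0, 1, "loss")}
    # Auxiliary symmetric variables carry the PSD constraint on the
    # partial transposes (the PPT condition).
    PT = {k: cp.Variable((4, 4), symmetric=True) for k in Pi}
    cons = [Pi[0] + Pi[1] + Pi["loss"] == np.eye(4)]
    for k in Pi:
        cons += [Pi[k] >> 0, PT[k] == pt_B(Pi[k]), PT[k] >> 0]
    cons += [cp.trace(Pi["loss"] @ rho[b]) == 1 - eta for b in rho]
    deltas = [cp.abs(p[b] - cp.trace(Pi[0] @ rho[b]) / eta) for b in rho]
    prob = cp.Problem(cp.Minimize(sum(deltas)), cons)
    prob.solve(solver=cp.SCS)
    print(f"eta = {eta}: ||Delta||_1 >= {prob.value:.4f}")  # ~0.25 for each eta
```

For each tested $\eta$ the optimum should come out as $\approx 1/4$, matching the value $\Delta_{1}=1/4$ reported above and illustrating the $\eta$-independence.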
Having drawn the connection between $p_{\text{succ}}$ and $\lVert\Delta\rVert_{1}$, we will now proceed to show that there is a finite gap in the success probability of testing for equality between adversaries restricted to LOCC operations and an honest prover who can apply entangling measurements. Extending this single-round protocol to $n$ rounds played in parallel, we will also show that the best strategy for adversaries is to simply apply the optimal single-round strategy to every round individually, which shows strong parallel repetition for QPV${}_{\textsf{SWAP}}(0,1)$. Furthermore, we show that in both cases there is no advantage for the attackers if they have the ability to declare loss on rounds, i.e. the probability of success conditioned on answering is independent of loss. The security lies in the fact that an honest prover at his claimed position can apply entangling operations to the two incoming qubits and has a strictly higher probability of answering the question correctly than spatially separated adversaries who are restricted to single-round LOCC operations. In fact, the operation that has the highest probability of generating the correct answer is the SWAP test [MdW18] and it gives a success probability $p_{\text{succ}}(\text{SWAP test})=3/4$. We will show that the best strategy for LOCC adversaries gives at most probability $p^{\text{max}}_{\text{succ}}(\text{LOCC})=2/3$. Since attackers return only a classical bit and discard their post-measurement state, the most general type of measurement the attackers perform is a positive-operator-valued measure (POVM). The attackers' success probability for a given admissible POVM strategy $\Pi=\{\Pi_{0},\Pi_{1}\}$ is then given by
$\displaystyle p_{\text{succ}}(\Pi):=\frac{1}{2}\operatorname{Tr}[\Pi_{0}\rho_{0}+\Pi_{1}\rho_{1}].$ (3.10)
Maximizing over all two-qubit LOCC measurements $\Pi^{\text{LOCC}}$ would give us the best probability of success of the attackers. However, characterizing and maximizing over LOCC strategies is a mathematically complex task. We follow the method used in [LXS+16] and maximize our problem over the set of all positive partial transpose (PPT) operations. Since PPT measurements are a proper superset of LOCC measurements, any maximal success probability optimized over PPT measurements immediately upper bounds the success probability of all LOCC measurements. Furthermore, the PPT condition can be represented by a set of linear and positive semidefinite conditions [Cos13], which enables us to write down the maximization problem as a semidefinite program (SDP) [VB96]. This allows us to find exact solutions to the optimization problem if the values of the primal program and dual program coincide. In our case the SDP is as follows:

Primal Program
maximize: $\displaystyle\frac{1}{2}\operatorname{Tr}[\Pi_{0}\rho_{0}+\Pi_{1}\rho_{1}]$
subject to: $\displaystyle\Pi_{0}+\Pi_{1}=\mathbbm{1}_{2^{2}}$
$\displaystyle\Pi_{k}\in\text{PPT}(\mathsf{A}:\mathsf{B}),\ \ \ k\in\{0,1\}$

Dual Program
minimize: $\displaystyle\operatorname{Tr}[Y]$
subject to: $\displaystyle Y-Q^{T_{\mathsf{B}}}_{i}-\rho_{i}/2\succeq 0,\ \ \ i\in\{0,1\}$
$\displaystyle Y\in\text{Herm}(\mathsf{A}\otimes\mathsf{B})$
$\displaystyle Q_{i}\in\text{Pos}(\mathsf{A}\otimes\mathsf{B}),\ \ \ i\in\{0,1\}.$

Note that the primal program implies a lower bound and the dual program an upper bound on $p^{\text{max}}_{\text{succ}}(\Pi^{\text{PPT}})$.
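As a concrete illustration, the primal program can be solved numerically. The following sketch is our own (assuming numpy and cvxpy, with the PPT condition again imposed via an explicit block-wise partial transpose) and reproduces the optimal value $2/3$ derived next:

```python
import numpy as np
import cvxpy as cp

SWAP = np.eye(4)[[0, 2, 1, 3]]
rho0 = (np.eye(4) + SWAP) / 6                            # equal inputs (beta = 1)
rho1 = (np.eye(4) + SWAP) / 12 + (np.eye(4) - SWAP) / 4  # orthogonal inputs (beta = 0)

def pt_B(M):
    """Partial transpose on qubit B (block-wise 2x2 transpose)."""
    return cp.bmat([[M[0:2, 0:2].T, M[0:2, 2:4].T],
                    [M[2:4, 0:2].T, M[2:4, 2:4].T]])

P0 = cp.Variable((4, 4), symmetric=True)  # answer '0' (equal); real suffices here
P1 = cp.Variable((4, 4), symmetric=True)  # answer '1' (orthogonal)
T0 = cp.Variable((4, 4), symmetric=True)  # partial transposes, constrained PSD
T1 = cp.Variable((4, 4), symmetric=True)
cons = [P0 + P1 == np.eye(4), P0 >> 0, P1 >> 0,
        T0 == pt_B(P0), T1 == pt_B(P1), T0 >> 0, T1 >> 0]
prob = cp.Problem(cp.Maximize(0.5 * (cp.trace(P0 @ rho0) + cp.trace(P1 @ rho1))),
                  cons)
prob.solve(solver=cp.SCS)
print(prob.value)   # ~0.6667 = 2/3
```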
We find that the optimal solution to the SDP is 2/3 (see Appendix A.3), giving an upper bound on the success probability optimized over all LOCC measurements of
$\displaystyle p^{\text{max}}_{\text{succ}}(\Pi^{\text{LOCC}})\leq\frac{2}{3}.$ (3.11)
The input states $\rho_{0}$ and $\rho_{1}$ have exactly the same mixed-state matrices as the result of uniformly choosing a mutually unbiased basis (MUB) and sending either equal or orthogonal states (from the chosen basis) to P. This indicates a possible "good" LOCC strategy. Assume the incoming qubits are encoded in MUB $b$, and that the attackers choose a random MUB $b^{\prime}$, each measure their incoming qubit in the basis $b^{\prime}$, send the measurement outcome to each other, and return "equal" if the measurement outcomes are equal and "unequal" otherwise. Then their probability of success is exactly $2/3$, since
$\displaystyle\mathbb{P}(\text{success})=\mathbb{P}(b^{\prime}=b)\mathbb{P}(\text{success}|b^{\prime}=b)+\mathbb{P}(b^{\prime}\neq b)\mathbb{P}(\text{success}|b^{\prime}\neq b)=\frac{1}{3}\cdot 1+\frac{2}{3}\cdot\frac{1}{2}=\frac{2}{3}.$ (3.12)
This attack strategy uses only local measurements and a single round of communication, so it is a valid single-round LOCC operation. Thus we find that the upper bound in (3.11) over LOCC measurements is actually tight. We have shown that the probability of success for identifying whether given inputs were equal or not in the QPV${}_{\textsf{SWAP}}(0,1)$ protocol is strictly lower for attackers restricted to LOCC measurements than for an honest prover who can apply entangling operations. Over sequential multi-round protocols, where we only perform a new run of the protocol after the previous one is finished and attackers act independently, the verifiers can increase the precision of detecting LOCC attackers to any limit they desire. A natural question to ask is whether we can extend the single-round protocol to a general $n$-round parallel protocol, where the verifiers send $n$ qubits from both sides to form the density matrix $\rho_{s}=\rho_{s_{0}}\otimes\rho_{s_{1}}\otimes\dots\otimes\rho_{s_{n-1}}$ for some string $s\in\{0,1\}^{n}$. A proof for this does not follow trivially from the single-round security proof, since attackers can now in principle take blocks of inputs and apply entangling operations on them. We shall prove that for the QPV${}_{\textsf{SWAP}}$ protocol strong parallel repetition does indeed hold, i.e. the probability of winning $n$ rounds decreases as $(2/3)^{n}$. Again we can write down the problem as an SDP optimization task, where we optimize over all PPT operations on the $2n$ qubits the attackers receive.

Primal Program
maximize: $\displaystyle\frac{1}{2^{n}}\sum_{s\in\{0,1\}^{n}}\operatorname{Tr}[\Pi_{s}\rho_{s}]$
subject to: $\displaystyle\sum_{s\in\{0,1\}^{n}}\Pi_{s}=\mathbbm{1}_{2^{2n}}$
$\displaystyle\Pi_{s}\in\text{PPT}(\mathsf{A}:\mathsf{B}),\ \ \ s\in\{0,1\}^{n}$

Dual Program
minimize: $\displaystyle\operatorname{Tr}[Y]$
subject to: $\displaystyle Y-Q^{T_{\mathsf{B}}}_{s}-\rho_{s}/2^{n}\succeq 0,\ \ \ s\in\{0,1\}^{n}$
$\displaystyle Y\in\text{Herm}(\mathsf{A}\otimes\mathsf{B})$
$\displaystyle Q_{s}\in\text{Pos}(\mathsf{A}\otimes\mathsf{B}).$

As shown in Appendix A.4, there is a feasible dual solution yielding a dual value of $(2/3)^{n}$, which bounds the probability of success under LOCC measurements by $(2/3)^{n}$. A feasible solution to the primal problem is to fill in the single-round solution $n$ times, and this has success probability $(2/3)^{n}$.
Since this strategy turns out to be the above-mentioned single-round LOCC measurement applied to each $\rho_{s_{i}}$, we find that the upper bound is again tight, and we have shown strong parallel repetition for the QPV${}_{\textsf{SWAP}}$ protocol.

### 3.2 Loss-Tolerance of QPV${}^{n}_{\textsf{SWAP}}$ Protocol

In the previous section we have shown that the QPV${}_{\textsf{SWAP}}$ protocol is safe against attackers restricted to LOCC operations if we require them to always answer. However, this is not always possible in practice because of loss in the quantum channel. In order to prove security against any coalition of attackers in the setting with channel loss, we must assume that attackers never suffer any loss when they attack a protocol (they could position themselves very close to the verifiers, for example). When classical information is sent, such as in the QPV${}_{\text{BB84}}$ protocol [KMS11, BCF+11], attackers may guess the classical information that is being sent. If they guess incorrectly they discard the round and declare a loss; if they guess correctly they can continue and successfully attack the protocol, since the classical information is known to both attackers immediately. If the loss rate is high enough, attackers can hide their incorrect guesses in the loss declarations and the verifiers cannot distinguish the attackers from an honest prover. Since our QPV${}_{\textsf{SWAP}}$ protocol does not send any classical information, we suspect it will be loss-tolerant. In section 4.3 we will consider this observation again in a more general setting. To prove loss tolerance, we can incorporate loss in the previous SDP and show that the optimal solution is independent of the loss, in a similar fashion as in [LXS+16]. We can relatively straightforwardly add the condition that attackers must mimic a certain loss rate $1-\eta$ on all inputs, depending on the loss of the quantum channels. For the parallel repetition case this means that attackers either answer conclusively on all inputs or do not answer at all. We will later show that, if security is retained when we allow attackers to play either all rounds or none at all, then security is also retained if attackers are allowed to say "loss" on any number of rounds (which means full loss tolerance). We maximize the probability of success conditioned on a conclusive answer, $p_{\text{succ}}^{\text{max}}(n,\eta)$, by dividing by the success rate $\eta$. Here we consider the corresponding SDP maximization problem for the parallel repetition case with $n$ rounds right away ($n=1$ corresponds to the single-round protocol).
The corresponding SDP of the maximization over all attacker strategies then becomes

Primal Program
maximize: $\displaystyle\frac{1}{2^{n}\eta}\sum_{s\in\{0,1\}^{n}}\operatorname{Tr}[\tilde{\Pi}_{s}\rho_{s}]$
subject to: $\displaystyle\left(\sum_{s\in\{0,1\}^{n}}\tilde{\Pi}_{s}\right)+\tilde{\Pi}_{\varnothing}=\mathbbm{1}_{2^{2n}}$
$\displaystyle\operatorname{Tr}[\tilde{\Pi}_{\varnothing}\rho_{s}]=1-\eta,\ \ \ s\in\{0,1\}^{n}$
$\displaystyle\tilde{\Pi}_{s}\in\text{PPT}(\mathsf{A}:\mathsf{B}),\ \ \ s\in\{0,1\}^{n}\cup\varnothing$

Dual Program
minimize: $\displaystyle\frac{\operatorname{Tr}[\tilde{Y}]-(1-\eta)\gamma}{\eta}$
subject to: $\displaystyle\tilde{Y}-\tilde{Q}^{T_{\mathsf{B}}}_{s}-\rho_{s}/2^{n}\succeq 0,\ \ \ s\in\{0,1\}^{n}$
$\displaystyle 2^{2n}(\tilde{Y}-\tilde{Q}^{T_{\mathsf{B}}}_{\varnothing})-\gamma\mathbbm{1}_{2^{2n}}\succeq 0$
$\displaystyle\tilde{Y}\in\text{Herm}(\mathsf{A}\otimes\mathsf{B})$
$\displaystyle\tilde{Q}_{s}\in\text{Pos}(\mathsf{A}\otimes\mathsf{B}),\ \ \ s\in\{0,1\}^{n}\cup\varnothing$
$\displaystyle\gamma\in\mathbb{R}.$

From the analysis in Appendix A.5, we see that the solution of the SDP is again $(2/3)^{n}$, independent of $\eta$, upper bounding the success probability of attackers restricted to LOCC measurements. The strategy in which attackers apply the regular $n$-round parallel repetition attack with probability $\eta$ and discard the round with probability $(1-\eta)$ again has conditional success probability $(2/3)^{n}$, so the bound is tight. We can use the fact that for any number of rounds $n$ the protocol QPV${}^{n}_{\textsf{SWAP}}$ is tolerant against loss on the whole block in order to prove that the same is also true for any subset of rounds.

###### Proposition 3.1.

Any multi-round QPV protocol that fulfills strong parallel repetition security against adversaries restricted to LOCC operations and is tolerant against declaring loss on all $n$ rounds, is also tolerant against declaring loss on any subset of rounds.

###### Proof.

Suppose we have a secure $n$-round QPV protocol with strong parallel repetition. Then the $n$-round success probability for attackers is $p_{n}=p_{1}^{n}$ for all $n\in\mathbb{N}$, for some single-round probability $p_{1}\in[0,1]$. Suppose we perform $n$ rounds and we allow adversaries to only answer on $k$ rounds and to declare a loss on the remaining $(n-k)$ rounds, and that there exists an attacking strategy $S$ restricted to LOBC measurements that has a probability of success $p_{S}>p_{1}^{k}$ on a random subset of size $k$. We shall show that this leads to a contradiction. Consider a protocol like the $k$-round protocol QPV${}^{k}_{\textsf{SWAP}}$, which is secure and loss tolerant on all rounds by assumption and has success probability $p_{k}=p_{1}^{k}$. Since individual rounds are product states, attackers may create $n-k$ independent extra rounds locally, of which they can forget the answer. This creates an $n$-round protocol. The attackers can now apply their strategy $S$. With probability $1/\binom{n}{k}$ they get an answer on their initial $k$ rounds, which is correct with success probability $p_{S}$. And with probability $1-1/\binom{n}{k}$ they receive the wrong subset of $k$ rounds, in which case the attackers declare a loss (on all rounds). This defines an LOCC attack with a conditional winning probability $p_{S}>p_{1}^{k}$ and a loss rate of $1-1/\binom{n}{k}$, which contradicts our assumption that the maximal success probability of being correct on the $k$-round protocol is $p_{1}^{k}$ for any loss.
Therefore, for any subset of $k$ rounds out of the total of $n$ rounds, the maximal success probability on this subset is again $p_{1}^{k}=p_{k}$. ∎

Thus, in particular, we have that QPV${}^{n}_{\textsf{SWAP}}$ is tolerant against loss on any subset of rounds.

### 3.3 Entanglement attack

It turns out that there is a perfect attack on QPV${}_{\textsf{SWAP}}(\beta_{1},\dots,\beta_{k})$ using one pre-shared maximally entangled state between the attackers. This becomes apparent if one looks at the purified version of the protocol. In this setting, the attackers do not receive mixed states from the verifiers, but rather halves of the corresponding purification. In QPV${}_{\textsf{SWAP}}(\beta_{1},\dots,\beta_{k})$ this purification is a maximally entangled state on each side. This does not change anything about the input/output distributions of the attackers, as already noted in [BCF+14]. Entanglement swapping is captured in the identity
$\displaystyle\ket{\Phi_{+}}_{12}\ket{\Phi_{+}}_{34}=\frac{1}{2}\left(\ket{\Phi_{+}}_{14}\ket{\Phi_{+}}_{23}+\ket{\Phi_{-}}_{14}\ket{\Phi_{-}}_{23}+\ket{\Psi_{+}}_{14}\ket{\Psi_{+}}_{23}+\ket{\Psi_{-}}_{14}\ket{\Psi_{-}}_{23}\right).$ (3.13)
Applying a Bell state measurement (BSM) on registers (23) swaps entanglement from registers (12) and (34) to (14) and (23). The intriguing aspect is that registers (14) could be causally separated, yet are entangled after the BSM. We use this to prove the following statement.

###### Theorem 3.2.

The SWAP test can be perfectly simulated using one pre-shared maximally entangled state and one round of classical communication between $\mathsf{A}$ and $\mathsf{B}$. Thus $n$ pre-shared EPR pairs are sufficient to attack QPV${}_{\mathsf{SWAP}}^{n}(\beta_{1},\dots,\beta_{k})$, and $\sim 0.103n$ pre-shared EPR pairs are necessary to attack QPV${}_{\mathsf{SWAP}}^{n}(0,1)$.

###### Proof.

In the purified protocol, verifiers do not send out their respective mixed states $\rho_{V_{0}},\rho_{V_{1}}$ but rather halves of the corresponding purified pure state. Note that for a given list of overlaps $\mathcal{O}=\{\beta_{1},\dots,\beta_{k}\}$ the total input state is $\rho=\frac{1}{k}\sum_{\beta\in\mathcal{O}}\rho_{\beta}$. One can then calculate
$\displaystyle\operatorname{Tr}_{V_{0}}[\rho]=\operatorname{Tr}_{V_{1}}[\rho]=\frac{\mathbbm{1}_{2}}{2}$
and therefore the purifications at $\mathsf{V_{0}},\mathsf{V_{1}}$ are maximally entangled states, say $\ket{\Phi_{+}}$. The resulting entanglement structure between the verifiers and attackers throughout the attack is as depicted in Figure 4.

Figure 4: Entanglement structure throughout a purified protocol. (a) Attackers receive quantum states. (b) Each applies a BSM. (c) They communicate results.

Assume now that attackers pre-share one EPR pair, say also $\ket{\Phi_{+}}$, in registers $A_{2}B_{2}$.
Then the total input state of the protocol can be rewritten as $\displaystyle\ket{\Phi_{+}}_{V_{0}A_{1}}\ket{\Phi_{+}}_{V_{1}B_{1}}\ket{\Phi_{+}}_{A_{2}B_{2}}=$ $\displaystyle\frac{1}{2}\bigg{[}\ket{\Phi_{+}}_{V_{0}V_{1}}\otimes\frac{1}{2}\Big{(}\ket{\Phi_{+}}\ket{\Phi_{+}}+\ket{\Phi_{-}}\ket{\Phi_{-}}+\ket{\Psi_{+}}\ket{\Psi_{+}}+\ket{\Psi_{-}}\ket{\Psi_{-}}\Big{)}_{A_{1}A_{2}B_{1}B_{2}}$ $\displaystyle+\ket{\Phi_{-}}_{V_{0}V_{1}}\otimes\frac{1}{2}\Big{(}\ket{\Phi_{+}}\ket{\Phi_{-}}+\ket{\Phi_{-}}\ket{\Phi_{+}}+\ket{\Psi_{+}}\ket{\Psi_{-}}+\ket{\Psi_{-}}\ket{\Psi_{+}}\Big{)}_{A_{1}A_{2}B_{1}B_{2}}$ $\displaystyle+\ket{\Psi_{+}}_{V_{0}V_{1}}\otimes\frac{1}{2}\Big{(}\ket{\Phi_{+}}\ket{\Psi_{+}}-\ket{\Phi_{-}}\ket{\Psi_{-}}+\ket{\Psi_{+}}\ket{\Phi_{+}}-\ket{\Psi_{-}}\ket{\Phi_{-}}\Big{)}_{A_{1}A_{2}B_{1}B_{2}}$ $\displaystyle+\ket{\Psi_{-}}_{V_{0}V_{1}}\otimes\frac{1}{2}\Big{(}\ket{\Phi_{-}}\ket{\Psi_{+}}-\ket{\Psi_{+}}\ket{\Phi_{-}}+\ket{\Psi_{-}}\ket{\Phi_{+}}-\ket{\Phi_{+}}\ket{\Psi_{-}}\Big{)}_{A_{1}A_{2}B_{1}B_{2}}\bigg{]}.$ By separately performing a BSM on their two respective qubits in $A_{1}A_{2}$ and $B_{1}B_{2}$, the attackers will get one of the 16 measurement result combinations in the above equation and collapse the state in $V_{0}V_{1}$ into the corresponding Bell state. By communicating their results (classically) to each other, they can uniquely identify the state in the verifiers’ registers $V_{0}V_{1}$. Let them then use the following strategy: answer ‘0’, whenever they infer that the verifiers hold a symmetric state and ‘1’, whenever it is an anti-symmetric state. Doing so, attackers effectively perform a measurement $\\{\Pi_{\text{sym}},\Pi_{\text{a-sym}}\\}$ on $V_{0}V_{1}$, which is precisely what the SWAP test does. To make this argument formal, let A and B receive registers $V_{0}$ and $V_{1}$, respectively. Inspired by the above, define $\displaystyle W$ $\displaystyle\coloneqq$ $\displaystyle\mathbbm{1}_{V_{0}}\otimes\text{SWAP}_{A_{2}V_{1}}\otimes\mathbbm{1}_{B_{2}}$ $\displaystyle\Pi_{0}$ $\displaystyle\coloneqq\Big{(}$ $\displaystyle\ket{\Phi_{+}\Phi_{+}}\\!\bra{\Phi_{+}\Phi_{+}}+\ket{\Phi_{-}\Phi_{-}}\\!\bra{\Phi_{-}\Phi_{-}}+\ket{\Psi_{+}\Psi_{+}}\\!\bra{\Psi_{+}\Psi_{+}}+\ket{\Psi_{-}\Psi_{-}}\\!\bra{\Psi_{-}\Psi_{-}}+$ $\displaystyle\ket{\Phi_{+}\Phi_{-}}\\!\bra{\Phi_{+}\Phi_{-}}+\ket{\Phi_{-}\Phi_{+}}\\!\bra{\Phi_{-}\Phi_{+}}+\ket{\Psi_{+}\Psi_{-}}\\!\bra{\Psi_{+}\Psi_{-}}+\ket{\Psi_{-}\Psi_{+}}\\!\bra{\Psi_{-}\Psi_{+}}+$ $\displaystyle\ket{\Phi_{+}\Psi_{+}}\\!\bra{\Phi_{+}\Psi_{+}}+\ket{\Phi_{-}\Psi_{-}}\\!\bra{\Phi_{-}\Psi_{-}}+\ket{\Psi_{+}\Phi_{+}}\\!\bra{\Psi_{+}\Phi_{+}}+\ket{\Psi_{-}\Phi_{-}}\\!\bra{\Psi_{-}\Phi_{-}}\Big{)}_{V_{0}A_{2}V_{1}B_{2}}$ $\displaystyle\Pi_{1}$ $\displaystyle\coloneqq$ $\displaystyle\Big{(}\ket{\Phi_{-}\Psi_{+}}\\!\bra{\Phi_{-}\Psi_{+}}+\ket{\Psi_{+}\Phi_{-}}\\!\bra{\Psi_{+}\Phi_{-}}+\ket{\Psi_{-}\Phi_{+}}\\!\bra{\Psi_{-}\Phi_{+}}+\ket{\Phi_{+}\Psi_{-}}\\!\bra{\Phi_{+}\Psi_{-}}\Big{)}_{V_{0}A_{2}V_{1}B_{2}}.$ Note that $\Pi_{0}$ and $\Pi_{1}$ are positive semi-definite and $\Pi_{0}+\Pi_{1}=\mathbbm{1}$, so that $\\{\Pi_{0},\Pi_{1}\\}$ is a valid POVM. 
It can then be explicitly checked that
$\displaystyle\begin{split}\Pi_{\text{sym}}^{V_{0}V_{1}}\rho_{\beta}^{V_{0}V_{1}}&=\operatorname{Tr}_{A_{2}B_{2}}\Big{[}W\Pi_{0}^{V_{0}A_{2}V_{1}B_{2}}W^{\dagger}\,\,\left(\rho_{\beta}^{V_{0}V_{1}}\otimes\ket{\Phi_{+}}\!\bra{\Phi_{+}}^{A_{2}B_{2}}\right)\Big{]}\\ \Pi_{\text{a-sym}}^{V_{0}V_{1}}\rho_{\beta}^{V_{0}V_{1}}&=\operatorname{Tr}_{A_{2}B_{2}}\Big{[}W\Pi_{1}^{V_{0}A_{2}V_{1}B_{2}}W^{\dagger}\,\,\left(\rho_{\beta}^{V_{0}V_{1}}\otimes\ket{\Phi_{+}}\!\bra{\Phi_{+}}^{A_{2}B_{2}}\right)\Big{]},\end{split}$ (3.14)
for any $\beta\in[0,1]$. Equation (3.14) shows that the above attack using one pre-shared EPR pair is exactly the same as the honest action on $\rho_{\beta}$. In other words, the attackers can perfectly simulate the SWAP test and thus reproduce P. To get a lower bound on the entanglement resource required to break QPV${}_{\mathsf{SWAP}}(0,1)$ we can use an argument already mentioned in Lemma V.3 in [BK11]. It says that if the attackers pre-share a $d$-dimensional resource state $\tau_{\mathsf{AB}}$, then the success probability (of the attackers achieving that the verifiers accept them) is related to the success probability without a pre-shared resource in the following way:
$\displaystyle p_{\text{succ}|\tau_{\mathsf{AB}}}\leq dp_{\text{succ}|\emptyset}.$ (3.15)
For the $\beta\in\{0,1\}$ case, we argued at the beginning of section 3.1 that the optimal attack strategy is to produce no error on orthogonal inputs and then accept whatever error that means for the identical inputs (which turned out to be $\Delta_{1}=1/4$). Then, the probability that the verifiers accept attackers is essentially $(3/4)^{n_{=}}$, where $n_{=}$ is the number of rounds with identical inputs. Since each overlap is chosen with probability $1/2$ in each round, we have for $n$ rounds that $\mathbb{E}[n_{=}]=n/2$. Hence, we expect $p_{n,\text{succ}|\tau_{\mathsf{AB}}}\leq d\left(\frac{3}{4}\right)^{n/2}$ and thus
$\displaystyle p_{n,\text{succ}|\tau_{\mathsf{AB}}}<1\qquad\text{as long as}\qquad d<\left(\frac{4}{3}\right)^{n/2}.$ (3.16)
If $m$ is the number of EPR pairs in $\tau_{\mathsf{AB}}$, so that $d=2^{2m}$, it follows that, in expectation,
$\displaystyle p_{n,\text{succ}|\tau_{\mathsf{AB}}}<1\qquad\text{as long as}\qquad m<\frac{1}{4}\log\left(\frac{4}{3}\right)n\approx 0.103n.$ (3.17)
∎

## 4 More general results concerning loss and quantum communication in QPV

### 4.1 A protocol for which quantum communication gives an advantage over LOCC

A natural question one might ask is whether there is any advantage for attackers if they are allowed to perform local operations and quantum communication (LOQC) instead of classical. One might think that there is no advantage in sending quantum information over classical communication when the output messages of the attackers are classical, but this turns out to be false. In what follows we will construct an explicit example of a QPV protocol with classical outputs where there is a finite gap in success probability for LOQC strategies over LOCC strategies. Consider the protocol where two verifiers both send half of either one randomly picked symmetric Bell state $\{\ket{\Phi^{+}},\ket{\Phi^{-}},\ket{\Psi^{+}}\}$ or the antisymmetric Bell state $\ket{\Psi^{-}}$, and ask an honest prover whether the entangled state they have sent is symmetric or antisymmetric. An honest prover who can apply entangling operations can answer this question with success probability $1$ by applying a Bell measurement.
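The two strategies at play here are easy to check numerically. The sketch below, our own illustration assuming numpy, verifies that local computational-basis measurements with an XOR rule succeed with probability $5/6$, the LOCC optimum derived in the next paragraph, while a Bell measurement distinguishes the two cases perfectly:

```python
import numpy as np

# Bell states in the basis |00>, |01>, |10>, |11>.
phi_p = np.array([1, 0, 0, 1]) / np.sqrt(2)
phi_m = np.array([1, 0, 0, -1]) / np.sqrt(2)
psi_p = np.array([0, 1, 1, 0]) / np.sqrt(2)
psi_m = np.array([0, 1, -1, 0]) / np.sqrt(2)
symmetric = [phi_p, phi_m, psi_p]

def p_xor_one(state):
    """Probability that local Z measurements give unequal outcomes (XOR = 1)."""
    probs = np.abs(state) ** 2          # outcome distribution over 00,01,10,11
    return probs[1] + probs[2]

# XOR strategy: answer 'antisymmetric' iff XOR = 1.
p_given_sym = np.mean([1 - p_xor_one(s) for s in symmetric])   # correct on sym
p_given_asym = p_xor_one(psi_m)                                # correct on asym
print(0.5 * p_given_sym + 0.5 * p_given_asym)                  # -> 5/6 ~ 0.8333

# Honest prover: a Bell measurement distinguishes the two cases perfectly,
# e.g. the overlap of psi_m with every symmetric Bell state is zero.
assert all(abs(np.vdot(s, psi_m)) < 1e-12 for s in symmetric)
```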
From the analysis of the corresponding SDP optimized over PPT measurements, it turns out that the best LOCC strategy is upper bounded by $5/6$ (see Appendix A.6). The LOCC strategy of measuring both qubits in the computational basis and answering the XOR of the outcomes attains this success probability, so the upper bound over PPT measurements is tight. Now suppose the verifiers send two parallel rounds of the previous protocol, under the condition that the two rounds are either both a random symmetric Bell state or both the antisymmetric Bell state. An honest prover who can apply entangling operations can still solve this protocol with success probability $1$ by applying a Bell measurement to one of the two pairs. Note that attackers who have access to a quantum channel can send half of their input state to each other, such that both attackers end up with a local Bell state. The attackers can then both apply a Bell measurement locally and successfully attack the protocol with success probability $1$. By analysis of the SDP it turns out that the upper bound under PPT measurements is $17/18$, cf. appendix A.6. Again there is an LOCC strategy that makes this bound tight, namely measuring both pairs in the computational basis and answering "antisymmetric" only if both pairs have unequal measurement outcomes, and "symmetric" otherwise. Again this strategy is always correct on antisymmetric inputs, and it is only incorrect on symmetric inputs if both times the state $\ket{\Psi^{+}}$ was sent, which happens with probability $1/9$, so the total probability of success of the LOCC strategy becomes
$\displaystyle\mathbb{P}_{\text{succ}}=\frac{1}{2}\cdot 1+\frac{1}{2}\cdot\left(1-\frac{1}{9}\right)=\frac{17}{18}.$ (4.1)
Thus we have constructed a QPV protocol where the probability of success for attackers restricted to single-round LOCC measurements is strictly lower than for attackers restricted to single-round LOQC measurements. Similar to the loss-tolerance analysis of QPV${}_{\textsf{SWAP}}$, we even find that this protocol is also loss-tolerant.

### 4.2 Splitting Scheme

In this section we present a procedure that distills a QPV protocol secure against attackers using a single round of simultaneous quantum communication from the existence of a QPV protocol that is secure against adversaries restricted to LOBC operations. We show that the existence of a perfect quantum communication attack on a QPV protocol generates two new QPV protocols, which ultimately leads to the existence of a QPV protocol that is secure against adversaries restricted to LOBC and cannot be perfectly attacked by adversaries using quantum communication. Take any QPV protocol in which two verifiers $\mathsf{V_{0}},\mathsf{V_{1}}$ send states $\rho_{A},\rho_{B}$ and ask for the outcome of, say, some entangling measurement on the joint state $\rho_{AB}$, e.g. the SWAP test. Suppose the protocol is secure against adversaries restricted to LOBC, i.e., there is a finite gap in the probability of success between an honest prover and adversaries restricted to LOBC operations, but the protocol can be broken perfectly by adversaries using quantum communication. In the most general setting the actions of the adversaries are as follows:

* Adversaries $A,B$ receive $\rho_{A},\rho_{B}$ respectively as input states.
* They apply some local channel $\mathcal{A}(\rho_{A})=\sigma_{A_{1}A_{2}},\mathcal{B}(\rho_{B})=\sigma_{B_{1}B_{2}}$.
* They send half of their local outcome to the other adversary.
* They apply a measurement on the new local states $\sigma_{A_{1}}\otimes\sigma_{B_{1}}$ and $\sigma_{A_{2}}\otimes\sigma_{B_{2}}$.
* They send the measurement outcome to their respective verifiers.

Now note that both $\sigma_{A_{1}},\sigma_{B_{1}}$ and $\sigma_{A_{2}},\sigma_{B_{2}}$ can be used as input states to define two new QPV protocols, where the measurement an honest prover needs to apply is equal to the measurement the attackers would apply in the quantum communication attack on the original protocol. Then the probability of success for the honest prover in the newly defined protocol is the probability of success of the adversaries using quantum communication in the previous protocol, which we assumed to be perfect. Note that any LOBC attack on one of these newly arising protocols was already a valid LOBC attack in the previous protocol with the inputs $\rho_{A}$ and $\rho_{B}$: the attackers can simply apply the local channels $\mathcal{A},\mathcal{B}$, discard the state they don't use and apply their attack. Also note that if the input states $\rho_{A},\rho_{B}$ were product states, the input states in the newly created protocols are also product states. We have therefore split the QPV protocol into two new protocols using only the existence of a perfect quantum communication attack.

Figure 5: Visual representation of splitting into two new QPV protocols from the existence of a quantum communication attack on a single QPV protocol. Two attackers $A,B$ receive inputs $\rho_{A},\rho_{B}$, apply some channel $\mathcal{A}(\rho_{A})=\sigma_{A_{1}A_{2}},\mathcal{B}(\rho_{B})=\sigma_{B_{1}B_{2}}$ and send half of their outcome to the other party. This procedure defines two new QPV protocols. If there again exists a perfect quantum communication attack for both new protocols, then by the same argument we can define 4 new QPV protocols, and so on.

Now there are two options for the newly defined protocols:

* There does not exist a perfect attack using quantum communication for at least one of the two new QPV protocols, in which case we have shown the existence of a QPV protocol that is loss-tolerant and safe against adversaries using quantum communication, and we are done.
* For both protocols there exists a perfect attack using quantum communication, in which case we can apply our previous argument to generate 4 new QPV protocols.
See Figure 5 for a visual representation of the splitting argument. The previous options hold for all QPV protocols arising after splitting, and we wish to show the existence of a QPV protocol secure against quantum communication. We therefore suppose, towards a contradiction, that for any $n\geq 2$ all of the induced QPV protocols after splitting $n$ times can be attacked perfectly using quantum communication. Note that the input states sent from verifier $\mathsf{V_{0}}$ in the induced QPV protocols after splitting only depend on the previous input states sent from $\mathsf{V_{0}}$, and vice-versa for the input states from $\mathsf{V_{1}}$. We can write this action as channels $\Lambda^{A}_{n}:\mathcal{D}(A)\to\mathcal{D}(A_{1}\otimes\dots\otimes A_{2^{n}})$, mapping $\rho_{A}\mapsto\sigma_{A_{1}\ \dots\ A_{2^{n}}}$, and $\Lambda^{B}_{n}:\mathcal{D}(B)\to\mathcal{D}(B_{1}\otimes\dots\otimes B_{2^{n}})$, mapping $\rho_{B}\mapsto\sigma_{B_{1}\ \dots\ B_{2^{n}}}$. The main idea of this proof is that the reduced states $\sigma_{A_{i}}$ and $\sigma_{B_{i}}$ become approximately classical, so that attackers could immediately measure their incoming states and share the classical measurement outcome instead of sending some quantum message. This would lead to a contradiction, since the success probability of this procedure would be upper bounded by the LOCC bound of the original QPV protocol, while at the same time, by assumption, this attack should become approximately close to a perfect one. We use Theorem 1 on the emergent classicality of channels from [QR20].

###### Theorem 4.1 (Qi-Ranard).

Consider a quantum channel $\Lambda:\mathcal{D}(A)\to\mathcal{D}(B_{1}\otimes...\otimes B_{n})$. For output subsets $R\subset\{B_{1},...,B_{n}\}$, let $\Lambda_{R}\equiv\operatorname{Tr}_{\bar{R}}\circ\Lambda:\mathcal{D}(A)\to\mathcal{D}(R)$ denote the reduced channel onto $R$, obtained by tracing out the complement $\bar{R}$. Then for any $|Q|,|R|\in\{1,...,n\}$, there exists a measurement, described by a positive-operator valued measure (POVM) $\{M_{\alpha}\}$, and an “excluded” output subset $Q\subset\{B_{1},...,B_{n}\}$ of size $|Q|$, such that for all output subsets $R$ of size $|R|$, disjoint from $Q$, we have $\|\Lambda_{R}-\mathcal{E}_{R}\|_{\diamond}\leq d^{3}_{A}\sqrt{2\ln(d_{A})\frac{|R|}{|Q|}},$ (4.2) using a measure-and-prepare channel $\mathcal{E}_{R}(X):=\sum_{\alpha}\operatorname{Tr}(M_{\alpha}X)\sigma_{R}^{\alpha}$ (4.3) for some states $\{\sigma^{\alpha}_{R}\}_{\alpha}$ on $R$, where $d_{A}=\dim(A)$ and $\|...\|_{\diamond}$ is the diamond norm on channels. The measurement $\{M_{\alpha}\}$ does not depend on the choice of $R$, while the prepared states $\sigma_{R}^{\alpha}$ may depend on $R$.

Applying the theorem and setting the size of the excluded output set for both channels $\Lambda_{n}^{A},\Lambda_{n}^{B}$ to $|Q_{A}|=|Q_{B}|=2^{n-1}-1$, we have, by the pigeonhole principle, that for some index $i\in\{1,\dots,2^{n}\}$ both output sets $A_{i},B_{i}$ must be in the sets disjoint from $Q_{A}$ and $Q_{B}$.
Setting the size of the reduced channels to $|R_{A}|=|R_{B}|=1$, we see that in both cases the reduced channel $\operatorname{Tr}_{\bar{R}}\circ\Lambda^{A/B}_{n}$ converges to a measure-and-prepare channel in the number of rounds $n$ for any output: $\|\operatorname{Tr}_{\bar{R}_{A/B}}\circ\Lambda^{A/B}_{n}-\mathcal{E}_{R_{A/B}}\|_{\diamond}\leq 8\sqrt{\frac{2\ln(2)}{2^{n-1}-1}}.$ (4.4) The theorem implies that the reduced channels that map the input states $\rho_{A}\mapsto\sigma_{A_{i}}$ and $\rho_{B}\mapsto\sigma_{B_{i}}$ become approximately close to measure-and-prepare channels. Crucially, the measurements $\{M^{A/B}_{\alpha}\}$ in the respective measure-and-prepare channels do not depend on the choice of $R$. This gives rise to an LOBC attack on the original QPV protocol. Two attackers $A,B$ simply apply the local measure-and-prepare channels $\mathcal{E}_{R_{A}},\mathcal{E}_{R_{B}}$ and exchange the classical measurement outcomes $\alpha_{1},\alpha_{2}$. Both attackers then know the state $\sum_{\alpha_{1}}p_{\alpha_{1}}\sigma^{\alpha_{1}}_{A_{i}}\otimes\sum_{\alpha_{2}}p_{\alpha_{2}}\sigma^{\alpha_{2}}_{B_{i}}$, which becomes arbitrarily close to $\sigma_{A_{i}}\otimes\sigma_{B_{i}}$ as $n$ grows. Since for any QPV protocol the POVM measurement that the honest prover has to apply is publicly known, both attackers can calculate the probability distribution of the answers of an honest prover. Using shared randomness to generate an equal answer, both attackers can now mimic the success probability of an honest prover arbitrarily well. This LOBC attack allows attackers to answer correctly with a probability of success that converges to the perfect probability of success in the number of splittings $n$. By assumption $n$ can be arbitrarily large, and thus the attackers have an LOBC attack that performs arbitrarily well. However, since for our original protocol there is a finite gap between the LOBC probability of success and the perfect probability of success, we have a contradiction and conclude that not all of the induced QPV protocols after some $N$ splittings can be attacked perfectly. Thus, after some number of splittings, a QPV protocol arises that must be secure against unentangled adversaries restricted to quantum communication.

### 4.3 Considerations on loss tolerance in QPV

Ideally, for QPV to become feasible, one would like to have a protocol that is fully loss-tolerant and secure against attackers who can pre-share a bounded amount of entanglement and use quantum communication between them. So far, there is no such protocol. Here we give a no-go result, based on a simple observation. We show that no such protocol, fulfilling all three of the above properties, can exist. However, not all is lost, as in practice one may be able to achieve good enough partial loss tolerance, for example by increasing the quantum input dimension or the number of possible quantum operations P can apply. For simplicity, consider the following quite general two-verifier QPV protocol (the result that follows can be straightforwardly generalized to $m$ verifiers, for which a general attack would be to teleport all quantum inputs to one fixed attacker, who then performs the guessing attack; in that case, and when the locations of the verifiers are not collinear, the probability that no teleportation needs corrections is much lower, i.e. $1/d^{2(m-1)}$):

* Verifiers $\mathsf{V}_{0},\mathsf{V}_{1}$ send $d_{0}$- and $d_{1}$-dimensional quantum inputs, respectively, to P.
They also send classical information $x,y$ (of any size), respectively.
* P computes a function $f(x,y)$ and, based on that result, applies a quantum operation $\mathcal{Q}_{f(x,y)}$ to the inputs. This yields two outputs, one intended for $\mathsf{V}_{0}$ and one for $\mathsf{V}_{1}$. These outputs are forwarded to the corresponding verifier.
* The verifiers check if what they received matches the specific honest protocol and that the responses arrived in time.

We will denote this as $\text{QPV}(d_{0},d_{1},f)$, and the protocol is to be repeated for $n$ rounds, either sequentially or in parallel. Now let $k\coloneqq|\operatorname{Im}(f)|$ be the number of possible quantum operations to be applied at P. It turns out that there always exists a perfect attack consuming at most $O(n\log d)$ (qubit) EPR pairs, where $d=\min\{d_{0},d_{1}\}$, as long as the loss is high enough (we assume that all-powerful attackers have some way of determining the dimensionality of their inputs, so that they know which of them holds the smaller quantum input). We make this precise in the following statement.

###### Proposition 4.2.

Let $d=\min\{d_{0},d_{1}\}$ and $k=|\operatorname{Im}(f)|$. Any $n$-round _QPV_ $(d_{0},d_{1},f)$ protocol can be attacked with $O(n\log d)$ EPR pairs if the fraction $\eta$ of rounds that is used for security analysis fulfills $\eta\leq\frac{1}{kd^{2}}$.

###### Proof.

Without loss of generality, let attacker A receive the smaller quantum input with dimension $d=\min\{d_{0},d_{1}\}$ and classical input $x$. As soon as A receives his quantum input, quantum teleportation [BBC+93] can be used to teleport the state to B, after which A sends to B which teleportation corrections are to be applied, together with the classical information $x$. With probability $p_{00}=1/d^{2}$ there are no teleportation corrections to apply, in which case B holds both input states locally, and before the honest party P would have received them. B can guess the value of $f(x,y)$ and immediately apply the operation P was asked to apply, send the part (e.g. a subsystem or a measurement result) that $\mathsf{V_{0}}$ is supposed to receive to A, and keep the part that $\mathsf{V_{1}}$ is supposed to receive. With probability $1/k$ attacker B guesses the value of $f(x,y)$ correctly. If the quantum state picked up teleportation corrections in the first place, or if it turns out that B guessed $f(x,y)$ wrongly (both of which they learn as soon as they receive the communication from the other attacker), the attackers decline to answer and both send the corresponding loss symbol ‘$\varnothing$’. If there were no corrections to apply and B guessed $f(x,y)$ correctly, both send the respective parts that the verifiers are supposed to receive. As they are only required to answer a fraction $\eta\leq\frac{1}{kd^{2}}$ of all rounds, they can simply choose those perfect rounds without teleportation corrections and with a correct guess for $f(x,y)$. Applying this attack to each round requires $O(n\log d)$ ebits in total and perfectly reproduces P. ∎

The statement sheds light on another facet of loss tolerance – the attackers’ ability to post-select on a “correct” guess after communication. They can always do that, if they pre-share entanglement, by simply guessing the teleportation corrections.
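To make the resource counts of Proposition 4.2 concrete, the following is a minimal sketch in Python (our own illustration; the function and its name are not part of the paper). It computes the fraction of rounds the attackers can afford to answer and the total entanglement cost of the attack.

```python
import math

def teleportation_attack(d: int, k: int, n: int):
    """Post-selection and entanglement cost of the attack in Proposition 4.2.

    d: dimension of the smaller quantum input, d = min(d0, d1)
    k: number of possible operations, k = |Im(f)|
    n: number of protocol rounds
    """
    p_no_corrections = 1 / d**2      # teleportation needs no Pauli correction
    p_correct_guess = 1 / k          # B guesses f(x, y) correctly
    eta_max = p_no_corrections * p_correct_guess  # answerable fraction 1/(k d^2)
    epr_pairs = n * math.ceil(math.log2(d))       # O(n log d) ebits in total
    return eta_max, epr_pairs

# Example: qubit inputs (d = 2) and k = 2 possible operations.
# The attack is perfect whenever the answered fraction eta <= 1/(k d^2) = 1/8.
print(teleportation_attack(d=2, k=2, n=100))   # (0.125, 100)
```

Setting $d=1$ (all-classical input from one side) makes the teleportation cost vanish and leaves only the $1/k$ post-selection on the guess, consistent with the discussion below.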
In protocols with quantum input from one side and classical information determining the action of P, like [KMS11, CL15, Unr14, JKPPG21, BCS21], attackers can, even without pre-shared entanglement, do this post-selection simply by first guessing $f(x,y)$, then applying the operation $\mathcal{Q}_{f(x,y)}$ on the quantum input and communicating $x,y$ to each other, so that both attackers know if the initial guess was correct or not. This is captured in the above bound, identifying all-classical input from one side with $d=1$. It is now also clearer why the protocol in [LXS+16] and our QPV${}_{\textsf{SWAP}}$ are fully loss-tolerant if attackers do not pre-share any entanglement. Without shared entanglement, the attackers simply have no way of ever knowing if their guess was correct, because no information about it leaves the verifiers. We conclude that there does not exist a practical fully loss-tolerant QPV protocol of the above type. The best we can achieve is partial loss tolerance. By increasing $k$ and/or $d$ such that $kd^{2}>1/\eta$, however, partial loss tolerance good enough for all practical purposes could be achieved. We will therefore need to bound both the pre-shared entanglement of the attackers and the loss for a practical QPV protocol.

## 5 Conclusion

We analyzed the interplay between loss tolerance, quantum communication attacks and QPV. We gave several results that improve the understanding of loss tolerance in QPV, including a proof that any QPV protocol of the considered form can be broken with a linear amount of EPR pairs if the loss of quantum inputs is high enough. A new protocol QPV${}_{\textsf{SWAP}}$ has been presented, and we have shown that it is fully loss-tolerant against LOCC attackers with no pre-shared entanglement, that it can be attacked with $\tilde{O}(n)$ pre-shared EPR pairs, and that at least $\sim 0.103n$ pre-shared EPR pairs are necessary in the $\beta\in\{0,1\}$ case. Moreover, it fulfills strong parallel repetition and retains its loss tolerance even if all rounds are run in parallel. In addition, the flexibility and simplicity of the SWAP test, both theoretically and experimentally, make it an excellent candidate for practical QPV. Having a loss-tolerant protocol with parallel repetition, the question of security under quantum communication attacks becomes much more relevant (for sequential protocols, attackers could in principle always distribute sufficient entanglement in the first round in order to break all subsequent rounds; if all rounds are run in parallel, they do not have time to do that, assuming they do not pre-share entanglement). The security of our protocol in that setting remains an open problem. We have argued that care has to be taken when including quantum communication attacks into QPV, as we have presented an example that LOCC attackers cannot break, yet quantum communication attackers can break perfectly. However, using a recursive splitting scheme and emergent classicality, we were able to show that, starting from a protocol that can be perfectly attacked by quantum communication attackers but not by attackers restricted to LOCC, one can construct a QPV protocol that quantum communication does not allow attackers to break perfectly. We suspect that our protocol is secure even if attackers are allowed to communicate quantum information, and we would be interested in finding a proof of this. Proofs against quantum communication attacks have been quite elusive, and there does not seem to be a clear strategy.
Formulating the problem as a non-local game, as done in [TFKW13], does not seem to work well for protocols where both verifiers send out quantum states. In both [LXS+16] and our protocol, the inputs of the verifiers are separable. This separable structure between $\mathsf{V}_{0}$ and $\mathsf{V}_{1}$ makes it hard to optimize over all possible attacker strategies, as not all states $\rho$ between the verifiers and attackers are allowed as input to the game strategy, but the mentioned tensor product structure must be obeyed. Recent quantum communication security proofs for other protocols [JKPPG21, BCS21] were fairly involved. Future work will include a detailed analysis of our protocol under realistic, imperfect experimental conditions. This is work in progress and early results look promising. We would also like to extend our splitting scheme to the setting with loss, i.e. where we allow a third ‘$\varnothing$’ answer in all protocols involved. The goal is then to show the existence of a QPV protocol that is secure and loss-tolerant even if attackers can use quantum communication. In the end the goal is to find a practically feasible, yet secure, QPV protocol. There remain a few more fundamental open questions to be solved for that. For example, for optical-fibre-based QPV protocols, the speed of light in fibre $v$ is strictly smaller than $c$, allowing attackers to cheat. Can we overcome this issue by finding a feasible free-space protocol or one utilising quantum repeaters? And ultimately, the gap between the necessary and sufficient amounts of EPR pairs to break any QPV protocol (a linear lower bound versus an exponential upper bound in the input size) needs to be closed, hopefully towards an exponential or at least superlinear lower bound.

#### Acknowledgments

We would like to thank Wolfgang Löffler, Kirsten Kanneworff and Norbert Lütkenhaus for many useful discussions. RA and HB were supported by the Dutch Research Council (NWO/OCW), as part of the Quantum Software Consortium programme (project number 024.003.037). PVL and HB were supported by the Dutch Research Council (NWO/OCW), as part of the NWO Gravitation Programme Networks (project number 024.002.003).

## References

* [ApS20] MOSEK ApS. MOSEK Optimizer API for Python, version 9. Software package, 2020.
* [BBC+93] Charles H. Bennett, Gilles Brassard, Claude Crépeau, Richard Jozsa, Asher Peres, and William K. Wootters. Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels. Phys. Rev. Lett., 70:1895–1899, Mar 1993.
* [BCF+11] Harry Buhrman, Nishanth Chandran, Serge Fehr, Ran Gelles, Vipul Goyal, Rafail Ostrovsky, and Christian Schaffner. Position-Based Quantum Cryptography: Impossibility and Constructions. In Phillip Rogaway, editor, Advances in Cryptology – CRYPTO 2011, pages 429–446, Berlin, Heidelberg, 2011. Springer Berlin Heidelberg.
* [BCF+14] Harry Buhrman, Nishanth Chandran, Serge Fehr, Ran Gelles, Vipul Goyal, Rafail Ostrovsky, and Christian Schaffner. Position-Based Quantum Cryptography: Impossibility and Constructions. SIAM Journal on Computing, 43(1):150–178, January 2014. arXiv: 1009.2490.
* [BCS21] Andreas Bluhm, Matthias Christandl, and Florian Speelman. Position-based cryptography: Single-qubit protocol secure against multi-qubit attacks. arXiv preprint arXiv:2104.06301, 2021.
* [BCWdW01] Harry Buhrman, Richard Cleve, John Watrous, and Ronald de Wolf. Quantum fingerprinting. Physical Review Letters, 87(16):167902, September 2001. arXiv: quant-ph/0102001.
* [BFSS13] Harry Buhrman, Serge Fehr, Christian Schaffner, and Florian Speelman. The garden-hose model. In Proceedings of the 4th conference on Innovations in Theoretical Computer Science, pages 145–158, 2013.
* [BK11] Salman Beigi and Robert Koenig. Simplified instantaneous non-local quantum computation with applications to position-based cryptography. New Journal of Physics, 13(9):093036, September 2011. arXiv: 1101.1065.
* [BRSdW11] Harry Buhrman, Oded Regev, Giannicola Scarpa, and Ronald de Wolf. Near-Optimal and Explicit Bell Inequality Violations. In 2011 IEEE 26th Annual Conference on Computational Complexity, pages 157–166, San Jose, CA, USA, June 2011. IEEE.
* [CGMO09] Nishanth Chandran, Vipul Goyal, Ryan Moriarty, and Rafail Ostrovsky. Position Based Cryptography. In Shai Halevi, editor, Advances in Cryptology - CRYPTO 2009, Lecture Notes in Computer Science, pages 391–407, Berlin, Heidelberg, 2009. Springer.
* [CL15] Kaushik Chakraborty and Anthony Leverrier. Practical Position-Based Quantum Cryptography. Physical Review A, 92(5):052304, November 2015. arXiv: 1507.00626.
* [CLM+14] Eric Chitambar, Debbie Leung, Laura Mančinska, Maris Ozols, and Andreas Winter. Everything you always wanted to know about LOCC (but were afraid to ask). Communications in Mathematical Physics, 328(1):303–326, 2014.
* [Cos13] Alessandro Cosentino. PPT-indistinguishable states via semidefinite programming. Physical Review A, 87(1):012321, January 2013. arXiv: 1205.1031.
* [Dol19] Kfir Dolev. Constraining the doability of relativistic quantum tasks. arXiv preprint arXiv:1909.05403, 2019.
* [GC20] Alvin Gonzales and Eric Chitambar. Bounds on Instantaneous Nonlocal Quantum Computation. IEEE Transactions on Information Theory, 66(5):2951–2963, May 2020. arXiv: 1810.00994.
* [GECP13] Juan Carlos Garcia-Escartin and Pedro Chamorro-Posada. SWAP test and Hong-Ou-Mandel effect are equivalent. Physical Review A, 87(5):052330, 2013.
* [GLW13] Fei Gao, Bin Liu, and Qiao-Yan Wen. Enhanced no-go theorem for quantum position verification. arXiv preprint arXiv:1305.4254, 2013.
* [JKPPG21] Marius Junge, Aleksander M Kubicki, Carlos Palazuelos, and David Pérez-García. Geometry of Banach spaces: a new route towards position based cryptography. arXiv preprint arXiv:2103.16357, 2021.
* [KMS11] Adrian Kent, William J. Munro, and Timothy P. Spiller. Quantum Tagging: Authenticating Location via Quantum Information and Relativistic Signalling Constraints. Physical Review A, 84(1):012326, July 2011. arXiv: 1008.2147.
* [LL11] Hoi Kwan Lau and Hoi Kwong Lo. Insecurity of position-based quantum cryptography protocols against entanglement attacks. Physical Review A, 83(1):012322, January 2011. arXiv: 1009.2256.
* [LXS+16] Charles Ci Wen Lim, Feihu Xu, George Siopsis, Eric Chitambar, Philip G. Evans, and Bing Qi. Loss-tolerant quantum secure positioning with weak laser sources. Physical Review A, 94(3):032315, September 2016. arXiv: 1607.08193.
* [MdW18] Ashley Montanaro and Ronald de Wolf. A Survey of Quantum Property Testing. arXiv: 1310.2035 [quant-ph], March 2018.
* [OCCG20] Andrea Olivo, Ulysse Chabaud, André Chailloux, and Frédéric Grosshans. Breaking simple quantum position verification protocols with little entanglement. arXiv: 2007.15808 [quant-ph], July 2020.
* [QLL+15] Bing Qi, Hoi-Kwong Lo, Charles Ci Wen Lim, George Siopsis, Eric A. Chitambar, Raphael Pooser, Philip G. Evans, and Warren Grice. Free-space reconfigurable quantum key distribution network.
2015 IEEE International Conference on Space Optical Systems and Applications (ICSOS), pages 1–6, October 2015. arXiv: 1510.04891.
* [QR20] Xiao-Liang Qi and Daniel Ranard. Emergent classicality in general multipartite states and channels. arXiv preprint arXiv:2001.01507, 2020.
* [QS15] Bing Qi and George Siopsis. Loss-tolerant position-based quantum cryptography. Physical Review A, 91(4):042337, April 2015. arXiv: 1502.02020.
* [RG15] Jérémy Ribeiro and Frédéric Grosshans. A Tight Lower Bound for the BB84-states Quantum-Position-Verification Protocol. arXiv: 1504.07171 [quant-ph], June 2015.
* [Sim00] Rajiah Simon. Peres-Horodecki separability criterion for continuous variable systems. Physical Review Letters, 84(12):2726, 2000.
* [Spe16a] Florian Speelman. Instantaneous Non-Local Computation of Low T-Depth Quantum Circuits. In Anne Broadbent, editor, 11th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2016), volume 61 of Leibniz International Proceedings in Informatics (LIPIcs), pages 9:1–9:24, Dagstuhl, Germany, 2016. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
* [Spe16b] Florian Speelman. Position-based quantum cryptography and catalytic computation. PhD thesis, University of Amsterdam, Amsterdam, 2016. OCLC: 964061686.
* [TFKW13] Marco Tomamichel, Serge Fehr, Jędrzej Kaniewski, and Stephanie Wehner. A Monogamy-of-Entanglement Game With Applications to Device-Independent Quantum Cryptography. New Journal of Physics, 15(10):103002, October 2013. arXiv: 1210.4359.
* [Unr14] Dominique Unruh. Quantum Position Verification in the Random Oracle Model. In Juan A. Garay and Rosario Gennaro, editors, Advances in Cryptology – CRYPTO 2014, pages 1–18, Berlin, Heidelberg, 2014. Springer Berlin Heidelberg.
* [VB96] Lieven Vandenberghe and Stephen Boyd. Semidefinite programming. SIAM Review, 38(1):49–95, 1996.
* [Wat18] John Watrous. The Theory of Quantum Information, page 418. Cambridge University Press, 1 edition, April 2018.

## Appendix A QPV${}_{\textsf{SWAP}}$

### A.1 Exponential suppression of attacker success probability

Here we provide the proof of (3.5). Let $N_{\beta}$ be the binomially distributed random variable describing the number of “0” answers of the attackers in $L_{\beta}$. Since $p_{\beta},p^{\mathsf{AB}}_{\beta}\in\big[\frac{1}{2},1\big)$, we may approximate the binomial distribution with a normal distribution $\mathcal{N}(\mu,\sigma^{2})$ with $\mu=R_{\beta}p$ and $\sigma^{2}=R_{\beta}p(1-p)$ for $p\in\{p_{\beta},p^{\mathsf{AB}}_{\beta}\}$, respectively. This is valid as long as $R_{\beta}(1-p)$ is sufficiently large, which we can always achieve by making $R_{\beta}$ big enough. Then $\displaystyle\mathbb{P}(\mathsf{acc}_{\beta}|\mathsf{attack})$ $\displaystyle=\mathbb{P}\left(z_{\frac{\alpha}{2}}^{\mathsf{P}}\leq N_{\beta}\leq z_{1-\frac{\alpha}{2}}^{\mathsf{P}}\right)$ $\displaystyle=F_{N_{\beta}}\left(z_{1-\frac{\alpha}{2}}^{\mathsf{P}}\right)-F_{N_{\beta}}\left(z_{\frac{\alpha}{2}}^{\mathsf{P}}-1\right)$ $\displaystyle\approx\frac{1}{2}\left[1+\operatorname{erf}\left(\frac{z_{1-\frac{\alpha}{2}}^{\mathsf{P}}-\mu_{\beta}^{\mathsf{AB}}}{\sqrt{2}\sigma_{\beta}^{\mathsf{AB}}}\right)\right]-\frac{1}{2}\left[1+\operatorname{erf}\left(\frac{z_{\frac{\alpha}{2}}^{\mathsf{P}}-\mu_{\beta}^{\mathsf{AB}}}{\sqrt{2}\sigma_{\beta}^{\mathsf{AB}}}\right)\right].$ Now for $\mathcal{N}(\mu,\sigma^{2})$ one has $z_{q}=F^{-1}(q)=\mu+\sqrt{2}\sigma\operatorname{erf}^{-1}(2q-1)$.
Replacing the $z_{q}$ values and defining $c_{\alpha}\coloneqq\operatorname{erf}^{-1}(1-\alpha)$ as well as $f_{\beta}^{\mathsf{X}}=\sqrt{2p_{\beta}^{\mathsf{X}}(1-p_{\beta}^{\mathsf{X}})}$ gives $\displaystyle\mathbb{P}(\mathsf{acc}_{\beta}|\mathsf{attack})\approx\frac{1}{2}\operatorname{erf}\left(\frac{\sqrt{R_{\beta}}\Delta_{\beta}+f_{\beta}^{\mathsf{P}}c_{\alpha}}{f_{\beta}^{\mathsf{AB}}}\right)-\frac{1}{2}\operatorname{erf}\left(\frac{\sqrt{R_{\beta}}\Delta_{\beta}-f_{\beta}^{\mathsf{P}}c_{\alpha}}{f_{\beta}^{\mathsf{AB}}}\right).$ Using that $\operatorname{erf}(x)\approx 1-\frac{e^{-x^{2}}}{\sqrt{\pi}x}$ for large $x$, we can write $\displaystyle\mathbb{P}(\mathsf{acc}_{\beta}|\mathsf{attack})\approx\sqrt{\frac{2}{\pi}}f_{\beta}^{\mathsf{AB}}\left[\frac{e^{-\left(\sqrt{R_{\beta}}\Delta_{\beta}-f_{\beta}^{\mathsf{P}}c_{\alpha}\right)^{2}/\left(f_{\beta}^{\mathsf{AB}}\right)^{2}}}{\sqrt{R_{\beta}}\Delta_{\beta}-f_{\beta}^{\mathsf{AB}}c_{\alpha}}-\frac{e^{-\left(\sqrt{R_{\beta}}\Delta_{\beta}+f_{\beta}^{\mathsf{P}}c_{\alpha}\right)^{2}/\left(f_{\beta}^{\mathsf{AB}}\right)^{2}}}{\sqrt{R_{\beta}}\Delta_{\beta}+f_{\beta}^{\mathsf{AB}}c_{\alpha}}\right].$ As $p_{\beta}^{\mathsf{AB}}\neq 0$ and $p_{\beta}^{\mathsf{AB}}\neq 1$, we may neglect the terms $f_{\beta}^{\mathsf{AB}}c_{\alpha}$ in the denominators because we can make $R_{\beta}$ sufficiently large. Moreover, leaving out the second exponential term gives the approximate upper bound $\displaystyle\mathbb{P}(\mathsf{acc}_{\beta}|\mathsf{attack})\lesssim\frac{\sqrt{2}f_{\beta}^{\mathsf{AB}}}{\sqrt{\pi R_{\beta}}\Delta_{\beta}}e^{-\left(\sqrt{R_{\beta}}\Delta_{\beta}-f_{\beta}^{\mathsf{P}}c_{\alpha}\right)^{2}/\left(f_{\beta}^{\mathsf{AB}}\right)^{2}}=O\left(\frac{2^{-\Delta_{\beta}^{2}R_{\beta}}}{\Delta_{\beta}\sqrt{R_{\beta}}}\right).$

### A.2 Relating $p_{\text{succ}}$ to $\lVert\Delta\rVert_{1}$ for QPV${}_{\textsf{SWAP}}(0,1)$

Relating these two quantities is fairly straightforward and achieved by one application of the triangle inequality. Consider $\displaystyle p_{\text{succ}}=\frac{1}{2}\frac{\operatorname{Tr}[\Pi_{=}\rho_{=}]}{\eta}+\frac{1}{2}\frac{\operatorname{Tr}[\Pi_{\neq}\rho_{\neq}]}{\eta}\leq u,$ with $u\leq 3/4$. We want to massage this in order to get $\Delta_{0}$ and $\Delta_{1}$ expressions into it. Doing so gives $\displaystyle 1-\frac{\operatorname{Tr}[\Pi_{=}\rho_{=}]}{\eta}+\frac{1}{2}-\frac{\operatorname{Tr}[\Pi_{\neq}\rho_{\neq}]}{\eta}\geq\frac{3}{2}-2u.$ This implies $\displaystyle\lVert\Delta\rVert_{1}=\bigg{|}1-\frac{\operatorname{Tr}[\Pi_{=}\rho_{=}]}{\eta}\bigg{|}+\bigg{|}\frac{1}{2}-\frac{\operatorname{Tr}[\Pi_{\neq}\rho_{\neq}]}{\eta}\bigg{|}\geq\bigg{|}1-\frac{\operatorname{Tr}[\Pi_{=}\rho_{=}]}{\eta}+\frac{1}{2}-\frac{\operatorname{Tr}[\Pi_{\neq}\rho_{\neq}]}{\eta}\bigg{|}\geq\frac{3}{2}-2u.$

### A.3 Optimal PPT Measurements for QPV${}_{\textsf{SWAP}}$ Protocol

Here we shall prove the upper bound in equation (3.11) on the success probability of answering the protocol correctly for adversaries restricted to PPT operations. For simplification we will refer to the equal case as the $0$ case and the unequal case as the $1$ case. The idea of the proof is to find analytic feasible solutions to the primal and dual programs of the SDP. In general, a feasible solution to the primal program gives a lower bound on the maximization value, while a feasible solution to the dual program gives an upper bound; this is the property of weak duality, which holds for any SDP [VB96].
In all of our further proofs we find feasible primal and dual values that coincide; thus our solutions are optimal and we have strong duality. From the density matrices we see that there is no difference between picking two random equal states and picking two equal states in a random mutually unbiased basis, see $\rho_{0}$. Similarly, picking two random orthogonal states is equivalent to picking two orthogonal mutually unbiased basis (MUB) states, see $\rho_{1}$. These become (note that this is a slight change of notation with respect to the main text, where we used $\rho_{\beta}$ for overlap $\beta$; here $\rho_{0}$ denotes the mixed state of sending identical states and $\rho_{1}$ the one of sending orthogonal states) $\displaystyle\rho_{0}=\frac{1}{6}\left(\begin{matrix}2&0&0&0\\ 0&1&1&0\\ 0&1&1&0\\ 0&0&0&2\end{matrix}\right),$ $\displaystyle\rho_{1}=\frac{1}{6}\left(\begin{matrix}1&0&0&0\\ 0&2&-1&0\\ 0&-1&2&0\\ 0&0&0&1\end{matrix}\right).$ It is useful to note that both density matrices $\rho_{0},\rho_{1}$ are mixtures of unentangled states and thereby unentangled themselves. Thus, by the Peres-Horodecki separability criterion, the partial transposes of $\rho_{0}$ and $\rho_{1}$ are positive semi-definite [Sim00]. The optimization over all strategies of the single-round protocol is written as the following SDP:

Primal Program maximize: $\displaystyle\frac{1}{2}\operatorname{Tr}[\Pi_{0}\rho_{0}+\Pi_{1}\rho_{1}]$ subject to: $\displaystyle\Pi_{0}+\Pi_{1}=\mathbbm{1}_{2^{2}}$ $\displaystyle\Pi_{k}\in\text{PPT}(\mathsf{A}:\mathsf{B}),\ \ \ k\in\{0,1\}$

Dual Program minimize: $\displaystyle\operatorname{Tr}[Y]$ subject to: $\displaystyle Y-Q^{T_{\mathsf{B}}}_{i}-\rho_{i}/2\succeq 0,\ \ \ i\in\{0,1\}$ $\displaystyle Y\in\text{Herm}(\mathsf{A}\otimes\mathsf{B})$ $\displaystyle Q_{i}\in\text{Pos}(\mathsf{A}\otimes\mathsf{B}),\ \ \ i\in\{0,1\}.$

A feasible solution for the primal program is $\displaystyle\Pi_{0}=\frac{1}{3}\left(\begin{matrix}2&0&0&0\\ 0&1&1&0\\ 0&1&1&0\\ 0&0&0&2\end{matrix}\right),$ $\displaystyle\Pi_{1}=\frac{1}{3}\left(\begin{matrix}1&0&0&0\\ 0&2&-1&0\\ 0&-1&2&0\\ 0&0&0&1\end{matrix}\right),$ with value $\frac{1}{2}\operatorname{Tr}[\Pi_{0}\rho_{0}+\Pi_{1}\rho_{1}]=2/3$. Note that these measurement projectors correspond to attackers choosing a random MUB to measure in and returning 0 if the measurement outcomes were equal and 1 otherwise, which is also a single-round LOCC strategy.
This can be seen from the fact that $\displaystyle\frac{1}{3}(\ket{00}\bra{00}+\ket{11}\bra{11}+\ket{++}\bra{++}+\ket{--}\bra{--}+\ket{i^{+}i^{+}}\bra{i^{+}i^{+}}+\ket{i^{-}i^{-}}\bra{i^{-}i^{-}})$ $\displaystyle=\Pi_{0},$ $\displaystyle\frac{1}{3}(\ket{10}\bra{10}+\ket{01}\bra{01}+\ket{-+}\bra{-+}+\ket{+-}\bra{+-}+\ket{i^{-}i^{+}}\bra{i^{-}i^{+}}+\ket{i^{+}i^{-}}\bra{i^{+}i^{-}})$ $\displaystyle=\Pi_{1}.$ A feasible solution to the dual program is $\displaystyle Y=\frac{\mathbbm{1}_{4}}{6},$ $\displaystyle Q_{0}=0\succeq 0,$ $\displaystyle Q_{1}=\frac{\mathbbm{1}_{4}}{6}-\frac{\rho_{1}^{T_{\mathsf{B}}}}{2}=\frac{1}{12}\left(\begin{matrix}1&0&0&1\\ 0&0&0&0\\ 0&0&0&0\\ 1&0&0&1\end{matrix}\right)=\frac{1}{6}\ket{\Phi^{+}}\bra{\Phi^{+}}\succeq 0.$ These adhere to the constraints in the dual program: $\displaystyle Y-Q^{T_{\mathsf{B}}}_{0}-\frac{\rho_{0}}{2}$ $\displaystyle=\frac{\mathbbm{1}_{4}}{6}-\frac{\rho_{0}}{2}=\frac{1}{12}\left(\begin{matrix}0&0&0&0\\ 0&1&-1&0\\ 0&-1&1&0\\ 0&0&0&0\end{matrix}\right)=\frac{1}{6}\ket{\Psi^{-}}\bra{\Psi^{-}}\succeq 0,$ $\displaystyle Y-Q^{T_{\mathsf{B}}}_{1}-\frac{\rho_{1}}{2}$ $\displaystyle=\frac{\mathbbm{1}_{4}}{6}-\left(\frac{\mathbbm{1}_{4}}{6}-\frac{\rho_{1}}{2}\right)-\frac{\rho_{1}}{2}=0\succeq 0.$ Since also $Y\in\text{Herm}(\mathsf{A}\otimes\mathsf{B})$, we get a feasible solution for the dual with value $\operatorname{Tr}[Y]=\frac{2}{3}$. Thus we have feasible solutions of the primal and dual program that give the same value, so we conclude that the maximal probability of success for attackers under the PPT restriction is $2/3$.

### A.4 Optimal PPT Measurements for QPV${}^{n}_{\textsf{SWAP}}$ Protocol

We shall prove that the optimal probability of success for attackers in the $n$-round parallel repetition case is $(2/3)^{n}$. The SDP of the $n$-round parallel repetition protocol was given as:

Primal Program maximize: $\displaystyle\frac{1}{2^{n}}\sum_{s\in\{0,1\}^{n}}\operatorname{Tr}[\Pi_{s}\rho_{s}]$ subject to: $\displaystyle\sum_{s\in\{0,1\}^{n}}\Pi_{s}=\mathbbm{1}_{2^{2n}}$ $\displaystyle\Pi_{s}\in\text{PPT}(\mathsf{A}:\mathsf{B}),\ \ \ s\in\{0,1\}^{n}$

Dual Program minimize: $\displaystyle\operatorname{Tr}[Y]$ subject to: $\displaystyle Y-Q^{T_{\mathsf{B}}}_{s}-\rho_{s}/2^{n}\succeq 0,\ \ \ s\in\{0,1\}^{n}$ $\displaystyle Y\in\text{Herm}(\mathsf{A}\otimes\mathsf{B})$ $\displaystyle Q_{s}\in\text{Pos}(\mathsf{A}\otimes\mathsf{B}).$

Repeating the strategy of the single-round protocol gives a feasible solution for the primal program with success probability $(2/3)^{n}$. A feasible solution to the dual problem would yield an upper bound to the problem, but requires finding a general solution for the matrices $Y,Q_{s}$. We start again by setting $Y$ to be the identity matrix with the proper normalization $Y=\frac{\mathbbm{1}_{2^{2n}}}{2^{2n}}\left(\frac{2}{3}\right)^{n}=\frac{\mathbbm{1}_{2^{2n}}}{6^{n}},\text{ such that }\operatorname{Tr}[Y]=\left(\frac{2}{3}\right)^{n}.$ (A.1) We will construct a general feasible solution $Q_{s}$ for any string $s\in\{0,1\}^{n}$ from $Q_{T(s)}$, where $T(s)$ is the reverse-sorted version of $s$. First we show a general solution for the strings $s=0^{n}$ and $s=1^{n}$. Again, a solution for the all-0 input case is $Q_{0^{n}}=0\succeq 0$.
The first constraint for $s=0^{n}$ in the dual program of the SDP then reduces to $\displaystyle\frac{\mathbbm{1}_{2^{2n}}}{6^{n}}-\frac{\rho_{0}^{\otimes n}}{2^{n}}.$ (A.2) Note that the eigenvectors of $\rho_{0}$ are the four Bell states $\{\ket{\Phi^{+}},\ket{\Phi^{-}},\ket{\Psi^{+}},\ket{\Psi^{-}}\}$, with respective eigenvalues $\{1/3,1/3,1/3,0\}$; hence the eigenvalues of $\frac{\rho_{0}^{\otimes n}}{2^{n}}$ are $1/6^{n}$ or $0$. Thus the eigenvalues of (A.2) are either $0$ or $1/6^{n}$, and since the expression is Hermitian, (A.2) is positive semi-definite. Similar to the single-round protocol, we have the following solution for the $s=1^{n}$ case: $\displaystyle Q_{1^{n}}$ $\displaystyle=\frac{\mathbbm{1}_{2^{2n}}}{6^{n}}-\frac{(\rho_{1}^{T_{\mathsf{B}}})^{\otimes n}}{2^{n}},$ where the eigenvectors of $\rho_{1}^{T_{\mathsf{B}}}$ are again the Bell states, with respective eigenvalues $\{0,1/3,1/3,1/3\}$. The eigenvectors of $Q_{1^{n}}$ are all the combinations of tensor products of these four Bell states. If one of these states is the $\ket{\Phi^{+}}$ state, the corresponding eigenvalue of $Q_{1^{n}}$ is $(\frac{1}{6})^{n}$; otherwise the corresponding eigenvalue is $0$. Since $Q_{1^{n}}$ is Hermitian and has only non-negative eigenvalues, $Q_{1^{n}}\succeq 0$, as desired. The corresponding constraint in the dual program of the SDP reduces to $\displaystyle\frac{\mathbbm{1}_{2^{2n}}}{6^{n}}-\left(\frac{\mathbbm{1}_{2^{2n}}}{6^{n}}-\frac{\rho_{1}^{\otimes n}}{2^{n}}\right)-\frac{\rho_{1}^{\otimes n}}{2^{n}}=0\succeq 0.$ Now suppose we have a valid solution $Q_{s}$ for some $s\in\{0,1\}^{n}$, i.e. $Y-Q^{T_{\mathsf{B}}}_{s}-\rho_{s}/2^{n}\succeq 0,$ (A.3) and suppose we add an extra round of equal inputs to this $n$-round protocol. We will show that $\displaystyle Q_{s,0}=Q_{s}\otimes\rho_{0}^{T_{\mathsf{B}}}/2$ (A.4) is a valid solution for the $(n+1)$-round SDP. We have already shown in Appendix A.3 that $\rho_{0}^{T_{\mathsf{B}}}\succeq 0$. Since the tensor product of positive semi-definite matrices is again positive semi-definite, we have $Q_{s,0}\succeq 0$. Rewriting the first dual constraint we get $\displaystyle\frac{\mathbbm{1}_{2^{2n+2}}}{6^{n+1}}-Q_{s,0}^{T_{\mathsf{B}}}-\frac{\rho_{s}\otimes\rho_{0}}{2^{n+1}}$ $\displaystyle=\frac{\mathbbm{1}_{2^{2n+2}}}{6^{n+1}}-Q_{s}^{T_{\mathsf{B}}}\otimes\frac{\rho_{0}}{2}-\frac{\rho_{s}\otimes\rho_{0}}{2^{n+1}}$ $\displaystyle=\frac{\mathbbm{1}_{2^{2n}}}{6^{n}}\otimes\frac{\mathbbm{1}_{4}}{6}-Q_{s}^{T_{\mathsf{B}}}\otimes\frac{\rho_{0}}{2}-\frac{\rho_{s}\otimes\rho_{0}}{2^{n+1}}$ $\displaystyle=\frac{\mathbbm{1}_{2^{2n}}}{6^{n}}\otimes\frac{\rho_{0}+\rho_{1}}{3}-Q_{s}^{T_{\mathsf{B}}}\otimes\frac{\rho_{0}}{2}-\frac{\rho_{s}\otimes\rho_{0}}{2^{n+1}}$ $\displaystyle=\underbrace{\left(\frac{\mathbbm{1}_{2^{2n}}}{6^{n}}-Q_{s}^{T_{\mathsf{B}}}-\frac{\rho_{s}}{2^{n}}\right)\otimes\frac{\rho_{0}}{2}}_{\textbf{A}}+\underbrace{\frac{\mathbbm{1}_{2^{2n}}}{6^{n}}\otimes\left(\frac{2\rho_{1}-\rho_{0}}{6}\right)}_{\textbf{B}}.$ We see that part A is a tensor product of two positive semi-definite matrices, (A.3) and $\rho_{0}/2$, so A is also positive semi-definite. For part B, note that the eigenvectors of $\frac{2\rho_{1}-\rho_{0}}{6}$ are again the Bell states, with respective eigenvalues $\{0,0,0,1/6\}$, so part B is positive semi-definite. Since sums of positive semi-definite matrices are positive semi-definite, the whole constraint is positive semi-definite.
Since for any number of rounds $n$ we have a feasible solution for the $s=1^{n}$ case, by induction we can repeat the previous steps to get a solution for any reverse-sorted string $1^{n}0^{k},k\in\mathbb{N}$, namely $\displaystyle Q_{1^{n}0^{k}}=Q_{1^{n}}\otimes\frac{(\rho_{0}^{T_{\mathsf{B}}})^{\otimes k}}{2^{k}}.$ (A.5) Now take some string $s\in\{0,1\}^{n}$, and let $P_{s}$ be a unitary consisting only of 2-qubit SWAP operations that reverse-sorts the $n$ rounds, such that $P_{s}\rho_{s}P_{s}^{\dagger}=\rho_{T(s)}$ and $P^{\dagger}_{s}=P_{s}$. We can now write down the general solution for $Q_{s}$ using the corresponding map $P_{s}$ applied to the sorted version. Let $Q_{s}=(P_{s}Q_{T(s)}^{T_{\mathsf{B}}}P_{s})^{T_{\mathsf{B}}}$. Using the fact that $P_{s}$ is a unitary matrix, we then get for the corresponding constraint in the dual SDP: $\displaystyle Y-Q^{T_{\mathsf{B}}}_{s}-\rho_{s}/2^{n}\succeq 0$ $\displaystyle\Leftrightarrow P_{s}(Y-Q^{T_{\mathsf{B}}}_{s}-\rho_{s}/2^{n})P_{s}\succeq 0$ $\displaystyle\Leftrightarrow Y-P_{s}Q^{T_{\mathsf{B}}}_{s}P_{s}-\rho_{T(s)}/2^{n}\succeq 0$ $\displaystyle\Leftrightarrow Y-P_{s}((P_{s}Q^{T_{\mathsf{B}}}_{T(s)}P_{s})^{T_{\mathsf{B}}})^{T_{\mathsf{B}}}P_{s}-\rho_{T(s)}/2^{n}\succeq 0$ $\displaystyle\Leftrightarrow Y-P_{s}(P_{s}Q^{T_{\mathsf{B}}}_{T(s)}P_{s})P_{s}-\rho_{T(s)}/2^{n}\succeq 0$ $\displaystyle\Leftrightarrow Y-Q^{T_{\mathsf{B}}}_{T(s)}-\rho_{T(s)}/2^{n}\succeq 0.$ The last expression is positive semi-definite by (A.5). Thus the first constraint in the dual program of the $n$-round SDP is satisfied for any string $s$, i.e. for any combination of rounds. The final step is to show that $Q_{s}=(P_{s}Q_{T(s)}^{T_{\mathsf{B}}}P_{s})^{T_{\mathsf{B}}}$ is positive semi-definite. Note that $P_{s}$ permutes the registers held by $\mathsf{A}$ and those held by $\mathsf{B}$ together in the same way, since it consists only of 2-qubit SWAP operations acting on whole rounds. Its action is thus independent of the partial transpose on the second party $\mathsf{B}$, and we therefore have $Q_{s}=P_{s}Q_{T(s)}P_{s}$. Now, since $P_{s}$ is unitary and $Q_{T(s)}$ is positive semi-definite, $Q_{s}$ is positive semi-definite. We have shown that all the constraints in the dual program of the $n$-round SDP are satisfied by our constructed $Q_{s}$ matrices; thus we have a feasible solution to the dual program with value $\operatorname{Tr}[Y]=(2/3)^{n}$, which is equal to the primal value and is attainable by an LOCC strategy. This shows that the best attacking strategy for adversaries restricted to LOCC operations playing $n$ rounds in parallel is to simply apply the single-round strategy $n$ times in parallel.

### A.5 Optimal PPT Measurements for loss-tolerant $\text{QPV}^{n}_{\textsf{SWAP}}$ Protocol

We shall now modify the solution of the parallel repetition case in Appendix A.4 to give a solution to the maximization of the conditional success probability under LOCC restrictions.
The SDP for the lossy $n$-round parallel repetition protocol, in which attackers either answer on all rounds or on none, is given as:

Primal Program maximize: $\displaystyle\frac{1}{2^{n}\eta}\sum_{s\in\{0,1\}^{n}}\operatorname{Tr}[\tilde{\Pi}_{s}\rho_{s}]$ subject to: $\displaystyle\left(\sum_{s\in\{0,1\}^{n}}\tilde{\Pi}_{s}\right)+\tilde{\Pi}_{\varnothing}=\mathbbm{1}_{2^{2n}}$ $\displaystyle\operatorname{Tr}[\tilde{\Pi}_{\varnothing}\rho_{s}]=1-\eta,\ \ \ s\in\{0,1\}^{n}$ $\displaystyle\tilde{\Pi}_{s}\in\text{PPT}(\mathsf{A}:\mathsf{B}),\ \ \ s\in\{0,1\}^{n}\cup\varnothing$

Dual Program minimize: $\displaystyle\frac{\operatorname{Tr}[\tilde{Y}]-(1-\eta)\gamma}{\eta}$ subject to: $\displaystyle\tilde{Y}-\tilde{Q}^{T_{\mathsf{B}}}_{s}-\rho_{s}/2^{n}\succeq 0,\ \ \ s\in\{0,1\}^{n}$ $\displaystyle 2^{2n}(\tilde{Y}-\tilde{Q}^{T_{\mathsf{B}}}_{\varnothing})-\gamma\mathbbm{1}_{2^{2n}}\succeq 0$ $\displaystyle\tilde{Y}\in\text{Herm}(\mathsf{A}\otimes\mathsf{B})$ $\displaystyle\tilde{Q}_{s}\in\text{Pos}(\mathsf{A}\otimes\mathsf{B}),\ \ \ s\in\{0,1\}^{n}\cup\varnothing$ $\displaystyle\gamma\in\mathbb{R}.$

We suspect our protocol is loss-tolerant; thus we want the solution to be independent of $\eta$. It turns out that multiplying the POVM elements by $\eta$ and picking $\tilde{\Pi}_{\varnothing}$ accordingly, i.e. $\tilde{\Pi}_{s}=\eta\Pi_{s}$ for every $s\in\{0,1\}^{n}$ and $\tilde{\Pi}_{\varnothing}=(1-\eta)\mathbbm{1}_{2^{2n}}$, gives a feasible solution for the primal program with value $(2/3)^{n}$. Note that this strategy corresponds to the attackers answering (with the lossless strategy) only with probability $\eta$ and declaring a loss otherwise, which corresponds to the notion of loss tolerance. For the dual program, note that if we pick $\displaystyle\tilde{Y}=\frac{\mathbbm{1}_{2^{2n}}}{6^{n}},$ $\displaystyle\tilde{Q}_{s}=Q_{s},$ $\displaystyle\tilde{Q}_{\varnothing}=0,$ $\displaystyle\gamma=(2/3)^{n},$ (A.6) then trivially $\tilde{Y}\in\text{Herm}(\mathsf{A}\otimes\mathsf{B}),\tilde{Q}_{s}\in\text{Pos}(\mathsf{A}\otimes\mathsf{B}),\gamma\in\mathbb{R}$, and the first constraint remains satisfied since we have not changed $Y,Q_{s}$ from Appendix A.4. The second constraint becomes $\displaystyle 2^{2n}(\tilde{Y}-\tilde{Q}^{T_{\mathsf{B}}}_{\varnothing})-\gamma\mathbbm{1}_{2^{2n}}$ $\displaystyle=\mathbbm{1}_{2^{2n}}\frac{2^{n}}{3^{n}}-(2/3)^{n}\mathbbm{1}_{2^{2n}}=0\succeq 0.$ (A.7) So all constraints in the dual are satisfied. We thus get an upper bound of $\displaystyle\frac{\operatorname{Tr}[\tilde{Y}]-(1-\eta)\gamma}{\eta}=\frac{(2/3)^{n}-(1-\eta)(2/3)^{n}}{\eta}=\frac{\eta(2/3)^{n}}{\eta}=(2/3)^{n},$ (A.8) matching the primal value. Thus we finally have $\mathbb{P}_{guess}^{max}(n,\eta)=(2/3)^{n}$. So the $n$-round protocol is also loss-tolerant for any $n$, as we see that the probability of being correct conditioned on answering is independent of the loss parameter $\eta$.

### A.6 Optimal PPT Measurements for QPV${}_{\text{Sym/Antisym}}$

In this section we solve the SDP that optimizes, over PPT measurements, the probability of successfully discriminating a random symmetric state from the antisymmetric state; since LOCC measurements are a subset of PPT measurements, this also bounds the success probability of two adversaries restricted to LOCC operations. The SDP formulation of this protocol is the same as in Appendix A.3; only the density matrices are now different.
Here we write $\rho_{0}$ for $\rho_{\text{sym}}$ and $\rho_{1}$ for $\rho_{\text{antisym}}$: $\displaystyle\rho_{0}=\frac{1}{6}\left(\begin{matrix}2&0&0&0\\ 0&1&1&0\\ 0&1&1&0\\ 0&0&0&2\end{matrix}\right),$ $\displaystyle\rho_{1}=\frac{1}{2}\left(\begin{matrix}0&0&0&0\\ 0&1&-1&0\\ 0&-1&1&0\\ 0&0&0&0\end{matrix}\right).$ A feasible solution for the primal program is to measure both states in the computational basis and answer the XOR of the measurement outcomes, so $\Pi_{0}=\ket{00}\bra{00}+\ket{11}\bra{11},\Pi_{1}=\ket{01}\bra{01}+\ket{10}\bra{10}$. This strategy has success probability $\frac{1}{2}\operatorname{Tr}[\Pi_{0}\rho_{0}+\Pi_{1}\rho_{1}]=5/6$. A feasible solution to the corresponding dual is $\displaystyle Y=\left(\begin{matrix}\frac{1}{6}&0&0&0\\ 0&\frac{1}{4}&-\frac{1}{12}&0\\ 0&-\frac{1}{12}&\frac{1}{4}&0\\ 0&0&0&\frac{1}{6}\end{matrix}\right),$ $\displaystyle Q_{0}=0\succeq 0,$ $\displaystyle Q_{1}=\frac{1}{6}\left(\begin{matrix}1&0&0&1\\ 0&0&0&0\\ 0&0&0&0\\ 1&0&0&1\end{matrix}\right)=\frac{1}{3}\ket{\Phi^{+}}\bra{\Phi^{+}}\succeq 0,$ where $\operatorname{Tr}[Y]=5/6$ is an upper bound on the success probability of the protocol optimized over all PPT measurements. Thus we see that the highest probability of success for adversaries restricted to LOCC operations is $5/6$.

Now, for the protocol where we double the input rounds but restrict the inputs to be either both symmetric or both antisymmetric states, we will show that there is no perfect LOCC attack. The corresponding SDP that optimizes over all PPT strategies looks as follows:

Primal Program maximize: $\displaystyle\frac{1}{2}\operatorname{Tr}[\Pi_{0}(\rho_{0}\otimes\rho_{0})+\Pi_{1}(\rho_{1}\otimes\rho_{1})]$ subject to: $\displaystyle\Pi_{0}+\Pi_{1}=\mathbbm{1}_{2^{4}}$ $\displaystyle\Pi_{k}\in\text{PPT}(\mathsf{A}:\mathsf{B}),\ \ \ k\in\{0,1\}$

Dual Program minimize: $\displaystyle\operatorname{Tr}[Y]$ subject to: $\displaystyle Y-Q^{T_{\mathsf{B}}}_{i}-(\rho_{i}\otimes\rho_{i})/2\succeq 0,\ \ \ i\in\{0,1\}$ $\displaystyle Y\in\text{Herm}(\mathsf{A}\otimes\mathsf{B})$ $\displaystyle Q_{i}\in\text{Pos}(\mathsf{A}\otimes\mathsf{B}),\ \ \ i\in\{0,1\}.$

We will show that the following is a feasible solution to the dual: $\displaystyle Y=\frac{1}{18}\left(9(\rho_{0}\otimes\rho_{0})+8(\rho_{1}\otimes\rho_{1})\right),$ $\displaystyle Q_{0}=0\succeq 0,$ $\displaystyle Q_{1}=\frac{1}{18}\left((3\rho_{0}^{T_{\mathsf{B}}}\otimes 3\rho_{0}^{T_{\mathsf{B}}})-(\rho_{1}^{T_{\mathsf{B}}}\otimes\rho_{1}^{T_{\mathsf{B}}})\right).$ Note that, in contrast to the optimizations for the QPV${}^{n}_{\text{SWAP}}$ protocols, the matrix $Y$ is not proportional to the identity matrix, but it is nonetheless Hermitian. Secondly, the eigenvectors of $3\rho_{0}^{T_{\mathsf{B}}}$ and $\rho_{1}^{T_{\mathsf{B}}}$ are both the 4 Bell states, with eigenvalues $\{3/2,1/2,1/2,1/2\}$ and $\{-1/2,1/2,1/2,1/2\}$ respectively. We see for any of the 16 possible eigenvectors of $Q_{1}$ that the corresponding eigenvalues will be $0,\frac{1}{9}$ or $\frac{1}{18}$. We conclude that $Q_{1}\succeq 0$, since all its eigenvalues are non-negative and it is Hermitian.
For $Q_{0}$ the first constraint in the dual program becomes: $\displaystyle Y-Q^{T_{\mathsf{B}}}_{0}-\frac{(\rho_{0}\otimes\rho_{0})}{2}$ $\displaystyle=\frac{1}{18}\left(9(\rho_{0}\otimes\rho_{0})+8(\rho_{1}\otimes\rho_{1})\right)-\frac{(\rho_{0}\otimes\rho_{0})}{2}$ $\displaystyle=\frac{4}{9}(\rho_{1}\otimes\rho_{1})\succeq 0.$ And for $Q_{1}$ we get: $\displaystyle Y-Q^{T_{\mathsf{B}}}_{1}-\frac{(\rho_{1}\otimes\rho_{1})}{2}$ $\displaystyle=\frac{1}{18}\left(9(\rho_{0}\otimes\rho_{0})+8(\rho_{1}\otimes\rho_{1})\right)-\frac{1}{18}\left(9(\rho_{0}\otimes\rho_{0})-(\rho_{1}\otimes\rho_{1})\right)-\frac{(\rho_{1}\otimes\rho_{1})}{2}$ $\displaystyle=0\succeq 0.$ Thus we have shown that all conditions in the dual are met, and we get an upper bound on the success probability over all PPT measurements of $\displaystyle\operatorname{Tr}[Y]=\operatorname{Tr}\left[\frac{1}{18}\left(9(\rho_{0}\otimes\rho_{0})+8(\rho_{1}\otimes\rho_{1})\right)\right]=\frac{17}{18}.$ Since this value is attained by the LOCC strategy described in the main text, we conclude that the best attack for adversaries restricted to LOCC operations has success probability $\frac{17}{18}$, in contrast to adversaries who may use quantum communication, for whom there is a perfect attack with success probability $1$: simply swap the second registers and apply the SWAP test locally.
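The closed-form optima in this appendix can also be checked numerically. The following is a minimal sketch (our own illustration, not part of the original analysis), assuming CVXPY with any SDP-capable solver such as the cited MOSEK [ApS20]. It reproduces the single-round values $2/3$ (Appendix A.3) and $5/6$ (this appendix); the $17/18$ bound can be checked the same way after reordering the registers of $\rho_{i}\otimes\rho_{i}$ so that the two $\mathsf{A}$ systems, and likewise the two $\mathsf{B}$ systems, are adjacent.

```python
import numpy as np
import cvxpy as cp

def ptranspose_B(X, dA=2, dB=2):
    """Partial transpose on subsystem B: transpose each dB x dB block."""
    blocks = [[X[i*dB:(i+1)*dB, j*dB:(j+1)*dB].T for j in range(dA)]
              for i in range(dA)]
    return cp.bmat(blocks)

def ppt_value(rho0, rho1):
    """Maximize (1/2) Tr[Pi0 rho0 + Pi1 rho1] over two-outcome PPT POVMs."""
    d = rho0.shape[0]
    P0 = cp.Variable((d, d), symmetric=True)   # real variables suffice here
    P1 = cp.Variable((d, d), symmetric=True)
    Z0, Z1 = ptranspose_B(P0), ptranspose_B(P1)
    constraints = [P0 >> 0, P1 >> 0,
                   # symmetrized explicitly so CVXPY accepts the PSD constraints
                   0.5 * (Z0 + Z0.T) >> 0,
                   0.5 * (Z1 + Z1.T) >> 0,
                   P0 + P1 == np.eye(d)]
    objective = 0.5 * (cp.trace(P0 @ rho0) + cp.trace(P1 @ rho1))
    return cp.Problem(cp.Maximize(objective), constraints).solve()

# Equal vs. orthogonal input pairs (Appendix A.3): optimum 2/3.
# Note that rho_eq coincides with the symmetric mixture rho_sym.
rho_eq = np.array([[2, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 2]]) / 6
rho_neq = np.array([[1, 0, 0, 0], [0, 2, -1, 0], [0, -1, 2, 0], [0, 0, 0, 1]]) / 6
print(ppt_value(rho_eq, rho_neq))    # ~0.6667

# Random symmetric vs. antisymmetric Bell state (this appendix): optimum 5/6.
rho_anti = np.array([[0, 0, 0, 0], [0, 1, -1, 0], [0, -1, 1, 0], [0, 0, 0, 0]]) / 2
print(ppt_value(rho_eq, rho_anti))   # ~0.8333
```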
# Meta-Learning for Few-Shot Land Cover Classification

Marc Rußwurm Sherrie Wang Marco Körner David Lobell

###### Abstract

The representations of the Earth’s surface vary from one geographic region to another. For instance, the appearance of urban areas differs between continents, and seasonality influences the appearance of vegetation. To capture the diversity within a single category, such as urban or vegetation, requires a large model capacity and, consequently, large datasets. In this work, we propose a different perspective and view this diversity as an inductive transfer learning problem, where a few data samples from an unseen region allow a model to adapt to it. We evaluate the model-agnostic meta-learning (MAML) algorithm on classification and segmentation tasks using globally and regionally distributed datasets. We find that few-shot model adaptation outperforms pre-training with regular gradient descent and fine-tuning on (1) the Sen12MS dataset and (2) DeepGlobe data when the source domain and target domain differ. This indicates that model optimization with meta-learning may benefit tasks in the Earth sciences whose data show a high degree of diversity from region to region, while traditional gradient-based supervised learning remains suitable in the absence of a feature or label shift.

*equal contribution; †this work was conducted during a research stay at the Lobell Lab and supported by a fellowship within the IFI programme of the German Academic Exchange Service (DAAD).

## 1 Introduction

Figure 1: A principal component analysis (PCA) on VGG-16 [26] features of cropland images from different countries. Representations of the same class vary geographically; applying models trained on one geography to another would violate the assumption in traditional supervised learning that train and test distributions are equal. Model-agnostic meta-learning provides a framework for inductive transfer learning that adapts the model to a new region with few data samples.

A growing constellation of satellites, combined with cloud computing and deep learning, offers an objective and scalable way to monitor global issues from deforestation and wildfires to urban development and road flooding [15, 6, 4, 24]. For many of these prediction problems, the bottleneck to making accurate and timely predictions has shifted away from satellite imagery availability or data processing limits and toward a lack of ground truth labels [34, 31, 25]. At the same time, these tasks share characteristics in remotely sensed imagery, such as ground sampling distance, seasonality, and spectral characteristics, no matter where on Earth they are taken. This raises the question of whether prediction in label-scarce regions could be improved if each model were to benefit from knowledge contained in all the datasets, rather than solving the same prediction problem across different geographies or time slices with independent models trained on small disjoint datasets.

The concept of using knowledge gained while solving one problem to aid the solving of another is known in machine learning as transfer learning [17]. Transferring knowledge between tasks or domains is successful when the problems are different but related [29]. We argue that the diverse nature of representations on the Earth’s surface is a prime example of different-but-related tasks. We illustrate this in Fig. 1 using representations of cropland from four different countries.
Croplands across the world are distinct from each other, yet they share characteristics. Transfer learning allows models to both adapt to each distribution individually and share knowledge across regions: countries like Angola and Mali, for which smaller labeled datasets are available, could then benefit from larger labeled datasets from countries like Brazil and Poland. Thus far, transfer learning on remote sensing data has largely focused on fine-tuning pre-trained models and performing domain adaptation (Section 2). In this work, we explore meta-learning, in which models not only learn from data to perform tasks but _learn how to learn_ to perform tasks through experiencing tasks on a variety of datasets. In particular, we use model-agnostic meta-learning (MAML) for the problem of inductive transfer learning, where the generalization is induced by a few labeled examples in the target domain [17]. A schematic of MAML is shown in Fig. 2 and the algorithm is described in Section 3.2. Our main contributions are (1) demonstrating that remote sensing tasks across geographies can be restructured as one meta-learning problem and (2) evaluating MAML for few-shot classification and segmentation of multi-spectral and high-resolution remote sensing images; specifically, the well-cited benchmark datasets Sen12MS and DeepGlobe.

## 2 Related Work

Figure 2: The model-agnostic meta-learning (MAML) algorithm [8] finds initial weights $\theta$ from which a model can adapt to a new geographic region $\tau$ with few data samples.

Transfer learning can be divided into subcategories depending on the amount of labeled data available in the source and target domains. Our work is focused on the scenario in which ample labels exist in the source domain, but few exist in the target domain. We summarize the related remote sensing methodology accordingly. In such a setting, one common transfer learning technique is pre-training a neural network on ImageNet and fine-tuning [14] it on an application-specific dataset. For high-resolution remotely sensed imagery, applications include airplane detection [5], high-resolution land cover classification [28], and disaster mapping [9]. Xie et al. (2016) [32] extended this concept by swapping ImageNet for the proxy task of night-light prediction, which allowed them to estimate poverty in African regions with a limited number of labeled poverty data points. These approaches require a significant amount of problem design, such as the choice of proxy dataset or model and of which parameters to fine-tune, and thus usually focus on a limited number of hand-selected tasks.

A second class of methods using deep learning for label-scarce tasks in remote sensing has focused on developing novel network architectures or loss functions to make learning more label-efficient. So far, these methods have focused on optical [11], SAR [21], and hyperspectral image classification [12]. While they decrease the number of labels required for any optical, SAR, or hyperspectral task, these methods do not explicitly endeavor to transfer knowledge from a data-rich geography to a data-poor one. Non-deep-learning methods for domain adaptation were summarized by Tuia et al. (2016) [29] and include selecting invariant features, adapting data distributions, and adapting classifiers via semi-supervised learning.
For the most part, such methods generalize only across small regions rather than worldwide, while sometimes requiring a feature space in which inputs can be modeled as a mixture of Gaussians or some other predefined distribution. Lastly, meta-learning is beginning to be explored for remote sensing applications. Alajaji and Alhichri (2020) [1] describe preliminary results of MAML on few-shot UC Merced, OPTIMAL-31, and AID RS classification, though again not with a focus on cross-geography generalization.

## 3 Meta-learning

Meta-learning [22] considers a large number of related tasks $\tau\in\mathcal{T}=\{\tau^{(1)},\dots,\tau^{(N)}\}$ to arrive at a predictive function that can perform well on unseen tasks $\tau$ after seeing a few data samples. Even though meta-learning has been a topic in machine learning for decades [22, 3], it has recently gained popularity for few-shot problems [30, 27, 19] and has been re-introduced under a “model agnostic” framework [8] with rapid developments in the field [18, 16, 2].

Algorithm 1: Regular Gradient Descent
Input: $p(\mathcal{D})$: distribution over data points; $\alpha$: step size hyperparameter.
randomly initialize $\phi$;
repeat
    sample ${D}\sim p(\mathcal{D})$;
    evaluate $\boldsymbol{g}=\nabla\mathcal{L}(f_{\phi},{D})$;
    update parameters $\phi\leftarrow\phi-\alpha\boldsymbol{g}$;
until convergence;

Algorithm 2: Model-Agnostic Meta-Learning
Input: $p(\mathcal{T})$: distribution over tasks; $\alpha,\beta$: step size hyperparameters.
randomly initialize $\theta$;
repeat
    sample batch of tasks $\tau\sim p(\mathcal{T})$;
    foreach $\tau_{i}\in\tau$ do
        initialize $\phi_{i}$ with $\theta$;
        sample $\{{D}_{\text{support}},{D}_{\text{query}}\}\sim p(\tau_{i})$;
        evaluate $\boldsymbol{g}=\nabla_{\phi_{i}}\mathcal{L}_{\tau_{i}}(f_{\phi_{i}},{D}_{\text{support}})$;
        adapt parameters $\phi_{i}\leftarrow\phi_{i}-\alpha\boldsymbol{g}$;
        evaluate test loss $\mathcal{L}_{\tau_{i}}(f_{\phi_{i}},{D}_{\text{query}})$;
    end
    update $\theta\leftarrow\theta-\beta\sum_{\tau_{i}\sim p(\tau)}\nabla_{\theta}\mathcal{L}_{\tau_{i}}(f_{\phi_{i}},{D}_{\text{query}}^{\tau_{i}})$;
until convergence;

### 3.1 Terminology and Definitions

Meta-learning introduces a set of terms that may be new to some readers, so we clarify them in this section. A task $\tau$ consists of a support dataset ${D}_{\text{support}}$ to adjust the model parameters to the specific task and a query dataset ${D}_{\text{query}}$ to evaluate the performance. Each dataset consists of inputs $\{\boldsymbol{x}_{1},\boldsymbol{x}_{2},\ldots,\boldsymbol{x}_{m}\}$ and corresponding labels $\{y_{1},y_{2},\ldots,y_{m}\}$ from a data distribution. A $k$-shot, $n$-way classification task aims to distinguish between $n$ classes and is trained on $k$ examples per class. Each task is drawn from a distribution over tasks $\tau\sim p(\mathcal{T})$ to yield a set of tasks $\{\tau^{(1)},\tau^{(2)},\dots,\tau^{(N)}\}$. The meta-learner _learns how to learn_ by training and evaluating on the meta-training set. Meta-learning hyperparameters are tuned on the meta-validation set. The meta-test set measures generalization on new, unseen tasks.

### 3.2 Model-Agnostic Meta-Learning (MAML)

Neural network parameters $\phi$ are usually initialized randomly and optimized iteratively via gradient descent to perform well on a single dataset, as shown in Algorithm 1.
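The meta-update of Algorithm 2 can also be made concrete in code. The following is a minimal, hypothetical sketch (PyTorch-style Python; our illustration, not the implementation used in this paper). For brevity it uses the common first-order approximation, i.e. it does not differentiate through the inner update; full MAML, as discussed next, additionally requires second-order gradients, e.g. by passing create_graph=True to the inner gradient call and keeping the adapted parameters differentiable functions of $\theta$.

```python
import copy
import torch

def meta_update(model, task_batch, loss_fn, alpha=0.01, beta=0.001, inner_steps=1):
    """One outer-loop step over a batch of tasks (first-order approximation).

    task_batch yields ((x_support, y_support), (x_query, y_query)) pairs.
    """
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for (x_sup, y_sup), (x_qry, y_qry) in task_batch:
        learner = copy.deepcopy(model)             # phi_i initialized with theta
        for _ in range(inner_steps):               # inner loop: adapt on support
            loss = loss_fn(learner(x_sup), y_sup)
            grads = torch.autograd.grad(loss, list(learner.parameters()))
            with torch.no_grad():
                for p, g in zip(learner.parameters(), grads):
                    p -= alpha * g                 # phi_i <- phi_i - alpha * g
        qry_loss = loss_fn(learner(x_qry), y_qry)  # evaluate phi_i on the query set
        grads = torch.autograd.grad(qry_loss, list(learner.parameters()))
        for mg, g in zip(meta_grads, grads):
            mg += g                                # accumulate outer gradients
    with torch.no_grad():                          # outer loop: update theta
        for p, mg in zip(model.parameters(), meta_grads):
            p -= beta * mg
```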
Model-agnostic meta-learning (MAML) extends gradient descent by optimizing for a model initialization $\theta$ that leads to good performance on a set of related tasks $\{\tau^{(1)},\tau^{(2)},\ldots,\tau^{(N)}\}$. We contrast regular gradient descent with the MAML optimization algorithm in Algorithms 1 and 2. Meta-training is divided into an inner loop and an outer loop. In the inner loop, networks initialized with $\theta$ are updated for each task via $t$ steps of gradient descent on the $D_{\text{support}}$ of each task. This results in models with parameters $\phi_{i}$ adapted to each task $\tau^{(i)}$. The outer loop updates $\theta$ based on the performance of $\phi_{i}$ on the $D_{\text{query}}$ of the meta-training batch. In so doing, MAML requires second-order gradient calculations. The algorithm looks for a better $\theta$ until convergence, upon which the generalization error is computed on unseen meta-test tasks.

## 4 Datasets

Figure 3: The Sen12MS dataset [23] is a novel public remote sensing dataset of 125 globally distributed regions and four distinct seasons. In this work, we sample tasks (b) from the dataset that include samples from one region and season, aiming at adapting a deep learning model to one specific region. (a) The 125 regions of the Sen12MS dataset. The 25 meta-test regions were selected based on the hold-out set of the Data Fusion Contest 2020 [33]; the 75 meta-train and 25 meta-val regions were randomly partitioned. (b) Example of a Sen12MS 2-way 2-shot task from region 87 in the summer season. The number of ways determines the number of classes per task, while the number of shots determines the number of samples per class.

Figure 4: The DeepGlobe dataset contains high-resolution RGB satellite imagery with land cover labels segmented by humans. To repurpose DeepGlobe for meta-learning, we (a) split the images into meta-train, meta-val, and meta-test sets. Then (b) each image was split into 16 sub-images, 8 of which were placed in the support set and 8 in the query set. Under such a setup, (c) we trained models on the meta-train set to segment the queries after seeing $k$ shots from the support.

We evaluate model-agnostic meta-learning on two public remote sensing datasets that cover optical and radar data at medium and very high resolution.

### 4.1 Sentinel-1/2 Multi-Spectral (Sen12MS) Dataset

The _Sentinel-1/2 Multi-Spectral (Sen12MS)_ [23] dataset is a globally distributed satellite image classification and segmentation dataset. It contains 280,662 Sentinel-2 (optical) and Sentinel-1 (radar) tiles from 125 distinct regions at four different seasons. The optical and radar images were resampled to 10 m ground sampling distance and span 256×256 px in height and width. The original dataset uses tile overlaps of 50%. For this work, we removed the overlap to ensure independence of the support and query datasets, which yielded 200,306 tiles of 128×128 px. We show true-color examples and principal component embeddings of VGG-16 features for four distinct regions in Fig. 1. Each image tile is accompanied by a land cover label with a comparatively coarse resolution of 500 m from the MODIS land cover product MCD12Q1 V6, upsampled to 10 m. In this work, we use the Sen12MS dataset for classification and assign the most common pixel-level label to the image tile.
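As a side note on this labeling step, the majority vote over a dense label raster can be sketched in a few lines; the array shapes and class count below are illustrative, not tied to the actual MODIS product files.

```python
import numpy as np

def tile_label(label_raster: np.ndarray) -> int:
    """Return the most frequent pixel-level class ID in a tile."""
    classes, counts = np.unique(label_raster, return_counts=True)
    return int(classes[np.argmax(counts)])

labels = np.random.randint(0, 10, size=(128, 128))  # one 128x128 px label tile
print(tile_label(labels))
```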
We use the simplified label scheme of International Geosphere-Biosphere Programme (IGBP) categories [13] with 10 distinct classes, consistent with the IEEE Data Fusion Contest 2020 [33]. In Fig. 3(a), the 125 globally distributed regions are shown separated into meta-train, meta-validation, and meta-test sets. Each region contains between 196 and 850 tiles with a region-specific class distribution. We also show an overview of all tiles of region 131 (Marseille) from the summer season in true color with labels. The individual 128×128 px tiles are randomly assigned to the support or query partition of each region. The objective is to classify each tile with its most frequent label class. Figure 3(b) illustrates this with an example of a 2-shot 2-way task. In this case, the task-datasets $D_{\text{query}}^{\tau}$ and $D_{\text{support}}^{\tau}$ contain $k=2$ randomly chosen tile-label pairs of $n=2$ distinct classes chosen from the available classes in the region.

### 4.2 DeepGlobe Land Cover Segmentation Dataset

The DeepGlobe Challenge [7] was introduced at CVPR 2018 to advance state-of-the-art satellite image analysis. Here, we used the land cover segmentation data to explore the use of MAML on high-resolution satellite imagery. The DeepGlobe land cover segmentation dataset is comprised of very high resolution (0.5 m) DigitalGlobe Vivid+ images of dimension 2448×2448 px with three RGB channels. In total, there are 803 training images, each with human-annotated semantic segmentation labels covering seven land cover classes: urban, agriculture, rangeland, forest, water, barren, and unknown. For the competition, 171 validation images and 172 test images were also provided; however, since they do not have corresponding labels, we did not include them in the following experiments. Across the training images, the most common class is agriculture (58% of pixels), followed by forest (11%), urban (11%), rangeland (8%), barren (8%), water (3%), and unknown (0.05%).

We divided the DeepGlobe training set into three meta-datasets: a meta-training set on which to train MAML, a meta-validation set on which to tune MAML hyperparameters, and a meta-test set on which to evaluate generalization (Fig. 4a). Ideally, we would evaluate whether meta-learned models generalize better to new geographic regions. However, the DeepGlobe land cover dataset does not tag images with latitude and longitude. In the absence of geographic information, we split the images in two ways:

1. At random, i.e., the 803 images were sampled uniformly at random into a 500-image meta-train, a 150-image meta-val, and a 153-image meta-test set.
2. Using unsupervised clustering on features extracted from a pre-trained network. DeepGlobe images were fed into a VGG-16 network pre-trained on ImageNet, and for each image, a 4096-dimensional vector was extracted from the first layer in the classifier. We used $k$-means to assign the images into 6 clusters, and the 6 clusters were divided at random into the meta-train, meta-val, and meta-test sets. The resulting datasets contained 454, 166, and 183 images, respectively.
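A minimal sketch of this clustered split is given below. torchvision's VGG-16 and scikit-learn's KMeans are real APIs, but the image loading, batch size, and the 2-2-2 assignment of clusters to splits are illustrative assumptions.

```python
import numpy as np
import torch
from torchvision.models import vgg16
from sklearn.cluster import KMeans

model = vgg16(weights="IMAGENET1K_V1").eval()  # downloads ImageNet weights
# classifier[0] is the first fully connected layer -> 4096-d features
feature_head = torch.nn.Sequential(model.classifier[0])

@torch.no_grad()
def embed(batch: torch.Tensor) -> np.ndarray:
    """batch: (N, 3, 224, 224) ImageNet-normalized image tensors."""
    x = model.avgpool(model.features(batch)).flatten(1)
    return feature_head(x).numpy()

features = embed(torch.randn(32, 3, 224, 224))  # placeholder for real images
clusters = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(features)

rng = np.random.default_rng(0)
split_of_cluster = rng.permutation([0, 0, 1, 1, 2, 2])  # two clusters per split
meta_split = split_of_cluster[clusters]  # 0 = train, 1 = val, 2 = test per image
```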
Figure 6(a) visualizes the distributions of image features for the meta-train, meta-val, and meta-test sets under these two splitting methodologies. The results across the two splits will illuminate the settings under which MAML improves upon pre-training and training from scratch.

Each image was further divided into 16 sub-images, each of dimension 612×612 px (Fig. 4b). Eight sub-images were placed in the support set and 8 in the query set. At meta-train time, $k$ shots of 306×306 px tiles were sampled from the support set and $q$ queries were sampled from the query set. At meta-test time, the entire query set was fed into the model as 32 tiles to compute metrics (Fig. 4c). Put succinctly, our DeepGlobe experiments explore whether a model can learn to segment a large region (1.2×1.2 km) of high-resolution satellite imagery after seeing only a small labeled section (153×153 m) of it.

## 5 Models

Model-agnostic meta-learning is an optimization algorithm that uses gradient descent and can be employed for any neural network architecture. In this work, we chose two popular models for image classification and segmentation.

### 5.1 CNN Classification Model

Following other meta-learning approaches [8, 30], we used a straightforward CNN architecture for the Sen12MS classification objective. The network consisted of 7 stacked layers with 64 convolutional 3×3 px kernels, each followed by batch normalization [10], a ReLU activation function, and max-pooling of size 2. The input tensor $\boldsymbol{X}\in\mathbb{R}^{128\times 128\times 15}$ of joint Sentinel-2 and Sentinel-1 bands is projected to a 64-dimensional feature vector that maps to the output vector $\boldsymbol{y}\in\mathbb{R}^{10}$, with one entry per class.

### 5.2 U-Net Segmentation Model

For the DeepGlobe segmentation task, we employed the popular U-Net [20] architecture. It is a fully convolutional segmentation model with skip connections between encoder and decoder. We used four downsampling and upsampling layers so that the input tensor is projected to a hidden representation, which is then added to intermediate hidden states from the encoder (skip connections) while being upsampled and convolved to an output tensor in which each pixel represents one class label.

## 6 Experiments

We experimentally evaluated the classification and segmentation performance of deep learning models with the same architecture trained with regular gradient descent (pre-training, Algorithm 1) and with MAML (Algorithm 2).

### 6.1 Sen12MS Classification

Figure 5: Classification results on Sen12MS. Regular pre-training with gradient descent leads to good zero-shot performance, while models trained with the model-agnostic meta-learning algorithm clearly outperform regular pre-training and the randomly initialized baseline throughout all ten seen examples from an unseen region.

We assumed that data from the meta-train regions were readily available, but at most ten image-label pairs per class can be seen from the meta-test regions. This corresponds to a 4-way 10-shot classification scenario with four randomly selected classes from one region. It reflects the use case of interest in this work, where labeled data is available in some regions but not in others. We trained the classification models with MAML on 4-way 2-shot datasets from the meta-train regions.
We treated each sub-dataset $D_{\text{support}}$ and $D_{\text{query}}$ as a single batch of $N=k\cdot n=8$ samples.

Baselines. We compared the _MAML-trained_ model with a model that was pre-trained on all available data from the meta-train regions using regular gradient descent (Algorithm 1). We _pre-trained_ this model with the same 4-way 2-shot batches as MAML but used the combined support and query task-datasets for training. This resulted in a batch size of 16 image-label pairs. Finally, we also considered the scenario of having no additional data from the meta-train regions. Here, we initialized the model randomly without any prior training and trained on each task's support set from scratch; we refer to this baseline as the _random_ model.

Evaluation. With the three initial CNN model parameterizations, i.e., _MAML-trained_, _pre-trained_, and _random_, we evaluated the ability to adapt to new, unseen meta-test regions based on at most ten data samples. For this, we sampled 100 4-way 10-shot task-datasets from the meta-test regions. We report performance metrics on $D_{\text{query}}$ with all ten examples per class but fine-tuned the models on subsets of $D_{\text{support}}$. We also varied the number of seen samples from $D_{\text{support}}$ incrementally from zero-shot to ten-shot. Zero-shot represents no fine-tuning and shows the performance that can be obtained solely based on data from the meta-train regions. Training on batches of one-shot to ten-shot provides incrementally more data from the target region to the models. The meta-val regions were used to determine a suitable step size $\alpha\in\{0.001,0.0025,\dots,0.5,0.75,1\}$ and the number of gradient steps on the same data batch $n\in\{1,2,5,10,50,100\}$ for fine-tuning the pre-trained model. We evaluated these hyperparameters via grid search for each shot independently.

Classification Results. In Fig. 5, we report the accuracy scores for an increasing number of shots. The zero-shot case, without any adaptation to the particular meta-test region, shows that the regular pre-trained model performed best with 55% accuracy and a kappa score of 0.47. Without any adaptation on the target region, MAML predictions are low in accuracy, which highlights a distinct difference between meta-learning and pre-training. However, when a single data sample from the meta-test region is provided (1-shot), the MAML-trained model (74% accuracy, 0.68 kappa score) outperforms the pre-trained model (59% accuracy, 0.51 kappa score) by a large margin. The pre-trained model only shows a comparatively slight increase in accuracy (54% to 66%) throughout all seen examples, while the MAML-trained model reaches 80% accuracy and a 0.76 kappa score with all 10 shots.

### 6.2 DeepGlobe Land Cover Segmentation

Figure 6: Segmentation results on DeepGlobe, with two ways of splitting the images into meta-datasets. (a) The DeepGlobe images were split into meta-datasets (left) at random, or (right) in clusters based on a lower-dimensional representation. Tile representations extracted from a pre-trained VGG-16 are plotted along their first two principal components. The label distributions of each meta-dataset are shown below. (b) The effect of meta-train support size on segmentation results (mIoU) for (left) randomly split meta-datasets and (right) clustered split meta-datasets. Results are shown for 1 meta-test shot.
(c) The effect of the number of adaptation shots on segmentation results. Results are shown for a support size of 8.

Our second experiment demonstrates the use of MAML on the DeepGlobe land cover segmentation dataset. Each DeepGlobe image was considered its own task, and we trained a U-Net via MAML to segment the query set of an image after being shown $k$ shots from the support set. The experiments were designed to investigate the effect on generalization of (1) meta-training label quantity (number of support and query sub-images), (2) meta-test label quantity (number of shots), and (3) distributional shift between meta-train and meta-test sets (random split versus clustered split of meta-datasets). The number of labeled sub-images in the support and query sets was varied to be $m\in\{1,2,4,8\}$ and the number of shots used to adapt the U-Net was in the range $k\in\{1,2,3,4,5\}$. Hyperparameters, such as the number of epochs to meta-train MAML or train a model from scratch, were selected using performance on the meta-validation set.

Baselines. Similar to the Sen12MS evaluation, we compared MAML to two baselines: (1) a U-Net pre-trained on the meta-training set and fine-tuned on $k$ shots in each meta-test task, and (2) randomly initialized U-Nets trained independently from scratch on $k$ shots in each meta-test task. To make comparisons fair, we showed the pre-trained model the same amount of data as seen by MAML. If MAML was meta-trained on $m$ support tiles and $m$ query tiles and adapted using $k$ shots, the baseline U-Net was pre-trained on $2m$ tiles per image and fine-tuned on $k$ shots per meta-test tile, and the randomly initialized model was trained on $k$ shots. The U-Net architecture was shared among MAML and both baselines.

Evaluation. The performance of all models was evaluated on the query tiles of an unseen meta-test set of images. The location of the $k$ shots for each meta-test image was sampled at random from its support set and fixed across all models for direct comparison. The models were evaluated by means of pixel-wise accuracy and the mean intersection over union (mIoU) score across the meta-test queries. For elaboration on the formula used to compute mIoU, please refer to the DeepGlobe publication [7].

Figure 7: Example segmentation predictions by MAML, a pre-trained U-Net, and U-Nets trained from scratch. (a) Random meta-dataset splits. (b) Clustered meta-dataset splits.

Random Meta-Dataset Split Results. When the meta-datasets were randomly split, the pre-trained model performed better than MAML and the randomly initialized model. This was especially true at smaller meta-training set sizes (Fig. 6(b)). In other words, MAML requires a large set of meta-training tasks in order to perform well on new tasks. As the number of shots seen by the meta-learner increases, MAML catches up to the pre-trained model (Fig. 6(c)). In these experiments, we did not observe fine-tuning of the pre-trained model to improve its performance.

Figure 7(a) visualizes the predictions of MAML and the baselines on 1-shot learning for two images: one where MAML performs well and one where it fails. MAML appears heavily influenced by the choice of the 1 shot, while the pre-trained model is biased toward predicting agriculture (the most common class).
The model trained from scratch is even more heavily influenced by the choice of the 1 shot, as this is the only data it sees during training. The success of pre-training can be attributed to the complete overlap of the meta-train and meta-test distributions, seen in Fig. 6(a). In the setting where $p(X,y)$ is identical in the source and target domains, a model trained on the source domain transfers perfectly to the target domain. These results also expose MAML's weaknesses when the meta-train size is small: it is not able to retain information about land cover types as effectively as a straightforward supervised model.

Clustered Meta-Dataset Split Results. When the meta-datasets were split along clusters, the meta-train and meta-test distributions overlapped less (Fig. 6(a)) but could still be considered to arise from the same data-generating distribution. Whereas the meta-train set contains mostly agriculture pixels, the meta-test set contains predominantly forest. Figures 6(b) and 6(c) show, first and foremost, that this meta-test set is more difficult than the randomly split meta-test set for all three models. However, MAML is able to adapt to this distributional shift more successfully than the pre-trained model. Example segmentations shown in Fig. 7(b) reveal that MAML's flexibility in adaptation can again be both helpful and detrimental: helpful when the 1 shot is representative of the image, but detrimental when it is not. We see that the pre-trained model carries its bias toward agriculture into its meta-test set predictions, whereas MAML does not appear to retain a strong enough prior to recognize agriculture without being provided a shot containing that class.

### 6.3 Visualization of Model Adaptation

Figure 8: The adapted weights $\phi_{\tau}$ for task $\tau$ vary from region to region (a). The loss surface along the direction from the initial weights $\theta$ to $\phi_{\tau}$ (b) is more convex and allows larger gradient step sizes for model-agnostic meta-learning compared to regular pre-training. (a) Adaptation of the MAML-trained CNN model to episodes from different regions. (b) 1D loss surface on multiple query-task samples along the gradient of one support task.

In the introduction and Fig. 1, we showed the regional diversity of representations on the Earth's surface using PCA on pre-trained VGG-16 image features. In Section 3 and Fig. 2, we assumed that a neural network would achieve optimal performance with a different set of weights $\phi^{\ast}$ for each geographic region. In this experiment, we empirically confirmed this hypothesis with two evaluations on meta-test regions of the Sen12MS dataset. In Section 6.3.1, we visualize the adapted MAML weights for two distinct geographies. Then, in Section 6.3.2, we compare the loss surfaces of a MAML-trained and a pre-trained model along one adaptation trajectory. The two evaluations are meant to provide the reader with some intuition of what MAML is doing in different regions and how this differs from pre-training.

#### 6.3.1 Region-wise Adaptation

We studied the adaptation of the MAML model parameters $\theta$ trained on the 2-shot 4-way tasks of Section 6.1. We sampled 1000 1-shot 4-way classification task-datasets from the meta-test regions for the four most common classes (forests, grassland, savanna, urban) and split these into a support and query partition at a ratio of 4:1. For each training task, we evaluated the gradient and adapted the model using gradient descent with step size 0.75 to new task-specific parameters $\phi_{\tau}$.
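A minimal sketch of this per-task adaptation step, including flattening the adapted weights into a single vector for the PCA visualization of Fig. 8, is shown below; the linear model and task tensors are hypothetical stand-ins, while the step size 0.75 follows the text.

```python
import torch

def adapt_once(model, X_support, y_support, alpha=0.75):
    """One gradient-descent step on a task's support set; returns the
    adapted parameters as one flat vector, ready for PCA."""
    loss = torch.nn.functional.cross_entropy(model(X_support), y_support)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    phi = [p.detach() - alpha * g for p, g in zip(model.parameters(), grads)]
    return torch.cat([p.flatten() for p in phi])

model = torch.nn.Linear(64, 4)  # stand-in for the 4-way classification CNN
flat_phi = adapt_once(model, torch.randn(4, 64), torch.tensor([0, 1, 2, 3]))
# Stacking such vectors over many tasks and projecting them with PCA yields
# the region-wise embedding of adapted weights.
```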
We visualized this adaptation by flattening all model parameters to a 231,818-dimensional vector and used PCA to map the parameters to the first two principal components. We colored this embedding by region and drew lines from the initial weights $\theta$ to the adapted task-specific weights $\phi_{\tau}$ in Fig. 8. The adapted model weights differ characteristically between regions in embedding space, as can be seen in the examples of Poland and South Sudan. This empirically shows that a separate set of model parameters is optimal for these two different regions.

#### 6.3.2 Loss Surface along the Support Gradient

Next, we evaluated the query loss along one line from the initial parameters $\theta$ to the task-adapted parameters $\phi$ with respect to four query tasks, for both the MAML-trained model and the regularly pre-trained model. Here, we selected one support task and four query tasks from the same region and season. The gradient $\boldsymbol{g}$ was evaluated with respect to the support task, and different model weights $\phi_{\alpha}$ were obtained along the gradient by $\phi_{\alpha}=\theta+\alpha\boldsymbol{g}$ with different step sizes $\alpha_{\text{MAML}}\in[0,1]$, $\alpha_{\text{pre}}\in[0,0.15]$. Initially, we evaluated larger step sizes but chose to show these intervals, which are proportional to the optimal step sizes for the MAML and pre-trained models. We calculated the query loss using the model $f_{\phi_{\alpha}}$ for each of the four test tasks at different step sizes $\alpha$. This draws a one-dimensional slice of the loss surface along the gradient direction determined by the support task.

In Fig. 8(b), we show this loss surface for the MAML-trained model and the pre-trained classification model. Without adaptation, at $\alpha=0$, the MAML-trained model showed a high loss compared to the pre-trained model. This is consistent with the comparatively poor zero-shot results from Fig. 5. With increasing step size, however, we observe that the MAML loss decreased consistently, while the pre-trained loss remained at a similar level or increased for larger step sizes. The MAML-trained model achieved low loss in a large range of step sizes from 0.1 to 1 through all query tasks, while only a narrow range of step sizes between 0 and 0.05 led to better accuracies on some tasks from the pre-trained model initialization. In general, the loss surface of the MAML-trained model followed a convex curve for all of the test examples, while the loss surface from the pre-trained model initialization showed local minima. This experiment illustrates the difference between meta-learning and regular pre-training, which lead to very different model parameters. The loss surface of a meta-learned model was more convex in the gradient direction of a novel task. Overall, the MAML-trained model benefited, i.e., achieved lower test loss, when being adapted to samples of a new region regardless of the step size. For the pre-trained model, it would have been beneficial not to adapt to one specific region for query tasks 2 and 4.

## 7 Discussion and Conclusion

In this work, we evaluated the model-agnostic meta-learning (MAML) algorithm for few-shot problems in land cover classification to adapt deep learning models to individual regions with few data examples. Existing models use regular gradient descent to pre-train a model on a large body of data and use this pre-trained model as an initialization for datasets with fewer examples.
We compared these two approaches on land cover classification on the Sen12MS dataset of optical and radar images from globally distributed regions and on the DeepGlobe dataset of very high-resolution imagery from a few regions. The results on Sen12MS in Section 6.1 demonstrate that MAML optimization can outperform regular gradient descent and pre-training of models when the dataset includes a distinct regional diversity. The DeepGlobe results in Section 6.2 illustrate the advantage MAML offers when the source domain differs from the target domain in transfer learning, but they also highlight MAML's weaknesses in retaining prior knowledge and its under-performance in ideal (identical source and target domain) settings. In Section 6.3, we evaluated the loss surfaces for pre-trained and MAML-trained models and showed that the loss surface was more convex for MAML-trained models when adapting to new, unseen data.

We believe that the meta-learning framework can lead deep learning in Earth observation in a new direction: away from finding incrementally better model architectures for specific use-cases and toward unifying strategies that more closely reflect the reality on the Earth's surface. Much work remains to be done to improve MAML performance by retaining stronger priors on land cover classes, as well as to explore other meta-learning paradigms (e.g., prototypical networks).

## References

* [1] D. Alajaji and H. Alhichri. Few shot scene classification in remote sensing using meta-agnostic machine. In 2020 6th Conference on Data Science and Machine Learning Applications (CDMA), pages 77–80, 2020.
* [2] Antreas Antoniou, Harrison Edwards, and Amos Storkey. How to train your MAML. arXiv preprint arXiv:1810.09502, 2018.
* [3] Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche …, 1990.
* [4] Basudeb Bhatta. Analysis of urban growth and sprawl from remote sensing data. Springer Science & Business Media, 2010.
* [5] Zhong Chen, Ting Zhang, and Chao Ouyang. End-to-end airplane detection using transfer learning in remote sensing images. Remote Sensing, 10(1):139, 2018.
* [6] Emilio Chuvieco. Remote sensing of large wildfires: in the European Mediterranean Basin. Springer Science & Business Media, 2012.
* [7] Ilke Demir, Krzysztof Koperski, David Lindenbaum, Guan Pang, Jing Huang, Saikat Basu, Forest Hughes, Devis Tuia, and Ramesh Raskar. DeepGlobe 2018: A challenge to parse the earth through satellite images. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 172–17209. IEEE, 2018.
* [8] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning (ICML), pages 1126–1135, 2017.
* [9] Ananya Gupta, Elisabeth Welburn, Simon Watson, and Hujun Yin. Post disaster mapping with semantic change detection in satellite imagery. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019.
* [10] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
* [11] Neal Jean, Sherrie Wang, Anshul Samar, George Azzari, David Lobell, and Stefano Ermon. Tile2vec: Unsupervised representation learning for spatially distributed data. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3967–3974, 2019.
* [12] B. Liu, X. Yu, A. Yu, P. Zhang, G. Wan, and R. Wang. Deep few-shot learning for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 57(4):2290–2304, 2019.
* [13] Thomas R Loveland and AS Belward. The IGBP-DIS global 1km land cover data set, DISCover: first results. International Journal of Remote Sensing, 18(15):3289–3295, 1997.
* [14] Dimitrios Marmanis, Mihai Datcu, Thomas Esch, and Uwe Stilla. Deep learning earth observation classification using ImageNet pretrained networks. IEEE Geoscience and Remote Sensing Letters, 13(1):105–109, 2015.
* [15] Stephen D McCracken, Eduardo S Brondizio, Donald Nelson, Emilio F Moran, Andrea D Siqueira, and Carlos Rodriguez-Pedraza. Remote sensing and GIS at farm property level: Demography and deforestation in the Brazilian Amazon. Photogrammetric Engineering and Remote Sensing, 65:1311–1320, 1999.
* [16] Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999, 2018.
* [17] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2009.
* [18] Aravind Rajeswaran, Chelsea Finn, Sham M Kakade, and Sergey Levine. Meta-learning with implicit gradients. In Advances in Neural Information Processing Systems, pages 113–124, 2019.
* [19] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In International Conference on Learning Representations (ICLR), 2017.
* [20] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
* [21] Mohammad Rostami, Soheil Kolouri, Eric Eaton, and Kyungnam Kim. Deep transfer learning for few-shot SAR image classification. Remote Sensing, 11(11), 2019.
* [22] Jürgen Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-… hook. Diplomarbeit, Technische Universität München, München, 1987.
* [23] Michael Schmitt, Lloyd Haydn Hughes, Chunping Qiu, and Xiao Xiang Zhu. SEN12MS – A curated dataset of georeferenced multi-spectral Sentinel-1/2 imagery for deep learning and data fusion. arXiv preprint arXiv:1906.07789, 2019.
* [24] E Schnebele, N Waters, et al. Road assessment after flood events using non-authoritative data. Natural Hazards and Earth System Sciences, 14(4):1007, 2014.
* [25] Grant J Scott, Matthew R England, William A Starms, Richard A Marcum, and Curt H Davis. Training deep convolutional neural networks for land-cover classification of high-resolution imagery. IEEE Geoscience and Remote Sensing Letters, 14(4):549–553, 2017.
* [26] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
* [27] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077–4087, 2017.
* [28] Xin-Yi Tong, Gui-Song Xia, Qikai Lu, Huanfeng Shen, Shengyang Li, Shucheng You, and Liangpei Zhang. Land-cover classification with high-resolution remote sensing images using transferable deep models. Remote Sensing of Environment, 237:111322, 2020.
* [29] Devis Tuia, Claudio Persello, and Lorenzo Bruzzone. Domain adaptation for the classification of remote sensing data: An overview of recent advances. IEEE Geoscience and Remote Sensing Magazine, 4(2):41–57, 2016.
* [30] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630–3638, 2016.
* [31] Sherrie Wang, William Chen, Sang Michael Xie, George Azzari, and David B Lobell. Weakly supervised deep learning for segmentation of remote sensing imagery. Remote Sensing, 12(2):207, 2020.
* [32] Michael Xie, Neal Jean, Marshall Burke, David Lobell, and Stefano Ermon. Transfer learning from deep features for remote sensing and poverty mapping. In AAAI Conference on Artificial Intelligence, AAAI'16, pages 3929–3935. AAAI Press, 2016.
* [33] Naoto Yokoya, Pedram Ghamisi, Ronny Hänsch, and Michael Schmitt. 2020 IEEE GRSS Data Fusion Contest: Global land cover mapping with weak supervision. IEEE Geoscience and Remote Sensing Magazine, 2020, in press.
* [34] Xiao Xiang Zhu, Devis Tuia, Lichao Mou, Gui-Song Xia, Liangpei Zhang, Feng Xu, and Friedrich Fraundorfer. Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine, 5(4):8–36, 2017.
# Deep Learning for Visual Navigation of Underwater Robots

MD Sunbeam
Department of Aerospace Engineering, Texas A&M University, College Station, TX, United States
<EMAIL_ADDRESS>

###### Abstract

This paper briefly surveys deep learning methods for the visual navigation of underwater robots. Its scope includes visual perception for underwater robots with deep learning methods, the available visual underwater datasets, and imitation learning and reinforcement learning methods for navigation. Additionally, relevant works are categorized under the imitation learning or reinforcement learning paradigm for underwater robots to clarify the training methodologies in the current landscape. Literature that uses deep learning algorithms to process non-visual data for underwater navigation is not considered, except as contrasting examples.

###### Index Terms:

deep learning, underwater, imitation learning, AUV, visual navigation, reinforcement learning

## I Introduction

This paper covers deep learning for underwater robotics. It is divided into the perception of underwater robots, underwater visual datasets, imitation learning, and reinforcement learning for underwater environments.

Visual navigation in underwater robotics is more challenging than its land or air counterparts because of the limitations of perception, particularly computer vision. When light travels through water, it is absorbed and scattered, resulting in a wavelength-dependent attenuation that disturbs the standard ways of handling vision [1]. An example of this problem is that images of the same object show different brightness and colors depending on distance. Additionally, environmental changes are more subtle underwater: features may change from drifting sand and turbulent currents. Detecting and modeling these changes is difficult, especially because of their stochastic nature and the complex fluid equations that govern the dynamics. The hydrodynamics involved is nonlinear and time-varying, and the robot is usually underactuated, with not all states reachable [2]. Deep learning methods for visual navigation are explored because they offer a data-driven way to handle these challenges.

Because autonomous underwater vehicles (AUVs) are less common due to greater hardware considerations, underwater datasets are rare and more difficult to collect. Since deep learning is a data-driven method, it is imperative to have public datasets against which to benchmark models.

The two main deep learning paradigms for controlling an AUV through visual navigation are imitation learning and reinforcement learning. Imitation learning is used in underwater robotics for visual navigation, often mapping RGB images to control commands such as roll, pitch, yaw, and throttle. The problem formulation is a supervised learning task where the objective is to learn a policy that imitates the trajectories of a human or robot demonstrator. Two major limitations of imitation learning algorithms are the compounding error between the expert and agent trajectories, and the fact that a policy derived from such a method will generally not outperform the expert demonstrator [4]. Reinforcement learning is the other deep learning paradigm, using a non-differentiable objective function expressed through rewards to train neural networks for a desired task, in this case underwater visual navigation.
Reinforcement learning differs from imitation learning in that an initial dataset is not needed; instead, an environment must be explored to generate the data to train on. Though reinforcement learning is less sample-efficient than imitation learning, the lack of dependence on an expert demonstrator allows it to yield policies that are potentially superhuman [15].

The paper's contributions are as follows:

* • Identify the works that focus on utilizing deep learning for perception in underwater visual navigation algorithms.
* • Discuss the publicly available underwater datasets.
* • Categorize the deep learning methods used in AUVs under imitation learning or reinforcement learning.

## II Perception

Due to the challenges of visual perception in underwater robotics, [10] offers a deep learning method to improve loop-closure detection for a visual graph SLAM algorithm for underwater navigation. Siamese networks, a neural network architecture that has enjoyed great success in similarity learning, were leveraged to detect similar visual features and places. Another work on underwater loop detection for use in a visual SLAM algorithm is [11], but it sets itself apart from [10] by training a network through an unsupervised learning method. Whereas Siamese networks are trained in a supervised manner on image pairs through a contrastive loss, the unsupervised learning method detects similar images and loop closures through the clustering of image descriptors.

Figure 1: Redrawn network architecture diagram of the Siamese network used to detect loop closure between two underwater images in [10].

Finally, [9] also deals with loop closure, but the role of the neural network is more limited in scope: the neural network only selects loop candidates, which are sent to an image matcher and a geometric consistency verifier to output the final detected loops.

## III Underwater Datasets

There are far fewer underwater robot datasets compared to autonomous ground vehicle or autonomous aerial vehicle datasets, since collecting data underwater remains difficult due to hardware constraints and the need for a relatively uncommon environment. For deep learning methods, having a wide variety of datasets is paramount because it allows the research community to tackle the same problems, often on a vetted or curated dataset. In computer vision, deep learning innovation was primarily accelerated by the existence of ImageNet [19]. In natural language processing, the large corpus of textual data on the internet has fueled the advances of large language models. For robotics, ground vehicle datasets like CARLA [20] or KITTI [21] and aerial vehicle datasets like Mid-Air [22] have paved the way for the development of many deep learning based visual navigation algorithms.

As such, [6] offers a visual dataset for AUVs navigating near the seabed. The images were collected at three different depths: the first at a few meters, the second at 270 meters, and the last at 380 meters. The purpose of this data is to provide image and inertial-pressure samples for use in simultaneous localization and mapping algorithms.

A common theme is leveraging generative adversarial networks and other generative models to augment real underwater datasets with synthetic images. This is done in [7], [17], and [18]. A publicly available dataset, called EUVP, of images taken during ocean experiments for exploration and human-robot collaboration is also presented in [18].
Figure 2: Sample training images selected from the EUVP dataset [18].

## IV Imitation Learning

For [3], a domain-expert diver collected data to generate ”good” and ”bad” navigation scenarios, which were later annotated with yaw and pitch labels to accomplish the task of exploring a coral reef while avoiding obstacles. A convolutional neural network was trained to map the images to the control commands. The models were evaluated through the percentage of coral covered during the navigation task in a given reef area.

Similar to [3], [4] also uses a behavior cloning model to map RGB images to yaw and pitch commands. In this case, the task was to explore a shipwreck, and a convolutional neural network was used. For this work, the neural network was trained on a mixture of simulation and real-world data. The model was evaluated through a training-validation-test split, with separate test accuracies given for the real-world-only dataset, the simulation-only dataset, and the mixed dataset. The primary differences between the two papers were the task and the nature of the demonstration data.

One work that differs is [5], which uses goal-conditioned imitation learning for underwater visual navigation. The behavior cloning model learns safe, reactive behavior for difficult terrain and is conditioned to navigate based on waypoints. The neural network architecture is a convolutional neural network, much like those in [3] and [4]. Similarly, the model was evaluated on a test set and qualitatively in real life.

## V Reinforcement Learning

In [8], a soft actor-critic deep reinforcement learning algorithm was used to train a neural network. The AUV was a soft robot, and the task was to swim in a straight line in disturbed water. A camera was used to collect RGB images, which served as the observation space for the neural network. Before the deep reinforcement learning algorithm was rolled out underwater, a model was trained in a MuJoCo simulation.

Figure 3: Redrawn system diagram for the deep reinforcement learning controller in [8].

For [15], a combination of imitation learning and reinforcement learning is used. First, a generative adversarial imitation learning algorithm is used to overcome the cold-start problem of training the initial neural network to learn a policy. Then, a reward function is designed and the policy is trained with proximal policy optimization and soft actor-critic. The results are compared in a Unity simulation. This work uses a light sensor, which differs from the other imitation learning and reinforcement learning works, as most use an RGB camera as their visual sensor.

## VI Imitation and Reinforcement Learning Categorization

Only the works where deep learning methods were used for visual navigation are considered and categorized. Works like [12], [14], and [16] use deep or reinforcement learning methodologies but do not incorporate a visual sensor. While there is much more work on applying neural networks to inertial, pressure, and position data from non-visual sensors, that is outside the scope of this survey. Furthermore, the specific neural network architectures used in underwater robot navigation will not be discussed in depth as in [13], but they are useful for understanding their applications in perception and control.
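As context for the categorization table that follows, the supervised setup shared by the imitation learning works above can be sketched generically: a small CNN maps RGB frames to continuous yaw/pitch commands and is fit to expert demonstrations. All names, shapes, and the loss below are illustrative assumptions, not taken from the cited implementations.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # outputs: [yaw, pitch]
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def bc_step(frames, commands):
    """frames: (N, 3, H, W) demo images; commands: (N, 2) expert yaw/pitch."""
    loss = nn.functional.mse_loss(policy(frames), commands)  # imitate expert
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

bc_step(torch.randn(8, 3, 64, 64), torch.randn(8, 2))
```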
Deep Learning Paradigm Categorization

| Reference | Imitation Learning | Reinforcement Learning |
| --- | --- | --- |
| [3] | ✓ | ✗ |
| [4] | ✓ | ✗ |
| [5] | ✓ | ✗ |
| [8] | ✗ | ✓ |
| [15] | ✓ | ✓ |

There are far more works using imitation learning algorithms than reinforcement learning. This can be explained by the fact that the simplest class of imitation learning algorithms, behavior cloning, is good at dealing with high-dimensional inputs, because convolutional layers can be used as feature extractors to reduce the dimensionality of the images. The behavior cloning problem formulation is also simpler: the objective is to imitate an expert trajectory in a supervised learning setup. The lack of many deep reinforcement learning works for the visual navigation of AUVs can be explained by the added difficulty of devising a proper reward function on top of dealing with a noisy observation space. Moreover, the reinforcement learning agent must explore before it can exploit an adequate policy, which introduces the overhead of creating or using a simulation for the underwater robot, as real-life exploration can be expensive and dangerous.

## VII Conclusion

In this paper, we have covered the deep learning methods used to improve perception underwater, which feed into the visual navigation algorithms that use those modules. Most of the work centers on detecting loop closures through similarity learning of images with Siamese networks or using other neural network architectures to detect and select loop closure candidates, as in [9], [10], and [11]. The improvement of loop closure in visual perception through deep learning improves the visual SLAM algorithms that leverage it for navigation.

Next, we discussed the publicly available underwater datasets, like the ones in [6] and [7]. While there are some standard public datasets, there are far too few, and even fewer for robotics applications. This can be attributed to the difficulty of creating such datasets due to hardware and environmental constraints. A common trend is to leverage deep learning methods to augment the available datasets, either by improving the quality of the images or by generating synthetic images, as in [17] and [18]. However, this type of augmentation seems more a stopgap than a permanent solution to the scarcity of underwater robotics data.

Finally, we categorized the works that use deep learning methods to control an AUV under the imitation learning (as in [3], [4], and [5]) or reinforcement learning (as in [8] and [15]) paradigm. A pattern that becomes immediately obvious is that there exist far more imitation learning works than deep reinforcement learning works in the domain of visual navigation for underwater robotics. Another trend is that there are not many works in which visual sensors are used in the context of imitation or reinforcement learning. This can perhaps be explained by the hardware constraints of AUVs, where limited onboard computation and battery capacity may make RGB cameras too energy-intensive.

From this survey, some gaps in the field of deep learning for visual navigation of underwater robots become apparent. More underwater datasets need to be collected and made publicly available, especially for robotics applications. Even though deep learning methods are used to augment existing underwater datasets, the best way to accelerate the pace and scalability of neural networks is through more data.
As in computer vision and natural language processing, curated datasets provide a standard for research and competition, both of which push innovation within the field. Another gap is the lack of reinforcement learning work for visual navigation. The preference for imitation learning is reasonable given the harder formulation of the reinforcement learning problem, but focus in this direction may address the limitations of imitation learning, such as compounding errors and the learned policy never being better than the demonstrator. Because simulation is an important component of the exploration stage of reinforcement learning, attention in this area may lead to better simulators that can address the lack of underwater datasets through synthetic data.

Overall, the field of deep learning for visual navigation of underwater robots provides natural challenges crucial for the larger study of AI robotics. One such challenge is the fundamental problem of acting under noisy or misleading visual perception data, which offers opportunities for planning and navigation under uncertainty. Moreover, breakthroughs in the domain of deep learning for underwater visual navigation may be applied to the broader field of learning from sparse environments. Underwater environments are often characterized by sparsely distributed features, and in imitation learning and deep reinforcement learning, environments with few features may make it difficult for a neural network to learn a useful policy for the desired task. For these reasons, it is crucial to place more research emphasis on the problem of visual navigation in underwater environments.

## References

* [1] K. Köser and U. Frese, “Challenges in underwater visual navigation and SLAM,” AI Technology for Underwater Robots, pp. 125–135, 2019.
* [2] L. Christensen, J. de Gea Fernández, M. Hildebrandt, C. E. Koch, and B. Wehbe, “Recent advances in AI for navigation and control of underwater robots,” Current Robotics Reports, vol. 3, no. 4, pp. 165–175, 2022.
* [3] T. Manderson, J. C. Higuera, R. Cheng, and G. Dudek, “Vision-based autonomous underwater swimming in dense coral for combined collision avoidance and target selection,” 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018.
* [4] N. Karapetyan, J. V. Johnson, and I. Rekleitis, “Human diver-inspired visual navigation: Towards coverage path planning of shipwrecks,” Marine Technology Society Journal, vol. 55, no. 4, pp. 24–32, 2021.
* [5] T. Manderson, J. Camilo Gamboa Higuera, S. Wapnick, J.-F. Tremblay, F. Shkurti, D. Meger, and G. Dudek, “Vision-based goal-conditioned policies for underwater navigation in the presence of obstacles,” Robotics: Science and Systems XVI, 2020.
* [6] M. Ferrera, V. Creuze, J. Moras, and P. Trouvé-Peloux, “AQUALOC: An underwater dataset for visual–inertial–pressure localization,” The International Journal of Robotics Research, vol. 38, no. 14, pp. 1549–1559, 2019.
* [7] I. Polymenis, M. Haroutunian, R. Norman, and D. Trodden, “Virtual underwater datasets for autonomous inspections,” Journal of Marine Science and Engineering, vol. 10, no. 9, p. 1289, 2022.
* [8] G. Li, J. Shintake, and M. Hayashibe, “Deep reinforcement learning framework for underwater locomotion of soft robot,” 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021.
* [9] A. Burguera, F. Bonin-Font, E. G. Font, and A. M. Torres, “Combining deep learning and robust estimation for outlier-resilient underwater visual graph SLAM,” Journal of Marine Science and Engineering, vol. 10, no. 4, p. 511, 2022.
* [10] A. Burguera, “Robust underwater visual graph SLAM using a Siamese neural network and robust image matching,” Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2022.
* [11] A. Burguera and F. Bonin-Font, “An unsupervised neural network for loop detection in underwater visual SLAM,” Journal of Intelligent & Robotic Systems, vol. 100, no. 3-4, pp. 1157–1177, 2020.
* [12] X. Mu, B. He, X. Zhang, Y. Song, Y. Shen, and C. Feng, “End-to-end navigation for autonomous underwater vehicle with hybrid recurrent neural networks,” Ocean Engineering, vol. 194, p. 106602, 2019.
* [13] J. Qin, M. Li, D. Li, J. Zhong, and K. Yang, “A survey on visual navigation and positioning for autonomous UUVs,” Remote Sensing, vol. 14, no. 15, p. 3794, 2022.
* [14] N. Wang, Y. Wang, Y. Zhao, Y. Wang, and Z. Li, “Sim-to-real: Mapless navigation for USVs using deep reinforcement learning,” Journal of Marine Science and Engineering, vol. 10, no. 7, p. 895, 2022.
* [15] Y. Mao, F. Gao, Q. Zhang, and Z. Yang, “An AUV target-tracking method combining imitation learning and deep reinforcement learning,” Journal of Marine Science and Engineering, vol. 10, no. 3, p. 383, 2022.
* [16] I. B. Saksvik, A. Alcocer, and V. Hassani, “A deep learning approach to dead-reckoning navigation for autonomous underwater vehicles with limited sensor payloads,” OCEANS 2021: San Diego – Porto, 2021.
* [17] C. Fabbri, M. J. Islam, and J. Sattar, “Enhancing underwater imagery using generative adversarial networks,” 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018.
* [18] M. J. Islam, Y. Xia, and J. Sattar, “Fast underwater image enhancement for improved visual perception,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 3227–3234, 2020.
* [19] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 2009, pp. 248–255, doi: 10.1109/CVPR.2009.5206848.
* [20] A. Dosovitskiy, G. Ros, F. Codevilla, A. M. López, and V. Koltun, “CARLA: An open urban driving simulator,” CoRL, pp. 1–16, PMLR, 2017.
* [21] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The KITTI dataset,” The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013, doi: 10.1177/0278364913491297.
* [22] M. Fonder and M. Van Droogenbroeck, “Mid-Air: A multi-modal dataset for extremely low altitude drone flights,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 2019, pp. 553–562, doi: 10.1109/CVPRW.2019.00081.
# Two-Photon Interference from Silicon-Vacancy Centers in Remote Nanodiamonds

R. Waltrich, Institute for Quantum Optics, Ulm University, 89081 Ulm, Germany
M. Klotz, Institute for Quantum Optics, Ulm University, 89081 Ulm, Germany
V. N. Agafonov, GREMAN, UMR 7347 CNRS, INSA-CVL, Tours University, 37200 Tours, France
A. Kubanek, Institute for Quantum Optics, Ulm University, 89081 Ulm, Germany

###### Abstract

The generation of indistinguishable photons is a key requirement for solid-state quantum emitters as a viable source for applications in quantum technologies. Restricting the dimensions of the solid-state host to a size well below the wavelength of the light emitted by a defect center enables efficient external optical coupling, for example for hybrid integration into photonic devices. However, stringent restrictions on the host dimensions result in severe limitations on the spectral properties, reducing the indistinguishability of emitted photons. Here, we demonstrate two-photon interference from two negatively charged Silicon-Vacancy centers located in remote nanodiamonds. The Hong-Ou-Mandel interference efficiency reaches 61% with a coalescence time window of 0.35 ns. We furthermore show a high yield of pairs of Silicon-Vacancy centers with indistinguishable optical transitions. Therefore, our work opens new paths in hybrid quantum technology based on indistinguishable single-photon emitters in nanodiamonds.

In the emerging field of quantum technologies, the distribution of entanglement is a key ingredient, for example to establish long-distance quantum state transfer and quantum networks [1]. One possible source of distributed entanglement generation is two-photon interference (TPI), commonly known as Hong-Ou-Mandel (HOM) interference [2]. A prerequisite is single-photon sources that produce indistinguishable photons. Two-photon interference has been demonstrated with various sources of single photons, for example atomic vapors [3], quantum dots [4], molecules [5], coupled atom-cavity systems [6], and negatively charged Nitrogen-Vacancy (NV-) centers in bulk diamond [7, 8].

Group IV color centers in diamond, and in particular the negatively charged Silicon-Vacancy center (SiV-), are of great interest for the generation of indistinguishable photons due to the intrinsic spectral stability and narrow inhomogeneous line distribution shown in bulk diamond [9, 10]. In recent years, there has been an increasing effort to restrict the dimensions of the diamond host to a size well below the optical wavelength. Such nanodiamonds (NDs) give rise to a large variety of new applications in the realm of hybrid quantum systems. Individually optimized [11, 12] and large-scale [13, 14] hybrid quantum photonic circuits [15, 16, 17] are constructed by means of nanomanipulation [18] and advanced fabrication techniques, respectively. Initially, the spectral properties of color centers in NDs were inferior to those in bulk diamond, blocking further use in quantum optics applications. This deficiency was resolved over the past years by improved ND production, sample preparation, and control techniques [19].

In this letter, we demonstrate two-photon interference from SiV- in remote NDs. Together with the recently demonstrated access to the electron spin [20], this work marks a further step towards applied hybrid quantum technology, such as the realization of scalable quantum networks, based on SiV- in NDs.

Figure 1: a) Molecular structure of the SiV-.
A silicon atom (Si) is located between two adjacent carbon vacancies (V) along the [111] axis of the diamond crystal structure. b) Electronic level structure of the SiV- with four optical transitions A, B, C, D arising from the ground- and excited-state doublets ($\Delta_{GS}$, $\Delta_{ES}$) due to spin-orbit interaction. c) Schematic of the two-photon interference experiment. Two identical photons from two separate SiV- enter a beamsplitter from two different input ports. Constructive interference leads to maximal coalescence at the output ports.

The SiV- is a point defect in the lattice of a diamond crystal where a silicon atom is located between two adjacent carbon vacancies, as depicted in FIG. 1 a). At cryogenic temperatures, four optically active transitions, resulting from spin-orbit coupling, can be observed. We refer to them as transitions A, B, C, D, as depicted in FIG. 1 b). To show two-photon interference, we excited two SiV- in two remote NDs, separated by approximately 95 $\mu$m, off-resonantly and spectrally filtered the dominant transition C. The photons from both SiV- interfered on a 50:50 beamsplitter, as schematically shown in FIG. 1 c). In the case of two identical photons entering the beamsplitter from two different input ports, the probability amplitudes for leaving at the same port interfere constructively, while the ones for leaving at different output ports interfere destructively. A second-order correlation measurement will therefore result in antibunching with vanishing correlations at zero time delay, $g^{(2)}(\tau=0)=0$. In contrast, $g^{(2)}(\tau=0)=0.5$ is indicative of interference of two single but distinguishable photons.

Figure 2: a) PL spectra of SiV 1 and SiV 2 (blue and red). The measured wavelength of transition C is identical for both color centers. They differ only in their ground-state splitting (64 GHz and 52 GHz) and local temperature (7.7 K and 6.2 K). b) PL spectrum of SiV 1 after filtering with the etalons, leaving only transition C visible. c) Normalized PLE scans of SiV 1 and SiV 2 (red, blue) with measured respective linewidths of ($158\pm 5$) MHz and ($177\pm 4$) MHz and a detuning of ($83\pm 6$) MHz. The plot is centered around their common center at 406.827829 THz.

The SiV-s used for the experiment are located inside NDs with an average diameter of around 30 nm. They were coated onto a diamond substrate to ensure good thermal conductivity. The sample was investigated by off-resonant photoluminescence (PL) and resonant photoluminescence-excitation (PLE) measurements, showing predominantly single SiV- and a spectral distribution of $\approx$ 14 GHz for transition C (see Appendix B). To find a matching SiV- pair, we fixed the frequency of the scanning laser to the resonance of transition C of one SiV- and scanned the sample laterally. This way, only SiV-s with spectral overlap were visible. The spectra of the two SiV- chosen for the HOM measurement are shown in FIG. 2 a). Their ground-state splitting (GSS) differs by 12 GHz, but transition C shows a good overlap. The fact that transition C overlaps although the GSS of both emitters differs can be explained by different combinations of axial and transverse strain of the host crystal [21]. After filtering transition C (see Appendix A for details), only a single line was observed for each SiV-, as exemplarily shown for SiV 1 in FIG. 2 b).
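Linewidths and detunings like those reported next are commonly extracted by fitting Lorentzian profiles to the PLE scans; a minimal sketch of such a fit is given below (scipy is a real library, but the scan data here is synthetic, not the measured data of FIG. 2 c)).

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, fwhm, amp, offset):
    return amp * (fwhm / 2) ** 2 / ((f - f0) ** 2 + (fwhm / 2) ** 2) + offset

f = np.linspace(-1.0, 1.0, 400)              # detuning axis in GHz
scan = lorentzian(f, 0.04, 0.16, 1.0, 0.02)  # ~160 MHz wide synthetic line
scan += np.random.normal(0, 0.02, f.size)    # measurement noise

popt, _ = curve_fit(lorentzian, f, scan, p0=(0, 0.2, 1, 0))
f0, fwhm = popt[0], abs(popt[1])
print(f"center {f0 * 1e3:.0f} MHz, FWHM {fwhm * 1e3:.0f} MHz")
# The detuning of two emitters is the difference of their fitted centers.
```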
PLE measurements of the C transitions for both SiV-s revealed linewidths of ($158\pm 5$) MHz and ($177\pm 4$) MHz with a detuning $\frac{\Delta}{2\pi}$ of ($83\pm 6$) MHz, as shown in FIG. 2 c). Single-photon emission from both SiV-s was confirmed by off-resonant second-order correlation measurements which, after a normalization, resulted in $g^{(2)}_{1}(0)=0.33\pm 0.07$ and $g^{(2)}_{2}(0)=0.35\pm 0.08$ as depicted in FIG. 3 a) and b). We use the notation $g^{(2)}_{i}(\tau)$ for correlation functions including background noise, while $\mathfrak{g}^{(2)}(\tau)$ is used for the modeled correlation function without background. The measured data was fitted with $g_{i}^{(2)}(t)=1+\frac{S_{i}^{2}}{I_{i}^{2}}\mathfrak{g}_{i}^{(2)}(t)\ ,$ (1) where $S_{i}$ is the signal from emitter $i$, $I_{i}=S_{i}+B_{i}$ is the total signal including background counts $B_{i}$, and $\mathfrak{g}_{i}^{(2)}(t)=1-(1+a)\cdot\exp(-|t|/\tau_{1})+a\cdot\exp(-|t|/\tau_{2})$ is a three-level model of the correlation function [22]. From the fit we determined the signal-to-noise ratio $S_{i}/B_{i}\approx 4$ for each individual SiV-, which sets a lower bound for the expected HOM dip, illustrated by the gray area in FIG. 3 c). To measure the two-photon interference we then off-resonantly excited both SiV-s independently and let the emitted photons interfere on a 50:50 beamsplitter after the polarization of the photons was matched by half-wave plates. The resulting correlation function is shown in FIG. 3 c), where red dots show data for parallel polarization and blue triangles show data for perpendicular polarization. The data was fitted with [5] $g_{\mathrm{HOM}}^{(2)}(\tau)=c^{2}_{1}g^{(2)}_{1}(0)+c^{2}_{2}g^{(2)}_{2}(0)+2c_{1}c_{2}\left(1-\eta\frac{\langle S_{1}\rangle\langle S_{2}\rangle}{\langle I_{1}\rangle\langle I_{2}\rangle}|g^{(1)}_{1}(\tau)||g^{(1)}_{2}(\tau)|\cos(\Delta\tau)\right),$ (2) where $c_{i}=I_{i}/(I_{1}+I_{2})$ and $g^{(1)}_{i}(\tau)=\exp(-\gamma_{i}|\tau|/2)$. The signal and noise for each emitter were fixed with the previously determined signal-to-noise ratio of the individual correlation measurements of both SiV-. The variable $\eta$ in front of the interference term can be interpreted as an efficiency coefficient, where a value of 0 means no two-photon interference and 1 means maximum interference. We determined a value of $\eta=0.61\pm 0.16$ for the case of parallel polarization. Figure 3: a) Normalized correlation function of SiV 1 with $g_{1}^{(2)}(0)=0.33\pm 0.07$ and $\tau_{1}=(1.84\pm 0.29)$ ns. b) Normalized correlation function of SiV 2 with $g_{2}^{(2)}(0)=0.35\pm 0.08$ and $\tau_{2}=(2.02\pm 0.35)$ ns. c) Two-photon interference between SiV 1 and SiV 2 with $\eta=0.61\pm 0.16$. Parallel polarization, corresponding to indistinguishable photons, is shown with red data points. Orthogonal polarization, corresponding to distinguishable photons, is depicted by blue triangles. The visibility $V_{\mathrm{HOM}}$ of the interference is shown in green. As an additional figure-of-merit we calculated the coalescence time window (CTW) as described in [23]. It gives a time window for which coalescence can occur on the beam-splitter. By integrating over the visibility function $V_{\mathrm{HOM}}=1-g_{\parallel}^{(2)}(\tau)/g_{\perp}^{(2)}(\tau)$ (3) we find $\mathrm{CTW}=\int V_{\mathrm{HOM}}\,\mathrm{d}\tau=0.35\ \mathrm{ns}$ (4) which is below the limit of $2T_{1}$ expected for two ideal emitters with absolutely indistinguishable photons. Here $T_{1}\approx 1.9$ ns is the excited state lifetime. The result is limited by dephasing of the emitters and background noise.
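As an illustration, the fitted models (1)–(2) and the CTW integral (3)–(4) can be evaluated numerically. The sketch below is our own, not part of the measurement pipeline: it assumes the fit parameters quoted in the text, balanced emission ($c_{1}=c_{2}=0.5$), and lifetime-limited coherence ($\gamma_{i}\approx 1/\tau_{i}$); since dephasing is neglected, it overestimates the measured CTW of 0.35 ns.

```python
import numpy as np

# Fit parameters quoted in the text (assumed here for illustration)
tau_1, tau_2 = 1.84e-9, 2.02e-9   # excited-state lifetimes of SiV 1 and SiV 2 (s)
g0_1, g0_2 = 0.33, 0.35           # g2(0) values including background
eta = 0.61                        # interference efficiency
delta = 2 * np.pi * 83e6          # detuning Delta (rad/s)
snr = 4.0                         # S_i / B_i from the individual fits
s_over_i = snr / (snr + 1.0)      # S_i / I_i, with I_i = S_i + B_i
c1 = c2 = 0.5                     # balanced emission assumed

def g1(tau, lifetime):
    # |g^(1)(tau)| = exp(-gamma*|tau|/2), assuming gamma ~ 1/lifetime (no dephasing)
    return np.exp(-np.abs(tau) / (2 * lifetime))

def g2_hom(tau, parallel=True):
    # Eq. (2); orthogonal polarization corresponds to eta = 0
    e = eta if parallel else 0.0
    interf = e * s_over_i**2 * g1(tau, tau_1) * g1(tau, tau_2) * np.cos(delta * tau)
    return c1**2 * g0_1 + c2**2 * g0_2 + 2 * c1 * c2 * (1 - interf)

tau = np.linspace(-20e-9, 20e-9, 4001)
v_hom = 1 - g2_hom(tau, True) / g2_hom(tau, False)  # visibility, Eq. (3)
ctw = np.trapz(v_hom, tau)                          # CTW integral, Eq. (4)
print(f"CTW estimate: {ctw * 1e9:.2f} ns")
```

Under these idealized assumptions the integral comes out larger than the measured 0.35 ns, consistent with the dephasing and background limitations discussed next.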
We demonstrated the generation of indistinguishable single photons from SiV- in two remote NDs with an efficiency of $\eta=0.61$; the extracted CTW yielded 0.35 ns. The interference visibility is limited by technical imperfections such as polarization drifts or detector timing response, which can be improved in future experiments. Minor spectral diffusion during the measurement can also diminish the visibility. Our results establish SiV- in NDs as a viable source for the generation of indistinguishable photons and open new possibilities for the integration into hybrid quantum photonics. The incorporation into photonic structures boosts the operation bandwidth and paves the way to establish remote entanglement of distant quantum nodes in an integrated fashion. Furthermore, NDs much smaller than the wavelength of light, which contain individual quantum emitters with spectrally indistinguishable transitions, offer new possibilities for the construction of cooperative quantum materials. Spatial indistinguishability can be achieved by positioning the NDs in a collective mode within a volume small compared to the third power of the radiation wavelength, $V\ll\lambda^{3}$. Thereby, collective states can be prepared in the Dicke regime [24] in a bottom-up approach by means of AFM-based nanomanipulation. Cooperative processes such as superradiance [25] and superabsorption [26] can be accessed with color centers in diamond. Pioneering work demonstrated superradiance effects with ensembles of NV- centers in the optical [27, 28] and microwave [29] domain and indicated a first onset of cavity-assisted superabsorption [30]. With the presented work, collective states could now be engineered atom-by-atom and with altered settings, from spatially confined, interacting atoms to far distant non-interacting atoms [31]. ###### Acknowledgements. We thank V. A. Davydov for the synthesis of the nanodiamonds. The project was funded by the Baden-Württemberg Stiftung in the project Internationale Spitzenforschung. A.K. acknowledges support of the BMBF/VDI in the projects QR.X and Spinning. ## Appendix A Methods The nanodiamonds used for the experiment were synthesized by high-pressure high-temperature treatment of a mixture of Naphthalene - C10H8 and the Si doping component Tetrakis(Trimethylsilyl)silane – C12H36Si5. The average size is around 30 nm, and the NDs mostly host single SiV-. The same NDs were used in [20]. Figure 4: Schematic of the optical setup for the two-photon interference experiment. A continuous-wave laser at 532 nm was used for excitation of the individual color centers SiV 1 and SiV 2 inside a flow-cryostat. The fluorescence was directed through spectral filters, two consecutive etalons and half-wave plates and interfered at a 50:50 non-polarizing beam splitter. The photons were collected through single-mode fibers into SPCMs. To measure TPI we used a home-built, two-channel confocal microscope as depicted in FIG. 4. A continuous-wave 532 nm laser was used to excite the individual color centers off-resonantly.
The beam was split at a 50:50 polarizing beamsplitter, and a half-wave plate before the beamsplitter allowed adjusting the excitation power for each channel and thus balancing the emission from both SiV-. Two galvo-scanners were used to direct the beam onto the SiV-. A knife edge prism divided the field of view of the confocal setup into two independent channels. The sample was placed inside a continuous-flow cryostat and cooled with liquid helium to around 4 K. The NDs were coated onto a diamond substrate for good thermal conductivity and reached local temperatures between 5 K and 10 K. Fluorescence of the SiV- in both channels was filtered with a dichroic mirror, a 740/13 band-pass filter and two consecutive etalons. The first etalon has a free spectral range (FSR) of 850 GHz and a linewidth of 90 GHz; the second has an FSR of 10 GHz and a linewidth of 1 GHz. Before both paths were joined on a 50:50 non-polarizing beamsplitter, a half-wave plate was used to adjust their polarization. Photons were then collected with two single-mode fibers, detected by single photon counting modules (SPCM) and correlated with a time tagger device. Figure 5: Confocal scan of a) SiV 1, and b) SiV 2. Both SiV- are spatially isolated from other emitters and separated by 95 $\mu$m. A confocal scan of both SiV- used for the TPI is shown in FIG. 5. Each SiV- was spatially isolated from other SiV-, and the two are separated by 95 $\mu$m. To determine the long time dynamics of the emitters and ensure a correct normalization, we analyzed the correlation functions of both emitters and fitted them to a three-level model for the correlation function. Both SiV- showed a small amount of bunching and a shelving time of around 25 ns. Therefore, the correlation functions $g^{(2)}(\tau)$ are normalized to 1 with the averaged value of $g^{(2)}(\tau)$ within $75\,\mathrm{ns}<\tau<100\,\mathrm{ns}$, as highlighted by the grey areas in FIG. 6. Figure 6: Long time dynamics of the measured correlation functions of SiV 1 and SiV 2 (a), b)) and the two-photon interference (c)), revealing a minor bunching behavior. The coincidence levels within the gray shaded regions were used to normalize the correlation functions. ## Appendix B Properties of the SiVs The investigated SiV- show an inhomogeneous distribution with a full width at half maximum (FWHM) of around 14 GHz for the position of the C-transition, as shown in the histogram in FIG. 7 a). Multiple emitters show a significant overlap when comparing their spectral position, but some of the investigated lines show strong inhomogeneous broadening up to 1 GHz. Figure 7: a) Histogram of the measured positions of transition C for different SiV-s. The distribution has a width of 13.6 GHz and is centered at 406.82031 THz. b) Zoomed view into the distribution showing multiple possible transitions suitable for measuring two-photon interference. Emitters with significant overlap are marked with a red bar representing a linewidth of 94 MHz. The pair marked with the red square was used to show the two-photon interference. c) Evolution of the spectral position and linewidth for SiV 1 and SiV 2 over 15 scans, on a timescale of around 2 minutes. The center position is represented by the lineplot, the bars represent the FWHM of the individual scans. When setting an upper limit of 400 MHz for the inhomogeneous linewidth, we can still find multiple pairs and even groups of emitters which could be used for TPI. FIG. 7 b) depicts suitable candidates for TPI with linewidths below 400 MHz.
The x-axis is the transition energy, while the y-axis shows the respective linewidth of an emitter. The width of the rectangular data points represents the linewidth of the measured emitter. Transitions with significant spectral overlap within the natural linewidth of 94 MHz [9] are connected with a red line. The two SiV- used for the TPI in the manuscript are marked by the red square. They were selected because of the combination of their spectral overlap, stability and narrow inhomogeneous linewidth. Under resonant excitation both SiV-s were spectrally stable over a total of 15 scans, as shown in FIG. 7 c), where the spectral position is represented by the connecting lines and the respective linewidth by the colored bars. The overlapping region is also depicted by the darker shaded region. ## References * Wehner _et al._ [2018] S. Wehner, D. Elkouss, and R. Hanson, Quantum internet: A vision for the road ahead, Science 362, 6412 (2018). * Hong _et al._ [1987] C. K. Hong, Z. Y. Ou, and L. Mandel, Measurement of subpicosecond time intervals between two photons by interference, Phys. Rev. Lett. 59, 2044 (1987). * Chanelière _et al._ [2007] T. Chanelière, D. N. Matsukevich, S. D. Jenkins, S.-Y. Lan, R. Zhao, T. A. B. Kennedy, and A. Kuzmich, Quantum interference of electromagnetic fields from remote quantum memories, Phys. Rev. Lett. 98, 113602 (2007). * Flagg _et al._ [2010] E. B. Flagg, A. Muller, S. V. Polyakov, A. Ling, A. Migdall, and G. S. Solomon, Interference of single photons from two separate semiconductor quantum dots, Phys. Rev. Lett. 104, 137401 (2010). * Lettow _et al._ [2010] R. Lettow, Y. L. A. Rezus, A. Renn, G. Zumofen, E. Ikonen, S. Götzinger, and V. Sandoghdar, Quantum interference of tunably indistinguishable photons from remote organic molecules, Phys. Rev. Lett. 104, 123605 (2010). * Legero _et al._ [2004] T. Legero, T. Wilk, M. Hennrich, G. Rempe, and A. Kuhn, Quantum beat of two single photons, Phys. Rev. Lett. 93, 070503 (2004). * Bernien _et al._ [2012] H. Bernien, L. Childress, L. Robledo, M. Markham, D. Twitchen, and R. Hanson, Two-photon quantum interference from separate nitrogen vacancy centers in diamond, Phys. Rev. Lett. 108, 043604 (2012). * Sipahigil _et al._ [2012] A. Sipahigil, M. L. Goldman, E. Togan, Y. Chu, M. Markham, D. J. Twitchen, A. S. Zibrov, A. Kubanek, and M. D. Lukin, Quantum interference of single photons from remote nitrogen-vacancy centers in diamond, Phys. Rev. Lett. 108, 143601 (2012). * Rogers _et al._ [2014] L. J. Rogers, K. D. Jahnke, T. Teraji, L. Marseglia, C. Müller, B. Naydenov, H. Schauffert, C. Kranz, J. Isoya, L. P. McGuinness, and F. Jelezko, Multiple intrinsically identical single-photon emitters in the solid state, Nature Communications 5, 4739 (2014). * Sipahigil _et al._ [2014] A. Sipahigil, K. D. Jahnke, L. J. Rogers, T. Teraji, J. Isoya, A. S. Zibrov, F. Jelezko, and M. D. Lukin, Indistinguishable photons from separated silicon-vacancy centers in diamond, Phys. Rev. Lett. 113, 113602 (2014). * Fehler _et al._ [2021] K. G. Fehler, L. Antoniuk, N. Lettner, A. P. Ovvyan, R. Waltrich, N. Gruhler, V. A. Davydov, V. N. Agafonov, W. H. Pernice, and A. Kubanek, Hybrid quantum photonics based on artificial atoms placed inside one hole of a photonic crystal cavity, ACS Photonics 8, 2635 (2021). * Bayer _et al._ [2022] G. Bayer, R. Berghaus, S. Sachero, A. B. Filipovski, L. Antoniuk, N. Lettner, R. Waltrich, M. Klotz, P. Maier, V. Agafonov, and A.
Kubanek, A quantum repeater platform based on single SiV- centers in diamond with cavity-assisted, all-optical spin access and fast coherent driving, ArXiv (2022). * Wan _et al._ [2020] N. Wan, T. Lu, K. Chen, et al., Large-scale integration of artificial atoms in hybrid photonic circuits, Nature 583, 226–231 (2020). * Schrinner _et al._ [2020] P. P. J. Schrinner, J. Olthaus, D. E. Reiter, and C. Schuck, Integration of diamond-based quantum emitters with nanophotonic circuits, Nano Lett. 20, 8170 (2020). * Elshaari _et al._ [2020] A. W. Elshaari, W. Pernice, K. Srinivasan, O. Benson, and V. Zwiller, Hybrid integrated quantum photonic circuits, Nat. Photonics (2020). * Kubanek _et al._ [2022] A. Kubanek, A. Ovvyan, L. Antoniuk, N. Lettner, and W. Pernice, Hybrid quantum nanophotonics—interfacing color centers in nanodiamonds with Si3N4-photonics, Yatsui, T. (eds) Progress in Nanophotonics 7, Topics in Applied Physics, Springer 147 (2022). * Sahoo _et al._ [2022] S. Sahoo, V. Davydov, V. Agafonov, and S. I. Bogdanov, Hybrid quantum nanophotonic devices with color centers in nanodiamonds, ArXiv (2022). * Häußler _et al._ [2019a] S. Häußler, L. Hartung, K. G. Fehler, L. Antoniuk, L. F. Kulikova, V. A. Davydov, V. N. Agafonov, F. Jelezko, and A. Kubanek, Preparing single SiV- centers in nanodiamonds for external, optical coupling with access to all degrees of freedom, New Journal of Physics 21, 103047 (2019a). * Rogers _et al._ [2019] L. J. Rogers, O. Wang, Y. Liu, L. Antoniuk, C. Osterkamp, V. A. Davydov, V. N. Agafonov, A. B. Filipovski, F. Jelezko, and A. Kubanek, Single $\mathrm{SiV}^{-}$ centers in low-strain nanodiamonds with bulklike spectral properties and nanomanipulation capabilities, Phys. Rev. Applied 11, 024073 (2019). * Klotz _et al._ [2022] M. Klotz, K. G. Fehler, R. Waltrich, E. S. Steiger, S. Häußler, P. Reddy, L. F. Kulikova, V. A. Davydov, V. N. Agafonov, M. W. Doherty, and A. Kubanek, Prolonged orbital relaxation by locally modified phonon density of states for the $\mathrm{SiV}^{-}$ center in nanodiamonds, Phys. Rev. Lett. 128, 153602 (2022). * Meesala _et al._ [2018] S. Meesala, Y.-I. Sohn, B. Pingault, L. Shao, H. A. Atikian, J. Holzgrafe, M. Gündoğan, C. Stavrakas, A. Sipahigil, C. Chia, R. Evans, M. J. Burek, M. Zhang, L. Wu, J. L. Pacheco, J. Abraham, E. Bielejec, M. D. Lukin, M. Atatüre, and M. Lončar, Strain engineering of the silicon-vacancy center in diamond, Phys. Rev. B 97, 205444 (2018). * Aharonovich _et al._ [2010] I. Aharonovich, S. Castelletto, D. A. Simpson, A. D. Greentree, and S. Prawer, Photophysics of chromium-related diamond single-photon emitters, Phys. Rev. A 81, 043813 (2010). * Proux _et al._ [2015] R. Proux, M. Maragkou, E. Baudin, C. Voisin, P. Roussignol, and C. Diederichs, Measuring the photon coalescence time window in the continuous-wave regime for resonantly driven semiconductor quantum dots, Phys. Rev. Lett. 114, 067401 (2015). * Dicke [1954] R. H. Dicke, Coherence in spontaneous radiation processes, Phys. Rev. 93 (1954). * Gross and Haroche [1982] M. Gross and S. Haroche, Superradiance: An essay on the theory of collective spontaneous emission, Physics Reports 93, 301–396 (1982). * Higgins _et al._ [2014] K. Higgins, S. Benjamin, T. Stace, G. Milburn, B. Lovett, and E. Gauger, Superabsorption of light via quantum engineering, Nat. Comm. 5, 4705 (2014). * Bradac _et al._ [2017] C. Bradac, M. Johnsson, M. v. Breugel, et al., Room-temperature spontaneous superradiance from single diamond nanocrystals, Nat. Commun. 8, 1205 (2017).
* Gutsche _et al._ [2022] J. Gutsche, A. Zand, M. Bültel, and A. Widera, Revealing superradiant emission in the single-to-bulk transition of quantum emitters in nanodiamond agglomerates, New J. Phys. 24, 053039 (2022). * Angerer _et al._ [2018] A. Angerer, K. Streltsov, T. Astner, et al., Superradiant emission from colour centres in diamond, Nature Phys. 14, 1168–1172 (2018). * Häußler _et al._ [2019b] S. Häußler, J. Benedikter, K. Bray, B. Regan, A. Dietrich, J. Twamley, I. Aharonovich, D. Hunger, and A. Kubanek, Diamond photonics platform based on silicon vacancy centers in a single-crystal diamond membrane and a fiber cavity, Phys. Rev. B 99, 165310 (2019b). * Bojer and von Zanthier [2022] M. Bojer and J. von Zanthier, Dicke-like superradiance of distant noninteracting atoms, Phys. Rev. A 106, 053712 (2022).
# Deep Multi-Task Model for Sarcasm Detection and Sentiment Analysis in Arabic Language Abdelkader El Mahdaouy1 Abdellah El Mekki1 Nabil El Mamoun2 Kabil Essefar1 Ismail Berrada1 Ahmed Khoumsi3 1School of Computer Sciences, Mohammed VI Polytechnic University, Morocco 2Faculty of Sciences Dhar EL Mahraz, Sidi Mohamed Ben Abdellah University, Morocco 3Dept. Electrical & Computer Engineering, University of Sherbrooke, Canada <EMAIL_ADDRESS> <EMAIL_ADDRESS> ###### Abstract The prominence of figurative language devices, such as sarcasm and irony, poses serious challenges for Arabic Sentiment Analysis (SA). While previous research works tackle SA and sarcasm detection separately, this paper introduces an end-to-end deep Multi-Task Learning (MTL) model, allowing knowledge interaction between the two tasks. Our MTL model's architecture consists of a Bidirectional Encoder Representation from Transformers (BERT) model, a multi-task attention interaction module, and two task classifiers. The overall obtained results show that our proposed model outperforms its single-task counterparts on both SA and sarcasm detection sub-tasks. ## 1 Introduction The popularity of the Internet and the unprecedented reach of social media platforms allow users to express their opinions on a wide range of topics. Thereby, Sentiment Analysis (SA) has become a cornerstone for many applications such as digital marketing, product review analysis, customer feedback, social media monitoring, etc. SA consists of determining the expressed sentiment (positive, negative, or neutral) conveyed by a text or a piece of text. Over the past decade, significant research advances have been achieved for Arabic SA Badaro et al. (2019); Al-Ayyoub et al. (2019); Oueslati et al. (2020); Abu Farha and Magdy (2021). However, the mutual interaction between figurative language devices, like sarcasm and irony, and Arabic SA remains under-explored Abu Farha and Magdy (2020, 2021); Abbes et al. (2020). These devices allow us to express ourselves intelligently beyond the literal meaning of words. Although the literature uses the terms irony and sarcasm interchangeably, they have different meanings and there is no consensus on their definition Farías et al. (2016); Hernández Farías and Rosso (2017); Zhang et al. (2019). Both sarcasm and irony pose a real challenge for SA as they can reverse the expressed sentiment polarity from positive to negative Hernández Farías and Rosso (2017); Abu Farha and Magdy (2020, 2021). Therefore, there is an urgent need to develop sarcasm-aware SA tools. Previous research works on Arabic SA and sarcasm detection have dealt with both tasks separately Ghanem et al. (2019, 2020); Abbes et al. (2020); Abu Farha and Magdy (2020). Abbes et al. (2020) built a corpus for irony and sarcasm detection in the Arabic language from Twitter using a set of ironic hashtags. Unlike the previous work of Karoui et al. (2017), which relied on ironic hashtags to label the tweets, the annotation was performed manually by two Arabic language specialists. Abu Farha and Magdy (2021) presented an overview of existing Arabic SA methods and approaches, and a benchmark using three existing datasets. Their results showed that most of the evaluated models perform poorly on the SemEval and ASTD datasets. Due to the label inconsistencies discovered, they re-annotated the previously mentioned datasets for SA and sarcasm detection.
In addition to the highly subjective nature of the SA task, they reported a large performance drop in the case of sarcastic tweets Abu Farha and Magdy (2020, 2021). Figure 1: ArSarcasm-v2 dataset: (a) distribution of sarcastic tweets per region; (b) distribution of sarcastic tweets per sentiment polarity; (c) distribution of sentiment polarities. True and False denote sarcastic and non-sarcastic tweets, respectively. Following the recent breakthroughs in Arabic Natural Language Processing (NLP), achieved using the AraBERT model Antoun et al. (2020), Abdul-Mageed et al. (2020) introduced two Arabic transformer-based language models, namely ARBERT and MARBERT. ARBERT is trained on large textual corpora of Modern Standard Arabic (MSA), while MARBERT is trained on a corpus of 1 billion DA and MSA tweets. They have shown new state-of-the-art performance on a wide range of DA and MSA NLP tasks (AraBench datasets), including, among others, SA and sarcasm detection. In this paper, we present our end-to-end deep MTL model, submitted to the SA and sarcasm detection in Arabic language shared task Abu Farha et al. (2021). Our approach is based on MARBERT Abdul-Mageed et al. (2020) and a multi-task attention interaction module. The latter consists of two task-specific attention layers for extracting task-discriminative features, and of a Sigmoid interaction layer Lan et al. (2017) for allowing interaction and knowledge sharing between sarcasm detection and SA. The task-interaction is performed using the task-specific attention outputs, a learnable shared matrix, and the Sigmoid activation. The obtained results show that our MTL model surpasses the other evaluated single-task and MTL models. Besides, the incorporation of an attention mechanism and the task-interaction boosts the performance of both sarcasm detection and SA. The rest of the paper is organized as follows. Section 2 presents the shared task's dataset. Section 3 introduces the proposed method. In Section 4, we present the obtained results for both sarcasm detection and SA subtasks. Section 5 discusses the obtained results. Finally, Section 6 concludes the paper. ## 2 Data The ArSarcasm Shared Task consists of two subtasks for sarcasm detection and SA in the Arabic language Abu Farha et al. (2021). The shared task's dataset, ArSarcasm-v2, is built from the previously introduced datasets for sarcasm and irony detection Abbes et al. (2020); Abu Farha and Magdy (2020). The provided dataset consists of 12,548 and 3,000 tweets for the training set and test set, respectively. The dataset is annotated for SA and sarcasm detection as well as the regional dialect of the tweets. Figure 1 presents the distribution of sarcastic tweets and their sentiment polarities (Figures 1(a) and 1(b)). The distribution of all sentiment polarities in the dataset is illustrated in Figure 1(c). The dataset is imbalanced for both subtasks. Most sarcastic tweets are written in MSA and Egyptian dialect (Figure 1(a)) and are labeled with a negative sentiment (Figure 1(b)). Furthermore, approximately half of the tweets convey a neutral sentiment (Figure 1(c)). ## 3 Method Our multi-task model consists of three main components: a BERT encoder, a multi-task attention interaction module, and two task classifiers. ### 3.1 BERT Encoder Fine-tuning the Bidirectional Encoder Representation from Transformers (BERT) model on downstream tasks has shown a new wave of state-of-the-art performances in many NLP applications Devlin et al. (2019).
The BERT model's architecture consists of multiple transformer encoders for learning contextualized word embeddings of a given input text. It is trained on large textual corpora using two self-supervised objectives, namely the Masked Language Model (MLM) and the Next Sentence Prediction (NSP). The encoder of our MTL model is the pre-trained MARBERT Abdul-Mageed et al. (2020). MARBERT is fed with a sequence of wordpieces $[t_{1},t_{2},...,t_{n}]$ of the input tweet, where $n$ is the sequence length. It outputs the tweet embedding $h_{[CLS]}$ ([CLS] token embedding) and the contextualized word embeddings of the input tokens $H=[h_{1},h_{2},...,h_{n}]\in\mathbb{R}^{n\times d}$. Both $h_{[CLS]}$ and $h_{i}$ have the same hidden dimension $d$. ### 3.2 Multi-task attention interaction module This module consists of two task-specific attention layers (task-specific context-rich representation) and a Sigmoid task-interaction layer. The task-specific sentence representation $v_{*}\in\mathbb{R}^{1\times d}$ (e.g. $v_{sarc}$ and $v_{sent}$) is obtained using the attention mechanism over the contextualized word embedding matrix $H$: $C=\tanh(HW^{a})$ $\alpha=\mathrm{softmax}(C^{T}W^{\alpha})$ $v_{*}=\alpha H$ where $W^{a}\in\mathbb{R}^{d\times 1}$ and $W^{\alpha}\in\mathbb{R}^{n\times n}$ are the learnable parameters of the attention mechanism. $C\in\mathbb{R}^{n\times 1}$ and $\alpha\in[0,1]^{n}$ weight the words' hidden representations according to their relevance to the task. The task-interaction mechanism Lan et al. (2017) is performed using a learnable shared matrix $W^{i}\in\mathbb{R}^{d\times d}$ and a bias vector $b^{i}\in\mathbb{R}^{d}$. The interactions of both tasks are given by: $v^{\prime}_{sarc}=v_{sarc}\odot\sigma(W^{i}v_{sent}+b^{i})$ (1) $v^{\prime}_{sent}=v_{sent}\odot\sigma(W^{i}v_{sarc}+b^{i})$ (2) where $v_{sarc}$ and $v_{sent}$ are the outputs of the sarcasm task-specific attention layer and the sentiment task-specific attention layer, respectively. $\odot$ is the element-wise product. ### 3.3 Task classifier We employ two task classifiers, $F_{sarc}$ and $F_{sent}$, for sarcasm detection and SA, respectively. Each classifier consists of one hidden layer and one output layer. They are fed with the concatenation of the pooled output embedding and the task output of the multi-task attention interaction module $v^{\prime}_{*}$ (e.g. $v^{\prime}_{sarc}$ and $v^{\prime}_{sent}$). The outputs of the task classifiers are given by: $\hat{y}_{sarc}=F_{sarc}([h_{[CLS]},v^{\prime}_{sarc}])$ (3) $\hat{y}_{sent}=F_{sent}([h_{[CLS]},v^{\prime}_{sent}])$ (4) ### 3.4 Multi-task learning objective We train our MTL model to jointly minimize the binary cross-entropy loss $\mathcal{L}_{BCE}$, for sarcasm detection, and the cross-entropy loss $\mathcal{L}_{CE}$, for SA. The total loss is given by: $\mathcal{L}=\mathcal{L}_{BCE}(y_{sarc},\hat{y}_{sarc})+\mathcal{L}_{CE}(y_{sent},\hat{y}_{sent})$ (5) where $\hat{y}_{*}$ is the predicted output and $y_{*}$ is the ground truth label. ## 4 Results In this section, we present the experiment settings and the obtained results. ### 4.1 Experiment settings We have compared our model (MTL_ATTINTER) with two single-task models (ST and ST_ATT) and two MTL models (MTL and MTL_ATT). * • ST consists of MARBERT with one classification layer. * • ST_ATT employs the attention mechanism on top of the contextualized word embeddings of MARBERT. The classification is performed using the attention layer output and the [CLS] token embedding.
* • MTL is similar to the ST model and uses a classification layer for each task. * • MTL_ATT is the MTL counterpart of the ST_ATT model.

| Model | Set | Prec. (Sarc.) | Rec. (Sarc.) | Acc. (Sarc.) | F1 (Sarc.) | F1Sarc | Prec. (Sent.) | Rec. (Sent.) | Acc. (Sent.) | F1 (Sent.) | F1PN |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ST | Dev | 0.7649 | _0.7683_ | 0.8673 | 0.7666 | 0.6132 | 0.7422 | 0.7519 | 0.7641 | 0.7465 | 0.7284 |
| ST | Test | 0.706 | 0.708 | 0.768 | 0.707 | 0.573 | 0.672 | 0.667 | 0.713 | 0.665 | 0.749 |
| ST_ATT | Dev | 0.7736 | 0.7588 | 0.8622 | 0.7658 | 0.6156 | _0.7541_ | 0.7429 | 0.7629 | _0.7479_ | 0.7253 |
| ST_ATT | Test | 0.724 | 0.722 | 0.778 | 0.723 | 0.598 | 0.664 | 0.665 | 0.709 | 0.661 | 0.742 |
| MTL | Dev | 0.7935 | 0.7611 | 0.8633 | 0.7753 | 0.6347 | 0.7424 | 0.748 | _0.7649_ | 0.7448 | 0.7288 |
| MTL | Test | 0.725 | 0.714 | 0.771 | 0.719 | 0.599 | 0.676 | 0.656 | 0.703 | 0.662 | 0.736 |
| MTL_ATT | Dev | 0.8064 | 0.7581 | 0.8606 | 0.7778 | 0.6421 | 0.7478 | _0.7524_ | _0.7649_ | 0.7465 | 0.7326 |
| MTL_ATT | Test | 0.741 | 0.72 | 0.773 | 0.728 | 0.617 | 0.663 | 0.676 | 0.717 | 0.66 | 0.752 |
| MTL_ATTINTER | Dev | _0.8106_ | 0.766 | _0.8661_ | _0.7846_ | _0.6522_ | 0.7511 | 0.7414 | 0.7582 | 0.7436 | _0.7358_ |
| MTL_ATTINTER | Test | 0.7268 | 0.7122 | 0.7680 | 0.7183 | 0.6000 | 0.6713 | 0.7183 | 0.7107 | 0.6625 | 0.7480 |

Table 1: Models evaluation on both SA and sarcasm detection subtasks

| Model | Prec. (Sarc.) | Rec. (Sarc.) | Acc. (Sarc.) | F1 (Sarc.) | F1Sarc | Prec. (Sent.) | Rec. (Sent.) | Acc. (Sent.) | F1 (Sent.) | F1PN |
|---|---|---|---|---|---|---|---|---|---|---|
| MTL_ATTINTER | 0.7268 | 0.7122 | 0.7680 | 0.7183 | 0.6000 | 0.6713 | 0.7183 | 0.7107 | 0.6625 | 0.7480 |

Table 2: The obtained results of our official submission

We have implemented the MARBERT tweet-preprocessing pipeline Abdul-Mageed et al. (2020). The evaluated models have been trained using the Adam optimizer with a learning rate of $5\times 10^{-6}$. Based on several experiments, the batch size and the number of epochs have been fixed to $64$ and $5$, respectively. Besides, we have used $80$% and $20$% of the provided training data for the training set and development set, respectively. For comparison purposes, we have used the macro-average Precision, Recall, F1, and the F1 score of positive and negative (F1PN) evaluation measures. We have also employed the Accuracy and the F1 score of the sarcastic tweets (F1Sarc). ### 4.2 Experiment results Table 1 shows the obtained models' performances for both SA and sarcasm detection. The best results for each evaluation measure are highlighted with italic font and bold font for the dev set and the test set, respectively. The overall obtained results show that MTL models outperform their single-task counterparts for most evaluation measures. In fact, incorporating the attention mechanism into both ST_ATT and MTL_ATT improves the F1, F1Sarc and F1PN. F1Sarc computes the F1 score for sarcastic tweets only, while F1PN considers only the positive and negative sentiments. MTL_ATTINTER and MTL_ATT achieve the best performances for most evaluation measures on both the dev and the test sets of the sarcasm detection sub-task. Specifically, they show far better F1 performance for the sarcastic class prediction. For SA, the other evaluated models achieve slightly better performance. However, MTL_ATTINTER and MTL_ATT yield the best F1PN performances on the dev set and the test set. Therefore, our proposed model excels in detecting sarcastic tweets as well as predicting positive and negative sentiments. #### Official results Since one submission was allowed, we have submitted the results of our MTL_ATTINTER model. Table 2 shows the official submission results. Our system is top-ranked on the SA subtask and secured the fourth position among the submitted systems for sarcasm detection.
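For concreteness, a minimal PyTorch sketch of the multi-task attention interaction module and task classifiers of Section 3 is given below. It is our own illustrative re-implementation, not the authors' released code: the MARBERT encoder, tokenization, and training loop are omitted, all names are ours, and a fixed sequence length `n` is assumed.

```python
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    """Task-specific attention over contextualized embeddings H (Section 3.2)."""
    def __init__(self, d: int, n: int):
        super().__init__()
        self.W_a = nn.Linear(d, 1, bias=False)      # W^a in R^{d x 1}
        self.W_alpha = nn.Linear(n, n, bias=False)  # W^alpha in R^{n x n}

    def forward(self, H: torch.Tensor) -> torch.Tensor:  # H: (batch, n, d)
        C = torch.tanh(self.W_a(H)).transpose(1, 2)      # (batch, 1, n)
        alpha = torch.softmax(self.W_alpha(C), dim=-1)   # attention weights
        return torch.bmm(alpha, H).squeeze(1)            # v_*: (batch, d)

class MTLAttInter(nn.Module):
    """Multi-task attention interaction for sarcasm detection and SA."""
    def __init__(self, d: int = 768, n: int = 128, num_sent: int = 3):
        super().__init__()
        self.att_sarc = TaskAttention(d, n)
        self.att_sent = TaskAttention(d, n)
        self.inter = nn.Linear(d, d)  # shared W^i and b^i of Eqs. (1)-(2)
        self.clf_sarc = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))
        self.clf_sent = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, num_sent))

    def forward(self, h_cls: torch.Tensor, H: torch.Tensor):
        v_sarc, v_sent = self.att_sarc(H), self.att_sent(H)
        v_sarc_i = v_sarc * torch.sigmoid(self.inter(v_sent))  # Eq. (1)
        v_sent_i = v_sent * torch.sigmoid(self.inter(v_sarc))  # Eq. (2)
        y_sarc = self.clf_sarc(torch.cat([h_cls, v_sarc_i], dim=-1))  # Eq. (3)
        y_sent = self.clf_sent(torch.cat([h_cls, v_sent_i], dim=-1))  # Eq. (4)
        return y_sarc, y_sent

# Joint objective, Eq. (5): BCE for sarcasm + CE for sentiment, e.g.
# loss = nn.BCEWithLogitsLoss()(y_sarc.squeeze(-1), y_sarc_true.float()) \
#        + nn.CrossEntropyLoss()(y_sent, y_sent_true)
```

Note that the single shared `inter` layer realizes the learnable matrix $W^{i}$ and bias $b^{i}$, applied symmetrically to both tasks.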
Figure 2: Confusion matrices of our MTL model's predictions on both SA and sarcasm detection tasks: (a) the sarcasm detection task; (b) the SA task; (c) SA among non-sarcastic tweets; (d) SA among sarcastic tweets. ## 5 Discussion To investigate the strengths and weaknesses of our model, we have analyzed the confusion matrix of each subtask (Figures 2(a) and 2(b)) as well as the confusion matrices of sentiment analysis among sarcastic and non-sarcastic tweets, respectively (Figures 2(d) and 2(c)). The analysis of these matrices shows that our MTL model leverages signals from both tasks and boosts the performance. This can be explained by the fact that most sarcastic tweets convey a negative sentiment. Besides, negative tweets tend to have a larger probability of being sarcastic than positive ones. This can also be deduced from Table 1, where MTL models achieve the best F1Sarc and F1PN scores compared to single-task models. ## 6 Conclusion In this paper, we have proposed an end-to-end deep Multi-Task Learning model for SA and sarcasm detection. Our model leverages MARBERT's contextualized word embeddings with a multi-task attention interaction module. The aim is to allow task-interaction and knowledge sharing between SA and sarcasm detection. Our model shows very promising results on both subtasks, demonstrating the effectiveness of using task-specific attention layers as well as the task-interaction mechanism in multi-task learning. Future research work will focus on developing task-interaction and class-interaction modules and mechanisms for SA and sarcasm detection. ## References * Abbes et al. (2020) Ines Abbes, Wajdi Zaghouani, Omaima El-Hardlo, and Faten Ashour. 2020. DAICT: A dialectal arabic irony corpus extracted from twitter. In _Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020_ , pages 6265–6271. European Language Resources Association. * Abdul-Mageed et al. (2020) Muhammad Abdul-Mageed, AbdelRahim Elmadany, and El Moatez Billah Nagoudi. 2020. Arbert & marbert: Deep bidirectional transformers for arabic. _arXiv preprint arXiv:2101.01785_. * Abu Farha and Magdy (2020) Ibrahim Abu Farha and Walid Magdy. 2020. From arabic sentiment analysis to sarcasm detection: The arsarcasm dataset. In _Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection_ , pages 32–39. * Abu Farha and Magdy (2021) Ibrahim Abu Farha and Walid Magdy. 2021. A comparative study of effective approaches for arabic sentiment analysis. _Information Processing & Management_, 58(2):102438. * Abu Farha et al. (2021) Ibrahim Abu Farha, Wajdi Zaghouani, and Walid Magdy. 2021. Overview of the wanlp 2021 shared task on sarcasm and sentiment detection in arabic. In _Proceedings of the Sixth Arabic Natural Language Processing Workshop_. * Al-Ayyoub et al. (2019) Mahmoud Al-Ayyoub, Abed Allah Khamaiseh, Yaser Jararweh, and Mohammed N. Al-Kabi. 2019. A comprehensive survey of arabic sentiment analysis. _Information Processing & Management_, 56(2):320–342. Advance Arabic Natural Language Processing (ANLP) and its Applications. * Antoun et al. (2020) Wissam Antoun, Fady Baly, and Hazem Hajj. 2020.
Arabert: Transformer-based model for arabic language understanding. In _LREC 2020 Workshop Language Resources and Evaluation Conference 11–16 May 2020_ , page 9. * Badaro et al. (2019) Gilbert Badaro, Ramy Baly, Hazem Hajj, Wassim El-Hajj, Khaled Bashir Shaban, Nizar Habash, Ahmad Al-Sallab, and Ali Hamdi. 2019. A survey of opinion mining in arabic: A comprehensive system perspective covering challenges and advances in tools, resources, models, applications, and visualizations. _ACM Trans. Asian Low-Resour. Lang. Inf. Process._ , 18(3). * Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. * Farías et al. (2016) Delia Irazú Hernández Farías, Viviana Patti, and Paolo Rosso. 2016. Irony detection in twitter: The role of affective content. _ACM Trans. Internet Technol._ , 16(3). * Ghanem et al. (2019) Bilal Ghanem, Jihen Karoui, Farah Benamara, Véronique Moriceau, and Paolo Rosso. 2019. Idat at fire2019: Overview of the track on irony detection in arabic tweets. In _Proceedings of the 11th Forum for Information Retrieval Evaluation_ , FIRE '19, page 10–13, New York, NY, USA. Association for Computing Machinery. * Ghanem et al. (2020) Bilal Ghanem, Jihen Karoui, Farah Benamara, Paolo Rosso, and Véronique Moriceau. 2020. Irony detection in a multilingual context. In _Advances in Information Retrieval - 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14-17, 2020, Proceedings, Part II_ , volume 12036 of _Lecture Notes in Computer Science_ , pages 141–149. Springer. * Hernández Farías and Rosso (2017) Delia Irazú Hernández Farías and Paolo Rosso. 2017. Chapter 7 - irony, sarcasm, and sentiment analysis. In Federico Alberto Pozzi, Elisabetta Fersini, Enza Messina, and Bing Liu, editors, _Sentiment Analysis in Social Networks_ , pages 113–128. Morgan Kaufmann, Boston. * Karoui et al. (2017) Jihen Karoui, Farah Benamara Zitoune, and Véronique Moriceau. 2017. Soukhria: Towards an irony detection system for arabic in social media. _Procedia Computer Science_ , 117:161–168. Arabic Computational Linguistics. * Lan et al. (2017) Man Lan, Jianxiang Wang, Yuanbin Wu, Zheng-Yu Niu, and Haifeng Wang. 2017. Multi-task attention-based neural networks for implicit discourse relationship representation and identification. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017_ , pages 1299–1308. Association for Computational Linguistics. * Oueslati et al. (2020) Oumaima Oueslati, Erik Cambria, Moez Ben HajHmida, and Habib Ounelli. 2020. A review of sentiment analysis research in arabic language. _Future Generation Computer Systems_ , 112:408–430. * Zhang et al. (2019) Shiwei Zhang, Xiuzhen Zhang, Jeffrey Chan, and Paolo Rosso. 2019. Irony detection via sentiment-based transfer learning. _Information Processing & Management_, 56(5):1633–1644.
# Indoor Positioning Trends in 5G-Advanced: Challenges and Solution towards Centimeter-level Accuracy Jakub Nikonowicz, Aamir Mahmood, Muhammad Ikram Ashraf, Emil Björnson, Mikael Gidlund ###### Abstract After robust connectivity, precise positioning is evolving into an innovative component of 5G service offerings for industrial use-cases and verticals with challenging indoor radio environments. In this direction, the 3GPP Rel-16 standard has been a tipping point in specifying critical innovations, followed by enhancements in Rel-17 and Rel-18. In this article, we elaborate on the 5G positioning framework, measurements, and procedures before shifting the focus mainly to the recently identified carrier-phase (CP) measurements in Rel-18 as a complementary measure for time- and angular-based positioning methods. We discuss the associated challenges and potential solutions for exploiting CP, including integer ambiguity, multipath sensitivity, and signaling aspects. Furthermore, we study how phase-continuous reference signaling can counter noisy phase measurements using realistic simulations to achieve centimeter-level accuracy in indoor factory (InF) scenarios. ## I Introduction Continuous work on the fifth-generation (5G) network aims to extend its applications beyond traditional mobile broadband, where one of the fundamental features is precise positioning. The need for high-precision positioning is anticipated in logistics, autonomous harbors and vehicles, localized sensing, digital twins, augmented and virtual reality, and more. It is desirable to deliver positioning services using cellular technology, instead of requiring a dedicated infrastructure. The 5G network design is evolving mainly in response to the needs of industrial segments, i.e., Industry 4.0. We can foresee the requirement for centimeter-accuracy positioning in fully automated factories for acquiring precise knowledge of resource placement, tracking moving assets and machinery, and monitoring product storage. As a result, smart factories remain the primary sector for precise positioning, and an indoor factory (InF) environment defines the current requirements and challenges [1]. In this respect, the 5G new radio (NR) provides numerous innovations, from a new positioning architecture and reference signals to measurement/parameter enhancements, for achieving positioning accuracy down to the centimeter. The 3GPP Rel-16 targets the precise positioning and low-latency requirements of diverse application scenarios, including the Industrial Internet-of-things (IIoT), transportation, and logistics. For instance, the indoor positioning accuracy (horizontal and vertical) and latency requirements for commercial use-cases are less than 3 m, with less than 1 s of latency and 80% service availability. Meanwhile, Rel-17 aims to improve the positioning accuracy to tens of centimeters and lower the latency to tens of milliseconds for InF-IIoT. The 5G positioning methods, in general, are derived from timing-, angular-, and power-based techniques and their combinations, wherein Rel-17 proposes improved signaling and procedures over Rel-16 considering that i) wider bandwidth increases the resolution of timing measurements, ii) larger antenna array apertures in massive MIMO (multiple-input multiple-output) for mid-band and mmWave allow narrower radio beams and increased angular resolution, and iii) joint processing of time- and angular-based methods helps mitigate dense multipath in InF settings.
Further, the ongoing Rel-18 study item [2] is investigating solutions to improve accuracy up to a few cm with a latency of a few ms, especially by i) evaluating bandwidth aggregation for intra-band carriers, ii) integrating sidelink information, and iii) identifying reference signals and physical-layer measurements/procedures to enable carrier phase (CP)-based positioning. Fig. 1 shows the 5G positioning methods, as well as the evolving requirements and emerging use-cases in 3GPP releases [3]. Figure 1: The 5G positioning architecture and possible measurements/methods together with performance requirements across 3GPP releases for the evolving needs of IIoT use-cases. Rel-17 and onward efforts are focused on accuracy enhancements by improving NR parameters (e.g., bandwidth, power, antennas), combinations of measurements (e.g., CP with time/angle measurements), and other countermeasures (e.g., multipath resolution). The earlier studies (e.g., [1, 3]) focused on a general description of the 5G positioning architecture and the methods proposed in Rel-17&18. Meanwhile, the study [4] analyzed CP-based positioning, which is gaining traction as a complement to time/angle-based methods that eliminates their sensitivity to sampling resolution and system parameters. However, the critical issues entailed in realizing CP-based positioning enhancements are not considered in these works. In this respect, our main contributions are: * • To establish the relationship between NR system parameters and delay/angle error variance, we describe the 5G positioning architecture, including reference signals and basic positioning measurements/methods. * • We introduce CP-based enhancements in time/angle-based positioning schemes along with the related problems and challenges. * • We consider CP measurement enhancements under continuous reference signals as a potential solution to counter noisy phase measurements. * • We provide simulation results for CP measurements, with/without continuous reference signals, in estimating the 3D distance error in the InF channel models. * • Finally, further improvements/benefits in continuous CP measurements are envisioned. ## II Basic positioning approaches The 3GPP specifications entail the basic architecture, main positioning technologies, and possible combinations (see Fig. 1), which are briefly summarized here. TABLE I: Relationship between 5G NR parameters and delay/angle error variance [5, 1, 6].
| Estimate | Parameter | Relationship | Comments |
|---|---|---|---|
| Delay error variance | Bandwidth (BW) | $\propto 1/\text{BW}^{2}$ | NR typically provides 100 MHz bandwidth at FR1 and 400 MHz at FR2 |
| | Average received power (P) | $\propto 1/\text{P}$ | Received power is also increased by the array gain |
| | Subcarrier spacing (SCS) | $\propto 1/\text{SCS}$ | Higher SCS increases resolution in resolving LOS from the multipath |
| | Antenna elements (N) | $\propto 1/\text{N}$ | Only the array gain matters, so the array geometry can be arbitrary |
| Angle error variance | Bandwidth (BW) | – | Independent |
| | Average received power (P) | $\propto 1/\text{P}$ | Less pronounced compared to delay error variance |
| | Subcarrier spacing (SCS) | $\propto \text{SCS}$ | For fixed BW, increasing SCS leads to fewer details in the channel frequency response |
| | Horizontal antenna elements ($\text{N}_{\text{H}}$) | $\propto 1/\text{N}_{\text{H}}^{3}$ | The array geometry matters; the numbers of horizontal and vertical elements affect the variance differently depending on the location of the device |
| | Vertical antenna elements ($\text{N}_{\text{V}}$) | $\propto 1/\text{N}_{\text{V}}^{3}$ | |
| | Antenna spacing (S) | $\propto 1/\text{S}^{2}$ | |

In the 5G positioning framework, the location management function (LMF) is the central entity that estimates the UE position based on assistance/measurements from the next-generation (NG)-RAN and the UE through the access and mobility management function (AMF). Moreover, a new NR positioning protocol A (NRPPa) carries the positioning information between NG-RAN and LMF. Besides, NR introduced two reference signals specifically for enabling accurate positioning measurements in the downlink (DL) and uplink (UL) directions, respectively: the positioning reference signal (PRS) and the sounding reference signal (SRS). These reference signals are configurable to enhance the precision of UL/DL time/angular measurements (see Sec. IV-C). The LMF provides the DL-PRS configuration to UEs using the LTE positioning protocol (LPP), while the RAN configures UL-SRS to UEs using the radio resource control (RRC) protocol. To understand the relations of these measurements with the system parameters, Table I summarizes the various factors affecting errors in time/angle-based positioning. ### II-A Angle-based Positioning Massive MIMO and smart antenna techniques in 5G allow precise control over beamforming and angle estimation. Angle-of-departure (AOD) and angle-of-arrival (AOA) measurements are part of the beam management procedure specified by 3GPP. Based on the UL or DL transmission direction, the two positioning techniques are: * • DL-AOD: using PRS, a UE measures the beam reference signal received power (RSRP) of the gNB and reports it to the LMF via LPP [1]. * • UL-AOA: using SRS, the gNB measures the AOA, i.e., the azimuth and elevation angles, and reports them to the LMF via NRPPa [1]. The achievable accuracy of triangulation based on angle measurements is limited due to the range/resolution of reportable absolute values for power measurements, which is $[-156,-31]$ dBm with 1 dB resolution [1]. Therefore, there are many suggestions to improve AOA/AOD performance; for instance, by introducing CP measurement in addition to beam RSRP measurement [7] (see Sec. III-B).
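To make the triangulation principle behind these angle reports concrete, the following sketch (our own illustration with hypothetical coordinates, not a 3GPP procedure) estimates a 2D UE position by intersecting the bearing lines defined by the azimuth AOAs measured at two gNBs:

```python
import numpy as np

def triangulate_2d(p1, theta1, p2, theta2):
    """Intersect two bearing lines p_i + t_i * [cos(theta_i), sin(theta_i)].

    p1, p2: known gNB positions; theta1, theta2: measured azimuth AOAs (rad).
    Assumes LOS and non-parallel bearings; returns the estimated UE position.
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the scalars (t1, t2)
    t = np.linalg.solve(np.column_stack([d1, -d2]), np.subtract(p2, p1))
    return np.asarray(p1) + t[0] * d1

# Hypothetical InF layout: two gNBs on one wall and a UE at (20, 10) m
gnb1, gnb2 = np.array([0.0, 0.0]), np.array([50.0, 0.0])
ue = np.array([20.0, 10.0])
aoa1 = np.arctan2(ue[1] - gnb1[1], ue[0] - gnb1[0])
aoa2 = np.arctan2(ue[1] - gnb2[1], ue[0] - gnb2[0])
rng = np.random.default_rng(0)
aoa1 += np.deg2rad(rng.normal(0, 1))   # 1 degree of angular noise
aoa2 += np.deg2rad(rng.normal(0, 1))
print(triangulate_2d(gnb1, aoa1, gnb2, aoa2))  # close to [20, 10]
```

At these distances, a 1-degree angular error already translates into a position error of a few decimeters, illustrating why coarsely quantized power/angle reports alone fall short of centimeter targets.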
### II-B Time-based Positioning As RSRP-based approaches require establishing an accurate propagation model to reliably estimate signal energy, it is challenging to achieve high measurement accuracy in dynamic radio environments, limiting the scope of AOA/AOD-based techniques [8]. An alternative solution is trilateration methods. Time-of-arrival (TOA)-based positioning transforms the signal propagation delay into the distance between gNB and UE. However, this approach requires precise UE–gNB synchronization. TDOA is an improved ranging method based on differential TOA, requiring time synchronization between gNBs only. As with the angular methods, the PRS and SRS are used to estimate the DL or UL TDOA between different gNBs. * • DL time-difference-of-arrival (DL-TDOA): the UE receives the PRS from several gNBs and calculates the TOA of each PRS signal. The TOA of one gNB is taken as a reference to compute the reference-signal-time-difference (RSTD) to the TOAs from the remaining gNBs. The UE sends the RSTD measurements to the LMF to compute the UE position using the known geographical coordinates of the gNBs. * • UL time-difference-of-arrival (UL-TDOA): the UE-transmitted SRS is received by neighboring gNBs; a transmission measurement function calculates the relative-time-of-arrival (RTOA) and sends it to the LMF to compute the UE position. * • Multi-cell round-trip-time (Multi-RTT): the gNB and UE perform Rx-Tx time difference measurements, using PRS and SRS signaling, for the signal of each cell. The LMF initiates the procedure, whereby multiple gNBs and a UE perform the gNB Rx-Tx and UE Rx-Tx measurements, respectively. Multi-RTT has higher accuracy than TDOA-based methods, and it relaxes the requirements on time synchronization [1]. Clock synchronization between gNB and UE or between gNBs is required to accurately estimate TOA or TDOA, respectively. However, the precision of time measurements is limited to intervals of $T_{c}$, with a flexible resolution of $2^{k}T_{c}$, where $T_{c}=0.51$ ns and $k$ is an integer in the interval [2, 5] for FR1 and [0, 5] for FR2 [1]. Additionally, a dense multipath environment can cause NLOS propagation, making the measurements unreliable. Thus, the measurement quality critically depends on system timing and multipath propagation conditions [9, 10]. ## III Carrier Phase (CP)-based Enhancements This section identifies positioning solutions in NR to meet the industrial positioning accuracy requirements both for FR1 and FR2 according to [3]. Note that NR positioning naturally shares some of the challenges/problems of the Global Navigation Satellite System (GNSS). Although NR brings them to the InF environment, GNSS-based solutions can still inspire indoor settings. The basic principle of distance measurement in GNSS is the pseudo-range. As the pseudorandom code mode is similar to Gold sequence correlation in TOA measurements, solutions to improve GNSS positioning (e.g., CP measurements) are also considered in NR. Compared to GNSS, however, wireless networks can operate in more complex scenarios, but with flexible carrier frequency configurations and fewer error sources. Generally, the CP measurement captures the difference between the phase of the incoming carrier signal and the receiver-generated reference signal. Under LOS conditions, the phase is in the $[0,2\pi)$ range, and the measurement error is only a small fraction of the wavelength, reaching the centimeter level.
As illustrated in Fig. 2, CP measurements can improve the accuracy of trilateration-based (i.e., using solely timing measurements) and triangulation-based (i.e., using AOD/AOA measurements in multi-antenna systems) positioning [7]. ### III-A CP in Time-based Solutions As mentioned earlier, PRS are pseudo-random sequences with good autocorrelation. The UE correlates the time domain samples with the known PRS pattern when measuring the signal propagation delay. After detecting the code phase, the receiver replicates the transmitted pseudo-random code and moves the replica until the maximum correlation is achieved. The offset corresponds to the transmitter-to-receiver signal propagation time. Hence, it depends on estimating the earliest peak delay in the magnitude of the normalized cross-correlation function. Consequently, the accuracy of the range-based methods depends mainly on the UE sampling resolution and the signal bandwidth [8]. Contrary to the delay in code-phase detection, carrier-phase detection translates the phase difference and wavelength into a distance. When the receiver intercepts the transmitted signal, it is locked by a phase-locked loop (PLL). From the moment the first path is received, the phase shift between the locally generated reference signal and the replica of the received carrier wave is constant. Therefore, the measuring point is not as critical to the measurement quality [10]. However, when initiating positioning, the received phase is in the range $[0,2\pi)$, so only a fraction of a single wavelength can be measured. This causes the integer ambiguity (IA) problem—how many full wavelengths ($N$) precede the measured fraction over the propagated distance [8] (see Section IV-A). Nevertheless, finding the CP difference requires UE–gNB synchronization, analogous to TOA measurement. Herein, the differentiation of measurements known from TDOA is useful. Differential CP measurements allow canceling out the UE clock offset and eliminate typical measurement errors caused by the propagation environment. The differential measurement, therefore, assumes that the gNBs are synchronized. However, the signals from different gNBs arrive at the UE with a slight time difference, and the phase measurement may be distorted/incorrect due to the gNBs' clock mismatch. ### III-B CP in Angle-based Solutions Carrier-phase (CP) measurement can also improve positioning accuracy by triangulation. Accurate measurement of an angle in a plane requires a configuration with (at least) two receiving antennas. The transmitter emits a sinusoidal signal with wavelength $\lambda$ that propagates spherically so that the wave phase changes continuously in $[0,2\pi)$. The instantaneous signal phases $\phi_{1}(t),\phi_{2}(t)$ measured at the two antennas will generally be different, but the difference $\Delta=\phi_{1}(t)-\phi_{2}(t)$ is constant and enables AOA estimation. Suppose the separation between the receive antennas is $d$ and that the impinging wave is planar. The AOA $\theta$ perpendicular to the line between the antennas satisfies $\cos(\theta)=\Delta\lambda/(2\pi d)$, from which the angle can be extracted (but there are multiple solutions). Similar techniques can be used to measure the AOD from a multi-antenna transmitter to a single-antenna receiver; for example, the transmitter antennas can take turns in sending the sinusoid so that the receiver can measure the respective phase-shifts. Moreover, the CP method is simpler and more efficient than the beam-sweep method. It provides an accurate angle estimate, and simultaneously, multiple terminals can acquire AOA/AOD information from a single PRS transmission.
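As a worked numerical example of this relation (our own sketch with hypothetical values), the snippet below recovers the impinging angle from the carrier-phase difference at a half-wavelength-spaced antenna pair; the mirror ambiguity noted above remains, since $\arccos$ only returns angles in $[0,\pi]$:

```python
import numpy as np

c = 3e8
fc = 3.8e9                      # hypothetical FR1 carrier (Hz)
lam = c / fc
d = lam / 2                     # half-wavelength antenna separation

theta_true = np.deg2rad(40.0)   # hypothetical impinging angle
delta = 2 * np.pi * d * np.cos(theta_true) / lam      # ideal phase difference (rad)
delta += np.random.default_rng(2).normal(0.0, 0.05)   # ~3 deg of phase noise

# Invert cos(theta) = delta * lam / (2*pi*d); clip guards against noise overshoot
theta_hat = np.arccos(np.clip(delta * lam / (2 * np.pi * d), -1.0, 1.0))
print(np.rad2deg(theta_hat))    # close to 40 degrees
```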
Precise angle measurement using a dual-antenna configuration is also a solution proposed in Bluetooth-SIG [7]. Figure 2: Illustration of carrier phase (CP) in angle/time-based solutions. ## IV Open Challenges and Potential Solutions While already used in outdoor systems, migrating phase measurements to InF environments is challenging. Cluttered propagation conditions, shorter distances, higher accuracy/latency targets, and a different signal structure require reconsidering the problems/solutions known from GNSS. ### IV-A Integer Ambiguity Introducing CP measurements in NR can reduce the measurement error, typically to 10% of the carrier wavelength under the right conditions [11], and consequently, increase the positioning accuracy. However, using CP is coupled with a difficulty commonly referred to as integer ambiguity (IA); that is, the phase measurement is in the $2\pi$ range, but the total distance contains an unknown integer number of carrier wavelengths. Consequently, a fast/reliable IA solution is essential to meet the low-latency positioning demands. The unknown wavelength multiplicity between transmitter and receiver is already successfully solved in GNSS CP measurements, laying the groundwork for adaptation in NR. The main issue in IA is the large search range of possible integers, especially for the short wavelengths in the FR2 band. Using TOA can significantly reduce the IA search range or, alternatively, the time needed to solve it. TOA depends on the geometric distance $d$ between the transmitter and receiver, their clock offsets $\delta t$ converted into distance by the speed of light $c$, and arrival timing measurement errors $\omega_{t}$. The CP at the receiver, in turn, results from the same distance, the same clock errors, and phase measurement errors $\omega_{p}$, reduced by an integer multiple of the wavelength, $N\lambda$. Having both measurements at one's disposal allows forming a comparative equation in which the duplicated $d$ and $\delta t$ cancel. As such, the IA search space mainly depends on the measurement errors $(\omega_{t}-\omega_{p})/\lambda$, where $\omega_{p}$ is only a small fraction of the carrier wavelength, and $\omega_{t}$ is dominant. In NR, $\omega_{t}$ may be reduced to a few meters; however, bearing in mind the target applications with centimeter/millimeter carrier wavelengths, the search space for IA may remain relatively large. Another approach to reduce the IA search range is the idea of a virtual wavelength. Specifically, instead of transmitting a single frequency, the transmitter transmits reference signals on two or more frequencies to get phase measurements at multiple frequencies. As the two phase equations follow the same propagation pattern and differ only in their wavelengths, represented by $\lambda_{1}$ and $\lambda_{2}$, both equations can be unified by two-sided multiplication by $\lambda_{2}/(\lambda_{2}-\lambda_{1})$ and $\lambda_{1}/(\lambda_{2}-\lambda_{1})$, respectively. Then, subtracting the phase equations creates a virtual phase measurement for the wavelength $\lambda_{2}-\lambda_{1}$. This provides the opportunity to make the differential wavelength much longer than the initial $\lambda_{1}$ and $\lambda_{2}$ [4]. Adapting this solution in NR gives numerous possibilities, as the network can configure the transmission frequencies to optimally reduce the IA search space and computational overhead.
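As a rough numerical sketch of the virtual-wavelength idea (our own illustration with hypothetical values, not a specified NR procedure), the snippet below differences the fractional carrier phases at two nearby carriers, which is equivalent to a measurement at the widelane wavelength $c/(f_{2}-f_{1})=\lambda_{1}\lambda_{2}/(\lambda_{2}-\lambda_{1})$, and resolves the remaining integer with a coarse TOA range. Decimeter-level TOA accuracy is assumed here for clarity; with meter-level TOA errors, a small set of integer candidates would have to be tested instead.

```python
import numpy as np

c = 3e8                          # speed of light (m/s)
f1, f2 = 28.0e9, 28.4e9          # two hypothetical FR2 carriers (Hz)
lam1 = c / f1
lam_wl = c / (f2 - f1)           # virtual (widelane) wavelength: 0.75 m

d_true = 23.317                  # hypothetical gNB-UE distance (m)
rng = np.random.default_rng(1)

# Fractional carrier phases in cycles, each with ~1% of a cycle of phase noise
phi1 = (d_true * f1 / c + rng.normal(0, 0.01)) % 1.0
phi2 = (d_true * f2 / c + rng.normal(0, 0.01)) % 1.0

# Widelane combination: d / lam_wl = N_wl + ((phi2 - phi1) mod 1)
frac_wl = (phi2 - phi1) % 1.0

# A coarse TOA range bounds the widelane integer (decimeter accuracy assumed)
d_toa = d_true + rng.normal(0, 0.1)
n_wl = round(d_toa / lam_wl - frac_wl)

d_est = lam_wl * (n_wl + frac_wl)   # cm-level; can seed the search at lam1
print(f"true {d_true:.3f} m, TOA {d_toa:.3f} m, widelane {d_est:.3f} m")
```

The widelane result can then seed a much narrower integer search at the original wavelength $\lambda_{1}$ for further refinement.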
### IV-B Multipath and NLOS

The IIoT use-cases exhibit harsh channel conditions caused by multiple reflections from objects (called “clutter”) in a dense industrial environment. Thereby, the measurements suffer from a) disturbed phase continuity, b) excessive delays with respect to the LOS transmission time, and c) angular deviation from the actual LOS direction. NLOS causes a fundamental phase error after reflection; simultaneously, a significant error is introduced into the location equations, and the IA search space is vastly extended. Consequently, centimeter-level accuracy in CP-based positioning becomes unattainable within the desired time unless NLOS measurements are excluded or mitigated.

In LOS scenarios with multipath, the Rician K-factor measures the ratio of the received power from the LOS path to the received power from all other paths [9]. It is mainly the angle of the LOS path that is useful for positioning, and antenna arrays can be used to distinguish it from the clutter. This is a classical signal processing problem that has resulted in the multiple signal classification (MUSIC), estimation of signal parameters via rotational invariance techniques (ESPRIT), and space-alternating generalized expectation maximization (SAGE) algorithms [12]. The associated computational complexity and robustness are challenging for real-time positioning.

It can be challenging to distinguish a LOS scenario from an NLOS environment with one or multiple strong reflected paths. An estimated Rician K-factor can be clearly above 0 dB, yet the dominant path might be a reflection from a misleading direction. Channel features such as maximum received power, delay spread, and departure/arrival angle spread can be used to distinguish between LOS/NLOS situations. An RSRP-based NLOS link identification algorithm is developed in [13], which performs binary hypothesis testing between LOS/NLOS link types using the average NLOS channel power of the subcarriers. Cooperative positioning is another approach, where erroneous distance measurements due to NLOS conditions are detected and corrected using information from neighboring anchors. However, a standalone device must be capable of autonomously detecting the NLOS state and correcting erroneous measurements with local information only. To this end, simple classifiers based on the received signal strength and first-path information can be replaced by more advanced machine learning (ML) techniques [9]. ML-based solutions aggregate a variety of features. For instance, under NLOS conditions, multiple reflections cause the signals to be more attenuated and to have lower energy and amplitude. While for LOS the strongest signal path corresponds to the first arrival, for NLOS the strongest path is preceded by weak components.

### IV-C Continuous-PRS

The implementation of the CP-based positioning enhancement is based on the PRS signal transmitted by the gNB. PRS is designed to support DL-based positioning schemes while delivering accuracy, coverage, and interference management. PRS can cover the full NR bandwidth, and its transmission over multiple symbols (using comb sizes of 2, 4, 6, or 12, i.e., the density of subcarriers in a PRS symbol) allows power accumulation. It can start at any physical resource block (PRB), configured in steps of 4 PRBs from 24 to 276 PRBs, giving a maximum bandwidth of 100 MHz for 30 kHz subcarrier spacing (SCS) in FR1 and 400 MHz for 120 kHz SCS in FR2 [8, 1].
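The quoted bandwidth limits follow directly from the PRB arithmetic; a minimal check (the function name is ours, for illustration only):

```python
# Maximum PRS bandwidth implied by the PRB configuration quoted above
# (24..276 PRBs in steps of 4, 12 subcarriers per PRB).
SUBCARRIERS_PER_PRB = 12

def prs_bandwidth_hz(n_prb: int, scs_hz: float) -> float:
    """Occupied PRS bandwidth for a given PRB allocation and subcarrier spacing."""
    assert 24 <= n_prb <= 276 and (n_prb - 24) % 4 == 0
    return n_prb * SUBCARRIERS_PER_PRB * scs_hz

print(prs_bandwidth_hz(276, 30e3) / 1e6)    # ~99.4 MHz -> the quoted 100 MHz in FR1
print(prs_bandwidth_hz(276, 120e3) / 1e6)   # ~397.4 MHz -> the quoted 400 MHz in FR2
```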
The current PRS structure enables the efficient determination of TOA and phase-of-arrival (POA). While a single measurement indicating a code-based shift is essential for TOA, for POA a phase shift remains constant and measurable for the signal duration, and thus can be analyzed and tracked continuously for phase noise reduction. Therefore, a continuous PRS (C-PRS) structure is advantageous compared to the current shifted PRS configuration pattern. Regardless of where the FFT sampling window starts during the signal duration, the C-PRS signals maintain orthogonality, reducing noise in the phase comparison. Moreover, when PRS signal sets from neighboring gNBs arrive with different delays, signals from distant gNBs may interfere, and only a part of the signal may be included in the FFT window. With C-PRS, the receiving UE may discard the boundary symbols, including the error-prone part, and collect only the intermediate symbols containing the full wavelength of the subcarriers. It is also essential that repeated oversampling, by shifting the FFT window over the entire signal interval, gives an identical result. This carries various implications:

* • The relative phase difference between the subcarriers is constant; thus, the phase difference between the PRS signals of different gNBs remains constant regardless of the position of the sampling window.
* • The constant difference between the gNBs' signals allows optimizing the differential measurement that solves the UE phase mismatch problem.
* • Iterative sampling results can be averaged to reduce noise, improve gain, and increase angular resolution.

With today’s rapid network development, it is becoming increasingly important to reuse existing hardware, and possibly signals, for both detection and communication functions. However, the current PRS structure, with its cyclic prefix and scattering of symbols across PRBs, makes it impossible to benefit from signal continuity. Due to the cyclic prefix, the end part of each symbol is copied and used as a prefix. As a result, most subcarriers are discontinuous at the symbol boundaries, hindering the construction of a continuous waveform over multiple symbols. To overcome this, 3GPP has initiated a discussion on C-PRS, either as a pure carrier wave of periodic wide-band sinusoidal signals or as a continuous narrow-band signal transmitted at a pre-defined carrier frequency [4]. To this end, work item [14] considers a block-type PRS with a modified prefix. It proposes a low-complexity in-band method where the desired tone signal can be easily generated as a sequence that rotates the subsequent symbol phases according to the length of the cyclic prefix interval. This process enables a seamless connection of the subcarrier waveform at the symbol boundaries (see Fig. 3). Note that the phase tracking reference signal (PTRS), designed to correct the clock’s phase noise, also adopts a similar block-type symbol configuration pattern.

## V Performance of Continuous-PRS in InF

Figure 3: Carrier phase (CP) measurement in regular PRS vs. continuous-PRS.

We designed an example simulation scenario to demonstrate the NR positioning enhancement achievable by CP measurements, including continuous carrier phase (CCP) measurements. The simulation parameters follow TR 38.857 as the typical InF scenario applicable in the performance analysis of current positioning solutions:

* • FR1-specific values: carrier frequency 3.8 GHz, bandwidth 100 MHz, SCS 30 kHz.
* • FR2-specific values: carrier frequency 28 GHz, bandwidth 400 MHz, SCS 120 kHz.

The FR1 and FR2 simulation scenarios share common settings, i.e., the PRS sequence is generated for 3276 subcarriers with a comb-6 pattern, and the complex Gaussian noise power is adjusted to a 10 dB SNR at the UE receiver. The gNB and UE positions are fixed at (x, y, height) = (100, 100, 15) m and (120, 100, 1.5) m, respectively.
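For concreteness, the scenario parameters above can be collected as follows (a sketch in our own notation, not taken from the authors' simulator):

```python
# Illustrative encoding of the simulation setup described above; names and
# structure are our own, not from the paper's codebase.
from dataclasses import dataclass

@dataclass
class InFScenario:
    carrier_hz: float
    bandwidth_hz: float
    scs_hz: float
    n_subcarriers: int = 3276   # common PRS sequence length
    comb_size: int = 6          # comb-6 PRS pattern
    snr_db: float = 10.0        # noise power set for 10 dB SNR at UE-Rx
    gnb_pos: tuple = (100.0, 100.0, 15.0)   # (x, y, height) in meters
    ue_pos: tuple = (120.0, 100.0, 1.5)

FR1 = InFScenario(carrier_hz=3.8e9, bandwidth_hz=100e6, scs_hz=30e3)
FR2 = InFScenario(carrier_hz=28e9, bandwidth_hz=400e6, scs_hz=120e3)
```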
Simulations demonstrate the prospects of standalone methods by reporting the distance error of the measurement to a single anchor in three-dimensional space. To evaluate the performance of communication systems during standardization, it is crucial to use a geometry-based stochastic channel model suitable for the propagation environment, i.e., specific close-in free-space reference distance models with frequency-dependent path loss exponents. The 3GPP TR 38.901 Indoor Factory model considers the influence of industrial production environments on the channel impulse response and the corresponding power delay profile. The simulated channel characteristics follow different versions of the InF conditions defined in 3GPP 38.901, i.e., LOS and NLOS with sparse (S) or dense (D) clutter, and low (L) or high (H) relative gNB position.

TOA-based distance measurements are made directly from the time lag at the correlation peak. They are mainly limited by the sampling frequency, the period corresponding to the chirp signal, and the presence of LOS. The comparison of LOS conditions for FR1 and FR2 indicates that broadening the band reduces the variance of the TOA, as pointed out in Table I. The simulated NLOS scenarios clearly show how severely the distance error increases in the absence of a direct path. Moreover, apart from a fundamental phase error after wave reflection, NLOS vastly extends the IA search space, for which TOA is the entry point. Therefore, the CP-based simulations were limited to LOS as the only proper scenario.

Figure 4: Cumulative distribution function (CDF) of the 3D distance error for the InF FR1 scenario with 100 MHz bandwidth.

Figure 5: Cumulative distribution function (CDF) of the 3D distance error for the InF FR2 scenario with 400 MHz bandwidth.

CP-based distance measurements follow the procedure of probing the phase of the middle subcarrier [8], which has been shown to reflect reliable UE–gNB distance information. The sampling process also limits a single phase measurement, i.e., the actual information is rounded to the nearest sampling point. Moreover, the phase noise inherent in propagation significantly influences the distance accuracy within the range of a single subcarrier period. Multiple phase measurements can provide phase noise reduction. CCP measurement assumes a constant phase difference between the received signal and the reference produced at the receiver. To this end, the simulation doubly replicates and concatenates the current OFDM symbol. Then, a 1000-fold FFT window sweep is used in the phase comparisons, with a 1-sample shift between measurements. In Figs. 4 & 5, the performance of the continuous-PRS is reflected in the CDFs obtained for CCP, which show the reduction of the phase noise variance.

An aspect that needs to be considered in phase folding is the possible phase shift resulting from receiver mobility. In an indoor positioning system, a mobile receiver moving towards a fixed transmitter with a typical speed of 3 km/h (0.83 m/s) experiences a frequency shift due to the Doppler effect of only 0.003 ppm. Therefore, the resulting phase shift can easily be neglected for short, time-coherent periods.
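The quoted Doppler figure is a one-line computation:

```python
# Quick arithmetic check of the Doppler figure quoted above: a receiver moving
# at 3 km/h toward the transmitter shifts the carrier by v/c in relative terms.
c = 3e8
v = 3 / 3.6                      # 3 km/h in m/s (~0.83 m/s)
print(v / c * 1e6)               # ~0.0028 ppm, matching the ~0.003 ppm above
print(v / c * 3.8e9)             # absolute shift at 3.8 GHz: ~10.6 Hz
```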
## VI Possible Further Improvements

The ongoing work on positioning in 5G-and-beyond networks deals with only a small set of problems that are entirely new. A significant part requires effectively adapting solutions from related fields; for instance, the CP-based angular measurement developed for Bluetooth, the IA problem already explored in GNSS, and NLOS mitigation in wireless communication. In this respect, C-PRS provides similar prospects for further research through adaptation, including:

#### VI-1 Digital femtosecond time difference

Improving the phase measurement resolution already has promising solutions in closely related areas, such as the synchronization of complex laboratory infrastructure. The digital dual mixer time difference (D-DMTD) is a digital femtosecond time difference circuit developed at CERN [15]. D-DMTD measures the phase difference between two digital signals with very accurate resolution using a relatively low-frequency counter. By sampling a signal of a particular frequency with a slightly slower clock, D-DMTD stretches the signal in the time domain, allowing intensive aliasing of the two input clocks fed to the phase detector with femtosecond time resolution.

#### VI-2 gNB/TRPs master-slave synchronization

Like the code-phase detection for TOA, the CP detection translates the corresponding POA into a distance. Once the receiver intercepts the transmitted signal, it is locked by the PLL, allowing the phase difference to be captured. However, the UE’s local oscillator phase offset can significantly influence the measurement result. This forces, e.g., double differencing of CP measurements. Yet, for differential measurements, gNBs must be precisely synchronized (to within less than one carrier wavelength), which requires sub-nanosecond variance. It is difficult to synchronize gNBs to an ideal distant clock (GNSS) with sub-nanosecond accuracy due to atmospheric fluctuations, temperature fluctuations, the inherent error of the device, etc., and a positioning accuracy of less than 1 m cannot be guaranteed [14]. However, it is relatively simple for gNBs to follow a nearby external master clock with sub-nanosecond variance. For this purpose, the discussed C-PRS can be used to trace the master clock phase continuously.

#### VI-3 Near-field positioning

When an antenna array becomes sufficiently large compared to the wavelength, the spherical curvature of the impinging wave becomes noticeable. Since the curvature depends on the propagation distance, this feature enables ranging in addition to conventional AOA estimation, so a single array can localize the transmitter. This operational regime is called the radiative near-field and exists for ranges up to the Fraunhofer distance. The combination of physically large arrays and the use of mmWave bands can jointly extend the near-field to ranges larger than a kilometer. A key implementation challenge is to keep sufficient phase-synchronization across the array to enable accurate estimation of the curvature.

## VII Conclusions

This article provides an overview of existing and emerging positioning techniques, especially illustrating how CP measurements can lead to centimeter positioning accuracy in indoor factory channels. Giving up the comb structure and cyclic prefix in the positioning reference signal and ensuring its temporal continuity enables noise reduction. Moreover, these directions open new research possibilities for providing phase measurements at the nanosecond level and introducing high-precision synchronization mechanisms.
Yet, using CP in InF scenarios has to overcome challenges related to integer ambiguity and multipath mitigation by adapting existing solutions from the GNSS domain. The latter is a critical issue where various channel features and fingerprinting techniques can be intelligently incorporated into on-device classification or network-level optimization of positioning schemes.

## References

* [1] S. Dwivedi _et al._, “Positioning in 5G networks,” _IEEE Commun. Mag._, vol. 59, no. 11, pp. 38–44, 2021.
* [2] 3GPP WID RP-222616, “Revised SID on study on expanded and improved NR positioning,” Sept. 2022.
* [3] 3GPP TR 38.857, “Study on NR positioning enhancements,” Mar. 2021.
* [4] A. Fouda _et al._, “Toward cm-level accuracy: Carrier phase positioning for IIoT in 5G-advanced NR networks,” in _IEEE PIMRC_, 2022.
* [5] K. Shamaei _et al._, “A joint TOA and DOA acquisition and tracking approach for positioning with LTE signals,” _IEEE Trans. Signal Process._, vol. 69, pp. 2689–2705, 2021.
* [6] ——, “Receiver design and time of arrival estimation for opportunistic localization with 5G signals,” _IEEE Trans. Wireless Commun._, vol. 20, no. 7, pp. 4716–4731, 2021.
* [7] 3GPP Tdoc R1-2104844, “Carrier phase based downlink angle of departure measurement,” May 2021.
* [8] Z. Zhang _et al._, “Indoor carrier phase positioning technology based on OFDM system,” _Sensors_, vol. 21, no. 20, 2021.
* [9] O. Kanhere _et al._, “Position location for futuristic cellular communications: 5G and beyond,” _IEEE Commun. Mag._, vol. 59, no. 1, pp. 70–75, 2021.
* [10] S. Fan _et al._, “Carrier phase-based synchronization and high-accuracy positioning in 5G new radio cellular networks,” _IEEE Trans. Commun._, vol. 70, no. 1, pp. 564–577, 2022.
* [11] 3GPP Tdoc R1-1901980, “Further discussion of NR RAT-dependent DL positioning,” Mar. 2019.
* [12] C. Gentile _et al._, _Multipath and NLOS Mitigation Algorithms_, 2013.
* [13] 3GPP Tdoc R1-2104909, “Mitigation of NLOS problem for NR positioning,” May 2021.
* [14] 3GPP Tdoc R1-2104880, “Carrier/subcarrier phase based enhancement for 5G NR positioning,” May 2021.
* [15] D. Tso _et al._, “D-DMTD: Digital dual mixer time difference,” Sandia National Laboratories, Report SAND2017-10097, August 2017.

Jakub Nikonowicz <EMAIL_ADDRESS> received the Ph.D. degree in telecommunication systems from Poznań University of Technology (PUT), Poznań, Poland, in 2019, where he is currently an assistant professor. His research interests include statistical signal processing and precise synchronization in distributed systems.

---

Aamir Mahmood <EMAIL_ADDRESS> is an assistant professor of communication engineering at Mid Sweden University, Sweden. He received the D.Sc. degree in communications engineering from Aalto University School of Electrical Engineering, Finland, in 2014. His research interests include time synchronization, LPWANs, and RAN optimization/management.

---

Muhammad Ikram Ashraf <EMAIL_ADDRESS> received the M.Sc. and Ph.D. degrees in telecommunication systems and communication engineering, respectively, from the University of Oulu, Finland. He is currently working as a senior research specialist, 5G Advanced, at Nokia Bell Labs, Finland. His research interests include URLLC, TSN, positioning, and AI/ML.

---

Emil Björnson <EMAIL_ADDRESS> received his Ph.D. degree from the KTH Royal Institute of Technology, Stockholm, Sweden, in 2011. He is now a professor of wireless communication at KTH.
His research interests include multi-antenna and reconfigurable intelligent surface-aided communications, radio resource allocation, and energy efficiency. He is a Fellow of the IEEE.

---

Mikael Gidlund <EMAIL_ADDRESS> is a professor of computer engineering at Mid Sweden University, Sweden. He has worked as a Senior Principal Scientist and Global Research Area Coordinator of Wireless Technologies at ABB Corporate Research, Sweden. His research interests include wireless communication and networks, access protocols, and security.
# Pretty good fractional revival via magnetic fields: theory and examples

Whitney Drazen, Mark Kempton (Department of Mathematics, Brigham Young University, Provo, UT, <EMAIL_ADDRESS>), Gabor Lippner (Department of Mathematics, Northeastern University, Boston, MA, <EMAIL_ADDRESS>)

###### Abstract

We develop the theory of pretty good quantum fractional revival in arbitrary sized subsets of a graph, including the theory of fractional cospectrality for subsets of arbitrary size. We use this theory to give conditions under which a magnetic field can induce pretty good fractional revival, and give several examples.

## 1 Introduction

“Fractional revival is a quantum transport phenomenon important for entanglement generation in spin networks” [2]. In a continuous-time quantum walk, fractional revival (FR) refers to the situation where the walk “preserves” a subset at a certain moment in time. That is, there is a subset $K$ of the nodes and a time $t$ such that if the initial state of the walk is supported on $K$, then it is also supported on $K$ at time $t$. For entanglement generation one would, typically, be interested in starting the walk from a single vertex $v\in K$ and obtaining a superposition of the nodes in $K$ at time $t$. For more background and a comprehensive characterization of FR, see [1].

Fractional revival is, in a sense, a relaxation of perfect state transfer (PST). Nevertheless, finding examples of FR has turned out to be nearly as difficult as for PST. This naturally led to the study of further relaxations of the phenomenon. Just as with PST, one can introduce an asymptotic variant that has now been routinely dubbed “pretty good” in the literature [10, 5]. Informally, _pretty good fractional revival_ (PGFR) requires a sequence of times at which the walk is closer and closer to actually preserving the subset $K$. A complete characterization of PGFR between a pair of nodes on paths and cycles was given in [3].

In a series of papers [7, 8, 4], (some of) the present authors have developed methods to construct examples of pretty good (or even perfect) state transfer using a diagonal perturbation of the matrix, sometimes referred to as a magnetic field in the context of quantum spin networks. The goal of the current paper is to extend these ideas to the case of PGFR. In particular, we further develop the theory of PGFR to obtain a practical, verifiable condition that guarantees that a subset of nodes will exhibit pretty good fractional revival after adding a “generic” constant diagonal perturbation to the matrix. In order to achieve this goal, we

* • devise a way to split the characterization of PGFR into somewhat separate “eigenvector” and “eigenvalue” conditions,
* • introduce the notion of fractional cospectrality and provide a comprehensive characterization of it in order to construct families of graphs that satisfy the eigenvector part of the condition,
* • prove a suitable generalization of the well-known Kronecker condition for pretty good state transfer to the case of pretty good fractional revival,
* • extend the field-trace method of [8] into a tool that allows one to verify this new Kronecker-type condition when certain factors of the characteristic polynomial are irreducible,
* • prove that under a generic diagonal perturbation the relevant factors are indeed irreducible.

The paper is structured as follows: in Section 2 we introduce pretty good fractional revival (PGFR) and provide a spectral characterization.
Then we generalize Kronecker’s criterion to our setting and, using the field-trace method developed in [8], derive a sufficient condition for PGFR based on the irreducibility of certain factors of the characteristic polynomial, together with a trace and degree condition on these factors. In Section 3 we explain our generalization of the idea of cospectrality to the fractional setting, and prove results analogous to the characterizations of the original notion. This allows us to prove that for fractionally cospectral subsets, under suitable diagonal perturbations of the adjacency matrix, the factors of the characteristic polynomial relevant to PGFR are indeed irreducible. Finally, in Section 4 we construct examples where we can prove fractional cospectrality of certain subsets and also verify the trace and degree condition of Theorem 2.11, thereby guaranteeing PGFR in these graphs.

## 2 Pretty good fractional revival

We work in the following general setting: fix an index set $X$ and consider a real symmetric matrix $M\in\mathbb{R}^{X\times X}$.

###### Definition 2.1.

Let $K\subset X$ be a subset of indices. We say that the $X\times X$ matrix $M$ exhibits pretty good fractional revival with respect to $K$ if

$\operatorname{cl}\\{\exp(itM)_{K\times K}:t\geq 0\\}\cap U(K)\nsubseteq\\{\rho\operatorname{Id}_{K\times K}:\rho\in\mathbb{C}\\},$

that is, if we take the family of $K\times K$ submatrices of $\exp(itM)$ for all $t\geq 0$, then the closure of this family contains a unitary matrix with at least two distinct eigenvalues. Equivalently, there is a $K\times K$ unitary matrix $H$ with at least two distinct eigenvalues, and a sequence $0<t_{1}\leq t_{2}\leq\dots$ such that $\lim_{k\to\infty}\exp(it_{k}M)_{K\times K}=H$. This includes (exact) fractional revival when $t=t_{1}=t_{2}=\dots$. The convergence can be understood entrywise or, equivalently, with respect to any standard matrix norm.

Since the $K\times K$ submatrix of a unitary matrix $A$ is unitary if and only if $A$ is block-diagonal relative to $K$, a further equivalent way to describe pretty good fractional revival is to require that there is a sequence $0<t_{1}\leq t_{2}\leq\dots$ and a matrix $A$ that is block-diagonal relative to $K$ such that $\lim_{k\to\infty}\exp(it_{k}M)=A$, and $A_{K\times K}$ has at least two distinct eigenvalues.

###### Remark 2.2.

This generalizes the concept of pretty good fractional revival on 2 nodes of a graph from [3]: two vertices $u$ and $v$ of a graph $G$ with adjacency matrix $A$ exhibit pretty good fractional revival if, for all $\epsilon>0$, there is some time $t>0$ such that

$|e^{itA}(u,u)|^{2}+|e^{itA}(u,v)|^{2}>1-\epsilon.$

###### Remark 2.3.

It would perhaps be more natural to require a non-diagonal $K\times K$ unitary matrix in the closure of $\exp(itM)_{K\times K}$ instead of simply one that is not a multiple of the identity. However, it turns out that for primitive matrices (e.g., for adjacency matrices of connected graphs) this stronger requirement is equivalent to the one in Definition 2.1. See the second part of Theorem 2.7 for an explanation of this.

Notice that $\\{\exp(itM):t\geq 0\\}\subset\langle M\rangle$ is a bounded subset of the (finite dimensional) polynomial algebra generated by $M$. Hence if $M$ exhibits pretty good fractional revival with respect to $K$ and $\lim_{k\to\infty}\exp(it_{k}M)_{K\times K}=H$, then there is a unitary matrix $\hat{H}\in\langle M\rangle$ that is block diagonal relative to $K$, and $\operatorname{Id}_{K\times K}\neq H=\hat{H}_{K\times K}$.
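Definition 2.1 can be probed numerically. The sketch below (our own illustration on the 3-vertex path with $K$ the two endpoints, not one of this paper's examples) scans times $t$ and flags submatrices $\exp(itM)_{K\times K}$ that are nearly unitary while staying far from a scalar multiple of the identity:

```python
import numpy as np
from scipy.linalg import expm

# Minimal numerical sketch of Definition 2.1: restrict exp(itM) to K and look
# for nearly-unitary submatrices that are far from rho * Id.
M = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])          # path u - c - v
K = [0, 2]                            # the two endpoints

def scores(t):
    U = expm(1j * t * M)[np.ix_(K, K)]
    defect = np.linalg.norm(U.conj().T @ U - np.eye(2))  # distance from unitarity
    offdiag = abs(U[0, 1])            # > 0 rules out the trivial case rho * Id
    return defect, offdiag

for t in np.linspace(0.01, 10, 2000):
    d, o = scores(t)
    if d < 1e-2 and o > 0.9:
        print(f"t = {t:.3f}: nearly unitary on K with |U_uv| = {o:.3f}")
        break   # fires near t = pi/sqrt(2), where this path has perfect state transfer
```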
### 2.1 Non-degenerate partition of the spectrum

Here we provide a spectral characterization of pretty good fractional revival that can be summarized as follows: the subset $K$ induces a natural partition $\mathcal{P}_{K}$ of the eigenvalues of $M$, and pretty good fractional revival is exhibited with respect to $K$ if and only if a certain simultaneous approximation problem is solvable in this partition. Let us start by recalling some important notions and facts from [1].

###### Definition 2.4.

Let $M=\sum_{i=1}^{d}\theta_{i}E_{i}$ be the spectral decomposition of $M$, and $K\subset X$ a non-empty set of indices.

1. We denote by $D_{K}$ the $X\times X$ diagonal matrix whose diagonal entries are 1 in $K$ and 0 outside of $K$.
2. The _eigenvalue support_ of $K$ is the binary relation $\Phi_{K}=\\{(\theta_{r},\theta_{s}):E_{r}D_{K}E_{s}\neq 0\\}.$
3. Define $\mathcal{P}_{K}=(\Pi_{0},\Pi_{1},\dots,\Pi_{s})$ to be the partition of $\\{1,2,\dots,d\\}$ where $\Pi_{0}=\\{i:(E_{i})_{K\times K}=0\\},$ and $\Pi_{1},\dots,\Pi_{s}$ are the remaining equivalence classes of the transitive closure of $\Phi_{K}$.

###### Lemma 2.5 (Lemma 2.5 and Theorem 2.10 from [1]).

$A=\sum_{j}c_{j}E_{j}$ is block-diagonal relative to $K$ if and only if the $c_{j}$s are equal to each other within each part $\Pi_{r}:1\leq r\leq s$.

###### Definition 2.6.

The partition $\mathcal{P}_{K}$ is _non-degenerate_ if there is a _mod 1 non-constant_ vector $(\rho_{1},\dots,\rho_{s})\in\mathbb{R}^{s}$ such that for all $\varepsilon>0$ there is a $t>0$ so that for any $1\leq r\leq s$

$\forall j\in\Pi_{r}:\lVert t\cdot\theta_{j}-\rho_{r}\rVert<\varepsilon,$ (1)

where $\lVert x\rVert=\min\\{\lvert x-n\rvert:n\in\mathbb{Z}\\}$ is the distance of $x$ to the nearest integer. In particular, $s\geq 2$ is required.

###### Theorem 2.7.

The matrix $M$ exhibits pretty good fractional revival with respect to $K$ if and only if the partition $\mathcal{P}_{K}$ is non-degenerate. Furthermore, if $M$ is primitive, then pretty good fractional revival also implies that $\operatorname{cl}\\{\exp(itM)_{K\times K}:t\geq 0\\}\cap U(K)$ contains a non-diagonal matrix.

###### Proof.

First assume $M$ exhibits pretty good fractional revival with respect to $K$, that is, there is a block-diagonal matrix $A$ such that $H=A_{K\times K}$ has at least two distinct eigenvalues, and a sequence $0<t_{1}\leq t_{2}\leq\dots$ such that $\lim_{k\to\infty}\exp(2\pi it_{k}M)=A$. Since $\exp(2\pi itM)=\sum_{j}\exp(2\pi it\theta_{j})E_{j}$, it follows that $\mu_{j}=\lim_{k\to\infty}\exp(2\pi it_{k}\theta_{j})$ exists for all $j$, and that $A=\sum\mu_{j}E_{j}$. By Lemma 2.5 this implies that there exist reals $\rho_{1},\dots,\rho_{s}$ such that $\mu_{j}=\exp(2\pi i\rho_{r})$ for all $j\in\Pi_{r}$. Thus $\lVert t_{k}\theta_{j}-\rho_{r}\rVert\to 0$ as $k\to\infty$ for all $j\in\Pi_{r}$. Finally, since $H$ has at least two distinct eigenvalues, the $\mu_{j}$s cannot all be identical, which implies that the $\rho_{r}$ cannot all be congruent mod 1. This proves the only if part.

Conversely, if we are given $(\rho_{1},\dots,\rho_{s})$ witnessing the non-degeneracy of $\mathcal{P}_{K}$, let $t_{k}$ be the time for which (1) holds with $\varepsilon=1/k$. Then, clearly, $\lim_{k\to\infty}\exp(2\pi it_{k}\theta_{j})=\exp(2\pi i\rho_{r})$ for all $j\in\Pi_{r}$. Choose a subsequence for which $\gamma_{j}=\lim_{k\to\infty}\exp(2\pi it_{k}\theta_{j})$ also exists for all $j\in\Pi_{0}$. (This can be done simply by compactness.)
Then

$A=\lim_{k\to\infty}\exp(2\pi it_{k}M)=\sum_{j\in\Pi_{0}}\gamma_{j}E_{j}+\sum_{r=1}^{s}\left(\exp(2\pi i\rho_{r})\sum_{j\in\Pi_{r}}E_{j}\right)$ (2)

is block-diagonal by Lemma 2.5, and since not all the $\rho_{r}$s are congruent mod 1, the restriction $H=A_{K\times K}$ is not a scalar multiple of the identity.

To prove that the non-degeneracy of $\mathcal{P}_{K}$ implies the existence of non-diagonal matrices in $\operatorname{cl}\\{\exp(itM)_{K\times K}:t\geq 0\\}\cap U(K)$, it suffices to show that $H=A_{K\times K}$ is non-diagonal. The argument below is adapted from the proof of Lemma 2.9 in [1]. Let $x\in X$ and let $e_{x}$ denote the corresponding standard basis vector. Then, for any $j\in\Pi_{r}$ we can write

$H(x,x)E_{j}e_{x}=E_{j}Ae_{x}=\exp(2\pi i\rho_{r})E_{j}e_{x},$

which implies that $E_{j}e_{x}=0$ unless $\exp(2\pi i\rho_{r})=H(x,x)$. By symmetry the same holds for $e_{x}^{\mathrm{T}}E_{j}$. Now suppose, for a contradiction, that $H$ is diagonal. Since it is not a scalar multiple of the identity according to the first half of the theorem, we can find two elements $x,y\in X$ such that $H(x,x)\neq H(y,y)$. Thus $e_{y}^{\mathrm{T}}E_{j}e_{x}=0$ for all $j$, since $\exp(2\pi i\rho_{r})$ cannot equal both $H(x,x)$ and $H(y,y)$. Then, however,

$e_{y}^{\mathrm{T}}M^{n}e_{x}=\sum_{j}\theta_{j}^{n}e_{y}^{\mathrm{T}}E_{j}e_{x}=0$

for any integer $n\geq 0$, which contradicts the primitivity of $M$. ∎

### 2.2 A number theoretic characterization

In this section we develop a method to verify that a partition is non-degenerate. This can be considered as a generalization of previous results for the case of pretty good state transfer. Its basis, as in earlier results, is the following number theoretic lemma due to Kronecker.

###### Lemma 2.8 (Kronecker).

Let $\theta_{1},...,\theta_{k}$ and $\zeta_{1},...,\zeta_{k}$ be arbitrary real numbers. For an arbitrarily small $\varepsilon>0$, the system of inequalities

$\lVert\theta_{j}y-\zeta_{j}\rVert<\varepsilon\ (j=1,...,k),$

has a solution $y$ if and only if, for integers $\ell_{1},...,\ell_{k}$,

$\ell_{1}\theta_{1}+\cdots+\ell_{k}\theta_{k}=0$

implies

$\lVert\ell_{1}\zeta_{1}+\cdots+\ell_{k}\zeta_{k}\rVert=0.$

###### Lemma 2.9.

Given a sequence of real numbers $\theta_{1},\dots,\theta_{k}$ and a partition $\mathcal{P}=(\Pi_{1},\dots,\Pi_{s})$ of $\\{1,2,\dots,k\\}$, the following are equivalent:

1. i) $\mathcal{P}$ is non-degenerate in the sense of Definition 2.6.
2. ii) There is a pair of indices $1\leq r_{1},r_{2}\leq s$ such that no sequence of integers $\ell_{1},\ell_{2},\dots,\ell_{k}$ satisfies all of the following:
a) $\sum_{1}^{k}\ell_{j}\theta_{j}=0$,
b) $\sum_{j\in\Pi_{r}}\ell_{j}=0$ for all $r\neq r_{1},r_{2}$,
c) $\sum_{j\in\Pi_{r_{1}}}\ell_{j}=-1$ and $\sum_{j\in\Pi_{r_{2}}}\ell_{j}=1$.

###### Remark 2.10.

Note that this lemma is a direct generalization of [3, Theorem 2.4], which only considers the $s=2$ case. The only essential new ingredient in the following proof is the use of integer lattices in place of subgroups of $\mathbb{Z}$.

###### Proof.

First, suppose that $\mathcal{P}$ is non-degenerate, as witnessed by the sequence $(\rho_{1},\dots,\rho_{s})$ where $\lVert\rho_{r_{1}}-\rho_{r_{2}}\rVert>0$ for some $r_{1},r_{2}$. Assume, for the same $r_{1},r_{2}$, that there exist integers $\ell_{1},\dots,\ell_{k}$ satisfying the above criteria.
Let $\varepsilon=\varepsilon_{0}/(k\max\\{|\ell_{j}|\\})$, and choose $t$ such that for any $1\leq r\leq s$

$\forall j\in\Pi_{r}:\lVert t\cdot\theta_{j}-\rho_{r}\rVert<\varepsilon.$

Then for all $j\in\Pi_{r}$

$\lVert t\ell_{j}\theta_{j}-\ell_{j}\rho_{r}\rVert<\varepsilon_{0}/k$

and thus

$\left\lVert\sum_{1}^{k}t\ell_{j}\theta_{j}-\sum_{r=1}^{s}\rho_{r}\sum_{j\in\Pi_{r}}\ell_{j}\right\rVert<\varepsilon_{0}.$

Here the first sum vanishes by a), while by b) and c) the second sum equals $\rho_{r_{2}}-\rho_{r_{1}}$. Thus $\lVert\rho_{r_{1}}-\rho_{r_{2}}\rVert<\varepsilon_{0}$. Since this holds for any $\varepsilon_{0}$, we get that $\lVert\rho_{r_{1}}-\rho_{r_{2}}\rVert=0$, contradicting the choice of $r_{1},r_{2}$. This proves the $i)\Rightarrow ii)$ implication.

Next, we prove $ii)\Rightarrow i)$. Assume, without loss of generality, that $r_{1}=1$ and $r_{2}=2$ is a pair of indices for which $ii)$ holds. Further assume, again without loss of generality, that $1\in\Pi_{1}$. Let $\tilde{\theta_{j}}=\theta_{j}-\theta_{1}:j=2,\dots,k$. Note that

$\sum_{j=2}^{k}\ell_{j}\tilde{\theta_{j}}=0\mbox{ implies }\sum_{1}^{k}\ell_{j}\theta_{j}=0\mbox{ and }\sum_{1}^{k}\ell_{j}=0\mbox{ where $\ell_{1}=-\sum_{2}^{k}\ell_{j}$}.$ (3)

Consider the following integer lattice

$\mathcal{S}=\\{(a_{2},\dots,a_{s})\in\mathbb{Z}^{s-1}|\exists\mkern 2.0mu\ell_{2},\dots,\ell_{k}\in\mathbb{Z}:\sum_{2}^{k}\ell_{j}\tilde{\theta_{j}}=0\mbox{ and }a_{r}=\sum_{j\in\Pi_{r}}\ell_{j}\mbox{ for all $2\leq r\leq s$}\\}.$

By (3) we see that $(1,0,0,\dots,0)\not\in\mathcal{S}$, hence $\mathcal{S}\subsetneq\mathbb{Z}^{s-1}$. This implies that the dual lattice $\mathcal{S}^{*}\supsetneq\mathbb{Z}^{s-1}$. (For instance, because the determinant of $\mathcal{S}$ has to be greater than 1, and thus the determinant of the dual has to be less than 1 in absolute value, so it cannot be an integer lattice.) In other words, there exists a vector $(\tilde{\rho}_{2},\dots,\tilde{\rho}_{s})$ that is not congruent to $(0,0,\dots,0)$ mod 1, such that $\sum_{2}^{s}\tilde{\rho}_{r}a_{r}\in\mathbb{Z}$ for all $(a_{2},\dots,a_{s})\in\mathcal{S}$. Now let

$\zeta_{j}=\left\\{\begin{array}[]{lll}\tilde{\rho}_{r}&\mbox{ if }&j\in\Pi_{r},r\geq 2\\\ 0&\mbox{ if }&j\in\Pi_{1}\end{array}\right.$

If $\ell_{2},\dots,\ell_{k}$ are integers such that $\sum_{2}^{k}\ell_{j}\tilde{\theta_{j}}=0$, then $(a_{2},\dots,a_{s})\in\mathcal{S}$ for $a_{r}=\sum_{j\in\Pi_{r}}\ell_{j}:r=2,\dots,s$. Hence

$\sum_{2}^{k}\ell_{j}\zeta_{j}=\sum_{r=2}^{s}\left(\tilde{\rho}_{r}\sum_{j\in\Pi_{r}}\ell_{j}\right)=\sum_{r=2}^{s}\tilde{\rho}_{r}a_{r}\in\mathbb{Z}$

by the choice of $(\tilde{\rho}_{2},\dots,\tilde{\rho}_{s})\in\mathcal{S}^{*}$. So Lemma 2.8 implies that for any $\varepsilon>0$ there is a $t=t(\varepsilon)$ such that $\lVert t\tilde{\theta_{j}}-\tilde{\rho}_{r}\rVert<\varepsilon$ for all $j=2,\dots,k$, where $\Pi_{r}$ is the part containing the index $j$.

To finish the proof, let $\rho_{1}$ be a mod 1 accumulation point of the sequence $t(\varepsilon)\theta_{1}$ as $\varepsilon\to 0$. Then for any $\varepsilon>0$ there is a $t=t(\varepsilon)$ such that $\lVert t\theta_{1}-\rho_{1}\rVert<\varepsilon$ and $\lVert t\tilde{\theta_{j}}-\tilde{\rho}_{r}\rVert<\varepsilon$ for all $j$, simultaneously. Set $\rho_{r}=\tilde{\rho}_{r}+\rho_{1}$. Then $(\rho_{1},\dots,\rho_{s})$ is not the constant vector mod 1.
Since

$\lVert t(\varepsilon)\theta_{j}-\rho_{r}\rVert\leq\lVert t(\varepsilon)\tilde{\theta_{j}}-\tilde{\rho}_{r}\rVert+\lVert t(\varepsilon)\theta_{1}-\rho_{1}\rVert<2\varepsilon$

for $j=2,\dots,k$, where $\Pi_{r}$ is the part containing the index $j$, the partition is non-degenerate. ∎

In general, verifying ii) is difficult. Here we generalize a tool from [4] that allows one to show that ii) holds under certain conditions that are easy to verify.

###### Theorem 2.11.

Fix a field $\mathcal{F}$, a sequence of real numbers $\theta_{1},\dots,\theta_{k}$, and a partition $\mathcal{P}=(\Pi_{1},\dots,\Pi_{s})$ of $\\{1,2,\dots,k\\}$. Suppose there are polynomials $P_{1},P_{2},\dots,P_{s}\in\mathcal{F}[x]$ that are irreducible over $\mathcal{F}$ and such that $\Pi_{r}$ is exactly the set of roots of $P_{r}$ for each $1\leq r\leq s$. If for some pair of integers $1\leq r_{1},r_{2}\leq s$

$\frac{\operatorname{Tr}(P_{r_{1}})}{\deg(P_{r_{1}})}\neq\frac{\operatorname{Tr}(P_{r_{2}})}{\deg(P_{r_{2}})},$

where $\operatorname{Tr}$ denotes the trace (i.e., the sum of roots) of a polynomial, then the partition $\mathcal{P}$ is non-degenerate.

###### Proof.

We verify ii) of Lemma 2.9 for $r_{1},r_{2}$ via the field trace, a method introduced in [8]. For a field extension $\mathcal{K}$ of $\mathcal{F}$, the _field trace_ is a linear functional $\operatorname{Tr}_{\mathcal{K}/\mathcal{F}}:\mathcal{K}\rightarrow\mathcal{F}$ defined for each element $\alpha\in\mathcal{K}$ as the trace of the $\mathcal{F}$-linear map $x\mapsto\alpha x$. See [9] for details about the field trace. For each $1\leq r\leq s$ let $\mathcal{L}_{r}$ denote the splitting field of $P_{r}$ over $\mathcal{F}$. Since $P_{r}$ is irreducible, for any $j\in\Pi_{r}$ we have

$\operatorname{Tr}_{\mathcal{L}_{r}/\mathcal{F}}(\theta_{j})=\frac{[\mathcal{L}_{r}:\mathcal{F}]}{\deg P_{r}}\sum_{i\in\Pi_{r}}\theta_{i}=\frac{[\mathcal{L}_{r}:\mathcal{F}]}{\deg P_{r}}\operatorname{Tr}(P_{r})$

according to Lemma A.1 of [4]. Let us suppose, for a contradiction, that there are integers $\ell_{j}\in\mathbb{Z}:j=1,\dots,k$ satisfying

$\displaystyle\sum_{j=1}^{k}\ell_{j}\theta_{j}$ $\displaystyle=0$ (4) $\displaystyle\sum_{j\in\Pi_{r_{1}}}\ell_{j}$ $\displaystyle=1\text{ and }\sum_{j\in\Pi_{r_{2}}}\ell_{j}=-1$ (5) $\displaystyle\sum_{j\in\Pi_{r}}\ell_{j}$ $\displaystyle=0\text{ for all }r\neq r_{1},r_{2}.$ (6)

Let $\mathcal{K}/\mathcal{F}$ be the smallest field extension containing $\mathcal{L}_{1},...,\mathcal{L}_{s}$. We apply $\operatorname{Tr}_{\mathcal{K}/\mathcal{F}}$ to (4).
Then, according to (5), (6), and the basic properties of the field trace from Lemma A.1 in [4],

$\displaystyle 0$ $\displaystyle=\operatorname{Tr}_{\mathcal{K}/\mathcal{F}}\left(\sum_{j=1}^{k}\ell_{j}\theta_{j}\right)=\sum_{r=1}^{s}[\mathcal{K}:\mathcal{L}_{r}]\operatorname{Tr}_{\mathcal{L}_{r}/\mathcal{F}}\left(\sum_{j\in\Pi_{r}}\ell_{j}\theta_{j}\right)$ $\displaystyle=\sum_{r=1}^{s}\left([\mathcal{K}:\mathcal{L}_{r}]\sum_{j\in\Pi_{r}}\ell_{j}\operatorname{Tr}_{\mathcal{L}_{r}/\mathcal{F}}\left(\theta_{j}\right)\right)$ $\displaystyle=\sum_{r=1}^{s}\left(\frac{[\mathcal{K}:\mathcal{L}_{r}][\mathcal{L}_{r}:\mathcal{F}]}{\deg(P_{r})}\operatorname{Tr}(P_{r})\sum_{j\in\Pi_{r}}\ell_{j}\right)$ $\displaystyle=[\mathcal{K}:\mathcal{F}]\sum_{r=1}^{s}\frac{\operatorname{Tr}(P_{r})}{\deg(P_{r})}\sum_{j\in\Pi_{r}}\ell_{j}=[\mathcal{K}:\mathcal{F}]\left(\frac{\operatorname{Tr}(P_{r_{1}})}{\deg(P_{r_{1}})}-\frac{\operatorname{Tr}(P_{r_{2}})}{\deg(P_{r_{2}})}\right).$

This contradicts $\frac{\operatorname{Tr}(P_{r_{1}})}{\deg(P_{r_{1}})}\neq\frac{\operatorname{Tr}(P_{r_{2}})}{\deg(P_{r_{2}})}$, hence ii) of Lemma 2.9 holds for $r_{1},r_{2}$, and thus $\mathcal{P}$ is non-degenerate. ∎

## 3 Generalized cospectrality

It is now a well-known fact that perfect state transfer can be characterized by an eigenvector and an eigenvalue condition. The eigenvector condition is called strong cospectrality (see [6]), a strengthening of the classical notion of cospectrality of two nodes. In [8, 4], cospectrality was used as a starting point to construct examples of pretty good state transfer. In [1], the study of fractional revival between two nodes led to a generalization of both cospectrality and strong cospectrality of two nodes to the fractional setting. Further, decomposability was identified as the correct generalization of strong (fractional) cospectrality to arbitrary subsets. However, the analogous extension of the theory of cospectrality was not discussed.

### 3.1 $H$-cospectrality

Since our goal is to construct examples that admit pretty good fractional revival, in this section we complete the picture by developing the theory of (fractional) cospectrality for arbitrary subsets. It turns out that most features of the classical theory carry over to the general setting.

Let $K\subset X$ be a subset of indices. For the sake of the applications later on, it turns out to be simpler to work in the more general setting of complex vector spaces and normal matrices. We consider $\mathbb{C}^{X}$ and $\mathbb{C}^{K}$ equipped with the usual Hermitian scalar product $\langle v,w\rangle:=v^{*}w\in\mathbb{C}$. For a vector $v\in\mathbb{C}^{X}$ (respectively a matrix $A\in\mathbb{C}^{X\times X}$) we denote by $\tilde{v}=v_{K}$ (respectively $\tilde{A}=A_{K\times K}$) its restriction to the subset $K$. Conversely, given a vector $v\in\mathbb{C}^{K}$ we denote by $\hat{v}\in\mathbb{C}^{X}$ its extension by 0s to the other coordinates of $X$. Note that

$\widetilde{A\hat{v}}=\tilde{A}v\mbox{ and }\langle A\hat{v},\hat{w}\rangle=\langle\tilde{A}v,w\rangle$ (7)

for any $A\in\mathbb{C}^{X\times X}$ and $v,w\in\mathbb{C}^{K}$. Let $M\in\mathbb{C}^{X\times X}$ and $H\in\mathbb{C}^{K\times K}$ be normal matrices with spectral decompositions $M=\sum_{i=1}^{d}\theta_{i}E_{i}$ and $H=\sum_{j=1}^{r}\rho_{j}F_{j}$. The $E_{i}$ and $F_{j}$ are self-adjoint projections.

###### Definition 3.1.
We say that $K$ is $H$-cospectral in $M$ (or $H$-cospectral, for short) if there is an orthonormal (with respect to the Hermitian scalar product) eigenbasis $\psi_{1},\dots,\psi_{|X|}$ such that $\tilde{\psi_{j}}$ is either 0 or an eigenvector of $H$ for all $j=1,\dots,\lvert X\rvert$.

###### Remark 3.2.

* • Clearly, the dependence on $H$ is only via its spectral idempotents $\\{F_{j}\\}$. However, it is often more convenient to refer to the matrix $H$ instead of a collection of projectors.
* • This generalizes fractional cospectrality between two nodes (the $\lvert K\rvert=2$ case) introduced in [1] to subsets of arbitrary size. In particular, if

$H=\left(\begin{array}[]{cc}0&1\\\ 1&0\end{array}\right)$ (8)

then we recover the classical notion of cospectrality.

###### Theorem 3.3.

Let $K\subset X,M,H$ be as above. The following are equivalent:

1. $K$ is fractionally cospectral with respect to $H$.
2. $H\widetilde{M^{k}}=\widetilde{M^{k}}H$ for all $k$.
3. $H\tilde{E_{i}}=\tilde{E_{i}}H$ for all $i$.
4. $F_{j}\tilde{E_{i}}=\tilde{E_{i}}F_{j}$ for all $i,j$.
5. For any eigenvectors $v,w$ of $H$ belonging to different eigenvalues, $\langle\tilde{E_{i}}v,w\rangle=0$ for all $i$.
6. For each $i$, there is an orthonormal basis of $\operatorname{Im}E_{i}$ that contains exactly $\dim\\{E_{i}\hat{v}:v\in\operatorname{Im}F_{j}\\}$ vectors that satisfy $0\neq\tilde{\psi}\in\operatorname{Im}F_{j}$ for each $j$, while the rest of the basis elements satisfy $\tilde{\psi}=0$.
7. For any eigenvectors $v,w$ of $H$ belonging to different eigenvalues, the subspaces $\langle M^{k}\hat{v}:k=0,1,\dots\rangle$ and $\langle M^{k}\hat{w}:k=0,1,\dots\rangle$ are orthogonal.

###### Proof.

We prove the implications in a cyclic order.

##### $1)\implies 2)$:

Let $\psi_{1},\dots,\psi_{\lvert X\rvert}$ be as in Definition 3.1, with corresponding eigenvalues $\lambda_{1},\dots,\lambda_{\lvert X\rvert}$. Then $M^{k}=\sum\lambda_{i}^{k}\psi_{i}\psi_{i}^{*}$ and hence $\widetilde{M^{k}}=\sum\lambda_{i}^{k}\tilde{\psi_{i}}\tilde{\psi_{i}}^{*}$. The $\tilde{\psi_{i}}$s are all eigenvectors of $H$. If $Hv=\rho v$ then

$Hvv^{*}=\rho vv^{*}=v(\bar{\rho}v)^{*}=v(H^{*}v)^{*}=vv^{*}H$

by the normality of $H$. Thus $H$ commutes with all the $\tilde{\psi_{i}}\tilde{\psi_{i}}^{*}$ terms, and in turn also with $\widetilde{M^{k}}$ for any $k$.

##### $2)\implies 3)$:

This follows since each $E_{i}$ is a polynomial of $M$, and thus $\tilde{E_{i}}$ is a linear combination of the $\widetilde{M^{k}}$s.

##### $3)\implies 4)$:

This follows since each $F_{j}$ is a polynomial of $H$.

##### $4)\implies 5)$:

Let $v$ be an eigenvector in the $F_{j}$ eigenspace of $H$, that is, $F_{j}v=v$. Then $F_{j}w=0$ since $w$ is in a different eigenspace. Now, using the self-adjointness of the $F_{j}$, we get

$\langle\tilde{E_{i}}v,w\rangle=\langle\tilde{E_{i}}F_{j}v,w\rangle=\langle F_{j}\tilde{E_{i}}v,w\rangle=\langle\tilde{E_{i}}v,F_{j}w\rangle=0$

as claimed.

##### $5)\implies 6)$:

Let $E=E_{i}$ for some fixed $i$. For any $j$ consider the subspace $\mathcal{S}_{j}\subset\operatorname{Im}E$ defined as

$\mathcal{S}_{j}=\\{E\hat{v}:v\in\operatorname{Im}F_{j}\\}.$

If $v\in\operatorname{Im}F_{j_{1}}$ and $w\in\operatorname{Im}F_{j_{2}}$ for some $j_{1}\neq j_{2}$, then $\langle E\hat{v},E\hat{w}\rangle=\langle E\hat{v},\hat{w}\rangle=\langle\tilde{E}v,w\rangle=0$ by the condition. Hence $\mathcal{S}_{j_{1}}\bot\mathcal{S}_{j_{2}}$.
Thus we can pick an orthonormal basis in each of the $\mathcal{S}_{j}$s, as well as in the orthogonal complement of $\oplus\mathcal{S}_{j}$ in $\operatorname{Im}E$. We claim that the union of these bases satisfies our requirements. To show this, consider $u\in\operatorname{Im}E$ that is orthogonal to $\mathcal{S}_{j}$, and take any $v\in\operatorname{Im}F_{j}$. Then

$0=\langle u,E\hat{v}\rangle=\langle Eu,\hat{v}\rangle=\langle u,\hat{v}\rangle=\langle\tilde{u},v\rangle$

and thus $\tilde{u}$ is orthogonal to $\operatorname{Im}F_{j}$. This means that if $u\in\mathcal{S}_{j}$ then it is orthogonal to $\mathcal{S}_{l}$ for all $l\neq j$, hence $\tilde{u}$ is orthogonal to $\operatorname{Im}F_{l}$ for all $l\neq j$. That is only possible if $\tilde{u}\in\operatorname{Im}F_{j}$. If $u=E\hat{v}\neq 0$, then $0<\langle E\hat{v},E\hat{v}\rangle=\langle\hat{v},E\hat{v}\rangle=\langle v,\tilde{u}\rangle$, thus $\tilde{u}\neq 0$ as claimed. Lastly, if $u\in\operatorname{Im}E$ is orthogonal to all the $\mathcal{S}_{j}$s, then by the same argument $\tilde{u}$ is orthogonal to all the $\operatorname{Im}F_{j}$s, hence $\tilde{u}=0$ in this case.

##### $6)\implies 1)$:

The union over all $i$ of the bases defined in 6 obviously satisfies Definition 3.1.

##### $5)\Longleftrightarrow 7)$:

This follows since $\tilde{E_{i}}$ is a linear combination of the $\widetilde{M^{k}}$s, as we have seen before, and vice versa: $\widetilde{M^{k}}$ is obviously a linear combination of the $\tilde{E_{i}}$s. Finally, $\langle M^{a}\hat{v},M^{b}\hat{w}\rangle=\langle M^{a+b}\hat{v},\hat{w}\rangle=\langle\widetilde{M^{a+b}}v,w\rangle$, so the latter is 0 for all $a,b$ if and only if the two subspaces are orthogonal. ∎

The following observation will be useful later, but we state it here since the computation is essentially the same as in the proof of $5)\implies 6)$ above.

###### Claim 3.4.

Let $v\in\operatorname{Im}F_{j}$. Then

$\langle E_{i}\hat{v},E_{i}\hat{v}\rangle=\langle\hat{v},E_{i}\hat{v}\rangle=\langle v,\widetilde{E_{i}\hat{v}}\rangle=\langle v,\tilde{E_{i}}v\rangle,$

hence $E_{i}\hat{v}\neq 0$ if and only if its restriction to $K$ is non-zero. And in this case the restriction, $\tilde{E_{i}}v$, contains a component parallel to $v$.

###### Remark 3.5.

In the case when $|K|=2$ and $H$ is a 2-by-2 matrix with eigenvectors $(p,q)$ and $(-q,p)$, our definition of $H$-cospectrality coincides with what is called a fractionally cospectral pair of nodes in [1, Theorem 3.3, condition (ii)], hence this is indeed a direct generalization of that notion.

### 3.2 A factorization of $\phi(M,t)$

###### Definition 3.6.

Let $K$ be $H$-cospectral in $M$. Define $b_{i,j}=\dim\\{E_{i}\hat{v}:v\in\operatorname{Im}F_{j}\\}$ for any $1\leq j\leq r$, and $b_{i,0}=\dim\\{u\in\operatorname{Im}E_{i}:\tilde{u}=0\\}$. Let $P_{j}(t)=\prod_{i=1}^{d}(t-\theta_{i})^{b_{i,j}}:j=0,\dots,r$.

The following is immediate from 6 of Theorem 3.3:

###### Corollary 3.7.

Let $K$ be $H$-cospectral in $M$. Then the characteristic polynomial $\phi(M)=\phi(M,t)$ can be decomposed as

$\phi(M,t)=\prod_{j=0}^{r}P_{j}(t).$

$H$-cospectrality becomes most useful when $H$ has $|K|$ distinct eigenvalues, that is, when all the $F_{j}$s have rank 1. Let us fix an eigenvector $v_{j}\in\operatorname{Im}F_{j}$ of $H$ for each $j$.

###### Lemma 3.8.

Assume $H$ has $|K|$ distinct eigenvalues. Then
1. $P_{j}(t)=p_{v_{j}}(t)$ for all $1\leq j\leq r=|K|$, where $p_{v_{j}}$ is the minimal polynomial of $M$ relative to $v_{j}$, that is, the smallest degree monic polynomial $p$ such that $p(M)\hat{v_{j}}=0$, or, equivalently, the characteristic polynomial of the action of $M$ restricted to the space $\langle\hat{v_{j}},M\hat{v_{j}},M^{2}\hat{v_{j}},\dots\rangle$.
2. $(\theta_{a},\theta_{b})\in\Phi_{K}$ if and only if there is a $j$ such that $\theta_{a},\theta_{b}$ are both roots of $P_{j}(t)$.

###### Proof.

It is well-known that the relative minimal polynomial has only simple roots, and $p_{v_{j}}(\theta_{i})=0$ if and only if $E_{i}\hat{v_{j}}\neq 0$. Since $\dim\operatorname{Im}F_{j}=1$, the integers $b_{i,j}$ can only be 0 or 1, so $P_{j}$ has only simple roots. And $\theta_{i}$ is a root of $P_{j}(t)$ if and only if $E_{i}\hat{v_{j}}\neq 0$. Thus $p_{v_{j}}$ and $P_{j}$ have exactly the same roots, and they are both monic, so they are equal.

To prove the second part, note that $(\theta_{a},\theta_{b})\in\Phi_{K}$ if and only if $E_{a}D_{K}E_{b}\neq 0$, according to Definition 2.4. Here

$E_{a}D_{K}E_{b}w=E_{a}\widehat{\widetilde{D_{K}}\widetilde{E_{b}w}}=\sum_{j}E_{a}\widehat{F_{j}\widetilde{E_{b}w}}.$

Thus if $E_{a}D_{K}E_{b}w\neq 0$ then $E_{a}\widehat{F_{j}\widetilde{E_{b}w}}\neq 0$ for some $j$. Since $\operatorname{Im}F_{j}$ is 1-dimensional, this implies both that $E_{a}\hat{v_{j}}\neq 0$ and that $F_{j}\widetilde{E_{b}w}\neq 0$. By Lemma 5.1 we see that there exists a vector $z\in\mathbb{R}^{K}$ such that $\widetilde{E_{b}w}=\widetilde{E_{b}\hat{z}}=\tilde{E_{b}}z$, and so $0\neq F_{j}\tilde{E_{b}}z=\tilde{E_{b}}F_{j}z$, which implies $\tilde{E_{b}}v_{j}\neq 0$ and hence $E_{b}\hat{v_{j}}\neq 0$. We have already established $E_{a}\hat{v_{j}}\neq 0$, and so we find that both $\theta_{a}$ and $\theta_{b}$ are roots of $P_{j}(t)$.

To see the converse direction, first note that Claim 3.4 implies $E_{i}\hat{v_{j}}\neq 0$ if and only if $\tilde{E_{i}}v_{j}=\widetilde{E_{i}\hat{v_{j}}}$ contains a component parallel to $v_{j}$. But since $\tilde{E_{i}}v_{j}=\tilde{E_{i}}F_{j}v_{j}=F_{j}\tilde{E_{i}}v_{j}\in\operatorname{Im}F_{j}$, the latter of which is 1-dimensional, we find that $E_{i}\hat{v_{j}}\neq 0$ if and only if $v_{j}$ is an eigenvector of $\tilde{E_{i}}$. If $\theta_{a}$ and $\theta_{b}$ are both roots of $P_{j}(t)$ then $E_{a}\hat{v_{j}}$ and $E_{b}\hat{v_{j}}$ are both non-zero. Hence $D_{K}E_{b}\hat{v_{j}}=\widehat{\tilde{E_{b}}v_{j}}$ is a non-zero constant multiple of $\hat{v_{j}}$, and then $E_{a}D_{K}E_{b}\hat{v_{j}}$ is also non-zero, implying that $(\theta_{a},\theta_{b})\in\Phi_{K}$. ∎

###### Corollary 3.9.

If $K$ is $H$-cospectral in $M$ for some $H$ that has no multiple eigenvalues, then $\mathcal{P}_{K}$ is the most refined partition in which, for each $j$, the roots of $P_{j}(t)$ fall in the same part.

### 3.3 Gluing and diagonal perturbation

###### Theorem 3.10.

Let $X=X_{1}\cup X_{2}$ be a splitting of the index set $X$ such that $K=X_{1}\cap X_{2}$. Let $M_{1}$ and $M_{2}$ be two matrices supported on $X_{1}$ and $X_{2}$, respectively. Suppose that $K$ is $H$-cospectral in both $M_{1}$ and $M_{2}$ for some $H\in\mathbb{C}^{K\times K}$. Then $K$ is also $H$-cospectral in $M=M_{1}+M_{2}$.

###### Proof.

By Theorem 3.3 it suffices to show that $H$ commutes with $\widetilde{M^{k}}$ for all $k$.
Note that $M^{k}$ can be expressed as

$M^{k}=\sum_{\sum(a_{i}+b_{i})=k}M_{1}^{a_{1}}M_{2}^{b_{1}}M_{1}^{a_{2}}M_{2}^{b_{2}}\cdots$

Let $\Pi\in\mathbb{R}^{K\times X}$ denote the projection onto $K$, that is, $\Pi$ has 1s on its diagonal in $K$. Then $\tilde{N}=\Pi N\Pi^{\mathrm{T}}$ for any $X\times X$ matrix $N$. Further, examining the matrix multiplication, one can see that for any matrices $N_{i}$ supported on $X_{i}$ for $i=1,2$, one has

$N_{1}N_{2}\Pi^{\mathrm{T}}=N_{1}\Pi^{\mathrm{T}}\Pi N_{2}\Pi^{\mathrm{T}}\mbox{ and }N_{2}N_{1}\Pi^{\mathrm{T}}=N_{2}\Pi^{\mathrm{T}}\Pi N_{1}\Pi^{\mathrm{T}}$

Thus

$\widetilde{M^{k}}=\Pi M^{k}\Pi^{\mathrm{T}}=\sum_{\sum(a_{i}+b_{i})=k}\Pi M_{1}^{a_{1}}\Pi^{\mathrm{T}}\Pi M_{2}^{b_{1}}\Pi^{\mathrm{T}}\Pi M_{1}^{a_{2}}\Pi^{\mathrm{T}}\Pi M_{2}^{b_{2}}\Pi^{\mathrm{T}}\cdots=\sum_{\sum(a_{i}+b_{i})=k}\widetilde{M_{1}^{a_{1}}}\widetilde{M_{2}^{b_{1}}}\widetilde{M_{1}^{a_{2}}}\widetilde{M_{2}^{b_{2}}}\cdots.$

By assumption, $H$ commutes with each $\widetilde{M_{1}^{a}}$ and $\widetilde{M_{2}^{b}}$, so it also commutes with $\widetilde{M^{k}}$. ∎

In the next section we will study a diagonal perturbation of $M$ of the form $M+QD_{K}$. Using the previous theorem with $X_{1}=X,X_{2}=K,M_{1}=M,M_{2}=D_{K}$, and noting that $K$ is $H$-cospectral in $D_{K}$ for any $H$, we get the following.

###### Corollary 3.11.

If $K$ is $H$-cospectral in $M$ then it is also $H$-cospectral in $M+QD_{K}$ for any $Q\in\mathbb{R}$.

### 3.4 Diagonal perturbation

Let $H$ be a normal matrix in $\mathbb{C}^{K\times K}$ that has $|K|$ distinct eigenvalues and an orthonormal eigenbasis $v_{1},v_{2},\dots,v_{|K|}$. In this section we assume that $K$ is $H$-cospectral in $M$. Let $\mathcal{F}\supseteq\mathbb{Q}$ denote the smallest number field containing all entries of $M$ and all roots of $\phi(H)$. Then $v_{1},\dots,v_{|K|}\in\mathcal{F}^{|K|}$. Let $Q\in\mathbb{R}$ denote a transcendental number that is algebraically independent of $\mathcal{F}$, and consider $M^{K}=M+Q\cdot D_{K}\in\mathbb{R}[Q]^{X\times X}$. Then, according to Corollary 3.11, $K$ is also $H$-cospectral in $M^{K}$. In particular, according to Corollary 3.7 and Lemma 3.8, we have the factorization

$\phi(M^{K})=P_{0}(t)\cdot\prod_{j=1}^{|K|}P_{j}(t)$ (9)

where $P_{j}$ is the minimal polynomial of $M^{K}$ relative to $\hat{v}_{j}$ for $j=1,\dots,|K|$.

###### Claim 3.12.

$P_{0}\in\mathcal{F}[t]$ and $P_{j}\in\mathcal{F}[Q,t]$ for $j=1,\dots,|K|$. Furthermore, the $Q$-degree of $P_{j}$ is 1 and $\operatorname{Tr}P_{j}-Q\in\mathcal{F}$ for each $j=1,\dots,|K|$.

###### Proof.

Since $P_{j}$, for $j\geq 1$, is a relative minimal polynomial of a matrix in $\mathcal{F}(Q)^{X\times X}$ relative to a vector with entries in $\mathcal{F}$, it is automatic that $P_{j}\in\mathcal{F}(Q)[t]$. Then (9) implies the same for $P_{0}$. However, $\mathcal{F}(Q)$ is the quotient field of the ring $\mathcal{F}[Q]$, which is a UFD. Hence by Gauss’s Lemma the factorization (9) is valid in $\mathcal{F}[Q,t]$. The $Q$-degree of $\phi(M^{K})$ is obviously $|K|$. It is easy to see that each $P_{j}:j\geq 1$ is at least linear in $Q$ and, in fact, satisfies $\operatorname{Tr}P_{j}-Q\in\mathcal{F}$. (This follows from a short analysis of the $Q^{s}$-terms in the linear combination of the $(M^{K})^{s}\hat{v}_{j}$ vectors that define the coefficients of $P_{j}$. See [4, Lemma 3.3] for a detailed proof.) This is only possible if the $Q$-degree of $P_{0}$ is 0, and the $Q$-degrees of the other $P_{j}$s are exactly 1. ∎
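The factorization (9) and Claim 3.12 can be observed concretely on a small example. The sketch below (our own illustration: the 3-vertex path with $K$ the two endpoints and $H$ the swap matrix, so $|K|=2$) factors $\phi(M^{K},t)$ symbolically:

```python
import sympy as sp

# Minimal sketch of the factorization (9) on the 3-vertex path with K = {endpoints}.
# H is the swap matrix (classical cospectrality), so we expect
# phi(M^K, t) = P_1(t) * P_2(t) with each P_j linear in Q (Claim 3.12).
Q, t = sp.symbols('Q t')
MK = sp.Matrix([[Q, 1, 0],
                [1, 0, 1],
                [0, 1, Q]])          # M + Q*D_K for the path u - c - v

phi = MK.charpoly(t).as_expr()
print(sp.factor(phi, t))             # (t - Q) * (t**2 - Q*t - 2)
# P_2 = t - Q is the minimal polynomial relative to the zero-extension of
# v_2 = (1, -1)/sqrt(2); P_1 = t**2 - Q*t - 2 corresponds to v_1 = (1, 1)/sqrt(2).
# Note Tr(P_2)/deg(P_2) = Q while Tr(P_1)/deg(P_1) = Q/2, as in Theorem 2.11.
```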
###### Theorem 3.13.

$P_{j}$ is irreducible in $\mathcal{F}[Q,t]$ (and hence also in $\mathcal{F}(Q)[t]$) for all $j\geq 1$.

###### Proof.

Suppose for a contradiction that, say, $P_{1}$ is reducible in $\mathcal{F}[Q,t]$. Since its $Q$-degree is 1, this means $P_{1}(t)=R(t)\cdot\tilde{P}_{1}(t)$ for some non-constant $R\in\mathcal{F}[t]$ and $\tilde{P}_{1}\in\mathcal{F}[t,Q]$. Let $\theta$ be a root of $R(t)$, and let $m$ denote the multiplicity of $\theta$ as a root of $P_{0}(t)$. Then $\theta$ is a root of $\phi(M^{K})$ with multiplicity at least $m+1$ for all values of $Q$. Since $D_{K}$ is a projection matrix, Lemma 5.2 implies that there are $m+1$ orthogonal eigenvectors of $M$ (corresponding to the $\theta$ eigenvalue) that vanish on $K$. But then the multiplicity of $\theta$ in $P_{0}$ should have been at least $m+1$ according to its definition. This is a contradiction, hence each $P_{j}:j=1,2,\dots,|K|$ is indeed irreducible. ∎

## 4 Examples

In this section we apply the above techniques to provide explicit families of graphs where diagonal perturbation can be shown to induce pretty good fractional revival. We use a graph and its adjacency matrix interchangeably. The blueprint is the following:

* • Find a family of graphs and a subset $K$ of the nodes which are $H$-cospectral for some normal matrix $H$ with only simple eigenvalues.
* • Add a transcendental diagonal perturbation to the nodes in $K$.
* • Identify the factorization $\Phi(M^{K},t)=\prod P_{j}(t)$ of the characteristic polynomial as the relative minimal polynomials of the eigenvectors of $H$.
* • Using this, compute the degrees and traces of the $P_{j}$ polynomials.
* • The transcendentality of the diagonal perturbation guarantees that the $P_{j}$ are irreducible over a certain field, hence each part in the partition $\mathcal{P}_{K}$ coincides with the roots of one of the $P_{j}$s.
* • Show that for some choice of $i,j$ we have $\operatorname{Tr}P_{i}/\deg P_{i}\neq\operatorname{Tr}P_{j}/\deg P_{j}$. This implies that $\mathcal{P}_{K}$ is non-degenerate, and hence there is pretty good fractional revival relative to $K$.

### 4.1 PGFR on 2 nodes

Here we present two infinite families of graphs, both of which are built on a path with $2k+1$ edges. In $S_{k,m}$ we take a path with $2k+1$ edges and add a loop edge of weight $m$ at the $(k+1)$st node, that is, one of the nodes forming the middle edge. In $T_{k}$ we take a path with $2k+1$ edges and add two new nodes: one adjacent to nodes $k$ and $k+1$ on the path, the other adjacent to node $k+3$ on the path. In both families, the endpoints of the path exhibit pretty good fractional revival under magnetic fields applied to them.

Figure 1: Two families of graphs with cospectral pairs: $S_{k,m}$ (left) and $T_{k}$ (right).

###### Theorem 4.1.

Let $M$ denote the adjacency matrix of either $S_{k,m}$ or $T_{k}$, and let $K=\\{u,v\\}$ be the endpoints of the path, as in the figure. Let $Q$ be a transcendental real number. Then $M^{K}=M+Q\cdot D_{K}$ exhibits PGFR with respect to $K$.

The proof of this result requires some preparation, as outlined in the blueprint above. The first step is to verify fractional cospectrality of $u$ and $v$.

###### Lemma 4.2.

Let $M$ denote the adjacency matrix of either $S_{k,m}$ or $T_{k}$. Then for all $j\geq 0$

$(M^{j})_{u,u}-(M^{j})_{v,v}=c\cdot(M^{j})_{u,v}$

where $c=m$ in the case of $S_{k,m}$ and $c=2$ in the case of $T_{k}$.

###### Proof.

Let $n$ be the size of the matrix $M$.
Since $M^{j}$, for any $j\geq n$, is a linear combination of $M^{0},M^{1},\dots,M^{n-1}$, it suffices to check the equation for $j=0,\dots,n-1$. First we consider the $S_{k,m}$ case. Here $n=2k+2$. For $j=0,1,\dots,2k$ it is clear that $(M^{j})_{u,u}=(M^{j})_{v,v}$ and $(M^{j})_{u,v}=0$. For $j=2k+1$ we have $(M^{2k+1})_{u,u}-(M^{2k+1})_{v,v}=m$ and $(M^{2k+1})_{u,v}=1$. Hence $c=m$ satisfies the equation for all $j=0,\dots,n-1$. Next we consider $T_{k}$. Here $n=2k+4$. Clearly, $(M^{j})_{x,y}$ counts walks of length $j$ from $x$ to $y$ in the graph. This graph is symmetric, except for the $xy$ edge. For $j<2k+4$ no $v\to v$ walk of length $j$ can use the $xy$ edge, and for $j<2k+1$ no $u\to u$ walk of length $j$ can use the $xy$ edge. Hence for $j<2k+1$ we have $(M^{j})_{u,u}=(M^{j})_{v,v}$, and for $2k+1\leq j\leq 2k+3$ the difference $(M^{j})_{u,u}-(M^{j})_{v,v}$ equals the number of those $u\to u$ walks of length $j$ that do use the $xy$ edge. For $j=2k+1$ this number is 2, since such a walk can use this edge only once, and the rest of the walk then has to run straight between $u$ and $x$ (respectively between $u$ and $y$). For $j=2k+2$ such a walk has to use the $xy$ edge twice due to parity constraints. But then it has to use it twice back-to-back due to distance constraints. Hence there are only two such walks: $u\to x\to y\to x\to u$ and $u\to y\to x\to y\to u$. For both $j=2k+1$ and $j=2k+2$ there is clearly only a single $u\to v$ walk of this length. Hence $(M^{j})_{u,u}-(M^{j})_{v,v}=2\cdot(M^{j})_{u,v}$ for $j=0,1,\dots,2k+2$. It remains to check that the same identity holds for $j=2k+3$. Due to parity and length constraints, each $u\to u$ walk of length $2k+3$ must “go around” the $xyz$ triangle exactly once. Reversal of the walk yields a bijection between those walks that go around clockwise and those that do so counterclockwise. By similar considerations, each $u\to v$ walk of length $2k+3$ must traverse the $zy$ edge in this direction. It follows that $(M^{2k+3})_{u,u}-(M^{2k+3})_{v,v}=2\cdot(M^{2k+3})_{u,v}$ in this case as well, which completes the proof. ∎

###### Corollary 4.3.

1. 1. According to Remark 3.5, in both families $K=\{u,v\}$ is $H$-cospectral for a 2-by-2 matrix $H$ whose eigenvectors are $(p,q)$ and $(-q,p)$ where $p/q-q/p=c$.
2. 2. Furthermore, according to Corollary 3.7 and Lemma 3.8, as long as $c\neq\pm\infty$, the characteristic polynomial factors as $\Phi(M,t)=P_{0}(t)P_{1}(t)P_{2}(t)$ where $P_{1}(t)$ (respectively $P_{2}(t)$) is the minimal polynomial of $M$ relative to $v_{1}$ (respectively $v_{2}$), where these vectors are defined as $v_{1}(x)=\left\{\begin{array}{cc}p/q:&x=u\\ 1:&x=v\\ 0:&x\neq u,v\end{array}\right.\qquad v_{2}(x)=\left\{\begin{array}{cc}-q/p:&x=u\\ 1:&x=v\\ 0:&x\neq u,v\end{array}\right.$
3. 3. Then, according to Corollary 3.11, $K$ is also $H$-cospectral in $M^{K}=M+QD_{K}$, and $\Phi(M^{K},t)=P^{K}_{0}(t)P^{K}_{1}(t,Q)P^{K}_{2}(t,Q)$ where $P^{K}_{1}$ and $P^{K}_{2}$ are degree 1 in $Q$, and for any choice of a transcendental $Q_{0}$, the one-variable polynomials $P^{K}_{i}(t,Q_{0}):i=1,2$ are irreducible over $\mathbb{Q}(p/q,Q_{0})$.

Next, we show that the traces of $P^{K}_{1}$ and $P^{K}_{2}$ are not equal to each other.

###### Lemma 4.4.

With the above notation, $\deg P^{K}_{1}=\deg P^{K}_{2}$ but $\operatorname{Tr}P^{K}_{1}\neq\operatorname{Tr}P^{K}_{2}$. In particular they are not the same polynomial.

###### Proof.

Since $c$ in Lemma 4.2 is always an integer and $p/q,-q/p$ are the roots of $x-1/x=c$, it follows that $p/q$ and $-q/p$ are always quadratic integers that are each other’s conjugates.
This conjugation maps $v_{1}$ to $v_{2}$ in Corollary 4.3 and thus also $P^{K}_{1}$ to $P^{K}_{2}$. So these polynomials always have the same degree. They are both linear in $Q$, and, by definition, substituting $Q=0$ into them we get $P_{1}$ and $P_{2}$ respectively. So it is sufficient to show that $\operatorname{Tr}P_{1}\neq\operatorname{Tr}P_{2}$. We will do so by examining specific coordinates of $M^{j}v_{1}$ and $M^{j}v_{2}$ for $j\leq k+2=n/2$. We outline the computation for the $T_{k}$ case only, as the $S_{k,m}$ case follows in a similar but simpler way. For $T_{k}$ we saw that $c=2$ and thus $p/q=1+\sqrt{2}$ and $-q/p=1-\sqrt{2}$. The following are easily checked by direct calculation of walk counts in $T_{k}$:

$(M^{j})_{u,y}=\left\{\begin{array}{cl}0:&j<k\\ 1:&j=k\\ 1:&j=k+1\\ k+3^{*}:&j=k+2\end{array}\right.\qquad(M^{j})_{v,y}=\left\{\begin{array}{cl}0:&j\leq k\\ 1:&j=k+1\\ 0:&j=k+2\end{array}\right.$

$(M^{j})_{u,b}=0:\ j\leq k+2\qquad(M^{j})_{v,b}=\left\{\begin{array}{cl}0:&j<k\\ 1:&j=k\\ 0:&j=k+1\\ k+1^{*}:&j=k+2\end{array}\right.$

The entries marked by $(*)$ can be seen by noting that a walk of the appropriate length will use exactly one edge more than once. Specifying which edge this is determines the walk. In the $u\to y$ case this can be any of the edges on the $uy$ path, the $xz$ edge, or any of the other two edges incident to $y$. In the $v\to b$ case this can be any of the edges on the $va$ path, or one of the other two edges incident to $a$. Hence we get that

$M^{j}v_{1}|_{y,b}=\left\{\begin{array}[]{cl}(0,0):&j<k\\ (1+\sqrt{2},1):&j=k\\ (2+\sqrt{2},0):&j=k+1\\ (k+3+(k+3)\sqrt{2},k+1):&j=k+2\end{array}\right.\qquad M^{j}v_{2}|_{y,b}=\left\{\begin{array}[]{cl}(0,0):&j<k\\ (1-\sqrt{2},1):&j=k\\ (2-\sqrt{2},0):&j=k+1\\ (k+3-(k+3)\sqrt{2},k+1):&j=k+2\end{array}\right.$

On one hand, these calculations imply that the degrees of $P_{1}$ and $P_{2}$ are both at least $k+2$, but since their product has degree $2k+4$, their degrees must, in fact, be equal to $k+2$. If $P_{1}(t)=t^{k+2}+c_{1}t^{k+1}+c_{2}t^{k}+\dots$ then $k+1+c_{2}=0$ from the $b$ coordinate of $0=P_{1}(M)v_{1}$, and $(k+3)(1+\sqrt{2})+c_{1}(2+\sqrt{2})+c_{2}(1+\sqrt{2})=0$ from the $y$ coordinate of $0=P_{1}(M)v_{1}$. Hence $c_{2}=-k-1$ and $c_{1}=-(2+2\sqrt{2})/(2+\sqrt{2})=-\sqrt{2}$. So $\operatorname{Tr}P_{1}=\sqrt{2}$. Similarly $\operatorname{Tr}P_{2}=-\sqrt{2}$. ∎

We are finally ready to prove the validity of our examples.

###### Proof of Theorem 4.1.

According to Corollary 4.3 the polynomials $P^{K}_{1}$ and $P^{K}_{2}$ are irreducible and distinct from $P^{K}_{0}$. By Lemma 4.4 they are distinct from each other, thus it follows that none of these polynomials share roots. Then by part 2) of Lemma 3.8, the eigenvalue support consists entirely of pairs of roots of $P^{K}_{1}$ and pairs of roots of $P^{K}_{2}$. Thus, the partition $\mathcal{P}_{K}=(\Pi_{0},\Pi_{1},\Pi_{2})$ is such that $\Pi_{1}$ consists exactly of all roots of $P^{K}_{1}$ and $\Pi_{2}$ consists exactly of all roots of $P^{K}_{2}$. Now, Lemma 4.4 combined with Theorem 2.11 implies that the partition $\mathcal{P}_{K}$ is non-degenerate. Hence, our result follows by Theorem 2.7. ∎

### 4.2 Cyclic Symmetry

Consider a graph $G$ with an automorphism $T:V(G)\to V(G)$ of order $r$. We denote its adjacency matrix by $M$.
Let $K\subset V(G)$ be an orbit of cardinality $r$. In this section we establish certain conditions under which $M^{K}=M+Q\cdot D_{K}$ exhibits PGFR relative to $K$. This turns out to be easier than for the previous examples. Since $T$ has order $r$, it must act as a cyclic permutation on any orbit of cardinality $r$, and on $K$ in particular. Let $H$ denote the $K\times K$ permutation matrix describing the (cyclic) action of $T$ on $K$. It is obvious then that $H$ commutes with $\widetilde{M^{k}}$ for any $k\geq 0$, and thus $K$ is $H$-cospectral in $M$, and also in $M^{K}$ by Corollary 3.11. The eigenvalues of $H$ are simple, with eigenvectors $v_{k}=(1,\rho^{k},\rho^{2k},\dots,\rho^{(r-1)k}):k=1,\dots,r,$ (10) where $\rho=e^{\frac{2\pi i}{r}}$. Thus $\Phi(M^{K},t)=P_{0}(t)\prod_{k=1}^{r}P_{k}(t,Q)$ and, according to Lemma 3.8, for each $k=1,\dots,r$ the polynomial $P_{k}(t)$ is the relative minimal polynomial of $v_{k}$. According to Theorem 3.13 the polynomials $P_{1},\dots,P_{r}$ are irreducible, so they have disjoint sets of roots unless they are the same polynomial. Hence in the partition $\mathcal{P}_{K}$ each part is the set of roots of one of the $P_{k}$ polynomials.

###### Lemma 4.5.

Let $G,T,K$ be as above. If either $\deg P_{i}\neq\deg P_{j}$ or $\operatorname{Tr}P_{i}\neq\operatorname{Tr}P_{j}$ for some $i,j\geq 1$ then $M^{K}$ exhibits PGFR relative to $K$.

###### Proof.

By Claim 3.12, each of the polynomials $P_{k}:k=1,\dots,r$ has $\operatorname{Tr}P_{k}-Q\in\mathbb{Q}(\rho)$. Since $Q$ is transcendental, if $\deg P_{i}\neq\deg P_{j}$ then $\operatorname{Tr}P_{i}/\deg P_{i}\neq\operatorname{Tr}P_{j}/\deg P_{j}$. On the other hand, if $\deg P_{i}=\deg P_{j}$ but $\operatorname{Tr}P_{i}\neq\operatorname{Tr}P_{j}$ then again clearly $\operatorname{Tr}P_{i}/\deg P_{i}\neq\operatorname{Tr}P_{j}/\deg P_{j}$. Thus, we are done by Theorem 2.11. ∎

#### 4.2.1 Orbits of Unequal Size

###### Theorem 4.6.

Let $G$ and $T$ be as above. Let $d$ denote the largest distance between $K$ and any other node of $G$. If $r(d+1)>|V(G)|$ then $M^{K}=M+Q\cdot D_{K}$ exhibits PGFR with respect to $K$ for any transcendental $Q\in\mathbb{R}$.

###### Remark 4.7.

For this condition to hold, it is necessary that not all orbits of $T$ have size $r$.

###### Proof.

Since there is a node of $G$ that is $d$ distance away from $K$, and since $v_{r}=(1,1,\dots,1)$, we see that for $j=0,1,\dots,d$ the vectors $(M^{K})^{j}v_{r}$ have strictly growing support, hence they cannot be linearly dependent. Thus $\deg P_{r}\geq d+1$. Let $j_{0}$ be such that $P_{j_{0}}$ has the smallest degree among $P_{1},\dots,P_{r-1}$. We have $|V(G)|=\deg P_{0}+\deg P_{r}+\sum_{j=1}^{r-1}\deg P_{j}\geq d+1+(r-1)\deg P_{j_{0}}$, hence $\deg P_{j_{0}}\leq(|V(G)|-d-1)/(r-1)<d+1$ according to the conditions on $d$ and $|V(G)|$. Thus $\deg P_{j_{0}}<d+1\leq\deg P_{r}$, and the statement follows from Lemma 4.5. ∎

#### 4.2.2 Cycles with Added Diamond Graphs

In this section, we consider another family with general cyclic symmetry. We define $G_{r}$ to be the graph obtained by starting with a cycle $C_{r}$ of order $r$, and attaching along each edge a diamond graph (two triangles sharing an edge). The graph $G_{5}$ is pictured in Figure 2.

Figure 2: The graph $G_{5}$.

Note that $G_{r}$ has the cyclic group of order $r$ as its automorphism group. There are three orbits of the automorphism group, consisting of the vertices of degree 5, the vertices of degree 3, and the vertices of degree 2, respectively.

###### Theorem 4.8.
Let $K$ be the vertices of any single orbit, for instance the vertices of the central cycle $C_{r}$, and as above $M^{K}=M+Q\cdot D_{K}$ for transcendental $Q$. Then $M^{K}$ exhibits PGFR with respect to $K$.

###### Proof.

By Lemma 4.5 it suffices to show that $\operatorname{Tr}P_{r}\neq\operatorname{Tr}P_{k}$ for some other $1\leq k\leq r-1$, where $P_{k}$ is the minimal polynomial of $M^{K}$ relative to the $v_{k}$ vector defined in (10). Let us define $A_{C_{r}}$ to be the adjacency matrix for $C_{r}$ and $R=\begin{bmatrix}1&1&0&0&\cdots&0\\ 0&1&1&0&\cdots&0\\ 0&0&1&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\ddots&\vdots\\ 0&0&0&\cdots&1&1\\ 1&0&0&\cdots&0&1\end{bmatrix}$ and note that $M^{K}=\begin{bmatrix}A_{C_{r}}+Q\cdot I&R&I\\ R^{T}&0&I\\ I&I&0\end{bmatrix}.$ Let $\lambda_{k}=\rho^{-k}+\rho^{k}$ and observe that a simple calculation yields $A_{C_{r}}v_{k}=\lambda_{k}v_{k}$, $Rv_{k}=(1+\rho^{k})v_{k}$, and $R^{T}v_{k}=(1+\rho^{-k})v_{k}$. Thus we can verify that $\begin{bmatrix}A_{C_{r}}+Q\cdot I&R&I\\ R^{T}&0&I\\ I&I&0\end{bmatrix}\begin{bmatrix}av_{k}\\ bv_{k}\\ cv_{k}\end{bmatrix}=\begin{bmatrix}a^{\prime}v_{k}\\ b^{\prime}v_{k}\\ c^{\prime}v_{k}\end{bmatrix}$ as an equation of $3r$-dimensional vectors, where $\begin{bmatrix}\lambda_{k}+Q&1+\rho^{k}&1\\ 1+\rho^{-k}&0&1\\ 1&1&0\end{bmatrix}\begin{bmatrix}a\\ b\\ c\end{bmatrix}=\begin{bmatrix}a^{\prime}\\ b^{\prime}\\ c^{\prime}\end{bmatrix}.$ It is easy to see that $\{(av_{k},bv_{k},cv_{k}):a,b,c\in\mathbb{C}\}$ is the subspace generated by $(M^{K})^{j}v_{k}:j=0,1,\dots$, hence the roots of the relative minimal polynomial $P_{k}$ are exactly the eigenvalues of $N_{k}:=\begin{bmatrix}\lambda_{k}+Q&1+\rho^{k}&1\\ 1+\rho^{-k}&0&1\\ 1&1&0\end{bmatrix}.$ Thus $P_{k}$ has degree 3, and we can see that $\operatorname{Tr}P_{k}=\operatorname{Tr}N_{k}=\lambda_{k}+Q$ for each $k$. Note in particular that $\operatorname{Tr}P_{r}\neq\operatorname{Tr}P_{k}$ for $k\neq r$, since $\lambda_{r}=2$ while $\lambda_{k}=2\cos(2\pi k/r)<2$ for $k=1,\dots,r-1$, and hence Lemma 4.5 finishes the proof. ∎

## 5 Appendix

###### Lemma 5.1.

Let $E\in\mathbb{R}^{X\times X}$ be a projection, and $K\subset X$. If $v=Ew$ then there is a $w^{\prime}\in\mathbb{R}^{K}$ such that $\tilde{v}=\tilde{E}w^{\prime}$. Here we used the notation from Section 3.1.

###### Proof.

Let $F=E_{K\times X}$. Then $\{\tilde{v}:v\in\operatorname{Im}E\}=\operatorname{Im}F$. So it is sufficient to show that $\operatorname{Im}\tilde{E}=\operatorname{Im}F$. In fact, $\operatorname{Im}\tilde{E}\leq\operatorname{Im}F$ by definition, so showing $\operatorname{Im}F\leq\operatorname{Im}\tilde{E}$ suffices. Since $E$ is a projection, we can write $E=\sum_{j}v_{j}v_{j}^{\mathrm{T}}$, and $\tilde{E}=\sum_{j}\tilde{v_{j}}\tilde{v_{j}}^{\mathrm{T}}$. Thus $\ker\tilde{E}=\{w\in\mathbb{R}^{K}\,|\,\forall j:\tilde{v_{j}}^{\mathrm{T}}w=0\}.$ (11) Since $\tilde{E}$ is self-adjoint, $\operatorname{Im}\tilde{E}$ is the orthogonal complement of $\ker\tilde{E}$. From (11) it is clear that $\operatorname{Im}\tilde{E}=\langle\tilde{v_{1}},\tilde{v_{2}},\dots\rangle$. On the other hand, $F=\sum\tilde{v_{j}}v_{j}^{\mathrm{T}}$, so clearly $\operatorname{Im}F\leq\langle\tilde{v_{1}},\tilde{v_{2}},\dots\rangle=\operatorname{Im}\tilde{E}$. ∎

###### Lemma 5.2.

Let $M$ be a symmetric matrix and $N$ be a projection matrix. Suppose $\theta$ has multiplicity $k$ as an eigenvalue of $M+Q\cdot N$ for all $Q\in\mathbb{R}$.
Then $\ker N$ contains $k$ orthonormal eigenvectors of $M$, each with eigenvalue $\theta$.

###### Proof.

Suppose we have already exhibited $0\leq j\leq k-1$ such orthonormal vectors, $w_{1},\dots,w_{j}\in\ker N$. We exhibit one more as follows. Let $v_{Q}$ be a unit-length eigenvector of $M+Q\cdot N$ with eigenvalue $\theta$ that is orthogonal to $w_{1},\dots,w_{j}$. Such vectors exist because the multiplicity of $\theta$ is more than $j$, and $w_{1},\dots,w_{j}$ are all eigenvectors of $M+Q\cdot N$ as well. Let $w$ be a subsequential limit of the $v_{Q}$ as $Q\to 0$. Such a limit exists because of compactness. From now on we restrict $Q$ to such a subsequence. Clearly $w$ is unit-length and orthogonal to $w_{1},\dots,w_{j}$. Taking the limit as $Q\to 0$ in $(M+Q\cdot N)v_{Q}=\theta v_{Q}$ gives that $Mw=\theta w$. It remains to show that $w\in\ker N$. Taking the scalar product of both sides with $w$, and using the symmetry of $M$ as well as $Mw=\theta w$, we obtain $\theta\langle v_{Q},w\rangle=\langle Mv_{Q},w\rangle+Q\langle Nv_{Q},w\rangle=\langle v_{Q},Mw\rangle+Q\langle Nv_{Q},w\rangle=\langle v_{Q},\theta w\rangle+Q\langle Nv_{Q},w\rangle.$ Hence $\langle Nv_{Q},w\rangle=0$, from which we get $\langle Nw,w\rangle=\langle Nw,Nw\rangle=0$ after passing to the limit and using that $N$ is idempotent. Thus $Nw=0$ as claimed. ∎

## References

* [1] Ada Chan, Gabriel Coutinho, Whitney Drazen, Or Eisenberg, Chris Godsil, Mark Kempton, Gabor Lippner, Christino Tamon, and Hanmeng Zhan. Fundamentals of fractional revival in graphs. Linear Algebra and its Applications, 655:129–158, 2022.
* [2] Ada Chan, Gabriel Coutinho, Christino Tamon, Luc Vinet, and Hanmeng Zhan. Quantum fractional revival on graphs. Discrete Applied Mathematics, 269:86–98, 2019.
* [3] Ada Chan, Whitney Drazen, Or Eisenberg, Mark Kempton, and Gabor Lippner. Pretty good quantum fractional revival in paths and cycles. Algebraic Combinatorics, 4(6):989–1004, 2021.
* [4] Or Eisenberg, Mark Kempton, and Gabor Lippner. Pretty good quantum state transfer in asymmetric graphs via potential. Discrete Mathematics, 342(10):2821–2833, 2019.
* [5] Chris Godsil, Stephen Kirkland, Simone Severini, and Jamie Smith. Number-theoretic nature of communication in quantum spin systems. Physical Review Letters, 109(5):050502, 2012.
* [6] Chris Godsil and Jamie Smith. Strongly cospectral vertices. Preprint, 2017, arXiv:1709.07975.
* [7] Mark Kempton, Gabor Lippner, and Shing-Tung Yau. Perfect state transfer on graphs with a potential. Quantum Information & Computation, 17(3):303–327, 2017, arXiv:1611.02093.
* [8] Mark Kempton, Gabor Lippner, and Shing-Tung Yau. Pretty good quantum state transfer in symmetric spin networks via magnetic field. Quantum Information Processing, 16(9):16:210, 2017.
* [9] Serge Lang. Algebra, volume 211 of Graduate Texts in Mathematics. Springer-Verlag New York, 3 edition, 2002.
* [10] Luc Vinet and Alexei Zhedanov. Almost perfect state transfer in quantum spin chains. Physical Review A, 86:052319, Nov 2012.
# Generating 4-dimensional Wormholes with Yang-Mills Casimir Sources

A. C. L. Santos <EMAIL_ADDRESS> Universidade Federal do Ceará (UFC), Departamento de Física, Campus do Pici, Fortaleza - CE, C.P. 6030, 60455-760 - Brazil. R. V. Maluf <EMAIL_ADDRESS> Universidade Federal do Ceará (UFC), Departamento de Física, Campus do Pici, Fortaleza - CE, C.P. 6030, 60455-760 - Brazil. C. R. Muniz <EMAIL_ADDRESS> Universidade Estadual do Ceará (UECE), Faculdade de Educação, Ciências e Letras de Iguatu, Av. Dário Rabelo s/n, Iguatu-CE, 63.500-00 - Brazil.

(August 28, 2024)

###### Abstract

This work presents a new wormhole solution in General Relativity supported by the quantum vacuum fluctuations of the Casimir effect between perfect chromometallic mirrors in $(3+1)$ dimensions, which was recently fitted using first-principles numerical simulations. Initially, we employ a perturbative approach for $x=mr\ll 1$, where $m$ represents the Casimir mass. This approach has proven to be a reasonable approximation when compared with the exact case in this regime. To find well-behaved redshift functions, we impose constraints on the free parameters. As expected, this solution recovers the electromagnetic-like Casimir solution for $m=0$. Analyzing the traversability conditions, we graphically find that all will be satisfied for $0\leq m\leq 0.20$. On the other hand, all the energy conditions are violated, as usual in this context. Stability from the Tolman-Oppenheimer-Volkoff (TOV) equation is guaranteed for all $r$, and from the speed of sound for $0.16\leq m\leq 0.18$. Therefore, for $0.16\leq m\leq 0.18$, we will have a stable solution that satisfies all traversability conditions.

General Relativity. Yang-Mills Field. Casimir Effect. Wormholes.

## I Introduction

Some inconsistencies within the framework of General Relativity have demanded considerable effort from physicists. Singularities and the incompatibility with observations in the cosmological scenario, among others, have guided different proposals: adding new invariants or fields, exploring alternative geometries and quantization (a great compilation can be found in CANTATA:2021ktz ). However, faced with these problems, we inquire: Are we adequately comprehending and describing the vacuum structure? What effects could arise when we include experimentally confirmed quantum vacuum effects? One of the most established effects is called the Casimir effect and is associated with quantum vacuum fluctuations when we impose boundary conditions Casimir:1948dh . Over the years, studies have emerged to investigate the effect of curved spaces on the Casimir energy density Sorge:2019kuh ; Santos:2020taq ; Santos:2021jjs . On the other hand, following Garattini's work, this energy density and the associated pressure were incorporated as sources in Einstein's equations, resulting in the formation of wormholes for the Casimir effect of the electromagnetic field in $(3+1)$ Garattini:2019ivd , $(2+1)$ Alencar:2021ejd and $D$ dimensions Oliveira:2021ypz . The same has been done with the Casimir effect of the Yang-Mills field in $(2+1)$ dimensions Santos:2023zrj and in extensions of General Relativity Cruz:2024ihb ; Zubair:2023abs ; Hassan:2022hcb ; Tripathy:2020ehi ; Azmat:2023ygn ; Mishra:2023bfe . Wormholes are hypothetical structures that would connect two distinct regions of spacetime.
However, unlike Black Holes, which already have observational confirmation and well-established sources provided by stellar evolution EventHorizonTelescope:2019ths ; EventHorizonTelescope:2022wkp , wormholes suffer from a series of open questions, including the lack of a natural formation process. In 1988, Morris and Thorne conducted a detailed study of the characteristics this hypothetical structure would need to possess in order to be traversable Morris:1988cz . They concluded that it would require negative energy density - exotic matter - which at the time increased the skepticism related to this solution. Interestingly, one of the characteristics of the Casimir energy density is that, for certain configurations, it can assume negative values. This is why it has been proposed as a potential source for this solution. In this sense, based on the previously mentioned results, this work aims to reaffirm the possibility of wormhole formation with a new source given by the Casimir energy density of the Yang-Mills field in $(3+1)$ dimensions, which was recently fitted through first-principles numerical simulations, where it was identified that the Casimir interaction between perfect chromometallic mirrors reveals the presence of a new gluonic state with a mass of $m=0.49(5)$ GeV Chernodub:2023dok . On the other hand, due to the numerical difficulties in treating the solution exactly, and taking into account that this effect occurs at small distances for small masses in a gravitational context, we will consider a perturbative approach for $x=mr\ll 1$. In order to eliminate the singularities that arise at the throat - common in this context - we will impose a constraint on the free parameters. Our paper is organized as follows: In section II, considering a perturbative approach, we find the shape and redshift functions, analyzing how the Casimir mass influences the wormhole characteristics. In section III, to assess the physical consistency of this solution, we investigate the traversability conditions, energy conditions, and stability from the speed of sound and the TOV equation. Finally, in section IV we outline our conclusions.

## II Wormhole Solution

Since we are going to consider General Relativity, we begin by defining the well-known Einstein-Hilbert action in $(3+1)$ dimensions $S=\frac{1}{16\pi}\int d^{4}x\sqrt{-g}(R+\mathcal{L}_{m}),$ (1) where $g$ stands for the determinant of the metric $g_{\mu\nu}$, $R$ is the Ricci scalar and $\mathcal{L}_{m}$ is the Lagrangian density of matter. Varying (1) with respect to the metric, we obtain the equations of motion $R^{\mu}_{\nu}-\frac{1}{2}g^{\mu}_{\nu}R=\kappa T^{\mu}_{\nu},$ (2) with $\kappa=8\pi$, where $G=\hbar=c=1$. Assuming that the Casimir energy density and pressures effectively act as a fluid with density $\rho(r)$, radial pressure $p_{r}(r)$, and tangential pressure $p_{t}(r)$, we will consider $T^{\mu}_{\nu}=\mathrm{diag}\left(-\rho(r),\,p_{r}(r),\,p_{t}(r),\,p_{t}(r)\right).$ (3) Let us assume a spherically symmetric and static ansatz for the spacetime metric with the line element given by $ds^{2}=-e^{2\Phi(r)}dt^{2}+\frac{1}{1-\frac{b(r)}{r}}dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^{2},$ (4) which represents a $(3+1)$-dimensional Morris-Thorne wormhole Morris:1988cz , where the redshift function $\Phi(r)$ and the shape function $b(r)$ are arbitrary functions of the polar coordinate $r\in\left[r_{0},+\infty\right)$. Thus, the coordinate $r$ decreases from infinity to a minimum value $r_{0}$, the radius of the throat.
Substituting the metric ansatz (4) and the energy-momentum tensor (3) into the Einstein field equations (2), we find $\frac{b^{\prime}(r)}{r^{2}}=\kappa\rho(r),$ (5) $\kappa p_{r}(r)=-\frac{b(r)}{r^{3}}+\frac{2\Phi^{\prime}(r)}{r}-\frac{2b(r)\Phi^{\prime}(r)}{r^{2}},$ (6) $\left(1-\frac{b(r)}{r}\right)\left(\Phi^{\prime\prime}(r)+\Phi^{\prime 2}(r)+\frac{\Phi^{\prime}(r)}{r}\right)-\frac{b^{\prime}(r)r-b(r)}{2r^{2}}\left(\Phi^{\prime}(r)+\frac{1}{r}\right)=\kappa p_{t}(r).$ (7) The prime $(^{\prime})$ stands for the total derivative with respect to the radial coordinate $r$. On the other hand, the covariant energy-momentum conservation law leads to $p_{r}^{\prime}(r)=\frac{-2p_{r}(r)+2p_{t}(r)-r(p_{r}(r)+\rho(r))\Phi^{\prime}(r)}{r}.$ (8) We will consider as a source the Casimir energy in non-Abelian gauge theory, described as the Casimir energy of a massive scalar particle with a certain mass $m$, which was recently fitted using first-principles numerical simulations Chernodub:2023dok $\frac{\mathcal{E}_{Cas}}{S}=\sum_{n=1}^{\infty}-\frac{2C_{0}m^{2}}{\pi^{2}r}\frac{K_{2}(2nmr)}{n^{2}},$ (9) where $S$ is the surface of the plates, $C_{0}$ is a phenomenological parameter, $r$ is the distance between the plates and $K_{2}(z)$ denotes the modified Bessel function of the second kind. The analysis provides the following best-fit parameters: $C_{0}=5.60(7)$ and $m=0.49(5)$ GeV. Due to the numerical difficulties encountered when dealing with this expression exactly, we will use the perturbative approach considering $x=mr$, for $x\ll 1$. Let us consider the following expansion Maluf:2019ujv : $\frac{\mathcal{E}_{Cas}}{S}=\sum_{n=1}^{\infty}\left(-\frac{\lambda}{2n^{4}r^{3}}+\frac{\lambda x^{2}}{2n^{2}r^{3}}+O(x^{4})\right)=-\frac{\pi^{2}\lambda\left(\pi^{2}-15x^{2}\right)}{180r^{3}}+O(x^{4}),$ (10) where $\lambda=\frac{2C_{0}}{\pi^{2}}.$ (11) This leads to the Casimir energy density $\rho(m\ll r^{-1})=\frac{\mathcal{E}_{Cas}(m\ll r^{-1})}{Sr}=-\frac{\pi^{2}\lambda\left(\pi^{2}-15m^{2}r^{2}\right)}{180r^{4}}.$ (12) To confirm the accuracy of this density as an approximation of the exact function within this regime, we generated a comparative graph, depicted in Figure 1, which juxtaposes the exact densities with the expanded ones using the best-fit parameters $C_{0}$ and $m=5\times 10^{-2}$ Chernodub:2023dok . Through this comparison, we can confidently assert its validity as a reliable approximation.

Figure 1: The graphical representation of the radial dependence for $\rho(r)$, exact and expanded, with $n=1,\dots,100$, $C_{0}=5.6$ and $m=5\times 10^{-2}$.

Figure 2: Representation of the embedding diagram with different values of $m$, with $C_{0}=5.6$ and $r_{0}$ given by (18), in natural units where $G=\hbar=c=1$.

Figure 3: Embedded shape of the Casimir-Yang-Mills wormhole with $m=5\times 10^{-2}$, $C_{0}=5.6$ and $r_{0}$ given by (18), in natural units where $G=\hbar=c=1$.

From (5) and (12) we obtain the shape function $b(m\ll r^{-1})=\frac{\pi^{2}\kappa\lambda(r-r_{0})\left(15r_{0}m^{2}r-\pi^{2}\right)}{180r_{0}r}+r_{0}.$ (13) The constant of integration was fixed such that the throat condition $b(r_{0})=r_{0}$ is satisfied. When $m\to 0$, we recover the profile of the wormhole solution sustained by the Casimir effect of the electromagnetic field, as expected Garattini:2019ivd .
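For completeness, the integration behind (13) can be spelled out: from (5) and (12) one has $b^{\prime}(r)=\kappa r^{2}\rho(r)=\frac{\pi^{2}\kappa\lambda}{180}\left(15m^{2}-\frac{\pi^{2}}{r^{2}}\right)$, so that $b(r)=\frac{\pi^{2}\kappa\lambda}{180}\left(15m^{2}r+\frac{\pi^{2}}{r}\right)+C$. Imposing $b(r_{0})=r_{0}$ fixes $C=r_{0}-\frac{\pi^{2}\kappa\lambda}{180}\left(15m^{2}r_{0}+\frac{\pi^{2}}{r_{0}}\right)$, and collecting terms over the common denominator $180r_{0}r$ reproduces (13).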
Analyzing Figure 2, which depicts embedding diagrams for different values of $m$, we identified that the larger the Casimir mass, the smaller the throat radius, which aligns with the constraint given by equation (18) and can be interpreted as an increase in $m$ leading to more intense quantum effects, causing pronounced distortions in the geometry of the wormhole. For very small masses, however, the throat radii are nearly indistinguishable, since $r_{0}$ depends on $m$ only through $m^{2}$. Figure 3 represents this three-dimensional embedding, highlighting its asymptotic flatness. Let us turn our attention to the redshift function associated with the Casimir-Yang-Mills wormhole. The radial pressure is given by $p_{r}(m\ll r^{-1})=-\frac{1}{S}\frac{d\mathcal{E}_{Cas}}{dr}=-\frac{\pi^{2}\lambda\left(\pi^{2}-5m^{2}r^{2}\right)}{60r^{4}}.$ (14) This allows us to conclude the following equation of state: $p_{r}(m\ll r^{-1})=\omega(m\ll r^{-1})\rho(m\ll r^{-1});\;\;\omega(m\ll r^{-1})=\frac{2\pi^{2}}{\pi^{2}-15m^{2}r^{2}}+1,$ (15) which recovers the $\omega$ of the electromagnetic case for $m=0$, as expected. From (6), (13) and (14) we obtain $\displaystyle\Phi(m\ll r^{-1})=\frac{1}{2\left(\pi^{2}\kappa\lambda m^{2}-12\right)\left(\pi^{4}\kappa\lambda-15{r_{0}}^{2}\left(\pi^{2}\kappa\lambda m^{2}-12\right)\right)}$ $\displaystyle\times\left(3\ln(r-r_{0})\left(\pi^{2}\kappa\lambda m^{2}-12\right)\left(5{r_{0}}^{2}\left(\pi^{2}\kappa\lambda m^{2}+12\right)-\pi^{4}\kappa\lambda\right)\right.$ $\displaystyle+\left.\left(45{r_{0}}^{2}\left(\pi^{2}\kappa\lambda m^{2}-12\right)^{2}-\pi^{4}\kappa\lambda\left(\pi^{2}\kappa\lambda m^{2}+12\right)\right)\ln\left(15r_{0}r\left(\pi^{2}\kappa\lambda m^{2}-12\right)-\pi^{4}\kappa\lambda\right)\right)$ $\displaystyle+\ln(r)+c_{1},$ (16) where $c_{1}$ is a constant. In order to have a traversable wormhole we need to eliminate the divergence in the logarithmic term at $r\to r_{0}$, which implies the constraint $5{r_{0}}^{2}\left(\pi^{2}\kappa\lambda m^{2}+12\right)-\pi^{4}\kappa\lambda=0,$ (17) which leads to $r_{0}=\pi^{2}\sqrt{\frac{\kappa\lambda}{5\pi^{2}\kappa\lambda m^{2}+60}}.$ (18) Considering (17), the redshift function becomes $\displaystyle\Phi(m\ll r^{-1})$ $\displaystyle=$ $\displaystyle\frac{2\left(\pi^{2}\kappa\lambda m^{2}-6\right)\ln\left(10{r_{0}}^{2}\left(\pi^{2}\kappa\lambda m^{2}-24\right)\right)}{\pi^{2}\kappa\lambda m^{2}-12}+\ln\left(\frac{r}{r_{0}}\right)+1$ $\displaystyle-$ $\displaystyle\frac{2\left(\pi^{2}\kappa\lambda m^{2}-6\right)\ln\left(-5\pi^{2}r_{0}\kappa\lambda m^{2}(r_{0}-3r)-60r_{0}(r_{0}+3r)\right)}{\pi^{2}\kappa\lambda m^{2}-12},$ (19) which is a finite quantity when $r\to r_{0}$. The constant of integration $c_{1}$ was fixed such that $\Phi(r_{0})=1$ is satisfied. This behavior is illustrated in Figure 4. Note that for the argument of the first logarithm we have the condition $\pi^{2}\kappa\lambda m^{2}\neq 24$, and for the argument of the second logarithm to vanish we would need $5\pi^{2}r_{0}\kappa\lambda m^{2}(3r-r_{0})-60r_{0}(r_{0}+3r)=0,$ (20) which implies $r=r_{0}\left(\frac{8}{\pi^{2}\kappa\lambda m^{2}-12}+\frac{1}{3}\right)<r_{0},$ (21) so this argument never vanishes in the domain $r\geq r_{0}$. It is worth mentioning that for $m\to 0$ we recover the profile of the solution sustained by the electromagnetic Casimir effect Garattini:2019ivd . Certainly, the redshift function will diverge as $r\to\infty$ due to the presence of the second logarithmic term. However, in this limit, the approximation $x=mr\ll 1$ is no longer valid. Therefore, we cannot use (12) to describe the Casimir energy density nor, consequently, (19) as a solution for the redshift function. Since the Kretschmann scalar is an involved expression, to ensure the absence of singularities in this spacetime in this regime, we plotted Figure 5.

Figure 4: Redshift function from equation (19) with $m=5\times 10^{-4}$, $C_{0}=5.6$ and $r_{0}$ given by (18), in natural units where $G=\hbar=c=1$.

Figure 5: Kretschmann scalar with $m=5\times 10^{-4}$, $C_{0}=5.6$ and $r_{0}$ given by (18), in natural units where $G=\hbar=c=1$.
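To give a sense of scale (a rough numerical illustration under the stated conventions, not a new result): with $\kappa=8\pi$ and $\lambda=2C_{0}/\pi^{2}$, the best-fit value $C_{0}=5.6$ gives $\kappa\lambda=16C_{0}/\pi\approx 28.5$. For $m=5\times 10^{-4}$, the mass term $5\pi^{2}\kappa\lambda m^{2}\approx 3.5\times 10^{-4}$ is negligible next to $60$, so (18) yields $r_{0}\approx\pi^{2}\sqrt{28.5/60}\approx 6.8$ in natural units, essentially the massless value.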
## III Source Properties

### III.1 Traversability Conditions

Let us investigate whether the conditions established for traversability Morris:1988cz are satisfied by the redshift and shape functions found. $(i)$ A flaring-out condition, associated with the minimality of the wormhole throat, implies that $\frac{b(r)-rb^{\prime}(r)}{b(r)^{2}}=-\frac{180r_{0}r\left(-15{r_{0}}^{2}r\left(12-\pi^{2}\kappa\lambda m^{2}\right)-2\pi^{4}r_{0}\kappa\lambda+\pi^{4}\kappa\lambda r\right)}{\left(180{r_{0}}^{2}r+\pi^{2}\kappa\lambda(r_{0}-r)\left(\pi^{2}-15r_{0}m^{2}r\right)\right)^{2}}>0.$ (22) At the throat, $b^{\prime}(r_{0})=\frac{1}{180}\pi^{2}\kappa\lambda\left(15m^{2}-\frac{\pi^{2}}{{r_{0}}^{2}}\right)<1.$ (23) $(ii)$ Another condition is given by $1-\frac{b(r)}{r}=\frac{(r-r_{0})\left(15r_{0}r\left(12-\pi^{2}\kappa\lambda m^{2}\right)+\pi^{4}\kappa\lambda\right)}{180r_{0}r^{2}}\geq 0.$ (24) We plotted a graph in Figure 6 to analyze which values of $m$ allow the conditions (22), (23), and (24) to be satisfied for all $r$ (within the approximation), since $r_{0}$ must be given by (18). Thus, we can conclude that for $0\leq m\leq 0.20$ all traversability conditions will be satisfied, which is consistent with the approximation we are employing.

Figure 6: Region where the traversability conditions given by (22), (23) and (24) are satisfied, with $C_{0}=5.6$ and $r_{0}$ given by (18), in natural units where $G=\hbar=c=1$.

On the other hand, the shape function is asymptotically flat-like, since $\lim_{r\rightarrow\infty}\left(1-\frac{b(r)}{r}\right)=1-\frac{\pi^{2}\kappa\lambda m^{2}}{12}.$ (25) Despite exhibiting the desired behavior, as mentioned earlier, the asymptotic limit $r\to\infty$ does not fit within the approximation we are using. Finally, from (7), (13) and (16), and imposing the constraint (18), we can obtain the tangential pressure: $p_{t}(m\ll r^{-1})=-\frac{\pi^{2}\lambda\left(3\pi^{2}r\left(5\kappa\lambda m^{2}\left(\pi^{2}-3m^{2}r^{2}\right)-36\right)-\alpha\left(15m^{2}r^{2}+\pi^{2}\right)\right)}{180r^{4}\left(\alpha+r\left(36-3\pi^{2}\kappa\lambda m^{2}\right)\right)},$ (26) where $\alpha=\pi^{2}\sqrt{\frac{\kappa\lambda(\pi^{2}\kappa\lambda m^{2}+12)}{5}}.$ (27) It is worth mentioning that the same result is found using equation (8), as expected. Therefore, the Casimir source that generates a wormhole effectively acts as an anisotropic fluid, since the radial and tangential pressures are distinct, as evidenced by equations (14) and (26). This is in agreement with analysis in more general contexts Kim:2019ojs .
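It is also instructive to verify condition (23) in closed form (a short check using only (18) and (23)): substituting $r_{0}^{2}=\pi^{4}\kappa\lambda/(5\pi^{2}\kappa\lambda m^{2}+60)$ into (23) gives $b^{\prime}(r_{0})=\frac{15\pi^{2}\kappa\lambda m^{2}-(5\pi^{2}\kappa\lambda m^{2}+60)}{180}=\frac{\pi^{2}\kappa\lambda m^{2}-6}{18},$ so $b^{\prime}(r_{0})<1$ is equivalent to $\pi^{2}\kappa\lambda m^{2}<24$, i.e., $m^{2}<3/(2\pi C_{0})$, or $m\lesssim 0.29$ for $C_{0}=5.6$. The tighter bound $0\leq m\leq 0.20$ read off from Figure 6 comes from demanding all three conditions simultaneously over the whole range of $r$.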
### III.2 Energy Conditions

The energy conditions state that for a given fluid with density $\rho(r)$, radial pressure $p_{r}(r)$, and lateral pressures $p_{t}(r)$, the following relationships must be satisfied: $\rho(r)\geq 0$ and $\rho(r)+p_{i}(r)\geq 0$ (Weak Energy Condition); $\rho(r)+p_{i}(r)\geq 0$ (Null Energy Condition); $\rho(r)+\sum_{i}p_{i}(r)\geq 0$ and $\rho(r)+p_{i}(r)\geq 0$ (Strong Energy Condition); and $\rho(r)-|p_{i}(r)|\geq 0$ (Dominant Energy Condition). As we can see in Figure 7, all the energy conditions are violated. This fact is common in other contexts of wormholes sustained by a Casimir source and is reasonable, since these conditions have a classical nature, whereas the analyzed source has a quantum origin.

Figure 7 (a), (b): The graphical representation of the radial dependence for a combination of density and pressures, with $r_{0}$ given by (18), $m=5\times 10^{-4}$ and $C_{0}=5.6$, in natural units where $G=\hbar=c=1$.

### III.3 Stability from sound velocity

In order to evaluate the stability of the Casimir wormhole, we must first analyze the condition described by the expression Capozziello:2020zbx ; Capozziello:2022zoz : $v_{s}^{2}(r)=\frac{1}{3}\left[\frac{d(p_{r}+2p_{t})}{d\rho}\right]=\frac{1}{3}\left[\frac{p_{r}^{\prime}(r)+2p_{t}^{\prime}(r)}{\rho^{\prime}(r)}\right]\geq 0,$ (28) with $v_{s}$ representing the sound velocity in the medium. Considering (12), (14) and (26) we obtain $v_{s}^{2}(r)=\frac{-15m^{2}r^{2}+\gamma(r)}{3\left(2\pi^{2}-15m^{2}r^{2}\right)}\geq 0,$ (29) where $\displaystyle\gamma(r)$ $\displaystyle=$ $\displaystyle\frac{1}{\left(\pi^{2}\sqrt{5\kappa\lambda\left(\pi^{2}\kappa\lambda m^{2}+12\right)}-15r\left(\pi^{2}\kappa\lambda m^{2}-12\right)\right)^{2}}$ $\displaystyle\times$ $\displaystyle\left(10\pi^{2}\left(45\sqrt{5\kappa\lambda}m^{2}r^{3}\left(\pi^{2}\kappa\lambda m^{2}-18\right)\sqrt{\pi^{2}\kappa\lambda m^{2}+12}+675\kappa\lambda m^{4}r^{4}\left(\pi^{2}\kappa\lambda m^{2}-12\right)\right.\right.$ $\displaystyle+$ $\displaystyle\pi^{4}\kappa\lambda\left(\pi^{2}\kappa\lambda m^{2}+12\right)+12\pi^{2}\sqrt{5\kappa\lambda}r\left(\pi^{2}\kappa\lambda m^{2}-3\right)\sqrt{\pi^{2}\kappa\lambda m^{2}+12}$ $\displaystyle-$ $\displaystyle\left.\left.30r^{2}\left(\pi^{2}\kappa\lambda m^{2}-6\right)\left(11\pi^{2}\kappa\lambda m^{2}-108\right)\right)\right).$ (30) Since stability must be analyzed at the throat, let us consider $r\approx r_{0}$, where $r_{0}$ is given by (18), and examine in Figure 8 which values of $m$ allow for a stable solution. As can be seen, to satisfy the condition given by (28) and avoid superluminal velocities, we need $0.16\leq m\leq 0.18$. Therefore, for $0.16\leq m\leq 0.18$ we have a stable solution with the traversability conditions satisfied.

Figure 8: Graphical representation of $v^{2}_{s}$ with $r\approx r_{0}$ given by (18) and $C_{0}=5.6$, in natural units where $G=\hbar=c=1$.

### III.4 Stability from TOV equation

Figure 9: The graphical representation of $\mathcal{F}_{g},\mathcal{F}_{h}$ and $\mathcal{F}_{a}$ as functions of the radial coordinate $r$, with $r_{0}$ given by (18), $m=5\times 10^{-4}$ and $C_{0}=5.6$, in natural units where $G=\hbar=c=1$.
From (8), we can identify $\mathcal{F}_{g}+\mathcal{F}_{h}+\mathcal{F}_{a}=0,$ (31) where $\mathcal{F}_{g}=-\Phi^{\prime}(r)(\rho(r)+p_{r}(r));\;\mathcal{F}_{h}=-\frac{dp_{r}(r)}{dr};\;\mathcal{F}_{a}=\frac{-2(p_{r}(r)-p_{t}(r))}{r},$ (32) with $\mathcal{F}_{h}$ the hydrostatic force, $\mathcal{F}_{g}$ the gravitational force, and $\mathcal{F}_{a}$ the anisotropic force. It is straightforward to verify that our traversable wormhole solution satisfies the TOV equation with $r_{0}$ given by (18). The profiles of $\mathcal{F}_{g}$, $\mathcal{F}_{h}$ and $\mathcal{F}_{a}$ are depicted in Figure 9. $\mathcal{F}_{a}$ and $\mathcal{F}_{g}$ take positive values near the throat, while $\mathcal{F}_{h}$ is negative, clearly indicating that, to maintain the system in an equilibrium state, the hydrostatic force is balanced by the combined effect of the gravitational and anisotropic forces, in a behavior similar to that found in $(2+1)$ dimensions Santos:2023zrj .

## IV Conclusion

In summary, we have found a wormhole solution supported by the density and Casimir pressures of the Yang-Mills field, which have recently been simulated using first-principles numerical simulations in $(3+1)$ dimensions. We considered a perturbative approach for $x=mr\ll 1$, where $m$ is the Casimir mass. To eliminate the divergence at the throat, we imposed a constraint on the free parameter $r_{0}$. We identified in the shape function that a larger Casimir mass implies a smaller wormhole throat. In order for the traversability conditions to be satisfied, we need $0\leq m\leq 0.20$, which is consistent with the approximation considered. In turn, the solution is stable for all $r$ from the point of view of the TOV equation, as well as from the speed of sound for $0.16\leq m\leq 0.18$. Thus, $0.16\leq m\leq 0.18$ implies that we will have a stable solution that satisfies all traversability conditions. All the energy conditions are violated; this is common in this context, given a source of quantum origin. Finally, for $m=0$ we recover the electromagnetic Casimir solution, as expected.

## Acknowledgments

The authors thank the Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico (FUNCAP), the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), and the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Grants no. 88887.822058/2023-00 (ACLS), no. 308268/2021-6 (CRM), and no. 200879/2022-7 (RVM) for financial support.

## References

* (1) E. N. Saridakis et al. [CANTATA], Springer, 2021, ISBN 978-3-030-83714-3, 978-3-030-83717-4, 978-3-030-83715-0 doi:10.1007/978-3-030-83715-0 [arXiv:2105.12582 [gr-qc]].
* (2) H. B. G. Casimir, Indag. Math. 10 (1948) no.4, 261-263
* (3) F. Sorge, Int. J. Mod. Phys. D 29 (2019) no.01, 2050002 doi:10.1142/S0218271820500029
* (4) A. C. L. Santos, C. R. Muniz and L. T. Oliveira, Int. J. Mod. Phys. D 30 (2021) no.05, 2150032 doi:10.1142/S0218271821500322 [arXiv:2007.00227 [gr-qc]].
* (5) A. C. L. Santos, C. R. Muniz and L. T. Oliveira, EPL 135 (2021) no.1, 19002 doi:10.1209/0295-5075/135/19002 [arXiv:2103.03368 [gr-qc]].
* (6) R. Garattini, Eur. Phys. J. C 79 (2019) no.11, 951 doi:10.1140/epjc/s10052-019-7468-y [arXiv:1907.03623 [gr-qc]].
* (7) G. Alencar, V. B. Bezerra and C. R. Muniz, Eur. Phys. J. C 81 (2021) no.10, 924 doi:10.1140/epjc/s10052-021-09734-0 [arXiv:2104.13952 [gr-qc]].
* (8) P. H. F. Oliveira, G. Alencar, I. C. Jardim and R. R. Landim, Mod. Phys. Lett. A 37 (2022) no.15, 2250090 doi:10.1142/S0217732322500900 [arXiv:2107.00605 [hep-th]].
* (9) A. C. L. Santos, C. R. Muniz and R. V. Maluf, JCAP 09 (2023), 022 doi:10.1088/1475-7516/2023/09/022 [arXiv:2304.11131 [gr-qc]].
* (10) M. B. Cruz, R. M. P. Neves and C. R. Muniz, JCAP 05 (2024), 016 doi:10.1088/1475-7516/2024/05/016 [arXiv:2401.08885 [gr-qc]].
* (11) M. Zubair, S. Waheed, M. Farooq, A. H. Alkhaldi and A. Ali, Eur. Phys. J. Plus 138 (2023) no.10, 902 doi:10.1140/epjp/s13360-023-04539-4
* (12) Z. Hassan, S. Ghosh, P. K. Sahoo and K. Bamba, Eur. Phys. J. C 82 (2022) no.12, 1116 doi:10.1140/epjc/s10052-022-11107-0 [arXiv:2207.09945 [gr-qc]].
* (13) S. K. Tripathy, Phys. Dark Univ. 31 (2021), 100757 doi:10.1016/j.dark.2020.100757 [arXiv:2004.14801 [gr-qc]].
* (14) H. Azmat, Q. Muneer, M. Zubair, E. Gudekli, I. Ahmad and S. Waheed, Nucl. Phys. B 998 (2024), 116396 doi:10.1016/j.nuclphysb.2023.116396
* (15) A. K. Mishra, Shweta and U. K. Sharma, Universe 9 (2023) no.4, 161 doi:10.3390/universe9040161 [arXiv:2303.04641 [physics.gen-ph]].
* (16) K. Akiyama et al. [Event Horizon Telescope], Astrophys. J. Lett. 875 (2019) no.1, L4 doi:10.3847/2041-8213/ab0e85 [arXiv:1906.11241 [astro-ph.GA]].
* (17) K. Akiyama et al. [Event Horizon Telescope], Astrophys. J. Lett. 930 (2022) no.2, L12 doi:10.3847/2041-8213/ac6674 [arXiv:2311.08680 [astro-ph.HE]].
* (18) M. S. Morris and K. S. Thorne, Am. J. Phys. 56 (1988), 395-412 doi:10.1119/1.15620
* (19) M. N. Chernodub, V. A. Goy, A. V. Molochkov and A. S. Tanashkin, Phys. Rev. D 108 (2023) no.1, 014515 doi:10.1103/PhysRevD.108.014515 [arXiv:2302.00376 [hep-lat]].
* (20) R. V. Maluf, D. M. Dantas and C. A. S. Almeida, Eur. Phys. J. C 80 (2020) no.5, 442 doi:10.1140/epjc/s10052-020-8020-9 [arXiv:1905.04824 [hep-th]].
* (21) H. C. Kim and Y. Lee, JCAP 09 (2019), 001 doi:10.1088/1475-7516/2019/09/001 [arXiv:1905.10050 [gr-qc]].
* (22) S. Capozziello, O. Luongo and L. Mauro, Eur. Phys. J. Plus 136 (2021) no.2, 167 doi:10.1140/epjp/s13360-021-01104-9 [arXiv:2012.13908 [gr-qc]].
* (23) S. Capozziello and N. Godani, Phys. Lett. B 835 (2022), 137572 doi:10.1016/j.physletb.2022.137572 [arXiv:2211.06481 [gr-qc]].
# A Pub-Sub Architecture to Promote Blockchain Interoperability ††thanks: This project has been supported by _The Linux Foundation_ as part of the _Hyperledger Summer Internships_ program under the _Towards Blockchain Interoperability with Hyperledger_ project.

Sara Ghaemi1, Sara Rouhani2, Rafael Belchior3, Rui S. Cruz3, Hamzeh Khazaei4, Petr Musilek1 1University of Alberta, Edmonton, Canada, {sghaemi, <EMAIL_ADDRESS>2University of Saskatchewan, Saskatoon, Canada, <EMAIL_ADDRESS>3Instituto Superior Técnico, Universidade de Lisboa, Lisboa, Portugal, {rafael.belchior<EMAIL_ADDRESS>4York University, Toronto, Canada<EMAIL_ADDRESS>

###### Abstract

The maturing of blockchain technology leads to heterogeneity, where multiple solutions specialize in a particular use case. While the development of different blockchain networks shows great potential for blockchains, the isolated networks have led to data and asset silos, limiting the applications of this technology. Blockchain interoperability solutions are essential to enable distributed ledgers to reach their full potential. Such solutions allow blockchains to support asset and data transfer, resulting in the development of innovative applications. This paper proposes a novel blockchain interoperability solution for permissioned blockchains based on the publish/subscribe architecture. We implemented a prototype of this platform to show the feasibility of our design. We evaluate our solution by implementing examples of the different publisher and subscriber networks, such as Hyperledger Besu, which is an Ethereum client, and two different versions of Hyperledger Fabric. We present a performance analysis of the whole network that indicates its limits and bottlenecks. Finally, we discuss the extensibility and scalability of the platform in different scenarios. Our evaluation shows that our system can handle a throughput in the order of the hundreds of transactions per second.

###### Index Terms:

Distributed Ledger Technology, Blockchain, Interoperability, Publish/Subscribe

## I Introduction

The distributed ledger technology (DLT) enables a set of independent untrusted nodes to establish an agreement on the state of a shared ledger. Blockchain, a type of DLT, is mostly known for its use cases in cryptocurrencies such as Bitcoin [1], Ethereum [2], and XRP [3], among others. However, the technology can be used for more diverse applications and industries. Some examples are biomedical and health care [4, 5], Internet of Things (IoT) [6, 7], public administration [8, 9], and cloud computing [10, 11]. Since each industry has its own unique sets of requirements, many isolated permissioned and permissionless blockchains have been introduced, posing limits regarding their interoperability. Currently, developers commit to a single blockchain solution, and they cannot use the capabilities of more than one blockchain (vendor lock-in). These isolated, incompatible networks have resulted in silos of data and assets, which cannot be used from other networks. Blockchain interoperability solutions are needed to enable asset and information transfer from one blockchain to another. However, interoperability for blockchains encounters challenges that make it different from interoperability for other software networks. First, each interoperability solution should take into account the differences in the architecture of blockchain networks.
Although all blockchains have an immutable ledger that stores the history of assets, they usually reach a consensus on the order of transactions using different algorithms. As a result, their underlying network and their validation mechanisms can be different from other blockchains. Each blockchain network that participates in the interoperation is independent and in full control of its assets and information. Moreover, the interoperability solutions should not require significant changes in the underlying blockchain networks, and they should be usable with minimal effort for existing blockchains. This paper aims to tackle this problem by proposing a blockchain interoperability solution based on the publish/subscribe architecture across permissioned blockchains. We have implemented a broker blockchain that acts as a middleman in the interoperability process between the source and the destination networks. It is worth noting that since the broker is itself a blockchain network, it is not a central authority, and peers from the source and destination blockchains can also participate in the governance of this network. The _broker_ blockchain keeps a copy of the information that needs to be shared in the form of a _topic_. A topic has a name, message, publisher, and a set of subscribers. The _publisher_ is the source blockchain network that wants to share the information. It is responsible for creating the topic on the broker and publishing to the corresponding topic whenever the information needs an update. The _subscribers_ are the destination networks that need some information from the source network. As soon as the subscriber network subscribes to a topic, the broker network notifies it whenever a change happens. This solution enables interoperability between blockchains with minimal effort. We used a performance benchmark tool to evaluate the performance of the implemented prototype of the platform. The throughput and average latency for different functionalities of the broker network were investigated. The results indicate that our network can handle hundreds of transactions per second. Moreover, the evaluations identified the PublishToTopic functionality to be the broker network's bottleneck. The rest of this paper is organized as follows. Section II gives a summary of the related work on blockchain interoperability and blockchain-based publish/subscribe protocols. Section III introduces the system design details for the proposed interoperability solution. Section IV demonstrates the implementation and deployment details of the platform, while section V presents its performance evaluation. Section VI outlines some discussions on the design and evaluation of the platform and section VII concludes the paper.

## II Related Work

In this section, we discuss the related work in the field of blockchain interoperability, as well as blockchain-based publish/subscribe protocols.

### II-A Blockchain Interoperability

The emergence of the blockchain interoperability research area has brought attention from both industry and academia. Blockchain interoperability projects have been tackled as early as 2016 [12]. A recent survey classifies blockchain interoperability studies in three categories: Cryptocurrency-directed interoperability approaches, Blockchain Engines, and Blockchain Connectors [12]. Cryptocurrency-directed approaches are mostly industry solutions that provide interoperability across public blockchains.
This category has a focus on asset interoperability and is divided into sidechains, hash lock time contracts, notary schemes, and hybrid solutions. Sidechains allow for offloading transactions to a secondary chain, enhance performance, and provide features that the main chain would not provide. Sidechains also enable the representation of a token from the mainchain at the secondary chain. Some sidechain solutions include the BTC Relay [13], Zendoo [14], and RSK [15]. Hash lock time contract solutions enable cross-chain atomic operations using smart contracts. Wanchain uses this scheme and provides loan services with cryptocurrencies [16]. Notary schemes are centralized or decentralized entities that mediate token exchange (e.g., cryptocurrency exchanges). Finally, hybrid solutions combine characteristics from the previous approaches. The cryptocurrency-directed approaches only work for transferring different types of cryptocurrencies between blockchain networks. As a result, these approaches cannot be used for permissioned blockchains with arbitrary assets and smart contracts, which are the focus of this paper. The second category is blockchain engines, which enable creating customized blockchains that can interoperate by providing reusable data, network, consensus, and contract layers. Platforms like Polkadot [17] and Cosmos [18] provide such infrastructure, with “free interoperability” among the instances they allow users to create. These approaches are fundamentally different from what has been proposed in this paper. Instead of enabling blockchain interoperability for currently running blockchains, blockchain engines propose blockchain networks that are interoperable by design. As a result, these solutions cannot be used for currently running permissioned blockchains. The Blockchain Connector category is composed of interoperability solutions that are not cryptocurrency-directed or blockchain engines. They include trusted relays, blockchain agnostic protocols, blockchain of blockchains solutions, and blockchain migrators. Each of these categories responds to a particular set of use cases. Trusted relays allow discovering the target blockchains, often appearing in a permissioned blockchain environment where trusted escrow parties route cross-blockchain transactions. An interesting trusted relay approach is Hyperledger Cactus [19], the most recent Hyperledger project aiming to connect a client to several blockchains, whereby transactions are endorsed by trusted validators. Cactus focuses on providing multiple use case scenarios via a trusted consortium. Another trusted relay is proposed by Abebe et al. [20], which is an interoperability solution between Fabric networks using Hyperledger Fabric chaincode and a protocol-buffer-based communication protocol. Blockchain agnostic protocols enable cross-blockchain communication between arbitrary distributed ledger technologies, including refactoring and making changes to each blockchain. Blockchain of blockchains are approaches that allow users to build decentralized applications using multiple blockchains. Finally, blockchain migrators enable the state of one blockchain to be migrated to another blockchain. Simple blockchain migrations have already been performed, paving the way for complex blockchain migrations [12]. Our pub-sub solution is considered a trusted relay (although decentralized), as it contains a blockchain mediating communication across heterogeneous blockchains [12].
While trusted relays can provide a straightforward way of achieving interoperability, most of them are not trustless (e.g., they contain a centralization point). Our solution is a decentralized trusted relay that implements a pub/sub system, anchored on the trust that the underlying blockchains offer.

### II-B Blockchain-based Publish/Subscribe Protocol

The blockchain technology has been applied to the pub/sub paradigm in a few previous studies. However, those studies adopt blockchain to address the existing problems in other areas, such as IoT [21], supply chain [22], multi-tenant edge cloud [23], and digital trading [24]. Huang et al. [23] exploit blockchain technology to enhance the security of pub/sub communications in multi-tenant edge clouds. Most topic-based and broker-enabled pub/sub streaming systems use centralized cloud servers to store sensitive metadata and access control lists (ACL), which can compromise the confidentiality, anonymity, and integrity of tenants' data. Alternatively, critical data such as ACL and identity information, as well as the hash of raw messages and operation logs, can be stored on the blockchain to guarantee data security and integrity. Their smart contracts implement access control mechanisms to authorize requests from publishers and subscribers. Trinity [22] proposes a blockchain-based distributed publish/subscribe broker to solve the existing flaws of centralized brokers in IoT and supply chain monitoring applications. Trinity has three main components: blockchain network, broker, and clients. The blockchain network is responsible for consensus in the system and persistent storage. The broker handles the communications between the blockchain network and clients. The clients are the publishers and subscribers of the topics. The blockchain network is pluggable, and the authors have used Tendermint, Hyperledger Fabric, Ethereum, and IOTA. The broker uses the Mosquitto MQTT broker, which constitutes a single point of failure. Zhao et al. [25] have proposed Secure Pub-Sub (SPS), which provides reputation-based fair payment for publishers and subscribers in cyber-physical systems. They use the Bitcoin network to enable payments between the entities, and they propose a reputation mechanism that helps calculate the price of data. Lv et al. [21] present a decentralized privacy-preserving pub/sub model for IoT systems to solve centralized brokers' problems such as single point of failure, data tampering due to corrupted brokers, and heavy encryption algorithms. The presented model applies public-key encryption with equality test [26] and ElGamal [27] to protect participants' (both publishers and subscribers) privacy. A system prototype is implemented and evaluated to assess the feasibility of the proposed model. Bu et al. [24] and Zupan et al. [28] have proposed blockchain-based pub/sub brokers to address the drawbacks of traditional pub/sub systems such as privacy and accountability. However, they have not explained their implementation and evaluation in their studies. All these studies adopt blockchain to improve the centralized predicaments in traditional pub/sub systems in distinct application domains, whereas our paper focuses on establishing robust and practical interoperability between permissioned blockchains with different architectures and infrastructures.
To the best of our knowledge, our paper is the first study that enhances blockchain interoperability utilizing the pub/sub communication model across heterogeneous blockchains (Hyperledger Fabric/Hyperledger Besu).

## III System Design

In this section, we first discuss the design principles that a blockchain interoperability solution should follow. We then propose our interoperability solution and explain its components and message flow.

Figure 1: Architecture of the platform and the message flow.

### III-A Design Principles

Blockchain interoperability aims to enable applications to use the assets and information available on blockchains other than their main blockchain network [12]. This allows for a greater range of applications. A blockchain interoperability solution should take into account the following design principles:

* • The blockchain networks are independent, and they may have different architectures.
* • The blockchain networks are in full control of their assets and information.
* • The transfer protocol should be technology agnostic.
* • The interoperability solution should not require significant changes in the source and destination networks.
* • The blockchain networks should be able to incorporate the solution with minimal effort.

Following the mentioned principles, we present our solution, which allows interoperability based on a publish/subscribe architecture.

### III-B Components

The platform proposed in this paper aims to solve the interoperability problem of permissioned blockchains using the publish/subscribe pattern. When a blockchain wants to use the data from another blockchain, there needs to be a way to fetch and transfer this data between the networks securely. Moreover, if the data changes in the source network, the destination network should be notified of the change. Figure 1 shows the architecture of the platform and the message flow. The publisher blockchain is the blockchain network that sends the data, also referred to as the source network. For a publisher to participate in this platform, it needs to run the appropriate connector smart contract on its blockchain and enroll as a publisher in the broker. The publisher can then create as many topics as it wants and use the connector smart contract to publish the changes to the topic. The subscriber blockchain is the blockchain network that receives the data, also referred to as the destination network. Similar to the publisher, the subscriber also needs to run the appropriate connector smart contract and enroll as a subscriber. It can then subscribe to any topic available on the broker blockchain. Every time the topic changes, the broker notifies the subscriber by invoking the connector smart contract. There can exist many subscribers for a topic. The broker blockchain is the core component of the platform. It is a blockchain network that stores all the information about the topics and the blockchains that participate in the interoperability process. Since the broker is a blockchain, operations regarding interoperation are immutable and traceable, providing a robust basis for accountability. The broker has two different smart contracts: the connector smart contract and the topics smart contract. The connector smart contract stores the information about the participating blockchain networks and how the broker can interact with them. The topics smart contract is responsible for storing the information about the topics, such as their publisher, subscribers, and the latest message.
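To make the broker's on-chain data model concrete, the following is a minimal Go sketch of the two record types just described. The field names, types, and JSON tags are illustrative assumptions, not the platform's exact schema.

```go
package broker

// Topic sketches the record kept by the topics smart contract,
// stored on the ledger under a unique topic ID.
type Topic struct {
	Name        string   `json:"name"`        // topic name chosen by the publisher
	Publisher   string   `json:"publisher"`   // ID of the blockchain enrolled as the topic's publisher
	Subscribers []string `json:"subscribers"` // IDs of all blockchains subscribed to the topic
	Message     string   `json:"message"`     // latest message published to the topic
}

// Blockchain sketches the record kept by the connector smart contract,
// stored under a unique blockchain ID.
type Blockchain struct {
	Name      string            `json:"name"`      // name chosen at enrollment
	Type      string            `json:"type"`      // blockchain technology, e.g., "fabric" or "besu"
	ServerIP  string            `json:"serverIp"`  // address the broker uses to reach the network
	Port      int               `json:"port"`      // port of the network's HTTP endpoint
	ExtraInfo map[string]string `json:"extraInfo"` // network-specific connection details
}
```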
### III-C Message Flow

The complete interoperation process in the platform is shown in Figure 1. For simplicity, only one publisher and one subscriber blockchain are shown in this figure. However, each topic can have an unlimited number of subscribers and, in general, there is no limit on the number of publisher and subscriber blockchains. A detailed explanation of each step in Figure 1 follows:

1. For any blockchain network to interact with the broker blockchain, it must enroll in the connector smart contract. During this process, the information that the broker needs to be able to interact with the blockchain is registered in the connector smart contract. As a result, the publisher is required to enroll in the connector smart contract as a publisher. This step only needs to be performed once, when the publisher wants to create its first topic. It can then create topics or publish to existing ones without the need to enroll again.
2. A publisher that is registered in the connector smart contract can create a new topic. In this step, the publisher needs to specify a name for the topic and the initial topic message. This step only needs to be executed once for each topic.
3. Similar to the publisher blockchain, the subscriber blockchain should also enroll in the connector smart contract. This step only needs to be done once, when the subscriber wants to subscribe to a topic for the first time.
4. To receive a notification when a topic has been changed, the subscriber should subscribe to the topic by sending a request to the topics smart contract. This results in the subscriber being added to the list of all subscribers for the topic. This step only needs to be performed once for each topic.
5. Whenever needed, the application in the publisher blockchain can update the topic by sending a request to the connector smart contract.
6. The connector smart contract sends a publish request to the topics smart contract for the existing topic.
7. As soon as a publish request is received by the topics smart contract, the smart contract fetches the information about all the subscribers of the topic from the connector smart contract. This includes information on how the broker can interact with each of the subscribers.
8. The topics smart contract then uses the data fetched from the connector smart contract to notify all the subscribers of the change in the topic. This happens by sending an update request to the connector smart contract in each of the subscriber networks.
9. In each subscriber network, the connector smart contract receives the update for the topic and notifies the application.
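To make the flow concrete, the sketch below replays steps 1–9 from the perspective of client applications interacting with the broker through the Node.js fabric-network SDK. It is a minimal sketch under stated assumptions: the contract names (connector, topics) match Section IV, but the channel name, function names, and argument shapes (e.g., CreateBlockchain, CreateTopic, SubscribeToTopic, PublishToTopic) are illustrative rather than the exact signatures of our prototype.

```typescript
import { Gateway, Wallets } from 'fabric-network';

// Hypothetical helper: connect to the broker network and return its contracts.
async function brokerContracts(ccp: object, identity: string) {
  const wallet = await Wallets.newFileSystemWallet('./wallet');
  const gateway = new Gateway();
  await gateway.connect(ccp, { wallet, identity, discovery: { enabled: true, asLocalhost: true } });
  const network = await gateway.getNetwork('brokerchannel'); // assumed channel name
  return {
    connector: network.getContract('connector'),
    topics: network.getContract('topics'),
    gateway,
  };
}

async function demoFlow(ccp: object) {
  const { connector, topics, gateway } = await brokerContracts(ccp, 'appUser');
  try {
    // Steps 1 and 3: publisher and subscriber enroll once in the connector contract.
    await connector.submitTransaction('CreateBlockchain', 'pub1', 'Publisher', 'fabric', '10.0.0.5', '3000', '{}');
    await connector.submitTransaction('CreateBlockchain', 'sub1', 'Subscriber', 'besu', '10.0.0.9', '8545', '{"abi":"..."}');

    // Step 2: the publisher creates a topic with an initial message.
    await topics.submitTransaction('CreateTopic', 'topic1', 'temperature', 'pub1', '21.5');

    // Step 4: the subscriber subscribes to the topic.
    await topics.submitTransaction('SubscribeToTopic', 'sub1', 'topic1');

    // Steps 5-9: an update on the publisher side ends in a publish request;
    // the topics contract then notifies every subscriber (steps 7-9 run inside the chaincode).
    await topics.submitTransaction('PublishToTopic', 'topic1', '22.0');
  } finally {
    gateway.disconnect();
  }
}
```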
## IV Implementation and Deployment

A prototype of the proposed platform has been implemented as a proof-of-concept to demonstrate the feasibility of the design. This project is a Hyperledger Labs open-source project (https://github.com/hyperledger-labs/pubsub-interop), under the Apache License, Version 2.0. To show the interoperability capabilities of the platform, we implemented two example subscriber networks, as well as an example publisher network. The broker and the example publisher are implemented using two different Hyperledger Fabric V2 [29] networks. The two example subscribers are implemented using Hyperledger Fabric V1.4 [30, 31] and Hyperledger Besu [32]. In this section, the implementation and deployment details of the broker and the example networks are discussed.

### IV-A Broker Blockchain

The broker blockchain acts as a messaging broker between other blockchains to enable interoperability. When choosing the blockchain solution to implement the broker network, we had to ensure that the solution fits the needs of this platform well. First, since we aim to address interoperability in permissioned blockchains, the broker also needs to be permissioned. Moreover, many permissioned blockchains are enterprise-level and may have privacy and governance concerns, which our broker addresses by using a blockchain that provides immutability and traceability. The broker therefore requires a blockchain solution that meets these needs. Finally, the smart contracts that need to be implemented on the broker blockchain are complex, so the chosen blockchain must support sufficiently expressive smart contracts.

Hyperledger Fabric is an open-source permissioned blockchain that has been designed for enterprise use cases. Unlike open permissionless blockchains, which have scalability issues, Fabric enables high transaction throughput and low transaction confirmation latency. The architecture of Hyperledger Fabric is highly modular and configurable, which enables customization for each specific use case. Many blockchains only support smart contracts written in non-standard and domain-specific programming languages, making it impossible to implement smart contracts that require a Turing-complete language. On the other hand, Hyperledger Fabric supports smart contracts in general-purpose programming languages such as Go, Node.js, and Java [29]. In the broker network, we leverage the capabilities of Hyperledger Fabric V2.2 to implement a messaging broker.

We implement two chaincodes, called the topics and the connector, to support the features needed for the broker. The topics chaincode is responsible for keeping all the topics and their corresponding details. In Hyperledger Fabric, everything is stored as a key-value pair. In the topics smart contract, the key is a unique topic ID. The value is an object that includes the following properties: name, publisher, subscribers, and message. The name of a topic is a string value set by the publisher when creating the topic. Each topic has one publisher, the blockchain network that has registered the topic on the broker, which is the only blockchain that can make changes to the topic. The subscribers property stores a list of all the blockchains that have subscribed to the topic. It is worth mentioning that the publisher and subscribers properties only accept objects stored by the connector chaincode.

The connector chaincode is responsible for storing the connection details of other blockchain networks. Similar to the topics chaincode, the key in the key-value pair used in this chaincode is a unique ID for each blockchain. The value is an object that has the following properties: name, type, server IP, port, and extra information. The name is a string value that can be selected when enrolling in the network. The type shows what kind of blockchain technology the network is using. Currently, support for Fabric and Besu has been implemented, and other blockchains will be added in future versions. The server IP and port are used by the broker blockchain to access the publisher or subscriber using an HTTP request. The extra information property stores network-specific details that may be needed when interacting with the blockchains. For instance, for a Hyperledger Besu network, this includes the private key, the address, and the contract application binary interface (ABI) that the broker should use to send a request to the Besu network. This kind of implementation allows our solution to be extended to other blockchains, independent of their underlying network.
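As a sketch of the data model just described, the two asset types could be expressed as follows. This is a minimal illustration rather than the prototype's exact source: the field names follow the prose above, and extraInfo is kept as a free-form string since its contents are network-specific.

```typescript
// Value stored by the topics chaincode under a unique topic ID.
interface Topic {
  name: string;          // chosen by the publisher at creation time
  publisher: string;     // ID of the one blockchain allowed to update the topic
  subscribers: string[]; // IDs of blockchains subscribed to the topic
  message: string;       // latest published message
}

// Value stored by the connector chaincode under a unique blockchain ID.
interface BlockchainInfo {
  name: string;            // chosen when enrolling
  type: 'fabric' | 'besu'; // technologies currently supported
  serverIp: string;        // used by the broker to reach the network over HTTP
  port: number;
  extraInfo: string;       // network-specific details, e.g. Besu key/address/ABI
}
```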
To better understand how the topics and connector chaincodes work, we need to discuss their implemented functionalities. Figure 2 shows the UML class diagram of the implemented chaincodes. The Hyperledger Fabric contract API provides an interface for developing smart contracts and applications. Each developed chaincode should extend the contract class from this API and then implement the required logic. In each smart contract, the InitLedger function is used to create a set of initial assets on the ledger when the chaincode is deployed. In the topics chaincode, the CreateTopic function is used to create a new asset of type topic. The QueryTopic and the QueryAllTopics functions can be used to query one specific topic and all the existing topics, respectively. The connector chaincode implements the same initialize, create, and query functionalities but for assets of type blockchain.

Figure 2: UML class diagram of the implemented chaincodes.

Other than the mentioned functions, the topics chaincode also implements the SubscribeToTopic, UnsubscribeFromTopic, and PublishToTopic functionalities. When a destination blockchain wants to get notified of a topic’s change, it subscribes to that topic by invoking the SubscribeToTopic function. The subscriber can also unsubscribe from the topic at any time by invoking the UnsubscribeFromTopic function. Finally, the PublishToTopic function is used by the source blockchain network when it wants to update the topic’s message. An invoke request to this function triggers update requests to all the subscribers of the topic.

Algorithm 1 shows the detailed implementation of the PublishToTopic method. First, the broker retrieves the latest version of the topic from the ledger. In case no topic is found, it immediately throws an error. Next, the topic’s message is updated with the new message value, and the topic’s state is written back to the ledger. The next step is for the broker to notify all the subscribers. For each of the subscribers of the topic, the blockchain object is queried from the connector contract. This inter-chaincode communication is also shown in Figure 2. Then, given the type of the subscriber blockchain, the steps to invoke the remote network are followed.

```
Algorithm 1: The PublishToTopic method
Input:  topicID, newMessage
Result: subscribers are notified of the new message

topicState ← getState(topicID)
if not topicState then
    throw error
end if
topicState.message ← newMessage
putState(topicID, topicState)
for subID in topicState.subscribers do
    subState ← query subID from connector contract
    if subState.type = Fabric then
        follow steps to invoke a remote Fabric network
    else if subState.type = Besu then
        follow steps to invoke a remote Besu network
    end if
end for
```
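For illustration, here is how the core of Algorithm 1 might look as Fabric chaincode written against the Node.js fabric-contract-api. This is a hedged sketch rather than our exact implementation: the Topic shape follows Section IV-A, QueryBlockchain is an illustrative name for the connector's query function, and notifySubscriber stands in for the network-type-specific invocation steps, which in practice use the HTTP endpoint stored in the connector record.

```typescript
import { Context, Contract } from 'fabric-contract-api';

interface Topic {
  name: string;
  publisher: string;
  subscribers: string[];
  message: string;
}

export class TopicsContract extends Contract {
  async PublishToTopic(ctx: Context, topicID: string, newMessage: string): Promise<void> {
    // Retrieve the latest version of the topic from the ledger.
    const data = await ctx.stub.getState(topicID);
    if (!data || data.length === 0) {
      throw new Error(`Topic ${topicID} does not exist`);
    }
    const topic: Topic = JSON.parse(data.toString());

    // Update the message and write the topic state back to the ledger.
    topic.message = newMessage;
    await ctx.stub.putState(topicID, Buffer.from(JSON.stringify(topic)));

    // Notify every subscriber, using its record in the connector chaincode.
    for (const subID of topic.subscribers) {
      // Inter-chaincode query (Figure 2): assumes the connector chaincode runs
      // on the same channel and exposes a QueryBlockchain function.
      const res = await ctx.stub.invokeChaincode('connector', ['QueryBlockchain', subID], ctx.stub.getChannelID());
      const sub = JSON.parse(res.payload.toString());
      await this.notifySubscriber(sub, topicID, newMessage); // hypothetical helper per subscriber type
    }
  }

  private async notifySubscriber(sub: { type: string }, topicID: string, message: string): Promise<void> {
    // Placeholder for the type-specific steps of Algorithm 1
    // (invoke a remote Fabric network or a remote Besu network).
  }
}
```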
### IV-B Subscriber Blockchains

The subscriber, or destination blockchain, is the blockchain that requires information from another blockchain to run a task. For the subscriber to be able to participate in the platform, it needs to have the appropriate connector smart contract deployed on it. We have implemented subscriber connector contracts for Hyperledger Fabric V1.4 and Hyperledger Besu. However, the connector is a simple smart contract that can also be developed by the owners of the subscriber blockchain. This smart contract needs to keep track of the topics that the subscriber has subscribed to and store their latest version for other smart contracts to access at any time.

Two example subscriber networks have been implemented to demonstrate the interoperability capabilities of the platform. The first example subscriber is implemented using Hyperledger Fabric V1.4. The second example subscriber is implemented using Hyperledger Besu, an open-source Ethereum client that supports private and permissioned blockchains. Besu can create networks that work based on a proof of work (PoW) or a proof of authority (PoA) consensus algorithm. In this work, we implemented a PoW network using Besu, which can be thought of as a private Ethereum network. We then implemented a connector smart contract in Solidity to keep a record of the subscribed topics.
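To give a flavor of what a subscriber-side connector does, here is a minimal sketch of the Fabric variant in fabric-contract-api style; the Besu variant plays the same role in Solidity. The function names are illustrative: the contract simply records subscriptions and stores the latest message per topic so that other local smart contracts can read it at any time, as described above.

```typescript
import { Context, Contract } from 'fabric-contract-api';

// Minimal subscriber-side connector: tracks subscribed topics and their latest message.
export class SubscriberConnector extends Contract {
  // Called locally when the application subscribes to a topic on the broker.
  async TrackTopic(ctx: Context, topicID: string): Promise<void> {
    await ctx.stub.putState(topicID, Buffer.from(JSON.stringify({ message: null })));
  }

  // Called by the broker (step 9 of the message flow) when the topic changes.
  async UpdateTopic(ctx: Context, topicID: string, message: string): Promise<void> {
    const data = await ctx.stub.getState(topicID);
    if (!data || data.length === 0) {
      throw new Error(`Not subscribed to topic ${topicID}`);
    }
    await ctx.stub.putState(topicID, Buffer.from(JSON.stringify({ message })));
  }

  // Read access for other smart contracts and applications on this network.
  async GetTopic(ctx: Context, topicID: string): Promise<string> {
    const data = await ctx.stub.getState(topicID);
    return data && data.length > 0 ? data.toString() : '';
  }
}
```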
### IV-C Publisher Blockchains

The publisher, or source blockchain, is the blockchain network that needs to send information to other blockchains. Similar to the subscriber blockchain, a connector smart contract is also required for the publishers. However, the connector is slightly different for the publisher. The publisher connector should not only keep track of the topics, but also connect to the broker blockchain to publish to them. We implemented an example publisher network using Hyperledger Fabric V2.2.

## V Experimental Evaluation

In this section, we focus on evaluating the performance of the implemented prototype of the broker blockchain. The goal is to see how the throughput and latency of the system change in different scenarios. We have conducted two series of experiments to achieve this goal. The first set of experiments aims to show the performance metrics of the different functionalities of the broker blockchain. The second set of experiments focuses on the publish function, which is the most important and time-consuming component of the broker blockchain.

We have used Hyperledger Caliper [33] to run the experiments. Hyperledger Caliper is an open-source blockchain performance benchmark tool that allows performance measurement for different blockchains, such as Hyperledger Fabric, Ethereum, and Hyperledger Besu. In Hyperledger Caliper, the workloads or benchmarks are responsible for generating the content of each transaction that is sent to the blockchain network. Given the network and benchmark configurations, Caliper uses a set of independent workers to send scheduled requests to the blockchain network and monitor the response. When the tests are finished, Caliper generates a performance report consisting of the average throughput and the minimum, maximum, and average latency throughout the test. The throughput shows the number of transactions that were processed by the system in a given time. The latency shows the amount of time it takes for a transaction to be finished and added to the ledger.

Table I summarizes the specifications of each component in the experimental evaluation. We have set up Hyperledger Caliper on a separate machine to ensure that its process does not affect the performance of the broker network. We use five workers, a fixed-rate controller, and a test duration of 60 seconds for each benchmark round. The broker network is implemented using Hyperledger Fabric V2.2 with two peer organizations and an orderer organization, each with an independent certificate authority. Each of the peer organizations hosts one peer node, and the orderer uses the Raft implementation. Two chaincodes have been implemented that run on one channel. The Fabric subscriber, implemented using Fabric V1.4, has two organizations, each hosting two peers. The whole subscriber network uses one Solo orderer and one Fabric certificate authority. The Besu subscriber implements a private Ethereum network with the PoW consensus algorithm. The publisher has been implemented using Hyperledger Fabric V2.2 with the same configurations as the broker network.

TABLE I: Experimental evaluation setup

Component | Type | CPU | RAM | Disk
---|---|---|---|---
Caliper Benchmark | N/A | 8 vCPU | 30 GB | 288 GB
Broker | Fabric V2.2 | 8 vCPU | 30 GB | 288 GB
Fabric Subscriber | Fabric V1.4 | 2 vCPU | 7.5 GB | 36 GB
Besu Subscriber | Besu | 2 vCPU | 7.5 GB | 36 GB
Publisher | Fabric V2.2 | 2 vCPU | 7.5 GB | 36 GB

The first set of experiments focuses on the performance evaluation of the broker blockchain. In these experiments, we conduct a series of tests using Hyperledger Caliper for each functionality that the broker blockchain offers. Figure 2 summarizes all these functionalities. Each type of transaction goes through a specific set of steps in Hyperledger Fabric, which highly influences the response time for that transaction. For instance, an invoke transaction goes through the endorse, order, and commit steps. On the other hand, a query transaction is not transferred to the orderer, and the response is immediately sent back by the peer. The create actions in the connector and topics smart contracts are invoke actions that have very similar implementations. The same goes for the query actions in the two smart contracts. As a result, it would be repetitive to run performance evaluation experiments for both smart contracts. Therefore, we run the experiments on the topics smart contract.

The topics smart contract has five important functionalities: create a topic, query a topic, publish to a topic, subscribe to a topic, and unsubscribe from a topic. For each of these actions, we run a set of experiments by changing the transaction send rate in the Hyperledger Caliper benchmark. The goal is to see how the system’s throughput and average latency change when the send rate is changed.
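As an illustration of how these rounds are driven, a Caliper workload module for one of the actions might look as follows. This is sketched against Caliper's v0.4+ workload API; the module is our illustration and may differ from the benchmark files in the repository.

```typescript
import { WorkloadModuleBase } from '@hyperledger/caliper-core';

// Workload that repeatedly invokes CreateTopic on the topics chaincode.
class CreateTopicWorkload extends WorkloadModuleBase {
  private txCounter = 0;

  async submitTransaction(): Promise<void> {
    this.txCounter++;
    await this.sutAdapter.sendRequests({
      contractId: 'topics',
      contractFunction: 'CreateTopic',
      contractArguments: [`topic_${this.workerIndex}_${this.txCounter}`, 'initial message'],
      readOnly: false,
    });
  }
}

// Caliper instantiates one module per worker; with five workers, a fixed-rate
// controller, and 60-second rounds, this mirrors the setup described above.
export function createWorkloadModule(): WorkloadModuleBase {
  return new CreateTopicWorkload();
}
```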
Figure 3 shows the details of these experiments. It can be seen that the send rate follows the same pattern for all the actions except PublishToTopic. The reason for this difference is that the PublishToTopic action takes more time and needs more resources than the other actions. Consequently, the broker blockchain’s hardware limits are reached when the network receives more than roughly 100 publish transactions per second. We discuss the behaviour of the network under different PublishToTopic request rates in the second set of experiments, shown in Figure 4. As a result of this limitation, we lowered the send rate for the PublishToTopic action in our experiments.

Figure 3: The trend of system throughput and average latency for various functionalities throughout time with the change of request send rate. The words publish, sub, unsub, query, and create in the plots stand for the PublishToTopic, SubscribeToTopic, UnsubscribeFromTopic, QueryTopic, and CreateTopic functions, respectively.

It can be seen in Figure 3 that SubscribeToTopic, UnsubscribeFromTopic, and CreateTopic behave similarly under the same send rate. These three actions are of type invoke. Since an invoke transaction proposes a change in the blockchain, it needs to go through the consensus algorithm, which can be time-consuming. Since the three actions are of the same type, and none of them needs heavy computation during execution, the system throughput and latency are similar for all of them. As the experimental results show, when the send rate is lower than a threshold (around 160 TPS in this case), the throughput equals the send rate, and the average latency is only a few hundred milliseconds (around 100 to 300 milliseconds). This shows that with send rates below the threshold, all transactions are processed immediately. When the number of create, subscribe, or unsubscribe transactions sent per second exceeds the threshold, the broker network’s processing limit is reached. The throughput is limited to the broker’s maximum capacity (around 160 TPS), and the transactions are queued before being processed, which results in an increase in latency. Figure 3 shows that when the send rate for the create, subscribe, or unsubscribe transactions is around 210 TPS, the average latency increases to about 11 seconds. The latency keeps increasing with higher send rates and reaches approximately 50 seconds at a send rate of 360 TPS.

The QueryTopic action is different from the previous ones. Since a query transaction does not go through the consensus protocol, its processing is much faster. The send rate pattern used for the query is similar to that of create, subscribe, and unsubscribe. However, the throughput and average latency behave very differently. The throughput follows the same pattern as the send rate, and the average latency is around 10 milliseconds throughout the whole experiment. These results show that this experiment does not reach the processing limit for QueryTopic.

Finally, PublishToTopic is similar to create, subscribe, and unsubscribe because they are all invoke transactions. However, the publish action requires heavier computation. As mentioned earlier, since the publish action needs more time and computational resources, we use a different send rate pattern. If we were to use the same send rate, the broker blockchain’s hardware limits would be reached, resulting in the experiments being halted. We discuss this in more detail in the second set of experiments, shown in Figure 4. To ensure that the performance of the remote source and destination networks does not influence the performance evaluation of the broker network, we only send dummy requests to the subscriber networks during the experiments. It can be observed from Figure 3 that the publish action reaches the processing limit of the broker network much faster than the other invoke transactions. With send rates of about 70 TPS and more, the throughput is limited to 65 TPS. The average latency for the publish action fluctuates more than for the other invoke actions. The main reason for this fluctuation is that in the publish method, the processing time can vary depending on the number of subscribers that the topic has. In this experiment, the average latency gets as high as 80 seconds at a send rate of 110 TPS.

Figure 4: The trend of system throughput, average latency, and request success rate throughout time with the change of send rate.

Given the limits of the PublishToTopic action, we decided to run additional experiments on this type of transaction. This experiment aims to find the broker network’s limits and discover what happens when a limit is reached.
In the previous experiment, we discovered that the processing limit for the publish transactions is reached at a send rate of around 70 TPS. We also observed that the latency increases and the throughput is limited for send rates above 70 TPS and below 110 TPS. However, we would like to know what happens if the send rate is more than 110 TPS. In this experiment, we linearly increase the send rate from 50 to 150 TPS and observe the throughput, latency, and transaction success rate. Figure 4 shows the results of this experiment. Similar to the previous experiment, we see that the throughput is limited and the latency increases when the send rate reaches 70 TPS. Nevertheless, an interesting change happens at the 120 TPS send rate. At this point, a significant drop in throughput and a significant rise in latency are observed. Moreover, the transaction success rate is no longer 100%. From this point on, a portion of the transactions fail, since the broker network has reached its hardware limits.

## VI Discussion and Future Work

To enable blockchain interoperability, we have proposed the use of a broker blockchain as a middleman. The broker blockchain acts as a decentralized trusted relay between the source and destination networks. Using a relay enables the interoperating networks to transfer data with minimal effort. Verifying the data and handling the communications between different blockchain networks can be delegated to the relay. As a result, there is no need for source and destination networks to make fundamental changes to their underlying structure. The relay network is also a blockchain network; while exploiting all the desirable features offered by blockchain, it runs smart contracts implementing the interoperability functionality. Therefore, the broker blockchain allows the interoperation to be seamless, transparent, and secure.

The platform proposed in this paper stores the destination and source blockchains as assets on the distributed ledger. As a result, a large number of blockchains can be supported, as there are no limits on the number of assets. A study on the performance of Hyperledger Fabric V1.1 shows that the network can scale to 100+ peers [34, 35]. As the broker network has been implemented using Fabric V2.2, we expect this number to be even higher in our network. Therefore, at least 100 peers can participate in the governance of the broker blockchain.

In the current prototype of our platform, every participant can subscribe to all existing topics, and there is no access control mechanism implemented. The private data feature of Hyperledger Fabric can be utilized to establish private channels in the broker blockchain and keep topic data separate between organizations. Furthermore, an access control mechanism can be added to our pub/sub system to control the flow of data at a more granular level, for instance, the decentralized access control proposed by Rouhani et al. [36]. It is also possible to conduct authorization processes with minimal information disclosure by using the novel self-sovereign-identity-based access control model [37]. This way, we could model each blockchain as an agent that proves it owns the specific credentials needed for the access control process. Moreover, the publisher network may choose to make a topic available only to a subset of subscribers. Access control can be used to manage the blockchains that can access each topic.
## VII Conclusion

With blockchain technology gaining popularity in academia and industry, many blockchain networks are being introduced worldwide. These networks are highly isolated and incompatible with each other, resulting in silos of data and assets. Blockchain interoperability solutions can revolutionize this technology by enabling data and asset transfers between homogeneous and heterogeneous blockchains. In this paper, we proposed a blockchain interoperability solution based on the publish/subscribe architecture. Our solution consists of a broker blockchain that keeps a record of the data being transferred between blockchain networks. The blockchains that aim to participate in the interoperability can connect to the broker network as publishers or subscribers, depending on their role. A prototype of the broker blockchain has been implemented using Hyperledger Fabric. Moreover, an example publisher and two example subscribers have been implemented using Hyperledger Besu and two versions of Hyperledger Fabric to show that the design works for heterogeneous blockchains. The network’s performance has been analyzed using a benchmark tool to identify the platform’s limits and bottlenecks. The implementation and evaluations indicate the feasibility of the idea with satisfactory performance, and the bottleneck is identified as the process of publishing a new message to a topic. Finally, a discussion on the extensibility, scalability, and possible improvements of the system is presented.

## References

* [1] S. Nakamoto, “Bitcoin: A peer-to-peer electronic cash system.” https://bitcoin.org/bitcoin.pdf, 2008. Last accessed 2020-07-17.
* [2] G. Wood et al., “Ethereum: A secure decentralised generalised transaction ledger,” Ethereum project yellow paper, vol. 151, no. 2014, pp. 1–32, 2014.
* [3] B. Chase and E. MacBrough, “Analysis of the XRP ledger consensus protocol,” arXiv preprint arXiv:1802.07242, 2018.
* [4] T.-T. Kuo, H.-E. Kim, and L. Ohno-Machado, “Blockchain distributed ledger technologies for biomedical and health care applications,” Journal of the American Medical Informatics Association, vol. 24, no. 6, pp. 1211–1220, 2017.
* [5] S. Rouhani, L. Butterworth, A. D. Simmons, D. G. Humphery, and R. Deters, “MediChain TM: A secure decentralized medical data asset management system,” in 2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), pp. 1533–1538, IEEE, 2018.
* [6] T. M. Fernández-Caramés and P. Fraga-Lamas, “A review on the use of blockchain for the internet of things,” IEEE Access, vol. 6, pp. 32979–33001, 2018.
* [7] C. Fan, H. Khazaei, Y. Chen, and P. Musilek, “Towards a scalable DAG-based distributed ledger for smart communities,” in 2019 IEEE 5th World Forum on Internet of Things (WF-IoT), pp. 177–182, IEEE, 2019.
* [8] R. Belchior, M. Correia, and A. Vasconcelos, “JusticeChain: Using blockchain to protect justice logs,” in CoopIS 2019: 27th International Conference on Cooperative Information Systems, 2019.
* [9] R. Belchior, A. Vasconcelos, and M. Correia, “Towards secure, decentralized, and automatic audits with blockchain,” in European Conference on Information Systems, 2020.
* [10] S. Ghaemi, H. Khazaei, and P. Musilek, “ChainFaaS: An open blockchain-based serverless platform,” IEEE Access, vol. 8, pp. 131760–131778, 2020.
* [11] J. H. Park and J. H. Park, “Blockchain security in cloud computing: Use cases, challenges, and solutions,” Symmetry, vol. 9, no. 8, p. 164, 2017.
* [12] R. Belchior, A. Vasconcelos, S. Guerreiro, and M. Correia, “A survey on blockchain interoperability: Past, present, and future trends,” arXiv preprint arXiv:2005.14282, 2020.
* [13] Ethereum Foundation and Consensys, “BTC-Relay: Ethereum contract for Bitcoin SPV,” 2015.
* [14] A. Garoffolo, D. Kaidalov, and R. Oliynykov, “Zendoo: A zk-SNARK verifiable cross-chain transfer protocol enabling decoupled and decentralized sidechains,” tech. rep., 2020.
* [15] S. Lerner, tech. rep., RSK, 2015.
* [16] J. Lu, B. Yang, Z. Liang, Y. Zhang, S. Demmon, E. Swartz, and L. Lu, 2017.
* [17] G. Wood, “Polkadot: Vision for a heterogeneous multi-chain framework,” Whitepaper, 2017.
* [18] J. Kwon and E. Buchman, “Cosmos whitepaper,” tech. rep., 2016.
* [19] H. Montgomery, H. Borne-Pons, J. Hamilton, M. Bowman, P. Somogyvari, S. Fujimoto, T. Takeuchi, T. Kuhrt, and R. Belchior, “Hyperledger Cactus whitepaper.” https://github.com/hyperledger/cactus/blob/master/whitepaper/whitepaper.md, 2020. Last accessed 2020-09-28.
* [20] E. Abebe, D. Behl, C. Govindarajan, Y. Hu, D. Karunamoorthy, P. Novotny, V. Ramakrishna, and C. Vecchiola, “Enabling enterprise blockchain interoperability with trusted data transfer (industry track),” in Proceedings of the 20th International Middleware Conference Industrial Track, pp. 29–35, 2019.
* [21] P. Lv, L. Wang, H. Zhu, W. Deng, and L. Gu, “An IoT-oriented privacy-preserving publish/subscribe model over blockchains,” IEEE Access, vol. 7, pp. 41309–41314, 2019.
* [22] G. S. Ramachandran, K.-L. Wright, L. Zheng, P. Navaney, M. Naveed, B. Krishnamachari, and J. Dhaliwal, “Trinity: A Byzantine fault-tolerant distributed publish-subscribe system with immutable blockchain-based persistence,” in 2019 IEEE International Conference on Blockchain and Cryptocurrency (ICBC), pp. 227–235, IEEE, 2019.
* [23] B. Huang, R. Zhang, Z. Lu, Y. Zhang, J. Wu, L. Zhan, and P. C. Hung, “BPS: A reliable and efficient pub/sub communication model with blockchain-enhanced paradigm in multi-tenant edge cloud,” Journal of Parallel and Distributed Computing, 2020.
* [24] G. Bu, T. S. L. Nguyen, M. P. Butucaru, and K. L. Thai, “HyperPubSub: Blockchain based publish/subscribe,” in 2019 38th Symposium on Reliable Distributed Systems (SRDS), pp. 366–3662, IEEE, 2019.
* [25] Y. Zhao, Y. Li, Q. Mu, B. Yang, and Y. Yu, “Secure pub-sub: Blockchain-based fair payment with reputation for reliable cyber physical systems,” IEEE Access, vol. 6, pp. 12295–12303, 2018.
* [26] G. Yang, C. H. Tan, Q. Huang, and D. S. Wong, “Probabilistic public key encryption with equality test,” in Cryptographers’ Track at the RSA Conference, pp. 119–131, Springer, 2010.
* [27] T. ElGamal, “A public key cryptosystem and a signature scheme based on discrete logarithms,” IEEE Transactions on Information Theory, vol. 31, no. 4, pp. 469–472, 1985.
* [28] N. Zupan, K. Zhang, and H.-A. Jacobsen, “HyperPubSub: A decentralized, permissioned, publish/subscribe service using blockchains,” in Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference: Posters and Demos, pp. 15–16, 2017.
* [29] Hyperledger, “Hyperledger Fabric v2.2 documentation.” https://hyperledger-fabric.readthedocs.io/en/release-2.2/, 2020. Last accessed 2020-10-22.
* [30] Hyperledger, “Hyperledger Fabric v1.4 documentation.” https://hyperledger-fabric.readthedocs.io/en/release-1.4/, 2019. Last accessed 2020-10-22.
* [31] E. Androulaki, A. Barger, V. Bortnikov, C. Cachin, K. Christidis, A. De Caro, D. Enyeart, C. Ferris, G. Laventman, Y. Manevich, et al., “Hyperledger Fabric: A distributed operating system for permissioned blockchains,” in Proceedings of the Thirteenth EuroSys Conference, pp. 1–15, 2018.
* [32] Hyperledger, “Hyperledger Besu documentation.” https://besu.hyperledger.org, 2020. Last accessed 2020-10-22.
* [33] The Linux Foundation, “Hyperledger Caliper.” https://www.hyperledger.org/use/caliper, 2020. Last accessed 2020-11-5.
* [34] C. Fan, S. Ghaemi, H. Khazaei, and P. Musilek, “Performance evaluation of blockchain systems: A systematic survey,” IEEE Access, vol. 8, pp. 126927–126950, 2020.
* [35] E. Androulaki, A. Barger, V. Bortnikov, C. Cachin, K. Christidis, A. De Caro, D. Enyeart, C. Ferris, G. Laventman, Y. Manevich, S. Muralidharan, C. Murthy, B. Nguyen, M. Sethi, G. Singh, K. Smith, A. Sorniotti, C. Stathakopoulou, M. Vukolić, S. W. Cocco, and J. Yellick, “Hyperledger Fabric: A distributed operating system for permissioned blockchains,” in Proceedings of the Thirteenth EuroSys Conference, EuroSys ’18, (New York, NY, USA), Association for Computing Machinery, 2018.
* [36] S. Rouhani, R. Belchior, R. S. Cruz, and R. Deters, “Distributed attribute-based access control system using a permissioned blockchain,” arXiv preprint, 2020.
* [37] R. Belchior, B. Putz, G. Pernul, M. Correia, A. Vasconcelos, and S. Guerreiro, “SSIBAC: Self-sovereign identity based access control,” in The 3rd International Workshop on Blockchain Systems and Applications, IEEE, 2020.
# On isomorphism classes of leaf-induced subtrees in topological trees

Audace A. V. Dossou-Olory
Department of Mathematics and Applied Mathematics, University of Johannesburg, P.O. Box 524, Auckland Park, Johannesburg 2006, South Africa; and Département d’Hydrologie et de Gestion des Ressources en Eau, Centre d’Excellence d’Afrique pour l’Eau et l’Assainissement, Institut National de l’Eau
<EMAIL_ADDRESS>

Ignatius Boadi
African Institute for Mathematical Sciences Ghana, Summerhill Estates, East Legon Hills, Santoe, Accra, Ghana
<EMAIL_ADDRESS>

###### Abstract.

A subtree can be induced in a natural way by a subset of leaves of a rooted tree. We study the number of nonisomorphic such subtrees induced by leaves (leaf-induced subtrees) of a rooted tree with no vertex of outdegree 1 (topological tree). We show that stars and binary caterpillars are the only topological trees attaining the minimum number of nonisomorphic leaf-induced subtrees among all topological trees with a given number of leaves. We obtain a closed formula and a recursive formula for the families of $d$-ary caterpillars and complete $d$-ary trees, respectively. An asymptotic formula is found for complete $d$-ary trees using polynomial recurrences. We also show that the complete binary tree of height $h>1$ contains precisely $\lfloor 2(1.24602\ldots)^{2^{h}}\rfloor$ nonisomorphic leaf-induced subtrees.

###### Key words and phrases:

topological trees, non-isomorphic leaf-induced subtrees, complete $d$-ary trees, $d$-ary caterpillars, stars, graph theory, enumerative combinatorics, networks

## 1\. Introduction and preliminary

Analogous to sampling in statistics, subtrees of trees are very useful when networks are being analyzed. Since networks and data structures are usually large and complex, subtrees are valuable because they are more manageable to analyze than the whole structure. Also, to determine the reliability of a network in the presence of vertex or edge failures, one can look at the number of subtrees of the graph that represents the network: the larger the number of subtrees of this graph, the more reliable the network is; see, for example, [4, 5, 6]. Furthermore, leaf-induced subtrees are applicable in web analytics, especially in analyzing access patterns of visitors of a particular website [14]. EvoMiner, an efficient algorithm for frequent subtree mining (FST) in phylogenetic trees, is introduced in [15]. FST is particularly useful when studying the ancestry of a current generation, whose members are naturally the leaves of a rooted tree [16].

When a vertex of a tree is designated as the root, we have a rooted tree. A topological tree is a rooted tree with no vertex of outdegree one (or degree two, except possibly the root). Such trees are also referred to as series-reduced or homeomorphically irreducible trees [20, 21]. By definition, the tree that has only one vertex is a topological tree. A topological tree in which each vertex has outdegree at most $d$ is a $d$-ary tree. In a strict or full $d$-ary tree, each vertex has outdegree 0 or $d$. These trees are applicable mainly in computer science, where they are used in designing and building data structures, leading to efficient data organization and easy access and modification. In [7], $d$-ary trees are used to develop an efficient solution to the Empirical Cumulative Distribution Function (ECDF) searching problem.
The ECDF searching problem arises in multivariate statistics: given $N$ points in $d$-dimensional space, a point $X=\\{x_{1},x_{2},\ldots,x_{d}\\}$ dominates another point $Y=\\{y_{1},y_{2},\ldots,y_{d}\\}$ if $x_{i}\geq y_{i}$ for all $i\in\\{1,2,\ldots,d\\}$. The ECDF searching problem primarily deals with finding the ratio of the number of points $Y$ dominated by a given $X$ to $N$, the total number of data points [11]. Data structures based on $d$-ary trees are designed in [7], which enable efficient ECDF queries.

A caterpillar is a tree whose internal vertices lie on a single path (called the backbone) and all leaves are attached directly to the internal vertices. The caterpillar is found to be an extremal graph when determining the maximum or minimum of various distance-based graph invariants [9, 10]. For example, Dadedzi et al. found in [10] that the caterpillar has the greatest distance spectral radius among trees with a given degree sequence. Caterpillars are applicable in layouts of communication and electrical networks, where the internal vertices represent major outlets and the leaves represent consumers. They are also widely used in chemical graph theory; see, for example, [2]. They are applied in modeling interactions and in the analysis of benzenoid hydrocarbons. Caterpillars are sometimes called Gutman trees, since Ivan Gutman introduced them to chemical graph theory and made extensive use of them in his works; see, for example, [1, 3]. They are also sometimes called benzenoid trees [8]. When one of the internal vertices of a caterpillar is designated as the root, we have a rooted caterpillar.

We mainly focus on two families of topological trees in this work:

1. $d$-ary caterpillars: A $d$-ary caterpillar is defined as a strict $d$-ary tree which is also a rooted caterpillar. A $d$-ary caterpillar with $n$ leaves is denoted by $F^{d}_{n}$. In Figure 1, we show the ternary caterpillar with $7$ leaves, $F^{3}_{7}$. At each level of a $d$-ary caterpillar, there are $d-1$ leaves and one internal vertex, except at the highest level, where there are $d$ leaves. Thus, the number of leaves of a $d$-ary caterpillar of height $h$ is $1+h(d-1)$.

Figure 1. The ternary caterpillar with $7$ leaves, $F^{3}_{7}$.

2. Complete $d$-ary trees: The complete $d$-ary tree of height $h$, denoted by $C^{d}_{h}$, is a strict $d$-ary tree in which all leaves are at distance $h$ from the root; thus all leaves lie at the same level in the tree. In Figure 2, we show the complete ternary tree of height two, $C_{2}^{3}$. In a complete $d$-ary tree, there are $d$ vertices at level 1. Each of these vertices is in turn connected to $d$ vertices at level 2, yielding $d^{2}$ vertices at level 2. Hence at level $h$, there are $d^{h}$ vertices. Since all leaves of a complete $d$-ary tree are at level $h$, the number of leaves of $C^{d}_{h}$ is $d^{h}$. When $h=0$, we have the one-vertex tree, and when $h=1$, we have the star with $d$ leaves.

Figure 2. The complete ternary tree with height 2, $C^{3}_{2}$.

Two graphs are isomorphic if their vertices can be labeled with the same set of labels in such a way that every vertex has the same neighbors in both graphs. By a leaf-induced subtree of a rooted tree, we mean a subtree induced by a subset of the leaves of the tree. The number of nonisomorphic such leaf-induced subtrees of a given rooted tree is the main subject of this work.
The subject of leaf-induced subtrees of rooted trees has been well studied [12, 13, 17, 18, 19, 22, 23, 24]. To form a leaf-induced subtree of a given rooted tree $T$, we consider the power set of the set of leaves of $T$, excluding the empty set. If $S$ is a member of this power set, then the subtree induced by the leaves in $S$ is obtained by

1. extracting the smallest subtree of $T$ containing no leaf other than those in $S$. To do this, we first locate the vertex which is the most recent common ancestor of the leaves in $S$. We then select all vertices and edges connecting the leaves in $S$ to this most recent common ancestor [13, 19];
2. contracting all vertices of outdegree 1 in the smallest subtree from step 1 [13, 19]. Suppose $g$ is a vertex of outdegree 1 in the subtree obtained from step 1 and it has neighbors $f$ and $h$, so that we have the edges $\\{f,g\\}$ and $\\{g,h\\}$; we contract $g$ and create the edge $\\{f,h\\}$. This is done for all vertices of outdegree 1 to create a leaf-induced subtree.

A single leaf can be selected and used to create a leaf-induced subtree, namely the one-vertex tree. By definition, any leaf-induced subtree of a rooted tree is a topological tree.

###### Definition 1.1.

We define $N(T)$ as the number of nonisomorphic leaf-induced subtrees of a topological tree $T$.

Clearly, there is only one (up to isomorphism) subtree induced by a given subset of leaves of a topological tree. Hence, the total number of leaf-induced subtrees of a topological tree with $n$ leaves is just $2^{n}-1$ (excluding the empty set). We consider subtrees induced by leaves of a topological tree and select only one representative from every isomorphism class. In Example 1, we construct a leaf-induced subtree of the tree in Figure 3.

###### Example 1.

Consider the tree labeled 1 in Figure 3. Let $L=\\{l_{1},l_{3},l_{4},l_{6}\\}$. We first extract the smallest subtree containing all the vertices and edges lying on the paths connecting the leaves in $L$ to their most recent common ancestor. After contracting all vertices of outdegree 1 in this subtree, we obtain the leaf-induced subtree labeled 2 in Figure 3.

Figure 3. A topological tree and a subtree induced by some of its leaves.
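The two-step construction above is easy to mechanize. Below is a small standalone sketch (our illustration in TypeScript with an ad hoc tree representation, not the paper's own tooling) that performs the extraction and the contraction in one recursive pass: a vertex survives only if at least one selected leaf lies below it, and a surviving vertex of outdegree 1 is replaced by its unique surviving child, so the result is automatically rooted at the most recent common ancestor.

```typescript
// A rooted tree: each node is either a leaf label (string) or a list of children.
type Tree = string | Tree[];

// Induce the subtree of t spanned by the selected leaves, contracting
// all vertices of outdegree 1 (returns null if no selected leaf lies in t).
function induce(t: Tree, selected: Set<string>): Tree | null {
  if (typeof t === 'string') {
    return selected.has(t) ? t : null; // keep only selected leaves
  }
  const kept = t.map((c) => induce(c, selected)).filter((c): c is Tree => c !== null);
  if (kept.length === 0) return null;    // no selected leaf below this vertex
  if (kept.length === 1) return kept[0]; // outdegree-1 vertex: contract it
  return kept;                           // branching vertex survives
}

// Example on an illustrative topological tree (not the tree of Figure 3).
const t: Tree = [['l1', 'l2'], ['l3', ['l4', 'l5']], 'l6'];
console.log(JSON.stringify(induce(t, new Set(['l1', 'l3', 'l4', 'l6']))));
// prints ["l1",["l3","l4"],"l6"]
```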
## 2\. Main results

We present formulas for the number of nonisomorphic leaf-induced subtrees of a $d$-ary caterpillar and a complete $d$-ary tree. We also give the minimum number of nonisomorphic leaf-induced subtrees of a topological tree with $n$ leaves, together with a characterization of all extremal trees. All topological trees with up to four leaves are either stars or binary caterpillars, except the trees shown in Figure 4.

Figure 4. Some topological trees with 4 leaves.

There are 4 nonisomorphic leaf-induced subtrees of $C^{2}_{2}$ and 5 of each of $Q_{4}$ and $E^{3}_{4}$. Theorem 2.1 states the minimum number of nonisomorphic leaf-induced subtrees among all topological trees with $n>4$ leaves and also describes the corresponding extremal trees.

###### Theorem 2.1.

Let $n\geq 5$ be an integer. Every $n$-leaf topological tree $T$ has at least $n$ nonisomorphic leaf-induced subtrees, with equality if and only if $T$ is a star or a binary caterpillar.

###### Proof.

Consider the star $S_{n}$ with $n$ leaves. Since all leaf-induced subtrees of $S_{n}$ are themselves stars (see Figure 5), we obtain $n$ nonisomorphic leaf-induced subtrees for $S_{n}$. Consider the binary caterpillar $F_{n}^{2}$ with $n$ leaves. All leaf-induced subtrees of $F_{n}^{2}$ are also binary caterpillars. Hence we obtain $n$ nonisomorphic leaf-induced subtrees for $F^{2}_{n}$ (see Figure 6).

Figure 5. All leaf-induced subtrees of $S_{n}$.

Figure 6. All leaf-induced subtrees of $F^{2}_{n}$.

Finally, we show that a topological tree $T$ with $n\geq 5$ leaves which is neither a star nor a binary caterpillar has at least $n+1$ nonisomorphic leaf-induced subtrees. Since $T$ is neither a star nor a binary caterpillar, its root has outdegree at most $n-1$ and at most $n-2$ leaves attached to it. The key to the proof of this part of the theorem is to show that there exist at least two nonisomorphic leaf-induced subtrees with $k$ leaves for some $k\in\\{3,\ldots,n\\}$. We consider two possible cases:

1. If $T$ contains a vertex of outdegree at least three, then the leaf-induced subtrees with three leaves, $S_{3}$ and $F^{2}_{3}$, are both present (see the trees labeled $3$ in Figures 5 and 6, respectively). Hence we have at least $n+1$ nonisomorphic leaf-induced subtrees for $T$.
2. If $T$ is a binary tree, then it must be of height at least three since it has at least five leaves. The trees $C_{2}^{2}$ (see Figure 4) and $F_{4}^{2}$ (see the tree labeled 4 in Figure 6), each with four leaves, are leaf-induced subtrees of $T$. Hence we have at least $n+1$ nonisomorphic leaf-induced subtrees for $T$.

∎

## 2.1 $d$-ary Caterpillars

###### Theorem 2.2.

Let $d\geq 3$ be a fixed positive integer. For the $d$-ary caterpillar $F_{n}^{d}$, we have

$N(F^{d}_{n})=\dfrac{(d-1)^{\frac{n+d-2}{d-1}}-1}{d-2}.$

###### Proof.

Since $F_{n}^{d}$ is a strict $d$-ary tree, it has precisely $(n-1)/(d-1)$ internal vertices, which is also its height. Consider the internal vertex at the highest level, which has $d$ leaves.

1. We can attach from 2 to $d$ leaves to this vertex to get $d-1$ leaf-induced subtrees. Hence, in addition to the one-leaf rooted subtree, we have $1+(d-1)$ leaf-induced subtrees.
2. For each of the $d-1$ leaf-induced subtrees from step 1, we move to the next internal vertex above the vertex considered in step 1 and attach from 1 to $d-1$ leaves to this vertex, giving $(d-1)(d-1)=(d-1)^{2}$ leaf-induced subtrees and resulting in $1+(d-1)+(d-1)^{2}$ leaf-induced subtrees.
3. For each of the $(d-1)^{2}$ leaf-induced subtrees from step 2, we move to the next vertex and attach from 1 to $d-1$ leaves, giving $(d-1)^{2}(d-1)=(d-1)^{3}$ leaf-induced subtrees and yielding $1+(d-1)+(d-1)^{2}+(d-1)^{3}$ leaf-induced subtrees in total.

This process is repeated for the new subtrees formed up the backbone of the tree until we reach the root. The number of times this operation is performed equals the height of the tree. We therefore have

$\displaystyle N(F^{d}_{n})$ $\displaystyle=1+(d-1)+(d-1)^{2}+\cdots+(d-1)^{\frac{n-1}{d-1}}$ $\displaystyle=\dfrac{(d-1)^{\frac{n+d-2}{d-1}}-1}{d-2}.$

∎

## 2.2 Complete $d$-ary Trees

In this section, we develop a recursive formula and an asymptotic formula for $N(C_{h}^{d})$. Lemma 1 below will be used in deriving the asymptotic formula for $N(C_{h}^{d})$.

###### Lemma 1.

Let $d\geq 2$ be a fixed integer and $a_{0},a_{1},a_{2},\ldots,a_{d}$ be real numbers such that $a_{i}\geq 0$ and $a_{d}\neq 0$.
If $(A_{n})_{n\geq 0}$ is a polynomial sequence defined recursively by $A_{0}>0,A_{n}=\sum_{k=0}^{d}a_{k}.A_{n-1}^{k},$ then

(1) $\log(A_{n})=\dfrac{d^{n}-1}{d-1}\log(a_{d})+d^{n}\bigg{(}\log(A_{0})+\sum_{j=0}^{n-1}d^{-1-j}.\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{j}^{k-d}\bigg{)}\bigg{)}$

for every $n\geq 1$. Moreover, if $(A_{n})_{n\geq 0}$ is an increasing sequence and $\lim\limits_{n\rightarrow\infty}A_{n}=\infty$, we also obtain

(2) $\small{A_{n}=(1+o(1))a_{d}^{-\frac{1}{d-1}}\exp\bigg{(}d^{n}\bigg{(}\dfrac{\log(a_{d})}{d-1}+\log(A_{0})+\sum_{j=0}^{\infty}d^{-1-j}\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{j}^{k-d}\bigg{)}\bigg{)}\bigg{)}}$

as $n\rightarrow\infty$.

###### Proof.

To prove (1), consider the recursion

$\displaystyle A_{n}$ $\displaystyle=a_{0}+a_{1}A_{n-1}+a_{2}A_{n-1}^{2}+\cdots+a_{d}A_{n-1}^{d},$ $\displaystyle\dfrac{A_{n}}{A^{d}_{n-1}}$ $\displaystyle=a_{d}+\dfrac{a_{d-1}}{A_{n-1}}+\dfrac{a_{d-2}}{A_{n-1}^{2}}+\cdots+\dfrac{a_{0}}{A^{d}_{n-1}},$ $\displaystyle\log\bigg{(}\dfrac{A_{n}}{A_{n-1}^{d}}\bigg{)}$ $\displaystyle=\log\bigg{(}a_{d}\bigg{(}1+\dfrac{a_{d-1}}{a_{d}A_{n-1}}+\dfrac{a_{d-2}}{a_{d}A_{n-1}^{2}}+\cdots+\dfrac{a_{0}}{a_{d}A^{d}_{n-1}}\bigg{)}\bigg{)},$ $\displaystyle\log A_{n}-d\log A_{n-1}$ $\displaystyle=\log a_{d}+\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{n-1}^{k-d}\bigg{)},$

so that

(3) $\displaystyle\log A_{n}$ $\displaystyle=d\log A_{n-1}+\log a_{d}+\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{n-1}^{k-d}\bigg{)}.$

From (3), we get

(4) $\displaystyle\log A_{n-1}$ $\displaystyle=d\log A_{n-2}+\log a_{d}+\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{n-2}^{k-d}\bigg{)}.$

Substituting (4) into (3), we get

$\displaystyle\log A_{n}$ $\displaystyle=d^{2}\log A_{n-2}+(d+1)\log a_{d}+d\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{n-2}^{k-d}\bigg{)}+\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{n-1}^{k-d}\bigg{)}.$

Clearly, we have

(5) $\log A_{n-j}=d\log A_{n-j-1}+\log a_{d}+\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{n-j-1}^{k-d}\bigg{)}$

for every $j$ such that $n-j>0$. Using (5) to find expressions for $A_{n-j},j=1,2,\ldots,n-1$, and substituting into (3), we get

$\displaystyle\log A_{n}$ $\displaystyle=(d^{n-1}+d^{n-2}+\cdots+d+1)\log a_{d}+d^{n}\log A_{0}+d^{n-1}\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{0}^{k-d}\bigg{)}$ $\displaystyle\quad+\cdots+d\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{n-2}^{k-d}\bigg{)}+\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{n-1}^{k-d}\bigg{)}$ (6) $\displaystyle=\dfrac{d^{n}-1}{d-1}\log a_{d}+d^{n}\log A_{0}+d^{n}\bigg{(}\sum_{j=0}^{n-1}d^{-1-j}\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{j}^{k-d}\bigg{)}\bigg{)}.$

This proves (1). Consider the series

$\displaystyle d^{n}\sum_{j=n}^{\infty}d^{-1-j}\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{j}^{k-d}\bigg{)}.$

Note that we have $k<d$. Recall that $a_{k}/a_{d}\geq 0$.
Since $\sup_{j\geq n}A_{j}^{k-d}=A_{n}^{k-d}$, we have

$\displaystyle 0\leq d^{n}\sum_{j=n}^{\infty}d^{-1-j}\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{j}^{k-d}\bigg{)}$ $\displaystyle\leq d^{n}\sum_{j=n}^{\infty}d^{-1-j}.\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{n}^{k-d}\bigg{)}$ $\displaystyle=\dfrac{1}{d-1}.\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{n}^{k-d}\bigg{)}$

and

$\displaystyle\lim\limits_{n\rightarrow\infty}\bigg{(}\dfrac{1}{d-1}.\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{n}^{k-d}\bigg{)}\bigg{)}$ $\displaystyle=0.$

Hence

$d^{n}\sum_{j=n}^{\infty}d^{-1-j}\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{j}^{k-d}\bigg{)}=o(1).$

Now, let

(7) $R_{n}(d)=\sum_{j=n}^{\infty}d^{-1-j}\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{j}^{k-d}\bigg{)}.$

We rewrite (6) as

$\displaystyle\log A_{n}$ $\displaystyle=\dfrac{d^{n}-1}{d-1}\log a_{d}+d^{n}\log A_{0}+d^{n}\bigg{(}\sum_{j=0}^{n-1}d^{-1-j}\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{j}^{k-d}\bigg{)}+R_{n}(d)-R_{n}(d)\bigg{)}$ $\displaystyle=\dfrac{d^{n}-1}{d-1}\log a_{d}+d^{n}\log A_{0}+d^{n}\bigg{(}\sum_{j=0}^{\infty}d^{-1-j}\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{j}^{k-d}\bigg{)}-R_{n}(d)\bigg{)},$

which holds for all $n\geq 1$. Let

$K(d)=\sum_{j=0}^{\infty}d^{-1-j}\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{j}^{k-d}\bigg{)}.$

Then we have

$\displaystyle\log A_{n}$ $\displaystyle=d^{n}\bigg{(}\log(A_{0})+\dfrac{\log(a_{d})}{d-1}+K(d)\bigg{)}-\dfrac{\log(a_{d})}{d-1}+o(1),$ $\displaystyle A_{n}$ $\displaystyle=\exp\bigg{(}d^{n}\bigg{(}\log(A_{0})+\dfrac{\log(a_{d})}{d-1}+K(d)\bigg{)}-\dfrac{\log(a_{d})}{d-1}+o(1)\bigg{)}$ $\displaystyle=\exp(o(1)).\exp\bigg{(}\log\bigg{(}a_{d}^{\frac{-1}{d-1}}\bigg{)}\bigg{)}.\exp\bigg{(}d^{n}\bigg{(}\log(A_{0})+\dfrac{\log(a_{d})}{d-1}+K(d)\bigg{)}\bigg{)}.$

Using the Taylor series expansion of $\exp(o(1))$, we have

$\displaystyle A_{n}$ $\displaystyle=(1+o(1))a_{d}^{-\frac{1}{d-1}}\exp\bigg{(}d^{n}\bigg{(}\log(A_{0})+\dfrac{\log(a_{d})}{d-1}+K(d)\bigg{)}\bigg{)}$

as $n\rightarrow\infty$. This completes the proof of the lemma. ∎

###### Theorem 2.3.

The number of nonisomorphic leaf-induced subtrees of the complete $d$-ary tree $C_{h}^{d}$ is given by

(8) $N(C_{h}^{d})=-N(C_{h-1}^{d})+\binom{d+N(C_{h-1}^{d})}{d}$

with $N(C_{0}^{d})=1$. Furthermore,

(9) $N(C_{h}^{d})=(1+o(1))(d!)^{\frac{1}{d-1}}\kappa(d)^{d^{h}}$

for some effectively computable constant $\kappa(d)>1$ as $h\rightarrow\infty$.

###### Proof.

Fix $h\geq 1$. Call $v$ the root of $C^{d}_{h}$ and $A_{h}$ the set of all nonisomorphic leaf-induced subtrees of $C_{h}^{d}$. A leaf-induced subtree whose root is $v$ and has root degree $r$ is uniquely determined by a choice of an $r$-element multiset of elements of $A_{h-1}$ (since all $r$ branches lie in $A_{h-1}$). So for every $r\in\\{2,3,\ldots,d\\}$, there are precisely $\binom{N(C^{d}_{h-1})+r-1}{r}$ such subtrees (with root degree $r$). Therefore, taking into consideration the subtree consisting only of a single leaf, and using the fact that the elements of $A_{h-1}$ are already counted this way (every element of $A_{h-1}$ other than the single leaf arises from such a multiset, since $A_{h-2}\subseteq A_{h-1}$), we get

$N(C_{h}^{d})=1+\sum_{r=2}^{d}\binom{N(C^{d}_{h-1})+r-1}{r}$

nonisomorphic leaf-induced subtrees. Let $j=N(C^{d}_{h-1})+r-1$.
We get

$\displaystyle N(C^{d}_{h})$ $\displaystyle=1+\sum_{j=N(C^{d}_{h-1})+1}^{N(C^{d}_{h-1})+d-1}\binom{j}{j+1-N(C^{d}_{h-1})}$ $\displaystyle=1+\sum_{j=N(C^{d}_{h-1})+1}^{N(C^{d}_{h-1})+d-1}\binom{j}{N(C^{d}_{h-1})-1}\quad\text{since $\binom{n}{r}=\binom{n}{n-r}$}$ $\displaystyle=-N(C^{d}_{h-1})+N(C^{d}_{h-1})+1+\sum_{j=N(C^{d}_{h-1})+1}^{N(C^{d}_{h-1})+d-1}\binom{j}{N(C^{d}_{h-1})-1}$ $\displaystyle=-N(C^{d}_{h-1})+\sum_{j=-1+N(C^{d}_{h-1})}^{N(C^{d}_{h-1})+d-1}\binom{j}{N(C^{d}_{h-1})-1}.$

It can be shown by induction on $d$ that

$\sum_{j=-1+N(C^{d}_{h-1})}^{N(C^{d}_{h-1})+d-1}\binom{j}{N(C^{d}_{h-1})-1}=\binom{d+N(C_{h-1}^{d})}{N(C_{h-1}^{d})}.$

Hence we have

$N(C^{d}_{h})=-N(C^{d}_{h-1})+\binom{d+N(C_{h-1}^{d})}{N(C_{h-1}^{d})}=-N(C^{d}_{h-1})+\binom{d+N(C_{h-1}^{d})}{d}.$

Observe that $N(C_{h}^{d})$ increases with $h$, since the nonisomorphic leaf-induced subtrees of $C^{d}_{h}$ include all nonisomorphic leaf-induced subtrees of $C^{d}_{h-1}$.

To prove (9), first set $A_{h}=N(C_{h}^{d})$. We have

$\displaystyle A_{h}$ $\displaystyle=-A_{h-1}+\binom{d+A_{h-1}}{d}$ $\displaystyle=-A_{h-1}+\dfrac{(A_{h-1}+d)!}{(A_{h-1}!)(d!)}$ $\displaystyle=-A_{h-1}+\dfrac{(A_{h-1}+d)(A_{h-1}+d-1)(A_{h-1}+d-2)\cdots(A_{h-1}+2)(A_{h-1}+1)}{d!}$ $\displaystyle=-A_{h-1}+\dfrac{A_{h-1}^{d}+\bigg{(}\sum_{i=1}^{d}i\bigg{)}A_{h-1}^{d-1}+\cdots+\bigg{(}\sum_{i=1}^{d}\prod_{j=1,j\neq i}^{d}j\bigg{)}A_{h-1}+d!}{d!}$ $\displaystyle=\dfrac{-A_{h-1}d!+A_{h-1}^{d}+\bigg{(}\sum_{i=1}^{d}i\bigg{)}A_{h-1}^{d-1}+\cdots+\bigg{(}\sum_{i=2}^{d}\prod_{j=1,j\neq i}^{d}j\bigg{)}A_{h-1}+d!A_{h-1}+d!}{d!}$ $\displaystyle=\dfrac{A_{h-1}^{d}+\bigg{(}\sum_{i=1}^{d}i\bigg{)}A_{h-1}^{d-1}+\cdots+\bigg{(}\sum_{i=2}^{d}\prod_{j=1,j\neq i}^{d}j\bigg{)}A_{h-1}+d!}{d!}.$

By setting $a_{i}$ to be the coefficient of $A_{h-1}^{i}$, $i\in\\{0,1,2,3,\ldots,d\\}$, noting that $A_{0}=1$ and $a_{d}=1/d!$, and using the identity derived in (3), we have

(10) $\displaystyle\log A_{h}$ $\displaystyle=d\log A_{h-1}-\log d!+\log\bigg{(}1+d!\sum_{k=0}^{d-1}a_{k}A_{h-1}^{k-d}\bigg{)}.$

From Lemma 1, we get $N(C_{h}^{d})=(1+o(1))(d!)^{\frac{1}{d-1}}\kappa(d)^{d^{h}},$ where

(11) $\displaystyle\kappa(d)$ $\displaystyle=\exp\bigg{(}-\dfrac{\log d!}{d-1}+K(d)\bigg{)}$

and

(12) $\displaystyle K(d)$ $\displaystyle=\sum_{j=0}^{\infty}d^{-1-j}.\log\bigg{(}1+d!\sum_{k=0}^{d-1}a_{k}N(C^{d}_{j})^{k-d}\bigg{)}.$

From (10), we get

$\displaystyle\log A_{h+1}$ $\displaystyle=d\log A_{h}-\log d!+\log\bigg{(}1+d!\sum_{k=0}^{d-1}a_{k}A_{h}^{k-d}\bigg{)},$

that is,

(13) $\displaystyle\log\bigg{(}\dfrac{d!A_{h+1}}{A_{h}^{d}}\bigg{)}$ $\displaystyle=\log\bigg{(}1+d!\sum_{k=0}^{d-1}a_{k}A_{h}^{k-d}\bigg{)}.$

Recall that $A_{h}=N(C_{h}^{d})$. Substituting (13) into (12), we get

(14) $\displaystyle K(d)$ $\displaystyle=\sum_{j=0}^{\infty}d^{-1-j}.\log\bigg{(}d!.\dfrac{N(C^{d}_{j+1})}{N(C^{d}_{j})^{d}}\bigg{)}.$

We have

$-\dfrac{\log d!}{d-1}+K(d)=-\dfrac{\log d!}{d-1}+\sum_{j=0}^{\infty}d^{-1-j}.\log\bigg{(}d!.\dfrac{N(C^{d}_{j+1})}{N(C^{d}_{j})^{d}}\bigg{)}.$

Every term of this series is positive: the expansion above shows that $d!\,N(C^{d}_{j+1})=N(C^{d}_{j})^{d}+\big{(}\sum_{i=1}^{d}i\big{)}N(C^{d}_{j})^{d-1}+\cdots+d!>N(C^{d}_{j})^{d}$. Moreover, since $N(C^{d}_{0})=1$ and $N(C^{d}_{1})=-1+\binom{d+1}{d}=d$, the term $j=0$ equals $\frac{\log d!+\log d}{d}$. Dropping the remaining (positive) terms therefore gives

$-\dfrac{\log d!}{d-1}+K(d)>-\dfrac{\log d!}{d-1}+\dfrac{\log d!+\log d}{d}=\dfrac{(d-1)\log d-\log d!}{d(d-1)}\geq 0,$

where the last inequality holds because $d^{d-1}\geq d!$ for every $d\geq 2$. Hence $\kappa(d)>1$. ∎
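As a quick numerical check of the recursion (8) and of the constant $\kappa(d)$ in (11), the following standalone script (our illustration, not part of the original computations; TypeScript with BigInt for exact arithmetic) evaluates $N(C^{d}_{h})$ exactly and accumulates the series (14). Up to double-precision rounding, its output agrees with Table 1 below, and for $d=2$ it confirms $N(C^{2}_{h})=\lfloor 2\kappa(2)^{2^{h}}\rfloor$ for small $h\geq 2$.

```typescript
// Exact binomial coefficient: each intermediate quotient is an integer.
function binom(n: bigint, k: bigint): bigint {
  let r = 1n;
  for (let i = 1n; i <= k; i++) r = (r * (n - k + i)) / i;
  return r;
}

function factorial(d: number): bigint {
  let f = 1n;
  for (let i = 2; i <= d; i++) f *= BigInt(i);
  return f;
}

// Recursion (8): N(C^d_h) = -N(C^d_{h-1}) + binom(d + N(C^d_{h-1}), d), N(C^d_0) = 1.
function nextN(prev: bigint, d: number): bigint {
  return binom(prev + BigInt(d), BigInt(d)) - prev;
}

// log(num/den) for positive bigints via a scaled integer division.
function logRatio(num: bigint, den: bigint): number {
  const SCALE = 10n ** 30n;
  return Math.log(Number((num * SCALE) / den) / 1e30);
}

// kappa(d) = exp(-log(d!)/(d-1) + K(d)), where by (14)
// K(d) = sum_{j>=0} d^{-1-j} log( d! N(C^d_{j+1}) / N(C^d_j)^d ).
function kappa(d: number): number {
  const dF = factorial(d);
  let K = 0;
  let N = 1n; // N(C^d_0)
  for (let j = 0; j < 40; j++) {
    const next = nextN(N, d);
    const term = Math.pow(d, -1 - j) * logRatio(dF * next, N ** BigInt(d));
    K += term;
    N = next;
    if (term < 1e-18) break; // the terms decay doubly exponentially
  }
  return Math.exp(-Math.log(Number(dF)) / (d - 1) + K);
}

// For d = 2 this prints N(C^2_h) = 2, 4, 11, 67, 2279 and kappa(2) ~ 1.2460208...,
// matching floor(2 * kappa(2)^(2^h)) for h >= 2.
let Nh = 1n;
for (let h = 1; h <= 5; h++) { Nh = nextN(Nh, 2); console.log(`N(C^2_${h}) = ${Nh}`); }
console.log(`kappa(2) = ${kappa(2)}`);
```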
The value of $\kappa(d)$ derived in (11) can be numerically evaluated for every $d$. Table 1 gives the values of $\kappa(d)$ for $d\in\\{2,3,\ldots,10\\}$.

Table 1. Approximated values of $\kappa(d)$.

$d$ | $\kappa(d)$
---|---
2 | 1.246020832983661
3 | 1.254860390384554
4 | 1.2189114976086313
5 | 1.1888457507131132
6 | 1.165394603276801
7 | 1.1469724134908297
8 | 1.1322182196849957
9 | 1.1201639471936817
10 | 1.1101387293827483

From Table 1, when $d=2$, we get $N(C_{h}^{2})\sim 2(1.246020832983661)^{2^{h}}$ as $h\rightarrow\infty$. Propositions 2.4 and 2.5 below give further results about $\kappa(d)$.

###### Proposition 2.4.

The sequence $(\kappa(d))_{d\geq 2}$ converges to 1 as $d\rightarrow\infty$.

###### Proof.

From equations (11) and (12), we have

(15) $\log(\kappa(d))=-\dfrac{\log d!}{d-1}+\dfrac{\log d!+\log(d)}{d}+\sum_{j\geq 1}\dfrac{\log d!+\log(N(C^{d}_{j+1}))-d.\log(N(C^{d}_{j}))}{d^{j+1}}.$

From (8), we can write

$\displaystyle N(C^{d}_{j+1})$ $\displaystyle=-N(C^{d}_{j})+\binom{d+N(C^{d}_{j})}{d}=\sum_{k=0}^{d}a_{k}.N(C^{d}_{j})^{k}$ $\displaystyle\leq N(C^{d}_{j})^{d}\sum_{k=0}^{d}a_{k},$

so that

(16) $\dfrac{N(C^{d}_{j+1})}{N(C^{d}_{j})^{d}}\leq\sum_{k=0}^{d}a_{k}=d.$

Substituting (16) into (15), we obtain

$\displaystyle 0\leq\log(\kappa(d))$ $\displaystyle\leq-\dfrac{\log d!}{d-1}+\dfrac{\log d!+\log(d)}{d}+\sum_{j\geq 1}\dfrac{\log d!+\log(d)}{d^{j+1}}$ $\displaystyle=-\dfrac{\log d!}{d-1}+\dfrac{\log d!+\log(d)}{d}+(\log d!+\log(d))\sum_{j\geq 1}\dfrac{1}{d^{j+1}}$ $\displaystyle=-\dfrac{\log d!}{d-1}+\dfrac{\log d!+\log(d)}{d}+\dfrac{\log d!+\log(d)}{d(d-1)}$ $\displaystyle=\dfrac{\log d}{d-1}.$

It is clear that $\lim\limits_{d\rightarrow\infty}\dfrac{\log d}{d-1}=0$. Hence $\kappa(d)\rightarrow 1$ as $d\rightarrow\infty$. ∎

###### Proposition 2.5.

Given $d\geq 2$, there exists a positive integer $H\geq 2$ such that $N(C^{d}_{h})=\big{\lfloor}d!^{\frac{1}{d-1}}\kappa(d)^{d^{h}}\big{\rfloor}$ for every $h\geq H$.

###### Proof.

From (7), we can show that the relation

$0<R_{n}(d)\leq\dfrac{d^{-n}}{d-1}.\log\bigg{(}1+\sum_{k=0}^{d-1}\dfrac{a_{k}}{a_{d}}.A_{n}^{k-d}\bigg{)}$

holds for every $d\geq 2$ and every $n\geq 1$.
We deduce the double inequality

(17) $\displaystyle d^{n}\bigg(\sum_{j=0}^{\infty}d^{-1-j}\log\bigg(1+d!\sum_{k=0}^{d-1}a_{k}A_{j}^{k-d}\bigg)-\dfrac{d^{-n}}{d-1}\,d!\sum^{d-1}_{k=0}a_{k}A_{n}^{k-d}\bigg)-\dfrac{d^{n}-1}{d-1}\log d!\leq\log(A_{n})\leq d^{n}\bigg(\sum_{j=0}^{\infty}d^{-1-j}\log\bigg(1+d!\sum_{k=0}^{d-1}a_{k}A_{j}^{k-d}\bigg)\bigg)-\dfrac{d^{n}-1}{d-1}\log d!.$

From (17), we obtain

(18) $\displaystyle d^{n}\bigg(\sum_{j=0}^{\infty}d^{-1-j}\log\bigg(1+d!\sum_{k=0}^{d-1}a_{k}A_{j}^{k-d}\bigg)\bigg)-\dfrac{d!}{d-1}A_{n}^{-1}\sum^{d-1}_{k=0}a_{k}-\dfrac{d^{n}}{d-1}\log d!+\dfrac{1}{d-1}\log d!\leq\log(A_{n})\leq d^{n}\bigg(\sum_{j=0}^{\infty}d^{-1-j}\log\bigg(1+d!\sum_{k=0}^{d-1}a_{k}A_{j}^{k-d}\bigg)\bigg)-\dfrac{d^{n}}{d-1}\log d!+\dfrac{1}{d-1}\log d!.$

Given that $\sum^{d-1}_{k=0}a_{k}=\dfrac{d\,d!-1}{d!}$ and substituting (14) into (18), we obtain

$\displaystyle d^{n}\bigg(-\dfrac{\log d!}{d-1}+K(d)\bigg)-\dfrac{d!}{d-1}A_{n}^{-1}\dfrac{d\,d!-1}{d!}+\log d!^{\frac{1}{d-1}}\leq\log(A_{n})\leq d^{n}\bigg(-\dfrac{\log d!}{d-1}+K(d)\bigg)+\log d!^{\frac{1}{d-1}},$

that is,

$\displaystyle d!^{\frac{1}{d-1}}\kappa(d)^{d^{n}}\exp\bigg(-\dfrac{d!}{d-1}A_{n}^{-1}\dfrac{d\,d!-1}{d!}\bigg)\leq A_{n}\leq d!^{\frac{1}{d-1}}\kappa(d)^{d^{n}}.$

Replacing $A_{n}$ with $N(C_{h}^{d})$ and using the basic inequality $e^{t}\geq 1+t$, we obtain

(19) $d!^{\frac{1}{d-1}}\kappa(d)^{d^{h}}-\dfrac{d!^{\frac{d}{d-1}}}{d-1}\cdot\dfrac{\kappa(d)^{d^{h}}}{N(C^{d}_{h})}\bigg(d-\dfrac{1}{d!}\bigg)\leq N(C^{d}_{h})\leq d!^{\frac{1}{d-1}}\kappa(d)^{d^{h}}.$

We substitute $N(C_{h}^{d})$ on the left-hand side of (19) with

$d!^{\frac{1}{d-1}}\kappa(d)^{d^{h}}-\dfrac{d!^{\frac{d}{d-1}}}{d-1}\cdot\dfrac{\kappa(d)^{d^{h}}}{N(C^{d}_{h})}\bigg(d-\dfrac{1}{d!}\bigg).$

Iterating this substitution process a (finite) number of times leads to

$d!^{\frac{1}{d-1}}\kappa(d)^{d^{h}}-\delta(d)\leq N(C^{d}_{h})\leq d!^{\frac{1}{d-1}}\kappa(d)^{d^{h}}$

for some constant $0<\delta(d)<1$. An immediate consequence of this double inequality is that $N(C^{d}_{h})=\big\lfloor d!^{\frac{1}{d-1}}\kappa(d)^{d^{h}}\big\rfloor$ holds for all $h\geq H$ for a certain $H$. ∎

For $d=2$, the previous condition $A_{h}>(d\,d!-1)(d-1)$ becomes $A_{h}>3$, which means that $N(C^{2}_{h})\geq 4$, hence $H=2$. We deduce that

$N(C^{2}_{h})=\lfloor 2(1.2460208329836625089431529441999359284665241772983812581\ldots)^{2^{h}}\rfloor$

for every $h\geq 2$.
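As a quick numerical check of Proposition 2.5, the recurrence (8) can be compared against the floor formula for $d=2$, reusing the `kappa` sketch above (double precision only supports modest $h$, since the exponent $2^{h}$ amplifies rounding error):

```python
from math import comb, floor

k2 = kappa(2)                  # d!^(1/(d-1)) = 2 when d = 2
A = 1                          # A_0 = N(C_0^2)
for h in range(1, 7):
    A = -A + comb(2 + A, 2)    # N(C_h^2) via recurrence (8)
    print(h, A, floor(2 * k2 ** (2 ** h)))
# the two columns agree from h = 2 onward (h = 1 gives 2 vs 3), matching H = 2
```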
# Restoring Images Captured in Arbitrary Hybrid Adverse Weather Conditions in One Go Ye-Cong Wan, Ming-Wen Shao, Yuan-Shuo Cheng, Yue-Xian Liu, Zhi-Yuan Bao, and De-Yu Meng Y.-C. Wan, M.-W. Shao, Y.-S. Cheng, Y.-X. Liu, and Z.-Y. Bao are with the College of Computer Science and Technology, China University of Petroleum, China. D.-Y. Meng is with the Institute for Information and System Sciences, Xi'an Jiaotong University, China.

###### Abstract

Adverse conditions typically involve stochastic hybrid weather degradations (e.g., a rainy and hazy night), whereas existing image restoration algorithms assume that weather degradations occur independently and may thus fail to handle complicated real-world scenarios. Besides, supervised training is not feasible due to the lack of a comprehensive paired dataset characterizing hybrid conditions. To this end, we address the aforementioned limitations with two tactics: framework and data. On the one hand, we present a novel unified framework, dubbed RAHC, to Restore Arbitrary Hybrid adverse weather Conditions in one go. RAHC can comfortably cope with hybrid scenarios in which few background constituents remain, and can flexibly restore arbitrary hybrid conditions with a single trained model. On the other hand, we establish a new dataset, termed HAC, for learning and benchmarking arbitrary Hybrid Adverse Conditions restoration. HAC contains 31 scenarios composed of arbitrary combinations of five common weather types, with a total of $\sim\!316K$ adverse-weather/clean pairs. As for fabrication, the training set is automatically generated by a dedicated AdverseGAN with negligible labor cost, while the test set is manually synthesized by experts for authoritative evaluation. Extensive experiments yield superior results and, in particular, establish new state-of-the-art results on both HAC and conventional datasets.

###### Index Terms: Hybrid weather conditions, Unified framework, Data generation.

## 1 Introduction

In real-world adverse scenarios, different weather degradations often occur simultaneously in an uncertain way, e.g., a snowy night or a rainy night. Images captured under these conditions inevitably incur abysmal visibility and corrupted feature remnants, and the stochastic hybrid disruption may dramatically hamper outdoor high-level vision systems such as autonomous driving systems [1, 2, 3] and surveillance systems [4]. Unfortunately, prominent algorithms [5, 6, 7, 8] deal with each weather degradation individually, ignoring the hybrid degradation produced by their combined action. Specifically, early researchers focused on developing task-specific restoration algorithms for given weather conditions [9, 10, 11, 12, 13, 14] (Fig. 1(a)). Afterwards, various generic networks were proposed to handle different degradations with an identical architecture [15, 16, 17, 18, 19] (Fig. 1(b)), but with different weights for different tasks. To address this issue, recent research aims at restoring degradation under multiple adverse conditions with a single unified model [20, 21, 22, 23, 24] (Fig. 1(c)). While combined rain and haze scenes have been evaluated [21, 23, 24], more hybrid conditions and more complex hybrid combinations remain untapped by these methods, especially when more than two weather degradations are mixed together.
Although a few task-specific works [25, 26, 27] have partly explored restoring superimposed two-weather scenarios, they were designed for specialized combinations (e.g., rain streak and raindrop) and cannot be extended to the other diverse hybrid scenarios of the real world. To deal with real-world scenarios more flexibly and practically, a turnkey solution that can restore arbitrary hybrid adverse conditions in one go is urgently needed. Measured against this goal, existing works have the following limitations. In terms of framework, (i) existing restoration networks are limited in characterizing hybrid multiple weather degradations simultaneously due to the lack of a multi-subspace feature extraction mechanism; (ii) models used for single degradation removal are restricted in restoring hybrid adverse weather conditions in which few background constituents remain; (iii) previous unified learning strategies are designed for non-overlapping degradations and are hard to generalize to diverse hybrid scenarios. In terms of data, (i) existing datasets only cover separate single or special double weather degradations without various hybrid conditions; (ii) collecting real-world paired data is virtually infeasible due to uncontrollable factors, e.g., wind, lighting, and camera shifts; (iii) manual synthesis of massive paired data with hybrid adverse weather conditions tends to be time-consuming and labor-intensive. With the above observations, two questions need to be answered: (1) How to construct a unified framework that can flexibly tackle diverse real-world hybrid scenarios with a single trained model? (2) How to effectively establish a comprehensive paired dataset that covers an exhaustive list of all possible real-world adverse weather conditions?

Figure 1: Overview of adverse conditions restoration frameworks. (a) Separate networks designed for specific tasks; (b) generic networks with task-specific weights; (c) unified all-in-one networks with single trained weights; (d) our proposed RAHC framework. In contrast to existing approaches that tackle conditions with a single weather type, RAHC can handle arbitrary hybrid adverse weather conditions in one go, thus enjoying better flexibility and practicality in realistic applications.

To tackle the aforementioned problems, in this paper, we propose a novel unified framework RAHC and a new hybrid dataset HAC to restore arbitrary hybrid adverse weather conditions in one go (Fig. 1(d)). The framework RAHC contains three innovations that defeat the above limitations. (1) Multi-Head Blend Block (MHBB) for multi-weather degradation representation: the multi-head mechanism overriding the blended operator of convolution and attention provides multiple "representation subspaces" [28] as well as complementary features for multi-weather learning. (2) Reconstruction vectors aided restoration for hybrid conditions with limited image constituents retention: we propose exploiting discrete representations from a Codebook [29] pre-trained on large-scale natural images. These discrete representations, which we refer to as reconstruction vectors, provide additional visual content cues that aid the reconstruction of realistic and clean output.
(3) Output space discrimination for efficient arbitrary hybrid conditions restoration: we design a simple multilabel-classification discriminator on the output space to encourage the restoration network to learn degradation-agnostic repair capabilities, which enables RAHC to flexibly handle diverse hybrid scenarios without requiring complex strategies or modules. Notably, this protocol can be seamlessly integrated into existing universal image restoration algorithms in a non-destructive manner, enhancing their performance in the all-in-one multi-weather removal setting. As for the construction of the hybrid adverse weather conditions dataset HAC, it provides $\sim\!316K$ pairs covering the $2^{5}-1=31$ adverse conditions obtained by arbitrarily combining five common weather types (namely haze, rain streak, snow, night, and raindrop), excluding the clean case. To synthesize sufficient and diverse pairwise data efficiently and at low cost, we develop a powerful generator, AdverseGAN, which learns to approximate the degradations implicitly rather than relying on expensive manual labeling. Thus, the training set can be automatically generated by AdverseGAN with minimal labor cost. On the contrary, the test set is meticulously handcrafted by recruited experts for authoritative evaluation. The domain gap between the training and test sets allows for a better evaluation of generalization ability, which is critical for real-world applications, especially when real data are infeasible to collect. Comprehensive experimental results substantiate the superiority of RAHC over state-of-the-art restoration methods on both HAC and conventional datasets. In conclusion, the main contributions are summarized as follows:

* • We propose a novel framework, dubbed RAHC, to restore diverse hybrid adverse weather conditions while enjoying the properties of being concise, flexible, and powerful.
* • We present a multi-head blend block to provide multiple representation subspaces for multi-degradation learning. Meanwhile, a reconstruction vectors aided restoration scheme is devised for severely deteriorated hybrid adverse conditions, as well as an output space discrimination regime for unified learning of diverse hybrid degradations.
* • A new synthetic dataset HAC for arbitrary hybrid adverse weather conditions restoration is constructed. To the best of our knowledge, HAC encompasses the widest range of scenarios and provides the most comprehensive benchmark for this task.
* • Solid and promising experimental results on HAC and conventional datasets demonstrate the effectiveness, superiority, and robustness of our proposed RAHC.

## 2 Related Work

### 2.1 Adverse Weather Conditions Restoration

Numerous algorithms have been proposed to recover images captured in adverse weather conditions, e.g., rain [12, 30], haze [31, 32, 33], and snow [34, 9]. While these methods perform well on their specific weather type, significant performance degradation is observed when migrating to other tasks. Aiming at this limitation, a broad spectrum of research [35, 15, 16, 36, 17] has explored generic frameworks that accommodate different degradation types with an identical network. Even so, they still require separately trained independent weights for each task, hindering their generalizability. To repair multiple degradations in an all-in-one fashion, several unified models have been proposed [20, 21, 22, 23, 24].
However, these approaches ignore that real-world adverse conditions often suffer from multiple superimposed degradations [25], e.g., rain streaks and raindrops [26], or rain and nighttime [27]. Although several methods [21, 23, 24] include the superposition of two weather types (rain and fog) at test time, they suffer from relatively limited image constituents retention as well as an inadequate multi-degradation characterization mechanism when more than two degradations occur in the hybrid condition. Meanwhile, the unified learning strategies of these methods are constrained in modeling hybrid multiple degradations simultaneously. In contrast, this paper considers adverse scenarios with an arbitrary hybrid of five weather types, for a total of 31 conditions, and RAHC can cope with all of them in one go, realizing arbitrary hybrid adverse conditions restoration in a broad sense.

### 2.2 Pseudo Data Generation

Data-driven vision tasks rely heavily on high-quality datasets. Unfortunately, labeling and acquiring data tend to be expensive, challenging, and time-consuming. To surmount this bottleneck, several recent efforts [37, 38, 39, 40] delve into pseudo-data generation by leveraging Generative Adversarial Networks (GANs). For instance, Zhang _et al._ [37] proposed DatasetGAN to synthesize highly realistic images with pixel-wise annotations; models trained on the synthetic dataset can even surpass those trained on the real dataset. Yang _et al._ [38] released SurfelGAN, which facilitates the training of autonomous driving models by simulating realistic road scenarios. In addition to high-level tasks, this research boom has also been conveyed to low-level tasks. Wang _et al._ [39] exceeded the state of the art using only $0.17\%$ of the original data together with synthesized pseudo-data. Yue _et al._ [41] introduced a dual adversarial network to simultaneously tackle noise removal and noise generation. Inspired by CycleGAN [42], Wei _et al._ [43] generated a pseudo-paired dataset, Rain200A, by cycle translation, and models trained on it exhibit better robustness and generalization. Inspired by these pioneering works, we pursue the efficient and inexpensive synthesis of paired training data by utilizing a deliberately designed GAN.

### 2.3 Perceptual Image Compression

Tremendous success has been witnessed in two-stage image generation [29, 44, 45, 46, 47] based on perceptual image compression. These works compress images into discrete latent vectors in the first stage and synthesize high-quality images leveraging the encoded vectors in the second stage. In this paper, we focus on the perceptual image compression of the first stage. VQVAE [46] first presented an auto-encoder model that implements multi-scale quantization of images. Based on VQVAE, VQGAN [29] introduces adversarial and perceptual objectives to obtain higher compression rates while preserving satisfactory perceptual quality. Furthermore, LDM [47] explores different compression rates and different kinds of regularization, deriving a compression model that is more adept at preserving details. Intrigued by the extensive image priors contained in the discrete latent vectors of image compression, we advocate utilizing these auxiliary priors to guide restoration, since the discrete vectors comprise context-rich visual parts.
### 2.4 Unsupervised Domain Adaptation (UDA)

UDA algorithms [48, 49, 50, 51, 52, 53, 54] for semantic segmentation aim to constrain a model trained on the source domain to learn domain-invariant features, thereby generalizing to the target domain. One category of these studies [51, 52, 53, 54] is devoted to training a discriminator to distinguish whether the output results come from the source or the target domain, while the segmentation network learns domain-invariant knowledge to confuse the discriminator. Motivated by these methods, RAHC treats the unified restoration of different degradations as a domain adaptation problem: no matter which degradation an image suffers, the restored result should be a degradation-agnostic high-quality clean image. As will be demonstrated, fooling a degradation-type classifier is more straightforward and concise for degradation-agnostic feature learning than sophisticated units and training strategies.

## 3 Methodology

We first present an overview of our proposed tactics for arbitrary hybrid adverse weather conditions restoration in Sec. 3.1. Then we introduce the restoration network and its core elements in Sec. 3.2, and describe the output space discriminative learning scheme in Sec. 3.3. Finally, we discuss and detail our adverse conditions generator AdverseGAN in Sec. 3.4.1 and the established HAC dataset in Sec. 3.4.2.

### 3.1 Overview of the Proposed Method

Figure 2: Illustration of the proposed RAHC architecture. The restoration network consists of an encoder, a decoder, and a feature mapping network. The encoder and decoder are both cascades of multiple MHBBs. The mapping network first maps the encoded feature to the latent clean space and then locates the reconstruction vectors in the pre-established Codebook by nearest neighbor matching to provide privileged visual cues for the decoder. To enable degradation-agnostic learning, we utilize a discriminator to distinguish the type of weather degradation from the restored image, while the restoration network struggles to fool the discriminator. Schematic diagrams of the Multi-Head Blend Block (MHBB), the Convolution-Attention Module (CAM), and the Dual-Path Feed-Forward Network (DP-FFN) are shown in the colored dashed boxes.

The training procedure of RAHC is illustrated in Fig. 2. RAHC aims to tackle arbitrary hybrid adverse conditions restoration via a unified framework; the central notion is to generate adequate paired data using AdverseGAN with minimal labor cost, and to train a degradation-agnostic restoration network through a discriminative learning scheme. Simultaneously, the framework incorporates visual constituents embedded in the Codebook to provide auxiliary cues for restoring highly challenging hybrid conditions. Formally, given an adverse condition $D\in\mathbb{R}^{H\times W\times 3}$ generated by AdverseGAN and the corresponding clean counterpart $C\in\mathbb{R}^{H\times W\times 3}$, the degraded image $D$ is first fed into the restoration network to produce the restored clean image $R\in\mathbb{R}^{H\times W\times 3}$. Meanwhile, the feature mapping network learns the projection from the encoded feature to the corresponding clean embedding in order to locate the reconstruction vectors in the Codebook, supplying additional auxiliary visual atoms for restoration. Finally, the restored result is input into the discriminator to distinguish the type of degradation, while the restoration network tries to restore degradation-agnostic images to confuse the discriminator.
The whole procedure is straightforward, and besides the restoration network there is no extra inference cost or modification during testing.

### 3.2 Network Architecture

The restoration network adheres to a U-shaped structure, which hierarchically cascades multiple tailored multi-head blend blocks, and the knowledge domain is broadened at the bottleneck with the support of reconstruction vectors, leading to better repair results. Mathematically, given a degraded image $D\in\mathbb{R}^{H\times W\times 3}$, a $3\times 3$ convolution is first adopted to produce shallow feature embeddings $F_{in}\in\mathbb{R}^{H\times W\times C}$. Then, $F_{in}$ is propagated through four encoder layers built upon MHBBs to obtain the deep feature $F_{mi}\in\mathbb{R}^{\frac{H}{8}\times\frac{W}{8}\times 8C}$. Thereafter, $F_{mi}$ is transformed by the feature mapping network to locate, in the Codebook, the reconstruction vectors $F_{rv}\in\mathbb{R}^{\frac{H}{8}\times\frac{W}{8}\times N_{z}}$ most likely to reconstruct the hidden clean image, where $N_{z}$ denotes the dimension of a reconstruction vector. Eventually, $F_{mi}$ and $F_{rv}$ are concatenated together and fed into the symmetric decoder, which recovers the final result $R\in\mathbb{R}^{H\times W\times 3}$ via a $3\times 3$ convolution. Next, we describe the core components of the proposed restoration network.

_Multi-Head Blend Block (MHBB)._ Existing feature extraction modules lack a multi-degradation representation mechanism capturing the characteristics of hybrid multiple weather, leading to inadequate feature modeling. Besides, the complementary properties of convolution, with its strong local computational capabilities, and the Transformer, which excels at capturing long-range dependencies, make a hybrid structure a better alternative for feature extraction [55, 56]. To this end, we propose a Multi-Head Blend Block (MHBB) to provide multiple "representation subspaces" as well as complementary features for multi-weather learning. Fig. 2 illustrates the two core units (CAM and DP-FFN) of MHBB. Rather than simply combining Transformer and convolution in parallel or in series, we treat convolution and self-attention as equivalent micro-level operators and deploy them in parallel. By applying a multi-head mechanism that overrides the blended parallel operator, the proposed block provides multi-degradation representation subspaces for unified learning. More concretely, an input feature $X$ is first divided into "heads", resembling the vanilla Transformer [28]. The multi-head design allows separate branches to learn different representations and thus to adaptively extract different degradation cues, guaranteeing the ability of diverse restoration. Then, each head is transformed by a CAM. CAM contains two branches, the attention path and the convolution path, which are split and merged for parallel processing with the addition of $1\times 1$ convolutions. To reduce the high computational cost of vanilla self-attention while preserving its global computation property, we adopt the pixel-shuffle [57] operator in the attention path to diminish the number of tokens while avoiding information loss. The convolution path, on the other hand, consists of two convolutional layers and a GELU activation function.
We formulate such a process as:

(1) $\displaystyle X_{k}^{a},X_{k}^{b}=Split(Norm(Conv_{1\times 1}(X_{k}))),$ $\displaystyle\hat{X_{k}^{a}}=PS(Self\text{-}Attention(PU(X_{k}^{a}))),$ $\displaystyle\hat{X_{k}^{b}}=Conv_{3\times 3}(GELU(Conv_{3\times 3}(X_{k}^{b}))),$ $\displaystyle\hat{X_{k}}=Conv_{1\times 1}(Cat(\hat{X_{k}^{a}},\hat{X_{k}^{b}}))+X_{k},$

where $k$ indexes the $k$-th head, and $PU$ and $PS$ denote the pixel-unshuffle and pixel-shuffle operations. Subsequently, the results of the different heads are integrated with a stacked $1\times 1$ convolution. Locally relevant information is crucial for image restoration, while the original Feed-Forward Network (FFN) is insensitive and inept at this demand since it processes each token independently without considering the relationships between tokens. Considering that the convolution operation can capture local contextual information by sharing weights over neighboring pixels, we present a Dual-Path FFN (DP-FFN) that compensates for this limitation by introducing a convolutional branch parallel to the fully connected layer. The process of DP-FFN is formulated as:

(2) $\displaystyle\hat{Y_{1}},\hat{Y_{2}}=Split(Norm(\hat{Y})),$ $\displaystyle Y_{1}=GELU(Linear(\hat{Y_{1}})),$ $\displaystyle Y_{2}=GELU(DConv_{3\times 3}(Conv_{1\times 1}(\hat{Y_{2}}))),$ $\displaystyle Y=Conv_{1\times 1}(Cat(Y_{1},Y_{2}))+\hat{Y},$

where $DConv_{3\times 3}$ represents a $3\times 3$ depthwise convolution whose group number equals the channel dimension. Overall, the MHBB process can be expressed as:

(3) $\displaystyle\hat{Y}=Cat(CAM_{1}(X_{1}),CAM_{2}(X_{2}),\ldots,CAM_{k}(X_{k})),$ $\displaystyle Y=DP\text{-}FFN(Conv_{1\times 1}(\hat{Y})).$
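To make the dual-path design concrete, below is a minimal PyTorch sketch of DP-FFN following Eq. (2); the normalization placement, even channel split, and class name are our assumptions rather than the official implementation:

```python
import torch
import torch.nn as nn

class DPFFN(nn.Module):
    """Dual-Path Feed-Forward Network, a sketch of Eq. (2). One path is a
    token-wise linear branch; the other is a 1x1 conv followed by a 3x3
    depthwise conv that captures local context. Assumes an even channel count."""
    def __init__(self, dim: int):
        super().__init__()
        half = dim // 2
        self.norm = nn.LayerNorm(dim)
        self.linear = nn.Linear(half, half)
        self.local = nn.Sequential(
            nn.Conv2d(half, half, kernel_size=1),
            nn.Conv2d(half, half, kernel_size=3, padding=1, groups=half),  # depthwise
        )
        self.fuse = nn.Conv2d(dim, dim, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, y: torch.Tensor) -> torch.Tensor:   # y: (B, C, H, W)
        z = self.norm(y.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        y1, y2 = z.chunk(2, dim=1)                         # channel-wise Split
        y1 = self.act(self.linear(y1.permute(0, 2, 3, 1))).permute(0, 3, 1, 2)
        y2 = self.act(self.local(y2))
        return self.fuse(torch.cat([y1, y2], dim=1)) + y   # residual as in Eq. (2)
```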
_Reconstruction Vectors Aided Restoration._ Existing image restoration algorithms attempt to recover the degraded image from the remaining ambiguous background content. In this case the available features are limited, especially under hybrid conditions, and it is extremely challenging to recover high-quality images from such insufficient information. We draw inspiration from two-stage image generation models, which build a Codebook with rich context in the first stage and then generate images leveraging the encoded discrete vectors in the second stage. We utilize the context-rich vectors embedded in the Codebook to assist the network in repairing hybrid degraded images, which extends the knowledge domain of restoration from a single image to the entire vector repository. Benefiting from the information-rich image components contained in the reconstruction vectors, the restoration network can better reconstruct high-quality images.

Figure 3: Brief illustration of the VQGAN model [29].

Figure 4: Detailed illustration of the reconstruction vectors aided scheme.

We first briefly describe the VQGAN model, proposed by Esser _et al._ [29] for two-stage image generation. VQGAN consists of an encoder E, a codebook Q with discrete codes, and a decoder G. Given an input image $x\in\mathbb{R}^{H\times W\times 3}$, the encoder E first embeds $x$ into a latent representation $Z\in\mathbb{R}^{\frac{H}{f}\times\frac{W}{f}\times N_{z}}$, where $f$ and $N_{z}$ are the downsampling factor and the dimension of the latent vectors, respectively. Then, a quantization $q(\cdot)$ is performed on each representation vector $z\in\mathbb{R}^{N_{z}}$ to locate the most similar discrete vector $q_{i}$ in the Codebook:

(4) $\displaystyle Z_{q}=q(Z):=\Big(\operatorname*{arg\,min}_{q_{i}\in Q}\parallel z-q_{i}\parallel\Big)\in\mathbb{R}^{\frac{H}{f}\times\frac{W}{f}\times N_{z}}.$

Finally, the decoder G reconstructs the output image $\hat{x}$ from the quantized representation $Z_{q}$. The whole process of encoding, quantizing, and reconstructing can be expressed as:

(5) $\displaystyle\hat{x}=G(Z_{q})=G(q(E(x))).$

With this background on VQGAN, the innovation of this paper can be better understood. Specifically, we utilize the VQGAN [29] pre-trained on OpenImages [58] with 8192 quantization encodings as the library of reconstruction vectors. A schematic illustration of this scheme is depicted in Fig. 4. The VQGAN encoder first produces the quantized encoding $F_{rv}^{c}\in\mathbb{R}^{\frac{H}{8}\times\frac{W}{8}\times N_{z}}$ of the clean image, and, for the degraded image feature $F_{mi}$ encoded by the restoration network encoder, the mapping network learns to predict an embedding $F_{rv}^{d}$ consistent with $F_{rv}^{c}$. Attention and convolution layers are cascaded to construct the mapping network, which is optimized by the following cosine similarity loss:

(6) $\displaystyle\mathcal{L}_{map}=\sum_{i=0}^{\frac{W}{8}}\sum_{j=0}^{\frac{H}{8}}\bigg[\frac{1-\cos(F_{rv}^{d}(i,j),F_{rv}^{c}(i,j))}{\frac{W}{8}\times\frac{H}{8}}\bigg].$

We then obtain $F_{rv}$ by a subsequent nearest neighbor matching $N\!N\!M(\cdot)$ of each spatial encoding $F_{rv}^{d}(i,j)\in\mathbb{R}^{N_{z}}$ of $F_{rv}^{d}\in\mathbb{R}^{\frac{H}{8}\times\frac{W}{8}\times N_{z}}$ onto its closest reconstruction vector $rv_{k}$ in the Codebook. RAHC approximates the quantized reconstruction vectors with a mapping network rather than predicting logits directly. The advantages are mainly twofold: (1) the transform dimension of the feature mapping equals the length of a quantized vector (256), while the dimension of direct prediction is the size of the whole Codebook (8192), so our mapping network is more lightweight; (2) compared with direct classification to predict the reconstruction vectors, feature mapping followed by quantization is less challenging and yields more accurate results (see Sec. 4.5.3).
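For clarity, here is a minimal PyTorch sketch of the nearest neighbor matching $N\!N\!M(\cdot)$ from Eq. (4) and the mapping loss of Eq. (6); tensor layouts and function names are our assumptions:

```python
import torch
import torch.nn.functional as F

def nearest_neighbor_match(f_rv_d: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Quantize predicted embeddings onto the Codebook, cf. Eq. (4).
    f_rv_d:   (B, H/8, W/8, N_z) embeddings predicted by the mapping network
    codebook: (K, N_z) reconstruction vectors, e.g. K = 8192, N_z = 256
    returns:  (B, H/8, W/8, N_z) quantized reconstruction vectors F_rv"""
    flat = f_rv_d.reshape(-1, f_rv_d.shape[-1])
    # squared L2 distance to every entry: |z|^2 - 2 z.q + |q|^2
    d2 = (flat.pow(2).sum(1, keepdim=True)
          - 2 * flat @ codebook.t()
          + codebook.pow(2).sum(1))
    idx = d2.argmin(dim=1)                       # nearest Codebook entry per position
    return codebook[idx].reshape(f_rv_d.shape)

def mapping_loss(f_rv_d: torch.Tensor, f_rv_c: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity objective of Eq. (6), averaged over spatial positions."""
    return (1 - F.cosine_similarity(f_rv_d, f_rv_c, dim=-1)).mean()
```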
### 3.3 Output Space Discrimination for Efficient Arbitrary Hybrid Conditions Restoration

Figure 5: Detailed illustration of the output space discriminative learning scheme.

Existing all-in-one restoration approaches rely on distillation [20], degradation guidance [22], or querying [23] to gain knowledge of different degradations. Although these methods achieve excellent performance on non-overlapping degradations, they are limited in modeling the joint characterization of hybrid multiple degradations. Our intuition is that regardless of the degradation types an image has suffered, the restored result should be a degradation-agnostic high-quality clean image. Thus, we innovatively treat the unified learning of multiple degradations as a domain adaptation problem and cultivate a degradation-agnostic restoration network via a discriminative adversarial learning scheme. In contrast to Li _et al._ [21], who strive to preserve degradation cues in order to train multiple feature extractors, we instead constrain the restoration network to yield consistent degradation-agnostic ideal images. Based on the concept of adversarial gaming, the discriminator aims to identify from the restored image which degradations it suffered, while the restoration network endeavors to yield consistent, untraceable images to confuse the discriminator. The detailed scheme is depicted in Fig. 5.

_Discriminator Training._ Given the restoration output $R=Network(D)$, we forward $R$ to the discriminator $Dis$ to distinguish the types of weather degradation from the restored image. The training of the discriminator can be regarded as a multilabel classification task, and the cross-entropy objective is defined as:

(7) $\displaystyle\mathcal{L}_{d}=\sum_{i=0}^{n-1}[-t_{i}\log Dis(R)_{i}-(1-t_{i})\log(1-Dis(R)_{i})],$

where $n$ denotes the number of degradation types, and $t_{i}=1$ if the input suffers from degradation $i$, else $t_{i}=0$.

_Restoration Network Training._ First, a pixel-level L1 loss is adopted to make the restoration result $R$ approximate the ground-truth clean image $C$:

(8) $\displaystyle\mathcal{L}_{net}^{l1}=\parallel R-C\parallel_{1}.$

Second, to restore degradation-agnostic results, i.e., results from which the discriminator cannot recognize the suffered degradation types, a discriminative loss is employed:

(9) $\displaystyle\mathcal{L}_{net}^{dis}=\sum_{i=0}^{n-1}-\log(1-Dis(R)_{i}).$

The ultimate goal is to minimize the L1 loss while pushing the results toward a consistent degradation-agnostic clean distribution. Meanwhile, a perceptual loss $\mathcal{L}_{net}^{per}$ [59] is also employed to weaken the interference of noise from pseudo-data on the training. Thus, the final training objective can be expressed as:

(10) $\displaystyle\mathcal{L}_{net}=\mathcal{L}_{net}^{l1}+\lambda_{dis}\mathcal{L}_{net}^{dis}+\mathcal{L}_{net}^{per},$

where $\lambda_{dis}$ is set to 0.1 to balance the relative weight of $\mathcal{L}_{net}^{dis}$. The proposed paradigm copes with 31 hybrid weather conditions by relying only on a five-class classification discriminator, whereas existing approaches have to treat the 31 scenarios separately and are limited in capturing cross-weather interrelationships.
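A minimal PyTorch sketch of the objectives (7) and (9), under the assumption that the discriminator emits one logit per weather type:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(logits: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Multilabel cross-entropy of Eq. (7); logits, t: (B, n) with t[:, i] = 1
    iff degradation i is present in the input."""
    return F.binary_cross_entropy_with_logits(logits, t, reduction="none").sum(1).mean()

def restoration_dis_loss(logits: torch.Tensor) -> torch.Tensor:
    """Eq. (9): drive every per-degradation probability toward zero so the
    restored image carries no recognizable degradation trace."""
    return -torch.log(1 - torch.sigmoid(logits) + 1e-8).sum(1).mean()
```

During training the discriminator minimizes the first objective, while the restoration network minimizes Eq. (10), in which the second objective enters with weight $\lambda_{dis}=0.1$.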
### 3.4 Hybrid Adverse Conditions Dataset (HAC)

#### 3.4.1 AdverseGAN for Training Set Generation

Figure 6: Pipeline of our proposed AdverseGAN. The generator is an encoder-decoder structure with two latent space injection branches: a content branch (top) and a style branch (bottom). There are two discriminators: a Realism-Discriminator (R-Discriminator) and a Pairing-Discriminator (P-Discriminator). The R-Discriminator constrains the realism of the weather degradation, while the P-Discriminator guarantees the matching with the input, i.e., the background stays consistent and only the weather is added. For brevity, we take the generation of rain streaks as an example; the other four weather types are similar. It is worth noting that AdverseGAN can generate different adverse conditions by controlling the type vector, without having to train five separate models.

Pseudo-data generation [60, 37, 38, 39, 40, 41, 43] has been shown to be feasible and has achieved notable performance. Inspired by the above research, and to save the time and effort of building a dataset for end-to-end training of arbitrary hybrid adverse conditions restoration, we propose to generate pseudo-adverse conditions with a GAN rather than by artificial processing. The key insight is to first train an elaborate AdverseGAN that can generate the five non-hybrid weather conditions; hybrid conditions can then be generated by recursive calls. Ultimately, all 31 adverse conditions can be generated automatically without manual operation. A schematic illustration of AdverseGAN is provided in Fig. 6.

_Generator._ Following recent works [61, 62, 63, 64], our generator is constructed on a dual-space structure with a content space $\mathcal{C}$ and a style space $\mathcal{S}$. The generator first encodes the input clean image into the latent space and then generates the degraded image with the injection of a style vector $z_{s}$ and a content vector $z_{c}$ randomly drawn from a normal distribution. A type vector $z_{t}$, mapped from the type label $t$, is integrated to control the condition type, as in cGAN [65]. The injections of the style vector and the content vector follow StyleGAN2 [66] and SNI [63], respectively. However, the functions of our content latent space differ from SNI in three aspects: (1) the content code and the image code are concatenated together as the input feature map of the decoder, achieving a natural decoupling of clean background and degradation; (2) a content space separate from the style space can generate richer and more diverse conditions; (3) the dual space allows for a better disentanglement of content and style, generating more realistic and reliable adverse conditions. With the dual space, the generation procedure from a clean image $C$ to the corresponding degraded image $D^{t}$ can be described by the conditional distribution $p(D^{t}|C,t,z_{c},z_{s})$. The generator $G$ expresses an implicit distribution $p_{G}(D|C,t,z_{c},z_{s})$ that approximates the true distribution $p(D^{t}|C,t,z_{c},z_{s})$. The pseudo degraded image $D$ is then easily obtained:

(11) $\displaystyle z_{c}\sim p(z_{c}),\quad z_{s}\sim p(z_{s}),\quad D=G(C,t,z_{c},z_{s}),$

where $p(i)$ denotes the distribution of the latent variable $i$.
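Eq. (11) amounts to the following sampling routine (a sketch; the latent dimensionality is our assumption):

```python
import torch

def sample_degraded(G, C: torch.Tensor, t: torch.Tensor, z_dim: int = 512) -> torch.Tensor:
    """Draw a pseudo degraded image from AdverseGAN as in Eq. (11).
    G: trained generator; C: (B, 3, H, W) clean images; t: type labels."""
    z_c = torch.randn(C.size(0), z_dim, device=C.device)   # content code ~ N(0, 1)
    z_s = torch.randn(C.size(0), z_dim, device=C.device)   # style code   ~ N(0, 1)
    return G(C, t, z_c, z_s)
```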
_Discriminators._ Although paired datasets for individual degradation types are readily available, most of them are synthesized by artificial modulation and cannot simulate real degradation scenarios well. In addition, generative adversarial networks cannot guarantee that the background of the generated image remains unchanged apart from the degradation. To remedy these limitations, we introduce dual discriminators to generate more realistic and content-preserving adverse conditions. The realism-discriminator (_RD_) tries to distinguish real-world adverse weather conditions from generated fake ones, while the pairing-discriminator (_PD_) tries to distinguish real pairing data from pseudo pairing data, i.e., it constrains the background consistency of the generated images. To allow the generation of multiple adverse conditions, our discriminators produce probability distributions over both sources and condition types [67], $D:\mu\to\{Dsrc(\mu),Dcls(\mu)\}$. For _RD_, an adversarial loss is first deployed to constrain the generated image $D$ to be indistinguishable from a real-world image $E$:

(12) $\displaystyle\mathcal{L}_{adv}^{RD}=\mathbb{E}_{E}[\log RDsrc(E)]+\mathbb{E}_{C,t,z_{c},z_{s}}[\log(1-RDsrc(G(C,t,z_{c},z_{s})))].$

Apart from the adversarial loss, a type classification loss is imposed to guarantee that the generated weather type meets expectations. We decompose this loss into two terms: a type classification loss on real-world images used to optimize _RD_, and a type classification loss on generated images used to optimize $G$. In detail, the former is defined as:

(13) $\displaystyle\mathcal{L}_{cls(r)}^{RD}=\mathbb{E}_{E,\hat{t}}[-\log RDcls(\hat{t}|E)],$

where $\hat{t}$ represents the predicted weather type. The latter is defined as:

(14) $\displaystyle\mathcal{L}_{cls(f)}^{RD}=\mathbb{E}_{C,t,z_{c},z_{s}}[-\log RDcls(t|G(C,t,z_{c},z_{s}))].$

Likewise, _PD_ also involves two objectives: an adversarial loss constraining the pairing of generated images with their clean counterparts, and a type classification loss ensuring that the pairing type matches expectations. The adversarial loss for optimizing _PD_ is expressed as:

(15) $\displaystyle\mathcal{L}_{adv}^{PD}=\mathbb{E}_{C,H}[\log PDsrc(C,H)]+\mathbb{E}_{C,t,z_{c},z_{s}}[\log(1-PDsrc(C,G(C,t,z_{c},z_{s})))].$

Correspondingly, the type classification losses used to optimize _PD_ and _G_, respectively, are:

(16) $\displaystyle\mathcal{L}_{cls(r)}^{PD}=\mathbb{E}_{C,H,\hat{t}}[-\log PDcls(\hat{t}|C,H)],$

(17) $\displaystyle\mathcal{L}_{cls(f)}^{PD}=\mathbb{E}_{C,t,z_{c},z_{s}}[-\log PDcls(t|C,G(C,t,z_{c},z_{s}))].$

_Full Objective._ Finally, the objective functions to optimize _G_, _RD_, and _PD_ are written, respectively, as

(18) $\displaystyle\mathcal{L}_{G}=\alpha(-\mathcal{L}_{adv}^{RD}+\lambda_{cls}\mathcal{L}_{cls(f)}^{RD})+\beta(-\mathcal{L}_{adv}^{PD}+\lambda_{cls}\mathcal{L}_{cls(f)}^{PD}),$

(19) $\displaystyle\mathcal{L}_{RD}=\alpha(\mathcal{L}_{adv}^{RD}+\lambda_{cls}\mathcal{L}_{cls(r)}^{RD}),$

(20) $\displaystyle\mathcal{L}_{PD}=\beta(\mathcal{L}_{adv}^{PD}+\lambda_{cls}\mathcal{L}_{cls(r)}^{PD}),$

where $\alpha$, $\beta$, and $\lambda_{cls}$ are hyper-parameters balancing the weight of each term, set to 1, 2, and 3, respectively. To unify the learning rates and avoid mode collapse, both $\mathcal{L}_{RD}$ and $\mathcal{L}_{PD}$ are modulated by $\alpha$ and $\beta$. Thanks to the decoupled design and the pairing-discriminator, which already serves as a content preserver, no cycle consistency loss is introduced, as it would be time-consuming and resource-intensive.

In this paper, we consider $2^{5}-1=31$ combinations for arbitrary hybrid adverse conditions restoration (the five weather types arranged in combination, excluding the clean case). To cost-effectively synthesize sufficient paired data for end-to-end training, the enormous amount of training data is produced by AdverseGAN almost without any labor cost. We first train AdverseGAN, which can generate the five basic adverse weather conditions, by leveraging existing data; superimposed hybrid scenarios are then generated by recursive calls. Ultimately, the production of the training set is done automatically by AdverseGAN. The datasets used for training AdverseGAN are summarized in Tab. I.

TABLE I: Summary of the datasets for AdverseGAN training. Note that all data come from the training sets, and 10% of the real-world images were split out for visual testing.

Adverse Condition | Paired Data | Real-world Data
---|---|---
Haze | OTS [68] | Li _et al._ [68]
Rain Streak | Rain12000 [7] | Wei _et al._ [69]
Snow | Snow100K [34] | Liu _et al._ [34]
Night | SICE [70] | Loh _et al._ [71]
Raindrop | Raindrop [10] | Qian _et al._ [10]
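Putting Eqs. (18)–(20) together, one training iteration alternates between the two discriminators and the generator. Below is a condensed sketch of the generator update, where `RD(x)` and `PD(c, x)` are assumed to return a real/fake logit and per-type classification logits (following $D:\mu\to\{Dsrc,Dcls\}$), and the $-\mathcal{L}_{adv}$ terms are replaced by the standard non-saturating surrogate:

```python
import torch
import torch.nn.functional as F

ALPHA, BETA, LAM_CLS = 1.0, 2.0, 3.0      # weights reported above

def generator_loss(G, RD, PD, C, t, z_c, z_s):
    """Assemble Eq. (18) for one batch; all module signatures are assumptions."""
    D_fake = G(C, t, z_c, z_s)
    rd_src, rd_cls = RD(D_fake)
    pd_src, pd_cls = PD(C, D_fake)
    # non-saturating stand-ins for -L_adv: make fakes look real to RD and PD
    adv_rd = F.binary_cross_entropy_with_logits(rd_src, torch.ones_like(rd_src))
    adv_pd = F.binary_cross_entropy_with_logits(pd_src, torch.ones_like(pd_src))
    cls_rd = F.binary_cross_entropy_with_logits(rd_cls, t)      # cf. Eq. (14)
    cls_pd = F.binary_cross_entropy_with_logits(pd_cls, t)      # cf. Eq. (17)
    return ALPHA * (adv_rd + LAM_CLS * cls_rd) + BETA * (adv_pd + LAM_CLS * cls_pd)
```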
Algorithm 1 Arbitrary Adverse Condition Generation With AdverseGAN

Input: $C$: input clean image; $t$: type label initialized to $"00000"$; $Codes$: a five-digit binary code from $"00001"$ to $"11111"$. Each digit represents a weather type; from left to right: haze, rain streak, snow, night, and raindrop. "1" indicates that the condition contains that weather type, "0" the opposite.
Output: $D$: degraded image

1: for $i=0$ to $4$ do
2:  if $Codes[i]="1"$ then
3:   $t[i]="1"$
4:   Sample $z_{c}$ from $\mathcal{N}(0,1)$
5:   Sample $z_{s}$ from $\mathcal{N}(0,1)$
6:   $C=AdverseGAN(C,t,z_{c},z_{s})$
7:   $t="00000"$
8: $D=C$
9: return $D$

After training, we manually curate 5000 well-illuminated outdoor clean images from three datasets [10, 34, 68] as the ground truth, and each image is randomly cropped to a fixed size of $256\times 256$. Thereafter, for each cropped image we generate all 31 adverse scenes, with each scene sampled twice from the style and content spaces, resulting in $310K$ paired data. The generation pipeline is summarized in Algo. 1. Moreover, we demonstrate the generative power of AdverseGAN in Fig. 7. Inevitably, the generated images introduce some undesired artifacts, which might seem to compromise model training, but we found this hazard to be minimal. As will be demonstrated in Sec. 4.3, RAHC trained only with data generated by AdverseGAN achieves results competitive with those trained on the standard training sets and outperforms most state-of-the-art methods.

Figure 7: Superimposed adverse conditions (top two rows) and interpolation results (bottom five rows) generated by AdverseGAN. AdverseGAN can generate infinite, controllable, realistic, and diverse adverse scenarios.

#### 3.4.2 Handcrafted Test Set

Since the training set can be generated by AdverseGAN, the test set is carefully fabricated by hand to guarantee the authority and accuracy of evaluation. We first capture 200 high-quality, well-curated images ($720\times 480$) as ground truth with a Canon EOS 60D, and then synthesize the corresponding 31 adverse conditions for each image, resulting in 6200 image pairs. The synthesis of haze, rain streak, snow, night, and raindrop follows previous academic literature [68, 72, 34, 73, 26]. The Photoshop-related work was performed by three recruited photography experts. Lighting conditions, scenes, subjects, and styles, among other factors, are all taken into account to ensure realism and diversity. To the best of our knowledge, HAC covers the richest variety of adverse conditions and owns the largest amount of data; in some sense, AdverseGAN can generate infinitely many adverse conditions. In Tab. II, we compare HAC with other adverse conditions restoration datasets: those datasets focus only on single or special double degradations, while our HAC dataset covers the 31 scenarios combined from five weather types.
TABLE II: Comparison of HAC against conventional adverse conditions restoration datasets.

Dataset | Haze | Rain | Snow | Night | Raindrop | Total
---|---|---|---|---|---|---
RESIDE [68] | $\checkmark$ | | | | | $\sim\!87K$
Rain1200 [7] | | $\checkmark$ | | | | $13.2K$
Snow100K [34] | | | $\checkmark$ | | | $100K$
SICE [70] | | | | $\checkmark$ | | $\sim\!0.6K$
Raindrop [10] | | | | | $\checkmark$ | $\sim\!1.1K$
Outdoor-Rain [25] | $\checkmark$ | $\checkmark$ | | | | $10.5K$
DarkRain [27] | | $\checkmark$ | | $\checkmark$ | | $\sim\!5K$
RainDS [26] | | $\checkmark$ | | | $\checkmark$ | $5.8K$
HAC | $\checkmark$ | $\checkmark$ | $\checkmark$ | $\checkmark$ | $\checkmark$ | $\sim\!316K$

## 4 Experiments and Analysis

In this section, we conduct extensive experiments to demonstrate the effectiveness of the proposed method. We first present the detailed experimental setup in Section 4.1; qualitative and quantitative comparisons with state-of-the-art methods are performed in Sections 4.2 and 4.3. The performance in real-world hybrid scenarios is verified in Section 4.4. We then perform ablation experiments on the restoration framework RAHC and the adverse weather generator AdverseGAN in Sections 4.5 and 4.6, respectively. The complexity of the model is analyzed in Section 4.7. Finally, we explore the application to high-level vision tasks in Section 4.8.

### 4.1 Implementation Details

All experiments are implemented in the PyTorch framework on two NVIDIA RTX 3090ti GPUs. For AdverseGAN, we use Adam as the optimizer with a batch size of 16; the learning rate is initialized to 0.00001 and decreased to 0.000005 after 200 epochs. Once the network structure is determined, the size of the generated image is fixed, so the training data are randomly cropped to $256\times 256$. We train from scratch for 400 epochs on the above-mentioned datasets. Considering the size differences among the datasets, a resampling strategy is applied throughout training. For the restoration network, the numbers of MHBBs in the different layers, $n_{1}\sim n_{8}$, are set to 2, 4, 6, 4, 4, 2, 2, and 2, respectively. The numbers of heads are symmetric, set to 1, 2, 4, and 8 from shallow to deep layers. We train RAHC for $6\times 10^{5}$ iterations using Adam with a batch size of 4, except for the first 5K iterations, where only the mapping network is trained to avoid a cold start. The initial learning rate is $2\times 10^{-4}$ and is steadily decreased to $1\times 10^{-6}$ with the cosine annealing strategy [74]. Notably, for a fair comparison, all methods are trained on the same dataset until convergence with their default configurations. PSNR [75] and SSIM [76] are utilized to evaluate restoration performance. These metrics are calculated in the RGB color space, except for deraining on Rain1200 [7], where PSNR and SSIM are calculated on the Y channel of the YCbCr color space, as in other works [16, 15]. The implementation code and dataset are available at https://github.com/Jeasco/RAHC.
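For reproducibility, here is a small sketch of the Y-channel PSNR convention used for Rain1200 (BT.601 luma, as in MATLAB's rgb2ycbcr; the function names are ours):

```python
import numpy as np

def y_channel(img: np.ndarray) -> np.ndarray:
    """BT.601 luma in [16, 235]; img is RGB with values in [0, 1]."""
    return 16.0 + img @ np.array([65.481, 128.553, 24.966])

def psnr(ref: np.ndarray, out: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((ref.astype(np.float64) - out.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Rain1200 convention: psnr(y_channel(gt), y_channel(pred)); elsewhere PSNR is
# computed directly on the RGB values with peak = 1.0 for images in [0, 1].
```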
### 4.2 Comparison with the State-of-the-Arts on HAC Dataset

TABLE III: Quantitative comparison with the SOTA methods on our proposed benchmark HAC dataset (PSNR / SSIM). Top table: a single model instance per condition (condition-specific). Bottom table: one model instance for all conditions (all-in-one). The column header indicates the number of weather types contained in an image (e.g., Triple reports the average scores over the $C_{5}^{3}=10$ conditions containing three weather types).

Condition-specific:

Method | Single | Double | Triple | Quadruple | Pentuple | Average
---|---|---|---|---|---|---
MIRNet [35] | 28.44 / 0.9285 | 22.10 / 0.8749 | 19.11 / 0.7940 | 15.74 / 0.6483 | 14.30 / 0.6257 | 19.94 / 0.7743
HINet [36] | 28.99 / 0.9393 | 23.33 / 0.9101 | 20.48 / 0.8422 | 16.14 / 0.6982 | 14.69 / 0.6689 | 20.73 / 0.8117
MPRNet [15] | 28.84 / 0.9322 | 22.79 / 0.9147 | 20.19 / 0.8375 | 15.99 / 0.6858 | 14.65 / 0.6653 | 20.49 / 0.8071
SwinIR [18] | 29.08 / 0.9411 | 24.21 / 0.9199 | 20.85 / 0.8517 | 16.78 / 0.7599 | 14.87 / 0.6672 | 21.16 / 0.8280
Uformer [19] | 29.17 / 0.9425 | 24.48 / 0.9228 | 21.05 / 0.8533 | 16.89 / 0.7683 | 14.95 / 0.6728 | 21.31 / 0.8319
Restormer [16] | 29.30 / 0.9478 | 24.60 / 0.9239 | 21.08 / 0.8531 | 16.92 / 0.7701 | 15.04 / 0.6783 | 21.39 / 0.8350
NAFNet [17] | 29.35 / 0.9513 | 24.71 / 0.9276 | 21.12 / 0.8583 | 17.01 / 0.7864 | 15.23 / 0.6844 | 21.48 / 0.8416
RAHC | 29.45 / 0.9517 | 24.98 / 0.9298 | 22.03 / 0.8817 | 19.53 / 0.8395 | 18.09 / 0.7502 | 22.82 / 0.8706

All-in-one:

Method | Single | Double | Triple | Quadruple | Pentuple | Average
---|---|---|---|---|---|---
NAFNet [17] | 29.10 / 0.9407 | 24.22 / 0.9143 | 20.74 / 0.8465 | 15.83 / 0.6623 | 14.33 / 0.6287 | 20.84 / 0.7985
TransWeather [23] | 29.16 / 0.9433 | 24.25 / 0.9155 | 20.83 / 0.8495 | 16.72 / 0.7555 | 14.72 / 0.6652 | 21.14 / 0.8258
WeatherDiff [24] | 29.19 / 0.9431 | 24.36 / 0.9215 | 20.99 / 0.8530 | 16.90 / 0.7685 | 14.93 / 0.6711 | 21.27 / 0.8314
AirNet [22] | 29.19 / 0.9436 | 24.32 / 0.9207 | 20.97 / 0.8522 | 16.87 / 0.7655 | 15.01 / 0.6741 | 21.27 / 0.8312
TKL [20] | 29.23 / 0.9441 | 24.36 / 0.9229 | 21.05 / 0.8531 | 16.93 / 0.7725 | 14.92 / 0.6734 | 21.30 / 0.8332
RAHC | 29.40 / 0.9514 | 24.91 / 0.9283 | 22.17 / 0.8946 | 19.88 / 0.8425 | 18.24 / 0.7612 | 22.92 / 0.8756

The quantitative results on the HAC dataset are reported in Tab. III. As can be seen, our RAHC delivers unparalleled performance gains and outperforms all competing models both in the condition-specific setting and in the all-in-one setting, especially for extremely adverse scenarios such as "Quadruple" and "Pentuple". Notably, RAHC exceeds the top-performing unified approach TKL [20] by 3.32dB in PSNR when there are pentuple degradation types. Furthermore, RAHC trained in the all-in-one setting even surpasses the results obtained by training separately on each single condition. This can be ascribed to the fact that the proposed discriminative learning scheme allows the network to learn more generalized and degradation-agnostic repair capabilities, and that the reconstruction vectors aided scheme benefits from more sufficient data. We also present visual comparisons in Fig. 8 and Fig. 9. As shown, RAHC recovers clean and crisp results while achieving a harmonious global tone, without introducing the visible artifacts or color shifts suffered by other methods, especially in complicated hybrid scenarios.

Figure 8: Visual comparisons with SOTA adverse conditions restoration methods on outdoor natural scenery. Our RAHC restores more visually pleasing results with finer details and consistent illumination.

Figure 9: Visual comparisons with SOTA adverse conditions restoration methods on outdoor building structures. Our RAHC restores more photorealistic results with coherent fine structural and textural details.
### 4.3 Comparison with the State-of-the-Arts on Conventional Datasets

TABLE IV: Quantitative comparison with the SOTA methods on the Snow100K [34] dataset (PSNR / SSIM). Top block: training with task-specific data. Bottom block (rows from All-in-One [21] onward): training with mixed data. † indicates training with pure data generated by AdverseGAN. The performance gain from applying AdverseGAN as an augmentor is reported in parentheses.

Method | Snow100K-S | Snow100K-M | Snow100K-L
---|---|---|---
DesnowNet [34] | 32.33 / 0.9500 | 30.87 / 0.9409 | 27.17 / 0.8983
DDMSNet [77] | 34.34 / 0.9445 | 32.89 / 0.9330 | 28.85 / 0.8772
MPRNet [15] | 35.25 / 0.9537 | 33.02 / 0.9422 | 30.30 / 0.8999
HDCWNet [9] | 35.28 (0.07) / 0.9571 | 33.14 (0.09) / 0.9436 | 30.49 (0.14) / 0.9017
NAFNet [17] | 35.68 (0.06) / 0.9609 | 33.80 (0.11) / 0.9479 | 30.99 (0.17) / 0.9084
RAHC | 35.81 (0.09) / 0.9647 | 34.03 (0.13) / 0.9525 | 31.45 (0.16) / 0.9189
RAHC† | 35.32 / 0.9584 | 33.25 / 0.9421 | 30.56 / 0.9009
All-in-One [21] | – | – | 28.33 / 0.8820
TransWeather [23] | 35.21 (0.09) / 0.9532 | 33.02 (0.16) / 0.9408 | 30.28 (0.18) / 0.8987
AirNet [22] | 35.37 (0.09) / 0.9584 | 33.24 (0.15) / 0.9419 | 30.41 (0.19) / 0.9087
TKL [20] | 35.44 (0.08) / 0.9597 | 33.38 (0.14) / 0.9428 | 30.52 (0.17) / 0.9098
RAHC | 35.53 (0.11) / 0.9625 | 33.66 (0.17) / 0.9457 | 31.08 (0.23) / 0.9115

TABLE V: Quantitative comparison with the SOTA methods on the Raindrop removal test dataset [10] (PSNR / SSIM). Please refer to Tab. IV for the table notes; rows from All-in-One [21] onward form the all-in-one block.

Method | Raindrop
---|---
pix2pix [78] | 28.02 / 0.8547
Quan [8] | 31.44 / 0.9263
DuRN [79] | 31.24 (0.18) / 0.9259
Restormer [16] | 32.69 (0.14) / 0.9407
NAFNet [17] | 32.51 (0.16) / 0.9322
RAHC | 33.37 (0.15) / 0.9445
RAHC† | 32.25 / 0.9326
All-in-One [21] | 31.12 / 0.9268
TransWeather [23] | 31.74 (0.22) / 0.9279
AirNet [22] | 32.27 (0.18) / 0.9315
TKL [20] | 32.35 (0.19) / 0.9341
RAHC | 32.82 (0.24) / 0.9401

TABLE VI: Quantitative comparison with the SOTA methods on the SOTS-outdoor [68] dataset (PSNR / SSIM). Please refer to Tab. IV for the table notes; rows from TransWeather [23] onward form the all-in-one block.

Method | SOTS-outdoor
---|---
GridDehazeNet [6] | 31.47 / 0.9779
MPRNet [15] | 32.17 / 0.9799
FFA-Net [31] | 32.89 / 0.9802
AECR-Net [32] | 33.07 (0.14) / 0.9804
DehazeFormer [33] | 33.22 (0.10) / 0.9814
RAHC | 33.89 (0.16) / 0.9897
RAHC† | 33.24 / 0.9809
TransWeather [23] | 33.01 (0.20) / 0.9768
AirNet [22] | 33.09 (0.19) / 0.9808
TKL [20] | 33.13 (0.17) / 0.9792
RAHC | 33.54 (0.22) / 0.9865

TABLE VII: Quantitative comparison with the SOTA methods on the Rain1200 [7] dataset (PSNR / SSIM). Please refer to Tab. IV for the table notes; rows from TransWeather [23] onward form the all-in-one block.

Method | Rain1200
---|---
DDN [30] | 30.97 / 0.9116
PReNet [80] | 33.17 / 0.9481
RCDNet [12] | 34.08 / 0.9532
Restormer [16] | 34.67 (0.12) / 0.9623
NAFNet [17] | 34.69 (0.08) / 0.9617
RAHC | 34.93 (0.11) / 0.9652
RAHC† | 34.25 / 0.9572
TransWeather [23] | 34.05 (0.16) / 0.9535
AirNet [22] | 34.19 (0.13) / 0.9562
TKL [20] | 34.25 (0.15) / 0.9549
RAHC | 34.71 (0.14) / 0.9641

TABLE VIII: Quantitative comparison with the SOTA methods on the SICE [70] test dataset (PSNR / SSIM). Please refer to Tab. IV for the table notes; rows from TransWeather [23] onward form the all-in-one block.

Method | SICE
---|---
RUAS [13] | 15.26 / 0.5481
EnlightenGAN [81] | 17.55 / 0.5915
SCI [82] | 19.26 / 0.6324
KinD [5] | 20.13 (0.09) / 0.6502
MIRNetv2 [83] | 21.37 (0.15) / 0.7203
RAHC | 22.73 (0.14) / 0.7425
RAHC† | 21.09 / 0.6705
TransWeather [23] | 20.42 (0.14) / 0.6973
AirNet [22] | 20.65 (0.18) / 0.6993
TKL [20] | 21.26 (0.16) / 0.7131
RAHC | 21.56 (0.18) / 0.7269

In addition to HAC, we also evaluate the performance of RAHC on five conventional adverse weather removal datasets: Snow100K [34], Raindrop [10], SOTS-outdoor [68], Rain1200 [7], and SICE [70].
Only the training sets from these datasets are utilized to train AdverseGAN, so there is no information leakage. In addition, the real-world images used for training AdverseGAN are replaced by handcrafted images from the corresponding datasets to better verify the data-distribution simulation performance of AdverseGAN. The quantitative experimental results are presented in Tab. IV, Tab. V, Tab. VI, Tab. VII, and Tab. VIII, respectively. In the tables, we provide the results of four types of experiments: (1) the task-specific setting, i.e., training with task-specific data only; (2) the all-in-one setting, i.e., training with mixed data from all datasets; (3) training with pure data generated by AdverseGAN; (4) the performance gain from applying AdverseGAN as a data augmentor.

As can be seen, our RAHC achieves the best scores on all five tasks in both the task-specific setting and the all-in-one setting. Taking Tab. VIII as an example, our method outperforms the previous SOTA TKL [20] by a PSNR margin of up to 0.3 dB. Moreover, in the all-in-one setting, RAHC is only slightly inferior to the task-specific results, demonstrating consistent restoration performance. To verify the reliability and distribution simulation capability of AdverseGAN, we trained RAHC only with data generated by AdverseGAN, without the original degraded images. RAHC trained on the generated data achieves competitive results; although tolerably worse than the standard results, the gap is negligible compared to the tedious and labor-intensive data synthesis process it avoids. Both the qualitative and quantitative experimental results on the HAC dataset thus demonstrate the plausibility of AdverseGAN. The results also indicate that our proposed AdverseGAN is trustworthy and excellent at simulating the source distribution, and can therefore act as an augmentor to bring performance gains to existing methods. Apart from the above experiments, we also report the performance gain from applying AdverseGAN as an augmentor in parentheses. AdverseGAN delivers consistent performance gains for both task-specific and all-in-one approaches. Compared to task-specific approaches, the unified frameworks obtain higher gains from augmentation, manifesting the importance of adequate training data for multi-weather learning.

### 4.4 Comparison with the State-of-the-Arts on Real-World Conditions

TABLE IX: Quantitative comparison results of average NIQE/SSEQ on real-world adverse weather conditions. Smaller scores indicate better perceptual quality.

| Metric | TransWeather [23] | AirNet [22] | TKL [20] | RAHC |
|---|---|---|---|---|
| NIQE | 6.918 | 7.266 | 5.625 | 4.514 |
| SSEQ | 50.17 | 45.66 | 40.11 | 30.62 |

Figure 10: Visual comparisons with SOTA adverse conditions restoration methods on real-world examples.

We conduct additional comparisons on real-world adverse weather conditions to further verify the realistic reliability of HAC and the robustness of RAHC. We use NIQE [84] and SSEQ [85] to quantitatively evaluate the reference-free restoration performance. Quantitative results are shown in Tab. IX; smaller SSEQ and NIQE scores indicate better perceptual quality and clearer contents. Our RAHC delivers the best average scores on real-world samples, outperforming the state-of-the-art restoration methods [20, 22, 23] by a large margin. Moreover, Fig. 10 exhibits the visual comparisons on real-world conditions.
It can be seen that RAHC and TKL trained on the HAC dataset can comfortably handle real-world hybrid adverse weather conditions, adequately indicating the realism and effectiveness of our proposed dataset. Meanwhile, our RAHC produces cleaner and more pleasing results than TKL and TransWeather, which strongly supports the superiority of the proposed RAHC. Additionally, this experiment also reveals that real-world scenarios often suffer from multiple superimposed degradations rather than a single corruption, and our method can flexibly restore arbitrary hybrid conditions in one go.

### 4.5 Analyzing Framework RAHC

#### 4.5.1 Effect of Basic Schemes

TABLE X: Average PSNR of RAHC on the HAC dataset with four variants. RVA and OSD represent the reconstruction vectors aided scheme and the output space discriminative learning scheme, respectively.

| Variant | PSNR |
|---|---|
| RAHC w/o RVA | 21.96 |
| RAHC w/o OSD | 22.41 |
| RAHC w/o RVA&OSD | 21.47 |
| Ours | 22.92 |

We first investigate the validity of the reconstruction vectors aided scheme and the output space discriminative learning scheme proposed in this paper. Tab. X reports the ablations of the two basic schemes: the PSNR of RAHC w/o OSD decreases by 0.51 dB, and a sharp reduction of 0.96 dB is observed when RVA is discarded. The output space discriminative learning scheme ensures that the network learns degradation-agnostic generic repair capabilities, while the reconstruction vectors aided scheme provides solid support for the network to cope with hybrid complicated adverse scenarios. The experimental results strongly demonstrate the indispensability of these two schemes, which are instrumental in achieving arbitrary hybrid adverse weather conditions restoration.

#### 4.5.2 Effect of MHBB

TABLE XI: Average PSNR of RAHC on the HAC dataset with five variants.

| Variant | PSNR |
|---|---|
| MHBB w/o multi-head | 22.59 |
| CAM w/o convolution path | 22.66 |
| CAM w/o attention path | 22.61 |
| DP-FFN w/o convolution path | 22.71 |
| Ours | 22.92 |

In order to verify the effectiveness of each pivotal design element in MHBB, we conduct experiments on the following variants: (1) MHBB w/o multi-head; (2) CAM w/o convolution path; (3) CAM w/o attention path; (4) DP-FFN w/o convolution path; (5) Ours. Note that for each variant we adjust the width of the network to keep the total number of parameters fixed. As shown in Tab. XI, each component plays a pivotal role, and removing any of them causes significant performance degradation. In particular, the PSNR decreases by 0.33 dB when the multi-head mechanism is eliminated, indicating the profound effect of multiple representation subspaces for degradation-agnostic learning. Besides, we provide a feature-map comparison of DP-FFN with and without the convolution path in Fig. 11. The original FFN is insensitive to local detail context, and its extracted features ignore the building texture in the image; in contrast, our method extracts the rich building texture structure by parallelizing a convolutional branch. The results clearly demonstrate that the proposed DP-FFN better extracts the detailed contexts of interest in image restoration by parallelizing a convolution path.

Figure 11: Visual effect of DP-FFN with or without convolution path. Top row: feature maps extracted by the original FFN without the convolutional path. Bottom row: feature maps extracted by the proposed DP-FFN with the convolutional path.
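To make the DP-FFN ablation concrete, here is a minimal PyTorch sketch of a feed-forward block with a parallel convolution path. The exact kernel size, expansion ratio, and fusion rule are not specified in the text, so treat them as illustrative assumptions:

```python
import torch
import torch.nn as nn

class DPFFN(nn.Module):
    """Dual-path feed-forward network: a point-wise MLP path (as in a
    standard transformer FFN) plus a parallel depthwise-conv path that
    re-injects local detail context. Kernel size 3, expansion 2, and
    additive fusion are assumptions for illustration."""

    def __init__(self, dim: int, expansion: int = 2):
        super().__init__()
        hidden = dim * expansion
        # Point-wise (1x1) path: the "original FFN".
        self.pw = nn.Sequential(
            nn.Conv2d(dim, hidden, kernel_size=1), nn.GELU(),
            nn.Conv2d(hidden, dim, kernel_size=1),
        )
        # Parallel convolution path: depthwise 3x3 for local textures.
        self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.pw(x) + self.conv(x)  # residual fusion (assumed)

feats = torch.randn(1, 64, 32, 32)   # B x C x H x W feature map
print(DPFFN(64)(feats).shape)        # torch.Size([1, 64, 32, 32])
```

Dropping `self.conv` recovers the "DP-FFN w/o convolution path" variant ablated in Tab. XI.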
#### 4.5.3 Effect of Reconstruction Vectors

Figure 12: Visual effect of reconstruction vectors. Visualizations of reconstruction vectors are obtained by the VQGAN decoder. RV represents reconstruction vectors.

We also explore the effect of the reconstruction vectors; visual results are provided in Fig. 12. RAHC without RV tends to produce ambiguous contexts, while the full RAHC restores sharper structural and textural details. Naturally, recovering clean images directly from the reconstruction vectors using the VQGAN decoder may be another option, but in this paper we only leverage the reconstruction vectors as auxiliary features and let the network learn how to utilize them on its own. The benefit of this strategy is also visible in Fig. 12: the directly restored images (visualization of RV) complete plausible textures of the degraded region but with low fidelity, whereas our implicit modeling allows the restoration model to use the visual atoms embedded in the reconstruction vectors according to its own "experience", hence restoring more realistic and reliable images with rich details.

Figure 13: Curve comparison on reconstruction vectors location accuracy.

Additionally, we provide an accuracy comparison between the proposed mapping network and direct classification for reconstruction vector localization. As shown in Fig. 13, our method exhibits consistently higher accuracy than classification. Even without perfectly precise predictions, the closely matching reconstruction vectors still provide rich visual cues for the restoration process, as best demonstrated in Fig. 12.

#### 4.5.4 Objective Evaluation of Output Space Discriminative Learning Scheme

TABLE XII: Average probability predicted by the weather type classifier. A lower confidence level indicates that the recovery results are perceptually closer to the expected clean ones.

| Method | Avg. Probability $(\downarrow)$ |
|---|---|
| TransWeather [23] | 0.553 |
| AirNet [22] | 0.528 |
| TKL [20] | 0.496 |
| RAHC w/o OSD | 0.452 |
| RAHC (Ours) | 0.231 |

To verify whether the output space discriminative learning scheme yields degradation-agnostic and trace-free clean results, we objectively evaluate the perceptual quality of the restoration results. We first train a ResNet-101 [86] classifier on half of the data from the HAC test set to recognize which weather degradation is contained in an image. We then feed the remaining data, repaired by the restoration network, to the trained classifier to obtain the predicted probability. Tab. XII presents the average probability predicted by the weather type classifier. Our RAHC restores more indiscernible images, suggesting more consistent and trace-free clean results. When the scheme is eliminated, the probability becomes significantly higher, indicating the essential role of the proposed learning scheme in degradation-agnostic restoration. Even with OSD removed, the probability is still lower than that of other approaches, which can be attributed to the proposed restoration network preserving more structural and textural details, thus staying closer to the desired clean result.

#### 4.5.5 Feature-Level Discrimination vs. Output Space Discrimination

TABLE XIII: Comparison of feature-level discrimination and output space discrimination.
| Variant | PSNR | SSIM |
|---|---|---|
| None | 22.41 | 0.8652 |
| Feature-Level | 22.71 | 0.8735 |
| Output Space (Ours) | 22.92 | 0.8756 |

Analogous to the output space discrimination, we further investigate feature-level discrimination: the encoder-extracted features are fed into the discriminator to distinguish the type of degradation, while the restoration network tries to confuse the discriminator so as to extract degradation-agnostic features. Quantitative experimental results are shown in Tab. XIII. Both feature-level and output space discrimination contribute to learning degradation-agnostic restoration capacity, while output space discrimination is clearly the more effective of the two. We conjecture that intermediate features containing high-level semantics are more likely to perplex the discriminator, which weakens the constraint it imposes.

#### 4.5.6 Universal Output Space Discrimination

TABLE XIV: Results of applying the TKL strategy proposed by Chen _et al._ [20] (top rows) and the output space discriminative learning scheme proposed in this study (middle rows) to SOTA universal image restoration methods. Performance gains compared to the pure training process are provided in parentheses.

| Method | PSNR | SSIM |
|---|---|---|
| *With TKL [20]:* | | |
| MPRNet [15] | 20.26 ($\uparrow$ 0.58) | 0.7812 ($\uparrow$ 0.0383) |
| SwinIR [18] | 21.01 ($\uparrow$ 0.36) | 0.8163 ($\uparrow$ 0.0256) |
| Uformer [19] | 21.03 ($\uparrow$ 0.28) | 0.8197 ($\uparrow$ 0.0264) |
| Restormer [16] | 21.14 ($\uparrow$ 0.30) | 0.8156 ($\uparrow$ 0.0172) |
| NAFNet [17] | 21.22 ($\uparrow$ 0.38) | 0.8215 ($\uparrow$ 0.0230) |
| *With our OSD scheme:* | | |
| MPRNet [15] | 20.33 ($\uparrow$ 0.65) | 0.7866 ($\uparrow$ 0.0437) |
| SwinIR [18] | 21.09 ($\uparrow$ 0.44) | 0.8213 ($\uparrow$ 0.0306) |
| Uformer [19] | 21.16 ($\uparrow$ 0.41) | 0.8217 ($\uparrow$ 0.0284) |
| Restormer [16] | 21.31 ($\uparrow$ 0.47) | 0.8301 ($\uparrow$ 0.0317) |
| NAFNet [17] | 21.37 ($\uparrow$ 0.53) | 0.8339 ($\uparrow$ 0.0354) |
| *Unified frameworks (reference):* | | |
| TransWeather [23] | 21.14 | 0.8258 |
| AirNet [22] | 21.27 | 0.8312 |
| TKL [20] | 21.30 | 0.8332 |
| RAHC | 22.92 | 0.8756 |

As mentioned above, our proposed output space discriminative learning scheme can be integrated into existing universal image restoration architectures to boost their performance under the all-in-one setting. Similarly, the TKL strategy proposed by Chen _et al._ can also be applied to existing algorithms, so we compare the two approaches on a fair backbone. As presented in Tab. XIV, all algorithms show a significant performance improvement under the all-in-one setting when equipped with our learning scheme, and the gains are more pronounced than those from the TKL strategy. In particular, Restormer [16] and NAFNet [17] even surpass the extant state-of-the-art unified framework TKL [20] with its original backbone, and our mechanism is more flexible and straightforward than TKL's complex two-stage training strategy. These experiments provide strong evidence of the generality, universality, and effectiveness of our proposed output space discriminative learning scheme.

#### 4.5.7 Robust Hybrid Adverse Weather Conditions Restoration

TABLE XV: Comparison of similarity between reconstruction outputs of different degraded versions. S, D, T, Q, and P represent Single, Double, Triple, Quadruple, and Pentuple, respectively (e.g., D$\longleftrightarrow$S denotes similarity calculation between the restoration results of the double-degraded versions and the single-degraded versions that correspond to the same clean image).
| Participating Data | TransWeather (PSNR/SSIM) | TKL (PSNR/SSIM) | RAHC (ours) (PSNR/SSIM) |
|---|---|---|---|
| D$\longleftrightarrow$S | 22.06 / 0.869 | 22.26 / 0.851 | 25.44 / 0.926 |
| T$\longleftrightarrow$S | 18.53 / 0.799 | 19.02 / 0.811 | 22.36 / 0.901 |
| Q$\longleftrightarrow$S | 15.16 / 0.675 | 15.36 / 0.698 | 19.91 / 0.852 |
| P$\longleftrightarrow$S | 12.11 / 0.612 | 13.78 / 0.621 | 18.22 / 0.745 |

Since our proposed HAC dataset contains different degraded versions of the same clean image, we can further verify the robustness of different restoration algorithms by comparing the similarity between the restoration results of different degraded images. Considering that existing methods are designed primarily for single degradation scenarios, we calculate the similarity between the restoration results of the hybrid weather degradation versions and the corresponding single degradation versions; higher similarity indicates more robust hybrid adverse weather restoration capability. As observed in Tab. XV, the hybrid degraded images restored by RAHC are much more similar to the repaired single degraded images, delivering a 3.34 dB higher PSNR than TKL [20] on T$\longleftrightarrow$S. The advantage becomes even more pronounced as more degradations are mixed (e.g., P$\longleftrightarrow$S). This result strongly demonstrates the superiority of our method in restoring hybrid adverse weather conditions; in particular, the reconstruction vectors aided scheme allows RAHC to extract image content features from the auxiliary visual cues and thus produce more promising results in hybrid scenarios.

#### 4.5.8 Effect and Sensitivity Analysis of $\lambda_{dis}$

TABLE XVI: Average PSNR of RAHC on the HAC dataset with different $\lambda_{dis}$.

| $\lambda_{dis}$ | 0.01 | 0.05 | 0.10 | 0.15 | 0.20 |
|---|---|---|---|---|---|
| PSNR | 22.69 | 22.86 | 22.92 | 22.83 | 22.81 |

We also conducted a pilot study to substantiate the effect of $\lambda_{dis}$ on the proposed output space discriminative learning scheme. Tab. XVI shows the final performance for different $\lambda_{dis}$ values. The highest PSNR is obtained when $\lambda_{dis}$ is set to 0.10, and both decreasing and increasing it cause only minor performance degradation. Furthermore, despite minor fluctuations, the final performance remains stable without collapse, implying the stability and ease of use of the proposed discriminative learning scheme.

### 4.6 Analyzing Data Generator AdverseGAN

#### 4.6.1 Effect of AdverseGAN

TABLE XVII: Average PSNR of RAHC with different adverse weather conditions generators. DD and DS represent the dual discriminator scheme and the dual space injection scheme, respectively.

| Generator | PSNR |
|---|---|
| pix2pix [78] | 18.33 |
| pix2pixHD [87] | 18.56 |
| CycleGAN [42] | 19.69 |
| StarGAN [88] | 20.75 |
| StarGAN v2 [89] | 20.82 |
| Kim _et al._ [64] | 20.91 |
| AdverseGAN w/o DD&DS | 20.86 |
| AdverseGAN w/o DD | 21.78 |
| AdverseGAN w/o DS | 21.89 |
| AdverseGAN (Ours) | 22.92 |

Figure 14: Visual comparison of snow scenarios. Our AdverseGAN generates snow scenes that more closely resemble real-world conditions and with consistent background.

To substantiate the effectiveness of AdverseGAN and the tailored schemes, we first conduct experiments with SOTA image translation algorithms as well as variants of AdverseGAN on adverse conditions generation. As shown in Tab. XVII, the model trained with data generated by AdverseGAN achieves the highest PSNR of 22.92 dB, 2.17 dB higher than StarGAN [88]; a schematic sketch of the dual-discriminator objective follows below.
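From the ablations, the generator appears to be trained against two critics: an R-Discriminator that pushes the generated degradations toward realism, and a P-Discriminator that judges the (clean, degraded) pair to keep the background consistent. A hedged PyTorch sketch of such an objective (the module architectures, the non-saturating BCE loss, and the omission of style/content conditioning are all illustrative assumptions, not the paper's exact implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_disc(in_ch: int) -> nn.Sequential:
    # Tiny PatchGAN-style critic; a placeholder architecture.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 1, 4, stride=2, padding=1),
    )

r_disc = make_disc(3)   # realism of the degraded image alone
p_disc = make_disc(6)   # consistency of the (clean, degraded) pair

def generator_adv_loss(clean, fake_degraded):
    """Adversarial loss against both discriminators (assumed form)."""
    real_logits = r_disc(fake_degraded)
    pair_logits = p_disc(torch.cat([clean, fake_degraded], dim=1))
    return (F.binary_cross_entropy_with_logits(
                real_logits, torch.ones_like(real_logits))
            + F.binary_cross_entropy_with_logits(
                pair_logits, torch.ones_like(pair_logits)))

clean = torch.rand(2, 3, 64, 64)
fake = torch.rand(2, 3, 64, 64)  # stand-in for the generator output
print(generator_adv_loss(clean, fake).item())
```

Removing `r_disc` or `p_disc` corresponds to the "w/o DD" degenerate settings probed in Tab. XVII and Fig. 14.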
In addition, the dual discriminator scheme and the dual space injection scheme, both specifically adapted to the adverse conditions generation task in this paper, contribute substantially to our design. These two tailored schemes bring consistent performance improvements, providing solid data support for training the restoration networks. Furthermore, we provide a visual comparison in Fig. 14, from which it can be seen that removing the P-Discriminator leads to inconsistent background perturbations, while removing the R-Discriminator results in unrealistic snow. In contrast, the full AdverseGAN generates snow sceneries containing snow effects with different density distributions at multiple scales, which are visually similar to real snow scenes and significantly more realistic than hand-made conditions.

#### 4.6.2 Disentanglement of Style and Content

Figure 15: Style interpolation and content interpolation. The $s$ code and $c$ code are sampled from the style space and the content space, respectively.

Interpolations of style and content are shown in Fig. 15. The $s$ code tends to control the form and direction of the weather degradations, while the $c$ code is mainly responsible for their position and density. The disentanglement of style and content offers finer control over the generation of adverse conditions and therefore produces richer and more diverse data.

#### 4.6.3 The Amount of Training Data

TABLE XVIII: Average PSNR of RAHC on the HAC dataset with different amounts of training data.

| Amount | $31K$ | $62K$ | $124K$ | $186K$ | $248K$ | $310K$ |
|---|---|---|---|---|---|---|
| PSNR | 13.54 | 16.75 | 19.65 | 21.06 | 22.43 | 22.92 |

We also examine the effect of the amount of training data on the final performance of the model; the experimental results are reported in Tab. XVIII. More training data always leads to higher PSNR, although the rate of improvement slows as the volume increases. In this paper, HAC provides $310K$ training pairs, but the trend suggests that further increasing the training data would bring further performance improvements, at the cost of more computational resources.

### 4.7 Performance vs. Complexity

Figure 16: Average PSNR vs. computational cost and parameter quantity on our proposed HAC dataset. The proposed RAHC obtains state-of-the-art performance with competitive computational cost and parameters.

As can be seen from Fig. 16, our RAHC obtains state-of-the-art performance at moderate complexity and outperforms existing methods by a large margin. In comparison, MIRNet [35], Restormer [16], and NAFNet [17] can only deal with single degradation scenarios, AirNet [22] requires a complicated additional regulation branch, and TKL [20] suffers from two-stage training that brings unwanted resource consumption.

### 4.8 Results on High-level Application

TABLE XIX: Quantitative results in mean intersection over union (mIoU) on the ACDC [90] dataset with five adverse weather conditions (both rain streaks and raindrops are included in Rain).
| Method | Fog | Night | Rain | Snow |
|---|---|---|---|---|
| Original | 43 | 11 | 37 | 29 |
| Restormer [16] | 46 | 27 | 39 | 31 |
| TransWeather [23] | 47 | 29 | 40 | 33 |
| TKL [20] | 49 | 27 | 41 | 34 |
| RAHC (Ours) | 54 | 36 | 45 | 36 |

To demonstrate the value of the proposed restoration algorithm in real-world vision systems, we evaluate the effect of the restoration process on the popular adverse conditions segmentation dataset ACDC [90] with a pre-trained DeepLabv3+ [91]. As shown in Tab. XIX, the unified frameworks TKL [20] and TransWeather [23] and our RAHC achieve better results than the single weather removal algorithm Restormer [16]. This once again shows that real-world adverse weather conditions are sophisticated, unpredictable, and often the result of a mixture of weather factors. A single weather removal algorithm such as Restormer [16] can only remove a specific weather type at once, while unified frameworks restore the complex scenario adaptively, generating more appealing results. In addition, thanks to the reconstruction vectors and the output space constraint, our proposed RAHC attains significantly higher mIoU than TKL [20] and TransWeather [23]. This result also indirectly indicates the necessity and effectiveness of the proposed arbitrary hybrid adverse weather conditions restoration task. We also provide visual segmentation results in Fig. 17: our approach better understands images captured in adverse conditions and more accurately identifies the semantics of different regions.

Figure 17: Visual comparison of segmentation for driving scene understanding on the ACDC [90] dataset between TKL [20] and our RAHC.

## 5 Discussion

Figure 18: Real-world hybrid adverse weather condition restoration results. Top row: an example with rain streaks, haze, and night. Bottom row: an example with rain streaks, snow, and night.

Fig. 18 illustrates the restoration results for two real-world hybrid weather conditions. It is easily noticed that processing each weather type individually cannot fully restore the degraded images, while our method removes all degradations in one go and restores more pleasing, high-quality images. Besides, our RAHC can robustly handle not only randomly occurring hybrid adverse weather conditions but also single weather scenarios; our approach can therefore be deployed to tackle complex and diverse real-world conditions, handling the hybrid weather scenarios neglected by existing approaches without losing the ability to restore single weather degradations. We believe the exploration in this study is meaningful and valuable for real-world weather conditions restoration.

## 6 Limitations and Future Work

A common limitation shared with other restoration algorithms is the strong reliance on large-scale paired data. As stated in the main paper, $310K$ pairs of data are used for model training. Fortunately, thanks to the proposed AdverseGAN, the labor, time, and capital costs are tremendously reduced. On the other hand, RAHC can handle arbitrary hybrid conditions combined from five common weather types, while rarer adverse weather such as frost on glass, sand, and dust remains to be addressed. In the future, we will focus on training models that can handle complicated hybrid conditions using only single degraded data, which is extremely challenging but meaningful, especially since the number of hybrid conditions grows exponentially with the number of weather types considered (five types already yield $2^{5}-1=31$ combinations).
In addition, we will also explore more realistic nighttime adverse weather simulation strategies in future research, which will hopefully enable the training of more robust and practical image restoration models.

## 7 Concluding Remarks

In this paper, we proposed a novel unified framework, namely RAHC, to restore arbitrary hybrid adverse weather conditions in one go. In contrast to existing frameworks, RAHC can handle severely deteriorated scenarios suffering from hybrid weather degradations and restore arbitrary hybrid conditions with a single trained model through a concise and flexible scheme. Meanwhile, MHBB provides comprehensive degradation characterization and representation support for multiple simultaneous weather degradations. In addition, we proposed a hybrid adverse conditions generation pipeline, based on which sufficient training data can be generated cost-effectively. The finally established HAC dataset contains $\sim\\!316K$ image pairs covering 31 weather conditions, and its richness, diversity, and adequacy render it a competent benchmark. Extensive experiments on our HAC and conventional datasets manifest the effectiveness, superiority, and robustness of RAHC. We expect this work to provide insights into arbitrary hybrid adverse conditions restoration and steer future research on this Gordian knot.

* [1] M. Liang, B. Yang, S. Wang, and R. Urtasun, “Deep continuous fusion for multi-sensor 3d object detection,” in _Proceedings of the European conference on computer vision (ECCV)_ , 2018, pp. 641–656. * [2] A. Prakash, K. Chitta, and A. Geiger, “Multi-modal fusion transformer for end-to-end autonomous driving,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 7077–7087. * [3] C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas, “Frustum pointnets for 3d object detection from rgb-d data,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 918–927. * [4] A. G. Perera, Y. Wei Law, and J. Chahl, “Uav-gesture: A dataset for uav control and gesture recognition,” in _Proceedings of the European Conference on Computer Vision (ECCV) Workshops_ , 2018, pp. 0–0. * [5] Y. Zhang, J. Zhang, and X. Guo, “Kindling the darkness: A practical low-light image enhancer,” in _Proceedings of the 27th ACM international conference on multimedia_ , 2019, pp. 1632–1640. * [6] X. Liu, Y. Ma, Z. Shi, and J. Chen, “Griddehazenet: Attention-based multi-scale network for image dehazing,” in _Proceedings of the IEEE/CVF international conference on computer vision_ , 2019, pp. 7314–7323. * [7] H. Zhang and V. M. Patel, “Density-aware single image de-raining using a multi-stream dense network,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 695–704. * [8] Y. Quan, S. Deng, Y. Chen, and H. Ji, “Deep learning for seeing through window with raindrops,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2019, pp. 2463–2471. * [9] W.-T. Chen, H.-Y. Fang, C.-L. Hsieh, C.-C. Tsai, I. Chen, J.-J. Ding, S.-Y. Kuo _et al._ , “All snow removed: Single image desnowing algorithm using hierarchical dual-tree complex wavelet representation and contradict channel loss,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2021, pp. 4196–4205. * [10] R. Qian, R. T. Tan, W. Yang, J. Su, and J.
Liu, “Attentive generative adversarial network for raindrop removal from a single image,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 2482–2491. * [11] H. Dong, J. Pan, L. Xiang, Z. Hu, X. Zhang, F. Wang, and M.-H. Yang, “Multi-scale boosted dehazing network with dense feature fusion,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2020, pp. 2157–2167. * [12] H. Wang, Q. Xie, Q. Zhao, and D. Meng, “A model-driven deep neural network for single image rain removal,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 3103–3112. * [13] R. Liu, L. Ma, J. Zhang, X. Fan, and Z. Luo, “Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 10 561–10 570. * [14] M.-W. Shao, L. Li, D.-Y. Meng, and W.-M. Zuo, “Uncertainty guided multi-scale attention network for raindrop removal from a single image,” _IEEE Transactions on Image Processing_ , vol. 30, pp. 4828–4839, 2021. * [15] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, M.-H. Yang, and L. Shao, “Multi-stage progressive image restoration,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2021, pp. 14 821–14 831. * [16] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, and M.-H. Yang, “Restormer: Efficient transformer for high-resolution image restoration,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 5728–5739. * [17] L. Chen, X. Chu, X. Zhang, and J. Sun, “Simple baselines for image restoration,” _arXiv preprint arXiv:2204.04676_ , 2022. * [18] J. Liang, J. Cao, G. Sun, K. Zhang, L. Van Gool, and R. Timofte, “Swinir: Image restoration using swin transformer,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2021, pp. 1833–1844. * [19] Z. Wang, X. Cun, J. Bao, W. Zhou, J. Liu, and H. Li, “Uformer: A general u-shaped transformer for image restoration,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 17 683–17 693. * [20] W.-T. Chen, Z.-K. Huang, C.-C. Tsai, H.-H. Yang, J.-J. Ding, and S.-Y. Kuo, “Learning multiple adverse weather removal via two-stage knowledge learning and multi-contrastive regularization: Toward a unified model,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 17 653–17 662. * [21] R. Li, R. T. Tan, and L.-F. Cheong, “All in one bad weather removal using architectural search,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2020, pp. 3175–3185. * [22] B. Li, X. Liu, P. Hu, Z. Wu, J. Lv, and X. Peng, “All-In-One Image Restoration for Unknown Corruption,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 17 452–17 462. * [23] J. M. J. Valanarasu, R. Yasarla, and V. M. Patel, “Transweather: Transformer-based restoration of images degraded by adverse weather conditions,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 2353–2363. * [24] O. Özdenizci and R. Legenstein, “Restoring vision in adverse weather conditions with patch-based denoising diffusion models,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2023. * [25] R. Li, L.-F. Cheong, and R. T. 
Tan, “Heavy rain image restoration: Integrating physics model and conditional adversarial learning,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 1633–1642. * [26] R. Quan, X. Yu, Y. Liang, and Y. Yang, “Removing raindrops and rain streaks in one go,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 9147–9156. * [27] Y. Wan, Y. Cheng, M. Shao, and J. Gonzàlez, “Image rain removal and illumination enhancement done in one go,” _Knowledge-Based Systems_ , p. 109244, 2022. * [28] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” _Advances in neural information processing systems_ , vol. 30, 2017. * [29] P. Esser, R. Rombach, and B. Ommer, “Taming transformers for high-resolution image synthesis,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2021, pp. 12 873–12 883. * [30] X. Fu, J. Huang, D. Zeng, Y. Huang, X. Ding, and J. Paisley, “Removing rain from single images via a deep detail network,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 3855–3863. * [31] X. Qin, Z. Wang, Y. Bai, X. Xie, and H. Jia, “Ffa-net: Feature fusion attention network for single image dehazing,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 34, no. 07, 2020, pp. 11 908–11 915. * [32] H. Wu, Y. Qu, S. Lin, J. Zhou, R. Qiao, Z. Zhang, Y. Xie, and L. Ma, “Contrastive learning for compact single image dehazing,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 10 551–10 560. * [33] Y. Song, Z. He, H. Qian, and X. Du, “Vision transformers for single image dehazing,” _arXiv preprint arXiv:2204.03883_ , 2022. * [34] Y.-F. Liu, D.-W. Jaw, S.-C. Huang, and J.-N. Hwang, “Desnownet: Context-aware deep network for snow removal,” _IEEE Transactions on Image Processing_ , vol. 27, no. 6, pp. 3064–3073, 2018. * [35] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, M.-H. Yang, and L. Shao, “Learning enriched features for real image restoration and enhancement,” in _European Conference on Computer Vision_. Springer, 2020, pp. 492–511. * [36] L. Chen, X. Lu, J. Zhang, X. Chu, and C. Chen, “Hinet: Half instance normalization network for image restoration,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 182–192. * [37] Y. Zhang, H. Ling, J. Gao, K. Yin, J.-F. Lafleche, A. Barriuso, A. Torralba, and S. Fidler, “Datasetgan: Efficient labeled data factory with minimal human effort,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 10 145–10 155. * [38] Z. Yang, Y. Chai, D. Anguelov, Y. Zhou, P. Sun, D. Erhan, S. Rafferty, and H. Kretzschmar, “Surfelgan: Synthesizing realistic sensor data for autonomous driving,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 11 118–11 127. * [39] H. Wang, Z. Yue, Q. Xie, Q. Zhao, Y. Zheng, and D. Meng, “From rain generation to rain removal,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 14 791–14 801. * [40] Z. Liu, H. Yin, X. Wu, Z. Wu, Y. Mi, and S. Wang, “From shadow generation to shadow removal,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 4927–4936. * [41] Z. Yue, Q. Zhao, L. Zhang, and D. 
Meng, “Dual adversarial network: Toward real-world noise removal and noise generation,” in _European Conference on Computer Vision_. Springer, 2020, pp. 41–58. * [42] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 2223–2232. * [43] Y. Wei, Z. Zhang, Y. Wang, M. Xu, Y. Yang, S. Yan, and M. Wang, “Deraincyclegan: Rain attentive cyclegan for single image deraining and rainmaking,” _IEEE Transactions on Image Processing_ , vol. 30, pp. 4788–4801, 2021. * [44] W. Yan, Y. Zhang, P. Abbeel, and A. Srinivas, “Videogpt: Video generation using vq-vae and transformers,” _arXiv preprint arXiv:2104.10157_ , 2021\. * [45] J. Yu, X. Li, J. Y. Koh, H. Zhang, R. Pang, J. Qin, A. Ku, Y. Xu, J. Baldridge, and Y. Wu, “Vector-quantized image modeling with improved vqgan,” _arXiv preprint arXiv:2110.04627_ , 2021. * [46] A. Razavi, A. Van den Oord, and O. Vinyals, “Generating diverse high-fidelity images with vq-vae-2,” _Advances in neural information processing systems_ , vol. 32, 2019. * [47] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, “High-resolution image synthesis with latent diffusion models,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 10 684–10 695. * [48] J. Hoffman, E. Tzeng, T. Park, J.-Y. Zhu, P. Isola, K. Saenko, A. Efros, and T. Darrell, “Cycada: Cycle-consistent adversarial domain adaptation,” in _International conference on machine learning_. Pmlr, 2018, pp. 1989–1998. * [49] G. Kang, Y. Wei, Y. Yang, Y. Zhuang, and A. Hauptmann, “Pixel-level cycle association: A new perspective for domain adaptive semantic segmentation,” _Advances in Neural Information Processing Systems_ , vol. 33, pp. 3569–3580, 2020. * [50] Z. Murez, S. Kolouri, D. Kriegman, R. Ramamoorthi, and K. Kim, “Image to image translation for domain adaptation,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 4500–4509. * [51] Y.-H. Tsai, W.-C. Hung, S. Schulter, K. Sohn, M.-H. Yang, and M. Chandraker, “Learning to adapt structured output space for semantic segmentation,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 7472–7481. * [52] Y. Luo, P. Liu, L. Zheng, T. Guan, J. Yu, and Y. Yang, “Category-level adversarial adaptation for semantic segmentation using purified features,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2021. * [53] Y. Luo, L. Zheng, T. Guan, J. Yu, and Y. Yang, “Taking a closer look at domain shift: Category-level adversaries for semantics consistent domain adaptation,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 2507–2516. * [54] Y. Zou, Z. Yu, B. Kumar, and J. Wang, “Unsupervised domain adaptation for semantic segmentation via class-balanced self-training,” in _Proceedings of the European conference on computer vision (ECCV)_ , 2018, pp. 289–305. * [55] K. Zhang, Y. Li, J. Liang, J. Cao, Y. Zhang, H. Tang, R. Timofte, and L. Van Gool, “Practical blind denoising via swin-conv-unet and data synthesis,” _arXiv preprint arXiv:2203.13278_ , 2022. * [56] A. Gulati, J. Qin, C.-C. Chiu, N. Parmar, Y. Zhang, J. Yu, W. Han, S. Wang, Z. Zhang, Y. Wu _et al._ , “Conformer: Convolution-augmented transformer for speech recognition,” _arXiv preprint arXiv:2005.08100_ , 2020. * [57] W. Shi, J. Caballero, F. Huszár, J. 
Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 1874–1883. * [58] A. Kuznetsova, H. Rom, N. Alldrin, J. Uijlings, I. Krasin, J. Pont-Tuset, S. Kamali, S. Popov, M. Malloci, A. Kolesnikov _et al._ , “The open images dataset v4,” _International Journal of Computer Vision_ , vol. 128, no. 7, pp. 1956–1981, 2020. * [59] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in _European conference on computer vision_. Springer, 2016, pp. 694–711. * [60] X. Pan, A. Tewari, L. Liu, and C. Theobalt, “Gan2x: Non-lambertian inverse rendering of image gans,” _arXiv preprint arXiv:2206.09244_ , 2022. * [61] Y. Xu, Y. Yin, L. Jiang, Q. Wu, C. Zheng, C. C. Loy, B. Dai, and W. Wu, “Transeditor: Transformer-based dual-space gan for highly controllable facial editing,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 7683–7692. * [62] G. Kwon and J. C. Ye, “Diagonal attention and style-based gan for content-style disentanglement in image generation and translation,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2021, pp. 13 980–13 989. * [63] Y. Alharbi and P. Wonka, “Disentangled image generation through structured noise injection,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 5134–5142. * [64] K. Kim, S. Park, E. Jeon, T. Kim, and D. Kim, “A style-aware discriminator for controllable image translation,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 18 239–18 248. * [65] A. Van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves _et al._ , “Conditional image generation with pixelcnn decoders,” _Advances in neural information processing systems_ , vol. 29, 2016. * [66] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila, “Analyzing and improving the image quality of stylegan,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2020, pp. 8110–8119. * [67] A. Odena, C. Olah, and J. Shlens, “Conditional image synthesis with auxiliary classifier gans,” in _International conference on machine learning_. PMLR, 2017, pp. 2642–2651. * [68] B. Li, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, and Z. Wang, “Benchmarking single-image dehazing and beyond,” _IEEE Transactions on Image Processing_ , vol. 28, no. 1, pp. 492–505, 2018. * [69] W. Wei, D. Meng, Q. Zhao, Z. Xu, and Y. Wu, “Semi-supervised transfer learning for image rain removal,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2019, pp. 3877–3886. * [70] J. Cai, S. Gu, and L. Zhang, “Learning a deep single image contrast enhancer from multi-exposure images,” _IEEE Transactions on Image Processing_ , vol. 27, no. 4, pp. 2049–2062, 2018. * [71] Y. P. Loh and C. S. Chan, “Getting to know low-light images with the exclusively dark dataset,” _Computer Vision and Image Understanding_ , vol. 178, pp. 30–42, 2019. * [72] X. Fu, J. Huang, X. Ding, Y. Liao, and J. Paisley, “Clearing the skies: A deep network architecture for single-image rain removal,” _IEEE Transactions on Image Processing_ , vol. 26, no. 6, pp. 2944–2956, 2017. * [73] C. Wei, W. Wang, W. Yang, and J. 
Liu, “Deep retinex decomposition for low-light enhancement,” _arXiv preprint arXiv:1808.04560_ , 2018. * [74] I. Loshchilov and F. Hutter, “Sgdr: Stochastic gradient descent with warm restarts,” _arXiv preprint arXiv:1608.03983_ , 2016. * [75] Q. Huynh-Thu and M. Ghanbari, “Scope of validity of psnr in image/video quality assessment,” _Electronics letters_ , vol. 44, no. 13, pp. 800–801, 2008. * [76] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” _IEEE transactions on image processing_ , vol. 13, no. 4, pp. 600–612, 2004. * [77] K. Zhang, R. Li, Y. Yu, W. Luo, and C. Li, “Deep dense multi-scale network for snow removal using semantic and depth priors,” _IEEE Transactions on Image Processing_ , vol. 30, pp. 7419–7431, 2021. * [78] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 1125–1134. * [79] X. Liu, M. Suganuma, Z. Sun, and T. Okatani, “Dual residual networks leveraging the potential of paired operations for image restoration,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 7007–7016. * [80] D. Ren, W. Zuo, Q. Hu, P. Zhu, and D. Meng, “Progressive image deraining networks: A better and simpler baseline,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 3937–3946. * [81] Y. Jiang, X. Gong, D. Liu, Y. Cheng, C. Fang, X. Shen, J. Yang, P. Zhou, and Z. Wang, “Enlightengan: Deep light enhancement without paired supervision,” _IEEE Transactions on Image Processing_ , vol. 30, pp. 2340–2349, 2021. * [82] L. Ma, T. Ma, R. Liu, X. Fan, and Z. Luo, “Toward fast, flexible, and robust low-light image enhancement,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 5637–5646. * [83] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, M.-H. Yang, and L. Shao, “Learning enriched features for fast image restoration and enhancement,” _arXiv preprint arXiv:2205.01649_ , 2022. * [84] A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a “completely blind” image quality analyzer,” _IEEE Signal processing letters_ , vol. 20, no. 3, pp. 209–212, 2012. * [85] L. Liu, B. Liu, H. Huang, and A. C. Bovik, “No-reference image quality assessment based on spatial and spectral entropies,” _Signal processing: Image communication_ , vol. 29, no. 8, pp. 856–863, 2014. * [86] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778. * [87] T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro, “High-resolution image synthesis and semantic manipulation with conditional gans,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 8798–8807. * [88] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo, “Stargan: Unified generative adversarial networks for multi-domain image-to-image translation,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 8789–8797. * [89] Y. Choi, Y. Uh, J. Yoo, and J.-W. Ha, “Stargan v2: Diverse image synthesis for multiple domains,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2020, pp. 8188–8197. * [90] C. Sakaridis, D. 
Dai, and L. Van Gool, “Acdc: The adverse conditions dataset with correspondences for semantic driving scene understanding,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2021, pp. 10 765–10 775. * [91] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in _Proceedings of the European conference on computer vision (ECCV)_ , 2018, pp. 801–818.

Yecong Wan is a B.Eng. student at the College of Computer Science and Technology, China University of Petroleum (East China), under the supervision of Prof. Shao. His current research interests include image restoration and computer vision.

Mingwen Shao received his M.S. degree in mathematics from Guangxi University, Guangxi, China, in 2002, and his Ph.D. degree in applied mathematics from Xi’an Jiaotong University, Xi’an, China, in 2005. He completed postdoctoral research in control science and engineering at Tsinghua University in February 2008. He is now a professor and doctoral supervisor at China University of Petroleum (East China). His research interests include machine learning, computer vision, and data mining.

Yuanshuo Cheng is a B.Eng. student at the College of Computer Science and Technology, China University of Petroleum (East China), under the supervision of Prof. Shao. His current research interests include image restoration, computer vision, and deep learning.

Yuexian Liu is a B.Eng. student at the College of Computer Science and Technology, China University of Petroleum (East China), under the supervision of Prof. Shao. His current research interests include image restoration and computer vision.

Zhiyuan Bao is a master’s student at the College of Computer Science and Technology, China University of Petroleum (East China), under the supervision of Prof. Shao. He received his bachelor’s degree in software engineering from China University of Petroleum (East China). His research interests include image restoration and computer vision.

Deyu Meng received the B.Sc., M.Sc., and Ph.D. degrees from Xi’an Jiaotong University, Xi’an, China, in 2001, 2004, and 2008, respectively. He was a Visiting Scholar with Carnegie Mellon University, Pittsburgh, PA, USA, from 2012 to 2014. He is currently a Professor with the Institute for Information and System Sciences, Xi’an Jiaotong University. His current research interests include self-paced learning, noise modeling, and tensor sparsity.
# QCD axion bubbles from the hidden SU(N) gauge symmetry breaking Hai-Jun Li<EMAIL_ADDRESS>Center for Advanced Quantum Studies, Department of Physics, Beijing Normal University, Beijing 100875, China ###### Abstract QCD axion bubbles can be formed due to an extra Peccei-Quinn (PQ) symmetry breaking in the early Universe. In this paper, we investigate QCD axion bubble formation from the PQ symmetry broken by hidden $SU(N)_{H}$ gauge interactions after inflation, which leads to multiple vacua. The axion acquires a light mass and then settles down into different vacua. The QCD axion bubbles are formed when the conventional QCD axion potential arises during the QCD phase transition. In our scenario, the QCD axions that start to oscillate at the large values $\sim 2\pi/3$ can lead to high density axion bubbles with $N=2$. The cosmological implications of the QCD axion bubbles are also discussed, such as primordial black holes (PBHs) and axion miniclusters. We find that the PBH mass is larger than $\sim\mathcal{O}(5\times 10^{5})M_{\odot}$ for the axion scale $f_{a}\sim\mathcal{O}(10^{16})\,\rm GeV$. ††preprint: BNU-23-028 ## I Introduction The strong CP violation in quantum chromodynamics (QCD) is a long-standing problem that can be solved by the Peccei-Quinn (PQ) mechanism with a spontaneously broken $U(1)$ PQ symmetry Peccei and Quinn (1977a, b). The mechanism predicts a light pseudo Nambu-Goldstone (NG) boson Weinberg (1978); Wilczek (1978), also called the QCD axion, which acquires a tiny mass from QCD non-perturbative effects ’t Hooft (1976a, b). When the potential of the QCD axion is generated by the QCD instanton, the axion is stabilized at the CP-conserving minimum, which solves the strong CP problem. The QCD axion is a potential cold dark matter (DM) candidate if non-thermally produced in the early Universe through the misalignment mechanism Preskill _et al._ (1983); Abbott and Sikivie (1983); Dine and Fischler (1983). The axion is massless at high temperatures; as the cosmic temperature decreases, it acquires a non-zero mass at the QCD phase transition and starts to oscillate when its mass becomes comparable to the Hubble parameter, which can explain the observed DM abundance. See $\rm e.g.$ Refs. Di Luzio _et al._ (2020); Chadha-Day _et al._ (2022); Adams _et al._ (2022) for recent reviews. The PQ symmetry is supposed to be broken before or during inflation. In this case, the QCD axion is massless during inflation and acquires quantum fluctuations. To suppress the isocurvature perturbation, a feasible method is to consider an extra PQ symmetry breaking Dine and Anisimov (2005); Jeong and Takahashi (2013); Takahashi and Yamada (2015); Harigaya _et al._ (2015). The PQ symmetry is an approximate global symmetry and can be strongly broken in the early Universe, yielding multiple vacua Kallosh _et al._ (1995); Banks and Seiberg (2011); Witten (2018); Harlow and Ooguri (2019); Jeong _et al._ (2022). QCD axion bubbles can be formed due to this extra PQ symmetry breaking after inflation Kitajima and Takahashi (2020). In this case, the axion acquires a light mass and oscillates when this mass exceeds the Hubble parameter. Note that the extra PQ symmetry is only temporarily broken; the axion potential will disappear before the QCD phase transition. Therefore, the final QCD axion abundance can also be calculated through the misalignment mechanism.
However, the initial misalignment angle is determined by the extra PQ symmetry breaking, which can split it into different values. At the QCD scale, the QCD axions start to oscillate with these initial angles, and regions with large initial angles develop a high axion density. Such a high axion density region is called an “axion bubble”. The concept of QCD axion bubbles was first proposed in Ref. Kitajima and Takahashi (2020), and is similar to the baryon bubbles in inhomogeneous Affleck-Dine baryogenesis Dolgov _et al._ (2009); Hasegawa and Kawasaki (2019). They also studied the formation of primordial black holes (PBHs) and axion miniclusters from the QCD axion bubbles. In this paper, we focus our attention on the QCD axion bubbles and investigate their formation due to the extra PQ symmetry dynamically broken by the hidden $SU(N)_{H}$ gauge interactions in the early Universe. After inflation, the axion acquires a light mass due to this PQ symmetry breaking with a large axion scale, and then settles down into the nearest potential minimum. The multiple vacua split the initial misalignment angle into different values. In this case, if the axion initial value is smaller than a critical value, the axion is stabilized near the origin. On the contrary, the axion will be stabilized at the other minimum if the axion initial value is larger than the critical value. During the QCD phase transition, the QCD axions start to oscillate either near the origin or at the other, large value. The former case can account for the cold DM abundance, and the latter can form QCD axion bubbles with large energy density. In our scenario, the QCD axions that start to oscillate at the large values $\sim 2\pi/3$ can lead to the high density axion bubbles with $N=2$. We also discuss the cosmological implications of the QCD axion bubbles, focusing on PBH formation. The PBHs are formed when the axions dominate the radiation in the bubbles, which predicts a minimum PBH mass $\sim\mathcal{O}(5\times 10^{5})M_{\odot}$ for the axion scale $f_{a}\sim\mathcal{O}(10^{16})\,\rm GeV$. On the other hand, if the bubbles enter the horizon before the axions dominate the radiation, they will eventually form axion miniclusters. This paper is organized as follows. In Sec. II, we briefly review the QCD axion and the misalignment mechanism. In Sec. III, we discuss the QCD axion bubble formation due to the extra PQ symmetry dynamically broken by the hidden $SU(N)_{H}$ gauge interactions in the early Universe. The cosmological implications are discussed in Sec. IV, including the axion DM abundance, PBHs, and axion miniclusters. The discussion and conclusion are given in Sec. V. ## II QCD axion and misalignment mechanism Here we briefly review the QCD axion and the misalignment mechanism. The QCD axion is a pseudo NG boson of a spontaneously broken $U(1)$ PQ symmetry. It couples to gluons through the effective Lagrangian $\displaystyle\mathcal{L}_{agg}=-\frac{\alpha_{s}}{8\pi}\frac{a}{f_{a}}G^{a\,\mu\nu}\tilde{G}_{\mu\nu}^{a}\,,$ (1) where $\alpha_{s}$ is the strong fine structure constant, $a$ is the axion field, $f_{a}$ is the axion decay constant, $\theta=a/f_{a}$ is the axion angle, and $G^{a\,\mu\nu}$ and $\tilde{G}_{\mu\nu}^{a}$ are the gluon field strength tensor and its dual tensor, respectively.
The resulting effective potential of the QCD axion is given by $\displaystyle V_{\rm QCD}(a)=m_{a}^{2}(T)f_{a}^{2}\left[1-\cos\left(\frac{a}{f_{a}}\right)\right]\,,$ (2) where $m_{a}(T)$ is the temperature-dependent axion mass for $T\gtrsim T_{\rm QCD}$ ($\sim 150\,\rm MeV$) Borsanyi _et al._ (2016) $\displaystyle m_{a}(T)\simeq m_{a,0}\left(\frac{T}{T_{\rm QCD}}\right)^{-4.08}\,,$ (3) with the zero-temperature axion mass Grilli di Cortona _et al._ (2016) $\displaystyle m_{a,0}\simeq 5.70(7)\,{\mu\rm eV}\left(\frac{f_{a}}{10^{12}\,{\rm GeV}}\right)^{-1}\,.$ (4) As the cosmic temperature decreases, the QCD axion starts to oscillate when its mass $m_{a}(T)$ becomes comparable to the Hubble parameter $H(T)$, $3H(T_{a})=m_{a}(T_{a})$, with the oscillation temperature $\displaystyle\begin{aligned} T_{a}&\simeq 9.59\times 10^{-1}\,{\rm GeV}\left(\frac{g_{*}(T_{a})}{61.75}\right)^{-0.082}\\\ &\times\left(\frac{f_{a}}{10^{12}\,\rm GeV}\right)^{-0.16}\,,\end{aligned}$ (5) where $g_{*}(T)$ is the number of effective degrees of freedom. The axion number density at $T_{a}$ is given by $\displaystyle n_{a}(T_{a})=\frac{1}{2}m_{a}(T_{a})f_{a}^{2}\left\langle\theta_{i}^{2}f(\theta_{i})\right\rangle\chi\,,$ (6) where $\theta_{i}$ is the initial misalignment angle, $\chi\simeq 1.44$ is a numerical factor Turner (1986), and $f(\theta_{i})$ is the anharmonic factor $\displaystyle f(\theta_{i})\simeq\left[\ln\left(\frac{e}{1-\theta_{i}^{2}/\pi^{2}}\right)\right]^{1.16}\,,$ (7) which is taken as $f(\theta_{i})\simeq 1$ for $|\theta_{i}|\ll\pi$ Lyth (1992); Visinelli and Gondolo (2009). The present axion energy density $\rho_{a}(T_{0})=m_{a,0}n_{a}(T_{0})$ is $\displaystyle\rho_{a}(T_{0})=\frac{m_{a,0}m_{a}(T_{a})s(T_{0})}{2s(T_{a})}f_{a}^{2}\left\langle\theta_{i}^{2}f(\theta_{i})\right\rangle\chi\,,$ (8) where $s(T)=2\pi^{2}g_{*}(T)T^{3}/45$ is the entropy density, and $T_{0}$ is the present CMB temperature. Then we have the current QCD axion abundance $\Omega_{a}h^{2}=\rho_{a}(T_{0})/\rho_{c}h^{2}$ as $\displaystyle\begin{aligned} \Omega_{a}h^{2}&\simeq 1.43\times 10^{-1}\left(\frac{g_{*}(T_{0})}{3.94}\right)\left(\frac{g_{*}(T_{a})}{61.75}\right)^{-0.42}\\\ &\times\left(\frac{f_{a}}{10^{12}\,\rm GeV}\right)^{1.16}\left\langle\theta_{i}^{2}f(\theta_{i})\right\rangle\,,\end{aligned}$ (9) where $\rho_{c}=3H_{0}^{2}M_{\rm Pl}^{2}$ is the critical energy density, $M_{\rm Pl}$ is the reduced Planck mass, and $h\simeq 0.68$ is the reduced Hubble constant. In order to explain the observed cold DM abundance, $\Omega_{\rm DM}h^{2}\simeq 0.12$ Aghanim _et al._ (2020), we derive the initial misalignment angle $\displaystyle\begin{aligned} \theta_{i}&\simeq 0.87\left(\frac{g_{*}(T_{0})}{3.94}\right)^{1/2}\left(\frac{g_{*}(T_{a})}{61.75}\right)^{0.21}\\\ &\times\left(\frac{f_{a}}{10^{12}\,\rm GeV}\right)^{-0.58}\,.\end{aligned}$ (10) Note that the above discussion applies for $f_{a}\lesssim 10^{17}\,\rm GeV$. We can see that the QCD axion abundance would exceed the observed DM abundance by many orders of magnitude at high scales $f_{a}\sim\mathcal{O}(10^{16}-10^{17})\,\rm GeV$, unless the initial misalignment angle is smaller than $\sim\mathcal{O}(10^{-2})$. ## III QCD axion bubbles In this section, we discuss the QCD axion bubble formation. The QCD axion bubbles can be formed due to an extra PQ symmetry breaking after inflation, and are formed at the QCD phase transition. The evolution can be summarized in the following three stages. * • During/After inflation.
During inflation, the QCD axion is massless and acquires quantum fluctuations. An extra PQ symmetry breaking after inflation is required. The axion then acquires a light mass due to this extra PQ symmetry breaking, which can lead to multiple vacua. The axion starts to oscillate when its mass is comparable to the Hubble parameter, and then settles down into different minima depending on its initial position. Note that this extra symmetry is only temporarily broken, and the resulting axion potential will disappear before the QCD phase transition.

* • During/After the QCD phase transition. The QCD axion bubbles are formed when the QCD axion starts to oscillate at the QCD scale. During the QCD phase transition, the conventional axion potential $V_{\rm QCD}(\phi)$ arises. Therefore, one of the multiple vacua should be located near the minimum of $V_{\rm QCD}(\phi)$ to ensure the correct cold DM abundance. In this case, if the axion initial value is smaller than a critical value, the axion is stabilized near the origin. Otherwise, the axion will be stabilized at the other minimum with a large initial value, leading to the high density QCD axion bubbles.

* • The late time. The QCD axion bubbles can lead to further interesting phenomena, such as the formation of PBHs and axion miniclusters, which will be briefly discussed in Sec. IV.

### III.1 PQ symmetry broken by hidden $SU(N)_{H}$

There are many scenarios for the extra PQ symmetry breaking in the early Universe, such as the Witten effect of monopoles in hidden sectors Nomura _et al._ (2016); Kawasaki _et al._ (2016, 2018), a larger scale of spontaneous PQ symmetry breaking with a higher dimensional term Chiba _et al._ (2004); Takahashi and Yamaguchi (2004); Higaki _et al._ (2014); Co _et al._ (2020), hidden non-Abelian gauge interactions Takahashi and Yamada (2015), and a stronger QCD with a large Higgs field expectation value Choi _et al._ (1997); Banks and Dine (1997); Jeong and Takahashi (2013). Ref. Kitajima and Takahashi (2020) considered the Witten effect as an example of the extra PQ symmetry breaking to form the QCD axion bubbles. For comparison, we consider the case where the extra PQ symmetry is broken by hidden non-Abelian gauge interactions whose scales are temporarily enhanced by the dynamics of flat directions, a mechanism used in Ref. Takahashi and Yamada (2015) to suppress the isocurvature perturbations of the QCD axion. In that setting, the PQ symmetry is dynamically broken by the hidden $SU(N)_{H}$ gauge interactions, the axion becomes heavy during inflation, and its isocurvature perturbations are significantly suppressed. In our scenario, however, the PQ symmetry is assumed to be broken after inflation with a large axion scale, resulting in a light axion mass and multiple vacua. Ref. Takahashi and Yamada (2015) considered a supersymmetric (SUSY) axion model, but this can also be applied to the non-SUSY case.
We begin with the superpotential $\displaystyle W=y_{ij}\psi Q_{H}^{i}\bar{Q}_{H}^{j}+{y}_{kl}^{\prime}\phi{Q}_{H}^{\prime k}{\bar{Q}}_{H}^{\prime l}+W_{\cancel{\rm PQ}}+W_{\phi}(\phi)\,,~{}$ (11) which contains $N_{F}$ ($N^{\prime}_{F}$) flavors of the hidden quarks $Q_{H}$ ($Q^{\prime}_{H}$) and anti-quarks $\bar{Q}_{H}$ ($\bar{Q}^{\prime}_{H}$) in the representation of $SU(N)_{H}$ with (without) the PQ charges, respectively; here $\psi$ is the PQ breaking scalar field, $\phi$ is the singlet scalar field, $W_{\cancel{\rm PQ}}$ is the PQ breaking term, and $y$, $y^{\prime}$ are the Yukawa couplings with the flavor indices $i$, $j$, $k$, and $l$. Assuming that $\phi$ is a flat direction, the superpotential of $\phi$ is given by $\displaystyle W_{\phi}(\phi)=\lambda\frac{\phi^{4}}{4M_{\rm Pl}}\,,$ (12) where $\lambda\sim\mathcal{O}(1)$. The potential of $\phi$ during inflation is $\displaystyle\begin{aligned} V(\phi)&=\left(m_{\phi}^{2}-c_{H}H^{2}\right)|\phi|^{2}\\\ &-\left(a_{H}H\lambda\frac{\phi^{4}}{4M_{\rm Pl}}+{\rm c.c.}\right)+\lambda^{2}\frac{|\phi|^{6}}{M_{\rm Pl}^{2}}\,,\end{aligned}$ (13) where $c_{H}=a_{H}=1$, $m_{\phi}$ is the soft SUSY breaking mass, and the three terms on the right-hand side represent the sum of the low-energy and Hubble-induced soft mass terms, the Hubble-induced $A$-terms, and the contribution of the non-renormalizable term, respectively.

The flat direction $\phi$ acquires a mass of order the Hubble parameter during inflation, and starts to oscillate when its soft mass becomes comparable to the Hubble parameter. Then it settles down into the potential minimum $\displaystyle\langle|\phi|\rangle(t)\simeq\left(\frac{H(t)M_{\rm Pl}}{\sqrt{3}\lambda}\right)^{1/2}\,.$ (14) Note that the phase direction of $\phi$ also has a mass, which comes from the Hubble-induced $A$-terms. The $F$-component of $\phi$ is given by $\displaystyle\langle F_{\phi}\rangle\simeq\left(\frac{H^{3}(t)M_{\rm Pl}}{3^{3/2}\lambda}\right)^{1/2}\,,$ (15) which implies that the $SU(N)_{H}$ gaugino acquires a soft mass due to gauge mediated SUSY breaking $\displaystyle m_{\lambda}\simeq\frac{N^{\prime}_{F}g_{H}^{2}}{16\pi^{2}}\frac{F_{\phi}}{\langle\phi\rangle}\simeq\frac{N^{\prime}_{F}g_{H}^{2}}{16\sqrt{3}\pi^{2}}H(t)\,,$ (16) where $g_{H}$ is the gauge coupling.

Figure 1: Illustration of the QCD axion bubbles formation with the axion potentials. (a) $N=2$; (b) $N=3$. The left and right panels correspond to the cases $N=2$ and $3$, respectively. The red and blue lines represent the potentials $V_{\rm QCD}(\phi)$ and $V_{\cancel{\rm PQ}}(\phi)$ with a small $\theta_{i}$, respectively. The amplitude of $V_{\cancel{\rm PQ}}(\phi)$ and the value of $\theta_{i}$ are exaggerated for illustrative purposes.

The Lagrangian of the $SU(N)_{H}$ gauge field is Takahashi and Yamada (2015) $\displaystyle\mathcal{L}=\frac{1}{2}\int{\rm d}^{2}\theta\left(\frac{1-2m_{\lambda}\theta^{2}}{2g_{H}^{2}}-\frac{i\theta_{H}}{16\pi^{2}}\right)W^{\alpha}W_{\alpha}+{\rm h.c.}\,,~{}~{}~{}$ (17) where $\theta_{H}$ is the vacuum angle. The axion enters through the gauge kinetic function via the shift $\displaystyle\theta_{H}\to\theta_{H}-3N_{F}\frac{\phi}{v_{a}}\,.$ (18) Using $f_{a}=v_{a}/N_{\rm DW}=v_{a}/(NN_{F})$, we have $\displaystyle\theta_{H}\to\theta_{H}-\frac{3}{N}\frac{\phi}{f_{a}}\,,$ (19) where $N_{\rm DW}$ is the domain wall number. The gaugino condensation induces the effective superpotential $\displaystyle W_{\rm eff}(\phi)=N\tilde{\Lambda}_{H}^{3}(\phi)\,,$ (20) where $\tilde{\Lambda}_{H}(\phi)$ is the condensation scale.
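As a quick consistency check on Eq. (16), the ratio appearing there follows directly from Eqs. (14) and (15):

$\displaystyle\frac{\langle F_{\phi}\rangle}{\langle|\phi|\rangle}\simeq\left(\frac{H^{3}(t)M_{\rm Pl}}{3^{3/2}\lambda}\cdot\frac{\sqrt{3}\lambda}{H(t)M_{\rm Pl}}\right)^{1/2}=\frac{H(t)}{\sqrt{3}}\,,$

which, inserted into the first expression of Eq. (16), reproduces $m_{\lambda}\simeq N^{\prime}_{F}g_{H}^{2}H(t)/(16\sqrt{3}\pi^{2})$.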
Then the axion field acquires the effective potential $\displaystyle\begin{aligned} V_{\cancel{\rm PQ}}(\phi)&=-\int{\rm d}^{2}\theta W_{\rm eff}(\phi)+{\rm h.c.}\\\ &=\frac{32\pi^{2}}{g_{H}^{2}}m_{\lambda}\Lambda_{H}^{3}(\phi)\cos\left(\frac{\theta_{H}}{N}-\frac{3}{N^{2}}\frac{\phi}{f_{a}}\right)+\cdots\,,~{}~{}\end{aligned}$ (21) and the axion effective mass is given by Takahashi and Yamada (2015) $\displaystyle m_{a}^{2}(\phi)=\frac{\hat{c}H(t)\Lambda_{H}^{3}(\phi)}{f_{a}^{2}}\,,$ (22) where $\hat{c}=6\sqrt{3}N^{\prime}_{F}/N^{4}$. It should be emphasized that in our scenario the PQ symmetry is assumed to be broken after inflation with the large axion scale $f_{a}$. The axion acquires a light effective mass and starts to oscillate when its mass becomes comparable to the Hubble parameter. Then the axion will be stabilized at the potential minimum $\displaystyle\phi_{\rm min}^{n}=\left(\theta_{H}+2\pi nN\right)\frac{N}{3}f_{a}\,,$ (23) where $n$ is an integer labeling the multiple vacua.

### III.2 Axion bubbles formation

Here we show the QCD axion bubbles formation due to the above PQ symmetry breaking in the early Universe. During inflation, the QCD axion is massless and acquires quantum fluctuations with a Gaussian distribution. After inflation, the axion acquires a light mass due to the PQ symmetry broken by the hidden $SU(N)_{H}$ gauge interactions and settles down into the nearest potential minimum $\phi_{\rm min}^{n}$. During the QCD phase transition, the conventional axion potential $V_{\rm QCD}(\phi)$ arises. Therefore, one of the minima $\phi_{\rm min}^{n}$ should be located near the minimum of $V_{\rm QCD}(\phi)$ to ensure the correct DM abundance. In this paper, we consider the large axion scale $f_{a}\sim\mathcal{O}(10^{16})\,\rm GeV$ and assume that it remains constant. In order to explain the DM abundance, we derive the small initial misalignment angle $\displaystyle\begin{aligned} \theta_{i}&\simeq 4.29\times 10^{-3}\left(\frac{g_{*}(T_{0})}{3.94}\right)^{1/2}\left(\frac{g_{*}(T_{a})}{61.75}\right)^{0.21}\\\ &\times\left(\frac{f_{a}}{10^{16}\,\rm GeV}\right)^{-0.58}\,.\end{aligned}$ (24) Note that this initial angle corresponds to the QCD axion.

We first consider the case with $N=2$, see Fig. 1 (a). The red and blue lines represent the potentials $V_{\rm QCD}(\phi)$ and $V_{\cancel{\rm PQ}}(\phi)$, respectively. After inflation, the axion is stabilized at a potential minimum of $V_{\cancel{\rm PQ}}(\phi)$ with $\displaystyle\phi_{\rm min}^{0}\simeq 0\,,\quad\phi_{\rm min}^{1}\simeq\frac{8\pi}{3}f_{a}\,,\quad\cdots\,,$ (25) which correspond to the effective initial misalignment angles $\theta_{i,n}$ with $\displaystyle\theta_{i,0}=0-\theta_{i}\,,\quad\theta_{i,1}=\frac{2\pi}{3}-\theta_{i}\,,\quad\cdots\,.$ (26) When the QCD axion potential $V_{\rm QCD}(\phi)$ arises during the QCD phase transition, the state $\phi_{\rm min}^{0}$ with the effective initial angle $\theta_{i,0}$ explains the DM. On the other hand, if the initial value is larger than $\phi_{\rm crit}$, the axion will settle down into the minimum $\phi_{\rm min}^{1}$ with the initial angle $\theta_{i,1}$. In this case, owing to the large initial misalignment angle $\sim 2\pi/3$, the local axion density at the minimum $\phi_{\rm min}^{1}$ becomes much higher than that at $\phi_{\rm min}^{0}$. This high axion density region is called the “axion bubble”.
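To see how the effective angles in Eq. (26) arise, note that $V_{\rm QCD}$ is periodic in $\theta=\phi/f_{a}$ with period $2\pi$, so the minimum $\phi_{\rm min}^{1}\simeq(8\pi/3)f_{a}$ from Eq. (23) (with $N=2$, $n=1$, and $\theta_{H}\simeq 0$ as implied by Eq. (25)) corresponds to the effective angle

$\displaystyle\theta_{\rm eff}=\frac{8\pi}{3}-2\pi=\frac{2\pi}{3}\,,$

shifted by the small $\theta_{i}$ as in Eq. (26).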
We define the cosmic background temperature $T_{B}$ as the temperature at which the axion dominates the radiation in the bubbles, i.e., at which the axion energy density equals the radiation energy density. The local axion energy density in the bubbles at $T_{B}$ is given by $\displaystyle\rho_{a}(T_{B})=\frac{m_{a,0}m_{a}(T_{a})s(T_{B})}{2s(T_{a})}f_{a}^{2}\left\langle\theta_{i,1}^{2}f(\theta_{i,1})\right\rangle\chi\,.$ (27) The radiation energy density is defined by $\displaystyle\rho_{R}(T_{B})=\frac{\pi^{2}}{30}g_{*}(T_{B})T_{B}^{4}\,.$ (28) Setting $\rho_{a}(T_{B})=\rho_{R}(T_{B})$, with the effective initial angle $\theta_{i,1}=2\pi/3-\theta_{i}$, we derive the temperature $\displaystyle T_{B}\simeq 0.23\,{\rm MeV}\left(\frac{g_{*}(T_{a})}{61.75}\right)^{-0.42}\left(\frac{f_{a}}{10^{16}\,\rm GeV}\right)^{1.16}\,.~{}~{}$ (29) Note that here the initial angle is taken as $\theta_{i}\simeq 4.29\times 10^{-3}$ for $f_{a}=10^{16}\,\rm GeV$ and we make no other simplifications.

Additionally, we also consider the case with $N=3$, which is shown in Fig. 1 (b). In this case, the axion will be stabilized at the potential minimum with $\displaystyle\phi_{\rm min}^{0}\simeq 0\,,\quad\phi_{\rm min}^{1}\simeq 6\pi f_{a}\,,\quad\cdots\,,$ (30) corresponding to the effective initial angles $\theta_{i,n}$ with $\displaystyle\theta_{i,0}=0-\theta_{i}\,,\quad\theta_{i,1}=0-\theta_{i}\,,\quad\cdots\,.$ (31) Since the effective initial angles satisfy $\theta_{i,0}=\theta_{i,1}\simeq 0$, i.e., the local axion density at $\phi_{\rm min}^{0}$ is the same as that at $\phi_{\rm min}^{1}$, no QCD axion bubbles are generated in this case. Therefore, in the following we only discuss the axion bubbles in the case with $N=2$.

We now discuss the abundance of the QCD axion bubbles, which is related to the inflationary fluctuations. The probability density function obeys the Fokker-Planck equation Starobinsky (1982); Linde (1982); Starobinsky and Yokoyama (1994). Under the initial condition $P(N,\phi)=\delta(\phi-\phi_{i})$, and assuming the Hubble parameter during inflation is constant, its solution is the Gaussian distribution Kitajima and Takahashi (2020) $\displaystyle P(N,\phi)=\frac{1}{\sqrt{2\pi}\sigma(N)}\exp{\left(-\frac{\left(\phi-\phi_{i}\right)^{2}}{2\sigma^{2}(N)}\right)}\,,$ (32) with $\displaystyle\sigma(N)=\frac{H_{\rm inf}}{2\pi}\sqrt{N}\,.$ (33) Note that here $N$ is the e-folding number, and $H_{\rm inf}\sim\mathcal{O}(10^{13})\,\rm GeV$ is the Hubble parameter during inflation. Then the volume fraction of the QCD axion bubbles is given by Kitajima and Takahashi (2020) $\displaystyle\begin{aligned} \frac{{\rm d}\beta}{{\rm d}N}&\simeq\frac{\partial}{\partial N}\int_{\phi_{\rm crit}}^{\infty}{\rm d}\phi P(N,\phi)\\\ &=\frac{1}{2}\frac{\partial}{\partial N}{\rm erfc}\left(\frac{\phi_{\rm crit}-\phi_{i}}{\sqrt{2}\sigma(N)}\right)\\\ &=\frac{\phi_{\rm crit}-\phi_{i}}{2}P(N,\phi_{\rm crit})\,,\end{aligned}$ (34) where ${\rm erfc}(x)$ is the complementary error function. With ${\rm d}N={\rm d}\ln k$, where $k$ is the wave number, we obtain the volume fraction $\displaystyle\frac{{\rm d}\beta}{{\rm d}\ln k}\simeq\frac{\phi_{\rm crit}-\phi_{i}}{2}P(\ln(k/k_{*})+N_{*},\phi_{\rm crit})\,,$ (35) where $k_{*}=0.002\,\rm Mpc^{-1}$ and $N_{*}=0$.

## IV Cosmological implications

In this section, we discuss the cosmological implications of the QCD axion bubbles, including the axion DM abundance, PBHs, and axion miniclusters.
### IV.1 Axion dark matter abundance

The QCD axion abundance depends on the initial misalignment angle $\theta_{i}$ and the decay constant $f_{a}$. In our scenario, the current axion abundance can also be described by the misalignment mechanism with the effective initial angle $\theta_{i,0}$. For the large scale $f_{a}\sim\mathcal{O}(10^{16})\,\rm GeV$, we have the current QCD axion abundance $\displaystyle\begin{aligned} \Omega_{a}h^{2}&\simeq 6.52\times 10^{3}\left(\frac{g_{*}(T_{0})}{3.94}\right)\left(\frac{g_{*}(T_{a})}{61.75}\right)^{-0.42}\\\ &\times\left(\frac{f_{a}}{10^{16}\,\rm GeV}\right)^{1.16}\left\langle\theta_{i,0}^{2}f(\theta_{i,0})\right\rangle\,.\end{aligned}$ (36) In order to explain the observed cold DM abundance, the effective initial angle is given by Eq. 24 with $|\theta_{i,0}|\sim\mathcal{O}(10^{-3})$.

### IV.2 Primordial black holes & axion miniclusters

The PBHs can be generated by large density perturbations in the early Universe, and they are also DM candidates Bird _et al._ (2016); Carr and Kuhnel (2020); Green and Kavanagh (2021); Carr and Kuhnel (2022); Li (2022). The initial PBH mass at the formation time $t_{f}$ is given by $M_{\rm PBH}=4\pi\gamma\rho_{R}/(3H_{f}^{3})$ Carr _et al._ (2010), where $\gamma\simeq 0.2$ is the gravitational collapse factor Carr (1975), $\rho_{R}$ is the radiation energy density, and $H_{f}$ is the Hubble parameter at $t_{f}$. Then we derive the PBH mass at the formation time $\displaystyle\begin{aligned} \frac{M_{\rm PBH}}{M_{\odot}}&\simeq 0.03\times\left(\frac{\gamma}{0.2}\right)\left(\frac{g_{*}(T_{f})}{10.75}\right)^{-1/2}\\\ &\times\left(\frac{T_{f}}{1\,\rm GeV}\right)^{-2}\,,\end{aligned}$ (37) where $T_{f}$ is the temperature at $t_{f}$ and $M_{\odot}$ is the solar mass. We also have the relation Di and Gong (2018) $\displaystyle\begin{aligned} \frac{M_{\rm PBH}}{M_{\odot}}&\simeq 3.68\times\left(\frac{\gamma}{0.2}\right)\left(\frac{g_{*}(T_{f})}{10.75}\right)^{-1/6}\\\ &\times\left(\frac{k}{10^{6}\rm\,Mpc^{-1}}\right)^{-2}\,,\end{aligned}$ (38) where $k$ is the wave number.

Figure 2: The PBH fractional abundance $f_{\rm PBH}$ as a function of the PBH mass (in the solar mass $M_{\odot}$). The red, orange, and blue lines correspond to the results with $f_{a}=5\times 10^{16}\,\rm GeV$, $1\times 10^{16}\,\rm GeV$, and $5\times 10^{15}\,\rm GeV$, respectively. The cut-offs represent the corresponding minimum PBH mass $M_{\rm PBH}^{\rm min}$.

In our scenario, the PBHs will be formed when the axions dominate the radiation in the bubbles, i.e., $\displaystyle T_{f}\lesssim T_{B}\,.$ (39) The condition for PBH formation is that when the bubbles enter the horizon, the local energy density inside the bubbles is significantly greater than the background radiation density. Substituting Eq. 29 into Eq. 37, we have the minimum PBH mass $\displaystyle\begin{aligned} \frac{M_{\rm PBH}^{\rm min}}{M_{\odot}}&\simeq 5.65\times 10^{5}\left(\frac{\gamma}{0.2}\right)\left(\frac{g_{*}(T_{f})}{10.75}\right)^{-1/2}\\\ &\times\left(\frac{g_{*}(T_{a})}{61.75}\right)^{0.84}\left(\frac{f_{a}}{10^{16}\,\rm GeV}\right)^{-2.33}\,.\end{aligned}$ (40) PBHs in this mass region may account for the LIGO-Virgo gravitational waves (GWs) events Abbott _et al._ (2016a, b, 2017a, 2017b) and the seeds of supermassive black holes (SMBHs) Richstone _et al._ (1998); Ferrarese and Merritt (2000); Kormendy and Ho (2013).
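As a rough numerical check of Eq. (40), substituting $T_{f}=T_{B}\simeq 0.23\,{\rm MeV}=2.3\times 10^{-4}\,{\rm GeV}$ from Eq. (29) into Eq. (37) gives

$\displaystyle\frac{M_{\rm PBH}^{\rm min}}{M_{\odot}}\sim 0.03\times\left(2.3\times 10^{-4}\right)^{-2}\simeq 5.7\times 10^{5}\,,$

consistent with the quoted coefficient for $f_{a}=10^{16}\,\rm GeV$ (up to the $g_{*}$ factors).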
The PBH energy density at $T_{B}$ is given by $\displaystyle\rho_{\rm PBH}(T_{B})=\frac{3}{4}T_{B}s(T_{B})\frac{{\rm d}\beta}{{\rm d}\ln k}\,,$ (41) where $s(T_{B})$ is the entropy density. Finally, we derive the fractional abundance of PBHs at present $\displaystyle f_{\rm PBH}=\frac{\Omega_{\rm PBH}}{\Omega_{\rm DM}}=\frac{3}{4}T_{B}s(T_{0})\frac{{\rm d}\beta}{{\rm d}\ln k}\frac{1}{\rho_{c}}\frac{1}{\Omega_{\rm DM}}\,,$ (42) with the total cold DM abundance $\Omega_{\rm DM}\simeq 0.268$ Aghanim _et al._ (2020). With Eq. 35, we show the PBH fractional abundance as a function of the PBH mass $M_{\rm PBH}$ in Fig. 2. The three typical values $f_{a}=5\times 10^{16}\,\rm GeV$ (red), $1\times 10^{16}\,\rm GeV$ (orange), and $5\times 10^{15}\,\rm GeV$ (blue) are selected for comparison. The cut-offs represent the minimum PBH mass $M_{\rm PBH}^{\rm min}$ (Eq. 40). Here we take $\phi_{\rm crit}-\phi_{i}=4.5H_{\rm inf}$ as a typical value. The fractional abundance would be higher for the large scale $f_{a}\sim\mathcal{O}(10^{17})\,\rm GeV$.

Another interesting phenomenon is the formation of axion miniclusters from the QCD axion bubbles. The axion miniclusters Hogan and Rees (1988); Kolb and Tkachev (1993); Fairbairn _et al._ (2018); Ellis _et al._ (2022); Dandoy _et al._ (2022) are gravitationally bound clumps of the axion DM. Their mass and size depend on the Hubble volume when the QCD axion starts to oscillate. In our scenario, the axion miniclusters can be formed when the bubbles enter the horizon before the axions dominate the radiation in the bubbles, i.e., $\displaystyle T_{f}>T_{B}\,,$ (43) where $T_{f}$ is the temperature at the minicluster formation time. The minicluster mass is related to the QCD axion oscillation temperature $T_{a}$ (Eq. 5). Since we are mainly concerned with PBH formation, the miniclusters will not be discussed further here; more details about the axion miniclusters from the QCD axion bubbles can be found in Ref. Kitajima and Takahashi (2020).

## V Discussion and conclusion

In this paper, we have presented the formation of QCD axion bubbles from a dynamically broken PQ symmetry in the early Universe. We first introduced the QCD axion and the misalignment mechanism. We then considered the PQ symmetry broken by the hidden $SU(N)_{H}$ gauge interactions; this breaking is assumed to occur after inflation and gives rise to multiple vacua. The axion acquires a light effective mass and then settles down into the different vacua. Note that this symmetry is only temporarily broken, and the resulting axion potential disappears before the QCD axion potential arises. During the QCD phase transition, the QCD axion starts to oscillate either near the origin or near another, larger value. In our scenario, we consider the two cases $N=2$ and $3$. We find that for $N=2$ the QCD axions that start to oscillate at the large value $\sim 2\pi/3$ can lead to high density QCD axion bubbles. However, since the effective initial angles satisfy $\theta_{i,0}=\theta_{i,1}\simeq 0$ for $N=3$, no axion bubbles are generated in that case. We also discuss the cosmological implications of the QCD axion bubbles. Because of the large axion scale $f_{a}\sim\mathcal{O}(10^{16})\,\rm GeV$, the initial misalignment angle should be small, $\sim\mathcal{O}(10^{-3})$, to explain the cold DM abundance. A particularly interesting phenomenon in our scenario is PBH formation. The PBHs will be formed when the axions dominate the radiation in the bubbles.
We find that the PBH mass is larger than $\sim\mathcal{O}(5\times 10^{5})M_{\odot}$ for the scale $f_{a}\sim\mathcal{O}(10^{16})\,\rm GeV$. PBHs in this mass region may explain the LIGO-Virgo GWs events and the seeds of SMBHs. In addition, if the bubbles enter the horizon before the axions dominate the radiation, the bubbles will eventually form axion miniclusters.

## Acknowledgments

The author would like to thank Wei Chao, Naoya Kitajima, Shota Nakagawa, and Fuminobu Takahashi for helpful discussions and valuable comments. This work was supported by the National Natural Science Foundation (NNSF) of China (Grants No. 11775025 and No. 12175027).

## References

* Peccei and Quinn (1977a) R.D. Peccei and Helen R. Quinn, “Constraints Imposed by CP Conservation in the Presence of Instantons,” Phys. Rev. D 16, 1791–1797 (1977a).
* Peccei and Quinn (1977b) R.D. Peccei and Helen R. Quinn, “CP Conservation in the Presence of Instantons,” Phys. Rev. Lett. 38, 1440–1443 (1977b).
* Weinberg (1978) Steven Weinberg, “A New Light Boson?” Phys. Rev. Lett. 40, 223–226 (1978).
* Wilczek (1978) Frank Wilczek, “Problem of Strong $P$ and $T$ Invariance in the Presence of Instantons,” Phys. Rev. Lett. 40, 279–282 (1978).
* ’t Hooft (1976a) Gerard ’t Hooft, “Symmetry Breaking Through Bell-Jackiw Anomalies,” Phys. Rev. Lett. 37, 8–11 (1976a).
* ’t Hooft (1976b) Gerard ’t Hooft, “Computation of the Quantum Effects Due to a Four-Dimensional Pseudoparticle,” Phys. Rev. D 14, 3432–3450 (1976b), [Erratum: Phys.Rev.D 18, 2199 (1978)].
* Preskill _et al._ (1983) John Preskill, Mark B. Wise, and Frank Wilczek, “Cosmology of the Invisible Axion,” Phys. Lett. B 120, 127–132 (1983).
* Abbott and Sikivie (1983) L.F. Abbott and P. Sikivie, “A Cosmological Bound on the Invisible Axion,” Phys. Lett. B 120, 133–136 (1983).
* Dine and Fischler (1983) Michael Dine and Willy Fischler, “The Not So Harmless Axion,” Phys. Lett. B 120, 137–141 (1983).
* Di Luzio _et al._ (2020) Luca Di Luzio, Maurizio Giannotti, Enrico Nardi, and Luca Visinelli, “The landscape of QCD axion models,” Phys. Rept. 870, 1–117 (2020), arXiv:2003.01100 [hep-ph].
* Chadha-Day _et al._ (2022) Francesca Chadha-Day, John Ellis, and David J. E. Marsh, “Axion dark matter: What is it and why now?” Sci. Adv. 8, abj3618 (2022), arXiv:2105.01406 [hep-ph].
* Adams _et al._ (2022) C. B. Adams _et al._, “Axion Dark Matter,” in _2022 Snowmass Summer Study_ (2022) arXiv:2203.14923 [hep-ex].
* Dine and Anisimov (2005) Michael Dine and Alexey Anisimov, “Is there a Peccei-Quinn phase transition?” JCAP 07, 009 (2005), arXiv:hep-ph/0405256.
* Jeong and Takahashi (2013) Kwang Sik Jeong and Fuminobu Takahashi, “Suppressing Isocurvature Perturbations of QCD Axion Dark Matter,” Phys. Lett. B 727, 448–451 (2013), arXiv:1304.8131 [hep-ph].
* Takahashi and Yamada (2015) Fuminobu Takahashi and Masaki Yamada, “Strongly broken Peccei-Quinn symmetry in the early Universe,” JCAP 10, 010 (2015), arXiv:1507.06387 [hep-ph].
* Harigaya _et al._ (2015) Keisuke Harigaya, Masahiro Ibe, Kai Schmitz, and Tsutomu T. Yanagida, “Peccei-Quinn Symmetry from Dynamical Supersymmetry Breaking,” Phys. Rev. D 92, 075003 (2015), arXiv:1505.07388 [hep-ph].
* Kallosh _et al._ (1995) Renata Kallosh, Andrei D. Linde, Dmitri A. Linde, and Leonard Susskind, “Gravity and global symmetries,” Phys. Rev. D 52, 912–935 (1995), arXiv:hep-th/9502069.
* Banks and Seiberg (2011) Tom Banks and Nathan Seiberg, “Symmetries and Strings in Field Theory and Gravity,” Phys. Rev. D 83, 084019 (2011), arXiv:1011.5120 [hep-th].
* Witten (2018) Edward Witten, “Symmetry and Emergence,” Nature Phys. 14, 116–119 (2018), arXiv:1710.01791 [hep-th].
* Harlow and Ooguri (2019) Daniel Harlow and Hirosi Ooguri, “Constraints on Symmetries from Holography,” Phys. Rev. Lett. 122, 191601 (2019), arXiv:1810.05337 [hep-th].
* Jeong _et al._ (2022) Kwang Sik Jeong, Kohei Matsukawa, Shota Nakagawa, and Fuminobu Takahashi, “Cosmological effects of Peccei-Quinn symmetry breaking on QCD axion dark matter,” JCAP 03, 026 (2022), arXiv:2201.00681 [hep-ph].
* Kitajima and Takahashi (2020) Naoya Kitajima and Fuminobu Takahashi, “Primordial Black Holes from QCD Axion Bubbles,” JCAP 11, 060 (2020), arXiv:2006.13137 [hep-ph].
* Dolgov _et al._ (2009) A. D. Dolgov, M. Kawasaki, and N. Kevlishvili, “Inhomogeneous baryogenesis, cosmic antimatter, and dark matter,” Nucl. Phys. B 807, 229–250 (2009), arXiv:0806.2986 [hep-ph].
* Hasegawa and Kawasaki (2019) Fuminori Hasegawa and Masahiro Kawasaki, “Primordial Black Holes from Affleck-Dine Mechanism,” JCAP 01, 027 (2019), arXiv:1807.00463 [astro-ph.CO].
* Borsanyi _et al._ (2016) Sz. Borsanyi _et al._, “Calculation of the axion mass based on high-temperature lattice quantum chromodynamics,” Nature 539, 69–71 (2016), arXiv:1606.07494 [hep-lat].
* Grilli di Cortona _et al._ (2016) Giovanni Grilli di Cortona, Edward Hardy, Javier Pardo Vega, and Giovanni Villadoro, “The QCD axion, precisely,” JHEP 01, 034 (2016), arXiv:1511.02867 [hep-ph].
* Turner (1986) Michael S. Turner, “Cosmic and Local Mass Density of Invisible Axions,” Phys. Rev. D 33, 889–896 (1986).
* Lyth (1992) D. H. Lyth, “Axions and inflation: Sitting in the vacuum,” Phys. Rev. D 45, 3394–3404 (1992).
* Visinelli and Gondolo (2009) Luca Visinelli and Paolo Gondolo, “Dark Matter Axions Revisited,” Phys. Rev. D 80, 035024 (2009), arXiv:0903.4377 [astro-ph.CO].
* Aghanim _et al._ (2020) N. Aghanim _et al._ (Planck), “Planck 2018 results. VI. Cosmological parameters,” Astron. Astrophys. 641, A6 (2020), [Erratum: Astron.Astrophys. 652, C4 (2021)], arXiv:1807.06209 [astro-ph.CO].
* Nomura _et al._ (2016) Yasunori Nomura, Surjeet Rajendran, and Fabio Sanches, “Axion Isocurvature and Magnetic Monopoles,” Phys. Rev. Lett. 116, 141803 (2016), arXiv:1511.06347 [hep-ph].
* Kawasaki _et al._ (2016) Masahiro Kawasaki, Fuminobu Takahashi, and Masaki Yamada, “Suppressing the QCD Axion Abundance by Hidden Monopoles,” Phys. Lett. B 753, 677–681 (2016), arXiv:1511.05030 [hep-ph].
* Kawasaki _et al._ (2018) Masahiro Kawasaki, Fuminobu Takahashi, and Masaki Yamada, “Adiabatic suppression of the axion abundance and isocurvature due to coupling to hidden monopoles,” JHEP 01, 053 (2018), arXiv:1708.06047 [hep-ph].
* Chiba _et al._ (2004) Takeshi Chiba, Fuminobu Takahashi, and Masahide Yamaguchi, “Baryogenesis in a flat direction with neither baryon nor lepton charge,” Phys. Rev. Lett. 92, 011301 (2004), [Erratum: Phys.Rev.Lett. 114, 209901 (2015)], arXiv:hep-ph/0304102.
* Takahashi and Yamaguchi (2004) Fuminobu Takahashi and Masahide Yamaguchi, “Spontaneous baryogenesis in flat directions,” Phys. Rev. D 69, 083506 (2004), arXiv:hep-ph/0308173.
* Higaki _et al._ (2014) Tetsutaro Higaki, Kwang Sik Jeong, and Fuminobu Takahashi, “Solving the Tension between High-Scale Inflation and Axion Isocurvature Perturbations,” Phys. Lett. B 734, 21–26 (2014), arXiv:1403.4186 [hep-ph].
* Co _et al._ (2020) Raymond T. Co, Lawrence J. Hall, and Keisuke Harigaya, “Axion Kinetic Misalignment Mechanism,” Phys. Rev. Lett. 124, 251802 (2020), arXiv:1910.14152 [hep-ph].
* Choi _et al._ (1997) Kiwoon Choi, Hang Bae Kim, and Jihn E. Kim, “Axion cosmology with a stronger QCD in the early universe,” Nucl. Phys. B 490, 349–364 (1997), arXiv:hep-ph/9606372.
* Banks and Dine (1997) Tom Banks and Michael Dine, “The Cosmology of string theoretic axions,” Nucl. Phys. B 505, 445–460 (1997), arXiv:hep-th/9608197.
* Starobinsky (1982) Alexei A. Starobinsky, “Dynamics of Phase Transition in the New Inflationary Universe Scenario and Generation of Perturbations,” Phys. Lett. B 117, 175–178 (1982).
* Linde (1982) Andrei D. Linde, “Scalar Field Fluctuations in Expanding Universe and the New Inflationary Universe Scenario,” Phys. Lett. B 116, 335–339 (1982).
* Starobinsky and Yokoyama (1994) Alexei A. Starobinsky and Junichi Yokoyama, “Equilibrium state of a selfinteracting scalar field in the De Sitter background,” Phys. Rev. D 50, 6357–6368 (1994), arXiv:astro-ph/9407016.
* Bird _et al._ (2016) Simeon Bird, Ilias Cholis, Julian B. Muñoz, Yacine Ali-Haïmoud, Marc Kamionkowski, Ely D. Kovetz, Alvise Raccanelli, and Adam G. Riess, “Did LIGO detect dark matter?” Phys. Rev. Lett. 116, 201301 (2016), arXiv:1603.00464 [astro-ph.CO].
* Carr and Kuhnel (2020) Bernard Carr and Florian Kuhnel, “Primordial Black Holes as Dark Matter: Recent Developments,” Ann. Rev. Nucl. Part. Sci. 70, 355–394 (2020), arXiv:2006.02838 [astro-ph.CO].
* Green and Kavanagh (2021) Anne M. Green and Bradley J. Kavanagh, “Primordial Black Holes as a dark matter candidate,” J. Phys. G 48, 043001 (2021), arXiv:2007.10722 [astro-ph.CO].
* Carr and Kuhnel (2022) Bernard Carr and Florian Kuhnel, “Primordial black holes as dark matter candidates,” SciPost Phys. Lect. Notes 48, 1 (2022), arXiv:2110.02821 [astro-ph.CO].
* Li (2022) Hai-Jun Li, “Primordial black holes induced stochastic axion-photon oscillations in primordial magnetic field,” JCAP 11, 045 (2022), arXiv:2208.04605 [astro-ph.CO].
* Carr _et al._ (2010) B. J. Carr, Kazunori Kohri, Yuuiti Sendouda, and Jun’ichi Yokoyama, “New cosmological constraints on primordial black holes,” Phys. Rev. D 81, 104019 (2010), arXiv:0912.5297 [astro-ph.CO].
* Carr (1975) Bernard J. Carr, “The Primordial black hole mass spectrum,” Astrophys. J. 201, 1–19 (1975).
* Di and Gong (2018) Haoran Di and Yungui Gong, “Primordial black holes and second order gravitational waves from ultra-slow-roll inflation,” JCAP 07, 007 (2018), arXiv:1707.09578 [astro-ph.CO].
* Abbott _et al._ (2016a) B. P. Abbott _et al._ (LIGO Scientific, Virgo), “Observation of Gravitational Waves from a Binary Black Hole Merger,” Phys. Rev. Lett. 116, 061102 (2016a), arXiv:1602.03837 [gr-qc].
* Abbott _et al._ (2016b) B. P. Abbott _et al._ (LIGO Scientific, Virgo), “GW151226: Observation of Gravitational Waves from a 22-Solar-Mass Binary Black Hole Coalescence,” Phys. Rev. Lett. 116, 241103 (2016b), arXiv:1606.04855 [gr-qc].
* Abbott _et al._ (2017a) Benjamin P. Abbott _et al._ (LIGO Scientific, VIRGO), “GW170104: Observation of a 50-Solar-Mass Binary Black Hole Coalescence at Redshift 0.2,” Phys. Rev. Lett. 118, 221101 (2017a), [Erratum: Phys.Rev.Lett. 121, 129901 (2018)], arXiv:1706.01812 [gr-qc].
* Abbott _et al._ (2017b) B. P. Abbott _et al._ (LIGO Scientific, Virgo), “GW170814: A Three-Detector Observation of Gravitational Waves from a Binary Black Hole Coalescence,” Phys. Rev. Lett. 119, 141101 (2017b), arXiv:1709.09660 [gr-qc].
* Richstone _et al._ (1998) D. Richstone _et al._, “Supermassive black holes and the evolution of galaxies,” Nature 395, A14–A19 (1998), arXiv:astro-ph/9810378.
* Ferrarese and Merritt (2000) Laura Ferrarese and David Merritt, “A Fundamental relation between supermassive black holes and their host galaxies,” Astrophys. J. Lett. 539, L9 (2000), arXiv:astro-ph/0006053.
* Kormendy and Ho (2013) John Kormendy and Luis C. Ho, “Coevolution (Or Not) of Supermassive Black Holes and Host Galaxies,” Ann. Rev. Astron. Astrophys. 51, 511–653 (2013), arXiv:1304.7762 [astro-ph.CO].
* Hogan and Rees (1988) C. J. Hogan and M. J. Rees, “AXION MINICLUSTERS,” Phys. Lett. B 205, 228–230 (1988).
* Kolb and Tkachev (1993) Edward W. Kolb and Igor I. Tkachev, “Axion miniclusters and Bose stars,” Phys. Rev. Lett. 71, 3051–3054 (1993), arXiv:hep-ph/9303313.
* Fairbairn _et al._ (2018) Malcolm Fairbairn, David J. E. Marsh, Jérémie Quevillon, and Simon Rozier, “Structure formation and microlensing with axion miniclusters,” Phys. Rev. D 97, 083502 (2018), arXiv:1707.03310 [astro-ph.CO].
* Ellis _et al._ (2022) David Ellis, David J. E. Marsh, Benedikt Eggemeier, Jens Niemeyer, Javier Redondo, and Klaus Dolag, “Structure of axion miniclusters,” Phys. Rev. D 106, 103514 (2022), arXiv:2204.13187 [hep-ph].
* Dandoy _et al._ (2022) Virgile Dandoy, Thomas Schwetz, and Elisa Todarello, “A self-consistent wave description of axion miniclusters and their survival in the galaxy,” JCAP 09, 081 (2022), arXiv:2206.04619 [astro-ph.CO].
# Accessible Computation of Tight Symbolic Bounds on Causal Effects using an Intuitive Graphical Interface

by Gustav Jonzon, Michael C Sachs, and Erin E Gabriel

###### Abstract

Strong untestable assumptions are almost universal in causal point estimation. In particular settings, bounds can be derived to narrow the possible range of a causal effect. Symbolic bounds apply to all settings that can be depicted using the same directed acyclic graph (DAG) and for the same effect of interest. Although the core of the methodology for deriving symbolic bounds has been previously developed, the means of implementation and computation have been lacking. Our R-package causaloptim (Sachs et al., 2022b) aims to solve this usability problem by implementing the method of Sachs et al. (2022a) and providing the user with a graphical interface through shiny that allows for input in a way with which most researchers with an interest in causal inference will be familiar: a DAG (via a point-and-click experience) and a causal effect of interest specified using familiar counterfactual notation.

## Introduction and Background

A common goal in many different areas of scientific research is to determine causal relationships between one or more exposure variables and an outcome. Prior to any computation or inference, we must clearly state all assumptions made, i.e., all subject matter knowledge available, regarding the causal relationships between the involved variables as well as any additional variables, called confounders, that may not be measured but influence at least two other variables of interest. These assumptions are usually encoded in a causal directed acyclic graph (DAG), with directed edges encoding direct causal influences, which conveniently depicts all relevant information and has become a familiar tool in applied research (Greenland et al., 1999). Such a DAG not only clearly states the assumptions made by the researcher, but also comes with a sound methodology for causal inference, in the form of identification results as well as derivation of causal estimators (Pearl, 2009). Unfortunately, point identification of a desired causal effect typically requires an assumption of no unmeasured confounders, in some form. When there are unmeasured confounders, it is sometimes still possible to derive bounds on the effect, i.e., a range of possible values for the causal effect in terms of the observed data distribution.

Symbolic bounds are algebraic expressions for the bounds on the causal effect written in terms of probabilities that can be estimated using observed data. Alexander Balke and Judea Pearl first used linear programming to derive tight symbolic bounds in a simple binary instrumental variable setting (Balke and Pearl, 1997). Balke wrote a program in C++ that takes a linear programming problem as text file input, performs variable reduction, converts equality constraints into inequality constraints, and runs the vertex enumeration algorithm of Mattheiss (1973). This program has been used by researchers in the field of causal inference (Balke and Pearl, 1997; Cai et al., 2008; Sjölander, 2009; Sjölander et al., 2014), but it is not particularly accessible because of the technical challenge of translating the DAG plus causal query into the constrained optimization problem and of determining whether that problem is linear. Moreover, the program is not optimized and hence does not scale well to more complex problems.
Since these techniques only cover a simple instrumental variable setting, it has also not been clear to what extent they extend to more general settings, nor how to apply them to more complex queries. Thus, applications of this approach have been limited to a small number of settings, and few attempts have been made to generalize the method to more widely applicable settings. Recent developments have expanded the applicability by generalizing the techniques and the causal DAGs and effects to which they apply (Sachs et al., 2022a). These new methods have been applied in novel observational and experimental settings (Gabriel et al., 2020, 2021, 2022). Moreover, through the R package causaloptim (Sachs et al., 2022b), these computations are now accessible. With causaloptim, the user need only give input in the way they would usually express their causal assumptions and state their target causal estimand: through a DAG and a counterfactual expression. Providing DAGs through textual input is an awkward experience for most users, as DAGs are generally communicated pictorially. Our package causaloptim provides a user-friendly graphical interface through a web browser, where the user can draw their DAG in a way that is familiar to them. The methodology that underpins causaloptim is not universal, however; some restrictions on the DAG and query are imposed. These are validated and communicated to the user through the graphical interface, which guides the user through providing the DAG and query, adding any extra conditions beyond those encoded in the DAG, and computing, interpreting and exporting the bounds for various further analyses.

There exist few other R-packages related to causal bounds and none, to our knowledge, for computation of symbolic bounds. bpbounds (Palmer et al., 2018) provides a text-based interface to compute numeric bounds for the original single instrumental variable example of Balke and Pearl and extends this by being able to compute bounds given different types of data input, including a ternary rather than binary instrument. There is also a standalone program written in Java by the TETRAD Project (https://github.com/cmu-phil/tetrad) that includes a graphical user interface and has a wrapper for R. Its focus, however, is on causal discovery in a given sample data set, and although it can also compute bounds, it can do so only numerically for the given data set.

In this paper we describe our R package causaloptim, first focusing on the graphical and programmatic user interfaces in the next two sections. Then we highlight some of our interesting functions and data structures that may be useful in other contexts. We provide a summary of the theoretical background and methods, while referring to the companion paper (Sachs et al., 2022a) for the details. We illustrate the use of the package with some numeric examples and close with a discussion and summary.

## Graphical User Interface

In the following, we will work through the binary instrumental variable example, where we have 3 observed binary variables $X$, $Y$, $Z$, and we want to determine the average causal effect of $X$ on $Y$, given by the total causal risk difference, in the presence of unmeasured confounding by $U_{R}$ and an instrumental variable $Z$. Our causal DAG is given by $Z\to X\to Y$ and $X\leftarrow U_{R}\to Y$ and our causal query is $P(Y(X=1)=1)-P(Y(X=0)=1)$, where we use $Y(X=x)$ to denote the potential outcome for $Y$ if $X$ were intervened upon to have value $x$.
causaloptim includes a graphical user interface using shiny (Chang et al., 2021). The interface is launched in the user’s default web browser by calling specify_graph(). Once the shiny app is launched, the user is presented with an interactive display as shown in Figure 1, in which they can draw their causal DAG. This display is divided into a left side $\mathcal{L}$ and right side $\mathcal{R}$ to classify the vertices according to the class of DAGs that the method covers. In particular, the existence of unmeasured confounders is assumed within each of these sides, but not between them, and any causal influence between the two sides must originate in $\mathcal{L}$. Thus, for the example, we would want to put the instrumental variable on the left side, but the exposure and outcome on the right side. In the web version of this article an interactive version of this interface is shown at the end of this section.

Figure 1: The Shiny web interface at launch

### Specifying the setting by drawing a causal diagram and adding attributes

Figure 2: Constructing the DAG. (a) Adding and naming variables; (b) adding directed edges.

The DAG is drawn using a point-and-click device (e.g., a mouse) to add vertices representing variables (by Shift-click) and name them (using any valid variable name in R), and to draw edges representing direct causal influences (Shift+drag) between them. The vertices may also be moved around, renamed and deleted (as can the edges), as described in an instruction text preceding the DAG interface. As shown in Figure 2, for the example we add a vertex $Z$ on the left side, and vertices $X$ and $Y$ on the right side. Then the $Z\to X$ and $X\to Y$ edges are added by Shift+clicking on a parent vertex and dragging to the child vertex. There is no need to add the unmeasured confounder variable $U_{R}$ as it is assumed and added automatically. Importantly, the nodes may be selected and assigned additional information. In $\mathcal{R}$ a variable may be assigned as unobserved (click+‘u’). All observed variables are assumed categorical, and their cardinality (i.e., number of levels) may be set (click+‘c’ brings up a prompt for this number; alternatively a short-cut click+‘any digit’ is provided), with the default being binary. Although the causal query (i.e., the causal effect of interest) is entered subsequently, the DAG interface provides a convenient short-cut; a node $X$ may be assigned as an exposure (click+‘e’) and another $Y$ as outcome (click+‘y’), whereupon the default query is the total causal risk difference $P(Y(X=1)=1)-P(Y(X=0)=1)$. Finally, an edge may be assigned as representing an assumed monotonic influence (click+‘m’). The nodes and edges change appearance according to their assigned characteristics (Figure 3), and violations of the restrictions characterizing the class of DAGs are detected and communicated to the user.

Figure 3: Setting attributes. (a) Setting the number of categories; (b) confirmation message; (c) setting exposure and outcome.

Once the DAG has been drawn, the user may click the button “Analyze the graph”, upon which the DAG is interpreted and converted into an annotated igraph-object (Csardi and Nepusz, 2006) as described in the implementation details below, and the results are displayed in graphical form to the user (Figure 4). $U_{R}$, the common unmeasured cause of $X$ and $Y$, is added and displayed in this static plot.
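The entire GUI workflow can thus be run from the console with a single call (a minimal sketch):

```r
library(causaloptim)
# Opens the shiny app in the default web browser; when the user clicks
# "Exit and return objects to R", the DAG, query and computed bounds
# are returned and saved in the R session
results <- specify_graph()
```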
Figure 4: The causal DAG and bounds. (a) Graphical summary of the DAG with added confounding; (b) computing the bounds.

### Specifying the causal query

Next, the user is asked to specify the causal query, i.e., the causal effect of interest. If no outcome variable has been assigned in the DAG, then the input field for the causal query is left blank and a query needs to be specified. In our example, since we have assigned an exposure and outcome using the DAG interface, the total causal risk difference $P(Y(X=1)=1)-P(Y(X=0)=1)$ is suggested.

### Specifying optional additional constraints

Finally, the user is given the option to provide any additional constraints besides those imposed by the DAG. This may be considered an optional advanced feature where, e.g., monotonicity of a certain direct influence of $Z$ on $X$ may be assumed by entering $X(Z=1)\geq X(Z=0)$, with any such extra constraints separated by line breaks. If this feature is used, the input is followed by clicking the button “Parse”, which parses and validates the constraints.

### Computing the symbolic tight bounds on the query under the given constraints

As the final step, the button “Compute the bounds” is clicked, whereupon the constraints and objective are compiled into an optimization problem, which is then solved for tight causal bounds on the query symbolically in terms of observational quantities (conditional probabilities of the observed variables in the DAG), and the expressions are displayed alongside information on how the parameters are to be interpreted in terms of the given variable names (Figure 4). During computation, a progress indicator is shown, and the user should be aware that complex and/or high-dimensional problems may take significant time. The interface also provides a feature to subsequently convert the bounds to LaTeX-code using standard probabilistic notation for publication purposes. Once done, clicking “Exit and return objects to R” stops the shiny app and returns all information about the DAG, query and computed bounds to the R-session. This information is bundled in a list containing the graph, query, parameters and their interpretation, and the symbolic tight bounds both as expressions and as implementations as R-functions, together with further log information about the formulation and optimization procedures.

## Programmatic user interface

Interaction may also be done entirely programmatically, as we illustrate with the same binary instrumental variable example. First we create the igraph object using the graph_from_literal function. Once the basic graph is created, the necessary vertex and edge attributes are added. The risk difference is defined as a character object. The analyze_graph function is the workhorse of causaloptim; it translates the causal graph, constraints, and causal effect of interest into a linear programming problem. This linear programming object, stored in obj in the code below, gets passed to optimize_effect_2, which performs vertex enumeration to obtain the bounds as symbolic expressions in terms of observable probabilities.
```r
graph <- igraph::graph_from_literal(Z -+ X, X -+ Y, Ul -+ Z, Ur -+ X, Ur -+ Y)
V(graph)$leftside <- c(1, 0, 0, 1, 0)
V(graph)$latent <- c(0, 0, 0, 1, 1)
V(graph)$nvals <- c(2, 2, 2, 2, 2)
E(graph)$rlconnect <- c(0, 0, 0, 0, 0)
E(graph)$edge.monotone <- c(0, 0, 0, 0, 0)
riskdiff <- "p{Y(X = 1) = 1} - p{Y(X = 0) = 1}"
obj <- analyze_graph(graph, constraints = NULL, effectt = riskdiff)
bounds <- optimize_effect_2(obj)
bounds
#> lower bound =
#> MAX {
#>   p00_0 - p00_1 - p10_1 - p01_1,
#>   p00_0 - p00_1 - p10_0 - p10_1 - p01_0,
#>   p00_0 - p00_1 + p10_0 - 2p10_1 - 2p01_1,
#>   -p10_1 - p01_1,
#>   -p10_0 - p01_0,
#>   -p00_0 + p00_1 - 2p10_0 + p10_1 - 2p01_0,
#>   -p00_0 + p00_1 - p10_0 - p10_1 - p01_1,
#>   -p00_0 + p00_1 - p10_0 - p01_0
#> }
#> ----------------------------------------
#> upper bound =
#> MIN {
#>   1 - p10_1 - p01_0,
#>   1 + p00_0 + p10_0 - 2p10_1 - p01_1,
#>   2 - p00_1 - p10_0 - p10_1 - 2p01_0,
#>   1 - p10_1 - p01_1,
#>   1 - p10_0 - p01_0,
#>   1 + p00_1 - 2p10_0 + p10_1 - p01_0,
#>   2 - p00_0 - p10_0 - p10_1 - 2p01_1,
#>   1 - p10_0 - p01_1
#> }
```

The resulting bounds object contains character strings representing the bounds and logs containing detailed information from the vertex enumeration algorithm. The bounds are printed to the console, but more features are available to facilitate their use. The interpret_bounds function takes the bounds and parameter names as input and returns an R function implementing vectorized forms of the symbolic expressions for the bounds.

```r
bounds_function <- interpret_bounds(bounds$bounds, obj$parameters)
str(bounds_function)
#> function (p00_0 = NULL, p00_1 = NULL, p10_0 = NULL, p10_1 = NULL, p01_0 = NULL,
#>     p01_1 = NULL, p11_0 = NULL, p11_1 = NULL)
```

The results can also be used for numerical simulation using simulate_bounds. This function randomly generates counterfactuals and probability distributions that satisfy the constraints implied by the DAG and optional constraints. It then computes and returns the bounds as well as the true causal effect. If one wants to bound a different effect using the same causal graph, the update_effect function can be used to save some computation time. It takes the object returned by analyze_graph and the new effect string, and then returns an object of class linearcausalproblem that can be optimized: obj2 <- update_effect(obj, "p{Y(X = 1) = 1}").
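The returned function can then be evaluated at any set of conditional probabilities; the numbers below are made up purely for illustration (within each level of $Z$, the four probabilities must sum to one):

```r
# Hypothetical, illustrative probabilities; in practice these would be
# estimated from observed data
bounds_function(p00_0 = 0.3,  p00_1 = 0.2,
                p10_0 = 0.2,  p10_1 = 0.25,
                p01_0 = 0.25, p01_1 = 0.25,
                p11_0 = 0.25, p11_1 = 0.3)
# evaluates the symbolic lower and upper bounds at these values
```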
Finally, LaTeX-code may also be generated using the function latex_bounds, as in latex_bounds(bounds$bounds, obj$parameters), yielding Lower bound $\displaystyle=\mbox{max}\left.\begin{cases}P(X=0,Y=0|Z=0)-P(X=0,Y=0|Z=1)-P(X=1,Y=0|Z=1)-P(X=0,Y=1|Z=1),\\\ P(X=0,Y=0|Z=0)-P(X=0,Y=0|Z=1)-P(X=1,Y=0|Z=0)-P(X=1,Y=0|Z=1)-P(X=0,Y=1|Z=0),\\\ P(X=0,Y=0|Z=0)-P(X=0,Y=0|Z=1)+P(X=1,Y=0|Z=0)-2P(X=1,Y=0|Z=1)-2P(X=0,Y=1|Z=1),\\\ -P(X=1,Y=0|Z=1)-P(X=0,Y=1|Z=1),\\\ -P(X=1,Y=0|Z=0)-P(X=0,Y=1|Z=0),\\\ -P(X=0,Y=0|Z=0)+P(X=0,Y=0|Z=1)-2P(X=1,Y=0|Z=0)+P(X=1,Y=0|Z=1)-2P(X=0,Y=1|Z=0),\\\ -P(X=0,Y=0|Z=0)+P(X=0,Y=0|Z=1)-P(X=1,Y=0|Z=0)-P(X=1,Y=0|Z=1)-P(X=0,Y=1|Z=1),\\\ -P(X=0,Y=0|Z=0)+P(X=0,Y=0|Z=1)-P(X=1,Y=0|Z=0)-P(X=0,Y=1|Z=0)\end{cases}\right\\}$ Upper bound $\displaystyle=\mbox{min}\left.\begin{cases}1-P(X=1,Y=0|Z=1)-P(X=0,Y=1|Z=0),\\\ 1+P(X=0,Y=0|Z=0)+P(X=1,Y=0|Z=0)-2P(X=1,Y=0|Z=1)-P(X=0,Y=1|Z=1),\\\ 2-P(X=0,Y=0|Z=1)-P(X=1,Y=0|Z=0)-P(X=1,Y=0|Z=1)-2P(X=0,Y=1|Z=0),\\\ 1-P(X=1,Y=0|Z=1)-P(X=0,Y=1|Z=1),\\\ 1-P(X=1,Y=0|Z=0)-P(X=0,Y=1|Z=0),\\\ 1+P(X=0,Y=0|Z=1)-2P(X=1,Y=0|Z=0)+P(X=1,Y=0|Z=1)-P(X=0,Y=1|Z=0),\\\ 2-P(X=0,Y=0|Z=0)-P(X=1,Y=0|Z=0)-P(X=1,Y=0|Z=1)-2P(X=0,Y=1|Z=1),\\\ 1-P(X=1,Y=0|Z=0)-P(X=0,Y=1|Z=1)\end{cases}\right\\}.$

## Implementation and Program Overview

An overview of the main functions and their relations is depicted as a flow chart in Figure 5. All functions may be called individually by the user at the R-console, and all input, output and interaction available through the shiny app has corresponding availability at the R-console as well. Sachs et al. (2022a) define the following class of problems for which the query in general is not identifiable, but for which a methodology to derive symbolic tight bounds on the query is provided. The causal DAG consists of a finite set $\mathcal{W}=\\{W_{1},\dots,W_{n}\\}=\mathcal{W}_{\mathcal{L}}\cup\mathcal{W}_{\mathcal{R}}$ of categorical variables with $\mathcal{W}_{\mathcal{L}}\cap\mathcal{W}_{\mathcal{R}}=\varnothing$, no edges going from $\mathcal{W}_{\mathcal{R}}$ to $\mathcal{W}_{\mathcal{L}}$ and no external common parent between $\mathcal{W}_{\mathcal{L}}$ and $\mathcal{W}_{\mathcal{R}}$, but _importantly_ external common parents $\mathcal{U}_{\mathcal{L}}$ and $\mathcal{U}_{\mathcal{R}}$ of variables within $\mathcal{W}_{\mathcal{L}}$ and $\mathcal{W}_{\mathcal{R}}$, respectively, may not be ruled out. Nothing is assumed about any characteristics of these confounding variables $\mathcal{U}_{\mathcal{L}}$ and $\mathcal{U}_{\mathcal{R}}$. The causal query may be any linear combination of joint probabilities of factual and counterfactual outcomes expressed in terms of the variables in $\mathcal{W}$ and may always be expressed as a sum of probabilities of response function variables of the DAG. It is subject to the restriction that each outcome variable is in $\mathcal{W}_{\mathcal{R}}$, and if $\mathcal{W}_{\mathcal{L}}\neq\varnothing$ it is also subject to a few regularity conditions as detailed in (Sachs et al., 2022a). Tight bounds on the query may then be derived symbolically in terms of conditional probabilities of the observable variables $\mathcal{W}$. Algorithms 1 and 2 in (Sachs et al., 2022a) construct the constraint space and causal query in terms of the joint probabilities of the response function variables and in causaloptim are implemented in the functions create_R_matrix and create_effect_vector, respectively.
Both are called as sub-procedures of the function analyze_graph to translate the causal problem to that of optimizing an objective function over a constraint space. The implementation of Algorithm 1 involves constructing the response functions themselves as actual R-functions. Evaluating these corresponds to evaluating the structural equations of the causal DAG. The conditions on the DAG suffice to ensure that the causal query will depend only on the response functions corresponding to the variables in $\mathcal{W}_{\mathcal{R}}$ and that the exhaustive set of constraints on their probabilities are linear in a subset of conditional probabilities of observable variables (Proposition 2 in (Sachs et al., 2022a)), and the conditions on the query in turn ensure that it may be expressed as a linear combination of joint probabilities of the response functions of the variables in $\mathcal{W}_{\mathcal{R}}$ (Proposition 3 in (Sachs et al., 2022a)). Once this formulation of the causal problem as a linear program has been set up, a vertex enumeration method is employed to compute the extrema symbolically in terms of conditional probabilities of the observable variables. The main and interesting functions will be described in some detail below. We begin, however, with an overview of how they are tied together by the shiny app.

Figure 5: Function Overview Flow Chart

#### specify_graph

The graphical interface is launched by specify_graph(), or preferably results <- specify_graph(). Once the shiny app is stopped, the input, output and other useful information is returned by the function, so we recommend assigning it to a variable so that it is saved in the R-session and may easily be further analyzed and processed. All further function calls will take place automatically as the user interacts with the web interface. Thus, from a basic user perspective, specify_graph is the main function. The core functionality, however, is implemented in the functions analyze_graph, optimize_effect_2 and their subroutines. The JavaScript that handles the communication between the shiny server and the input as the user draws a DAG through the web interface builds on the project directed-graph-creator, an interactive tool for creating directed graphs, created using d3.js and hosted at https://github.com/cjrd/directed-graph-creator, which has been modified for the purpose of causal diagrams. The modification binds the user inputs, as they interact with the graph, to shiny so that the directed graph and its attributes set by the user are reactively converted into an igraph-object for further processing. Since directed graphs are common in many computational and statistical problems, this shiny interface may also be valuable to many other R-package authors and maintainers who may wish to provide their users with an accessible and intuitive way to interact with their software. The server listens to a reactive function that, as the user draws the DAG, collects information about the current edges, collects and annotates vertices, adds left- and right-side confounding, and returns an annotated igraph-object, comprising information about the connectivity along with some additional attributes: for each variable, its name, cardinality, latency-indicator and side-indicator, and for each edge, a monotonicity-indicator and (to detect and communicate violations on direction) a right-to-left-indicator.
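These are all ordinary igraph attributes, so the annotated graph can be inspected directly; for the instrumental variable graph constructed programmatically above, a sketch:

```r
# All vertex and edge attributes attached to the annotated igraph object
igraph::vertex_attr(graph)  # name, leftside, latent, nvals
igraph::edge_attr(graph)    # rlconnect, edge.monotone
```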
The server meanwhile also monitors the DAG for any violation of the restriction that each edge between $\mathcal{L}$ and $\mathcal{R}$ must go _from_ $\mathcal{L}$ _to_ $\mathcal{R}$, and, if a violation is detected, communicates this directly to the user through a text message in the shiny app.

#### analyze_graph

The function analyze_graph takes a DAG (in the form of an igraph object), optional constraints, and a string representing the causal effect of interest, and proceeds to construct and return a linear optimization problem (a linearcausalproblem-object) from these inputs. First, some basic data structures are created to keep track of the observed variables, their possible values, the latent variables, and whether they are in $\mathcal{L}$ or $\mathcal{R}$. Once these basic data-structures have been created, the first task of the algorithm is to create the response function variables (for each variable, observed or not, except $U_{\mathcal{L}}$ and $U_{\mathcal{R}}$). Probabilities of these will be the entities $\mathbf{q}$ in which the objective function (representing the target causal effect) is expressed, and they will constitute the points in the space it is optimized over, where this space itself is constrained by the relationships between them and the observed conditional probabilities $\mathbf{p}$.

#### create_response_function

The function create_response_function returns a list respvars that has a named entry for each observed variable, containing its response function variable and response function. If $X$ is an observed variable with $n$ response functions, then they are enumerated by $\\{0,\dots,n-1\\}$. Its entry respvars$X contains the response function variable $R_{X}$ of $X$, and is a list with two entries. The first, respvars$X$index, is a vector containing all the possible values of $R_{X}$, i.e., the integers $(0,\dots,n-1)$. The second, respvars$X$values, is itself a list with $n$ entries, each containing the particular response function of $X$ corresponding to its index. Each such response function is an actual R-function and may be evaluated by passing it any possible values of the parents of $X$ as arguments. Next, the response function variables are used in the creation of a matrix of unobserved probabilities. Specifically, these are the joint probabilities $P(\mathbf{R}_{\mathcal{R}}=\mathbf{r}_{\mathcal{R}})$ for each possible value-combination $\mathbf{r}_{\mathcal{R}}$ of the response function variables $\mathbf{R}_{\mathcal{R}}$ of the right-side variables $\mathbf{W}_{\mathcal{R}}$. In (Sachs et al., 2022a), the possible value-combinations $\mathbf{r}_{\mathcal{R}}$ are enumerated by $\gamma\in\\{1,\dots,\aleph_{\mathcal{R}}\\}$ with corresponding probabilities $q_{\gamma}:=P(\mathbf{R}_{\mathcal{R}}=\mathbf{r}_{\gamma})$ being components of the vector $\mathbf{q}\in[0,1]^{\aleph_{\mathcal{R}}}$.

#### create_R_matrix

The constraints that the DAG and observed conditional probabilities $\mathbf{p}$ (in p.vals) impose on the unobserved probabilities $\mathbf{q}$ (represented by variables) are linear. Specifically, there exists a matrix whose entries are the coefficients relating p.vals to variables. This matrix is called $P$ in (Sachs et al., 2022a), where its existence is guaranteed by Proposition 2 and its construction is detailed in Algorithm 1, which is implemented in the function create_R_matrix.
This function returns a list with two entries: a vector of strings representing the linear constraints on the unobserved $\mathbf{q}\in[0,1]^{\aleph_{\mathcal{R}}}$ imposed by and in terms of the observed $\mathbf{p}\in[0,1]^{B}$, and the numeric matrix $R\in\{0,1\}^{(B+1)\times\aleph_{\mathcal{R}}}$ of coefficients corresponding to these constraints as well as the probabilistic ones, given by

$R=\begin{pmatrix}\mathbf{1}\\ P\end{pmatrix}\quad\text{where}\quad P\in\{0,1\}^{B\times\aleph_{\mathcal{R}}}:\mathbf{p}=P\mathbf{q},\quad\text{so}\quad R\mathbf{q}=\begin{pmatrix}1\\ \mathbf{p}\end{pmatrix}.$

This determines the constraint space as a compact convex polytope in $\mathbf{q}$-space, i.e., in $\mathbb{R}^{\aleph_{\mathcal{R}}}$. To create the matrix, we define a recursive function gee_r that takes two arguments: a positive integer i, being the index $i\in\{1,\dots,n\}$ of a variable $W_{i}\in\mathcal{W}$ (i.e. the $i^{th}$ component of $\mathbf{W}$ or, equivalently, the $i^{th}$ entry of obsvars), and a vector r, being a value $\mathbf{r}\in\nu(\mathbf{R})$ in the set $\nu(\mathbf{R})$ of all possible value-vectors of the joint response function variable $\mathbf{R}$. This recursive function is called for each variable in obsvars and for each possible value of the response function variable vector. The base case is reached if the variable has no parents, in which case the list corresponding to the response function variable $R_{W_{i}}$ of $W_{i}$ is extracted from respvars. From this list, the entry whose index matches the $i^{th}$ index of r (i.e. the one corresponding to the response function variable value $r_{i}=$ r[i]) is extracted, and finally its value, i.e., the corresponding response function itself, is evaluated on an empty list of arguments, since it is a constant function determined only by the value $r_{i}$. The recursive case is encountered when parents is non-empty. If so, then for each parent in parents, its index in obsvars is determined and gee_r is recursively called with the same vector r as first argument but with this particular index (i.e. that of the current parent) as second argument. The numeric values returned by these recursive calls are then sequentially stored in a vector lookin, whose entries are named by those in parents. Just as in the base case, the response function corresponding to the particular value $r_{i}$ of the response function variable $R_{W_{i}}$ (i.e. the response function of the variable obsvars[i] that has the index r[i]) is extracted from respvars and is now evaluated with arguments given by the list lookin. Note that gee_r(r, i) corresponds to the value $w_{i}=g^{*}_{W_{i}}(\mathbf{r})$ in (Sachs et al., 2022a). Then the values that match the observed probabilities are recorded, the corresponding entries in the current row of the matrix $R$ are set to 1, and a string representing the corresponding equation is constructed and added to the vector of constraints.

#### parse_effect

Now that the constraint space has been determined, the objective function representing the causal query needs to be specified as a linear function of the components of $\mathbf{q}$, i.e., variables. First, the causal query that has been provided by the user as a text string in effectt is passed to the function parse_effect, which identifies its components, including nested counterfactuals, and creates a data structure representing the causal query. This structure includes nested lists which represent all interventional paths to each outcome variable in the query.
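For instance, the parsed representation of a query can be inspected directly (a minimal sketch; the exact layout of the returned nested list is best explored interactively with str()):

eff <- causaloptim::parse_effect("p{Y(M(X = 0), X = 1) = 1} - p{Y(M(X = 0), X = 0) = 1}")
str(eff, max.level = 2)  # nested lists of outcome variables, their values, and interventional paths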
Once the nested list effect is returned to analyze_graph, it checks that the requirements on the query (see Proposition 3 in (Sachs et al., 2022a)) are fulfilled before creating the linear objective function. Despite these regularity conditions, a large set of possible queries may be entered using standard counterfactual notation, with syntax described in the accompanying instruction text along with examples such as $P(Y(M(X=0),X=1)=1)-P(Y(M(X=0),X=0)=1)$: the natural direct effect (Pearl, 2001) of a binary exposure $X$ at level $X=0$ on a binary outcome $Y$ _not_ going through the mediator $M$, in the presence of unmeasured confounding between $M$ and $Y$ (Sjölander, 2009).

#### create_effect_vector

Now that the required characteristics of the query have been established, the corresponding objective function is constructed by the function create_effect_vector, which returns a list var.eff of string vectors, one for each term in the query. Each such vector contains the names (strings in variables) of the response function variables of the right side (i.e. the components of $\mathbf{q}$) whose sum corresponds to that particular term. The function create_effect_vector implements Algorithm 2 of (Sachs et al., 2022a), with the additional feature that if the user has entered a query that is incomplete, in the sense that there are omitted mediating variables on paths from base/intervention variables to the outcome variable, then this is interpreted as the user intending the effects of the base/intervention variables to be propagated through the mediators, so that they are set to their “natural” values under this intervention. These mediators are detected and their values are set accordingly. We define a recursive function gee_rA that takes three arguments: a positive integer i (the index $i$ of a variable $W_{i}\in\mathcal{W}=$ obsvars), a vector r (a value $\mathbf{r}\in\nu(\mathbf{R})$ in the set $\nu(\mathbf{R})$ of all possible value-vectors of the joint response function variable $\mathbf{R}$), and a string path that represents an interventional path and is of the form “X -> … -> Y” if not NULL. The base case is reached either if path is non-NULL and corresponds to a path to the intervention set, or if parents is empty. In the former case, the corresponding numeric intervention value is returned; in the latter case, the value of the corresponding response function called on the empty list of arguments is returned, just as in the base case of gee_r. The recursive case is encountered when path is NULL and parents is non-empty. This recursion proceeds just as in gee_r, but with a recursive call to gee_rA whose third argument is now path = paste(gu, "->", path), where the string in gu is the name of the parent variable in parents whose index i in obsvars is the second argument of this recursive call. This construction traces the full path taken from the outcome of interest to the variable being intervened upon. Note that gee_rA(r, i, path) corresponds to the value $w_{i}=h^{A_{i}}_{W_{i}}(\mathbf{r},W_{i})$ in (Sachs et al., 2022a). A matrix is now created just as in the observational case, but this time using gee_rA instead of gee_r.

#### optimize_effect_2

Once the constraints on $\mathbf{q}$ as well as the effect of interest in terms of $\mathbf{q}$ have been established, it remains only to optimize this expression over the constraint space.
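Collecting the pieces, the primal program has the generic form (restated from the quantities defined above; bounds are obtained by both minimizing and maximizing):

$\min_{\mathbf{q}}\ (\text{resp. }\max_{\mathbf{q}})\ \ \mathbf{c}^{\top}\mathbf{q}\quad\text{subject to}\quad\mathbf{q}\geq\mathbf{0},\quad\mathbf{1}^{\top}\mathbf{q}=1,\quad P\mathbf{q}=\mathbf{p}.$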
Here, $\mathbf{c}$ denotes the constant gradient vector of the linear objective function and $P$ denotes the coefficient matrix of the linear restrictions, imposed by the causal DAG, relating $\mathbf{q}$ to the observed probabilities $\mathbf{p}$. By adding the probabilistic constraints on $\mathbf{q}$ we arrive at, e.g., the following linear program giving a tight lower bound on the average causal effect $\theta_{\mathbf{q}}=P\{Y(X=1)=1\}-P\{Y(X=0)=1\}$ in the simple instrumental variable problem of the introductory section:

$\displaystyle\min_{\mathbf{q}}\theta_{\mathbf{q}}$ $\displaystyle=\min\{\mathbf{c}^{\top}\mathbf{q}\mid\mathbf{q}\in\mathbb{R}^{16},\ \mathbf{q}\geq\mathbf{0}_{16\times 1},\ \mathbf{1}_{1\times 16}\mathbf{q}=1,\ P\mathbf{q}=\mathbf{p}\}$ $\displaystyle=\max\{\begin{pmatrix}1&\mathbf{p}^{\top}\end{pmatrix}\mathbf{y}\mid\mathbf{y}\in\mathbb{R}^{9},\ \mathbf{y}\geq\mathbf{0}_{9\times 1},\ \begin{pmatrix}\mathbf{1}_{16\times 1}&P^{\top}\end{pmatrix}\mathbf{y}\leq\mathbf{c}\}$ $\displaystyle=\max\{\begin{pmatrix}1&\mathbf{p}^{\top}\end{pmatrix}\bar{\mathbf{y}}\mid\bar{\mathbf{y}}\text{ is a vertex of }\{\mathbf{y}\in\mathbb{R}^{9}\mid\mathbf{y}\geq\mathbf{0}_{9\times 1},\ R^{\top}\mathbf{y}\leq\mathbf{c}\}\}$

Since we allow the user to provide additional linear inequality constraints (e.g. it may be quite reasonable to assume that the proportion of “defiers” in the study population of our example is quite low), the actual primal and dual linear programs may look slightly more complicated, but this small example still captures the essentials. In general, given the matrix of linear constraints on the observable probabilities implied by the DAG and an optional user-provided matrix inequality, we construct the coefficient matrix and right-hand-side vector of the dual polytope. The optimization-via-vertex-enumeration step in causaloptim is implemented in the function optimize_effect_2, which uses the double description method for vertex enumeration as implemented in the rcdd package (Geyer et al., 2021). This vertex enumeration step has previously been the major computational bottleneck. The approach is now based on cddlib (https://people.inf.ethz.ch/fukudak/cdd_home/), which has an implementation of the Double Description Method (dd). Any convex polytope can be dually described as either an intersection of half-planes (which is the form we get our dual constraint space in) or as a minimal set of vertices of which it is the convex hull (which is the form we want it in), and the dd algorithm efficiently converts between these two descriptions. cddlib also uses exact rational arithmetic, so there is no need to worry about numerical instability issues. The vertices of the dual polytope are obtained and stored as rows of a matrix with

hrep <- rcdd::makeH(a1, b1)
vrep <- rcdd::scdd(hrep)
vertices <- vrep$output[vrep$output[, 1] == 0 & vrep$output[, 2] == 1, -c(1, 2), drop = FALSE]

The rest is simply a matter of plugging them into the dual objective function, evaluating the expression and presenting the results. The first part of this is done by apply(vertices, 1, function(y) evaluate_objective(c1_num, p, y)), where (c1_num, p) $=(\begin{pmatrix}b_{\ell}^{\top}&1\end{pmatrix},p)$ separates the dual objective gradient into its numeric and symbolic parts. causaloptim also contains a precursor to optimize_effect_2, called optimize_effect.
This legacy function uses the original optimization procedure written in C++ by Alexander Balke and involves linear program formulation followed by the vertex enumeration algorithm of (Mattheiss, 1973). This has worked well for very simple settings but has struggled severely with even remotely complex ones, and has thus been insufficient for the ambitions of causaloptim. The efficiency gains of optimize_effect_2 over the legacy code have reduced the computation time for several settings from hours to milliseconds.

## Numeric Examples

### A Mediation Analysis

In (Sjölander, 2009), the author derives bounds on natural direct effects in the presence of confounded intermediate variables and applies them to data from the Lipid Research Clinics Coronary Primary Prevention Trial (Freedman et al., 1992), in which subjects were randomized to cholestyramine treatment, and the occurrence of coronary heart disease events as well as cholesterol levels were recorded after a 1-year follow-up period. We let $X$ be a binary treatment indicator, with $X=0$ indicating actual cholestyramine treatment and $X=1$ indicating placebo. We further let $Y$ be an indicator of the occurrence of coronary heart disease events within follow-up, with $Y=0$ indicating event-free follow-up and $Y=1$ indicating an event. We finally let $M$ be a dichotomized (cut-off at $280\ mg/dl$) cholesterol level indicator, with $M=0$ indicating levels $<280\ mg/dl$ and $M=1$ indicating levels $\geq 280\ mg/dl$. The causal assumptions are summarized in the DAG shown in Figure 6, where $U_{l}$ and $U_{r}$ are unmeasured and the latter confounds the effect of $M$ on $Y$.

Figure 6: Causal DAG for mediation example

b <- igraph::graph_from_literal(X -+ Y, X -+ M, M -+ Y, Ul -+ X, Ur -+ Y, Ur -+ M)
V(b)$leftside <- c(1, 0, 0, 1, 0)
V(b)$latent <- c(0, 0, 0, 1, 1)
V(b)$nvals <- c(2, 2, 2, 2, 2)
E(b)$rlconnect <- c(0, 0, 0, 0, 0, 0)
E(b)$edge.monotone <- c(0, 0, 0, 0, 0, 0)

Using the data from Table IV of (Sjölander, 2009), we compute the observed conditional probabilities.

# parameters of the form pab_c, which represents
# the probability P(Y = a, M = b | X = c)
p00_0 <- 1426/1888 # P(Y=0,M=0|X=0)
p10_0 <- 97/1888   # P(Y=1,M=0|X=0)
p01_0 <- 332/1888  # P(Y=0,M=1|X=0)
p11_0 <- 33/1888   # P(Y=1,M=1|X=0)
p00_1 <- 1081/1918 # P(Y=0,M=0|X=1)
p10_1 <- 86/1918   # P(Y=1,M=0|X=1)
p01_1 <- 669/1918  # P(Y=0,M=1|X=1)
p11_1 <- 82/1918   # P(Y=1,M=1|X=1)

We proceed to compute bounds on the controlled direct effect $CDE(0)=P(Y(M=0,X=1)=1)-P(Y(M=0,X=0)=1)$ of $X$ on $Y$ not passing through $M$ at level $M=0$, the controlled direct effect $CDE(1)=P(Y(M=1,X=1)=1)-P(Y(M=1,X=0)=1)$ at level $M=1$, the natural direct effect $NDE(0)=P(Y(M(X=0),X=1)=1)-P(Y(M(X=0),X=0)=1)$ of $X$ on $Y$ at level $X=0$, and the natural direct effect $NDE(1)=P(Y(M(X=1),X=1)=1)-P(Y(M(X=1),X=0)=1)$ at level $X=1$.
CDE0_query <- "p{Y(M = 0, X = 1) = 1} - p{Y(M = 0, X = 0) = 1}"
CDE0_obj <- analyze_graph(b, constraints = NULL, effectt = CDE0_query)
CDE0_bounds <- optimize_effect_2(CDE0_obj)
CDE0_boundsfunction <- interpret_bounds(bounds = CDE0_bounds$bounds, parameters = CDE0_obj$parameters)
CDE0_numericbounds <- CDE0_boundsfunction(p00_0 = p00_0, p00_1 = p00_1, p10_0 = p10_0, p10_1 = p10_1, p01_0 = p01_0, p01_1 = p01_1, p11_0 = p11_0, p11_1 = p11_1)

CDE1_query <- "p{Y(M = 1, X = 1) = 1} - p{Y(M = 1, X = 0) = 1}"
CDE1_obj <- update_effect(CDE0_obj, effectt = CDE1_query)
CDE1_bounds <- optimize_effect_2(CDE1_obj)
CDE1_boundsfunction <- interpret_bounds(bounds = CDE1_bounds$bounds, parameters = CDE1_obj$parameters)
CDE1_numericbounds <- CDE1_boundsfunction(p00_0 = p00_0, p00_1 = p00_1, p10_0 = p10_0, p10_1 = p10_1, p01_0 = p01_0, p01_1 = p01_1, p11_0 = p11_0, p11_1 = p11_1)

NDE0_query <- "p{Y(M(X = 0), X = 1) = 1} - p{Y(M(X = 0), X = 0) = 1}"
NDE0_obj <- update_effect(CDE0_obj, effectt = NDE0_query)
NDE0_bounds <- optimize_effect_2(NDE0_obj)
NDE0_boundsfunction <- interpret_bounds(bounds = NDE0_bounds$bounds, parameters = NDE0_obj$parameters)
NDE0_numericbounds <- NDE0_boundsfunction(p00_0 = p00_0, p00_1 = p00_1, p10_0 = p10_0, p10_1 = p10_1, p01_0 = p01_0, p01_1 = p01_1, p11_0 = p11_0, p11_1 = p11_1)

NDE1_query <- "p{Y(M(X = 1), X = 1) = 1} - p{Y(M(X = 1), X = 0) = 1}"
NDE1_obj <- update_effect(CDE0_obj, effectt = NDE1_query)
NDE1_bounds <- optimize_effect_2(NDE1_obj)
NDE1_boundsfunction <- interpret_bounds(bounds = NDE1_bounds$bounds, parameters = NDE1_obj$parameters)
NDE1_numericbounds <- NDE1_boundsfunction(p00_0 = p00_0, p00_1 = p00_1, p10_0 = p10_0, p10_1 = p10_1, p01_0 = p01_0, p01_1 = p01_1, p11_0 = p11_0, p11_1 = p11_1)

We obtain the same symbolic bounds as (Sjölander, 2009), and the resulting numeric bounds are given in Table 1, which of course agree with those of Table V in (Sjölander, 2009).

Table 1: Bounds on the controlled and natural direct effects.

|  | lower | upper |
| --- | --- | --- |
| CDE(0) | -0.20 | 0.39 |
| CDE(1) | -0.78 | 0.63 |
| NDE(0) | -0.07 | 0.56 |
| NDE(1) | -0.55 | 0.09 |

### A Mendelian Randomization Study of the Effect of Homocysteine on Cardiovascular Disease

Mendelian randomization (Davey Smith and Ebrahim, 2003) assumes certain genotypes may serve as suitable instrumental variables for investigating the causal effect of an associated phenotype on some disease outcome. In (Palmer, 2011), the author investigates the effect of homocysteine on cardiovascular disease using the 677CT polymorphism (rs1801133) in the Methylenetetrahydrofolate Reductase gene as an instrument, using observational data from (Meleady et al., 2003) in which the outcome is binary, the treatment has been made binary by a suitably chosen cut-off at $15\mu mol/L$, and the instrument is ternary (this polymorphism can take three possible genotype values). With $X$ denoting the treatment, $Y$ the outcome and $Z$ the instrument, the conditional probabilities are given as follows.

params <- list(p00_0 = 0.83, p00_1 = 0.88, p00_2 = 0.72,
               p10_0 = 0.11, p10_1 = 0.05, p10_2 = 0.20,
               p01_0 = 0.05, p01_1 = 0.06, p01_2 = 0.05,
               p11_0 = 0.01, p11_1 = 0.01, p11_2 = 0.03)

The computation using causaloptim is done using the following code.
# Input causal DAG
b <- graph_from_literal(Z -+ X, Ul -+ Z, X -+ Y, Ur -+ X, Ur -+ Y)
V(b)$leftside <- c(1, 0, 1, 0, 0)
V(b)$latent <- c(0, 0, 1, 0, 1)
V(b)$nvals <- c(3, 2, 2, 2, 2)
E(b)$rlconnect <- c(0, 0, 0, 0, 0)
E(b)$edge.monotone <- c(0, 0, 0, 0, 0)
# Construct causal problem
obj <- analyze_graph(b, constraints = NULL, effectt = "p{Y(X = 1) = 1} - p{Y(X = 0) = 1}")
# Compute bounds on query
bounds <- optimize_effect_2(obj)
# Construct bounds as function of parameters
boundsfunction <- interpret_bounds(bounds = bounds$bounds, parameters = obj$parameters)
# Insert observed conditional probabilities
numericbounds <- do.call(boundsfunction, as.list(params))
round(numericbounds, 2)
#> lower upper
#> 1 -0.09 0.74

Our computed bounds agree with those computed using bpbounds, as well as those estimated using Theorem 2 of (Richardson and Robins, 2014), who independently derived expressions for tight bounds that are applicable to this setting.

## Summary and Discussion

The methods and algorithms described in (Sachs et al., 2022a) to compute symbolic expressions for bounds on non-identifiable causal effects are implemented in the package causaloptim. Our aim was to provide a user-friendly interface to these methods, with a graphical interface to draw DAGs, specification of causal effects using standard notation for potential outcomes, and an efficient implementation of vertex enumeration to reduce computation times. These methods are applicable to a wide variety of causal inference problems arising in biomedical research, economics, the social sciences and more. Aside from the graphical interface, programming with the package is encouraged to promote reproducibility and advanced use. Our package includes automated unit tests and also tests for correctness by comparing the symbolic bounds derived using our program to independently derived bounds in particular settings. Our implementation uses a novel approach to draw DAGs using JavaScript in a web browser, which can then be passed to R using shiny. This graphical approach can be adapted and used in other settings where graphs need to be specified and computed on, such as other causal inference settings, networks, and multi-state models. Other algorithms and data structures that could be more broadly useful include the representation of structural equations as R functions, recursive evaluation of response functions, and parsing of string equations for causal effects and constraints.

## References

* Balke and Pearl (1997) A. Balke and J. Pearl. Bounds on treatment effects from studies with imperfect compliance. _Journal of the American Statistical Association_, 92(439):1171–1176, 1997.
* Cai et al. (2008) Z. Cai, M. Kuroki, J. Pearl, and J. Tian. Bounds on direct effects in the presence of confounded intermediate variables. _Biometrics_, 64(3):695–701, 2008.
* Chang et al. (2021) W. Chang, J. Cheng, J. Allaire, C. Sievert, B. Schloerke, Y. Xie, J. Allen, J. McPherson, A. Dipert, and B. Borges. _shiny: Web Application Framework for R_, 2021. URL https://CRAN.R-project.org/package=shiny. R package version 1.7.1.
* Csardi and Nepusz (2006) G. Csardi and T. Nepusz. The igraph software package for complex network research. _InterJournal_, Complex Systems:1695, 2006. URL https://igraph.org.
* Davey Smith and Ebrahim (2003) G. Davey Smith and S. Ebrahim. ‘Mendelian randomization’: can genetic epidemiology contribute to understanding environmental determinants of disease? _International Journal of Epidemiology_, 32(1):1–22, 2003. ISSN 0300-5771.
doi: 10.1093/ije/dyg070. URL https://doi.org/10.1093/ije/dyg070.
* Freedman et al. (1992) L. S. Freedman, B. I. Graubard, and A. Schatzkin. Statistical validation of intermediate endpoints for chronic diseases. _Statistics in Medicine_, 11(2):167–178, 1992. doi: https://doi.org/10.1002/sim.4780110204. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.4780110204.
* Gabriel et al. (2020) E. E. Gabriel, M. C. Sachs, and A. Sjölander. Causal bounds for outcome-dependent sampling in observational studies. _Journal of the American Statistical Association_, pages 1–12, 2020.
* Gabriel et al. (2021) E. E. Gabriel, A. Sjölander, and M. C. Sachs. Nonparametric bounds for causal effects in imperfect randomized experiments. _Journal of the American Statistical Association_, pages 1–9, 2021.
* Gabriel et al. (2022) E. E. Gabriel, M. C. Sachs, and A. Sjölander. Sharp nonparametric bounds for decomposition effects with two binary mediators. _Journal of the American Statistical Association_, pages 1–8, 2022.
* Geyer et al. (2021) C. J. Geyer, G. D. Meeden, and incorporates code from cddlib written by Komei Fukuda. _rcdd: Computational Geometry_, 2021. URL https://CRAN.R-project.org/package=rcdd. R package version 1.5.
* Greenland et al. (1999) S. Greenland, J. Pearl, and J. M. Robins. Causal diagrams for epidemiologic research. _Epidemiology_, pages 37–48, 1999.
* Mattheiss (1973) T. H. Mattheiss. An algorithm for determining irrelevant constraints and all vertices in systems of linear inequalities. _Operations Research_, 21(1):247–260, 1973.
* Meleady et al. (2003) R. Meleady, P. M. Ueland, H. Blom, A. S. Whitehead, H. Refsum, L. E. Daly, S. E. Vollset, C. Donohue, B. Giesendorf, I. M. Graham, A. Ulvik, Y. Zhang, and A.-L. Bjorke Monsen. Thermolabile methylenetetrahydrofolate reductase, homocysteine, and cardiovascular disease risk: the European Concerted Action Project. _The American Journal of Clinical Nutrition_, 77(1):63–70, 2003. ISSN 0002-9165. doi: 10.1093/ajcn/77.1.63.
* Palmer et al. (2018) T. Palmer, R. Ramsahai, V. Didelez, and N. Sheehan. _bpbounds: R package implementing Balke-Pearl bounds for the average causal effect_, 2018. URL https://github.com/remlapmot/bpbounds.
* Palmer (2011) T. M. Palmer. Nonparametric bounds for the causal effect in a binary instrumental-variable model. _Stata Journal_, 11(3):345–367, 2011. URL https://www.stata-journal.com/article.html?article=st0232.
* Pearl (2001) J. Pearl. Direct and indirect effects. In _Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, 2001_, pages 411–420. Morgan Kaufmann, 2001.
* Pearl (2009) J. Pearl. _Causality_. Cambridge University Press, 2009.
* Richardson and Robins (2014) T. S. Richardson and J. M. Robins. ACE bounds; SEMs with equilibrium conditions. _Statistical Science_, 29(3):363–366, 2014.
* Sachs et al. (2022a) M. C. Sachs, G. Jonzon, A. Sjölander, and E. E. Gabriel. A general method for deriving tight symbolic bounds on causal effects. _Journal of Computational and Graphical Statistics_, 0(0):1–10, 2022a. doi: 10.1080/10618600.2022.2071905. URL https://doi.org/10.1080/10618600.2022.2071905.
* Sachs et al. (2022b) M. C. Sachs, A. Sjölander, and E. E. Gabriel. _causaloptim: An Interface to Specify Causal Graphs and Compute Bounds on Causal Effects_, 2022b. URL https://github.com/sachsmc/causaloptim. R package version 0.9.2.
* Sjölander (2009) A. Sjölander. Bounds on natural direct effects in the presence of confounded intermediate variables. _Statistics in Medicine_, 28(4):558–571, 2009. doi: https://doi.org/10.1002/sim.3493. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.3493.
* Sjölander et al. (2014) A. Sjölander, W. Lee, H. Källberg, and Y. Pawitan. Bounds on causal interactions for binary outcomes. _Biometrics_, 70(3):500–505, 2014.

_Gustav Jonzon, Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, https://ki.se/meb, <EMAIL_ADDRESS>_

_Michael C Sachs, Department of Public Health, University of Copenhagen, https://biostat.ku.dk/, ORCiD: 0000-0002-1279-8676, <EMAIL_ADDRESS>_

_Erin E Gabriel, Department of Public Health, University of Copenhagen, https://biostat.ku.dk/, ORCiD: 0000-0002-0504-8404, <EMAIL_ADDRESS>_
# A Non-intrusive Approach for Physics-constrained Learning with Application to Fuel Cell Modeling

Vishal Srivastava<EMAIL_ADDRESS>Valentin Sulzer<EMAIL_ADDRESS>Peyman Mohtat<EMAIL_ADDRESS>Jason B. Siegel<EMAIL_ADDRESS>Karthik Duraisamy<EMAIL_ADDRESS>Aerospace Engineering, University of Michigan, Ann Arbor, MI, USA Mechanical Engineering, University of Michigan, Ann Arbor, MI, USA

###### Abstract

A data-driven model augmentation framework, referred to as Weakly-coupled Integrated Inference and Machine Learning (IIML), is presented to improve the predictive accuracy of physical models. In contrast to parameter calibration, this work seeks corrections to the structure of the model by a) inferring augmentation fields that are consistent with the underlying model, and b) transforming these fields into corrective model forms. The proposed approach couples the inference and learning steps in a weak sense via an alternating optimization approach. This coupling ensures that the augmentation fields remain learnable and maintain consistent functional relationships with local modeled quantities across the training dataset. An iterative solution procedure is presented in this paper, removing the need to embed the augmentation function during the inference process. This framework is used to infer an augmentation introduced within a polymer electrolyte membrane fuel cell (PEMFC) model using a small amount of training data (from only 14 training cases). These training cases belong to a dataset consisting of simulation data obtained from a high-fidelity model of a first-generation Toyota Mirai. All cases in this dataset are characterized by different inflow and outflow conditions on the same geometry. When tested on 1224 different configurations, the inferred augmentation significantly improves the predictive accuracy for a wide range of physical conditions. Predictions and available data for the current density distribution are also compared to demonstrate the predictive capability of the model for quantities of interest which were not involved in the inference process. The results demonstrate that the weakly-coupled IIML framework offers sophisticated and robust model augmentation capabilities without requiring extensive changes to the numerical solver.

## 1 Introduction

Digital transformation of industrial design and operations requires efficient reduced-fidelity models of the underlying physical phenomena. However, in complex systems, such models contain structural inadequacies which may prevent them from providing sufficiently accurate predictions. In the past decade, several data-driven model augmentation frameworks have been developed that aim to address such model-form inadequacies by inferring functional corrections to the baseline model from available high-fidelity data. As an example, several such techniques have been introduced in the context of turbulence modeling, which include but are not limited to genetic algorithms by Weatheritt and Sandberg [1], sparse symbolic regression by Schmelzer et al. [2], Tensor Basis Neural Networks by Ling and Templeton [3], Field Inversion and Machine Learning by Duraisamy et al. [4, 5], Integrated Inference and Machine Learning by Holland et al. [6, 7], CFD-driven machine learning [8, 9], etc.
While most of these frameworks provide promising predictive results on geometries and flow conditions similar to those seen in the training dataset, they usually require significant changes within the numerical solver and considerable expertise in both the method itself and the physical phenomena being modeled. This work introduces a new framework that minimizes such requirements to reduce the time and effort needed to set up the inference procedure, while ensuring various consistencies among the training and prediction environments. This framework is demonstrated by augmenting a polymer electrolyte membrane fuel cell (PEMFC) model using high-fidelity data.

To meet the challenges of climate change and reduce automotive emissions, there has been a steady push for the development of alternative power-train systems with lower emissions. One such alternative is the hydrogen fuel cell (FC) [10], an electrochemical device that directly converts chemical energy into electricity with high efficiency. Despite major advancements, the cost and durability of PEMFC vehicles remain a challenge for their large-scale adoption. For better control and management of a PEMFC, it is necessary to have computationally inexpensive physics-based models on board a vehicle that can run in real time with sufficient predictive accuracy [11, 12]. This is because direct measurements of important internal states of a fuel cell are very difficult and/or prohibitively expensive in real time [13]. For instance, one quantity that significantly affects the performance of a PEMFC is the water content inside the membrane and gas channels. Obtaining reliable measurements of the water content is difficult, and it must therefore be estimated from the model using the observed current, voltage, and temperature measurements. There are a number of different approaches for modeling PEMFCs, ranging from simple 1D models to complex 3D models [14]. However, reduced-order models [15, 16], which meet the limited computational requirements of an embedded computer, may not achieve satisfactory performance (in terms of model accuracy) or are too difficult to calibrate due to a lack of available information on the internal system states.

In the past few years, machine learning methods have also been used to design data-driven surrogate models and control strategies for PEMFCs. Napoli et al. [17] used classical neural networks along with stacking strategies to develop data-driven fuel cell models to predict the output voltage and cathode temperature of a fuel cell given the stack current and the flow rates for different gases. Li et al. [18] used data-driven classification strategies supported by carefully chosen feature extraction and data labeling techniques for the diagnosis of water-content-related faults such as membrane drying and catalyst or channel flooding. Using inlet pressures of hydrogen and oxygen, stack temperature and relative humidity as inputs, Han et al. [19] compared the voltage and current predictions obtained from data-driven surrogate models trained using neural networks and support vector machines. Ma et al. [20] used recurrent neural networks with G-LSTM (grid long short-term memory) neurons to train and predict the degradation of a fuel cell's performance due to impurities in the incoming hydrogen fuel or changes in the operating conditions. Zhu et al.
[21] used artificial neural networks (ANN) with considerable success to create a surrogate model for a high-temperature proton exchange membrane fuel cell, which was further used to conduct a parameter study for the fuel cell geometry and operating conditions. These quantities also served as the inputs to the ANN. Sun et al. [22] used a hybrid methodology (combining model-based and data-driven approaches) to construct optimal PID (Proportional Integral Derivative) and ADRC (Active Disturbance Rejection Control) control strategies for fuel cell stack cooling. Wang et al. [23] used support vector machines (SVM) to create a data-driven surrogate model from 3D simulation data, which was then used to optimize the catalyst layer composition using a genetic algorithm.

A common theme among the aforementioned works is that the data-driven models can predict scalar outputs like stack voltage and stack current, but not field quantities within the fuel cell itself. Secondly, most of these models are purely data-driven and do not incorporate the physical laws manifested in traditional models. Data-driven techniques that introduce corrections into a traditional model, instead of building a surrogate one, alleviate these issues to a large extent while also providing access to field outputs. The simplest and earliest such technique is parameter estimation, which involves optimizing a single model parameter in order to improve predictive accuracy. Although introducing corrections in model parameter values can improve predictions to some extent [24], such an approach is unable to incorporate any additional physical correlations into the model. To introduce such corrections, the model form needs to be augmented appropriately. This augmentation function has to be inferred from available high-fidelity data (i.e. data obtained from experiments or from more accurate, yet computationally expensive, simulations). As mentioned before, several such model augmentation frameworks exist in the literature, and while these techniques have not been extensively used in the fuel cell modeling community, their predictive capability has been successfully demonstrated for problems in other disciplines, e.g. data-driven augmentation of turbulent fluid flow models.

Even among such model augmentation techniques, the generalizability of the augmented model depends on a range of different factors, including the model consistency of the framework, the diversity and parsimony of the training dataset, the choice of functional form for the augmentation, the choice of features (quantities that the augmentation is a function of), the choice of technique to solve the inference problem, etc. Since modeled quantities can behave significantly differently compared to their physical counterparts, it is important that the inferred augmentation is model-consistent, i.e. the augmentation is inferred as a function of the corresponding modeled quantities and not the physical quantities [25]. FIML (Field Inversion and Machine Learning) was among the first versatile model-consistent frameworks, and it can be used to create augmentations with good predictive accuracy and a reasonable range of applicability. While FIML in its original form suffers from limited learnability (see section 2.2), Integrated Inference and Machine Learning (IIML), which is based on the FIML framework, removes such shortcomings and improves the accuracy and generalizability of the augmented model.
In this work, we develop a novel weakly-coupled IIML technique which facilitates the inference of augmentation functions without embedding them into the solver. This technique is demonstrated by applying it to improve the accuracy of an existing fuel cell model. The inference process attempts to minimize the discrepancy between predictions and available higher-fidelity data for the ionomer water content. The data used here was provided by Toyota from predictions using a proprietary higher-fidelity model. It is observed that training on only a few representative cases resulted in considerable improvement in the predictive accuracy across a majority of test cases (with input parameters significantly different from those used during training).

This paper is structured as follows. Section 2 briefly describes the low-fidelity model used for predictions, followed by an introduction to the different variants of the Field Inversion and Machine Learning approach available in the literature. Section 3 then discusses how the augmentation is introduced into the fuel cell model, how the augmented model is solved via a non-intrusive iterative method that bypasses the need to embed the augmentation within the numerical solver, and a novel weakly-coupled Integrated Inference and Machine Learning (IIML) strategy that enables a corresponding non-intrusive inference. Training and validation results using this weakly-coupled IIML technique are then presented for the fuel cell model in section 4, followed by conclusions in section 5.

## 2 Background

### 2.1 Physical modeling of Fuel Cells

A fuel cell is an electrochemical energy conversion device that directly converts chemical energy to electrical energy. In polymer electrolyte membrane fuel cells (PEMFC), hydrogen gas is supplied as the fuel. A 3D representation of a PEMFC is shown in Fig. 1.

Figure 1: 3D representation of a polymer electrolyte membrane fuel cell (PEMFC)

Hydrogen travels through the gas diffusion layer (GDL) to the catalyst layer. At the anode catalyst layer, a hydrogen oxidation reaction $H_{2}\longrightarrow 2H^{+}+2e^{-}$ produces protons and electrons. Electrons flow through an external circuit to create an electric current, while protons cross the polymer electrolyte membrane. Finally, in the cathode catalyst layer, electrons and protons recombine together with oxygen/air (which is supplied to the cathode channel) to create water in an oxygen reduction reaction: $\dfrac{1}{2}O_{2}+2H^{+}+2e^{-}\longrightarrow H_{2}O.$ The overall cell reaction is thus $H_{2}+\dfrac{1}{2}O_{2}\longrightarrow H_{2}O$.

Modeling of fuel cells requires a description of dynamics in both the through-plane and along-channel dimensions. A schematic is presented in Fig. 2 to better illustrate the structure and working of a fuel cell. Due to the large discrepancy in length scales between these dimensions (the aspect ratio is around $10^{-3}$, with a $100\,\mu$m thick GDL and $10\,$cm long channels), the model is usually decomposed into a through-plane model (along the $x$-direction) and an along-the-channel model (along the $y$-direction), with coupling between the two dimensions at the GDL-channel interface only (a ‘1+1D’ model).

Figure 2: Schematic detailing variation of quantities within a fuel cell.

The concentration and temperature gradients across the membrane, catalyst layers, and gas diffusion layers in the through-plane or x-direction are resolved by equations (1-8) at each spatial node along the y-direction. These fluxes are coupled to the along-channel or y-direction distributions by equation (9).
#### 2.1.1 Full through-cell model

The full through-cell model is a transient model, based on the steady-state model presented by Vetter and Schumacher [16]. The modeling domains are channels, gas diffusion layers (GDLs), and catalyst layers (CLs) in the anode and cathode, with a polymer electrolyte membrane between them, as shown by the dashed box in Fig. 2 for transport in the x-direction. The subscripts $ch$, $gdl$, $cl$, and $mb$ are used to denote the channel, gas diffusion layer (GDL), catalyst layer (CL), and polymer electrolyte membrane (PEM) domains respectively, and the superscripts $ca$ and $an$ denote the cathode and anode sides, respectively. The effects of the microporous layers have been neglected in this model (following [16]). Conservation of current and Ohm's law result in the following elliptic system relating the electron potential $\phi_{e}$ and the proton potential $\phi_{p}$ to the current densities $i_{p}$ and $i_{e}$ and the interfacial current density $j$.

$\frac{\partial i_{p}}{\partial x}=aj\quad\text{where}\quad i_{p}=-\sigma_{p}(\lambda,T)\frac{\partial\phi_{p}}{\partial x}$ (1)

$\frac{\partial i_{e}}{\partial x}=-aj\quad\text{where}\quad i_{e}=-\sigma_{e}\frac{\partial\phi_{e}}{\partial x}.$ (2)

Here, $a$ refers to the surface area, $j$ is the reaction current density shown in the appendix, and $\sigma_{p}$ and $\sigma_{e}$ refer to the electrical conductivities of the protons and electrons, respectively. The conservation of the ionomer water content, $\lambda$, is enforced using the water transport model introduced by Springer [26], which consists of a diffusion term and an electro-osmotic drag term, as shown in the following equation.

$\frac{\varepsilon_{i}}{V_{m}}\frac{\partial\lambda}{\partial t}=-\frac{\partial N_{\lambda}}{\partial x}+S_{ad}+r_{\text{H}_{\text{2}}\text{O}}\quad\text{where}\quad N_{\lambda}=-\frac{D_{\lambda}(\lambda,T)}{V_{m}}\frac{\partial\lambda}{\partial x}+\frac{n_{d}(\lambda)}{F}i_{p}.$ (3)

Here, $\varepsilon_{i}$ represents the ionomer volume fraction (which is assumed constant in this model), $V_{m}$ refers to the equivalent volume of dry membrane, $D_{\lambda}$ refers to the diffusivity of the membrane, $F$ is Faraday's constant, and $r_{\text{H}_{\text{2}}\text{O}}$ refers to the rate at which water is produced within the membrane as a consequence of the oxygen reduction reaction in the cathode catalyst layer. $S_{ad}$ is the source term which controls the adsorption/desorption of water within the ionomer membrane. This term is given as

$S_{ad}=\frac{k_{ad}}{h_{cl}V_{m}}(\lambda_{eq}-\lambda).$ (4)

Here, $h_{cl}$ refers to the thickness of the catalyst layer, and $\lambda_{eq}$ refers to the equilibrium membrane water content, usually given as a function of temperature and relative humidity. $k_{ad}$ refers to the rate of adsorption (when $\lambda<\lambda_{eq}$) or desorption (when $\lambda>\lambda_{eq}$) and is usually a function of $\lambda$ and temperature. Gas transport is modeled using gas concentrations (denoted by $c$) instead of the typically used gas mole fractions. Fickian diffusion is used for the fluxes, with an effective diffusivity factor to account for the reduced diffusivity in the porous medium. Additional source terms are used for phase changes from adsorption/desorption and evaporation/condensation.
$\frac{\partial}{\partial t}(\varepsilon_{g}c_{\text{H}_{\text{2}}\text{O}})=-\frac{\partial N_{\text{H}_{\text{2}}\text{O}}}{\partial x}-S_{ad}-S_{ec}\quad\text{where}\quad N_{\text{H}_{\text{2}}\text{O}}=-D_{\text{H}_{\text{2}}\text{O}}^{\text{eff}}(s,T)\frac{\partial c_{\text{H}_{\text{2}}\text{O}}}{\partial x}.$ (5)

The gas porosity, $\varepsilon_{g}$, is given in terms of the liquid water saturation $s$ and the porosity $\varepsilon_{p}$. Similarly, we can obtain transport equations for hydrogen and oxygen gases, with their source terms arising from the chemical reactions.

$\frac{\partial}{\partial t}(\varepsilon_{g}c_{\text{H}_{\text{2}}})=-\frac{\partial N_{\text{H}_{\text{2}}}}{\partial x}+r_{\text{H}_{\text{2}}}\quad\text{where}\quad N_{\text{H}_{\text{2}}}=-D_{\text{H}_{\text{2}}}^{\text{eff}}(s,T)\frac{\partial c_{\text{H}_{\text{2}}}}{\partial x}.$ (6)

$\frac{\partial}{\partial t}(\varepsilon_{g}c_{\text{O}_{\text{2}}})=-\frac{\partial N_{\text{O}_{\text{2}}}}{\partial x}+r_{\text{O}_{\text{2}}}\quad\text{where}\quad N_{\text{O}_{\text{2}}}=-D_{\text{O}_{\text{2}}}^{\text{eff}}(s,T)\frac{\partial c_{\text{O}_{\text{2}}}}{\partial x}.$ (7)

The liquid water saturation, $s$, is governed by the following equation.

$\frac{1}{V_{w}}\frac{\partial}{\partial t}(\varepsilon_{\ell}c_{s})=-\frac{\partial N_{s}}{\partial x}+S_{ec}\quad\text{where}\quad N_{s}=-\frac{D_{s}^{\text{eff}}(s,T)}{V_{w}}\frac{\partial c_{s}}{\partial x}.$ (8)

The liquid volume fraction, $\varepsilon_{\ell}$, is given as $\varepsilon_{\ell}=s\varepsilon_{p}$, and the capillary liquid water diffusivity, $D_{s}$, is given as $D_{s}=\dfrac{\kappa}{\mu}\dfrac{\partial p_{c}}{\partial s}$. It should be noted that this model is isothermal, so the channel temperature is assumed uniform in the through-cell direction. The respective source term definitions can be found in Appendix A.

#### 2.1.2 1-D channel model

The 1-D through-cell model is coupled to a 1-D channel model through its boundary conditions, and the channel model governs how these boundary conditions vary along the channel spatial variable $y$. A counter-flow channel configuration is considered in this model, as shown in Fig. 2. The anode and cathode channels have different physical channel lengths due to the design of the flow path, but must be modeled on the same 1-D grid to capture the coupling through the membrane. This mapping is achieved by considering a fixed cross-sectional area for the grid points. Thus, the spatial dimensions in each channel have been non-dimensionalized by the channel length $L_{ch}$, so that a common spatial variable $y\in[0,1]$ can be used for computations. The concentrations of water, hydrogen, oxygen and nitrogen are governed by the conservation of mass, and their transport is modeled using a convective-diffusive flux. Thus, for any gas $k\in\{\text{H}_{\text{2}}\text{O},\text{O}_{\text{2}},\text{H}_{\text{2}},\text{N}_{\text{2}}\}$, we have

$\frac{\partial c_{k,ch}}{\partial t}=-\frac{1}{L_{ch}}\frac{\partial N_{k,ch}}{\partial y}+\frac{w}{h_{ch}}S_{k,ch}\quad\text{where}\quad N_{k,ch}=-\frac{D_{k,ch}}{L_{ch}}\frac{\partial c_{k,ch}}{\partial y}+c_{k,ch}v_{ch}.$ (9)

The gas flow velocity in the channel, $v_{ch}$, is governed by the following equation.

$\frac{\partial v_{ch}}{\partial y}=\frac{RT_{ch}}{L_{ch}p_{ch}}\frac{w}{h_{ch}}\sum_{k}{S_{k,ch}}.$ (10)

The source term of a species into a channel is equal to the flux of that species from the GDL into the channel in consideration.
Hence,

$S_{k,ch}^{an}=-N_{k}|_{x=0}\quad\text{and}\quad S_{k,ch}^{ca}=N_{k}|_{x=h_{tot}}.$ (11)

To ensure the conservation of mass in the model, it is important to keep track of the liquid water in the channels. Any accumulated liquid water in the channel is convected away by the gas flow with velocity $v_{ch}$.

$\frac{\partial s_{ch}}{\partial t}=-\frac{1}{L_{ch}}\frac{\partial(s_{ch}v_{ch})}{\partial y}+\frac{w}{h_{ch}}S_{s,ch}.$ (12)

It is assumed that the temperature in both channels is equal to the temperature in the cooling channel, which is assumed to vary linearly in $y$. The cooling channel is oriented in the same direction as the anode channel, with inlet at $y=1$ and outlet at $y=0$. Thus, we can write the channel temperature as

$T_{ch}=T_{in}+\Delta T(1-y).$ (13)

Similarly, it is assumed that the pressure varies linearly in both channels as well. Note that pressure, unlike temperature, can be significantly different in the two channels. Thus, one may write

$p_{ch}^{an}=p_{in}^{an}+\Delta p^{an}(1-y)\quad\text{and}\quad p_{ch}^{ca}=p_{in}^{ca}+\Delta p^{ca}y.$ (14)

Lastly, the channel current density, $i_{ch}$, and the cathode channel potential, $\phi^{ca}_{e,ch}$, are related by Ohm's law in the channel.

$i_{ch}=-\frac{\sigma_{ch}}{(L^{ca}_{ch})^{2}}\frac{\partial^{2}\phi^{ca}_{e,ch}}{\partial y^{2}}.$

Solving a full-order model with appropriately discretized through-cell and channel length scales is exceedingly computationally expensive for on-board, real-time use in control systems of devices using PEMFCs. Thus, it is imperative to use a reduced-order model for quick computations. The reduced-order model's inadequacies may be compensated for using data-driven techniques for model augmentation. This approach is demonstrated herein using integrated inference and learning on a reduced-order, asymptotic linearization of the through-cell model by Sulzer et al. [27], which has been coupled to the aforementioned 1-D channel model discretized with 20 node points.

### 2.2 Data-driven Model Augmentation via Field Inversion and Machine Learning

Field Inversion and Machine Learning [4, 5] (FIML) is a data-driven approach that helps improve the predictive accuracy of a model by inferring model inadequacies as functions of some chosen features (functions of modeled quantities). These functions are referred to as “augmentation” functions. Two main versions of the FIML framework exist in the literature. These versions, referred to as classic FIML and strongly-coupled Integrated Inference and Machine Learning (IIML) in this work, are briefly discussed in Sections 2.2.1 and 2.2.2, respectively.

#### 2.2.1 Classic FIML

Given a model

$\mathscr{R}_{m}(\widetilde{\boldsymbol{u}}_{m};\boldsymbol{\xi})=0,$ (15)

where $\widetilde{\boldsymbol{u}}_{m}$ are the model states and $\boldsymbol{\xi}$ specifies the configuration (geometry, boundary conditions, etc.), a spatial field of model inadequacies, $\delta(\boldsymbol{x})$, can be appropriately introduced within the model formulation to “augment” the model as

$\mathscr{R}_{m}(\widetilde{\boldsymbol{u}}_{m};\delta(\boldsymbol{x}),\boldsymbol{\xi})=0.$ (16)

The optimal values of $\delta(\boldsymbol{x})$ at all spatial locations in the discretized computational domain can then be inferred such that the available high-fidelity data $\boldsymbol{y}_{d}$ is matched as closely as possible by the predictions $\boldsymbol{y}(\widetilde{\boldsymbol{u}}_{m};\boldsymbol{\xi})$.
Formulating a cost function $\mathcal{C}(\boldsymbol{y}_{d},\boldsymbol{y}(\widetilde{\boldsymbol{u}}_{m};\boldsymbol{\xi}))$ then transforms the inference problem (“Field Inversion”) into an optimization problem as

$\begin{split}\delta(\boldsymbol{x})&=\text{arg}\min_{\delta^{\prime}(\boldsymbol{x})}\ \mathcal{C}(\boldsymbol{y}_{d},\boldsymbol{y}(\widetilde{\boldsymbol{u}}_{m};\boldsymbol{\xi}))+\mathcal{T}(\delta^{\prime}(\boldsymbol{x});\boldsymbol{\xi})\\ &\text{where }\mathscr{R}_{m}(\widetilde{\boldsymbol{u}}_{m};\delta^{\prime}(\boldsymbol{x}),\boldsymbol{\xi})=0.\end{split}$ (17)

Features $\boldsymbol{\eta}(\widetilde{\boldsymbol{u}}_{m},\boldsymbol{\xi})$ (modeled quantities that the model inadequacy is assumed to be a function of) are then chosen. In addition, a functional form for the model inadequacy is fixed as $\beta(\boldsymbol{\eta}(\widetilde{\boldsymbol{u}}_{m},\boldsymbol{\xi});\boldsymbol{w})$, where $\boldsymbol{w}$ are the parameters of the augmentation function $\beta$. Finally, a machine learning technique is used to obtain the optimal parameters $\boldsymbol{w}$ such that the optimal inadequacy fields $\delta(\boldsymbol{x})$ obtained from different physical configurations are matched as closely as possible by the corresponding augmentation function predictions. Formulating a loss function $\mathcal{L}(\beta(\boldsymbol{\eta}(\widetilde{\boldsymbol{u}}_{m},\boldsymbol{\xi});\boldsymbol{w}),\delta)$ then transforms the machine learning problem into an optimization problem as follows:

$\boldsymbol{w}=\text{arg}\min_{\boldsymbol{w}^{\prime}}\ \mathcal{L}(\beta(\boldsymbol{\eta}(\widetilde{\boldsymbol{u}}_{m},\boldsymbol{\xi});\boldsymbol{w}^{\prime}),\delta).$ (18)

Finally, embedding the augmentation function within the model for predictive use, the augmented model can be given as

$\mathscr{R}_{m}(\widetilde{\boldsymbol{u}}_{m};\beta(\boldsymbol{\eta}(\widetilde{\boldsymbol{u}}_{m},\boldsymbol{\xi});\boldsymbol{w}),\boldsymbol{\xi})=0.$ (19)

#### 2.2.2 Strongly-coupled Integrated Inference and Machine Learning

While the classic FIML approach is effective in extracting augmentations from configurations sharing similar physics, the task becomes progressively harder as the configurations exhibit more diverse physical behavior. This inefficiency occurs due to information loss during the machine learning step, which can be attributed to two reasons. Firstly, the field inversion problem is ill-posed, and multiple solutions for $\delta(\boldsymbol{x})$ can exist which offer similar improvements in predictive accuracy. Since the field inversion step has no information about the features, it does not necessarily choose the solution which is most suitably expressible as a function of the chosen features. Secondly, solving independent field inversion problems on different configurations can give rise to augmentation fields $\delta_{j}(\boldsymbol{x})$ which are correlated to the features in significantly different ways. Both of these inconsistencies can lead to a loss of information in the machine learning step and, hence, the so-obtained augmentation function parameters $\boldsymbol{w}$ can be sub-optimal. An integrated inference and machine learning technique (first proposed by Holland et al. [6, 7]) can address these limitations.
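To make this information loss concrete (the squared-error choices below are illustrative assumptions; the framework leaves $\mathcal{C}$ and $\mathcal{L}$ generic), classic FIML first solves, per configuration $j$,

$\delta_{j}(\boldsymbol{x})=\text{arg}\min_{\delta^{\prime}}\left\lVert\boldsymbol{y}_{d}^{j}-\boldsymbol{y}(\widetilde{\boldsymbol{u}}_{m}^{j};\boldsymbol{\xi}^{j})\right\rVert_{2}^{2}\quad\text{followed by}\quad\boldsymbol{w}=\text{arg}\min_{\boldsymbol{w}^{\prime}}\sum_{j}\sum_{\boldsymbol{x}}\left(\beta(\boldsymbol{\eta}^{j}(\boldsymbol{x});\boldsymbol{w}^{\prime})-\delta_{j}(\boldsymbol{x})\right)^{2},$

so any component of $\delta_{j}$ that is not expressible in terms of the features is discarded in the second step; integrated inference and learning avoids this by never forming the intermediate fields $\delta_{j}$ without constraint.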
In the IIML framework, the previously separate field inversion and machine learning steps are combined into a single inverse problem that seeks to directly infer the augmentation function parameters from the available high-fidelity data. To achieve this, the functional form of the augmentation is embedded within the solver. As a consequence, the inference process is constrained to explore only those inadequacy fields that can be represented by the augmentation function. In addition, this also implicitly ensures that the features are correlated to the augmentation in a consistent manner across all training cases. Mathematically, the corresponding problem statement can be written as

$\begin{split}\boldsymbol{w}&=\text{arg}\min_{\boldsymbol{w}^{\prime}}{\bigsqcup}_{j=1}^{n}\left(\mathcal{C}^{j}(\boldsymbol{y}_{d}^{j},\boldsymbol{y}_{m}^{j}(\widetilde{\boldsymbol{u}}_{m}^{j};\boldsymbol{\xi}^{j}))+\lambda_{j}\mathcal{T}^{j}(\beta(\boldsymbol{\eta}(\widetilde{\boldsymbol{u}}_{m}^{j};\boldsymbol{\xi}^{j});\boldsymbol{w}^{\prime});\boldsymbol{\xi}^{j})\right)\\ &s.t.\quad\mathscr{R}_{m}(\widetilde{\boldsymbol{u}}_{m}^{j};\beta(\boldsymbol{\eta}(\widetilde{\boldsymbol{u}}_{m}^{j};\boldsymbol{\xi}^{j});\boldsymbol{w}^{\prime}),\boldsymbol{\xi}^{j})=0.\end{split}$ (20)

Here, $\bigsqcup$ denotes an assembly operator that combines the cost and regularization functions from different configurations (indexed by $j$) into a single combined objective function. The assembly operator can be as simple as a weighted sum. As can be seen, the inference and learning procedures in this technique are no longer distinct, and the function parameters $\boldsymbol{w}$ are directly inferred by solving a single optimization problem.

To perform integrated inference and learning using the approach mentioned above, the augmentation function needs to be embedded into the numerical solver to enable accurate sensitivity evaluation. Embedding the augmentation involves significant changes to the solver code, which may require considerable effort. When testing several augmentation candidates and/or working with an intricate solver, a non-intrusive solution technique can save time, effort and resources while allowing increased flexibility, ease of use and portability. This work presents a novel weakly-coupled version of Integrated Inference and Machine Learning (see Section 3.3) that offers the benefits of the aforementioned strongly-coupled IIML while bypassing the need to embed the augmentation into the numerical solver. This framework is demonstrated by augmenting the aforementioned fuel cell model.

## 3 Methodology

This section outlines and explains the components of the weakly-coupled integrated inference and learning technique used to obtain the augmentation function parameters $\boldsymbol{w}$ from available data without embedding the augmentation function within the solver. Section 3.1 briefly discusses how the model was augmented, what features were chosen, and what neural network architecture was used. Thereafter, section 3.2 explains the minimal changes that need to be made to the solver, along with the iterative method used to solve the augmented equations and the use of finite differences to obtain sensitivities with respect to the augmentation field $\delta(\boldsymbol{x})$. Following that, section 3.3 details the proposed weakly-coupled integrated inference and machine learning strategy.
### 3.1 Augmenting the Numerical Solver

The reduced-order through-cell model along with the full channel model constitute a system of differential algebraic equations (DAEs) which are implemented in Python using the PyBaMM library [28] and numerically solved within the CasADi framework [29] via the SUNDIALS [30] solver. After testing different ways to augment the model, the most promising approach was found to be modifying the algebraic model used to evaluate the equilibrium water content, $\lambda_{\text{eq}}$ (used to calculate $S_{ad}$ in Eqn. 4), by multiplying it with the augmentation function $\beta$. The equilibrium water content is typically modeled as a function of temperature and relative humidity [26], but since the precise values of these quantities in the catalyst layer cannot be measured during FC operation, this was a logical place to insert the augmentation. Furthermore, the membrane water content is sensitive to the equilibrium value across various physical conditions, viz. dry/humid, low/high current density, low/high temperature, etc. The augmented form of the source term $S_{ad}$ (see Eqn. 4) is shown in Eqn. 21.

$S_{ad}^{\text{aug}}=\frac{k_{ad}}{h_{cl}V_{m}}({\color[rgb]{0.7,0,0}\beta_{\text{aug}}(\boldsymbol{\eta}_{\text{aug}};\boldsymbol{w})}\lambda_{eq}-\lambda).$ (21)

Here, $\boldsymbol{\eta}_{\text{aug}}$ represents the features and $\boldsymbol{w}$ represents the parameters that characterize the augmentation function. The feature set used for this application contained the following quantities.

1. Mole fraction of water vapor in the anode channel (from Eqn. 9)
2. Temperature inside the cathode channel (from Eqn. 13)
3. Mole fraction of water vapor in the cathode channel (from Eqn. 9)
4. Water content in the anode catalyst layer (from Eqn. 8)
5. Water vapor concentration in the anode catalyst layer (from Eqn. 5)
6. Water content in the cathode catalyst layer (from Eqn. 8 solved in the cathode domain)
7. Water vapor concentration in the cathode catalyst layer (from Eqn. 5 solved in the cathode domain)
8. Membrane water content (from Eqn. 3)

### 3.2 A Non-Intrusive Iterative Method to Solve Augmented Models

#### 3.2.1 Introducing an augmentation term into the model equations

To introduce the augmentation term into the solver, $\delta(\boldsymbol{x})$ can be declared as an additional field variable in the domain that remains constant during a single run of the numerical solver. Thus, $\delta(\boldsymbol{x})$ at all spatial locations is initialized before every solver run with a set of a priori available values. Note that, since the only change to the solver code is creating a new array and multiplying its corresponding local values into a term in the model equations, the solver code needs to undergo minimal change.

#### 3.2.2 Solving the augmented model

Assuming that an augmentation function $\beta(\boldsymbol{\eta};\boldsymbol{w})$ is given, we need to solve the model as described in Eqn. 22.

$\mathscr{R}(\widetilde{\boldsymbol{u}}_{m};\delta(\boldsymbol{x}),\boldsymbol{\xi})=0\quad s.t.\quad\delta(\boldsymbol{x})=\beta(\boldsymbol{\eta}(\widetilde{\boldsymbol{u}}_{m};\boldsymbol{\xi});\boldsymbol{w}).$ (22)

To do this without embedding the augmentation function $\beta(\boldsymbol{\eta})$ into the solver, one can solve the augmented model in an iterative manner as shown in Eqn. 23.
$\mathscr{R}(\widetilde{\boldsymbol{u}}_{m,i+1};\delta_{i}(\boldsymbol{x}),\boldsymbol{\xi})=0\quad s.t.\quad\delta_{i}(\boldsymbol{x})=\rho\delta_{i-1}(\boldsymbol{x})+(1-\rho)\beta(\boldsymbol{\eta}(\widetilde{\boldsymbol{u}}_{m,i};\boldsymbol{\xi});\boldsymbol{w}).$ (23) Here, $\rho$ is a relaxation factor used to avoid stability issues in the numerical solver. The initial field $\delta_{0}(\boldsymbol{x})$ can assume a constant value of $0$ or $1$ throughout the domain, depending on whether the augmentation term is additive or multiplicative, respectively. An augmentation residual can be defined as $R_{\text{aug}}=\left\lVert\delta_{i}(\boldsymbol{x})-\delta_{i-1}(\boldsymbol{x})\right\rVert_{2}.$ (24) A stopping criterion of $R_{\text{aug}}<10^{-3}$ worked well for the simulations performed in this work. While convergence is not guaranteed, the overwhelming majority of the configurations tested in this work converged, and the remainder exhibited oscillatory behavior in the augmentation residual. It is noteworthy that, since the augmentation field changes by increasingly smaller amounts from one augmentation iteration to the next (given an appropriate value of the relaxation factor $\rho$), the computational cost required for the solver to converge keeps decreasing as the iterations progress. Thus, carefully choosing the convergence criterion can be instrumental in significantly reducing the computational costs associated with the aforementioned iterative solution method.

### 3.3 Weakly-coupled Integrated Inference and Machine Learning

This version of IIML constrains the inadequacy field to stay consistent with the functional form chosen for the augmentation by solving the field inversion and machine learning problems in a predictor-corrector fashion while simultaneously inferring from multiple data sources. This is done by learning the augmentation each time the inadequacy field is updated, i.e. after every iteration of field inversion. Note that while the inadequacy fields are updated independently for all training cases, the machine learning step acts as a synchronizing step for these individual optimization problems. Data from the inadequacy fields ($\delta^{i}(\boldsymbol{x})$) and the corresponding feature fields ($\boldsymbol{\eta}^{i}(\boldsymbol{x})$) are collated from all training cases, and a sufficient number of machine learning iterations (epochs) are performed to ensure that the feature-to-augmentation map learns any new information from the updated solution fields. After the machine learning step, a “field correction” is performed by solving the model again with the newly learned augmentation function ($\beta(\boldsymbol{\eta}(\widetilde{\boldsymbol{u}}_{m};\boldsymbol{\xi});\boldsymbol{w})$), using the iterative solve sketched below. When the simulation converges, the predicted augmentation field ($\beta^{i}(\boldsymbol{x})=\beta(\boldsymbol{\eta}(\widetilde{\boldsymbol{u}}_{m}^{(i,\beta)};\boldsymbol{\xi}^{i});\boldsymbol{w})$) is used as the input for the next field inversion iteration. The superscript $(i,\beta)$ denotes that the solution field corresponds to the $i^{\text{th}}$ training case and is obtained by solving the model with the augmentation function (not the inadequacy field). Solving the model again with the updated augmentation function is crucial to ensure that the model predictions are consistent with the augmentation function throughout the inference process.
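Since the field-correction step is exactly the iterative solve of Eqs. (23)–(24), a minimal Python sketch is given below. The functions `solve_model` and `features` are hypothetical stand-ins for the PyBaMM/CasADi forward solve and the feature extraction of Section 3.1; the relaxation factor $\rho=0.5$ is an assumed value, while only the tolerance $10^{-3}$ is taken from the text.

```python
import numpy as np

# Toy stand-ins so the sketch runs; the real versions wrap the PyBaMM/CasADi
# forward solve and the feature extraction eta(u_m; xi) of Section 3.1.
def solve_model(delta, xi):
    return 1.0 / (1.0 + delta)          # hypothetical state field u_m

def features(u_m, xi):
    return u_m                          # hypothetical feature extraction

def solve_augmented(beta_fn, delta0, xi, rho=0.5, tol=1e-3, max_iter=100):
    """Relaxed fixed-point iteration of Eq. (23) with the residual of Eq. (24)."""
    delta_prev = np.asarray(delta0, dtype=float).copy()
    u_m = None
    for _ in range(max_iter):
        u_m = solve_model(delta_prev, xi)                  # R(u_{m,i+1}; delta_i, xi) = 0
        delta = rho * delta_prev + (1.0 - rho) * beta_fn(features(u_m, xi))
        if np.linalg.norm(delta - delta_prev) < tol:       # R_aug of Eq. (24)
            return u_m, delta
        delta_prev = delta
    return u_m, delta_prev                                 # may be oscillatory

# Example with a trivial 'augmentation function' on a 20-node field:
u, delta = solve_augmented(lambda eta: 1.0 + 0.1 * eta, np.ones(20), xi=None)
```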
Finally, the sensitivity $\dfrac{d\mathcal{J}}{d\beta(\boldsymbol{x})}$ is calculated and the inadequacy field is updated using a steepest descent method, similar to a field inversion iteration. The step length $\alpha^{i}$ needs to be set manually. In this work, it was set to $\displaystyle\frac{0.05}{\left\lVert\dfrac{d\mathcal{J}^{i}}{d\beta^{i}(\boldsymbol{x})}\right\rVert_{\infty}}$. In summary, the following three consistencies are ensured when using the weakly-coupled IIML described above.

1. Formulating the objective as a function of model predictions ensures that the inadequacy field iterates $\delta^{i}(\boldsymbol{x})$ are model-consistent.
2. Machine learning ensures that the inadequacy field iterates $\delta^{i}(\boldsymbol{x})$ are always consistent with the functional form of the augmentation function across all iterations.
3. Field correction ensures that the augmentation field iterates $\beta^{i}(\boldsymbol{x})$ are always consistent with the augmented model.

A flowchart describing this process is shown in Fig. 3. Figure 3: Schematic of the weakly-coupled IIML procedure It should be noted here that the optimization trajectory for weakly-coupled IIML could be significantly different from that of its strongly-coupled counterpart. The reason is as follows. For the $i^{\text{th}}$ training case (the computational domain for which consists of $N^{i}_{x}$ discrete spatial locations), the discretized inadequacy field can be represented as a vector in $\mathbb{R}^{N^{i}_{x}}$. Now, the set $\Delta^{i}$, consisting of all inadequacy fields $\delta^{i}(\boldsymbol{x})$ for which there exists some set of parameters $\boldsymbol{w}$ such that $\delta^{i}(\boldsymbol{x})=\beta(\boldsymbol{\eta}(\widetilde{\boldsymbol{u}}_{m}^{i};\boldsymbol{\xi}^{i});\boldsymbol{w})$ and $\mathscr{R}_{m}(\widetilde{\boldsymbol{u}}_{m}^{i};\delta^{i}(\boldsymbol{x}),\boldsymbol{\xi}^{i})=0$, forms a nonlinear manifold in $\mathbb{R}^{N^{i}_{x}}$. Strongly-coupled IIML is, by structure, constrained to explore only this nonlinear manifold. The field inversion process (which only consists of gradient-descent-based inadequacy field updates), however, is free to find an optimal solution in the entire $N_{x}^{i}$-dimensional space. By introducing the machine learning and field correction steps between gradient-descent-based inadequacy field updates, the weakly-coupled IIML performs a nonlinear projection from a point in the $N_{x}^{i}$-dimensional space to a point within the learnable manifold. Hence, within each inference iteration, the inadequacy field can jump out of the learnable manifold after the gradient-descent-based update and is projected back into the manifold by the machine learning and field correction steps. This difference in how the iterations progress for the strongly- and weakly-coupled IIML can result in different optimization trajectories within the manifold. While the two techniques may converge to the same solution, the difference in optimization trajectories may cause the weakly-coupled IIML, in some cases, to converge to a different local minimum compared to strongly-coupled IIML. In this work, the functional form for the augmentation was chosen to be a neural network with 2 hidden layers containing 7 nodes each. The sigmoid activation function was used in the hidden layers. The ReLU activation function was used in the output layer to ensure that the augmentation was non-negative. The Keras library [31] was used to create and train the network.
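For concreteness, a minimal Keras sketch of the network just described is given below, using the training settings quoted in the next paragraph (Adam, learning rate $10^{-3}$, 500 epochs) and the eight-feature input of Section 3.1. The variable names and the randomly generated training arrays are illustrative assumptions; in the actual workflow these arrays would be the collated inadequacy and feature fields.

```python
import numpy as np
from tensorflow import keras

# Augmentation network: 8 features -> beta, 2 hidden layers of 7 nodes each,
# sigmoid activations, ReLU output to keep the augmentation non-negative.
model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(7, activation="sigmoid"),
    keras.layers.Dense(7, activation="sigmoid"),
    keras.layers.Dense(1, activation="relu"),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss="mse")

# eta_all / delta_all: feature and inadequacy-field samples collated from all
# training cases after a field-inversion update (illustrative random data here;
# 280 samples = 14 cases x 20 spatial nodes, matching the setup in Section 4).
eta_all = np.random.rand(280, 8)
delta_all = np.random.rand(280, 1)
model.fit(eta_all, delta_all, epochs=500, verbose=0)
```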
The Adam optimizer [32] was used to train the model for a total of 500 epochs after every gradient-descent-based update of the augmentation field. The learning rate was set to $10^{-3}$.

## 4 Results

The available dataset contains 1224 cases, each uniquely characterized by different inflow conditions. The high-fidelity data used to infer the augmentation function is the corresponding steady-state x-averaged membrane water content. The cost function for any case with ID $j$ was defined as $\mathcal{C}_{j}=\left\lVert\lambda_{j}-\lambda_{\text{data},j}\right\rVert_{2}^{2},$ (25) where $\lambda$ refers to the spatial field of the membrane water content along the channel direction $y$. Since no regularization is used, the cost function is identical to the individual objective function for a given case. The combined objective function for all the cases is calculated as the weighted sum of the individual cost functions of all training cases, with all weights set to unity. Mathematically, $\mathcal{J}=\sum_{j}\alpha_{j}\mathcal{C}_{j},$ (26) where $\alpha_{j}$ represents the weight for the $j^{\text{th}}$ case, which in this particular instance is set to 1 for all cases. The spatial domain used to solve the model is discretized along the channel into 20 spatial nodes. The reduced-order model is used to obtain the steady-state solution by running it for a sufficiently long span of physical time, which in this case was 1000 seconds. Due to the relatively low dimensionality of the spatial discretization, finite differences were found feasible for obtaining the sensitivities of the cost function w.r.t. the augmentation field, $\beta$. The step-size used for finite differences was $10^{-4}$. The model was trained on only 14 configurations out of 1224. The corresponding IDs for these training cases in the dataset are 40, 100, 125, 155, 190, 230, 400, 685, 740, 840, 865, 1000, 1090 and 1200. These cases were chosen arbitrarily to include different kinds of input conditions (e.g. different values of relative humidity, channel temperatures, cell current densities, and stoichiometric ratios). A representative plot of the residual histories of the states being solved for a given augmentation field is shown in Fig. 4. As can be seen, the residuals approach zero within the chosen time interval over which the model is solved. It must be noted that no residual-based stopping criterion is built into the DAE solver used for this work. Figure 4: Representative plot of the residual decay of all the state variables solved for by the model A representative plot of the augmentation residual ($R_{\text{aug}}$) history resulting from the iterative solution of the model with a non-embedded augmentation is presented in Fig. 5. Figure 5: Representative plot of $R_{\text{aug}}$ convergence across the iterative solution of a model with non-embedded augmentation (from Eqn. 24) Such iterative solutions need to be performed during both training and prediction. Note that a stopping condition of $R_{\text{aug}}<10^{-3}$ was found to yield sufficiently converged results. While there exist a few cases where such convergence cannot be achieved and the residuals keep oscillating, no cases exhibit divergent behavior. Even in the cases where the augmentation residuals keep oscillating, the residual magnitudes are very small (of the order of $10^{-2}$).

### 4.1 Training

Figure 6: Minimization of the objective function with inference (weakly-coupled IIML) iterations (from Eqn. 26)
Figure 7: Ionomer water content predictions for the training cases. Panel metrics ($\mathscr{P}_{1}^{\lambda}$/$\mathscr{P}_{2}^{\lambda}$): (a) 0.27/0.09; (b) 0.18/0.03; (c) 0.08/0.008; (d) 0.31/0.1; (e) 0.04/0.004; (f) 0.42/0.18; (g) 0.62/0.46; (h) 0.44/0.25; (i) 0.86/0.85; (j) 0.75/0.62; (k) 0.43/0.20; (l) 0.65/0.48; (m) 0.96/0.95; (n) 0.49/0.26

The combined objective function for all 14 cases across inference iterations is shown in Fig. 6. The optimization could not proceed beyond iteration 23 because any subsequent augmentation function iterate caused the PyBaMM fuel cell model solver to diverge for some training cases (probably because of the increased numerical stiffness/instability introduced into the model by the augmentation). Predictive improvements in the ionomer water content ($\lambda$) distributions w.r.t. the available high-fidelity data for all training cases are plotted in Fig. 7. As can be seen in the figure, some cases show very good improvements while others improve only marginally. A possible cause for this behavior could be that the combined objective function is less sensitive to the feature-space regions where the features of the marginally improved cases lie. While a more careful choice of the training cases and of the corresponding weights of the individual objective functions within the combined objective function might help, the objective here is to demonstrate the viability of the IIML approach to obtain generalizable improvements to the model.

### 4.2 Testing

Once the training was completed, the resulting model was further tested over all available 1224 cases; the results are summarized in Figs. 8 and 9 using the following performance metrics, $\mathscr{P}_{1}$ and $\mathscr{P}_{2}$, which are defined for any quantity of interest $q$ as $\mathscr{P}_{1}^{q}=\frac{2\left\lVert q_{\text{baseline}}-q_{\text{data}}\right\rVert_{2}}{\left\lVert q_{\text{augmented}}-q_{\text{data}}\right\rVert_{2}+\left\lVert q_{\text{baseline}}-q_{\text{data}}\right\rVert_{2}}-1$ (27) $\mathscr{P}_{2}^{q}=\frac{\mathscr{P}_{1}^{q}\left\lVert q_{\text{augmented}}-q_{\text{baseline}}\right\rVert_{2}}{\left\lVert q_{\text{augmented}}-q_{\text{data}}\right\rVert_{2}+\left\lVert q_{\text{baseline}}-q_{\text{data}}\right\rVert_{2}}.$ (28) The performance metric $\mathscr{P}_{1}$, by design, is positive for cases where the augmented model gives a smaller $L_{2}$ error compared to the baseline model and negative when the error increases. The performance metric $\mathscr{P}_{2}$ scales $\mathscr{P}_{1}$ with the $L_{2}$ norm of the difference between the predictions from the augmented and baseline models.
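A direct numpy transcription of Eqs. (27)–(28) is sketched below; the array names and the example profiles are illustrative.

```python
import numpy as np

def performance_metrics(q_aug, q_base, q_data):
    """Compute P1 and P2 of Eqs. (27)-(28) for one case and one quantity q."""
    e_aug = np.linalg.norm(q_aug - q_data)    # L2 error of the augmented model
    e_base = np.linalg.norm(q_base - q_data)  # L2 error of the baseline model
    p1 = 2.0 * e_base / (e_aug + e_base) - 1.0
    p2 = p1 * np.linalg.norm(q_aug - q_base) / (e_aug + e_base)
    return p1, p2

# Example: the augmented profile is closer to the data, so P1 > 0.
y = np.linspace(0.0, 1.0, 20)                 # 20 nodes along the channel
q_data = 10.0 + 2.0 * y
p1, p2 = performance_metrics(q_data + 0.1, q_data + 0.5, q_data)
print(f"P1 = {p1:.2f}, P2 = {p2:.2f}")
```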
Thus, for a given case, $\lvert\mathscr{P}_{2}/\mathscr{P}_{1}\rvert\ll 1$ means that the baseline and augmented profiles are very close and that the baseline profile was reasonably accurate in the first place. On the other hand, if $\lvert\mathscr{P}_{2}/\mathscr{P}_{1}\rvert$ is close to unity, it means that the augmented model predicts accurately w.r.t. the data and that the baseline model was significantly inaccurate compared to the augmented model. Figure 8: Performance metric $\mathscr{P}_{1}^{\lambda}$ for ionomer water content predictions across all 1224 cases Figure 9: Performance metric $\mathscr{P}_{2}^{\lambda}$ for ionomer water content predictions across all 1224 cases Figure 10: Ionomer water content predictions for cases with high $\mathscr{P}_{1}^{\lambda}$ performance metrics. Panel metrics ($\mathscr{P}_{1}^{\lambda}$/$\mathscr{P}_{2}^{\lambda}$): (a) 0.27/0.09; (b) 0.37/0.32; (c) 0.30/0.21; (d) 0.48/0.36; (e) 0.94/0.94; (f) 0.63/0.47 Figure 11: Ionomer water content predictions for cases with $\mathscr{P}_{1}^{\lambda}$ values closest to zero. Panel metrics: (a) -0.07/-0.02; (b) -0.05/-0.01; (c) -0.02/-0.01 Figure 12: Ionomer water content predictions for cases with low $\mathscr{P}_{1}^{\lambda}$ performance metrics. Panel metrics: (a) -0.33/-0.23; (b) -0.36/-0.29; (c) -0.38/-0.25. Conditions correspond to low cathode stoichiometry, high temperature, and low anode inlet relative humidity, where the augmented model over-predicts drying of the cell (low membrane water content) The cases where accuracy has improved are shown in green, whereas the cases where it has deteriorated are shown in red. For 1087 out of 1224 cases, the augmented model resulted in a lower $L_{2}$ error compared to the baseline, as indicated by a positive $\mathscr{P}_{1}$ metric. Figs. 10, 11 and 12 show representative results associated with highly improved, marginally different, and significantly deteriorated $\mathscr{P}_{1}$ performance metrics. As can be seen in the results, the model seems to improve the predictions for a range of different physical conditions after training on just 14 representative cases. It should be noted that most cases with nearly zero or negative performance metrics exhibit a plateau in the spatial membrane water content distributions, which indicates a saturated flow. Also, note that for some cases with values of $\mathscr{P}_{1}$ close to zero, the predictions are significantly different, predicting more accurately in one part of the physical space while falling short of even the baseline model in others. The plateau physically corresponds to saturated relative humidity conditions in the anode and cathode catalyst layers (and possibly the channels), which drastically changes the water transport mechanism from a diffusion-driven flux to a mechanism of condensation and capillary flow.
The densities of the gas and liquid phases differ by a factor of roughly $10^{3}$, which might explain why additional augmentations are needed to capture this highly nonlinear change in system behavior. Figs. 13 and 14 show the qualitative trends of inflow and outflow conditions overlaid on the evaluated performance metrics. Empirical evidence suggests that the augmented model performs significantly worse than its baseline counterpart under two specific sets of inflow/outflow conditions, both of which are characterized by high cell current densities. The first set of conditions involves a high cathode stoichiometric ratio, high anode relative humidity, and low temperature; the second involves a low cathode stoichiometric ratio, low relative humidity, and high temperature. Here, the stoichiometric ratio refers to the ratio of inlet oxygen to reacted oxygen. A high cathode stoichiometric ratio is important because it increases the water removal rate from the cathode inlet area, causing larger non-uniformity and partially saturated conditions along the channel due to the higher water generation rate at higher currents. Additionally, the sharp changes in performance metrics across consecutive case numbers are consistent with the variation of the pressure difference between the anode inlet and outlet. This can be seen from the high-frequency oscillations with respect to case numbers in Fig. 14. Given the complex interactions between various sub-models within the fuel-cell model itself and such a high-dimensional feature space, a model would require a highly intricate functional form and a large amount of data to make accurate predictions for arbitrary inflow conditions, if such predictions are possible at all. Figure 13: Qualitative trends of inflow conditions across different cases overlaid on $\mathscr{P}_{1}^{\lambda}$ trends (black dashed lines indicate cases used for training). The performance of the model augmentation depends strongly on the anode inlet RH, temperature, and operating current; the worst performance corresponds to high-current, low-temperature, and low anode inlet RH conditions.
Figure 14: Qualitative trends of anode pressure difference across different cases overlaid on $\mathscr{P}_{1}^{\lambda}$ trends

### 4.3 Changes in Current Density Predictions

Figure 15: Performance metric $\mathscr{P}_{1}^{j}$ for current density predictions across all 1224 cases Figure 16: Performance metric $\mathscr{P}_{2}^{j}$ for current density predictions across all 1224 cases Figure 17: Current density and water content predictions for cases with high $\mathscr{P}_{1}^{j}$ performance metrics. Panel metrics: (a) $\mathscr{P}_{1}^{j}/\mathscr{P}_{2}^{j}$ = 0.74/0.64; (b) 0.65/0.55; (c) 0.53/0.34; (d) $\mathscr{P}_{1}^{\lambda}/\mathscr{P}_{2}^{\lambda}$ = -0.04/-0.03; (e) 0.36/0.31; (f) 0.56/0.37 Figure 18: Current density and water content predictions for cases with low $\mathscr{P}_{1}^{j}$ performance metrics. Panel metrics: (a) $\mathscr{P}_{1}^{j}/\mathscr{P}_{2}^{j}$ = -0.34/-0.18; (b) -0.17/0.07; (c) -0.11/-0.02; (d) $\mathscr{P}_{1}^{\lambda}/\mathscr{P}_{2}^{\lambda}$ = 0.08/0.01; (e) -0.29/-0.21; (f) 0.48/0.23 To judge the quality of predictions for other physical quantities, individual comparisons of the current density distributions against the corresponding high-fidelity data are presented in Figs. 17 and 18 for a few selected cases showing better and worse results, respectively. The performance metrics for the current density predictions across all 1224 cases are summarized in Figs. 15 and 16. Since the current density is not the intended output of the augmented model, the presented results are not completely unexpected. However, it should be noted that even for several cases with fairly low $\mathscr{P}_{1}$, $\mathscr{P}_{2}$ is significantly smaller in magnitude, i.e. the predictions from the augmented model are close to those from the baseline model. Thus, for most cases, the augmented model either stays close to the baseline model or improves on it. For several cases with high performance metrics, we do see a significant correction in the current density predictions. This result is expected because the membrane water content impacts the proton conductivity, as shown in Eqn. 1, and hence the current density distribution should follow the shape of the membrane water content unless the stoichiometric ratio is very low or the temperature gradient along the y-direction is high. Finally, note that for a few cases (e.g. case 1199), even though the prediction error for the ionomer water content decreases, the qualitative trends of the water content are wrong, and correspondingly, the current density predictions also contain significant errors when compared to the high-fidelity data. Further work and analysis are needed to ascertain whether such cases require a different treatment during the inference process, or a different physical augmentation point in the model. The impact of using less data than in the above experiments can be found in Appendix B.

## 5 Conclusions

The weakly-coupled Integrated Inference and Machine Learning (IIML) framework is presented, which enables the inference of data-driven, model-consistent augmentations.
Weakly-coupled IIML constrains the inference problem to a learnable manifold and enforces consistency of the feature-to-augmentation map across the training dataset. This is achieved by performing a machine learning step after every inference update made to the spatial field of augmentation values. To maintain consistency with the learned augmentation, a field correction step, consisting of a forward solve using the augmented model, is carried out before starting the next inference update. When used with an iterative solution strategy, this framework removes the requirement to embed the augmentation function within the numerical solver. The only changes needed are the addition of a spatial field of augmentation values as an array and the application of the appropriate augmentation values within the numerics as required. This can significantly reduce the time and effort required to set up an inference problem, at the expense of increased computation time (owing to the iterative solution of the augmented model). However, this trade-off is acceptable when working with reduced-order (fast-running) models. The weakly-coupled IIML framework was used to augment an existing linearized 1+1D proton exchange membrane fuel cell (PEMFC) model in order to better predict the x-averaged membrane water content distribution along the channel length (y-direction). To introduce the augmentation term into the model, the equilibrium membrane water content ($\lambda_{\text{eq}}$) function was multiplied by an augmentation function $\beta$. The augmentation was assumed to be a function of eight features (which in turn are functions of model states): the mole fractions of water vapor in the anode and cathode channels, the water vapor concentrations and water contents in the anode and cathode catalyst layers, the membrane water content, and the temperature inside the cathode channel. The high-fidelity data for all 1224 cases (operating conditions including flow rates, relative humidity, and power level) in the dataset were obtained from a proprietary 2D model of a fuel cell from the 2016 Toyota Mirai. To demonstrate generalizability, only 14 of the available 1224 cases were chosen for training. The choice of the training cases was based on the need to expose the neural network to the different types of physical phenomena that can arise from significantly different boundary conditions. Since the objective here is to demonstrate the range of applicability of such augmentations, the number of training cases in this work was intentionally kept as low as possible to minimize similarity with most testing cases. Once trained, the predictive capability of the augmented model was demonstrated for the intended output, i.e. the membrane water content distribution, and for another output not involved in the inference process, viz. the current density distribution. Performance metrics were designed to judge the capability of the augmented model relative to the baseline model. The membrane water content predictions improved for a majority (1087/1224) of cases, which testifies to the capability of the framework to produce generalizable augmentations. However, a small fraction of cases showed predictions worse than the baseline model and did not reproduce the respective water content distributions accurately.
A more rigorous tuning of the neural network hyper-parameters, a more careful choice of the training cases, and a better design of the combined objective function might help improve the predictive capabilities of the current model. The changes in the current density predictions were more nuanced, in the sense that even cases with a fairly low performance metric showed predictions comparable to the baseline model, with large discrepancies observed only in a small part of the domain (usually near the cathode air inlet, $y=0$). However, for cases exhibiting high performance metrics, varying degrees of improvement were observed in the predictive accuracy. Hence, to some extent, the IIML procedure improved not only the ionomer water content predictions but the current density predictions as well. For comparative purposes, a second augmentation was also trained using only 7 of the 14 configurations in the original training dataset. The results confirm that having too little data reduces the predictive accuracy of the augmented model across the testing cases. Although using fewer training cases allows the inference process to overfit augmentation behavior specific to the training cases (and hence predict more accurately on them), the resulting augmentation function generally yields less accurate predictions than one trained on a larger dataset. Overall, the application of the weakly-coupled IIML approach resulted in improved predictive accuracy of the fuel cell model. Further gains in the accuracy of an augmented model can be achieved by introducing more refined physical parametrizations. Along these lines, we remark that the main contribution of this study, the IIML approach, presents the modeler with a new set of tools.

## Acknowledgements

The authors acknowledge funding from Toyota Motor Engineering and Manufacturing North America. We thank Ken Butts and Oana Nitulescu for their insightful discussions.

## References

* [1] J. Weatheritt, R. Sandberg, A novel evolutionary algorithm applied to algebraic modifications of the RANS stress–strain relationship, Journal of Computational Physics 325 (2016) 22–37. doi:10.1016/j.jcp.2016.08.015. * [2] M. Schmelzer, R. P. Dwight, P. Cinnella, Discovery of algebraic Reynolds-stress models using sparse symbolic regression, Flow, Turbulence and Combustion 104 (2) (2020) 579–603. * [3] J. Ling, A. Kurzawski, J. Templeton, Reynolds averaged turbulence modelling using deep neural networks with embedded invariance, Journal of Fluid Mechanics 807 (2016) 155–166. * [4] E. J. Parish, K. Duraisamy, A paradigm for data-driven predictive modeling using field inversion and machine learning, Journal of Computational Physics 305 (2016) 758–774. doi:10.1016/j.jcp.2015.11.012. * [5] A. P. Singh, S. Medida, K. Duraisamy, Machine-learning-augmented predictive modeling of turbulent separated flows over airfoils, AIAA Journal 55 (7) (2017) 2215–2227. * [6] J. R. Holland, J. D. Baeder, K. Duraisamy, Field inversion and machine learning with embedded neural networks: Physics-consistent neural network training, AIAA Aviation 2019 Forum (2019). doi:10.2514/6.2019-3200. * [7] J. R. Holland, J. D. Baeder, K.
Duraisamy, Towards integrated field inversion and machine learning with embedded neural networks for RANS modeling, AIAA Scitech 2019 Forum (2019). doi:10.2514/6.2019-1884. * [8] I. B. H. Saïdi, M. Schmelzer, P. Cinnella, F. Grasso, CFD-driven symbolic identification of algebraic Reynolds-stress models, Journal of Computational Physics 457 (2022) 111037. * [9] F. Waschkowski, Y. Zhao, R. Sandberg, J. Klewicki, Multi-objective CFD-driven development of coupled turbulence closure models, Journal of Computational Physics 452 (2022) 110922. * [10] A. G. Olabi, T. Wilberforce, M. A. Abdelkareem, Fuel cell application in the automotive industry and future perspective, Energy 214 (2021) 118955. doi:10.1016/j.energy.2020.118955. * [11] W. R. Daud, R. E. Rosli, E. H. Majlan, S. A. Hamid, R. Mohamed, T. Husaini, PEM fuel cell system control: A review (12 2017). doi:10.1016/j.renene.2017.06.027. * [12] H. Yuan, H. Dai, X. Wei, P. Ming, Model-based observers for internal states estimation and control of proton exchange membrane fuel cell system: A review (8 2020). doi:10.1016/j.jpowsour.2020.228376. * [13] K. Priya, K. Sathishkumar, N. Rajasekar, A comprehensive review on parameter estimation techniques for Proton Exchange Membrane fuel cell modelling (10 2018). doi:10.1016/j.rser.2018.05.017. * [14] M. Arif, S. C. Cheung, J. Andrews, Different Approaches Used for Modeling and Simulation of Polymer Electrolyte Membrane Fuel Cells: A Review, Energy and Fuels 34 (10) (2020) 11897–11915. doi:10.1021/acs.energyfuels.0c02414. * [15] A. Goshtasbi, B. L. Pence, J. Chen, M. A. DeBolt, C. Wang, J. R. Waldecker, S. Hirano, T. Ersal, A Mathematical Model toward Real-Time Monitoring of Automotive PEM Fuel Cells, Journal of The Electrochemical Society 167 (2) (2020) 024518. doi:10.1149/1945-7111/ab6dd1. * [16] R. Vetter, J. O. Schumacher, Free open reference implementation of a two-phase PEM fuel cell model, Computer Physics Communications 234 (2019) 223–234. doi:10.1016/j.cpc.2018.07.023. * [17] G. Napoli, M. Ferraro, F. Sergi, G. Brunaccini, V. Antonucci, Data driven models for a PEM fuel cell stack performance prediction, International Journal of Hydrogen Energy 38 (26) (2013) 11628–11638. * [18] Z. Li, R. Outbib, D. Hissel, S. Giurgea, Data-driven diagnosis of PEM fuel cell: A comparative study, Control Engineering Practice 28 (2014) 1–12. * [19] I.-S. Han, C.-B. Chung, Performance prediction and analysis of a PEM fuel cell operating on pure oxygen using data-driven models: A comparison of artificial neural network and support vector machine, International Journal of Hydrogen Energy 41 (24) (2016) 10202–10211. * [20] R. Ma, T. Yang, E. Breaz, Z. Li, P. Briois, F. Gao, Data-driven proton exchange membrane fuel cell degradation predication through deep learning method, Applied Energy 231 (2018) 102–115. * [21] G. Zhu, W. Chen, S. Lu, X. Chen, Parameter study of high-temperature proton exchange membrane fuel cell using data-driven models, International Journal of Hydrogen Energy 44 (54) (2019) 28958–28967. * [22] L. Sun, G. Li, Q. Hua, Y. Jin, A hybrid paradigm combining model-based and data-driven methods for fuel cell stack cooling control, Renewable Energy 147 (2020) 1642–1652. * [23] B. Wang, B. Xie, J. Xuan, K.
Jiao, AI-based optimization of PEM fuel cell catalyst layers for maximum power density via data-driven surrogate modeling, Energy Conversion and Management 205 (2020) 112460. * [24] J. B. Siegel, S. V. Bohac, A. G. Stefanopoulou, S. Yesilyurt, Nitrogen Front Evolution in Purged Polymer Electrolyte Membrane Fuel Cell with Dead-Ended Anode, J. Electrochem. Soc. 157 (7) (2010) B1081–B1093. doi:10.1149/1.3425743. * [25] K. Duraisamy, Perspectives on machine learning-augmented Reynolds-averaged and large eddy simulation models of turbulence, Physical Review Fluids (2021). * [26] T. E. Springer, T. A. Zawodzinski, S. Gottesfeld, Polymer Electrolyte Fuel Cell Model, J. Electrochem. Soc. 138 (8) (1991) 2334–2341. * [27] V. Sulzer, P. Mohtat, J. B. Siegel, Reduced-order modeling of PEM fuel cells using asymptotic analysis (Apr 2022). doi:10.1149/osf.io/yntze. URL ecsarxiv.org/yntze * [28] V. Sulzer, S. G. Marquis, R. Timms, M. Robinson, S. J. Chapman, Python battery mathematical modelling (PyBaMM), Journal of Open Research Software 9 (1) (2021). * [29] J. A. Andersson, J. Gillis, G. Horn, J. B. Rawlings, M. Diehl, CasADi: a software framework for nonlinear optimization and optimal control, Mathematical Programming Computation 11 (1) (2019) 1–36. * [30] A. C. Hindmarsh, P. N. Brown, K. E. Grant, S. L. Lee, R. Serban, D. E. Shumaker, C. S. Woodward, SUNDIALS: Suite of nonlinear and differential/algebraic equation solvers, ACM Transactions on Mathematical Software (TOMS) 31 (3) (2005) 363–396. * [31] F. Chollet, et al., Keras, https://keras.io (2015). * [32] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014).

## Appendix A Fuel Cell Model Source Terms

The definitions of all source terms and boundary conditions for all equations used to model the fuel cell (see Section 2.1) are given as follows. Note that, in this appendix, $\beta$ denotes the charge-transfer symmetry factor of the Butler-Volmer relation, not the augmentation function. The Butler-Volmer relation is used to model the reaction-current density $j_{cl}$ induced by the half-reactions in the catalyst layers. $j_{cl}=i_{0}(c_{k},T)\left(\exp\left(\frac{2\beta F}{RT}\eta\right)-\exp\left(-\frac{2(1-\beta)F}{RT}\eta\right)\right)\quad\text{where}\quad k\in\{O_{2},H_{2}\}$ (29) Here, $\eta$ is the overpotential given by $\eta=\phi_{e}-\phi_{p}-U(c_{k},T)\quad\text{where}\quad k\in\{O_{2},H_{2}\}$ (30) $i_{0}$ is the exchange-current density and $U$ is the reversible potential difference, both of which are functions of temperature $T$ and the appropriate concentration ($c_{\text{H}_{\text{2}}}$ or $c_{\text{O}_{\text{2}}}$). $F$ is the Faraday constant. The sign convention used here assumes that $j_{cl}$ is positive at the anode (where the oxidation of hydrogen occurs). Since no reactions occur outside the catalyst layers, the interfacial current density can be written as follows. $j=\left\{\begin{matrix}j_{cl},&x\in\Omega_{cl}\\ 0,&\text{otherwise}\end{matrix}\right.$ (31) The rate of consumption of hydrogen and oxygen can be written in terms of the interfacial current density as follows. $r_{H_{2}}=-\frac{aj}{2F},\qquad r_{O_{2}}=\frac{aj}{4F}$ (32) The evaporation/condensation source term can be given as follows in terms of the water vapor concentration and the saturation concentration (which is a function of the saturation pressure $p_{\text{sat}}$, which in turn varies with temperature). $S_{ec}=\gamma_{ec}(c_{\text{H}_{\text{2}}\text{O}}-c_{\text{sat}}),\qquad c_{\text{sat}}=\frac{p_{\text{sat}}(T)}{RT}$ (33) The rate of evaporation and condensation is given as follows.
$\gamma_{ec}=\left\{\begin{matrix}\gamma_{e}(T)s_{\text{red}},&c_{\text{H}_{\text{2}}\text{O}}<c_{\text{sat}}\\ \gamma_{c}(T)(1-s_{\text{red}}),&c_{\text{H}_{\text{2}}\text{O}}>c_{\text{sat}}\end{matrix}\right.$ (34) Here, $s_{\text{red}}$ is the reduced liquid water saturation, given as $s_{\text{red}}=(s-s_{\text{im}})/(1-s_{\text{im}})$, with $s_{\text{im}}$ referring to the immobile saturation.

## Appendix B Impact of a smaller training dataset

Figure 19: Performance metric for all cases when trained with only 7 (instead of 14) cases To illustrate the impact of removing training configurations, a second model was trained using only 7 training configurations (case IDs 40, 125, 190, 400, 740, 865 and 1090) instead of 14. As can be seen from Fig. 19, the performance of the augmented model deteriorates markedly: it achieves better-than-baseline performance for only 777 cases out of the total of 1224 it was tested on. It can also be seen that the performance metric for many cases deteriorates drastically, while it improves for a handful of cases. These are either cases which were used during training or cases that share very similar inflow conditions with the training cases. This behavior is caused by the inference and learning process overfitting to augmentation behavior specific to the few training cases it was provided with. While overfitting improves predictions on the training cases, it is an undesirable outcome, as it results in poorer predictive accuracy for cases different from those in the training dataset and hence hurts generalizability.
# The effects of charmonium on the properties of the $1^{++}$ hidden charm poles in effective field theory

E. Cincioglu<EMAIL_ADDRESS>Department of Physics, Middle East Technical University, Ankara, Turkey A. Ozpineci Department of Physics, Middle East Technical University, Ankara, Turkey D. Yildirim Yilmaz Department of Physics, Faculty of Sciences, Ankara University, Ankara, Turkey Department of Physics, Faculty of Sciences and Arts, Amasya University, Amasya, Turkey<EMAIL_ADDRESS>

###### Abstract

In this study, the properties of the $J^{PC}=1^{++}$ hidden charm poles are analyzed under variation of the bare 2P charmonium mass within the effective field theory proposed in Ref. Cincioglu _et al._ (2016). The main focus of the current work is on the pole trajectory of the $\chi_{c1}(2P)$ charmonium dressed by the $D\bar{D}^{*}$ meson loops. It is shown that the trajectories of the pole change radically for values of the bare charmonium mass above a certain value, and also depending on how close the pole is to the threshold.

## I Introduction

In the previous decades, many states were found experimentally in the mass range of heavy hadrons that cannot be easily explained within a quark model and a standard quarkonium picture Olsen (2015); Tanabashi _et al._ (2018). Understanding the structure of these states is one of the important aims of hadron physics. For most of these states, only the masses are experimentally known, and information about their quantum numbers and decay modes is very scarce Olsen _et al._ (2018); Lebed _et al._ (2017). Among all the exotics, the content of the $X(3872)$ in particular has been the subject of various studies since its discovery. The mass of the $X(3872)$, which is very close to the $D^{0}D^{*0}$ threshold Choi _et al._ (2003); Acosta _et al._ (2004); Abazov _et al._ (2004); Aubert _et al._ (2005); Chatrchyan _et al._ (2013); Aaij _et al._ (2013, 2020a, 2020b), and the isospin-breaking decay $X(3872)\rightarrow J/\psi\rho$ make it an ideal candidate for a $D\bar{D}^{*}$ hadronic molecule Chen _et al._ (2016); Guo _et al._ (2018). Another important measurement that might give insight into the structure of the X(3872) is its radiative decays into $\psi(2S)\gamma$ and $J/\psi\gamma$. The ratio of the branching ratios of these decays has been measured as Aubert _et al._ (2009); Aaij _et al._ (2014): $R_{\psi\gamma}=\frac{B_{r}(X\to\psi(2S)\gamma)}{B_{r}(X\to J/\psi\gamma)}=2.46\pm 0.64\pm 0.29\,.$ (1) Since the phase space for the decay $X\rightarrow\psi(2S)\gamma$ is much smaller than that for the $X\rightarrow J/\psi\gamma$ decay, this implies that the amplitude for $X\rightarrow\psi(2S)\gamma$ is much larger. This is naturally expected in the quark model, where $X\rightarrow\psi(2S)\gamma$ is a $\Delta L=1$ transition. However, in the quark model, the predicted mass of the $J^{PC}=1^{++}$ charm-anticharm state, called $\chi_{c1}(2P)$, is around $3.95$ GeV Godfrey and Isgur (1985); Ebert _et al._ (2011), about 70 MeV higher than the observed state. On the other hand, in Ref. Guo _et al._ (2015), the triangular $DD^{*}\bar{D}^{*}$ and simple $D\bar{D}^{*}$ loop contributions to the radiative amplitude were computed. It was concluded that the observed ratio allows the $X(3872)$ to be a hadronic molecule with a dominant $D\bar{D}^{*}$ component. In Ref. Cincioglu and Ozpineci (2019), the effects of short-range contributions to the radiative decays of the $X(3872)$ were analyzed.
It was demonstrated that the possible constructive or destructive interferences between the meson loop and the short-distance contact term are important in determining whether the charmonium content of the $X(3872)$ is nontrivial. In Cincioglu _et al._ (2016), an effective theory based on heavy quark spin symmetry (HQSS) is presented, in which the $X(3872)$ is described as a superposition of molecular components and a compact core, taken to be the 2P charmonium state throughout this study (there are plenty of studies in the literature that considered the X(3872) as a mixture Dong _et al._ (2011); Takizawa and Takeuchi (2013); Chen _et al._ (2015)). An important source of uncertainty in the model is the bare mass of the compact component. In Cincioglu _et al._ (2016), the bare mass was taken to be the charmonium mass predicted by potential quark models. This work aims to draw attention to the effect of the bare charmonium mass on the pole trajectories predicted by the model and, as shown below, on the molecular weight of the observed state. It is noted that the bare mass is UV-regulator dependent and is a free parameter in the presented scheme. Although the bare mass is not physically observable, it is still theoretically relevant, because its value can be obtained from schemes that ignore the coupling of the charmonium states to the mesons ($d=0$). A major problem is to set the UV regulator so as to match the quark model and the EFT approaches. Thus, all calculations have been performed with two different UV cutoffs, spanning a physically motivated range of values. The expectation is that the cutoff dependence will be absorbed into the low energy constants (LECs), so that predictions for observables become at most mildly regulator dependent. This paper is organized as follows: in Section II, the main points of the model proposed in Cincioglu _et al._ (2016) are presented, followed by our results and discussion.

## II Formalism

Due to the presence of heavy quarks in the $X(3872)$, it can be described within a HQSS framework. To describe the molecular component of the $X(3872)$, the interactions of the $D$ and $D^{*}$ mesons are needed. In HQSS, these mesons group into a HQSS doublet, which can be written as: $H^{(Q)}_{a}=\frac{1+\not{v}}{2}(P^{*(Q)}_{a\mu}\gamma^{\mu}-P^{(Q)}_{a}\gamma_{5})\,.$ (2) Due to the very low momentum exchange between the mesons in the molecule, contact interactions are sufficient to describe the $D$ mesons' interactions in the X(3872) Nieves and Valderrama (2012). Other interactions, such as one-pion exchange and coupled channels, are of sub-leading order Nieves and Valderrama (2012), and their contributions can therefore be safely ignored. Since the interaction among the heavy hadrons forming a molecule is nonperturbative, the potential should be iterated by solving the Lippmann-Schwinger equation (LSE). The LSE shows ill-defined ultraviolet behavior resulting from the contact interaction and consequently requires regularization. As a regulator function, a Gaussian $f_{\Lambda}(\vec{p})$ is employed: $\langle\vec{p}^{\prime};D^{(*)}\bar{D}^{(*)}|V_{\Lambda}|\vec{p};D^{(*)}\bar{D}^{(*)}\rangle=C_{0X}f_{\Lambda}(\vec{p}^{\prime})f_{\Lambda}(\vec{p})$ (3) $\langle\vec{p};D^{(*)}\bar{D}^{(*)}|V_{c\bar{c};\Lambda}|\Psi_{c\bar{c}}(2P)\rangle=df_{\Lambda}(\vec{p})$ (4) where $d$ and $C_{0X}$ are the low energy constants of the related interactions; in this paper we take cutoff values $\Lambda=0.5-1.0$ GeV.
At leading order, the interaction of four heavy mesons through contact potentials can be described as Nieves and Valderrama (2012) $\begin{split}\mathcal{L}_{4H}&=D_{0a}\,Tr\left[\bar{H}^{(Q)a}H^{(Q)}_{a}\gamma_{\mu}\right]Tr\left[H^{(\bar{Q})b}\bar{H}^{(\bar{Q})}_{b}\gamma^{\mu}\right]\\&+D_{0b}\,Tr\left[\bar{H}^{(Q)a}H^{(Q)}_{a}\gamma_{\mu}\gamma_{5}\right]Tr\left[H^{(\bar{Q})b}\bar{H}^{(\bar{Q})}_{b}\gamma^{\mu}\gamma_{5}\right]\\&+E_{0a}\,Tr\left[\bar{H}^{(Q)a}\vec{\tau}^{b}_{a}H^{(Q)}_{b}\gamma_{\mu}\right]Tr\left[H^{(\bar{Q})r}\vec{\tau}^{s}_{r}\bar{H}^{(\bar{Q})}_{s}\gamma^{\mu}\right]\\&+E_{0b}\,Tr\left[\bar{H}^{(Q)a}\vec{\tau}^{b}_{a}H^{(Q)}_{b}\gamma_{\mu}\gamma_{5}\right]Tr\left[H^{(\bar{Q})r}\vec{\tau}^{s}_{r}\bar{H}^{(\bar{Q})}_{s}\gamma^{\mu}\gamma_{5}\right]\,,\end{split}$ (5) where $D_{0i}$ and $E_{0i}$ are LECs. To include the compact core component, it is necessary to identify the HQSS multiplet that can be used to describe it. For this purpose, the $P$-wave quarkonium multiplet can be written as Casalbuoni _et al._ (1993): $J^{\mu}=\frac{1+\not{v}}{2}\left(\chi_{2}^{\mu\alpha}\gamma_{\alpha}+\frac{i}{\sqrt{2}}\epsilon^{\mu\alpha\beta\gamma}\chi_{1\gamma}v_{\alpha}\gamma_{\beta}+\frac{1}{\sqrt{3}}\chi_{0}(\gamma^{\mu}-v^{\mu})+h^{\mu}\gamma_{5}\right)\frac{1-\not{v}}{2}\,.$ (6) HQSS restricts the possible contact interactions between the $J^{\mu}$ multiplet and the $D$ meson multiplets. The only interaction term between two $D$ mesons and $J^{\mu}$ that does not contain derivative interactions at leading order, and the only Lagrangian consistent with HQSS, is: $\mathcal{L}_{HHQ\bar{Q}}=\frac{d}{2}Tr[H^{a(\bar{Q})}\bar{J}_{\mu}H_{a}^{(Q)}\gamma^{\mu}]+\frac{d}{2}Tr[\bar{H}^{a(Q)}J_{\mu}\bar{H}_{a}^{(\bar{Q})}\gamma^{\mu}]\,,$ (7) where the parameter $d$ is an unknown LEC that mixes the molecular and compact components (for more details, see e.g. Cincioglu _et al._ (2016); Hanhart _et al._ (2014); Colangelo _et al._ (2004)). For bound states, the weights of the molecular and compact components in the $X(3872)$ can be studied with a method put forward by Weinberg Weinberg (1963, 1965). The method is crucial for examining the interaction couplings and the probabilistic interpretation of the components. For small binding energies (s-wave), the approach is model-independent. With the help of the sum rule Garcia-Recio _et al._ (2015); Weinberg (1965) $-1=\sum_{ij}g_{i}g_{j}\left(\delta_{ij}\left[\frac{\partial G_{i}^{II}(E)}{\partial E}\right]_{E=E_{R}}+\left[G_{i}^{II}(E)\frac{\partial V_{ij}(E)}{\partial E}G_{j}^{II}(E)\right]_{E=E_{R}}\right)\,,$ (8) the compositeness condition can be given a probabilistic interpretation; moreover, Eq. (8) is valid for both bound states and resonances Garcia-Recio _et al._ (2015). For resonance (bound) states, $G$ should be taken as $G^{II}$ ($G^{I}$). Each term in Eq. (8) can be identified differently (the imaginary parts of $\tilde{X}_{i}$ and $\tilde{Z}$ must cancel each other): $X_{i}=Re\tilde{X}_{i}=Re\left(-g_{i}^{2}\left[\frac{\partial G_{i}^{II}(E)}{\partial E}\right]_{E=E_{R}}\right)$ (9) $Z=Re\tilde{Z}=Re\left(-\sum_{ij}\left[g_{i}G_{i}^{II}(E)\frac{\partial V_{ij}(E)}{\partial E}G_{j}^{II}(E)g_{j}\right]_{E=E_{R}}\right)$ (10) With the definitions in Eqs. (9) and (10), we obtain the compositeness and the elementariness, respectively.
$\tilde{X}_{i}$ quantifies the probability of finding a two-body component in the wave function of a hadron, and $\tilde{Z}$ is related to the other components and is thus understood as the elementariness. Hence, $Z$ close to 1 signifies that the compact component dominates the bound state. On the other hand, in the case of a resonance, probabilistic interpretations are not entirely accurate because of the negative imaginary values of $\tilde{X}_{i}$. However, in Refs. Guo and Oller (2016); Aceti _et al._ (2014), it was claimed that the absolute value of $\tilde{X}_{i}$ can be used as a measure of the weight of the $i$-th channel (in Ref. Guo and Oller (2016), a probabilistic interpretation of the compositeness relation at the pole of a resonance, with only positive coefficients, has been derived thanks to a suitable transformation of the S matrix; the absolute value of $\tilde{Z}_{i}$ gives the weight of finding a specific component in the wave function of a hadron, but this is only valid when $Re(E_{R})>M_{i,th}$, with $E_{R}$ the resonance energy and $M_{i,th}$ the corresponding threshold of channel $i$). The $T$-matrix, which can be obtained as a solution of the LSE, develops poles in the complex energy plane. Close to a pole, its elements are approximately $T_{ij}\approx\frac{g_{i}g_{j}}{E-E_{R}}\,,$ (11) where $g_{i}$ is the coupling of the state to the $i$-th channel. For the specific $(1^{++})$ state considered here, the $T$-matrix, which encodes the dynamics of the system, can be written as Cincioglu _et al._ (2016): $T(E)=\frac{\Sigma_{c\bar{c}}}{1-G^{0}_{c\bar{c}}\Sigma_{c\bar{c}}}\begin{pmatrix}f^{2}_{\Lambda}(E)\left[\frac{1}{d^{2}G_{QM}^{2}}-\frac{1-G^{0}_{c\bar{c}}\Sigma_{c\bar{c}}}{G_{QM}\Sigma_{c\bar{c}}}\right]&f_{\Lambda}(E)\frac{1}{dG_{QM}}\\ f_{\Lambda}(E)\frac{1}{dG_{QM}}&1\end{pmatrix}\,,$ (12) where $\Sigma_{c\bar{c}}$, $G^{0}_{c\bar{c}}$, $f_{\Lambda}$, and $G_{QM}$ are the charmonium self-energy induced by the meson loops, the non-relativistic bare charmonium propagator, the Gaussian regulator, and the diagonal meson loop function, respectively (all calculations have been performed with a UV cutoff $\Lambda=0.5-1$ GeV; QM stands for non-relativistic quantum mechanics). In Eq. (12), the first channel is of molecular type, while the second is the charmonium Baru _et al._ (2010). The poles of the transition matrix are given by the zeros of the inverse of the dressed propagator Cincioglu _et al._ (2016) $1-G^{0}_{c\bar{c}}(E_{R})\Sigma_{c\bar{c}}(E_{R})=0\,,$ (13) where $\Sigma_{c\bar{c}}$ is the quarkonium self-energy $\Sigma_{c\bar{c}}(E)=\left[V_{c\bar{c}}^{\rm QM}\right]^{t}G_{\rm QM}(E)\Gamma_{c\bar{c}}(E)\,,$ (14) with the dressed vertex function $\Gamma_{c\bar{c}}$ given by $\Gamma_{c\bar{c}}(E)=\left(1-V^{\rm QM}G_{\rm QM}(E)\right)^{-1}V_{c\bar{c}}^{\rm QM}\,,$ (15) where $V^{\rm QM}$ and $V_{c\bar{c}}^{\rm QM}$ are the molecular contact potential and the $\chi_{c1}(2P)-D\bar{D}^{(*)}$ transition amplitude, respectively (see Eqs. (30) and (34) of Ref. Cincioglu _et al._ (2016)).
Finally, for an arbitrary $E$, the mesonic loop function is given by Albaladejo _et al._ (2013) $\begin{split}G_{\rm QM}(E)&=\int\frac{\text{d}^{3}\vec{q}}{(2\pi)^{3}}\frac{e^{-2\vec{q}^{\,2}/\Lambda^{2}}}{E-M_{1}-M_{2}-\vec{q}^{\,2}/2\mu+i0^{+}}\\&=-\frac{\mu\Lambda}{(2\pi)^{3/2}}+\frac{\mu k}{\pi^{3/2}}\phi\left(\sqrt{2}k/\Lambda\right)-i\frac{\mu k}{2\pi}e^{-2k^{2}/\Lambda^{2}}\,,\end{split}$ (16) with $\mu^{-1}=M_{1}^{-1}+M_{2}^{-1}$, $k^{2}=2\mu(E-M_{1}-M_{2})$, and $\phi(x)$ the Dawson integral given by: $\phi(x)=e^{-x^{2}}\int_{0}^{x}e^{y^{2}}\text{d}y~{}.$ (17) Poles in the complex energy plane represent the observable states. The mass and width of a state can be obtained from the pole position in the complex energy plane. Poles can be located on different Riemann sheets; indeed, $G_{\rm QM}(E)$ has two Riemann sheets. On the first Riemann sheet (FRS), $0\leqslant{\rm Arg}(E-M_{1}-M_{2})<2\pi$, there is a discontinuity $G_{\rm QM}^{I}(E+i\epsilon)-G_{\rm QM}^{I}(E-i\epsilon)=2i\,{\rm Im}G_{\rm QM}^{I}(E+i\epsilon)$ for $E>(M_{1}+M_{2})$. Poles located on the FRS, on the real axis, and below the threshold are named bound states. On the second Riemann sheet (SRS), $2\pi\leqslant{\rm Arg}(E-M_{1}-M_{2})<4\pi$, one finds $G_{\rm QM}^{II}(E-i\epsilon)=G_{\rm QM}^{I}(E+i\epsilon)$ for real energies above threshold. Poles located below the real axis and above the threshold on the SRS are called resonances (more detailed information about poles can be found in Refs. Guo _et al._ (2018); Hanhart _et al._ (2014); there is no restriction on the location of poles on the second, unphysical, Riemann sheet; hermitian analyticity requires that if there is a pole at a complex value of $s$ (resonance), there must be another pole at its complex conjugate value $s^{*}$ (anti-resonance); in this study, the properties of the conjugate pole are not given since it corresponds to the same resonance). For a narrow resonance on the SRS, the pole with a negative imaginary part (the pole located in the lower half-plane) is closer to the physical Riemann sheet than the pole with a positive imaginary part; thus, it influences the observables more strongly in the vicinity of the resonance region Baru _et al._ (2010). Moreover, when the real part of the poles reaches the threshold with increasing $d$, the resonance and anti-resonance poles become equally important. Only such nearby poles significantly influence the resonance behavior in the experimentally accessible region and could be extracted from experimental data in a phenomenological study. However, when the poles have large imaginary parts, they lose their width interpretation. The position of the pole might give further insight into the structure of the state. It appears that if the bound state is mostly compact, there are two near-threshold poles, one on the first Riemann sheet and the other on the second Riemann sheet. Furthermore, if it is a predominantly molecular state, there is a single near-threshold pole on the first Riemann sheet Baru _et al._ (2010); Morgan (1992). We look carefully at the trajectories of the 2P charmonium poles located in the near-threshold zone of the scattering amplitudes. There are qualitative differences between the pole trajectories of resonances that couple to the related continuum channel as the bare 2P charmonium mass is varied.

### II.1 General remarks

In the model of Ref. Cincioglu _et al._ (2016), two poles are expected in the $1^{++}$ sector.
One is located on the FRS as a bound state. This state is identified with the $X(3872)$ bound state on the FRS, and its mass is fixed at $3871.69$ MeV to determine the LEC $C_{0X}$, defined as $C_{0X}=C_{0A}+C_{0B},\qquad C_{0\phi}=D_{0\phi}+3E_{0\phi},\qquad\text{for}\qquad\phi=a,\,b.$ (18) The other one is identified with the dressed $\chi_{c1}(2P)$. The dependence of the second pole position on the bare 2P charmonium mass enters through the non-relativistic bare propagator: $G^{0}_{c\bar{c}}(E)=\frac{1}{E-\stackrel{\circ}{m}_{c\bar{c}}}\,,$ (19) where $\stackrel{\circ}{m}_{c\bar{c}}$ is the mass of the bare 2P charmonium state. As mentioned above, it is dressed by the $D\bar{D}^{*}$ meson loops, which give rise to the physical mass of the charmonium state as $d$ increases. On the other hand, there are some uncertainties in the model of Ref. Cincioglu _et al._ (2016). One of the most considerable is the mass of the bare charmonium state. As can be seen in Table 1, the most recent constituent quark models give the mass of the $\chi_{c1}(2P)$ in quite a broad range (furthermore, if the compact component is identified as a tetraquark, its mass is completely unknown). In Ref. Cincioglu _et al._ (2016), the bare $\chi_{c1}(2P)$ mass is taken as 3906 MeV from Ref. Ebert _et al._ (2011), since it gives the closest prediction to the experimental mass of the $\chi_{c2}(2P)$ state. Another uncertainty is the sizable error in the predicted masses, which can reach up to $10\%$. Besides, the different models in Table 1 predict differences between the bare $\chi_{c1}(2P)$ mass and the two-meson threshold ranging from around 35 MeV to 82 MeV (to see the effects of a bare mass below the threshold, hypothetical values such as $3865$ MeV were also examined, see Tables 2 and 10; contrary to values above the threshold, as $C_{0X}$ decreases and $d$ increases, the pole moves towards the threshold, gaining a small width). Also, it might be interesting to see the impact of a bare charmonium mass $10$ MeV below the $3906$ MeV value on the properties of the $1^{++}$ hidden charm poles (the charmonium content of the $X(3872)$, the dressed charmonium $\chi_{c1}(2P)$, the $DD^{*}-\chi_{c1}(2P)$ coupling, etc.). For $m_{c\bar{c}}^{0}=3906$ MeV, the state is just $\sim 35$ MeV above the $DD^{*}$ threshold, and the dressed $\chi_{c1}(2P)$ pole moves below threshold on the SRS with a relatively small $X(3872)$ charmonium content. However, as a larger bare $\chi_{c1}(2P)$ mass is taken, a larger charmonium content is needed to move the $\chi_{c1}(2P)$ state below the $DD^{*}$ threshold on the SRS.

Ref. | $m_{\chi_{c1}(2P)}$ [MeV] | $\stackrel{\circ}{m}_{c\bar{c}}-m_{DD^{*}}$ [MeV]
---|---|---
Ebert _et al._ (2011) | 3906 | 35
Sng and Jumasahatov (2019) | 3924 | 53
Barnes _et al._ (2005) | 3925 | 54
Ebert _et al._ (2003) | 3929 | 58
Gui _et al._ (2018); Deng _et al._ (2017) | 3937 | 66
Segovia _et al._ (2013); Ortega _et al._ (2013) | 3947 | 76
Godfrey and Isgur (1985) | 3953 | 82

Table 1: The $\chi_{c1}(2P)$ masses in the literature.
## III Results and Discussion

In Tables 3–9 and in Table I of Ref. Cincioglu _et al._ (2016) (which gives the properties of the $\chi_{c1}(2P)$ states for a bare charmonium mass of 3906 MeV), the properties of the poles found in the $1^{++}$ hidden-charm sector are studied as a function of the LEC $d$, which controls the admixture of charmonium and $DD^{*}$ molecule. The location of the dressed $\chi_{c1}(2P)$ pole depends on the mixing parameter $d$ and on the bare charmonium mass. Within the scheme presented here and in Ref. Cincioglu _et al._ (2016), the bare charmonium mass is a free parameter and not an observable; as mentioned above, it gets dressed by the $D^{(*)}\bar{D}^{(*)}$ meson loops, which give rise to the physical mass of the charmonium states. The couplings of the $\chi_{c1}(2P)$ state to the $D^{(*)}$ mesons cause the bare mass to be renormalized: in the effective theory, the difference between the bare and the physical charmonium masses is a finite renormalization. This shift depends on the UV regulator, since the bare mass itself depends on the renormalization scheme. To show the cutoff dependence of the properties of the poles found in the $1^{++}$ hidden-charm sector, all calculations have been carried out with UV cutoff values $\Lambda=0.5$–$1$ GeV. The results obtained for both cutoffs as functions of $d$ are qualitatively similar, though some quantitative differences appear, as can be seen in Tables 3–9 and Tables 11–16 of the appendix.

Moreover, it is observed that the behavior of the physical $\chi_{c1}(2P)$ state changes around $\stackrel{\circ}{m}_{c\bar{c}}\approx 3908.5$ MeV. For $\stackrel{\circ}{m}_{c\bar{c}}=3896,\,3906$ and $3908.5$ MeV, the trajectories of the corresponding dressed $\chi_{c1}(2P)$ pole are depicted in Fig. 1. When $d=0$, the pole in the SRS sits on the real axis. With increasing $d$, the pole gradually moves away from the real axis, gaining width; as $d$ continues to increase, the pole moves below the threshold and, at some point, reaches the real axis again. It does so with $\tilde{X}_{X(3872)}\sim 0.39$ for $\stackrel{\circ}{m}_{c\bar{c}}=3896$ MeV, $\tilde{X}_{X(3872)}\sim 0.43$ for $\stackrel{\circ}{m}_{c\bar{c}}=3906$ MeV, and $\tilde{X}_{X(3872)}\sim 0.47$ for $\stackrel{\circ}{m}_{c\bar{c}}=3908.5$ MeV. As can be seen in Tables 3–4, for $m_{c\bar{c}}^{0}=3896$ MeV the charmonium content $\lvert\tilde{X}_{\chi_{c1}}\rvert$ is about 35$\%$, while for $m_{c\bar{c}}^{0}=3908.5$ MeV it is about 83$\%$. When the conjugate pair coincides on the real axis, two poles appear in the SRS below threshold, located at $m_{R}-0i$. As $d$ increases further, one pole moves along the real axis toward the threshold, while the second departs from the threshold and leaves the real axis, forming another conjugate pair. These newly formed poles are either far below the threshold or above the threshold but deep in the complex plane; in that region, the width interpretation is lost.
Since this pole does not produce any observable effects, its behavior is not illustrated in Fig. 1, and its properties are not given in Tables 3 and 4. Bare charmonium masses smaller than $3908.5$ MeV yield similar trajectories.

In the case of $\stackrel{\circ}{m}_{c\bar{c}}\gtrsim 3908.5$ MeV, the pole trajectories, depicted in Fig. 2, behave differently from those in Fig. 1. With increasing $d$, the poles stay in the SRS, above or below the threshold, but they do not reach the real axis until $d$ takes large values. For bare masses larger than $3908.5$ MeV, up to around $3930$ MeV, the pole trajectories cross the threshold; for those that cross it, the molecular weight of the $X(3872)$ state decreases as the bare charmonium mass grows. Besides this, for specific values of $d$ with $d>d^{\rm crit}$, there exists a double pole in the SRS. One of the poles approaches the threshold as $\Sigma_{c\bar{c}}^{\prime}$ decreases and moves along the real axis in the SRS; once it comes quite close to the threshold, where the SRS and FRS are connected, it can have visible effects on scattering observables, since the line shape is then determined by both the other pole and this virtual state. The other pole moves away from the real axis, gaining width, but eventually reaches the real axis again far from the threshold. As $d$ increases, these trajectories (illustrated with crosses) remain either below the threshold or above the threshold but far from the real axis. As mentioned above, since these poles will not have any observable consequences, their details are also not included in Tables 5–9 of the appendix. In the $d\gg d^{\rm crit}$ limit, the $X(3872)$ appears to be a 2P charmonium state with molecular weight $X_{X(3872)}\sim 0$, the pole found in the SRS being mirrored in the FRS. Moreover, the proximity of the bare mass to the threshold is essential within the model.

As mentioned in the introduction, the radiative decays of $X(3872)$ were analyzed in Ref. Cincioglu and Ozpineci (2019) within the effective theory of Ref. Cincioglu _et al._ (2016) to constrain the charmonium content of $X(3872)$. In the case of destructive interference between the meson loops and the counter-term, modeled by a charm-quark loop, a strong restriction on the charmonium admixture was found. In that work, the contribution of the short-range interaction to the ratio $R_{\psi\gamma}$ depends on $\tilde{Z}_{X(3872)}$ (defined in the present study as $\tilde{Z}_{X(3872)}=1-\tilde{X}_{X(3872)}$, the weight of the charmonium component $\chi_{c1}(2P)$ in the physical wave function of $X(3872)$) and on the position of the $\chi_{c1}(2P)$ pole, as can be seen from Eq. (4) of that reference. It was claimed that the behavior of the predicted ratio of radiative decays is different for $\tilde{Z}_{X(3872)}\lesssim 0.55$ and for $\tilde{Z}_{X(3872)}\gtrsim 0.55$ (as can be seen from Fig. 2 of that reference, there is a bump around $\tilde{Z}_{X(3872)}\sim 0.55$). Indeed, in the vicinity of $C_{0X}=0$ ($\tilde{Z}_{X(3872)}\sim 0.55$), where $C_{0X}$ controls the four-meson contact interaction, the dressed $\chi_{c1}(2P)$ pole moves below threshold, its mass decreasing rapidly while the pole remains quite wide.
As $\tilde{Z}_{X(3872)}$ increases, the mass and width of the charmonium state decrease, up to $\tilde{Z}_{X(3872)}\sim 0.57$, at which point the $\chi_{c1}(2P)$ pole reaches the real axis; meanwhile $C_{0X}$ increases and takes large positive values, creating a strong repulsive force between the $D$ and $D^{*}$ mesons. Thus, the contribution of the molecular component to the $X(3872)$ is suppressed. However, this behavior of the radiative branching ratio is valid only for $\stackrel{\circ}{m}_{c\bar{c}}=3906$ MeV. For larger (smaller) values of the bare charmonium mass, smaller (larger) $\chi_{c1}(2P)$ contents are required for the pole to reach the real axis. For instance, the $\chi_{c1}(2P)$ pole appears on the real axis below threshold at $\tilde{Z}_{X(3872)}\sim 0.61$ for $\stackrel{\circ}{m}_{c\bar{c}}=3896$ MeV and at $\tilde{Z}_{X(3872)}\sim 0.39$ for $\stackrel{\circ}{m}_{c\bar{c}}=3937$ MeV, as can be seen in Tables 3–9 of the appendix. Due to these non-trivial effects, it is difficult to put a firm restriction on the charmonium admixture in $X(3872)$.

Figure 1: Pole trajectories of the $\chi_{c1}(2P)$, located in the SRS, for the bare charmonium masses $\stackrel{\circ}{m}_{c\bar{c}}=3896,\,3906$ and $3908.5$ MeV. The $D\bar{D}^{*}$ threshold is shown as a vertical black dashed line. The lines and crosses are obtained with the help of Tables 3 and 4 and of Table I of Ref. Cincioglu _et al._ (2016). The crosses show the trajectories that approach the threshold after colliding with the real axis, while the solid circles show the trajectories that move away from the threshold. Note that the properties of the poles that depart from the threshold are not given in Tables 3 and 4.

Figure 2: Pole trajectories of the $\chi_{c1}(2P)$, located in the SRS, for the bare charmonium masses $\stackrel{\circ}{m}_{c\bar{c}}=3910,\,3925,\,3937,\,3947$ and $3953$ MeV. The $D\bar{D}^{*}$ threshold is shown as a vertical black dashed line. To give a general picture, the poles located far from the real axis, which do not have observable effects, are illustrated with crosses; the circles show the other pole trajectories, which approach the threshold from deep in the complex plane. The lines and circles are obtained with the help of the values in Tables 5–9. Note that the properties of the poles that depart from the threshold are not given in Tables 5–9.

### Acknowledgement

This research has been supported by TUBITAK (The Scientific and Technological Research Council of Turkey) under grant no. F117090.

## IV Appendix

This appendix collects the tables corresponding to the pole trajectories shown in Figs. 1 and 2. The calculations have been carried out with UV cutoffs $\Lambda=0.5$–$1$ GeV. In the tables, $d^{\rm crit}$ indicates the point at which $C_{0X}$ is zero; beyond $d^{\rm crit}$, $C_{0X}$ becomes positive, which means that the contact interaction becomes repulsive. The details of the Fig. 1 values are given in Tables 3 and 4, and those of Fig. 2 in Tables 5–9. Moreover, the properties of the $1^{++}$ hidden-charm poles are compiled as a function of the mixing parameter $d$ for a UV cutoff $\Lambda=0.5$ GeV used to regularize the molecular interactions (see Tables 11–16). Note that the $X(3872)$ is assumed to be a bound state in the FRS; therefore, its pole position is fixed at 3871.69 MeV in the FRS. Finally, the $\chi_{c1}(2P)$ pole is located in the SRS.
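For orientation when reading the $g_{D\bar{D}^{*}}$ and $\tilde{X}$ columns below: for a bound-state pole at $E_{B}$ with coupling $g$, the two-meson compositeness obeys the standard residue relation $X=-g^{2}\,[\mathrm{d}G_{\rm QM}/\mathrm{d}E]_{E=E_{B}}$, which saturates to $X=1$ in the purely molecular limit (an energy-independent contact interaction). A small numerical check of this relation in the schematic single-channel setup sketched earlier; by construction the result is 1 up to rounding, so this only verifies the signs and the size of $g^{2}$:

```python
import numpy as np
from scipy.special import erfcx

# Same schematic inputs as before (assumed values, MeV)
M1, M2 = 1866.86, 2006.85
MTH, MU, LAM = M1 + M2, M1 * M2 / (M1 + M2), 1000.0
E_B = 3871.69                                   # bound-state pole position

def g_below(E):
    kappa = np.sqrt(2.0 * MU * (MTH - E))
    return (-MU * LAM / (2.0 * np.pi) ** 1.5
            + MU * kappa / (2.0 * np.pi) * erfcx(np.sqrt(2.0) * kappa / LAM))

# For a constant contact potential, T = 1/(1/C - G); the residue at the
# pole gives the coupling squared, g^2 = -1 / G'(E_B).
h = 1.0e-4
dG = (g_below(E_B + h) - g_below(E_B - h)) / (2.0 * h)  # numerical derivative
g2 = -1.0 / dG        # positive, since G decreases with E below threshold
print("g^2 =", g2, "  compositeness X =", -g2 * dG)
```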
Numerical results for a UV cutoff $\Lambda=1$ GeV:

| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.01 | -0.792 | 0.891 | 0.978 | (3865.0, 0.1) | 0.02-0.05i | 0.002 | 1.00 |
| 0.05 | -0.862 | 0.723 | 0.645 | (3866.1, 2.0) | 0.11-0.20i | 0.055 | 1.05-0.01i |
| 0.10 | -1.084 | 0.503 | 0.312 | (3868.7, 3.17) | 0.25-0.20i | 0.122 | 1.11+0.06i |
| 0.20 | -1.968 | 0.288 | 0.102 | (3870.8, 1.38) | 0.21-0.07i | 0.071 | 1.03+0.06i |
| 0.40 | -5.508 | 0.150 | 0.028 | (3871.46, 0.39) | 0.11-0.03i | 0.022 | 1.01+0.02i |
| 1.00 | -30.285 | 0.061 | 0.005 | (3871.65, 0.06) | 0.05-0.01i | 0.004 | 1.00 |
| 3.00 | -266.251 | 0.020 | 0.000 | (3871.69, 0.007) | 0.015-0.003i | 0.00 | 1.00 |
| 10.00 | -2950.37 | 0.006 | 0.000 | (3871.69, 0.000) | 0.005-0.001i | 0.00 | 1.00 |

Table 2: For a bare charmonium mass $m^{0}_{c\bar{c}}=3865$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$.

| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.00 | -0.789 | 1.00 | 1.00 | (3896.0, 0.0) | 0.00 | 0.00 | 1.00 |
| 0.05 | -0.768 | 0.88 | 0.96 | (3896.7, 2.2) | 0.01-0.19i | 0.03 | 0.98+0.02i |
| 0.10 | -0.707 | 0.83 | 0.86 | (3898.6, 9.6) | 0.00-0.36i | 0.10 | 0.93+0.07i |
| 0.20 | -0.464 | 0.70 | 0.60 | (3900.2, 50.0) | 0.16+0.63i | 0.35 | 0.80+0.29i |
| 0.30 | -0.058 | 0.57 | 0.40 | (3821.0, 123.1) | 0.82+1.02i | $>1$ | 0.64+1.79i |
| 0.305 | -0.034 | 0.56 | 0.39 | (3797.6, 94.2) | 1.10+1.25i | $>1$ | 0.62+3.02i |
| 0.307 | -0.024 | 0.56 | 0.39 | (3784.8, 60.5) | 1.48+1.60i | $>1$ | 0.60+5.33i |
| 0.3075 | -0.021 | 0.56 | 0.39 | (3781.1, 43.9) | 1.79+1.88i | $>1$ | 0.59+7.59i |
| 0.3078 | -0.019 | 0.56 | 0.39 | (3778.8, 28.1) | 2.28+2.35i | $>1$ | 0.59+12.12i |
| 0.30798 | -0.019 | 0.56 | 0.39 | (3777.4, 7.9) | 4.35+4.39i | $>1$ | 0.59+43.38i |
| 0.309 | -0.014 | 0.56 | 0.39 | (3734.2, 0.0) | 0.00+2.25i | $>1$ | -4.84 |
| 0.31 | -0.008 | 0.56 | 0.38 | (3810.7, 0.0) | 1.73 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 4.44 |
| $d^{\rm crit}$ | 0.000 | 0.55 | 0.38 | (3818.8, 0.0) | 1.45 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 3.45 |
| 0.35 | 0.206 | 0.52 | 0.33 | (3853.3, 0.0) | 0.66 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.57 |
| 0.40 | 0.510 | 0.47 | 0.27 | (3861.6, 0.0) | 0.47 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.33 |
| 0.50 | 1.241 | 0.40 | 0.19 | (3866.8, 0.0) | 0.33 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.18 |
| 1.00 | 7.328 | 0.21 | 0.06 | (3870.8, 0.0) | 0.14 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.04 |
| 1.50 | 17.475 | 0.15 | 0.03 | (3871.3, 0.0) | 0.10 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.02 |
| 2.00 | 31.680 | 0.11 | 0.02 | (3871.5, 0.0) | 0.07 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.01 |
| 3.00 | 72.265 | 0.07 | 0.01 | (3871.6, 0.0) | 0.05 | 0.00 | 1.00 |

Table 3: For a bare charmonium mass $m^{0}_{c\bar{c}}=3896$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$ ($d^{\rm crit}=0.311708$ fm$^{1/2}$).
| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.00 | -0.789 | 1.00 | 1.00 | (3908.0, 0.0) | 0.00 | 0.00 | 1.00 |
| 0.05 | -0.775 | 0.89 | 0.98 | (3909.6, 1.8) | 0.01+0.15i | 0.01 | 0.99+0.01i |
| 0.10 | -0.735 | 0.87 | 0.93 | (3910.6, 7.6) | 0.03+0.30i | 0.06 | 0.97+0.05i |
| 0.20 | -0.574 | 0.79 | 0.77 | (3915.0, 35.8) | 0.14+0.54i | 0.21 | 0.89+0.18i |
| 0.30 | -0.306 | 0.70 | 0.60 | (3910.4, 104.9) | 0.35+0.71i | 0.49 | 0.79+0.44i |
| 0.35 | -0.132 | 0.65 | 0.53 | (3886.1, 172.1) | 0.55+0.80i | 0.83 | 0.73+0.78i |
| 0.38 | -0.014 | 0.63 | 0.49 | (3827.9, 230.5) | 0.80+0.97i | $>1$ | 0.60+1.55i |
| $d^{\rm crit}$ | 0.000 | 0.63 | 0.48 | (3811.2, 236.5) | 0.87+1.04i | $>1$ | 0.54+1.84i |
| 0.39 | 0.027 | 0.62 | 0.47 | (3756.2, 233.0) | 1.13+1.34i | $>1$ | 0.16+3.31i |
| 0.391 | 0.031 | 0.62 | 0.47 | (3739.8, 225.7) | 1.24+1.47i | $>1$ | -0.05+4.02i |
| 0.393 | 0.039 | 0.62 | 0.47 | (3672.0, 162.6) | 1.98+2.58i | $>1$ | -3.05+12.07i |
| 0.3932 | 0.040 | 0.62 | 0.47 | (3652.2, 131.8) | 2.49+3.39i | $>1$ | -6.48+20.28i |
| 0.39334 | 0.041 | 0.62 | 0.47 | (3616.6, 49.3) | 6.50+9.19i | $>1$ | -54.59+148.92i |
| 0.393345 | 0.041 | 0.62 | 0.47 | (3611.2, 27.1) | 10.84+14.09i | $>1$ | -104.57+383.37i |
| 0.393346 | 0.041 | 0.62 | 0.47 | (3609.5, 16.3) | 15.72+18.77i | $>1$ | -136.33+742.74i |
| 0.395 | 0.048 | 0.62 | 0.47 | (3741.6, 0.0) | 2.14 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 6.25 |
| 0.398 | 0.061 | 0.61 | 0.46 | (3774.3, 0.0) | 1.58 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 3.85 |
| 0.40 | 0.069 | 0.61 | 0.46 | (3785.7, 0.0) | 1.43 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 3.31 |
| 1.00 | 4.572 | 0.31 | 0.12 | (3869.4, 0.0) | 0.22 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.09 |
| 2.00 | 20.654 | 0.16 | 0.03 | (3871.2, 0.0) | 0.11 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.02 |
| 3.00 | 47.458 | 0.11 | 0.02 | (3871.5, 0.0) | 0.07 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.01 |

Table 4: For a bare charmonium mass $m^{0}_{c\bar{c}}=3908.5$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$ ($d^{\rm crit}=0.383565$ fm$^{1/2}$).
| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.00 | -0.789 | 1.00 | 1.00 | (3910.0, 0.0) | 0.00 | 0.00 | 1.00 |
| 0.05 | -0.776 | 0.89 | 0.98 | (3910.6, 1.8) | 0.01+0.15i | 0.01 | 0.99+0.01i |
| 0.10 | -0.737 | 0.87 | 0.94 | (3912.1, 7.5) | 0.04+0.29i | 0.05 | 0.97+0.04i |
| 0.20 | -0.583 | 0.80 | 0.79 | (3916.6, 34.8) | 0.14+0.53i | 0.20 | 0.89+0.17i |
| 0.30 | -0.325 | 0.71 | 0.62 | (3913.7, 100.7) | 0.34+0.70i | 0.46 | 0.80+0.42i |
| 0.35 | -0.158 | 0.67 | 0.55 | (3895.0, 164.3) | 0.51+0.78i | 0.74 | 0.75+0.70i |
| $d^{\rm crit}$ | 0.000 | 0.63 | 0.49 | (3818.6, 249.8) | 0.85+1.01i | $>1$ | 0.54+1.72i |
| 0.40 | 0.035 | 0.63 | 0.48 | (3163.8, 0.0) | 0.86 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 2.22 |
| 0.402 | 0.044 | 0.62 | 0.48 | (3514.7, 0.0) | 3.30 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 15.71 |
| 0.40205 | 0.044 | 0.62 | 0.48 | (3529.6, 0.0) | 3.61 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 18.40 |
| 0.4021 | 0.044 | 0.62 | 0.48 | (3545.8, 0.0) | 3.97 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 21.75 |
| 0.403 | 0.048 | 0.62 | 0.48 | (3694.4, 0.0) | 2.82 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 0.52 |
| 0.403 | 0.050 | 0.62 | 0.48 | (3714.3, 0.0) | 2.42 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 0.52 |
| 0.45 | 0.254 | 0.59 | 0.42 | (3838.0, 0.0) | 0.78 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.73 |
| 0.50 | 0.499 | 0.55 | 0.37 | (3852.3, 0.0) | 0.59 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.46 |
| 1.00 | 4.362 | 0.32 | 0.13 | (3869.2, 0.0) | 0.23 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.10 |
| 2.00 | 19.815 | 0.17 | 0.36 | (3871.1, 0.0) | 0.11 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.03 |
| 3.00 | 45.569 | 0.11 | 0.02 | (3871.5, 0.0) | 0.07 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.01 |

Table 5: For a bare charmonium mass $m^{0}_{c\bar{c}}=3910$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$ ($d^{\rm crit}=0.391302$ fm$^{1/2}$).
| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.00 | -0.789 | 1.00 | 1.00 | (3925.0, 0.0) | 0.00 | 0.00 | 1.00 |
| 0.05 | -0.780 | 0.90 | 0.99 | (3925.5, 1.5) | 0.03+0.12i | 0.01 | 1.00+0.00i |
| 0.10 | -0.752 | 0.89 | 0.97 | (3926.8, 6.2) | 0.06+0.24i | 0.03 | 0.98+0.03i |
| 0.20 | -0.641 | 0.84 | 0.88 | (3931.5, 27.5) | 0.15+0.45i | 0.13 | 0.94+0.11i |
| 0.30 | -0.456 | 0.79 | 0.76 | (3935.9, 73.3) | 0.29+0.62i | 0.29 | 0.87+0.26i |
| 0.40 | -0.196 | 0.72 | 0.64 | (3927.9, 172.3) | 0.50+0.74i | 0.59 | 0.79+0.56i |
| 0.45 | -0.039 | 0.69 | 0.59 | (3898.6, 280.6) | 0.69+0.84i | $>1$ | 0.67+0.96i |
| $d^{\rm crit}$ | 0.000 | 0.68 | 0.57 | (3882.1, 325.8) | 0.76+0.90i | $>1$ | 0.58+1.18i |
| 0.48 | 0.064 | 0.67 | 0.56 | (3373.4, 0.0) | 0.99 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 2.44 |
| 0.50 | 0.137 | 0.66 | 0.54 | (3749.4, 0.0) | 1.20 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 2.66 |
| 0.60 | 0.544 | 0.60 | 0.44 | (3842.3, 0.0) | 0.65 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.53 |
| 1.00 | 2.913 | 0.43 | 0.22 | (3866.2, 0.0) | 0.33 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.18 |
| 2.00 | 14.017 | 0.23 | 0.07 | (3870.6, 0.0) | 0.16 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.05 |
| 4.00 | 58.44 | 0.12 | 0.02 | (3871.4, 0.0) | 0.08 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.01 |

Table 6: For a bare charmonium mass $m^{0}_{c\bar{c}}=3925$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$ ($d^{\rm crit}=0.461594$ fm$^{1/2}$).

| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.00 | -0.789 | 1.00 | 1.00 | (3937.0, 0.0) | 0.00 | 0.00 | 1.00 |
| 0.05 | -0.781 | 0.90 | 0.99 | (3937.42, 1.36) | 0.03+0.11i | 0.01 | 1.00+0.01i |
| 0.10 | -0.758 | 0.89 | 0.98 | (3938.6, 5.54) | 0.07+0.21i | 0.02 | 0.99+0.02i |
| 0.20 | -0.668 | 0.86 | 0.92 | (3943.2, 23.9) | 0.16+0.40i | 0.09 | 0.96+0.09i |
| 0.30 | -0.517 | 0.82 | 0.83 | (3949.1, 61.5) | 0.28+0.56i | 0.22 | 0.91+0.20i |
| 0.40 | -0.305 | 0.77 | 0.73 | (3951.7, 134.3) | 0.44+0.68i | 0.42 | 0.85+0.39i |
| 0.50 | -0.033 | 0.72 | 0.63 | (3930.6, 311.5) | 0.70+0.82i | 0.94 | 0.67+0.88i |
| $d^{\rm crit}$ | 0.000 | 0.71 | 0.62 | (3923.5, 352.8) | 0.74+0.86i | $>1$ | 0.59+1.01i |
| 0.53 | 0.060 | 0.70 | 0.61 | (3081.0, 0.0) | 0.47 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.39 |
| 0.54 | 0.092 | 0.70 | 0.60 | (3457.4, 0.0) | 0.94 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 2.24 |
| 0.60 | 0.299 | 0.67 | 0.55 | (3795.6, 0.0) | 0.87 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.86 |
| 1.00 | 2.233 | 0.49 | 0.30 | (3862.4, 0.0) | 0.40 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.24 |
| 2.00 | 11.297 | 0.28 | 0.10 | (3870.0, 0.0) | 0.19 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.07 |
| 4.00 | 47.554 | 0.15 | 0.03 | (3871.3, 0.0) | 0.09 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.02 |

Table 7: For a bare charmonium mass $m^{0}_{c\bar{c}}=3937$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$ ($d^{\rm crit}=0.510903$ fm$^{1/2}$).

| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.00 | -1.938 | 1.00 | 1.00 | (3947.0, 0.0) | 0.00 | 0.00 | 1.00 |
| 0.05 | -0.782 | 0.90 | 1.00 | (3947.4, 1.3) | 0.04+0.10i | 0.01 | 0.99+0.01i |
| 0.10 | -0.762 | 0.89 | 0.98 | (3948.6, 5.1) | 0.08+0.19i | 0.02 | 0.99+0.02i |
| 0.20 | -0.683 | 0.87 | 0.93 | (3953.0, 21.8) | 0.16+0.37i | 0.08 | 0.97+0.07i |
| 0.30 | -0.553 | 0.84 | 0.86 | (3959.3, 54.7) | 0.27+0.52i | 0.18 | 0.93+0.17i |
| 0.40 | -0.369 | 0.80 | 0.78 | (3965.2, 115.0) | 0.42+0.63i | 0.34 | 0.88+0.31i |
| 0.50 | -0.134 | 0.75 | 0.70 | (3963.3, 237.0) | 0.61+0.73i | 0.64 | 0.78+0.60i |
| $d^{\rm crit}$ | 0.000 | 0.73 | 0.66 | (3953.8, 365.3) | 0.74+0.83i | 0.99 | 0.61+0.91i |
| 0.60 | 0.155 | 0.71 | 0.61 | (3628.0, 0.0) | 1.00 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 2.24 |
| 1.00 | 1.832 | 0.54 | 0.37 | (3858.0, 0.0) | 0.46 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.30 |
| 2.00 | 9.692 | 0.32 | 0.13 | (3869.4, 0.0) | 0.22 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.09 |
| 4.00 | 41.135 | 0.17 | 0.03 | (3871.2, 0.0) | 0.11 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.03 |
| 6.00 | 93.538 | 0.11 | 0.02 | (3871.5, 0.0) | 0.07 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.01 |

Table 8: For a bare charmonium mass $m^{0}_{c\bar{c}}=3947$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$ ($d^{\rm crit}=0.548624$ fm$^{1/2}$).

| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.00 | -0.789 | 1.00 | 1.00 | (3953.0, 0.0) | 0.00 | 0.00 | 1.00 |
| 0.05 | -0.783 | 0.90 | 1.00 | (3953.4, 1.2) | 0.04+0.09i | 0.01 | 1.00+0.00i |
| 0.10 | -0.764 | 0.89 | 0.99 | (3954.5, 4.9) | 0.08+0.18i | 0.02 | 0.99+0.02i |
| 0.20 | -0.692 | 0.88 | 0.94 | (3958.8, 20.7) | 0.17+0.35i | 0.07 | 0.97+0.07i |
| 0.30 | -0.570 | 0.85 | 0.88 | (3965.3, 51.4) | 0.27+0.50i | 0.16 | 0.94+0.15i |
| 0.40 | -0.400 | 0.81 | 0.81 | (3972.2, 106.2) | 0.41+0.61i | 0.30 | 0.89+0.28i |
| 0.50 | -0.182 | 0.77 | 0.73 | (3975.2, 210.5) | 0.58+0.70i | 0.54 | 0.81+0.51i |
| $d^{\rm crit}$ | 0.000 | 0.74 | 0.67 | (3970.6, 370.1) | 0.74+0.82i | 0.95 | 0.61+0.87i |
| 0.60 | 0.085 | 0.73 | 0.65 | (3185.6, 0.0) | 0.50 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.40 |
| 0.65 | 0.237 | 0.70 | 0.61 | (3716.5, 0.0) | 0.93 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 2.00 |
| 1.00 | 1.638 | 0.57 | 0.40 | (3854.6, 0.0) | 0.50 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.33 |
| 2.00 | 8.919 | 0.34 | 0.14 | (3868.9, 0.0) | 0.24 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.10 |
| 4.00 | 38.041 | 0.18 | 0.04 | (3871.1, 0.0) | 0.12 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.03 |
| 6.00 | 86.578 | 0.12 | 0.02 | (3871.4, 0.0) | 0.08 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.01 |

Table 9: For a bare charmonium mass $m^{0}_{c\bar{c}}=3953$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$ ($d^{\rm crit}=0.57006$ fm$^{1/2}$).
Numerical results for a UV cutoff $\Lambda=0.5$ GeV:

| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.01 | -1.941 | 1.052 | 0.995 | (3865.0, 0.21) | -0.02i | 0.001 | 1.00 |
| 0.05 | -2.011 | 0.997 | 0.894 | (3865.3, 0.11) | 0.04-0.02i | 0.013 | 1.01-0.01i |
| 0.10 | -2.232 | 0.868 | 0.677 | (3866.1, 1.65) | 0.05-0.20i | 0.048 | 1.05-0.01i |
| 0.20 | -3.118 | 0.619 | 0.344 | (3868.5, 2.80) | 0.17-0.25i | 0.110 | 1.11+0.03i |
| 0.40 | -6.657 | 0.359 | 0.116 | (3870.7, 1.34) | 0.18-0.12i | 0.070 | 1.04+0.06i |
| 1.00 | -31.434 | 0.151 | 0.021 | (3871.5, 0.25) | 0.08-0.04i | 0.015 | 1.01+0.01i |
| 3.00 | -267.4 | 0.051 | 0.002 | (3871.67, 0.03) | 0.03-0.01i | 0.002 | 1.00 |
| 10.00 | -2951.5 | 0.015 | 0.000 | (3871.69, 0.00) | 0.01 | 0.000 | 1.00 |

Table 10: For a bare charmonium mass $m^{0}_{c\bar{c}}=3865$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$.

| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.00 | -1.938 | 1.00 | 1.00 | (3896.0, 0.0) | 0.00 | 0.00 | 1.00 |
| 0.05 | -1.917 | 1.05 | 0.99 | (3896.2, 0.4) | 0.02-0.18i | 0.006 | 0.99+0.04i |
| 0.10 | -1.856 | 1.03 | 0.96 | (3896.8, 1.8) | 0.04-0.16i | 0.025 | 0.98+0.02i |
| 0.20 | -1.613 | 0.98 | 0.87 | (3899.2, 7.95) | 0.10+0.30i | 0.09 | 0.93+0.07i |
| 0.30 | -1.207 | 0.91 | 0.75 | (3903.1, 21.1) | 0.19+0.4i | 0.21 | 0.86+0.16i |
| 0.40 | -0.639 | 0.83 | 0.63 | (3908.2, 48.3) | 0.33+0.47i | 0.40 | 0.78+0.33i |
| $d^{\rm crit}$ | 0.000 | 0.77 | 0.53 | (3917.1, 105.9) | 0.48+0.55i | 0.80 | 0.53+0.65i |
| 0.50 | 0.091 | 0.76 | 0.52 | (3920.2, 118.4) | 0.49+0.57i | 0.89 | 0.43+0.69i |
| 0.70 | 2.039 | 0.63 | 0.36 | (3856.6, 0.0) | 0.35 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.28 |
| 1.00 | 6.179 | 0.49 | 0.21 | (3866.9, 0.0) | 0.23 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.13 |
| 2.00 | 30.53 | 0.27 | 0.06 | (3870.7, 0.0) | 0.113 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.03 |

Table 11: For a bare charmonium mass $m^{0}_{c\bar{c}}=3896$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$ ($d^{\rm crit}=0.488589$ fm$^{1/2}$).
| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.00 | -1.938 | 1.00 | 1.00 | (3910.0, 0.0) | 0.00 | 0.00 | 1.00 |
| 0.05 | -1.925 | 1.05 | 0.99 | (3910.1, 0.3) | 0.03-0.05i | 0.003 | 0.99+0.02i |
| 0.10 | -1.886 | 1.04 | 0.98 | (3910.6, 1.4) | 0.06-0.11i | 0.013 | 0.99+0.01i |
| 0.20 | -1.731 | 1.024 | 0.94 | (3912.6, 5.8) | 0.13+0.22i | 0.05 | 0.97+0.04i |
| 0.30 | -1.474 | 0.99 | 0.88 | (3916.0, 14.2) | 0.21+0.30i | 0.11 | 0.93+0.10i |
| 0.40 | -1.113 | 0.94 | 0.81 | (3921.0, 28.4) | 0.29+0.37i | 0.21 | 0.88+0.18i |
| 0.50 | -0.650 | 0.90 | 0.73 | (3928.5, 52.3) | 0.39+0.42i | 0.36 | 0.81+0.31i |
| $d^{\rm crit}$ | 0.000 | 0.85 | 0.64 | (3946.1, 100.5) | 0.51+0.48i | 0.67 | 0.57+0.52i |
| 0.70 | 0.586 | 0.80 | 0.58 | (3979.1, 139.1) | 0.52+0.54i | 0.89 | 0.23+0.46i |
| 1.00 | 3.213 | 0.67 | 0.40 | (3855.4, 0.0) | 0.32 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.24 |
| 2.00 | 18.665 | 0.40 | 0.14 | (3869.2, 0.0) | 0.174 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.07 |
| 3.00 | 44.419 | 0.28 | 0.07 | (3870.6, 0.0) | 0.117 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.03 |

Table 12: For a bare charmonium mass $m^{0}_{c\bar{c}}=3910$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$ ($d^{\rm crit}=0.613349$ fm$^{1/2}$).

| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.00 | -1.938 | 1.00 | 1.00 | (3925.0, 0.0) | 0.00 | 0.00 | 1.00 |
| 0.05 | -1.928 | 1.05 | 0.99 | (3925.1, 0.2) | 0.03-0.04i | 0.002 | 0.99+0.02i |
| 0.10 | -1.900 | 1.05 | 0.99 | (3925.5, 1.1) | 0.07-0.08i | 0.008 | 0.99+0.07i |
| 0.20 | -1.789 | 1.04 | 0.97 | (3927.3, 4.5) | 0.13+0.16i | 0.03 | 0.98+0.03i |
| 0.30 | -1.604 | 1.02 | 0.93 | (3930.2, 10.7) | 0.21+0.23i | 0.07 | 0.96+0.06i |
| 0.40 | -1.345 | 0.99 | 0.89 | (3934.7, 20.4) | 0.28+0.28i | 0.14 | 0.93+0.12i |
| 0.50 | -1.012 | 0.96 | 0.84 | (3941.0, 34.9) | 0.36+0.33i | 0.23 | 0.88+0.20i |
| $d^{\rm crit}$ | 0.000 | 0.89 | 0.72 | (3970.9, 91.9) | 0.53+0.42i | 0.59 | 0.61+0.44i |
| 1.00 | 1.763 | 0.79 | 0.57 | (3822.8, 0.0) | 0.33 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.26 |
| 2.00 | 12.868 | 0.52 | 0.25 | (3866.4, 0.0) | 0.23 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.12 |
| 4.00 | 57.286 | 0.29 | 0.07 | (3870.6, 0.0) | 0.12 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.04 |

Table 13: For a bare charmonium mass $m^{0}_{c\bar{c}}=3925$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$ ($d^{\rm crit}=0.723529$ fm$^{1/2}$).
| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.00 | -1.938 | 1.00 | 1.00 | (3937.0, 0.0) | 0.00 | 0.00 | 1.00 |
| 0.05 | -1.930 | 1.05 | 0.99 | (3937.1, 0.2) | 0.03-0.03i | 0.001 | 0.99+0.00i |
| 0.10 | -1.907 | 1.05 | 0.99 | (3937.5, 0.9) | 0.07-0.06i | 0.006 | 0.99+0.00i |
| 0.20 | -1.817 | 1.04 | 0.98 | (3939.1, 3.8) | 0.13+0.12i | 0.02 | 0.98+0.02i |
| 0.30 | -1.665 | 1.03 | 0.95 | (3941.8, 9.0) | 0.20+0.18i | 0.06 | 0.97+0.05i |
| 0.40 | -1.454 | 1.01 | 0.92 | (3945.9, 16.7) | 0.27+0.23i | 0.11 | 0.94+0.01i |
| 0.50 | -1.182 | 0.99 | 0.89 | (3951.5, 27.7) | 0.34+0.27i | 0.17 | 0.91+0.15i |
| $d^{\rm crit}$ | 0.000 | 0.91 | 0.75 | (3988.2, 84.8) | 0.53+0.37i | 0.54 | 0.62+0.39i |
| 1.00 | 1.083 | 0.86 | 0.66 | (3762.3, 0.0) | 0.23 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.16 |
| 1.50 | 4.860 | 0.72 | 0.47 | (3852.0, 0.0) | 0.31 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.21 |
| 2.00 | 10.1478 | 0.61 | 0.33 | (3869.0, 0.0) | 0.33 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.48 |
| 4.00 | 46.404 | 0.35 | 0.11 | (3869.9, 0.0) | 0.14 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.05 |

Table 14: For a bare charmonium mass $m^{0}_{c\bar{c}}=3937$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$ ($d^{\rm crit}=0.800832$ fm$^{1/2}$).

| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.00 | -1.938 | 1.00 | 1.00 | (3947.0, 0.0) | 0.00 | 0.00 | 1.00 |
| 0.05 | -1.931 | 1.05 | 0.99 | (3947.1, 0.2) | 0.03-0.02i | 0.001 | 0.99+0.00i |
| 0.10 | -1.911 | 1.05 | 0.99 | (3947.5, 0.8) | 0.06-0.05i | 0.005 | 0.99+0.00i |
| 0.20 | -1.833 | 1.04 | 0.98 | (3949.0, 3.4) | 0.13+0.10i | 0.02 | 0.98+0.02i |
| 0.30 | -1.702 | 1.03 | 0.96 | (3951.8, 7.9) | 0.20+0.15i | 0.05 | 0.97+0.04i |
| 0.40 | -1.518 | 1.02 | 0.94 | (3955.3, 14.5) | 0.26+0.19i | 0.10 | 0.95+0.08i |
| 0.50 | -1.282 | 1.00 | 0.91 | (3960.6, 23.6) | 0.33+0.23i | 0.14 | 0.93+0.12i |
| 0.70 | -0.653 | 0.968 | 0.84 | (3977.5, 50.9) | 0.45+0.28i | 0.31 | 0.82+0.25i |
| $d^{\rm crit}$ | 0.000 | 0.93 | 0.78 | (4001.6, 79.0) | 0.53+0.33i | 0.50 | 0.64+0.36i |
| 1.00 | 0.682 | 0.90 | 0.72 | (3660.8, 0.0) | 0.12 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.05 |
| 2.00 | 8.543 | 0.66 | 0.39 | (3859.3, 0.0) | 0.28 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.18 |
| 4.00 | 41.13 | 0.16 | 0.03 | (3871.1, 0.0) | 0.11 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.02 |

Table 15: For a bare charmonium mass $m^{0}_{c\bar{c}}=3947$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$ ($d^{\rm crit}=0.859959$ fm$^{1/2}$).
| $d$ [fm$^{1/2}$] | $C_{0X}$ [fm$^{2}$] | $g_{D\bar{D}^{*}}^{X(3872)}$ [GeV$^{-1/2}$] | $\tilde{X}_{X(3872)}$ | $(m_{\chi_{c1}},\Gamma_{\chi_{c1}})$ [MeV] | $g_{D\bar{D}^{*}}^{\chi_{c1}}$ [GeV$^{-1/2}$] | $\lvert\tilde{X}_{\chi_{c1}}\rvert$ | $\tilde{Z}_{\chi_{c1}}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.00 | -1.938 | 1.00 | 1.00 | (3953.0, 0.0) | 0.00 | 0.00 | 1.00 |
| 0.05 | -1.931 | 1.05 | 0.99 | (3953.1, 0.2) | 0.03-0.02i | 0.001 | 0.99+0.00i |
| 0.10 | -1.913 | 1.05 | 0.99 | (3953.5, 0.8) | 0.06-0.04i | 0.005 | 0.99+0.00i |
| 0.20 | -1.840 | 1.04 | 0.98 | (3954.9, 3.2) | 0.13+0.10i | 0.02 | 0.99+0.01i |
| 0.30 | -1.719 | 1.04 | 0.97 | (3957.4, 7.3) | 0.19+0.13i | 0.04 | 0.98+0.04i |
| 0.40 | -1.549 | 1.03 | 0.95 | (3961.1, 13.4) | 0.26+0.17i | 0.08 | 0.96+0.07i |
| 0.50 | -1.331 | 1.01 | 0.92 | (3966.1, 21.7) | 0.32+0.20i | 0.13 | 0.93+0.11i |
| 0.70 | -0.748 | 0.979 | 0.86 | (3981.9, 45.8) | 0.44+0.26i | 0.28 | 0.84+0.23i |
| $d^{\rm crit}$ | 0.000 | 0.94 | 0.79 | (4009.2, 75.7) | 0.53+0.31i | 0.48 | 0.65+0.34i |
| 1.00 | 0.489 | 0.91 | 0.75 | (3550.4, 0.0) | 0.06 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.01 |
| 2.00 | 7.769 | 0.69 | 0.43 | (3856.6, 0.0) | 0.29 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.19 |
| 4.00 | 36.89 | 0.42 | 0.16 | (3868.9, 0.0) | 0.17 | $\lvert\tilde{X}_{\chi_{c1}}\rvert<1$ | 1.08 |

Table 16: For a bare charmonium mass $m^{0}_{c\bar{c}}=3953$ MeV, the dressed mass of the $\chi_{c1}(2P)$ and its other properties as a function of $d$ ($d^{\rm crit}=0.893559$ fm$^{1/2}$).

## References

* Cincioglu _et al._ (2016) E. Cincioglu, J. Nieves, A. Ozpineci, and A. U. Yilmazer, Eur. Phys. J. C76, 576 (2016), arXiv:1606.03239 [hep-ph].
* Olsen (2015) S. L. Olsen, Front. Phys. (Beijing) 10, 121 (2015), arXiv:1411.7738 [hep-ex].
* Tanabashi _et al._ (2018) M. Tanabashi _et al._ (Particle Data Group), Phys. Rev. D98, 030001 (2018).
* Olsen _et al._ (2018) S. L. Olsen, T. Skwarnicki, and D. Zieminska, Rev. Mod. Phys. 90, 015003 (2018), arXiv:1708.04012 [hep-ph].
* Lebed _et al._ (2017) R. F. Lebed, R. E. Mitchell, and E. S. Swanson, Prog. Part. Nucl. Phys. 93, 143 (2017), arXiv:1610.04528 [hep-ph].
* Choi _et al._ (2003) S. K. Choi _et al._ (Belle), Phys. Rev. Lett. 91, 262001 (2003), arXiv:hep-ex/0309032 [hep-ex].
* Acosta _et al._ (2004) D. Acosta _et al._ (CDF), Phys. Rev. Lett. 93, 072001 (2004), arXiv:hep-ex/0312021 [hep-ex].
* Abazov _et al._ (2004) V. M. Abazov _et al._ (D0), Phys. Rev. Lett. 93, 162002 (2004), arXiv:hep-ex/0405004 [hep-ex].
* Aubert _et al._ (2005) B. Aubert _et al._ (BaBar), Phys. Rev. D71, 071103 (2005), arXiv:hep-ex/0406022 [hep-ex].
* Chatrchyan _et al._ (2013) S. Chatrchyan _et al._ (CMS), JHEP 04, 154 (2013), arXiv:1302.3968 [hep-ex].
* Aaij _et al._ (2013) R. Aaij _et al._ (LHCb), Phys. Rev. Lett. 110, 222001 (2013), arXiv:1302.6269 [hep-ex].
* Aaij _et al._ (2020a) R. Aaij _et al._ (LHCb), Phys. Rev. D 102, 092005 (2020a), arXiv:2005.13419 [hep-ex].
* Aaij _et al._ (2020b) R. Aaij _et al._ (LHCb), JHEP 08, 123 (2020b), arXiv:2005.13422 [hep-ex].
* Chen _et al._ (2016) H.-X. Chen, W. Chen, X. Liu, and S.-L. Zhu, Phys. Rept. 639, 1 (2016), arXiv:1601.02092 [hep-ph].
* Guo _et al._ (2018) F.-K. Guo, C. Hanhart, U.-G. Meißner, Q. Wang, Q. Zhao, and B.-S. Zou, Rev. Mod. Phys. 90, 015004 (2018), arXiv:1705.00141 [hep-ph].
* Aubert _et al._ (2009) B. Aubert, M. Bona, Y. Karyotakis, J. P. Lees, V. Poireau, E. Prencipe, X. Prudent, V. Tisserand, J. G. Tico, E. Grauges, _et al._ (BaBar), Phys. Rev. Lett. 102, 132001 (2009).
* Aaij _et al._ (2014) R. Aaij, B. Adeva, M. Adinolfi, A. Affolder, Z. Ajaltouni, J. Albrecht, F. Alessio, M. Alexander, S. Ali, G. Alkhazov, _et al._, Nucl. Phys. B 886, 665 (2014).
* Godfrey and Isgur (1985) S. Godfrey and N. Isgur, Phys. Rev. D32, 189 (1985).
* Ebert _et al._ (2011) D. Ebert, R. N. Faustov, and V. O. Galkin, Eur. Phys. J. C71, 1825 (2011), arXiv:1111.0454 [hep-ph].
* Guo _et al._ (2015) F.-K. Guo, C. Hanhart, Yu. S. Kalashnikova, U.-G. Meißner, and A. V. Nefediev, Phys. Lett. B742, 394 (2015), arXiv:1410.6712 [hep-ph].
* Cincioglu and Ozpineci (2019) E. Cincioglu and A. Ozpineci, Phys. Lett. B797, 134856 (2019), arXiv:1901.03138 [hep-ph].
* Dong _et al._ (2011) Y. Dong, A. Faessler, T. Gutsche, and V. E. Lyubovitskij, J. Phys. G38, 015001 (2011), arXiv:0909.0380 [hep-ph].
* Takizawa and Takeuchi (2013) M. Takizawa and S. Takeuchi, PTEP 2013, 093D01 (2013), arXiv:1206.4877 [hep-ph].
* Chen _et al._ (2015) G.-Y. Chen, W.-S. Huo, and Q. Zhao, Chin. Phys. C 39, 093101 (2015), arXiv:1309.2859 [hep-ph].
* Nieves and Valderrama (2012) J. Nieves and M. P. Valderrama, Phys. Rev. D86, 056004 (2012), arXiv:1204.2790 [hep-ph].
* Casalbuoni _et al._ (1993) R. Casalbuoni, A. Deandrea, N. Di Bartolomeo, R. Gatto, F. Feruglio, and G. Nardulli, Phys. Lett. B302, 95 (1993).
* Hanhart _et al._ (2014) C. Hanhart, J. R. Pelaez, and G. Rios, Phys. Lett. B739, 375 (2014), arXiv:1407.7452 [hep-ph].
* Colangelo _et al._ (2004) P. Colangelo, F. De Fazio, and T. N. Pham, Phys. Rev. D69, 054023 (2004), arXiv:hep-ph/0310084 [hep-ph].
* Weinberg (1963) S. Weinberg, Phys. Rev. 130, 776 (1963).
* Weinberg (1965) S. Weinberg, Phys. Rev. 137, B672 (1965).
* Garcia-Recio _et al._ (2015) C. Garcia-Recio, C. Hidalgo-Duque, J. Nieves, L. L. Salcedo, and L. Tolos, Phys. Rev. D92, 034011 (2015), arXiv:1506.04235 [hep-ph].
* Guo and Oller (2016) Z.-H. Guo and J. A. Oller, Phys. Rev. D93, 096001 (2016), arXiv:1508.06400 [hep-ph].
* Aceti _et al._ (2014) F. Aceti, L. R. Dai, L. S. Geng, E. Oset, and Y. Zhang, Eur. Phys. J. A 50, 57 (2014), arXiv:1301.2554 [hep-ph].
* Baru _et al._ (2010) V. Baru, C. Hanhart, Yu. S. Kalashnikova, A. E. Kudryavtsev, and A. V. Nefediev, Eur. Phys. J. A44, 93 (2010), arXiv:1001.0369 [hep-ph].
* Albaladejo _et al._ (2013) M. Albaladejo, C. Hidalgo-Duque, J. Nieves, and E. Oset, Phys. Rev. D 88, 014510 (2013), arXiv:1304.1439 [hep-lat].
* Morgan (1992) D. Morgan, Nucl. Phys. A543, 632 (1992).
* Sng and Jumasahatov (2019) J. Y. Sng and A. C. Jumasahatov, J. Phys. G46, 035007 (2019), arXiv:1810.03097 [hep-ph].
* Barnes _et al._ (2005) T. Barnes, S. Godfrey, and E. S. Swanson, Phys. Rev. D72, 054026 (2005), arXiv:hep-ph/0505002 [hep-ph].
* Ebert _et al._ (2003) D. Ebert, R. N. Faustov, and V. O. Galkin, Phys. Rev. D67, 014027 (2003), arXiv:hep-ph/0210381 [hep-ph].
* Gui _et al._ (2018) L.-C. Gui, L.-S. Lu, Q.-F. Lü, X.-H. Zhong, and Q. Zhao, Phys. Rev. D98, 016010 (2018), arXiv:1801.08791 [hep-ph].
* Deng _et al._ (2017) W.-J. Deng, H. Liu, L.-C. Gui, and X.-H. Zhong, Phys. Rev. D95, 034026 (2017), arXiv:1608.00287 [hep-ph].
* Segovia _et al._ (2013) J. Segovia, D. R. Entem, F. Fernandez, and E. Hernandez, Int. J. Mod. Phys. E22, 1330026 (2013), arXiv:1309.6926 [hep-ph].
* Ortega _et al._ (2013) P. G. Ortega, D. R. Entem, and F. Fernandez, J. Phys. G40, 065107 (2013), arXiv:1205.1699 [hep-ph].
# Case Studies on X-Ray Imaging, MRI and Nuclear Imaging

Shuvra Sarker, Research and Development Department, Pioneer Alpha, Dhaka, Bangladesh <EMAIL_ADDRESS>
Angona Biswas, Research and Development Department, Pioneer Alpha, Dhaka, Bangladesh <EMAIL_ADDRESS>
MD Abdullah Al Nasim, Research and Development Department, Pioneer Alpha, Dhaka, Bangladesh <EMAIL_ADDRESS>
Md Shahin Ali, Department of Biomedical Engineering, Islamic University, Kushtia-7003, Bangladesh <EMAIL_ADDRESS>
Sai Puppala, Department of Computer Science, University of Alabama at Birmingham, Alabama, USA <EMAIL_ADDRESS>
Sajedul Talukder, Department of Computer Science, University of Alabama at Birmingham, Alabama, USA <EMAIL_ADDRESS>

###### Abstract

The field of medical imaging is an essential aspect of the medical sciences, involving various forms of radiation to capture images of the internal tissues and organs of the body. These images provide vital information for clinical diagnosis, and in this chapter, we explore the use of X-ray, MRI, and nuclear imaging in detecting severe illnesses. However, manual evaluation and storage of these images can be a challenging and time-consuming process. To address this issue, artificial intelligence (AI)-based techniques, particularly deep learning (DL), have become increasingly popular for systematic feature extraction and classification from imaging modalities, thereby aiding doctors in making rapid and accurate diagnoses. In this review study, we focus on how AI-based approaches, particularly Convolutional Neural Networks (CNNs), can assist in disease detection through medical imaging technology. CNNs are a commonly used approach for image analysis due to their ability to extract features from raw input images, and as such, the CNN is the primary area of discussion in this study for diagnosing ailments using medical imaging technology.

Keywords: Medical imaging, X-ray, MRI, Nuclear Imaging, Deep learning, Diagnosis, Artificial intelligence

## 1 Introduction

Amid growing advancements in the medical field, medical imaging has played a significant role in diagnosing a variety of ailments. Medical imaging is the process of viewing and monitoring the interior of the anatomy to aid in the diagnosis and treatment of disorders [1]. Popular techniques, including X-ray, MRI, and nuclear imaging, are used in diagnostic procedures to provide images of internal organs and bones. The X-ray (Röntgen radiation) is a non-invasive clinical diagnostic technology that yields 2D images of the internal structure of the body by utilizing electromagnetic radiation of extremely high frequency, high energy, and short wavelength [2]. It was discovered by Wilhelm Conrad Röntgen, a German scientist, who named it X-radiation because its nature was then unknown [4]. Röntgen observed that invisible rays from a cathode-ray tube penetrated cardboard and caused a fluorescent screen to glow. An X-ray beam is passed through the anatomy: some of the radiation is absorbed or deflected by the internal structures, while the remainder reaches a detector (photographic film), producing an image of the internal structure [6]. Less dense tissues appear gray on the detector, as most of the X-rays pass through soft tissue, whereas densely packed structures, e.g., bone, appear white.
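This gray-versus-white contrast follows from exponential attenuation: a beam of initial intensity $I_0$ traversing a thickness $x$ of material with linear attenuation coefficient $\mu$ emerges with intensity $I=I_0e^{-\mu x}$ (the Beer–Lambert law). A minimal sketch below illustrates the effect; the coefficient values are rough assumptions for diagnostic X-ray energies, chosen for illustration only:

```python
import numpy as np

I0 = 1.0               # incident beam intensity (arbitrary units)
thickness_cm = 3.0     # assumed path length through the material

# Illustrative linear attenuation coefficients [1/cm] (assumed values)
mu = {"soft tissue": 0.2, "bone": 0.5}

for material, coeff in mu.items():
    transmitted = I0 * np.exp(-coeff * thickness_cm)
    # Fewer transmitted photons -> less exposure at the detector -> whiter pixel
    print(f"{material}: transmitted fraction = {transmitted:.2f}")
```

With these assumed numbers, bone transmits roughly 22% of the beam versus about 55% for soft tissue, which is why bone renders white and soft tissue gray.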
In addition to detecting fractures (broken bones), X-rays can be utilized to inspect the kidney, bladder, lungs, stomach, and liver [7]. This capability of X-ray imaging assists doctors in investigating problems related to bones, joints, soft tissues, and other sections of the body effectively. Moreover, another vital role of X-rays is guiding surgeons during coronary angiograms for emergency treatment after a heart attack. X-rays are also used in treating cancer and in diagnosing pneumonia, injuries, breast cancer, and dental issues. Recently, the diagnosis of bacterial pneumonia and of the most serious, life-threatening COVID-19 disease has relied heavily on X-ray imaging. Several diseases can be diagnosed effectively by employing X-ray images; among them, numerous studies have been carried out on COVID-19. Recognizing the importance of this topic, we discuss in detail in this chapter the detection of COVID-19-related issues from X-ray images. Since COVID-19 is a pandemic disease and early identification of bodily issues enables quick treatment, artificial intelligence (AI), combined with radiological X-ray imaging, can assist in diagnosing the disease. The majority of the studies are centered on deep learning, which can diagnose illnesses autonomously from X-ray images.

Another non-invasive medical imaging approach is Magnetic Resonance Imaging (MRI), which yields detailed, high-quality 3D images of the internal structure of the body from various angles. Although MRI was invented by Raymond Damadian, the Nobel Prize was awarded to Paul Lauterbur and Peter Mansfield for providing the basis for gathering information from high-quality images [8]. MRI scanners are well suited to capturing images of the soft tissues of the anatomy, and MRI can be employed to examine the blood vessels and heart, internal organs (liver, kidneys, etc.), cancers, and other structures [9]. This technology uses radio frequency (RF) pulses and a strong magnetic field instead of damaging (ionizing) electromagnetic radiation. After the protons in the human body are aligned, an RF pulse is applied, which knocks the protons out of alignment; when the RF is switched off, the protons realign and emit radio signals back, permitting the scanner to reconstruct images of a specific region of the body. MRI is a prominent screening test for the diagnosis of any type of brain or spinal cord disorder. Additionally, AI can evaluate the images for diagnosis in order to gain new insight into an ailment. MRI has recently become the most effective way to detect brain tumors in the medical field, and according to prior research, AI-based approaches yield the most accurate and efficient results for automating the identification of brain tumors from MRI.

Nuclear medicine imaging is quite distinct from the other conventional imaging modalities. It tracks radiation released from within the body, as opposed to radiation produced by external sources such as X-rays. It also differs from other radiological imaging in that it emphasizes the organ's functionality, whereas other methods focus simply on the organ's appearance. It makes use of radioactive substances (radiopharmaceuticals), which are ingested or inhaled into the body, and a detector records images from the radiation released by the radiopharmaceuticals. It focuses on organ tissue, as in lung scans, brain scans, heart scans, bone scans, gallbladder scans, and tumor imaging [10]. This approach can also identify damaged tissue and use a tracer to target the damaged cells for destruction, halting their development.
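Because the signal in nuclear imaging comes from radioactive decay of the administered tracer, the usable imaging window is governed by the tracer's half-life through $A(t)=A_0\,2^{-t/t_{1/2}}$. A minimal sketch using the approximate, well-known half-lives of two common tracers (about 6 h for Tc-99m and about 110 min for F-18); the two-hour evaluation time is an arbitrary illustration:

```python
# Approximate half-lives of two common radiotracers, in hours
HALF_LIFE_H = {"Tc-99m": 6.0, "F-18": 110.0 / 60.0}

def remaining_fraction(t_hours: float, t_half_hours: float) -> float:
    """Fraction of initial activity left after t_hours: A/A0 = 2^(-t/t_half)."""
    return 2.0 ** (-t_hours / t_half_hours)

for tracer, t_half in HALF_LIFE_H.items():
    frac = remaining_fraction(2.0, t_half)
    print(f"{tracer}: {frac:.2f} of the injected activity remains after 2 h")
```

This is why short-lived tracers must be produced close to the scanner and why scans are scheduled promptly after injection.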
In this chapter, some of the use cases of medical imaging modalities are briefly explored, with an emphasis on their applicability to the most frequent ailments. This study presents the diagnosis of particular diseases utilizing medical imaging with the help of AI technologies, offering a thorough review of illness diagnosis using X-ray, MRI, and nuclear imaging (NI) with DL techniques, as studied in numerous research publications. Recently, deep neural networks (DNNs) have gained remarkable traction in the medical field for their self-learning capacity to classify disorders. This brief review of the existing methods will assist researchers in obtaining knowledge in this field and conducting further studies on this topic. An insightful analysis of X-ray, MRI, and NI imaging approaches and their applications is presented in this chapter after reviewing numerous articles. From the prior research, a few representative articles are selected, and the methods they used along with the results they achieved are presented in brief here, covering X-ray, MRI, and NI. The remainder of this chapter is organized as follows: related studies on the mentioned topics are presented briefly in Section 2; Section 3 describes the materials and methodology; Section 4 depicts the results and analysis; and Section 5 presents the main findings of this chapter.

## 2 Background Study and Related works

### 2.1 Medical Imaging and Essential Study in Medical Science

#### 2.1.1 Medical Imaging

The history of medical imaging dates back to the discovery of X-rays by Wilhelm Conrad Roentgen in 1895. Since then, the field of medical imaging has evolved rapidly with the development of various imaging modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and positron emission tomography (PET). Here is a brief overview of the major milestones in the history of medical imaging:

- In 1895, Roentgen discovered X-rays, a form of electromagnetic radiation that can penetrate the body and produce images of internal structures. X-rays quickly became a popular diagnostic tool for detecting fractures, tumors, and other abnormalities.
- In the 1950s, ultrasound imaging was developed, which uses high-frequency sound waves to produce images of internal structures. Ultrasound is particularly useful for imaging soft tissues and organs, such as the liver, kidneys, and the fetus during pregnancy.
- In the 1970s, the first CT scanner was developed, which uses X-rays and computer processing to produce detailed cross-sectional images of the body. CT scans are particularly useful for detecting tumors, injuries, and internal bleeding.
- In the 1980s, MRI was developed, which uses a powerful magnetic field and radio waves to produce detailed images of the body. MRI is particularly useful for imaging soft tissues, such as the brain, spinal cord, and joints.
- In the 1990s, PET scanning was developed, which uses a radioactive tracer to produce images of the body's metabolic activity. PET scans are particularly useful for detecting cancer, heart disease, and neurological disorders.

In conclusion, medical imaging has come a long way since the discovery of X-rays in 1895.
The development of new imaging modalities and technologies has revolutionized the field of medicine, enabling doctors to diagnose and treat diseases with greater accuracy and precision.

#### 2.1.2 X-Ray in Medical Science

X-rays are commonly used to diagnose and monitor bone fractures, tumors, infections, and other abnormalities in the body. X-ray machines produce a controlled amount of radiation that passes through the body and creates an image on a film or digital detector. The resulting image shows the internal structures of the body in shades of gray, with denser structures appearing whiter and less dense structures appearing darker. X-rays are a type of electromagnetic radiation with a wavelength shorter than that of visible light. When X-rays pass through the human body, they are absorbed at different rates by different tissues and structures, depending on their density and composition. This property of X-rays makes them a useful tool for medical imaging, allowing doctors to visualize internal structures and diagnose various medical conditions.

#### 2.1.3 MRI in Medical Science

The MRI technique is particularly useful for imaging soft tissues, such as the brain, spinal cord, and organs, which are difficult to see with other imaging techniques. MRI can detect a wide range of medical conditions, including tumors, infections, injuries, and neurological disorders. It can also be used to monitor the progression of certain diseases and to guide surgical procedures. MRI, which stands for Magnetic Resonance Imaging, is a medical imaging technique that uses a strong magnetic field, radio waves, and a computer to create detailed images of the body's internal structures. Unlike X-rays, MRI does not use ionizing radiation and is considered safe for most patients [43]. The MRI procedure is as follows: the patient lies on a table that slides into a large tube-shaped machine; the machine creates a powerful magnetic field that aligns the hydrogen atoms in the patient's body; radio waves are then used to disrupt the alignment of these atoms, causing them to emit signals that are detected by the machine's receiver; and these signals are processed by a computer to produce detailed images of the body's internal structures.

#### 2.1.4 Nuclear imaging in Medical Science

Nuclear imaging, also known as nuclear medicine imaging, is a medical imaging technique that uses small amounts of radioactive materials, called radiopharmaceuticals, to create images of the body's internal structures and functions. Nuclear imaging is different from other medical imaging techniques because it can show how different parts of the body are functioning, rather than just their structure [55]. During a nuclear imaging procedure, a small amount of radiopharmaceutical is injected into the patient's body, usually into a vein. The radiopharmaceutical travels through the patient's body and accumulates in the organ or tissue being studied. The patient then lies on a table, and a special camera is used to detect the radiation emitted by the radiopharmaceutical; this radiation is processed by a computer to create images of the body's internal structures and functions.

Several works have been conducted based on these medical imaging techniques. We first consider X-ray imaging. To analyze the X-ray imaging modality, the detection of coronavirus from X-ray images has been chosen as the example, owing to the wide accessibility of such images.
X-ray imaging has been considered the primary diagnostic imaging tool for the coronavirus, and several pieces of research on this topic permit us to take it as an example. Automated identification of this disease is of particular interest due to its infectious nature and the scarcity of accessible test equipment. As a result, previous research has relied on AI-based techniques, and recent advancements in deep learning assist in interpreting X-ray images. Among the prior studies, the most frequent and effective approach utilizes CNNs on X-ray images, as they can distinguish COVID-19 features from those of other common pneumonias directly from raw data. X-ray images are playing a very crucial role in the ongoing pandemic for detecting lung abnormalities in a timely manner. A transfer learning technique was adopted along with a CNN to recognize COVID-19 from X-ray images, achieving excellent results [11]. The key emerging tool is the X-ray, from which the data were obtained in order to detect and categorize whether a patient has COVID-19 or not. Ozturk et al. [12] proposed a DL-based model named 'DarkNet' as a classifier, with 17 convolutional layers and different filters applied to X-ray images to automatically identify COVID-19; it achieved an accuracy of 98.08% for binary categorization and 87.02% for multi-class categorization. In [13], COVIDX-NET, a DL classifier framework, was proposed, trained with seven distinct DCNN architectures to interpret patient status from 50 X-ray images. The efficacy of MobileNetV2, one of these classifiers, can be enhanced further for use on smart devices in the clinical field due to its fast computation. Narin et al. [2] proposed five pre-trained CNN-based approaches utilizing X-ray images and found that the pre-trained ResNet50 model offers the highest accuracy; in that work, three distinct binary classifications were implemented with four classes. Wang et al. [14] presented COVID-Net, a deep CNN-based design that is open source and accessible to all for diagnostic decision-making. A sufficient amount of data is necessary to accurately identify and categorize the disease; in [68], COVID-Net therefore plays a crucial role in providing sufficient and current data for automatic classification. They proposed a method for creating this dataset, which will aid radiologists in correctly interpreting X-ray images.

MRI employs an intense magnetic field combined with radio frequency to offer comprehensive information about soft tissues, bones, organs, and other body interiors. As it provides a detailed picture of soft tissue with 3D visualization, the detection of brain tumors has been selected as the example for MRI. High precision is needed for identifying brain tumors, since even a small inaccuracy might have fatal consequences. The detailed MRI image can assist the physician in pinpointing a particular disease in a particular area, owing to MRI's capacity to discern structure and tissue based on contrast levels. Several studies have been carried out on the automatic detection of brain tumors. In [15], the authors proposed an automated brain tumor identification technique based on datasets pre-processed with a diffusion filter; a median filter as well as the SWT were employed for de-noising. Following that, SWT was utilized to extract features before segmenting and classifying with SVM or PNN. In [16], Saladi et al.
proposed a more accurate and precise brain tumor detection method for segmentation purposes that makes use of adaptively regularized kernel-based fuzzy c-means (ARKFCM). Noise removal and segmentation present the most challenging issues for MRI brain tumor diagnosis; three distinct enhancement approaches, namely HE, CLAHE, and BPDFHE, were employed in preprocessing to remove noise from the MRI images. In [17], three distinct types of tumors were identified: K-means was utilized for segmentation, features were extracted through DWT and then reduced by adopting PCA, and finally an ANN was adopted for classification; the efficient 'Levenberg–Marquardt' training function was employed in the presented network design. To identify tumors and their location, a Faster R-CNN DL technique with a Region Proposal Network (RPN) was presented, with VGG-16 adopted as the base architecture of that model [18]. Binary classification was described in [19], where VGG-16 and AlexNet were utilized for extracting features: the authors first enhanced the salient features utilizing the hypercolumn strategy and then merged the information acquired from both architectures. The fittest features were chosen by employing recursive feature elimination (RFE). Afterward, a Support Vector Machine (SVM) was employed for categorization, yielding an overall accuracy of 96.77%. In [56], the authors presented a DWT-based CNN model in which four MRI sequences were fused via DWT with a DW kernel, yielding a more comprehensive tumor region than a single MRI sequence. Thereafter, a partial differential diffusion filter (PDDF) was employed for noise elimination, and segmentation was done utilizing a global thresholding approach; the segmented images were then fed into a 23-layer CNN architecture for classification.

A retrospective analysis was carried out to diagnose lung cancer utilizing nuclear imaging. In [21], SPECT as well as PET (positron emission tomography) were integrated with CT to diagnose lung cancer with 3D treatment planning; the nuclear imaging technique was examined together with CT images to improve the accuracy. PET is frequently used in brain tumor analysis, as this technique can distinguish between benign and malignant brain tumors. For benign lesions, information from functional evaluations of each organ is of particular interest, letting the physician know whether the organ is activating normally [22]. In [23], the authors evaluated voxel-based criteria in conjunction with dual-time-point 18F-FDG PET for the detection and localization of high-grade malignant tumors, and they improved sensitivity compared to standard FDG PET. In [24], the authors proposed a technique for identifying brain tumors utilizing carbon-11 choline along with PET and examined its effectiveness: C-11 choline was administered intravenously to humans and to rats with tumors, and the tracer's dispersion throughout the brain was assessed. FDG, the most widely used PET tracer, cannot differentiate between high-grade (HG) tumors and radiation necrosis [25]. Furthermore, PET scanning might provoke allergic reactions in individuals, which can be harmful. Additionally, due to their inferior spatial resolution compared to MRI scanning, PET tracers are unable to offer reliable localization of physiological structures [26]. The authors of [27] presented a DL-based approach and compared the use of conventional CT images and CT images derived from PET: a ResNet-18 structure was utilized for pre-processing, and transfer learning was then used for classification. They demonstrated that traditional CT provides greater accuracy than CT derived from PET [27].
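Transfer-learning pipelines of the kind repeatedly cited above usually take a backbone pre-trained on ImageNet, freeze its feature extractor, and retrain only a new classification head. The following is a minimal sketch, assuming PyTorch/torchvision; the two-class setup, dummy batch, and hyperparameters are illustrative stand-ins, not the configurations of the cited studies:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # e.g., abnormal vs. normal (illustrative)

# ImageNet-pretrained ResNet-18 backbone
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for the new task
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Optimize only the new head
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB scans
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"batch loss: {loss.item():.3f}")
```

In practice, the random tensors are replaced by batches of preprocessed scans, and a held-out validation split guards against overfitting the typically small medical datasets.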
They demonstrated that conventional CT provides greater accuracy than CT derived from PET [27]. In [28], coronary artery disease (CAD) was diagnosed using SPECT myocardial perfusion imaging (MPI): a robust CNN structure was employed for feature extraction and subsequent classification of the MPI images. Their RGB-CNN model can be trained efficiently on a limited dataset. The authors implemented another RGB-CNN model to categorize SPECT data, proposing three CNN approaches in which the RGB-CNN was designed from scratch while VGG-16 and DenseNet-121 were implemented via transfer learning [29]. They experimented with numerous CNN layer combinations, using 10-fold cross-validation to find the best one, which resulted in high accuracy. Comparing their RGB-CNN method with VGG-16 and DenseNet-121, the RGB-CNN demonstrated adequate performance with strong findings, VGG-16 functioned effectively, and DenseNet-121 retrieved adequate information.

Considerable research progress has been made in the X-ray, MRI, and nuclear imaging modalities, and we have analyzed a few of the important studies in this chapter. In recent times we have also observed significant traction towards federated learning, which provides an efficient way of detecting newly discovered variants such as COVID-19; some useful findings on federated learning are reported in [63, 64, 65]. In [66], a collaborative federated learning system was introduced that enables deep-learning image analysis and the classification of diabetic retinopathy without transferring patient data between healthcare organizations.

## 3 Materials and Methodology of study

### 3.1 X-ray

X-ray imaging is the most effective diagnostic tool for identifying serious illnesses such as pneumonia. Recently, this imaging technique has also been utilized to diagnose the highly contagious and severe pandemic caused by the coronavirus. Given the urgency and rapid spread of this pandemic, early detection is crucial for reducing mortality rates and slowing transmission. However, diagnosing this disease from X-ray images requires specialized expertise or experienced readers. Therefore, artificial intelligence (AI) has emerged as the fastest and most efficient approach for automatically detecting the coronavirus in X-ray images. Numerous studies have investigated the use of X-ray scans in diagnosing this pandemic disease, and in this context we provide specific examples to highlight the significance of this imaging modality in determining the severity of the condition. Among the various AI techniques, this review primarily focuses on the CNN-based deep learning approach due to its superior effectiveness [3].

#### 3.1.1 Materials:

The progress of DL in medical imaging modalities relies on curated image datasets. A wide variety of existing methods have been studied using X-ray imaging for observing various ailments, particularly lung diseases; the detection of COVID-19 is taken here as the example using X-ray images. A dataset is the primary requisite for diagnosing any disease, and the chest X-ray scans are obtained from distinct sources. Cohen JP's dataset (covid-chestxray-dataset) [30] is one of the most frequently used collections of X-ray images from patients with COVID-19; it aggregates images from numerous free sources and is periodically updated.
"ChestX-ray8" is another database, containing 108,948 frontal-view X-ray images of 32,717 distinct patients annotated with eight disease image labels that assist in detecting thoracic disease [31].

Figure 1: Sample chest X-ray images: (A) COVID-19 case, (B) normal case, (C) viral pneumonia case [14].

Sample X-ray images are shown in Figure 1, which depicts three cases. The datasets consist of X-ray images (chest, lungs, bones, and other organs) for recognition and classification purposes.

#### 3.1.2 Methodology:

The COVID-19 virus can cause irreversible lung damage, resulting in pneumonia and possibly death, so it is crucial for doctors to detect it at a very early stage to reduce mortality. Methods from several studies are presented here. In [32], the authors used the fuzzy color technique to restructure the data classes and stacked the images, which were then processed with the DL models MobileNetV2 and SqueezeNet; the Social Mimic Optimization (SMO) approach was adopted to select efficient features, which were subsequently aggregated and classified with an SVM. Joseph Paul Cohen was the first to compile COVID-19 datasets and share them publicly on GitHub; the majority of COVID-19 detection studies are based on this dataset, which aids researchers in their COVID-19 categorization work. In that research, three different datasets were collected, the images were pre-processed using the fuzzy technique, trained through the DL models MobileNetV2 and SqueezeNet, and classified with the SVM technique. CNNs play a vital role in image classification; as a result, the majority of these works are based on CNN-based imaging pipelines.
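As a concrete illustration of the transfer-learning workflow described above, the following Python sketch fine-tunes an ImageNet-pretrained MobileNetV2 (one of the architectures used in [13] and [32]) on a folder of chest X-ray images. The directory layout, class names, and hyperparameters are illustrative assumptions, not the setup of any of the cited studies.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Illustrative assumption: images arranged as data/train/{covid,normal,pneumonia}/*.png
tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.mobilenet_v2(weights="IMAGENET1K_V1")
# Replace the ImageNet head with one sized for the X-ray classes
model.classifier[1] = nn.Linear(model.last_channel, len(train_set.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for epoch in range(5):  # small illustrative training budget
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```

Replacing the classifier head and training with a small learning rate is the standard fine-tuning recipe; the cited works additionally pass the extracted deep features to an SVM for the final categorization.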
### 3.2 MRI

#### 3.2.1 Materials:

MRI is the most effective and reliable screening technology for detecting brain tumors (BT), and it is frequently used because it can diagnose BT at a very early stage. Brain tumors are a source of concern, being one of the prominent causes of death. Primary brain tumors do not disseminate to other areas of the body, which distinguishes them from metastatic tumors. Among the three common variants of BT, gliomas form in brain tissues and are considered malignant; meningiomas develop from the membranes and are benign, while pituitary tumors are masses that rest inside the skull [33].

Figure 2: MRI brain images [33].

Figure 2 depicts brain tumor images from MRI. Such images are obtained from datasets for classification or further analysis, and several public datasets are available for training DL models, as the models require ample data. Three distinct, commonly used datasets are utilized in [34], where the images were preprocessed and later segmented using fuzzy c-means; to categorize BT images, the detection performance of several classification techniques, including discriminant analysis (DA), K-nearest neighbor (KNN), SVM, decision tree (DT), neural network (NN), and Naive Bayes (NB), was examined. In [35], the authors developed six distinct CNN models using the BraTS 2013 dataset, which were later tested on the WBA dataset; the best result was obtained with the KNN classifier on the OASIS dataset. Some available MRI datasets are listed in Table 1.

Table 1: Some available datasets of MRI images.

Reference | Datasets
---|---
Sahoo et al. [34]; Kalaiselvi et al. [35] | BRATS [36, 37]; OASIS [38]; IBSR [39]; Kaggle [40]; The Whole Brain Atlas (WBA) [41]

#### 3.2.2 Methodology:

The existing approaches for brain tumor identification share some common procedures for precise classification. First, input images for diagnosis and classification are obtained from an MRI image dataset (Nasim et al.). The images are then pre-processed with techniques such as high-pass, low-pass, or other filters to eliminate noise. The data are then augmented for training through the DL structure, and segmentation is performed; image processing, thresholding, and similar operations in this segmentation step locate the tumor or other specific tissue. After segmenting the relevant area, it is used as input to the DL structure (alternatively, the preprocessed images can be passed to the DL structure without segmentation). Finally, the classification of distinct tumors is performed using DL approaches such as CNN, ResNet, or GoogLeNet, which learn and extract features automatically from raw data. CNNs are capable of learning patterns from raw images and extracting features automatically, and transfer learning has been combined with DL approaches for classification. In [42], the authors proposed a tumor classification scheme in which a pre-trained CNN model, VGG-19, was utilized for feature extraction and KNN was then applied for BT classification on two independent datasets, BraTS 2018 and CE-MRI. In [44], the authors proposed a more accurate BT detection procedure using fuzzy c-means (FCM) clustering with the help of two optimization tools; they suggested two approaches and compared them to determine which was superior. FCM with the GA (genetic algorithm) was more time-consuming than the alternative; since execution time is an important parameter in image analysis, their proposed combination of PSO (particle swarm optimization) with FCM took less time. Segmentation is a critical step in finding ambiguous regions in complicated medical images, and they presented segmentation approaches combining FCM with the pre-processing tools GA and PSO. In another approach, a new method for data augmentation on numerous datasets was proposed, in which several sets of MRI data were used to train a GAN (generative adversarial network) to create MRI-like images.
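To make the classical pipeline summarized above concrete, here is a minimal Python sketch in the spirit of [17] (K-means segmentation, DWT feature extraction, PCA reduction). An SVM replaces the ANN classifier of [17] purely for brevity, and the random arrays stand in for real MRI slices and labels.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def extract_features(img):
    """Segment with K-means, then use DWT approximation coefficients as features."""
    # K-means on pixel intensities; the brightest cluster serves as a crude tumor-candidate mask
    km = KMeans(n_clusters=3, n_init=10).fit(img.reshape(-1, 1))
    mask = (km.labels_ == np.argmax(km.cluster_centers_)).reshape(img.shape)
    cA, (cH, cV, cD) = pywt.dwt2(img * mask, "haar")  # single-level 2-D DWT
    return cA.ravel()

rng = np.random.default_rng(0)
imgs = rng.random((40, 64, 64))        # placeholder for 40 MRI slices
labels = rng.integers(0, 2, size=40)   # placeholder binary labels
X = np.array([extract_features(im) for im in imgs])

X_red = PCA(n_components=10).fit_transform(X)  # reduce feature dimensionality
clf = SVC(kernel="rbf").fit(X_red, labels)     # [17] used an ANN; an SVM is used here for brevity
print(clf.score(X_red, labels))
```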
### 3.3 Nuclear imaging

Nuclear imaging is an organ-based functional modality that has a significant effect on patient care by offering tools for disease staging. It creates images by sensing radiation emitted from distinct locations of the anatomy after a radioactive tracer has been administered to the patient. The benefit of evaluating an organ's function is that it assists clinicians in planning and developing diagnostic strategies for the area of the body being analyzed. This differs from other imaging modalities in that it employs a tracer with no side effects and minimal radiation, and it is hence safer than the other modalities. Furthermore, it provides essential information at a preliminary stage that other imaging modalities cannot. Following the administration of the tracer and its accumulation in the bodily tissue being studied, radiation is produced and detected by a gamma camera. During a nuclear scan, physicians can analyze and diagnose disorders such as tumors, infections, and cysts by observing the behavior of the radioactive substance. The region of the body where the radionuclide accumulates in larger proportions is called a hot spot [45]. This modality can detect abnormal renal, brain, thyroid, and heart function, among other things. Figure 3 presents an image from nuclear imaging technology.

Figure 3: Image of nuclear medicine imaging [46].

Nuclear imaging approaches are of two types, SPECT (single-photon emission computed tomography) and PET (positron emission tomography), which yield metabolic and functional details, as opposed to CT and MRI. They are integrated with CT and MRI to combine the necessary information and permit interconnection between functional and physiological data. Compared to SPECT, PET offers superior contrast and positional accuracy [47]. PET makes use of a positron-emitting tracer, while SPECT employs a gamma-emitting tracer. SPECT is a 3D imaging modality that maps radiopharmaceutical dispersion and provides more clarity, brightness, and spatial information than planar nuclear imaging; it employs multiple gamma cameras that rotate around the patient to provide greater accuracy and spatial precision, and the reconstructed data yield 3D images [48].

PET is used to assess radioactivity in vivo. It entails injecting a positron-emitting tracer intravenously, allowing it to distribute systemically, and afterwards monitoring the identification and measurement of radiopharmaceutical distribution patterns in the anatomy. F-18 fluorodeoxyglucose (FDG) is used as the radiopharmaceutical taken up by cells: having a high metabolism, tumor cells metabolize FDG, which is processed into FDG-6-phosphate [47]. Tumor cells are unable to consume this compound further, so it aggregates and concentrates in the tumor cells, which serves the purpose of detection. SPECT produces 3D maps of lung perfusion, while the 3D metabolic pictures produced by PET aid tumor localization [67].

A DL-based approach to detect coronary artery disease using SPECT, a nuclear imaging (NI) technique, is taken as the example for the methodology of NI. Most such datasets are prepared by nuclear diagnostic departments; the dataset for this approach was prepared at a medical center where SPECT images were used to visualize the heart. In [29], the authors proposed an RGB-CNN model in which the SPECT MPI images were first loaded and presented in RGB mode. The data were then prepared through normalization, shuffling, and splitting; data augmentation was performed to broaden the limited dataset, and the suggested CNN model was trained. Their proposed model was able to identify three classes of heart disease autonomously. They selected appropriate parameters to correctly classify cardiac disease automatically and experimented with different convolution layer combinations, among which the 16-32-64-128 combination performed best.
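The RGB-CNN architecture described above can be sketched as follows. The 16-32-64-128 layer widths follow the best-performing combination reported in [29], while kernel sizes, pooling, input resolution, and the dense head are illustrative assumptions rather than the exact design of that work.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # 3x3 convolution followed by ReLU and 2x2 max pooling
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

model = nn.Sequential(
    conv_block(3, 16),    # RGB SPECT MPI input
    conv_block(16, 32),
    conv_block(32, 64),
    conv_block(64, 128),  # the 16-32-64-128 combination reported in [29]
    nn.Flatten(),
    nn.LazyLinear(64),    # infers the flattened size at the first forward pass
    nn.ReLU(),
    nn.Linear(64, 3),     # three heart-disease classes, as in [29]
)

x = torch.randn(8, 3, 128, 128)   # a dummy batch of 128x128 RGB images
print(model(x).shape)             # -> torch.Size([8, 3])
```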
## 4 Result and analysis

A review of some existing methods is presented here.

Table 2: Performance evaluation of numerous studies for COVID-19 detection from X-ray images.

Authors | Method | Dataset | Classes | No. of X-ray samples | Accuracy (%)
---|---|---|---|---|---
Nayak et al. [50] | ResNet-34 | covid-chestxray-dataset; ChestX-ray8 | 2 | 406 (COVID-19: 203, Normal: 203) | 98.33
Ucar and Korkmaz [51] | Bayes-SqueezeNet | covid-chestxray-dataset; Kaggle chest X-ray pneumonia dataset | 3 | 5949 (COVID: 76, Normal: 1583, Pneumonia: 4290) | 98.3
Ozturk et al. [12] | DarkCovidNet | covid-chestxray-dataset; ChestX-ray8 | 2 (binary) | 625 (COVID: 125, Normal: 500) | 98.08
Toğaçar et al. [32] | SqueezeNet, MobileNetV2, SVM, SMO | Stacked covid-chestxray-dataset and COVID-19 Radiography Dataset (Kaggle) [52] | 3 | 458 (COVID: 295, Normal: 65, Pneumonia: 98) | 98.25
Hemdan et al. [13] | COVIDX-Net | Cohen's covid-chestxray-dataset | 2 | 50 (COVID: 25, Normal: 25) | 90.00
Narin et al. [2] | ResNet-50 | Cohen's covid-chestxray-dataset | 2 | 100 (COVID: 50, Normal: 50) | Dataset 1: 96.1; Dataset 2: 99.5; Dataset 3: 99.7
Wang et al. [14] | COVID-Net | COVIDx | 3 | 13,975 | 93.3
Sethy and Behera [53] | SVM, ResNet-50 | Datasets from GitHub, Open-I and Kaggle | 2 | 50 (COVID: 25, Normal: 25) | 95.38
Farooq and Hafeez [54, 68] | COVID-ResNet | COVIDx | 4 | 5941 | 96.23

From Table 2 it can be seen that numerous studies have been conducted with a variety of methods; among them, the ResNet-34 model achieves 98.33% for the detection of COVID-19. X-rays can be used to diagnose several diseases, such as heart disease, fractures of any type, and lung disease; the diagnosis of COVID-19 was chosen here as the example using X-ray images. It is also observed that the majority of the works used the COVID chest X-ray dataset. Table 3 shows that MRI images were utilized to recognize the most common brain tumors, where CNN models obtained better results on the BRATS datasets. Nuclear imaging technology was used to diagnose various diseases; Table 4 lists several studies of this imaging technique for coronary disease diagnosis, in which the RGB-CNN model performed better. In light of the foregoing, the majority of imaging modalities employ convolutional neural network (CNN)-based deep learning technology. It is frequently utilized for recognition and classification in images, as it can classify input images by discovering patterns on its own and therefore shows remarkable results for image classification. Hence, medical imaging modalities rely on this CNN-based technology for its automatic feature extraction and classification capabilities.

Table 3: Performance evaluation of numerous studies for brain tumor detection from MRI images.

Authors | Purpose | Method | Dataset | Accuracy (%)
---|---|---|---|---
Kalaiselvi et al. [35] | Detection along with classification | CNN | BRATS 2013 dataset; The Whole Brain Atlas (WBA) | 96-99
Gopal et al. [44] | Detection, segmentation, and classification | Fuzzy c-means algorithm along with PSO and GA | — | GA with FCM: 74.6; PSO with FCM: 92.3
Toğaçar et al. [19] | Binary classification | CNN (AlexNet and VGG-16) for feature extraction, RFE for feature selection, and SVM for classification | Dataset of Chakrabarty 2019 (Kaggle) | 96.77
Amin et al. [56] | Binary classification | Conventional optimized filter for pre-processing; seed-growing technique for segmentation; stacked sparse auto-encoder (SSAE) for classification | Multiple BRATS challenge datasets | 100 on 2012, 90 on 2012 synthetic, 95 on 2013, 100 on Leaderboard 2013, 97 on 2014, 95 on 2015
Amin et al. [20] | Classification | DWT with DW kernel; CNN for classification | BRATS dataset | Average 97.6
Ghassemi et al. [57] | Classification of three distinct tumor classes | GAN pre-training to produce MRI-like images; CNN for classification | Dataset by Cheng et al. [58]; brain volume MRI images dataset [59] | 93.01 (introduced split), 95.6 (random split)
Salma et al. [60] | Detection and classification | Optimal wavelet statistical texture features (OGSA) along with an RNN | Images collected from diagnostic and web resources | 96.26

Table 4: Performance evaluation of numerous studies for coronary artery disease detection from nuclear imaging technology.

Authors | Method | Purpose | Accuracy (%)
---|---|---|---
Papandrianos et al. [29] | RGB-CNN | Three-class classification to diagnose CAD | 91.86
Papandrianos and Papageorgiou [28] | RGB-CNN | Two-class classification of CAD (ischemia or infarction) | 94
Arsanjani et al. [61] | ML-based LogitBoost algorithm | Early revascularization | —
Berkaya et al. [62] | Two classification techniques employing transfer learning and SVM | Two-class categorization of myocardial anomalies | 94

## 5 Conclusion

This chapter presents several studies of three medical imaging modalities, X-ray, MRI, and nuclear imaging, for diagnostic purposes. The findings of these studies highlight the strength of artificial intelligence (AI) approaches in medical science. DL-based models, particularly CNN-based ones, are widely used for image classification, eliminating the need for manual feature extraction; such architectures extract features directly from raw data and can be constructed by combining different layers. Although these DL techniques generally require large datasets, data augmentation can be used to benefit smaller ones. The images from a dataset undergo pre-processing through various filters, segmentation, and classification, utilizing DL structures with precise features. The automatic recognition and classification abilities of DL techniques working on raw data alleviate the workload of doctors and clinical staff.

From the reviewed studies, it is observed that X-ray is the most frequently used technology in the diagnosis of lung diseases. The ongoing COVID-19 pandemic is detected and classified using X-ray images, which have achieved remarkable success with DL techniques; prior studies indicate that the ResNet-50 model achieves a high degree of accuracy in determining COVID-19. According to prior studies, MRI is the most frequently used technology for identifying brain tumors (BT) with DL models; in this review, the CNN and SSAE structures achieved the best results for identifying and classifying BT with this imaging technology. Nuclear imaging, in turn, provides metabolic as well as functional information, which sets it apart from other imaging techniques; among the studies in this context, the RGB-CNN structure shows the best results in diagnosing cardiac diseases. Nuclear imaging offers preliminary information that aids doctors in analyzing organ function and radioactive tracer activity, and since relatively little work has been conducted with this modality, there are many opportunities for future study.

Despite its valuable contributions, this chapter has some limitations. First, it lacks a comprehensive analysis of the imaging modalities, as well as an explanation of how DL structures work.
Instead, it mainly focuses on studies that employed DL models in imaging modalities as part of a diagnostic approach. Nonetheless, the chapter highlights the crucial role that imaging modalities play in identifying illnesses, as they can assist in diagnosing various diseases. To illustrate this, the study examines a few specific, widely used diagnoses to show how the analysis of these modalities with DL approaches can be beneficial.

## References

* [1] Medical imaging. In: Wikipedia. https://en.wikipedia.org/wiki/Medical_imaging. Accessed 29 Jan 2023
* [2] Narin A, Kaya C, Pamuk Z (2021) Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Analysis and Applications 24:1207–1220. doi: 10.1007/s10044-021-00984-y
* [3] Shah FM, Hossain T, Ashraf M, Shishir FS, Al Nasim MA, Kabir MH. Brain tumor segmentation techniques on medical images – a review.
* [4] (2023) X-ray. In: Wikipedia. https://en.wikipedia.org/wiki/X-ray. Accessed 29 Jan 2023
* [5] Nasim MD, Dhali A, Afrin F, Zaman NT, Karim N (2021) The prominence of artificial intelligence in COVID-19. arXiv preprint arXiv:2111.09537
* [6] Center for Devices and Radiological Health. Medical X-ray imaging: FDA. In: U.S. Food and Drug Administration. https://www.fda.gov/radiation-emitting-products/medical-imaging/medical-x-ray-imaging. Accessed 29 Jan 2023
* [7] X-ray: What it is, types, preparation and risks. In: Cleveland Clinic. https://my.clevelandclinic.org/health/diagnostics/21818-x-ray. Accessed 29 Jan 2023
* [8] Team E (2021) The invention of Magnetic Resonance Imaging (MRI). In: IEC. https://www.iec.ch/blog/invention-magnetic-resonance-imaging-mri. Accessed 29 Jan 2023
* [9] (2021) MRI. In: Mayo Clinic. https://www.mayoclinic.org/tests-procedures/mri/about/pac-20384768. Accessed 29 Jan 2023
* [10] (2023) Nuclear medicine. In: Wikipedia. https://en.wikipedia.org/wiki/Nuclear_medicine. Accessed 29 Jan 2023
* [11] Apostolopoulos ID, Mpesiana TA (2020) COVID-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Physical and Engineering Sciences in Medicine 43:635–640. doi: 10.1007/s13246-020-00865-4
* [12] Ozturk T, Talo M, Yildirim EA, et al (2020) Automated detection of COVID-19 cases using deep neural networks with X-ray images. Computers in Biology and Medicine 121:103792. doi: 10.1016/j.compbiomed.2020.103792
* [13] Hemdan EE-D, Shouman MA, Karar ME (2020) COVIDX-Net: A framework of deep learning classifiers to diagnose COVID-19 in X-ray images. arXiv preprint. https://doi.org/10.48550/arXiv.2003.11055. Accessed 29 Jan 2023
* [14] Wang L, Lin ZQ, Wong A (2020) COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Scientific Reports. doi: 10.1038/s41598-020-76550-z
* [15] Kanade PB, Gumaste PP (2015) Brain tumor detection using MRI images. IJIREEICE 146–150. doi: 10.17148/ijireeice.2015.3231
* [16] Saladi S, Karuna Y, Koppu S, et al (2023) Segmentation and analysis emphasizing neonatal MRI brain images using machine learning techniques. Mathematics 11:285. doi: 10.3390/math11020285
* [17] Biswas A, Islam MS (2021) Brain tumor types classification using K-means clustering and ANN approach. 2021 2nd International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST).
doi: 10.1109/icrest51555.2021.9331115
* [18] Bhanothu Y, Kamalakannan A, Rajamanickam G (2020) Detection and classification of brain tumor in MRI images using deep convolutional network. 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS). doi: 10.1109/icaccs48705.2020.9074375
* [19] Toğaçar M, Cömert Z, Ergen B (2020) Classification of brain MRI using hypercolumn technique with convolutional neural network and feature selection method. Expert Systems with Applications 149:113274. doi: 10.1016/j.eswa.2020.113274
* [20] Amin J, Sharif M, Gul N, et al (2020) Brain tumor classification based on DWT fusion of MRI sequences using convolutional neural network. Pattern Recognition Letters 129:115–122. doi: 10.1016/j.patrec.2019.11.016
* [21] Munley MT, Marks LB, Scarfone C, et al (1999) Multimodality nuclear medicine imaging in three-dimensional radiation treatment planning for lung cancer: Challenges and prospects. Lung Cancer 23:105–114. doi: 10.1016/s0169-5002(99)00005-7
* [22] Marta S, Hiromoto K, Alves Togni PH, dos Santos MJ (2013) Clinical applications of nuclear medicine. Medical Imaging in Clinical Practice. doi: 10.5772/53029
* [23] Prieto E, Martí-Climent JM, Domínguez-Prado I, et al (2011) Voxel-based analysis of dual-time-point 18F-FDG PET images for brain tumor identification and delineation. Journal of Nuclear Medicine 52:865–872. doi: 10.2967/jnumed.110.085324
* [24] Shinoura N, Nishijima M, Hara T, et al (1997) Brain tumors: Detection with C-11 choline PET. Radiology 202:497–503. doi: 10.1148/radiology.202.2.9015080
* [25] Wong TZ, van der Westhuizen GJ, Coleman RE (2002) Positron emission tomography imaging of brain tumors. Neuroimaging Clinics of North America 12:615–626. doi: 10.1016/s1052-5149(02)00033-3
* [26] Wong KP, Feng D, Meikle SR, Fulham MJ (2002) Segmentation of dynamic PET images using cluster analysis. IEEE Transactions on Nuclear Science 49:200–207. doi: 10.1109/tns.2002.998752
* [27] Park Y-J, Choi D, Choi JY, Hyun SH (2021) Performance evaluation of a deep learning system for differential diagnosis of lung cancer with conventional CT and FDG PET/CT using transfer learning and metadata. Clinical Nuclear Medicine 46:635–640. doi: 10.1097/rlu.0000000000003661
* [28] Papandrianos N, Papageorgiou E (2021) Automatic diagnosis of coronary artery disease in SPECT myocardial perfusion imaging employing deep learning. Applied Sciences 11:6362. doi: 10.3390/app11146362
* [29] Papandrianos NI, Feleki A, Papageorgiou EI, Martini C (2022) Deep learning-based automated diagnosis for coronary artery disease using SPECT-MPI images. Journal of Clinical Medicine 11:3918. doi: 10.3390/jcm11133918
* [30] ieee8023/covid-chestxray-dataset: An open database of COVID-19 cases with chest X-ray or CT images. In: GitHub. https://github.com/ieee8023/covid-chestxray-dataset. Accessed 30 Jan 2023
* [31] Wang X, Peng Y, Lu L, et al (2017) ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). doi: 10.1109/cvpr.2017.369
* [32] Toğaçar M, Ergen B, Cömert Z (2020) COVID-19 detection using deep learning models to exploit Social Mimic Optimization and structured chest X-ray images using fuzzy color and stacking approaches. Computers in Biology and Medicine 121:103805.
doi: 10.1016/j.compbiomed.2020.103805
* [33] Badža MM, Barjaktarović MČ (2020) Classification of brain tumors from MRI images using a convolutional neural network. Applied Sciences 10:1999. doi: 10.3390/app10061999
* [34] Sahoo L, Sarangi L, Dash BR, Palo HK (2020) Detection and classification of brain tumor using magnetic resonance images. Advances in Electrical Control and Signal Systems 429–441. doi: 10.1007/978-981-15-5262-5_31
* [35] Kalaiselvi T, Padmapriya ST, Sriramakrishnan P, Somasundaram K (2020) Deriving tumor detection models using convolutional neural networks from MRI of human brain scans. International Journal of Information Technology 12:403–408. doi: 10.1007/s41870-020-00438-4
* [36] Menze BH, Jakab A, Bauer S, et al (2015) The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging 34:1993–2024. doi: 10.1109/tmi.2014.2377694
* [37] BraTS 2018 dataset. In: Papers With Code. https://paperswithcode.com/dataset/brats-2018-1. Accessed 30 Jan 2023
* [38] OASIS Brains - Open Access Series of Imaging Studies. https://www.oasis-brains.org/. Accessed 30 Jan 2023
* [39] NITRC: IBSR: Tool/Resource Info. https://www.nitrc.org/projects/ibsr. Accessed 30 Jan 2023
* [40] Find open datasets and machine learning projects. In: Kaggle. https://www.kaggle.com/datasets. Accessed 30 Jan 2023
* [41] Overview. In: Allen Brain Map. https://portal.brain-map.org/explor/overview?gclid=EAIaIQobChMIvIrRtoDJ_AIVwppmAh17RwxyEAAYASAAEgLUNvD_BwE. Accessed 30 Jan 2023
* [42] B A (2020) Tumor classification using block-wise fine tuning and transfer learning of deep neural network and KNN classifier on MR brain images. International Journal of Emerging Trends in Engineering Research 8:574–583. doi: 10.30534/ijeter/2020/48822020
* [43] Hossain T, Shishir FS, Ashraf M, Al Nasim MA, Shah FM (2019) Brain tumor detection using convolutional neural network. 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), IEEE, pp 1–6
* [44] Gopal NN, Karnan M (2010) Diagnose brain tumor through MRI using image processing clustering algorithms such as fuzzy c-means along with intelligent optimization techniques. 2010 IEEE International Conference on Computational Intelligence and Computing Research. doi: 10.1109/iccic.2010.5705890
* [45] (2019) Nuclear medicine. In: Johns Hopkins Medicine. https://www.hopkinsmedicine.org/health/treatment-tests-and-therapies/nuclear-medicine. Accessed 30 Jan 2023
* [46] Nuclear imaging. In: Stanford Health Care (SHC) - Stanford Medical Center. https://stanfordhealthcare.org/medical-tests/n/nuclear-imaging.html. Accessed 30 Jan 2023
* [47] Minen G (2022) SPECT vs PET: Radiology reference article. In: Radiopaedia. https://radiopaedia.org/articles/spect-vs-pet. Accessed 30 Jan 2023
* [48] El-Feky M (2022) Single photon emission computed tomography (SPECT): Radiology reference article. In: Radiopaedia. https://radiopaedia.org/articles/single-photon-emission-computed-tomography-spect. Accessed 30 Jan 2023
* [49] Bickle I (2022) Positron emission tomography: Radiology reference article. In: Radiopaedia. https://radiopaedia.org/articles/positron-emission-tomography?lang=us.
Accessed 30 Jan 2023
* [50] Nayak SR, Nayak DR, Sinha U, et al (2021) Application of deep learning techniques for detection of COVID-19 cases using chest X-ray images: A comprehensive study. Biomedical Signal Processing and Control 64:102365. doi: 10.1016/j.bspc.2020.102365
* [51] Ucar F, Korkmaz D (2020) COVIDiagnosis-Net: Deep Bayes-SqueezeNet based diagnosis of the coronavirus disease 2019 (COVID-19) from X-ray images. Medical Hypotheses 140:109761. doi: 10.1016/j.mehy.2020.109761
* [52] Rahman T (2022) COVID-19 radiography database. In: Kaggle. https://www.kaggle.com/datasets/tawsifurrahman/covid19-radiography-database. Accessed 30 Jan 2023
* [53] Sethy PK, Behera SK (2020) Detection of coronavirus disease (COVID-19) based on deep features. doi: 10.20944/preprints202003.0300.v1
* [54] Farooq M, Hafeez A (2020) COVID-ResNet: A deep learning framework for screening of COVID19 from radiographs. In: Semantic Scholar. https://www.semanticscholar.org/paper/COVID-ResNet%3A-A-Deep-Learning-Framework-for-of-from-Farooq-Hafeez/049ea432acc08b04a3e21c390f62be3d845a2e2c. Accessed 30 Jan 2023
* [55] Al Nasim MA, Al Munem A, Islam M, Palash MAH, Haque MMA, Shah FM (2022) Brain tumor segmentation using enhanced U-Net model with empirical analysis. 2022 25th International Conference on Computer and Information Technology (ICCIT), IEEE, pp 1027–1032
* [56] Amin J, Sharif M, Gul N, et al (2019) Brain tumor detection by using stacked autoencoders in deep learning. Journal of Medical Systems. doi: 10.1007/s10916-019-1483-2
* [57] Ghassemi N, Shoeibi A, Rouhani M (2020) Deep neural network with generative adversarial networks pre-training for brain tumor classification based on MR images. Biomedical Signal Processing and Control 57:101678. doi: 10.1016/j.bspc.2019.101678
* [58] Cheng J, Huang W, Cao S, et al (2015) Enhanced performance of brain tumor classification via tumor region augmentation and partition. PLOS ONE. doi: 10.1371/journal.pone.0140381
* [59] Marcus DS, Fotenos AF, Csernansky JG, et al (2010) Open Access Series of Imaging Studies: Longitudinal MRI data in nondemented and demented older adults. Journal of Cognitive Neuroscience 22:2677–2684. doi: 10.1162/jocn.2009.21407
* [60] Begum SS, Lakshmi DR (2020) Combining optimal wavelet statistical texture and recurrent neural network for tumour detection and classification over MRI. Multimedia Tools and Applications 79:14009–14030. doi: 10.1007/s11042-020-08643-w
* [61] Arsanjani R, Dey D, Khachatryan T, et al (2014) Prediction of revascularization after myocardial perfusion SPECT by machine learning in a large population. Journal of Nuclear Cardiology 22:877–884. doi: 10.1007/s12350-014-0027-x
* [62] Kaplan Berkaya S, Ak Sivrikoz I, Gunal S (2020) Classification models for SPECT myocardial perfusion imaging. Computers in Biology and Medicine 123:103893. doi: 10.1016/j.compbiomed.2020.103893
* [63] Puppala S, Hossain I, Talukder S (2022) Towards federated learning based contraband detection within airport baggage X-rays. 2022 IEEE International Conference on Machine Learning and Applied Network Technologies (ICMLANT), IEEE, pp 1–6
* [64] Talukder S, Puppala S, Hossain I (2022) Federated learning-based contraband detection within airport baggage X-rays. Journal of Computing Sciences in Colleges 38(3):218
* [65] Talukder S, Puppala S, Hossain I (2022)
"A Novel Hierarchical Federated Learning with Self-Regulated Decentralized Clustering." Journal of Computing Sciences in Colleges 38, no. 3 (2022): 222-223. * [66] Hossain, Ismail, Sai Puppala, and Sajedul Talukder. "Collaborative Differentially Private Federated Learning Framework for the Prediction of Diabetic Retinopathy." In 2023 IEEE 2nd International Conference on AI in Cybersecurity (ICAIC), pp. 1-6. IEEE, 2023. * [67] Hossain, Tonmoy and Shishir, Fairuz Shadmani and Ashraf, Mohsena and Al Nasim4&, MD Abdullah and Shah, Faisal Muhammad, "Brain Tumor Detection Using Convolutional Neural Network". * [68] Nasim, MD and Dhali, Aditi and Afrin, Faria and Zaman, Noshin Tasnim and Karim, Nazmul. "The prominence of artificial intelligence in covid-19." 2021
# The TYPHOON stellar population synthesis survey: I. The young stellar population of the Great Barred Spiral NGC 1365

Eva Sextl (Universitäts-Sternwarte, Fakultät für Physik, Ludwig-Maximilians-Universität München, Scheinerstr. 1, 81679 München, Germany)

Rolf-Peter Kudritzki (Universitäts-Sternwarte, Fakultät für Physik, Ludwig-Maximilians-Universität München, Scheinerstr. 1, 81679 München, Germany; Institute for Astronomy, University of Hawaii at Manoa, 2680 Woodlawn Drive, Honolulu, HI 96822, USA)

Andreas Burkert (Universitäts-Sternwarte, Fakultät für Physik, Ludwig-Maximilians-Universität München, Scheinerstr. 1, 81679 München, Germany)

I-Ting Ho (Max-Planck-Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg, Germany)

H. Jabran Zahid (Microsoft Research, 14820 NE 36th St, Redmond, WA 98052, USA)

Mark Seibert (The Observatories, Carnegie Institution for Science, 813 Santa Barbara Street, Pasadena, CA 91106, USA)

Andrew J. Battisti (Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia; ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia)

Barry F. Madore (The Observatories, Carnegie Institution for Science, 813 Santa Barbara Street, Pasadena, CA 91106, USA)

Jeffrey A. Rich (The Observatories, Carnegie Institution for Science, 813 Santa Barbara Street, Pasadena, CA 91106, USA)

(Accepted 31 October 2023)

###### Abstract

We analyze TYPHOON long-slit absorption line spectra of the starburst barred spiral galaxy NGC 1365 obtained with the Progressive Integral Step Method covering an area of 15 square kpc. Applying a population synthesis technique, we determine the spatial distribution of the ages and metallicities of the young and old stellar populations, together with star formation rates, reddening, extinction, and the ratio RV of extinction to reddening. We detect a clear indication of inside-out growth of the stellar disk beyond 3 kpc, characterized by an outward increasing luminosity fraction of the young stellar population, a decreasing average age, and a history of mass growth that finished 2 Gyr later in the outermost disk. The metallicity of the young stellar population is clearly super-solar but decreases towards larger galactocentric radii with a gradient of -0.02 dex/kpc. The metal content of the old population, on the other hand, does not show a gradient and stays constant at a level roughly 0.4 dex lower than that of the young population. In the center of NGC 1365 we find a confined region where the metallicity of the young population drops dramatically and becomes lower than that of the old population. We attribute this to infall of metal-poor gas and, additionally, to interrupted chemical evolution, where star formation is stopped by AGN and supernova feedback and then, after several Gyr, resumes with gas ejected by stellar winds from earlier generations of stars. We provide a simple model calculation in support of the latter.

Barred spiral galaxies (136) — Starburst galaxies (1570) — Stellar populations (1622) — Galaxy chemical evolution (580)

## 1 Introduction

Spectroscopic studies of galaxies with integral field units (IFUs) have become an important tool to investigate the evolution of galaxies. Spatially resolved maps of their chemical composition, stellar ages, star formation rates and gas properties provide unique information about the complex physical processes affecting galaxy evolution (see, for instance, Bittner et al. (2020), Carrillo et al. (2020), Parikh et al. (2021), Emsellem et al.
(2022), Pessa et al. (2023), Westmoquette et al. (2011), Sánchez-Blázquez et al. (2014), Thainá-Batista et al. (2023)). Usually, because of the relatively small field of view of the available IFUs (of the order of one arcminute), this work has mostly been concentrated on galaxies of small angular size or on central regions only. Therefore, given the enormous potential of these spectroscopic stellar population studies, we have started a project with the goal of extending this work to cover large parts of galactic disks together with their central regions. The TYPHOON survey, which uses stepwise combined long-slit spectra of galaxies, seems ideal for this purpose. For the population synthesis analysis of the integrated stellar population absorption line spectra we use the technique developed and described in Sextl et al. (2023).

We start the project with the population synthesis analysis of the starbursting type 2 Seyfert barred spiral NGC 1365 in the Fornax cluster. With an isophotal radius of R25 = 5.61 arcmin, or 29.55 kpc at a distance of 18.1 Mpc (Ho et al., 2017), NGC 1365 is a galaxy of huge dimensions and ideally suited for a long-slit IFU-like investigation. It is well studied with respect to the dynamics of its gas and stellar content (Lindblad (1999), Zánmar Sánchez et al. (2008), Jałocha et al. (2010)). It is an almost face-on galaxy, which avoids line-of-sight confusion and minimizes the effects of interstellar extinction. The central region shows extensive star formation and hosts a low luminosity AGN with two conical outflows (see Venturi et al. (2018) and references therein). In addition, the chemical composition of its ionized ISM has been carefully investigated (Ho et al., 2017; Chen et al., 2023) and first long-slit star formation studies have already been carried out (Sánchez-Blázquez et al., 2011). Since the galaxy is characterized by strong star formation activity, the blue sensitivity of TYPHOON is an advantage and allows for a good characterization of the properties of its young stellar population. This will provide important information extending the work and results obtained recently within the comprehensive ESO VLT PHANGS-MUSE survey (Emsellem et al., 2022; Pessa et al., 2023).

We describe the observations in section 2 and the population synthesis analysis technique in section 3. The results are presented in section 4, followed by a discussion in section 5.

Figure 1: TYPHOON BVI color composite image of NGC 1365. The position of the central AGN is indicated by a small green cross. The bar extends roughly 200" on the sky (Lindblad, 1999), corresponding to $\sim$17.6 kpc projected length. The two dashed ellipses indicate galactocentric distances of 3 and 15 kpc, respectively. East is towards the left and north towards the top.

## 2 Observations

The TYPHOON survey (P.I. B. Madore) uses the Las Campanas du Pont 2.5m telescope Wide Field CCD imaging spectrograph with a custom long slit of 18 arcmin length and 1.65 arcsec width, which progressively scans across the galaxies (Progressive Integral Step Method, PrISM) to construct 3D data cubes of 1.65 × 1.65 arcsec2 spaxels. At a distance of 18.1 Mpc (Jang et al., 2018), 1.65 arcsec are equivalent to 145 pc. The spectral resolution corresponds to a FWHM of 8.2 Å. In the case of NGC 1365 the slit was placed along the north-south direction. More details of the observations are described in Ho et al. (2017) and Chen et al. (2023).
Figure 1 provides a BVI color composite image constructed from the TYPHOON data cube. The figure has already been shown in Ho et al. (2017) but is repeated here for illustration. NGC 1365 is a massive galaxy with a stellar mass of log M∗ = 10.95 (in solar units, Muñoz-Mateos et al. 2013, Leroy et al. 2019), an isophotal radius of 5.61 arcmin (de Vaucouleurs, 1991), an inclination angle of 35.7 degrees and a position angle of 49.5 degrees based on 2MASS photometry (Jarrett et al. 2003, see Ho et al. 2017, Table 1).

For our population synthesis analysis we use the spectral range from 4000 to 7070 Å, where the flux calibrated spectra have the best signal. Compared to the range from 4800 to 7000 Å used in the PHANGS-MUSE study by Pessa et al. (2023), this is an important blueward extension which enables a more accurate characterization of the young stellar population. If needed, we combine the spectra of neighboring individual spaxels by Voronoi binning following Cappellari & Copin (2003) to obtain a minimum signal-to-noise ratio of 30 in the stellar continuum at 5000 Å. However, in order to avoid averaging over too large spatial dimensions, we exclude bins consisting of more than 400 TYPHOON spaxels. We also remove bins containing the contribution of bright foreground stars. This leaves us with spectra of 359 bins distributed over the galaxy. As an example, Figure 2 shows the spectra of two bins and the corresponding stellar population fits. Figures 3 and 5 give an impression of the spatial distribution of the bins analyzed.

Figure 2: TYPHOON spectrum (orange) and the corresponding stellar population fit (dark blue). Note that for the fit ISM emission and absorption lines and broad stellar Balmer lines are masked out. The resulting gaps are shown as straight lines. The top spectrum corresponds to bin 16 at $\Delta\alpha=-1.073$ arcmin and $\Delta\delta=1.292$ arcmin (northern spiral arm) and the bottom spectrum to bin 153 at $\Delta\alpha=-0.192$ arcmin and $\Delta\delta=-0.11$ arcmin (on the southern dust lane in the center). The different physical properties of the stellar populations are discussed in section 4.

Figure 3: Reddening (top) and extinction (bottom) maps of NGC 1365 obtained with our population synthesis fit. The central region is indicated by the green cross. The ellipses indicate galactocentric distances of 3, 10, and 15 kpc.

Figure 4: Radial distribution as a function of deprojected galactocentric distance of reddening E(B-V) (top), extinction AV (middle) and RV (bottom).

Figure 5: Map of the average population age. The dashed ellipses indicate galactocentric distances of 3, 10, and 15 kpc, respectively.

Figure 6: Radial distribution of the average population age (top), the ratio byoung/bold (middle), which is the ratio of the contributions of the young and the old population to the total observed population spectrum, and the ratio of the average star formation rates of the young and the old population (bottom).

Figure 7: Radial distribution of the logarithm of metallicity relative to the sun. The young and old population are plotted in blue and red, respectively. HII-region metallicities based on oxygen abundances (Ho et al., 2017) are shown in yellow. We also show regressions for the young stellar population (solid) and the HII regions (dashed) calculated for galactocentric distances larger than 3.5 kpc.
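The spatial binning step described in Section 2 can be reproduced schematically with the publicly available vorbin package, which implements the Cappellari & Copin (2003) algorithm. This is a minimal sketch: the input file of spaxel positions and continuum signal/noise is hypothetical, and the call follows the package documentation.

```python
import numpy as np
from vorbin.voronoi_2d_binning import voronoi_2d_binning

# Hypothetical input: per-spaxel coordinates (arcsec) and the continuum signal
# and noise near 5000 A; in practice these come from the TYPHOON data cube.
x, y, signal, noise = np.loadtxt("spaxels_5000A.txt", unpack=True)

# Adaptively bin spaxels to the minimum continuum S/N of 30 used in the text
# (the target S/N is the fifth positional argument).
bin_num, x_gen, y_gen, x_bar, y_bar, sn, n_pix, scale = voronoi_2d_binning(
    x, y, signal, noise, 30, plot=False, quiet=True)

# Discard bins with more than 400 spaxels, as described above
counts = np.bincount(bin_num.astype(int))
keep = counts[bin_num.astype(int)] <= 400
```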
## 3 Analysis Method

Our population synthesis method is described in detail in Sextl et al. (2023). We fit the SED and absorption line spectra of the integrated stellar population with a linear combination of the spectra of single stellar populations (SSPs) of different ages and metallicities. This allows us to constrain the average metallicity and age of the population together with the reddening E(B-V), the extinction AV and the ratio RV = AV/E(B-V), which characterizes the steepness of the reddening law. The observed and SSP template spectra are normalized in the range between 5500 and 5550 Å. Therefore, the metallicities and ages obtained in this way are V-band luminosity weighted averages (see Sextl et al. 2023).

The model spectrum of the integrated stellar population $M_{\lambda}$ combines the spectra of single stellar populations (SSPs) $f_{\lambda,i}(t_{i},[Z]_{i})$ with age $t_{i}$ and logarithmic metallicity $[Z]_{i}$ = log Z/Z☉,

$M_{\lambda}=D_{\lambda}(R_{V},E(B-V))\left[\sum_{i=1}^{n_{SSP}}b_{i}f_{\lambda,i}(t_{i},[Z]_{i})+b_{a}f_{\lambda}^{a}\right]$ (1)

where the coefficients $b_{i}$ describe the contribution of burst i with age $t_{i}$ and metallicity $[Z]_{i}$, and $D_{\lambda}(R_{V},E(B-V))$ accounts for the absorption by interstellar dust. $b_{a}$ accounts for the contribution of a featureless AGN continuum $f_{\lambda}^{a}$ with wavelength slope $\lambda^{-0.5}$ (as in Cardoso et al. (2017)). This additional template is only utilized in the fitting process of bins near the center which show broad-line region (BLR) features in their spectra.

We apply the Flexible Stellar Population Synthesis code (FSPS, version 3.2) (Conroy et al., 2009; Conroy & Gunn, 2010) for the calculation of the individual SSP spectra; the MILES library (Sánchez-Blázquez et al., 2006), MESA stellar evolution isochrones (Choi et al., 2016; Dotter, 2016) and a Chabrier (2003) initial mass function are adopted. It is important to note that FSPS utilizes not only one main stellar library option, but adds supplementary stellar spectra from complementary evolutionary phases. Thus, the limited number of stars with T > 9000 K in the empirical MILES library (as can be seen in Martins & Coelho (2007)) is enlarged by hot star spectra from Eldridge et al. (2017) and Wolf-Rayet spectra from Smith et al. (2002). Spectra of AGB, post-AGB and carbon stars are added (Lançon & Wood, 2000; Rauch, 2003; Aringer et al., 2009). The final set of SSP spectra is then adjusted to the TYPHOON spectral resolution. Because of the low TYPHOON spectral resolution we do not consider the line broadening effects of stellar velocity dispersion, which has been measured, for instance, by Bittner et al. (2020) or Pessa et al. (2023). As in Sextl et al. (2023) we account for finite time lengths of the stellar bursts of 0.1, 1.0 and 10.0 Myr, but we find that the SSP spectra with the shortest burst length result in the best spectral fits with the lowest residual $\chi^{2}$ value.

To account for the effects of interstellar dust, we apply the attenuation law by Calzetti et al. (2000), derived empirically in local starburst galaxies, with variable RV (deviating from the Calzetti standard value RV = 4.05). For the choice of SSPs we use the high resolution age grid described in Sextl et al. (2023), which starts at 0.1 Myr with a step to 1.0 Myr and then continues with logarithmic steps $\Delta$log t (in Gyr) alternating between 0.05 and 0.1 dex until 12.59 Gyr. The metallicities start at [Z] = -1.5 and increase in steps of 0.25 dex until [Z] = 0.5, the highest value of the grid. We have, thus, a grid with na = 52 ages and nz = 9 metallicities and a total number of nSSP = 468 SSPs.
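For illustration, an SSP template grid of this kind can be generated with the python-fsps bindings of the FSPS code. This is a schematic sketch, not the exact setup of the paper: the stellar library and isochrone choices are fixed when FSPS is compiled, and the finite burst lengths are omitted.

```python
import numpy as np
import fsps

# SSP mode (sfh=0) with a Chabrier (2003) IMF (imf_type=1); zcontinuous=1
# interpolates the spectra to arbitrary log Z/Zsun values.
sp = fsps.StellarPopulation(zcontinuous=1, sfh=0, imf_type=1)

# Age grid: 0.1 Myr, 1 Myr, then alternating 0.05/0.10 dex steps to 12.59 Gyr;
# this approximately reproduces the 52-age grid described in the text.
log_ages = [-4.0, -3.0]          # log10(t / Gyr)
step = 0.05
while log_ages[-1] < np.log10(12.59):
    log_ages.append(log_ages[-1] + step)
    step = 0.15 - step           # alternate 0.05 and 0.10 dex

metallicities = np.arange(-1.5, 0.75, 0.25)   # [Z] from -1.5 to +0.5

grid = {}
for z in metallicities:
    sp.params["logzsol"] = z
    for lt in log_ages:
        wave, flux = sp.get_spectrum(tage=10.0 ** lt, peraa=True)  # tage in Gyr
        grid[(lt, z)] = flux
```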
In our fitting procedure we correct for the radial velocity shifts of the observed spectra. We also measure the equivalent width of the nebular hydrogen emission line Hβ to account for the nebular emission continuum if needed (see Sextl et al. 2023). The equivalent width is used to estimate the contribution of the nebular continuum at the wavelength of Hβ, and the wavelength dependence is calculated by accounting for nebular hydrogen and helium bound-free, free-free and two-photon emission. The (generally weak) nebular continuum contribution is then subtracted from the observed spectra. In addition, as in Sextl et al. (2023), spectral regions contaminated by ISM emission or absorption lines are not included in the spectral fit of the stellar population.

After these steps, the coefficients $b_{i}$ and $b_{a}$ are determined by adopting a grid of RV and E(B-V) values. For each pair of these quantities we calculate $D_{\lambda}(R_{V},E(B-V))$, use a least square algorithm to directly solve for the coefficients $b_{i}$, calculate the model spectrum and a $\chi^{2}$ value by comparing with the observed spectrum. The minimum of $\chi^{2}$ defines the best fit. Errors are estimated by fitting the observed spectra modified by adding Monte Carlo Gaussian noise with zero mean and a standard deviation corresponding to the flux error at each wavelength point. The uncertainties of the stellar population parameters are then calculated as the standard deviations of their distributions produced by 20 such Monte Carlo realizations.

Following the arguments in Sextl et al. (2023) we use a characteristic age limit of 1.6 Gyr to distinguish between the young and the old population and determine average ages and metallicities of these two populations separately. Bursts with $t_{i}\leq 1.6$ Gyr define the young population and bursts with $t_{i}>1.6$ Gyr the old population, with the corresponding metallicities $[Z]_{y}$, $[Z]_{o}$ and ages $\log(t_{y})$, $\log(t_{o})$ given by

$b_{y}=\sum_{i_{y}}b_{i},\;b_{o}=\sum_{i_{o}}b_{i}$ (2)

and

$[Z]_{y}=\frac{1}{b_{y}}\sum_{i_{y}}b_{i}[Z]_{i}$ (3)

$[Z]_{o}=\frac{1}{b_{o}}\sum_{i_{o}}b_{i}[Z]_{i}$ (4)

$\log(t_{y})=\frac{1}{b_{y}}\sum_{i_{y}}b_{i}\log(t_{i})$ (5)

$\log(t_{o})=\frac{1}{b_{o}}\sum_{i_{o}}b_{i}\log(t_{i}).$ (6)

The average values of the total population, young and old, are then calculated as

$[Z]=b_{y}[Z]_{y}+b_{o}[Z]_{o},\;\log(t)=b_{y}\log(t_{y})+b_{o}\log(t_{o}).$ (7)

As explained in Sextl et al. (2023), metallicities and ages obtained in this way are luminosity weighted averages. The results of our analysis are presented in the next section.

Figure 8: The central region of NGC 1365. Maps of the color excess E(B-V) (top), $R_{V}$ (middle), and the ratio of the average star formation rates of the young and old population (bottom). Isocontours of interstellar reddening E(B-V) are overplotted for E(B-V) = 0.2 and 0.3 mag.

Figure 9: Central maps of the stellar metallicity [Z]y and [Z]o of the young (top) and old (middle) population with the same axis range. The bottom panel shows the difference $\Delta$[Z] = [Z]y - [Z]o. Negative differences are marked in pink tones. Isocontours of interstellar reddening E(B-V) are overplotted as in Figure 8. The mean errors of [Z]y and [Z]o in the FoV are $\sim$0.12 dex and $\sim$0.10 dex, respectively.

Figure 10: Central maps of the ratio by/bo (top), the age ty of the young stellar population (middle), and the stellar mass ratio of the young and old population (bottom). Isocontours of interstellar reddening E(B-V) are overplotted as in Figure 8.
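Schematically, the fit defined by equations (1)-(7) amounts to a grid search over (RV, E(B-V)) with a non-negative least-squares solution for the burst coefficients at each grid point. The sketch below is a simplified illustration of that scheme with a hand-coded Calzetti et al. (2000) curve; the AGN template, the nebular continuum, the line masking, and the normalization at 5500-5550 Å are omitted for brevity.

```python
import numpy as np
from scipy.optimize import nnls

def calzetti_k(wave_aa, rv):
    """Calzetti et al. (2000) attenuation curve k(lambda); wavelength in Angstrom."""
    lam = wave_aa / 1e4  # micron
    return np.where(
        lam < 0.63,
        2.659 * (-2.156 + 1.509 / lam - 0.198 / lam**2 + 0.011 / lam**3) + rv,
        2.659 * (-1.857 + 1.040 / lam) + rv,
    )

def fit_population(wave, obs, err, ssp_flux, ssp_ages, ssp_z,
                   rv_grid, ebv_grid, t_split=1.6):
    """Grid search in (R_V, E(B-V)); NNLS for the burst coefficients b_i (eq. 1)."""
    best = (np.inf, None)
    for rv in rv_grid:
        for ebv in ebv_grid:
            d = 10.0 ** (-0.4 * ebv * calzetti_k(wave, rv))   # D_lambda
            A = (ssp_flux * d).T / err[:, None]               # noise-weighted templates
            b, _ = nnls(A, obs / err)                         # enforce b_i >= 0
            chi2 = np.sum((A @ b - obs / err) ** 2)
            if chi2 < best[0]:
                best = (chi2, (rv, ebv, b))
    rv, ebv, b = best[1]
    young = ssp_ages <= t_split                               # 1.6 Gyr split (eq. 2)
    b_y, b_o = b[young].sum(), b[~young].sum()
    z_y = np.sum(b[young] * ssp_z[young]) / b_y               # eq. (3)
    z_o = np.sum(b[~young] * ssp_z[~young]) / b_o             # eq. (4)
    return rv, ebv, b_y / b_o, z_y, z_o
```

The Monte Carlo error estimate described above would simply call `fit_population` repeatedly on `obs + err * np.random.standard_normal(obs.size)` and take the standard deviation of the returned parameters over the realizations.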
## 4 Results

In the following subsections we present the results of our stellar population fits. We start with the maps of interstellar extinction. We then discuss the distribution of ages and metallicity of the young and old stellar population in the outer disk and the central region of the galaxy.

### 4.1 Reddening and extinction

The spatial distribution of the reddening E(B-V) and the extinction AV by interstellar dust is shown in Figure 3. We find very strong effects in the center and the bar and a moderate enhancement along the spiral arms. For the outer disk outside the spiral arms, reddening is low and close to the Milky Way foreground reddening of E(B-V)MW = 0.018 mag (Schlafly & Finkbeiner, 2011). We provide additional information in Figure 4, which shows the radial distributions as a function of galactocentric distance together with the fit uncertainties. We note again that in our spectral fit of interstellar dust we have included the determination of RV, which is allowed to deviate from "standard" values such as 3.1 for the diffuse Milky Way ISM or 4.05 for the Calzetti law in starburst galaxies. Such deviations are common in star forming galaxies (see discussion and references in Sextl et al. 2023). The bottom panel of Figure 4 shows the radial distribution of the RV values which we encounter, together with their errors. Bins with E(B-V) values < 0.005 mag have been removed, since the slope of the attenuation curve cannot be quantified in this case. We find a mean value around 4.0, but with a wide scatter between 1.5 and 6.7. As comprehensively discussed in Salim & Narayanan (2020), extinction is not the same as attenuation, due to the additional geometric and scattering components in the latter case. This also implies that one should be cautious here with the common interpretation of large values of $R_{V}$ implying larger grain sizes (Battisti et al., 2017; Calzetti et al., 2000). RV can nevertheless be used as an indicator of obscuration (Calzetti, 2001). A more detailed discussion of the central region is given below.

### 4.2 Population ages and inside-out growth

Figure 5 displays the spatial map of the average population ages across the surface of NGC 1365 and Figure 6 (top) provides the corresponding radial distribution. We find a maximum of population ages around a galactocentric distance of 5 kpc and lower ages towards the center and the outer edge. The decrease of age beyond 5 kpc is a clear indication that the outer disk is more and more dominated by the young stellar population. This is confirmed by the radial dependence of the ratio by/bo, which represents the ratio of the luminosity contributions of the young and old population to the total observed population spectrum and which is also given in Figure 6. The contribution of the young population gradually increases when going beyond 3 kpc towards larger galactocentric distances. As shown in Sextl et al. (2023) (equations 11 and 12 and text at the end of section 4), the SSP fit coefficients bi can also be used to estimate the ratio of the star formation rates of the young and old population. This ratio is shown at the bottom of Figure 6; the outer disk shows a gradual increase of this ratio from 3 kpc outwards. We note that the average age of the young population in the different spatial bins of this region is between 0.1 and 1.0 Gyr, with very few exceptions of very young populations only 5 Myr old. The average ages of the old population are between 10 and 12.6 Gyr.
The decrease of average ages and the increased contribution to luminosity and star formation by the young population towards the outer radii is a clear indication of inside-out growth of the stellar disk of NGC 1365. This will be further discussed in Section 5. From 3 kpc inwards the trend with galactocentric distance reverses: average ages decrease and the contribution of the young population becomes stronger again. We will discuss this in the subsections below.

Figure 11: Age of the stellar population in each spectral bin at the time when 80% of the stellar population had formed, as a function of galactocentric radius in kpc. A linear regression curve is also shown. For details see text.

### 4.3 The metallicity of the young and old population of the outer disk

The radial distribution of the metallicities (defined as [Z] = log Z/Z⊙) of the young and old stellar population is plotted in Figure 7. We also add the metallicities of HII regions obtained by Ho et al. (2017). They are based on oxygen abundances as a proxy for metallicity and the Bayesian strong line calibration developed by Ho et al. The Asplund et al. (2009) value of log N(O)/N(H) + 12 = 8.69 has been adopted for the solar oxygen abundance.

In the outer disk beyond 3.5 kpc we find a negative metallicity gradient of the young population of $-0.0207\pm 0.0028$ dex/kpc, in line with the expectations for a stellar disk which has formed inside-out. The stellar gradient is slightly steeper than the one obtained from HII-region emission lines and the value of the metallicity is somewhat lower, but these differences can be attributed to uncertainties of the HII-region strong line calibrations (see discussion in Ho et al. 2017; Bresolin et al. 2016). For the outer old population we note that the low metallicity bins with $[Z]\leq-0.35$ at a distance of 8-11 kpc are all located within the leading arm of the southern spiral right outside the bar. Apart from this anomaly there is no clear indication of an outer radial gradient of the old population. This is a striking result and will be discussed in section 5. The global difference of $\approx$ 0.4 dex in [Z] between the young and old population agrees very well with the difference found for the massive star forming SDSS galaxies studied by Sextl et al. (2023) and with the prediction of chemical evolution models (see, for instance, Kudritzki et al. 2021a, b).

In addition to the outer negative gradient of [Z]y, Figure 7 reveals a steep drop to low stellar metallicities of the young population towards the center of the galaxy. The metallicity of the old population, on the other hand, increases on average. This will be addressed in the next subsection, where we discuss the central region of the galaxy.
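The deprojection and gradient regression of this subsection can be reproduced schematically with the geometry adopted in Section 2 (inclination 35.7 degrees, position angle 49.5 degrees, distance 18.1 Mpc). The sketch below is illustrative: the input file of bin offsets and metallicities is hypothetical, and the sign conventions of the rotation depend on how the offsets are defined.

```python
import numpy as np

INC = np.radians(35.7)    # inclination (Section 2)
PA = np.radians(49.5)     # position angle, from north through east (convention assumed)
SCALE = 18.1e3 * np.pi / (180 * 60)   # kpc per arcmin at D = 18.1 Mpc (~5.27)

def deprojected_radius(d_ra, d_dec):
    """Galactocentric radius in kpc from sky offsets (arcmin) relative to the center."""
    # rotate sky offsets into the galaxy frame, then stretch the minor axis
    x = d_ra * np.sin(PA) + d_dec * np.cos(PA)                # along major axis
    y = (-d_ra * np.cos(PA) + d_dec * np.sin(PA)) / np.cos(INC)
    return SCALE * np.hypot(x, y)

# hypothetical per-bin table: offsets (arcmin) and young-population metallicity
d_ra, d_dec, z_young = np.loadtxt("bins_zy.txt", unpack=True)
r = deprojected_radius(d_ra, d_dec)

sel = r > 3.5                                    # outer disk only, as in the text
slope, intercept = np.polyfit(r[sel], z_young[sel], 1)
print(f"gradient: {slope:.4f} dex/kpc")          # the text reports -0.0207 +/- 0.0028
```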
### 4.4 Central region

As already indicated in Figures 4 and 6, the central region of NGC 1365 is characterized by high interstellar extinction and a strong contribution of the young stellar population to the observed luminosity. This is demonstrated in more detail by Figures 8 and 10, which provide zoomed-in maps of the central region of reddening $E(B-V)$, $R_V$, the average star formation rate ratio, the luminosity contribution ratio of the young and old population, mass ratios, and the age of the young population. However, there is an obvious asymmetry in the sense that the increased luminosity contribution and star formation activity of the young population is confined mostly to the northern area with $E(B-V) \sim$ 0.2 to 0.3 mag, and less is found in the corresponding south-east region.

This has been discussed most recently by Schinnerer et al. (2023) in their analysis of JWST mid-IR imaging and ALMA CO observations. They concluded that the asymmetric gas infall along the bar has already initiated the intense star formation in the northern bar lane, whereas in the dense molecular clouds in the south star formation has yet to come but will start very soon. This is in accordance with the rather similar gas properties in both regions. We note that while the luminosity contribution of the young population is very high in the central region, the mass ratio of the two populations, $M_y/M_o$, is small, with a mean value of 0.03 (see Figure 10 bottom). The reason for this significant difference to $b_y/b_o$ is the higher luminosity of younger (and more massive) stars. Details of the computation of the mass ratio are described in Section 5.

The central map of $R_V$ (Figure 8 middle) does not indicate any kind of spatial correlation. It appears that the $R_V$ values are randomly distributed depending on how different gas clouds contribute to the attenuation (see Salim & Narayanan 2020; Battisti et al. 2017).

Most strikingly, as shown in Figure 9, the metallicity $[Z]_y$ of the young population formed in this central region is very low, sometimes even lower than the metallicity of the old population. Obviously, the surface bins with low metallicity in Figure 7 are all confined to a coherent area in the very center. This population has been formed most recently, as can be seen from Figure 10. This is in accordance with Whitmore et al. (2023), who identified several young star clusters with ages $<10$ Myr in the central regions of NGC 1365. Since our spectral analysis accounts for all stars in a spaxel, not only the massive clusters, we expect slightly higher ages in our analysis. In general, it seems to be of prime importance to provide SSP templates with sufficiently young ages for the fit to capture bins with such young clusters accordingly.

We note that central stellar metallicities of NGC 1365 have already been studied within two surveys using the ESO VLT integral field unit MUSE, the TIMER (Gadotti et al., 2019, 2020; Bittner et al., 2020) and the PHANGS (Emsellem et al., 2022) survey. Both surveys do not distinguish between metallicities of the young and old population and provide only an average over the total population, but they also indicate a drop of total metallicity towards the center (Pessa et al., 2023; Bittner, 2021). From our result we learn that this drop is caused by a strong burst of the formation of a very young population of low metallicity.

It is important to check whether the low metallicities encountered are an artefact of the fitting procedure or are caused by numerical uncertainty. As a first step, we have compared the minimum fit $\chi^{2}$ values as a function of the metallicity obtained. We did not find any hint of a systematic difference. Fits at low metallicity have the same quality as high metallicity ones. In addition, we have applied an independent population synthesis algorithm and repeated the analysis for all TYPHOON bins by using the STARLIGHT package (Cid Fernandes et al., 2005; Asari et al., 2007). Very similar results were found, including the low metallicity bins in the center. We also note that keeping $R_{V}$ fixed to the Calzetti standard value of 4.05 or assuming a Milky Way reddening law with constant $R_{V}=3.1$ (Cardelli et al., 1989) produces a similar peculiar region of low metallicity.
We also experimented with differential interstellar extinction between the young and old population. Following Lo Faro et al. (2017) and Yuan et al. (2018), we introduced a factor $f_{\rm att}$ which increases the reddening of the very young stars ($t \leq 10$ Myr) relative to the older stars by adopting $E(B-V)(t\leq 10\,{\rm Myr}) = E(B-V)/f_{\rm att}$. Applying a value of $f_{\rm att} = 0.5$, we did not encounter substantial changes in the derived stellar properties larger than the fit uncertainties in the outer parts of the galaxy. However, in the central region, where the extinction is highest, the effect of inversion of metallicity between the young and old stellar population is increased. Fits with differential extinction tend to lower the metallicity for the young population even further by on average $\sim$ 0.15 dex, whereas $[Z]_o$ remains the same. Since $f_{\rm att}$ as an additional free parameter is poorly constrained, we refrain from a detailed follow-up at this point but note that differential extinction tends to make the situation in the center more extreme.

Finally, we have checked how well the metal line absorption features are reproduced by our low-metallicity SSP fits. In Figure 13 we repeat the fit of bin 153 (already displayed in Figure 2) but now show the detailed fit of the metal absorption line features in two selected spectral regions. According to our fit, the metallicity of the young population in this bin is $[Z]_y = -0.36$. We conclude that, given the quality of the spectra, the metal lines are on average reproduced well. We take all this as a confirmation of our results.

An obvious question is whether there are HII regions in this confined region of low stellar metallicity and what their metallicities are. Unfortunately, the most recent PHANGS-MUSE (Groves et al., 2023) and TYPHOON (Chen et al. 2023) HII-region and ISM emission line studies do not provide sufficient information allowing for conclusions about the ISM metallicity in this specific region. This is mostly caused by the strong contribution of the central AGN to the ISM ionizing radiation field. One should also keep in mind that in the TYPHOON observation of NGC 1365 one spaxel corresponds to 145 pc, so the whole low-metallicity region covers approximately a circle of 2 kpc in diameter. In kpc-resolution surveys at higher redshifts such relatively small regions would have been easily overlooked or would be dismissed as faulty spaxels. This makes the local TYPHOON sample especially valuable. The physical nature of the low metallicity of the central young stellar population is discussed in the next section.

Figure 12: The mass metallicity relationship of star forming galaxies. Results from spectroscopy of individual red and blue supergiant stars are shown as stars (Bresolin et al. 2022; Urbaneja et al. 2023). These [Z] values refer to galactocentric distances of 0.4 $R_{25}$ for galaxies with a metallicity gradient and to mean values for low mass irregular galaxies without a gradient. The value for NGC 1365 is given by the pink hexagon (see text). The population synthesis results by Sextl et al. (2023) for the average metallicity of the young stellar population of SDSS galaxies are plotted as khaki lines. The green curve is a prediction from galaxy evolution lookback models by Kudritzki et al. (2021a, b).

Figure 13: Spectral fit (black) of the absorption line features of bin 153. The metallicity of the young population is $[Z]_y = -0.36$. Note that the TYPHOON spectrum has a gap from 5524 to 5561 Å. The spectral regions at 5007 and 6300 Å are not included in the $\chi^{2}$ spectral fit due to the contamination by ISM emission.
## 5 Discussion

As concluded in Section 4.2 and Figures 5 and 6, the decrease of ages averaged over the total population and the increase of the ratio $b_y/b_o$ is indicative of inside-out growth of the outer stellar disk of NGC 1365. An alternative way to investigate this is to look at the radial dependence of stellar mass growth. As described in Sextl et al. (2023), the fit coefficients $b_i$ in Equation (1) of the spectral fit of an observed spectral bin can be related to the relative number contribution $N_i$ of stars of isochrone $i$, for which the SSP model spectrum is calculated, and the V-band luminosity $L_i(V)$ of the isochrone via

$b_{i}=N_{i}L_{i}.$ (8)

With $N_i = b_i/L_i$ and the mass $M_i$ of each isochrone, the function

$g(\log(t_{i}))=\dfrac{\sum_{i_{1}=1}^{i}\sum_{i_{2}=1}^{n_{z}}N_{i_{1},i_{2}}M_{i_{1},i_{2}}}{\sum_{i_{1}=1}^{n_{a}}\sum_{i_{2}=1}^{n_{z}}N_{i_{1},i_{2}}M_{i_{1},i_{2}}}$ (9)

describes the accumulative mass growth in the surface area of each spectral bin as a function of time. Note that in Equation (9) the inner sums in the numerator and denominator add up the contributions by different metallicities at the same age, whereas the outer sums accumulate the contribution of different ages.

Applying Equation (9) we can calculate the age (or lookback time) for each spectral bin at which 80 % of the stellar mass has formed. The corresponding radial distribution of ages is shown in Figure 11. We see a clear indication of inside-out growth beyond 2.5 kpc. At the outermost parts of the disk the mass growth was accomplished about 2 Gyr later than in the inner parts. We note that a similar result has been obtained by Pessa et al. (2023).

According to standard chemical evolution models including galactic winds and gas infall (see, for instance, Hou et al. 2000; Chiappini et al. 2001; Kudritzki et al. 2015; Weinberg et al. 2017; Kang et al. 2021), the inside-out growth of the disk is the physical reason for the negative metallicity gradient of the young stellar population and the ISM in star forming disk spiral galaxies, which we also encounter in NGC 1365. The value of $-0.02$ dex/kpc determined in our work seems very small, but NGC 1365 is a huge galaxy and renormalizing with respect to the isophotal radius leads to $-0.59$ dex/$R_{25}$. This value is in good agreement with gradients found from the quantitative spectroscopy of individual supergiant stars in nearby galaxies (see, for example, Kudritzki et al. 2012; Bresolin et al. 2022; Liu et al. 2022 and references therein) and comprehensive studies of metallicity gradients obtained from HII regions such as Ho et al. (2015).

The average outer metallicity of the young population, represented by the value $[Z]_y = 0.15$ at $0.4\,R_{25} = 11.82$ kpc, is in good agreement with the mass-metallicity relationship (MZR) of star forming galaxies. Figure 12 shows a MZR comparison with the results obtained by the analysis of individual supergiant stars (Bresolin et al. 2022; Urbaneja et al., submitted to ApJ) and the SDSS population synthesis study of the young population by Sextl et al. (2023). As already mentioned above, the average difference between the metallicity of the old and the young population of about 0.4 dex agrees with the result obtained by Sextl et al. (2023) in their investigation of the integrated spectra of 250,000 SDSS galaxies.
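The following sketch turns Equations (8) and (9) into code for a single spectral bin and reads off the epoch at which 80 % of the mass was in place; the coefficient, luminosity, and mass grids are random placeholders standing in for the actual SSP fit output.

```python
import numpy as np

t = np.array([0.5, 1.0, 2.0, 4.0, 7.0, 10.0, 13.0])  # formation times (Gyr), ascending
rng = np.random.default_rng(0)
b = rng.random((7, 3))               # fit coefficients b[i1, i2], shape n_a x n_z
L = rng.uniform(0.5, 2.0, (7, 3))    # V-band luminosities of the isochrones
M = rng.uniform(0.5, 2.0, (7, 3))    # isochrone masses

N = b / L                            # Eq. (8): N_i = b_i / L_i
m = (N * M).sum(axis=1)              # inner sum of Eq. (9): over metallicities
g = np.cumsum(m) / m.sum()           # outer sum: accumulative mass growth g(log t_i)

t80 = np.interp(0.8, g, t)           # epoch at which 80% of the mass has formed
print(f"80% of the stellar mass formed by t = {t80:.1f} Gyr")
```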
As for the metallicity gradients, the fact that we do not encounter a gradient for the outer old population is very interesting. When comparing gradients as a function of population age, galaxy chemical evolution models yield different results depending on the assumptions made. Hou et al. (2000) find that gradients of the older population are steeper, whereas Chiappini et al. (2001) find the opposite. Roškar et al. (2008) point out that stellar migration will tend to flatten abundance gradients with time and predict younger populations to have a steeper gradient. It seems that population synthesis analysis might provide a powerful tool to investigate this issue and to provide additional constraints on galaxy evolution in this way. This will be an important part of our continued TYPHOON population synthesis survey.

A striking result is the low metallicity of the young population in a very confined region in the center (Figure 9). A straightforward first explanation is strong inflow of metal poor gas which has influenced the most recent star formation. This would be in line with the most recent work by Schinnerer et al. (2023) and the results by Zánmar Sánchez et al. (2008), who detected gas infall along the bar. Tabatabaei et al. (2013) gauged, in their submillimeter analysis, a relatively short timescale of only 300 Myr on which gas from the bar loses angular momentum and flows into the center. However, it is not clear whether this infalling gas is metal poor and whether the amount of gas accumulated has been sufficient to form a new metal poor population. In principle, galaxy evolution models with infall and outflow can produce a younger population with a metallicity lower than the older population (Spitoni, 2015), but the differences are not as extreme as we encounter in NGC 1365. A crucial test would be a metallicity measurement of the infalling gas.

We propose an additional scenario as an explanation for the low metallicity of the central young stellar population, namely interrupted chemical evolution. In this scenario chemical evolution in the central region started in the usual way with heavy star formation and building up metals. Then, at a time when ISM metallicity had already reached high values above solar, feedback from the central AGN together with stellar feedback from supernovae and winds created a strong central galactic wind and ejected the central gas, so that star formation stopped for a long time. For a massive galaxy such as NGC 1365 an AGN can stay intermittently active for 1-5 Gyr and inhibit star formation on a scale of 1 Gyr (Stasińska et al., 2015). In the meantime, the most massive stars, which have already formed, continue to recycle metal enriched gas to the ISM through supernovae and winds, but due to the energetic AGN and stellar activity this matter continues to be expelled. Then, after several Gyr, this activity drops and a new reservoir of metal-poorer ISM gas is built up again from stellar winds and mass-loss of stars of low mass of the older population, which formed earlier. This central reservoir eventually becomes dense enough to start star formation again and the newly formed stars show the low metallicity composition of the older population.

Figure 14: A simple chemical evolution model as example for the central region of NGC 1365. The metallicity of the ISM present in the center before the end of star formation at 4 Gyr is given in blue and the V-band luminosity averaged metallicity of the stars in orange. The abscissa is the evolution time in Gyr. The gray shaded area marks the time when the central ISM gas is ejected and star formation is quenched. The green symbols show the gas metallicity provided by stellar winds and supernovae of the old population after the central activity ejecting gas has stopped. For more details see text.
Figure 14 shows the result of a simple Monte Carlo simulation, where we apply a closed-box chemical evolution model. We assume that stars form out of pristine gas with zero metallicity in an intense star formation process following a standard initial mass function (Kroupa, 2001). Metal-enriched gas is recycled to the ISM through supernovae and stellar winds. We distinguish between stars with masses higher or lower than $9\,M_{\odot}$. For the former, all matter except a remnant of $3\,M_{\odot}$ is returned to the ISM and the mass of the convective core (Maeder, 1987; Ekström et al., 2012) is converted into metals by core-collapse supernovae. For the latter, we adopt a white dwarf mass using an initial-final mass relationship (Cummings et al., 2018) and recycle the remaining matter. For the metals produced by these objects we follow Kobayashi & Nomoto (2009) and assume that 7 percent of the stars in the mass range between 3 and 9 $M_{\odot}$ increase the mass of the final WD through binary accretion until the Chandrasekhar limit. A subsequent type Ia supernova explosion returns the white dwarf mass as metals. For the stellar lifetimes of both groups we use the main sequence fit values given by Ekström et al. (2012). In the case of SNIa explosions we do not account for the accretion time, which is short compared with the stellar lifetime (Kobayashi & Nomoto, 2009).

As a result of our calculation, the ISM metallicity (shown in blue) increases quickly. Young, newly formed stars will resemble the ISM metallicity, but the metallicity of older stars will be lower. We show the V-band luminosity averaged metallicities of the stars in orange. (As mentioned above, the metallicities and ages we derive in our population synthesis approach are luminosity-weighted quantities.) After 4 Gyr star formation comes to a halt because of the energetic processes in the center, which eject all star forming gas, and the evolution stops. After all the ISM gas is quickly expelled from the center, the existing massive stars still recycle gas to the ISM. However, because of the AGN activity the new ISM gas still continues to be expelled and the central region remains gas free. Then, 8 Gyr after the whole chemical evolution started, the central activity stops and a new reservoir of ISM gas forms because of stellar winds and mass-loss from the existing low mass stars of different ages and low metallicity. The ISM metallicity of this new reservoir of gas is shown in green. It is slightly lower than the luminosity weighted metallicities of the old stars, which agrees qualitatively with Figure 9. Young stars forming out of this gas will resemble this metallicity. This explains how the formation of a lower metallicity young stellar population in the center is possible without invoking the infall of metal poor gas from outside.

An obvious question is whether the dying stars of the old population can provide enough gas for a new young population. As a very simple estimate we consider stars born at 3 Gyr in Figure 14. In our scenario the gas provided by stars with a main sequence lifetime shorter than 5 Gyr (or more massive than 1.18 $M_{\odot}$) is expelled by the central AGN activity. However, the gas recycled by stars less massive than 1.18 $M_{\odot}$ but with a higher mass than 0.958 $M_{\odot}$ (corresponding to a main sequence lifetime of 9 Gyr) can settle into the central region, because AGN activity has now stopped (we adopt $\tau_{MS}=8(M/M_{\odot})^{-2.8}$ Gyr for the low mass main sequence lifetime). Using the IMF we calculate the mass of these dying stars in relation to the mass of all remaining stars with masses lower than 0.958 $M_{\odot}$ and obtain a ratio of 0.124. Assuming that half of this mass fraction is recycled as gas, we obtain a ratio of gas mass to the mass of old stars of 0.06, which is a factor of two higher than the observed average mass ratio of the young to the old population (see Figure 10 bottom). A starburst of very high star formation efficiency (Fisher et al., 2022) could, thus, explain the observations within the framework of our model.
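A minimal sketch of this gas-budget estimate follows: the main-sequence lifetime relation and the mass limits of 1.18 and 0.958 $M_{\odot}$ come directly from the text above, while the two-part Kroupa (2001) IMF with a lower limit of 0.08 $M_{\odot}$ is our assumption, so the resulting ratio only reproduces the quoted 0.124 approximately.

```python
from scipy.integrate import quad

# tau_MS = 8 (M/Msun)^-2.8 Gyr, inverted for the mass dying at a given age
def mass_from_lifetime(tau_gyr):
    return (tau_gyr / 8.0) ** (-1.0 / 2.8)

m_hi = mass_from_lifetime(5.0)  # ~1.18 Msun: gas expelled while the AGN is active
m_lo = mass_from_lifetime(9.0)  # ~0.958 Msun: dies after the activity stops

# two-part Kroupa (2001) IMF, dN/dm ~ m^-1.3 below 0.5 Msun and m^-2.3 above,
# continuous at 0.5 Msun (assumed form and mass limits)
def imf(m):
    return m ** -1.3 if m < 0.5 else 0.5 * m ** -2.3

def mass_in(a, b):
    return quad(lambda m: m * imf(m), a, b)[0]

ratio = mass_in(m_lo, m_hi) / mass_in(0.08, m_lo)
print(f"mass limits: {m_lo:.3f} - {m_hi:.3f} Msun")
print(f"dying / remaining stellar mass ~ {ratio:.2f}")   # text quotes 0.124
print(f"recycled gas / old stars ~ {0.5 * ratio:.3f}")   # half recycled as gas
```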
The goal of this simplified calculation is only to demonstrate that such an additional scenario can work. A better match of the observations could certainly be obtained by modifying parameters such as star formation rates, evolution, yields, etc., and, most importantly, by including infall of metal poor gas, but that is beyond the scope of this paper. We note that the idea of recycled gas accumulating from an aging stellar population is not new; see, for instance, Ciotti & Ostriker (2007).

The situation in the center of NGC 1365 does not seem to be unique. The Milky Way shows a metallicity of the young stellar population which increases from [Z] $\sim$ -0.35 at 15 kpc galactocentric distance to $\sim$ +0.3 at 4 kpc (Genovali et al., 2014; da Silva et al., 2022) but suddenly drops to solar or even subsolar towards the Galactic center (Najarro et al., 2004, 2009; Davies et al., 2009a, b; Martins et al., 2008; Origlia et al., 2013; Do et al., 2015). Figure 5 in Genovali et al. (2014) gives an excellent impression of the very similar situation in the Milky Way. Additionally, recent ESO VLT observations revealed an intense star formation period in the Galactic center $\sim$ 0.6-1 Gyr ago after a long quiescent phase of approximately 6 Gyr (Nogueras-Lara et al., 2020).

In summary, we conclude that detailed spatially resolved population synthesis studies of young and old populations in star forming galaxies are an extremely powerful tool to investigate the evolution of galaxies. We will continue this work within the framework of our TYPHOON survey.

Acknowledgements. This work was initiated and supported by the Munich Excellence Cluster Origins funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC-2094 390783311.

## References

* Aringer et al. (2009) Aringer, B., Girardi, L., Nowotny, W., Marigo, P., & Lederer, M. T. 2009, A&A, 503, 913, doi: 10.1051/0004-6361/200911703
* Asari et al. (2007) Asari, N. V., Cid Fernandes, R., Stasińska, G., et al. 2007, MNRAS, 381, 263, doi: 10.1111/j.1365-2966.2007.12255.x
* Asplund et al. (2009) Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481, doi: 10.1146/annurev.astro.46.060407.145222
* Battisti et al. (2017) Battisti, A. J., Calzetti, D., & Chary, R. R. 2017, ApJ, 840, 109, doi: 10.3847/1538-4357/aa6fb2
* Bittner (2021) Bittner, A. 2021, PhD thesis, Ludwig-Maximilians University of Munich, Germany
* Bittner et al. (2020) Bittner, A., Sánchez-Blázquez, P., Gadotti, D. A., et al. 2020, A&A, 643, A65, doi: 10.1051/0004-6361/202038450
* Bresolin et al. (2022) Bresolin, F., Kudritzki, R.-P., & Urbaneja, M. A. 2022, ApJ, 940, 32, doi: 10.3847/1538-4357/ac9584
* Bresolin et al. (2016) Bresolin, F., Kudritzki, R.-P., Urbaneja, M. A., et al. 2016, ApJ, 830, 64, doi: 10.3847/0004-637X/830/2/64
* Calzetti (2001) Calzetti, D. 2001, PASP, 113, 1449, doi: 10.1086/324269
* Calzetti et al. (2000) Calzetti, D., Armus, L., Bohlin, R. C., et al. 2000, ApJ, 533, 682, doi: 10.1086/308692
* Cappellari & Copin (2003) Cappellari, M., & Copin, Y. 2003, MNRAS, 342, 345, doi: 10.1046/j.1365-8711.2003.06541.x
* Cardelli et al. (1989) Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245, doi: 10.1086/167900
* Cardoso et al. (2017) Cardoso, L. S. M., Gomes, J. M., & Papaderos, P. 2017, A&A, 604, A99, doi: 10.1051/0004-6361/201630378
* Carrillo et al. (2020) Carrillo, A., Jogee, S., Drory, N., et al. 2020, MNRAS, 493, 4094, doi: 10.1093/mnras/staa397
* Chabrier (2003) Chabrier, G. 2003, PASP, 115, 763, doi: 10.1086/376392
* Chen et al. (2023) Chen, Q.-H., Grasha, K., Battisti, A. J., et al. 2023, MNRAS, 519, 4801, doi: 10.1093/mnras/stac3790
* Chiappini et al. (2001) Chiappini, C., Matteucci, F., & Romano, D. 2001, ApJ, 554, 1044, doi: 10.1086/321427
* Choi et al. (2016) Choi, J., Dotter, A., Conroy, C., et al. 2016, ApJ, 823, 102, doi: 10.3847/0004-637X/823/2/102
* Cid Fernandes et al. (2005) Cid Fernandes, R., Mateus, A., Sodré, L., Stasińska, G., & Gomes, J. M. 2005, MNRAS, 358, 363, doi: 10.1111/j.1365-2966.2005.08752.x
* Ciotti & Ostriker (2007) Ciotti, L., & Ostriker, J. P. 2007, ApJ, 665, 1038, doi: 10.1086/519833
* Conroy & Gunn (2010) Conroy, C., & Gunn, J. E. 2010, ApJ, 712, 833, doi: 10.1088/0004-637X/712/2/833
* Conroy et al. (2009) Conroy, C., Gunn, J. E., & White, M. 2009, ApJ, 699, 486, doi: 10.1088/0004-637X/699/1/486
* Cummings et al. (2018) Cummings, J. D., Kalirai, J. S., Tremblay, P. E., Ramirez-Ruiz, E., & Choi, J. 2018, ApJ, 866, 21, doi: 10.3847/1538-4357/aadfd6
* da Silva et al. (2022) da Silva, R., Crestani, J., Bono, G., et al. 2022, A&A, 661, A104, doi: 10.1051/0004-6361/202142957
* Davies et al. (2009a) Davies, B., Origlia, L., Kudritzki, R.-P., et al. 2009a, ApJ, 694, 46, doi: 10.1088/0004-637X/694/1/46
* Davies et al. (2009b) Davies, B., Origlia, L., Kudritzki, R.-P., et al. 2009b, ApJ, 696, 2014, doi: 10.1088/0004-637X/696/2/2014
* de Vaucouleurs (1991) de Vaucouleurs, G. 1991, Science, 254, 1667
* Do et al. (2015) Do, T., Kerzendorf, W., Winsor, N., et al. 2015, ApJ, 809, 143, doi: 10.1088/0004-637X/809/2/143
* Dotter (2016) Dotter, A. 2016, ApJS, 222, 8, doi: 10.3847/0067-0049/222/1/8
* Ekström et al. (2012) Ekström, S., Georgy, C., Eggenberger, P., et al. 2012, A&A, 537, A146, doi: 10.1051/0004-6361/201117751
* Eldridge et al. (2017) Eldridge, J. J., Stanway, E. R., Xiao, L., et al. 2017, PASA, 34, e058, doi: 10.1017/pasa.2017.51
* Emsellem et al. (2022) Emsellem, E., Schinnerer, E., Santoro, F., et al. 2022, A&A, 659, A191, doi: 10.1051/0004-6361/202141727
* Fisher et al. (2022) Fisher, D. B., Bolatto, A. D., Glazebrook, K., et al. 2022, ApJ, 928, 169, doi: 10.3847/1538-4357/ac51c8
* Gadotti et al. (2019) Gadotti, D. A., Sánchez-Blázquez, P., Falcón-Barroso, J., et al. 2019, MNRAS, 482, 506, doi: 10.1093/mnras/sty2666
* Gadotti et al. (2020) Gadotti, D. A., Bittner, A., Falcón-Barroso, J., et al. 2020, A&A, 643, A14, doi: 10.1051/0004-6361/202038448
* Genovali et al. (2014) Genovali, K., Lemasle, B., Bono, G., et al. 2014, A&A, 566, A37, doi: 10.1051/0004-6361/201323198
* Groves et al. (2023) Groves, B., Kreckel, K., Santoro, F., et al. 2023, MNRAS, 520, 4902, doi: 10.1093/mnras/stad114
* Ho et al. (2015) Ho, I. T., Kudritzki, R.-P., Kewley, L. J., et al. 2015, MNRAS, 448, 2030, doi: 10.1093/mnras/stv067
* Ho et al. (2017) Ho, I. T., Seibert, M., Meidt, S. E., et al. 2017, ApJ, 846, 39, doi: 10.3847/1538-4357/aa8460
* Hou et al. (2000) Hou, J. L., Prantzos, N., & Boissier, S. 2000, A&A, 362, 921, doi: 10.48550/arXiv.astro-ph/0007164
* Jałocha et al. (2010) Jałocha, J., Bratek, Ł., Kutschera, M., & Skindzier, P. 2010, MNRAS, 406, 2805, doi: 10.1111/j.1365-2966.2010.16887.x
* Jang et al. (2018) Jang, I. S., Hatt, D., Beaton, R. L., et al. 2018, ApJ, 852, 60, doi: 10.3847/1538-4357/aa9d92
* Jarrett et al. (2003) Jarrett, T. H., Chester, T., Cutri, R., Schneider, S. E., & Huchra, J. P. 2003, AJ, 125, 525, doi: 10.1086/345794
* Kang et al. (2021) Kang, X., Chang, R., Kudritzki, R.-P., Gong, X., & Zhang, F. 2021, MNRAS, 502, 1967, doi: 10.1093/mnras/stab147
* Kobayashi & Nomoto (2009) Kobayashi, C., & Nomoto, K. 2009, ApJ, 707, 1466, doi: 10.1088/0004-637X/707/2/1466
* Kroupa (2001) Kroupa, P. 2001, MNRAS, 322, 231, doi: 10.1046/j.1365-8711.2001.04022.x
* Kudritzki et al. (2015) Kudritzki, R.-P., Ho, I. T., Schruba, A., et al. 2015, MNRAS, 450, 342, doi: 10.1093/mnras/stv522
* Kudritzki et al. (2021a) Kudritzki, R.-P., Teklu, A. F., Schulze, F., et al. 2021a, ApJ, 922, 274, doi: 10.3847/1538-4357/ac32cf
* Kudritzki et al. (2021b) Kudritzki, R.-P., Teklu, A. F., Schulze, F., et al. 2021b, ApJ, 922, 274, doi: 10.3847/1538-4357/ac32cf
* Kudritzki et al. (2012) Kudritzki, R.-P., Urbaneja, M. A., Gazak, Z., et al. 2012, ApJ, 747, 15, doi: 10.1088/0004-637X/747/1/15
* Lançon & Wood (2000) Lançon, A., & Wood, P. R. 2000, A&AS, 146, 217, doi: 10.1051/aas:2000269
* Leroy et al. (2019) Leroy, A. K., Sandstrom, K. M., Lang, D., et al. 2019, ApJS, 244, 24, doi: 10.3847/1538-4365/ab3925
* Lindblad (1999) Lindblad, P. O. 1999, A&A Rev., 9, 221, doi: 10.1007/s001590050018
* Liu et al. (2022) Liu, C., Kudritzki, R.-P., Zhao, G., et al. 2022, ApJ, 932, 29, doi: 10.3847/1538-4357/ac69cc
* Lo Faro et al. (2017) Lo Faro, B., Buat, V., Roehlly, Y., et al. 2017, MNRAS, 472, 1372, doi: 10.1093/mnras/stx1901
* Maeder (1987) Maeder, A. 1987, A&A, 173, 247
* Martins et al. (2008) Martins, F., Hillier, D. J., Paumard, T., et al. 2008, A&A, 478, 219, doi: 10.1051/0004-6361:20078469
* Martins & Coelho (2007) Martins, L. P., & Coelho, P. 2007, MNRAS, 381, 1329, doi: 10.1111/j.1365-2966.2007.11954.x
* Muñoz-Mateos et al. (2013) Muñoz-Mateos, J. C., Sheth, K., Gil de Paz, A., et al. 2013, ApJ, 771, 59, doi: 10.1088/0004-637X/771/1/59
* Najarro et al. (2009) Najarro, F., Figer, D. F., Hillier, D. J., Geballe, T. R., & Kudritzki, R. P. 2009, ApJ, 691, 1816, doi: 10.1088/0004-637X/691/2/1816
* Najarro et al. (2004) Najarro, F., Figer, D. F., Hillier, D. J., & Kudritzki, R. P. 2004, ApJ, 611, L105, doi: 10.1086/423955
* Nogueras-Lara et al. (2020) Nogueras-Lara, F., Schödel, R., Gallego-Calvente, A. T., et al. 2020, Nature Astronomy, 4, 377, doi: 10.1038/s41550-019-0967-9
* Origlia et al. (2013) Origlia, L., Oliva, E., Maiolino, R., et al. 2013, A&A, 560, A46, doi: 10.1051/0004-6361/201322586
* Parikh et al. (2021) Parikh, T., Thomas, D., Maraston, C., et al. 2021, MNRAS, 502, 5508, doi: 10.1093/mnras/stab449
* Pessa et al. (2023) Pessa, I., Schinnerer, E., Sanchez-Blazquez, P., et al. 2023, A&A, 673, A147, doi: 10.1051/0004-6361/202245673
* Rauch (2003) Rauch, T. 2003, A&A, 403, 709, doi: 10.1051/0004-6361:20030412
* Roškar et al. (2008) Roškar, R., Debattista, V. P., Quinn, T. R., Stinson, G. S., & Wadsley, J. 2008, ApJ, 684, L79, doi: 10.1086/592231
* Salim & Narayanan (2020) Salim, S., & Narayanan, D. 2020, ARA&A, 58, 529, doi: 10.1146/annurev-astro-032620-021933
* Sánchez-Blázquez et al. (2006) Sánchez-Blázquez, P., Peletier, R. F., Jiménez-Vicente, J., et al. 2006, MNRAS, 371, 703, doi: 10.1111/j.1365-2966.2006.10699.x
* Sánchez-Blázquez et al. (2014) Sánchez-Blázquez, P., Rosales-Ortega, F., Méndez-Abreu, J., et al. 2014, A&A, 570, A6
* Schinnerer et al. (2023) Schinnerer, E., Emsellem, E., Henshaw, J. D., et al. 2023, ApJ, 944, L15, doi: 10.3847/2041-8213/acac9e
* Schlafly & Finkbeiner (2011) Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103, doi: 10.1088/0004-637X/737/2/103
* Sextl et al. (2023) Sextl, E., Kudritzki, R.-P., Zahid, H. J., & Ho, I. T. 2023, ApJ, 949, 60, doi: 10.3847/1538-4357/acc579
* Smith et al. (2002) Smith, L. J., Norris, R. P. F., & Crowther, P. A. 2002, MNRAS, 337, 1309, doi: 10.1046/j.1365-8711.2002.06042.x
* Spitoni (2015) Spitoni, E. 2015, MNRAS, 451, 1090, doi: 10.1093/mnras/stv1008
* Stasińska et al. (2015) Stasińska, G., Costa-Duarte, M. V., Vale Asari, N., Cid Fernandes, R., & Sodré, L. 2015, MNRAS, 449, 559, doi: 10.1093/mnras/stv078
* Sánchez-Blázquez et al. (2011) Sánchez-Blázquez, P., Ocvirk, P., Gibson, B. K., Pérez, I., & Peletier, R. F. 2011, MNRAS, 415, 709, doi: 10.1111/j.1365-2966.2011.18749.x
* Tabatabaei et al. (2013) Tabatabaei, F. S., Weiß, A., Combes, F., et al. 2013, A&A, 555, A128, doi: 10.1051/0004-6361/201321487
* Thainá-Batista et al. (2023) Thainá-Batista, J., Cid Fernandes, R., Herpich, F. R., et al. 2023, MNRAS, 526, 1874, doi: 10.1093/mnras/stad2698
* Venturi et al. (2018) Venturi, G., Nardini, E., Marconi, A., et al. 2018, A&A, 619, A74, doi: 10.1051/0004-6361/201833668
* Weinberg et al. (2017) Weinberg, D. H., Andrews, B. H., & Freudenburg, J. 2017, ApJ, 837, 183, doi: 10.3847/1538-4357/837/2/183
* Westmoquette et al. (2011) Westmoquette, M. S., Smith, L. J., & Gallagher, J. S., III 2011, MNRAS, 414, 3719, doi: 10.1111/j.1365-2966.2011.18675.x
* Whitmore et al. (2023) Whitmore, B. C., Chandar, R., Rodríguez, M. J., et al. 2023, ApJ, 944, L14, doi: 10.3847/2041-8213/acae94
* Yuan et al. (2018) Yuan, F.-T., Argudo-Fernández, M., Shen, S., et al. 2018, A&A, 613, A13, doi: 10.1051/0004-6361/201731865
* Zánmar Sánchez et al. (2008) Zánmar Sánchez, R., Sellwood, J. A., Weiner, B. J., & Williams, T. B. 2008, ApJ, 674, 797, doi: 10.1086/524940
# Video Instance Shadow Detection

Zhenghao Xing1,∗, Tianyu Wang1,∗, Xiaowei Hu2,†, Haoran Wu1, Chi-Wing Fu1, and Pheng-Ann Heng1

1 The Chinese University of Hong Kong; 2 Shanghai Artificial Intelligence Laboratory

∗ Joint first authors. † Corresponding author:<EMAIL_ADDRESS>

###### Abstract

Video instance shadow detection aims to simultaneously detect, segment, associate, and track paired shadow-object associations in videos. This work has three key contributions to the task. First, we design SSIS-Track, a new framework to extract shadow-object associations in videos with paired tracking and without category specification; especially, we strive to maintain paired tracking even when the objects/shadows are temporarily occluded for several frames. Second, we leverage both labeled images and unlabeled videos, and explore temporal coherence by augmenting the tracking ability via an association cycle consistency loss to optimize SSIS-Track's performance. Last, we build SOBA-VID, a new dataset with 232 unlabeled videos of 5,863 frames for training and 60 labeled videos of 1,182 frames for testing. Experimental results show that SSIS-Track surpasses baselines built from SOTA video tracking and instance-shadow-detection methods by a large margin. In the end, we showcase several video-level applications.

## 1 Introduction

Instance shadow detection [29, 27, 28] aims to find the shadows together with the associated objects in images to support photo-editing applications and light direction estimation. With the explosive growth of videos on the Internet, the demand for video editing is enormous. However, existing approaches [29, 27, 28] provide mainly image-level editing, making it hard to allow easy-to-use and good-quality video editing due to the lack of considering the temporal dimension and also the shadow-object tracking.

Figure 1: Visual comparison of results produced by SSIS-Track (ours) and by SSIS+Mask2Former. Each row shows three frames from a video clip. Detected shadows and objects that are associated are marked in the same color. Please zoom in to see details.

Video instance shadow detection (VISD) is a new task, aiming not only to find shadows together with their associated objects in video frames but also to track each shadow, each object, and each shadow-object association in the video. VISD has great potential to benefit many applications, e.g., (i) instance copy-paste, in which we copy a shadow-object pair extracted from a video and paste it together into another (or the same) video; the teaser figure shows an example; (ii) the vision system of autonomous driving, for which we can enhance its object awareness by considering the object shadows to help avoid blind-spot accidents when the object is occluded; and (iii) video in-painting [15, 18], which obliterates the entire object by simultaneously accessing the object mask with the associated shadow mask.

Achieving VISD is challenging, as we have to detect and track individual shadows and objects, as well as their associations, over time. Intuitively, we may combine methods for video instance segmentation and instance shadow detection to achieve the task. However, doing so has several critical drawbacks. First, existing approaches for video instance segmentation are applicable only to instances of specific categories, yet VISD aims to detect any foreground object with shadows. Second, methods for video instance segmentation require rich features from the objects for tracking, yet shadow instances have limited features.
Last, existing methods for instance shadow detection only focus on paired shadow-object associations, thus failing to track unpaired shadow/object instances, yet shadows and objects in a video may not be visible together all the time. If one of them is occluded or moves out of view for a while, existing methods would lose track of them.

To address the drawbacks, we design SSIS-Track, a new end-to-end framework for VISD that (i) detects object instances of any category with their associated shadow instances, (ii) tracks object/shadow instances and their associations, and (iii) retrieves unpaired instances by using the detected instances from adjacent video frames. To achieve instance paired tracking, we build SSIS-Track by designing a tracking branch over SSIS [27, 28]. First, we leverage the labeled images in SOBA [29] to train the tracking branch by contrastive learning [5, 7], such that it can learn to generate distinguishable tracking embeddings of shadow/object instances. Specifically, we encourage the distance between the tracking embeddings to be small when they are from the same instance. Also, we collect unlabeled videos in SOBA-VID to refine the performance of SSIS-Track by exploring the temporal information in a self-supervised manner [10, 14, 30]. Further, we adopt a tracking-by-detection approach to match instances frame by frame to enable instance paired tracking. Last, we design a new retrieving mechanism to handle unpaired instances that typically happen when an object/shadow is temporarily occluded or out of view in the video. Fig. 1 shows examples, where our SSIS-Track successfully detects various paired shadow-object associations and retrieves an unpaired shadow instance in videos.

To train SSIS-Track with less reliance on video instance annotations, we compile SOBA-VID, a new dataset with a set of unlabeled videos for training and another set of labeled videos for testing. Besides, for quantitative performance evaluation on the VISD task, we formulate the SOAP-VID metric based on SOAP [29] by considering Intersection-over-Union (IoU) computation in a spatio-temporal manner [34]. In the end, we perform a series of experiments, demonstrating SSIS-Track's effectiveness and applicability in video editing.

## 2 Related Work

#### Instance shadow detection

is a variant of instance segmentation, additionally considering the detection of shadows and their associations with objects. LISA [29] is the first method for the task, with a two-stage end-to-end framework to predict bounding boxes and masks of shadow/object instances, as well as their associations, in images. It also includes a strategy to pair shadow and object instances. Recently, SSIS [27, 28] approaches the task with a single-stage fully convolutional network architecture and a bidirectional relation learning module to learn the relation between the shadow and object instances. Video instance shadow detection (VISD) extends the instance shadow detection task to the video domain. Its goal is to achieve instance shadow detection and also instance paired tracking in videos. In this work, SSIS serves as a baseline of our SSIS-Track.

#### Video instance segmentation.

The major difference between video instance segmentation (VIS) and video instance shadow detection (VISD) is that VIS cannot effectively track shadow instances with the associated objects. Mainstream approaches for the VIS task are either online or offline.
Online methods, also known as tracking-by-detection, detect instances in each frame and then match the detected instances in a frame-by-frame manner. MaskTrack R-CNN [34] is the first online method for VIS, with a tracking head added upon Mask R-CNN [8] to predict tracking embeddings of all the detected instances and then use them for instance matching in videos. Similar methods include VPSNet [12] and TrackR-CNN [26]. Other methods, e.g., CrossVIS [35] and STC [11], adopt a similar tracking head, but are built upon CondInst [24] instead of Mask R-CNN. Offline methods directly segment instances in videos by considering each instance as a sub-volume in the 3D spatio-temporal volume of the input video. Inspired by DETR [3], the transformer-based method VisTR [31] first formulates VIS as a sequence decoding problem. This idea is also adopted by IFC [9], SeqFormer [32], and Mask2Former [4]. Recently, the first semi-supervised VIS approach [5], which does not require video annotations, was proposed.

So far, existing approaches for VIS can only handle classification, detection, segmentation, and tracking of instances for a maximum of 40 object categories by training their deep models on the current datasets [34, 20]. Besides, the mainstream approaches rely heavily on annotated videos. In comparison, SSIS-Track has no reliance on video instance annotations in the training process and detects foreground objects and their associated shadows without identifying their categories.

#### Self-supervised learning for videos.

Obtaining labeled video data is one of the most critical bottlenecks of exercising deep learning on videos. This motivates the development of self-supervised learning to remove the burden of annotating the videos. Mainstream self-supervised learning approaches [10, 14, 30, 13, 33] learn pixel or instance representations for downstream video tasks, e.g., object tracking, by exploiting temporal correspondence in unlabeled videos. As a self-supervision signal, a cycle consistency loss is often used to train the tracking embedding. By doing so, we can fine-tune SSIS-Track on unlabeled videos and further enhance SSIS-Track's performance in paired tracking. Beyond the existing methods, the tracking embedding for the correspondence module in SSIS-Track is formed by paired shadow and object embeddings, where both space and time correspondences are formulated during the training to promote paired shadow-object tracking.

## 3 SOBA-VID Dataset

Figure 2: Example frames in our SOBA-VID dataset contain (a) shadow instances, (b) object instances, and (c) their associations. More examples are presented in (d).

Figure 3: Statistical properties of the SOBA-VID test set.

We build SOBA-VID (Shadow-OBject Association of VIDeos), a new dataset for video instance shadow detection. When compiling it, we consider the following criteria: (i) the collected videos must contain common shadow-object pairs; (ii) the shadows in videos should be clear without ambiguity; (iii) the instance categories and scenarios in the collected videos should be diverse, for both the occluded shadows and objects; and (iv) we try to include videos with motion backgrounds and diverse light directions.
Overall, SOBA-VID has 292 videos with 7,045 frames, collected from (i) existing datasets, DAVIS [2], YouTube-VIS [34], and OVIS [20], (ii) self-collected videos, and (iii) keyword search on the Internet by "shadow plus common instances." We randomly split the videos into a training set (232 videos, 5,863 frames) and a testing set (60 videos, 1,182 frames). For the videos in the test set, we manually annotate the masks of each shadow/object instance frame by frame using Apple Pencil and associate them to form shadow-object pairs (134 pairs in total); see Fig. 2 for some examples.

Further, we analyze the statistics of SOBA-VID. From Fig. 3, we can see that (i) SOBA-VID contains a diverse number of paired shadow-object associations per video, with around 2.23 pairs on average; and (ii) as shown in the right sub-figure, the lengths of both videos and instances are diverse, with average lengths of 19.7 and 17.2 frames, respectively. Note that Fig. 2 (d) shows various example frames in SOBA-VID; please refer to the supplementary material for more examples and results.

## 4 Methodology

Figure 4: The schematic illustration of our two-stage SSIS-Track framework. In the first stage (top), the framework first learns from images with labels in a supervised manner. Then, in the second stage (bottom), it learns from videos without labels in a self-supervised manner.

### 4.1 Overall Network Architecture

Fig. 4 shows the overall architecture of SSIS-Track, which has two stages. (i) Given labeled images from SOBA, we adopt SSIS [27] to learn to detect and segment the shadow instances with their associated object instances by using image-level supervision. Then, we leverage the tracking tower to dynamically generate the network parameters, which will be assigned to the tracking head to predict the tracking embeddings. (ii) Given the unlabeled videos from SOBA-VID, we use the temporal correspondence to further improve the ability of the tracking branch to perform paired tracking, which is trained in a self-supervised manner. In the following, we elaborate on how to learn a tracking embedding for instance paired tracking (Section 4.2) and introduce the bidirectional retrieving mechanism to further enhance the tracking ability (Section 4.3).

### 4.2 Instance Paired Tracking

Besides training our framework on the SOBA dataset to teach it to perform instance shadow detection on single images, we develop and incorporate various new components, i.e., the tracking tower, tracking head, and tracking controller, together forming the tracking branch, such that it can effectively learn to track paired shadows and objects in videos. Procedure-wise, we first train the newly-added tracking components on single images in a supervised manner, and then adopt a self-supervised learning approach by training SSIS-Track on unlabeled videos, such that we can avoid the burden of labeling masks of paired shadows and objects frame-by-frame on videos.

#### Learning from labeled images.

First, we leverage labeled image data to train SSIS-Track. As shown in Fig. 4, given a single image, we adopt the feature pyramid network [16] to extract features. Beyond the box and class towers in SSIS [27, 28] to predict shadow and object instances on single images, we add a tracking tower and a tracking controller to generate parameters of the convolutional filters for each shadow/object instance. The generated filters are then used in the tracking head to predict the tracking embedding of each instance. The inputs to the tracking head are the concatenation of the mask features and relative coordinates of each instance; see [27, 28] for details.
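To make the dynamic-filter design concrete, below is a minimal PyTorch sketch of a CondInst-style tracking head whose per-instance filters come from a controller; the channel sizes, the two-layer structure, and the function names are illustrative assumptions on our side, not the exact configuration of SSIS-Track.

```python
import torch
import torch.nn.functional as F

FEAT, EMBED = 8, 16                       # assumed channel sizes
IN = FEAT + 2                             # mask features + relative (x, y) coords
N_PARAMS = IN * FEAT + FEAT + FEAT * EMBED + EMBED  # two 1x1 conv layers

def tracking_head(mask_feats, rel_coords, ctrl_params):
    """mask_feats: (FEAT, H, W); rel_coords: (2, H, W);
    ctrl_params: (N_PARAMS,) emitted by the tracking controller."""
    x = torch.cat([mask_feats, rel_coords], dim=0).unsqueeze(0)  # (1, IN, H, W)
    p = ctrl_params
    w1, p = p[:IN * FEAT].view(FEAT, IN, 1, 1), p[IN * FEAT:]
    b1, p = p[:FEAT], p[FEAT:]
    w2, b2 = p[:FEAT * EMBED].view(EMBED, FEAT, 1, 1), p[FEAT * EMBED:]
    x = F.relu(F.conv2d(x, w1, b1))
    return F.conv2d(x, w2, b2).squeeze(0)  # (EMBED, H, W) per-pixel embeddings

# usage with random tensors for one instance
emb = tracking_head(torch.randn(FEAT, 32, 32), torch.randn(2, 32, 32),
                    torch.randn(N_PARAMS))
```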
To learn to track, we propose to make the tracking embeddings from the same instance closer in the latent space, and more distant otherwise. Here, we adopt contrastive learning [5, 7]. Specifically, for the $i$-th instance $\Omega_{i}$, we compute the center embedding $C_{i}$ as the average value of all the embeddings belonging to this instance:

$C_{i}\ =\ \frac{1}{N_{i}}{\textstyle\sum_{e\in\Omega_{i}}}f_{e}\ ,$ (1)

where $f_{e}$ denotes the instance-level tracking embedding of $\Omega_{i}$ and $N_{i}$ denotes the number of locations on the feature maps belonging to $\Omega_{i}$ with classification score $\textgreater$ 0.05 [25]. To encourage closer embeddings from the same instance, we adopt the center loss to minimize the L1 distance:

$L_{i}^{center}\ =\ \sum_{e\in\Omega_{i}}\parallel C_{i}-f_{e}\parallel\ .$ (2)

To make the embeddings distinctive from embeddings of other instances, we push the center embeddings of all the other instances further apart. To do so, we compute a dense similarity matrix $Sim(i,j)$ that represents the similarity between any two center embeddings:

$Sim(i,j)=\frac{exp(C_{i}^{T}\cdot C_{j})}{{\textstyle\sum_{k=0}^{K}}exp(C_{i}^{T}\cdot C_{k})}\ ,$ (3)

where $K$ is the total number of instances. Note that we calculate the similarity matrix for shadows and objects separately, based on the predicted classification results from the class tower, denoted as $Sim_{S}$ and $Sim_{O}$. Then, we introduce the contrast loss in terms of the cross entropy $C_{en}$ to maximize the self-matching likelihood and push different instances as far away as possible:

$L^{contra}\ =\ C_{en}(Sim_{S},I)+C_{en}(Sim_{O},I)\ ,$ (4)

where $I$ is an identity matrix. By combining all $L_{i}^{center}$ and $L^{contra}$ together, we can jointly optimize the framework to learn the tracking ability from the input images.

#### Learning from unlabeled videos.

Unlike the image data that provides labels for each instance, we adopt videos without labels to train the framework through self-supervised learning. Given two adjacent frames in a video, we first compute the tracking embeddings for the pixels with classification score $\textgreater$ 0.05 on each instance. Then, we adopt non-maximum suppression (NMS) to obtain a paired shadow-object tracking embedding for each instance. Afterward, we generate a transition matrix $A$ by computing the embedding similarity of each pair of instances at different time frames, where each element $A_{t}^{t+1}(i,j)$ stands for the transition probability of the $i^{th}$ instance at time $t$ to the $j^{th}$ instance at time $t+1$. Following the idea of the cycle consistency loss [10, 14, 30], if the $i^{th}$ instance at time $t$ is able to transit (i.e., is similar) to the $j^{th}$ instance at time $t+1$, the inverse transition should also hold. Thus, we encourage the diagonal elements in the multiplication result of the two transition matrices $A_{t}^{t+1}$ and $A_{t+1}^{t}$ to have large values by comparing them with an identity matrix ($I$):

$L^{cyc}\ =\ C_{en}(A_{t}^{t+1}A_{t+1}^{t},I)\ .$ (5)

Note that the cycle consistency loss in our approach uses paired shadow-object tracking embeddings and is applied in both directions. Fig. 4 shows the detailed structure of performing self-supervised learning on unannotated videos in our SSIS-Track framework.
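A compact PyTorch sketch of these three losses follows; gathering the per-location embeddings with their instance indices, and building the transition matrices as softmax-normalized similarities, are assumed implementation details on our side.

```python
import torch
import torch.nn.functional as F

def center_and_contrast_losses(emb, inst_id):
    """emb: (P, D) embeddings at locations with score > 0.05;
    inst_id: (P,) instance index of each location."""
    ids = inst_id.unique()
    centers = torch.stack([emb[inst_id == i].mean(dim=0) for i in ids])  # Eq. (1)
    # Eq. (2): L1 pull of each embedding towards its instance center
    l_center = sum((emb[inst_id == i] - c).abs().sum()
                   for i, c in zip(ids, centers))
    # Eqs. (3)-(4): softmax over center dot-products vs. the identity target;
    # the paper computes this separately for shadows and for objects.
    sim_logits = centers @ centers.t()
    l_contra = F.cross_entropy(sim_logits, torch.arange(len(ids)))
    return l_center, l_contra

def cycle_consistency_loss(emb_t, emb_t1):
    """Eq. (5): emb_t, emb_t1 are (N, D) paired shadow-object embeddings
    of the instances at frames t and t+1."""
    a_fwd = F.softmax(emb_t @ emb_t1.t(), dim=1)   # A_t^{t+1}
    a_bwd = F.softmax(emb_t1 @ emb_t.t(), dim=1)   # A_{t+1}^t
    cycle = a_fwd @ a_bwd                          # row-stochastic, ~identity
    # cross entropy against the identity = -mean log of the diagonal
    return -cycle.diagonal().clamp_min(1e-8).log().mean()
```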
#### Tracking paired instances across frames.

To perform instance paired tracking, we design a matching strategy to track each paired shadow-object association across time frames, following [34]. First, we assume $N$ instances in the tracking queue with their tracking embeddings represented by the features of each instance. Then, we assume $M$ instances in the next video frame and compute their tracking embeddings. Moreover, we obtain the similarity $Score(f_{i},f_{j})$ between the instances in the frame and the instances in the tracking queue by computing the cosine similarity of their tracking embeddings and normalizing via a bidirectional softmax function:

$Score(f_{i},f_{j})=\left[\frac{exp(cos(f_{i},f_{j}))}{{\textstyle\sum_{k=1}^{M}}exp(cos(f_{k},f_{j}))}+\frac{exp(cos(f_{i},f_{j}))}{{\textstyle\sum_{k=1}^{N}}exp(cos(f_{i},f_{k}))}\right]/2\ ,$ (6)

where $f_{i}$ denotes the detected instance tracking embedding in the current frame and $f_{j}$ denotes the latest instance tracking embedding in the tracking queue. If the score is lower than a predefined matching threshold, we consider the instance as a new identity and put its embedding into the tracking queue. Otherwise, we consider it as a tracked instance and update the corresponding embedding in the tracking queue. Note that we concatenate the tracking embeddings of the paired shadow-object associations as the overall tracking embedding, which is used in this strategy.

Other cues, including the IoU score and confidence score, are also utilized in the testing stage for tracking paired instances across frames, following [34]. For each newly-detected instance $i$, let $f_{i}$, $b_{i}$, and $c_{i}$ denote its tracking embedding, bounding box, and confidence score, respectively. For instance $j$ in the tracking queue with tracking embedding $f_{j}$ and bounding box $b_{j}$, we can obtain the similarity score of the instances $i$ and $j$ by

$FinalScore(i,j)\ =\ \alpha Score(f_{i},f_{j})+\beta IoU(b_{i},b_{j})+\gamma c_{i}\ ,$ (7)

where $\alpha$, $\beta$, and $\gamma$ are hyperparameters to balance different cues. Note that we only use Eq. (7) in testing.
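The matching step of Eqs. (6) and (7) can be sketched as follows; the hyperparameters default to 1.0 only as placeholders, since the values of $\alpha$, $\beta$, and $\gamma$ are not specified in this excerpt.

```python
import torch
import torch.nn.functional as F

def match_scores(cur, queue):
    """Eq. (6). cur: (M, D) embeddings detected in the current frame;
    queue: (N, D) latest embeddings in the tracking queue."""
    cos = F.normalize(cur, dim=1) @ F.normalize(queue, dim=1).t()  # (M, N)
    # bidirectional softmax: over current-frame instances (dim 0)
    # and over queue instances (dim 1)
    return 0.5 * (F.softmax(cos, dim=0) + F.softmax(cos, dim=1))

def final_scores(score, box_iou, conf, alpha=1.0, beta=1.0, gamma=1.0):
    """Eq. (7), used at test time only. box_iou: (M, N) bounding-box IoUs;
    conf: (M,) detection confidence scores."""
    return alpha * score + beta * box_iou + gamma * conf[:, None]
```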
### 4.3 Bidirectional Retrieving Mechanism for Unpaired Shadow/Object Instances

Figure 5: The schematic illustration of the bidirectional retrieving mechanism in our method.

SSIS [27, 28] is able to perform instance shadow detection only on individual images and cannot detect shadow/object instances without paired object/shadow instances. For video instance shadow detection, some shadows/objects may be missing for a short while, due to occlusion or moving out of the view. Hence, we should try to track individual shadow/object instances and find their associated object/shadow instances when they re-appear in the video. To achieve this, we formulate a bidirectional retrieving mechanism, which is used only in the inference phase.

In the matching procedure, we compute the similarity matrix using Eq. (6) by taking the tracking embeddings of individual shadow/object instances in the current frame and in the tracking queue. If we can successfully match their tracking embeddings, we will update the tracking embedding of the individual shadow/object instance in the queue and keep the corresponding tracking embedding of the paired object/shadow instance unchanged. Hence, we can track the paired shadow-object association after it re-appears in the video. If we cannot match the tracking embeddings in both the current frame and the tracking queue, we will ignore the shadow/object instance, since we can only handle paired shadow-object associations that appeared for at least one video frame. To further tackle the situation where the paired shadow-object associations are in later frames, we adopt this mechanism again but in a reverse order.

Fig. 5 shows two examples: (i) object $n$ in (a) disappears in time frame $t-1$, but its shadow remains. We can successfully track the shadow instance and re-match it with its object in time frame $t$; (ii) shadow $m$ appears only in time frames $t-1$ and $t$. By using our retrieving mechanism in a reverse order, we are able to match object $m$ at the previous time frame $t-2$. Hence, by leveraging the tracking embeddings, we can retrieve unpaired instances to improve the performance, thus promoting better visual effects in the applications of video instance shadow detection.

## 5 Experimental Results

### 5.1 Evaluation Metrics

To evaluate video instance shadow detection, we extend the evaluation metric SOAP [29] for single-image instance shadow detection to videos. A sample is considered a true positive when the Intersection-over-Union (IoU) between the predicted and ground-truth shadow instances, object instances, and shadow-object associations are all no less than a threshold $\tau$. Specifically, we replace the IoU in SOAP with the spatio-temporal IoU in [34] and name the updated metric SOAP-VID. We report the average over multiple $\tau$ [0.5:0.05:0.95] in SOAP-VID. Using the spatio-temporal IoU, we also examine the average precision for shadow/object instances and paired shadow-object associations over the thresholds [0.5:0.05:0.95], denoted as Instance AP and Association AP, respectively.
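As a reference for how SOAP-VID differs from image-level SOAP, here is a sketch of a spatio-temporal IoU in the spirit of [34]; representing each track as per-frame binary masks keyed by frame index is our assumption.

```python
import numpy as np

def spatiotemporal_iou(pred, gt):
    """pred, gt: dicts mapping frame index -> binary mask (H, W).
    Intersection and union are accumulated over all frames in which
    either track appears; a missing frame counts as an empty mask."""
    inter = union = 0
    for t in set(pred) | set(gt):
        p, g = pred.get(t), gt.get(t)
        if p is None:
            union += int(g.sum())
        elif g is None:
            union += int(p.sum())
        else:
            inter += int(np.logical_and(p, g).sum())
            union += int(np.logical_or(p, g).sum())
    return inter / union if union else 0.0
```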
### 5.2 Implementation Details

#### Training parameters.

We train SSIS-Track by following the training strategies of CondInst [24] and AdelaiDet [23]. For images, we adopt the weights of CondInst trained on COCO [17] to initialize the weights of the backbone network, and then use the same parameters as in SSIS [27, 28] to train SSIS-Track on the SOBA dataset [29]. For videos, we adopt the weights of SSIS-Track trained on images to initialize the model, set the mini-batch size as four, and optimize SSIS-Track on an NVIDIA RTX 3090 GPU for 5,000 iterations. We set the initial learning rate as $1e-5$ and reduce it to $1e-6$ at 4,500 iterations. Also, we re-scale the input images without changing the aspect ratio, such that the shorter side is no less than 672 pixels.

#### Inference.

We process input videos in an online manner. The tracking branch of SSIS-Track predicts the tracking embeddings for both paired and unpaired shadow/object instances frame by frame and matches the instances across frames using the strategy in Section 4.2.

Table 1: Comparison with two baseline methods for video instance shadow detection. Our method outperforms all the baseline methods by a large margin, where the improvements over "SSIS + Mask2Former" on SOAP-VID, Association AP, and Instance AP are 24.5%, 20.4%, and 20.6%, respectively.

| Method | SOAP-VID | Association AP | Instance AP |
|---|---|---|---|
| SSIS + IoU Tracker [1] | 21.5 | 31.7 | 25.7 |
| SSIS + Mask2Former [4] | 31.8 | 51.1 | 42.2 |
| SSIS-Track | 39.6 | 61.5 | 50.9 |

Table 2: Ablation: major components on SOBA-VID test set.

| Method | With Tracking Branch | Contrastive Learning | Temporal Correspondence | Retrieving Mechanism | SOAP-VID | Association AP | Instance AP |
|---|---|---|---|---|---|---|---|
| SSIS | | | | | 29.7 | 40.0 | 36.5 |
| SSIS-Track | $\checkmark$ | $\checkmark$ | | | 36.8 | 55.4 | 45.2 |
| SSIS-Track | $\checkmark$ | $\checkmark$ | $\checkmark$ | | 38.0 | 55.9 | 46.8 |
| SSIS-Track | $\checkmark$ | $\checkmark$ | $\checkmark$ | $\checkmark$ | 39.6 | 61.5 | 50.9 |

### 5.3 Comparison with Baseline Methods

We consider two baselines in our comparison: (i) adopt SSIS [27] to detect shadow instances, object instances, and shadow-object associations on each individual frame and use the IoU Tracker [1] to match the instances across different frames based on the IoU values between the detected bounding boxes; and (ii) leverage the tracking algorithm Mask2Former [4] to detect and track the object instances across all video frames, use SSIS to detect the shadow/object instances and shadow-object associations on each frame, and merge the detection results with the tracking results based on the mask IoU of each object instance.

Table 1 reports the comparison results. Our method significantly surpasses both baselines under all the evaluation metrics on the SOBA-VID test set. The IoU Tracker relies heavily on the quality of the detection results, whereas Mask2Former can only recognize objects of the same categories as those in the training data [34]. In contrast, our approach benefits from the end-to-end training and knowledge learned from the unlabeled videos; see Fig. 1 for visual comparisons between SSIS-Track and "SSIS + Mask2Former".

Table 3: Ablation: instance paired tracking on SOBA-VID test set.

| Method | Object Embedding | Shadow Embedding | SOAP-VID | Asso. AP | Inst. AP |
|---|---|---|---|---|---|
| SSIS-Track | $\checkmark$ | | 38.4 | 59.7 | 49.3 |
| SSIS-Track | | $\checkmark$ | 38.3 | 59.1 | 48.6 |
| SSIS-Track | $\checkmark$ | $\checkmark$ | 39.6 | 61.5 | 50.9 |

Figure 6: (a) and (b) show visual comparisons between SSIS-Track and Omnimatte [19]. (c) shows that SSIS-Track is able to find both static and dynamic instance pairs. The yellow boxes show Omnimatte's results, which are contaminated with background noises. The red boxes show that the mask quality of SSIS-Track is far better than Omnimatte.

### 5.4 Comparison with Omnimatte

Lu et al. [19] present a new task that predicts a subject together with its associated visual effects, e.g., reflection, dust, and shadow. They design a framework named Omnimatte that takes a video as the input and approaches this task through self-supervised learning. However, this method requires other forms of inputs, such as the instance masks, optical flows, and homographies, which are provided by other methods [22, 4, 6]. Fig. 6 shows visual comparisons between SSIS-Track and Omnimatte, where (a) and (b) show that the masks predicted by Omnimatte contain regions of the background, and also do not include the complete shadow regions. (c) indicates that our SSIS-Track is applicable to static instances, while Omnimatte fails, as also mentioned in [19]. Note that Omnimatte takes about three and a half hours to generate the associated masks for this 21-frame video, while our method takes only around 18 seconds on a single RTX 3090 GPU.

### 5.5 Ablation Study

#### Component analysis.

We perform an ablation study by adding the major components, i.e., the tracking branch with contrastive learning, temporal correspondence, and the retrieving mechanism, step by step, starting from the baseline of using SSIS, which performs instance tracking by taking the output of the mask controller as the tracking embedding. Table 2 reports the results, showing that each component contributes to improving the overall performance.
Note that contrastive learning and temporal correspondence encourage the framework to generate tracking embeddings for achieving paired tracking of instances, while the retrieving mechanism helps to find the missed unpaired instances.

#### Instance paired tracking analysis.

Next, we explore the effectiveness of instance paired tracking. As Table 3 shows, we achieve paired tracking by using the tracking embeddings of individual shadow/object instances. Yet, when using the concatenation of both the shadow and object embeddings, the framework obtains the best performance on all the evaluation metrics.

## 6 Applications

We showcase video-level applications of video instance shadow detection to demonstrate the performance and applicability of SSIS-Track. As this paper can only show static images, please watch the demo videos in the supplementary material.

#### Video inpainting.

Video instance shadow detection can generate shadow-object association masks for paired instances across frames, which can be used not only to remove objects together with their associated shadows but also to replace the video background while retaining the objects with their shadows. Fig. 7 (a) shows some snapshots of this video inpainting application using the extracted shadow-object association masks provided by SSIS-Track. With the help of SSIS-Track, the SOTA video inpainting method [15] can successfully remove two motorcycles with their associated shadows simultaneously. Also, Fig. 7 (b) shows that our shadow-object association masks can cooperate with Stable Diffusion [21] to replace the video background and put the dancer on Mars with her associated shadow.

Figure 7: Example sequences demonstrating the application of video instance shadow detection in video inpainting.

#### Instance cloning.

Fig. 8 presents the application of video instance cloning based on video instance shadow detection, where we can successfully generate many cinematic video effects, _e.g_., a man with a suitcase walking in a crowd flow. To achieve this, we duplicate the paired shadow-object associations across frames from one sequence to another with adjusted frame rate and motion blur; the final result is presented in Fig. 8 (c). In contrast, Fig. 8 (b) presents the result supported by Mask2Former [4], which (i) fails to detect the instance in some frames and (ii) produces unrealistic effects without the associated shadows.

Figure 8: A man walking in a crowd, created using our results.

#### Shadow editing.

We further demonstrate the application of instance paired tracking in Fig. 9, where the shadow of Usain Bolt is replaced with the shadow of a cheetah. With the help of the paired masks predicted by SSIS-Track across frames, we can first remove the shadow of Usain Bolt via video inpainting [15]; see Fig. 9 (b). Then, we can acquire a sequence of shadows of a cheetah (see Fig. 9 (c)) and adjust it to fit the object instance of Usain Bolt; see the final result in Fig. 9 (d).

Figure 9: Example sequences demonstrating the application of video instance shadow detection in shadow editing.

## 7 Conclusion

This paper presents the video instance shadow detection task, which requires us to simultaneously detect, segment, associate, and track paired shadow-object associations in videos. To approach this task, we formulate the SSIS-Track framework, which is optimized to learn paired tracking of shadow and object instances from labeled images in a supervised manner and then from unlabeled videos in a self-supervised manner.
We further design a bidirectional retrieving mechanism to maintain paired tracking even when the objects/shadows are temporarily missed for several frames. Besides, we build SOBA-VID, a new dataset with unlabeled videos for training and labeled videos for testing. The experimental results show the superiority of our method in multiple aspects. In the end, we showcase the applicability of video instance shadow detection and its potential to support various video-level editing applications.

#### Limitations.

Our SSIS-Track takes around 5000 MB of memory to process a 21-frame video at a resolution of 1280 $\times$ 720. In the future, we hope to develop a more efficient model to perform this task. Also, we plan to explore video-editing effects through generative models.

## References

* [1] Erik Bochinski, Volker Eiselein, and Thomas Sikora. High-speed tracking-by-detection without using image information. In AVSS, pages 1–6, 2017.
* [2] Sergi Caelles, Alberto Montes, Kevis-Kokitsi Maninis, Yuhua Chen, Luc Van Gool, Federico Perazzi, and Jordi Pont-Tuset. The 2018 DAVIS challenge on video object segmentation. arXiv:1803.00557, 2018.
* [3] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, pages 213–229, 2020.
* [4] Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In CVPR, 2022.
* [5] Yang Fu, Sifei Liu, Umar Iqbal, Shalini De Mello, Humphrey Shi, and Jan Kautz. Learning to track instances without video annotations. In CVPR, pages 8680–8689, 2021.
* [6] Matthias Grundmann, Vivek Kwatra, and Irfan Essa. Auto-directed video stabilization with robust L1 optimal camera paths. In CVPR, pages 225–232, 2011.
* [7] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, pages 9729–9738, 2020.
* [8] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, pages 2961–2969, 2017.
* [9] Sukjun Hwang, Miran Heo, Seoung Wug Oh, and Seon Joo Kim. Video instance segmentation using inter-frame communication transformers. In NeurIPS, pages 13352–13363, 2021.
* [10] Allan Jabri, Andrew Owens, and Alexei Efros. Space-time correspondence as a contrastive random walk. In NeurIPS, pages 19545–19560, 2020.
* [11] Zhengkai Jiang, Zhangxuan Gu, Jinlong Peng, Hang Zhou, Liang Liu, Yabiao Wang, Ying Tai, Chengjie Wang, and Liqing Zhang. STC: Spatio-temporal contrastive learning for video instance segmentation. ECCVW, 2022.
* [12] Dahun Kim, Sanghyun Woo, Joon-Young Lee, and In So Kweon. Video panoptic segmentation. In CVPR, pages 9859–9868, 2020.
* [13] Zihang Lai and Weidi Xie. Self-supervised learning for video correspondence flow. In BMVC, 2019.
* [14] Xueting Li, Sifei Liu, Shalini De Mello, Xiaolong Wang, Jan Kautz, and Ming-Hsuan Yang. Joint-task self-supervised learning for temporal correspondence. In NeurIPS, 2019.
* [15] Zhen Li, Cheng-Ze Lu, Jianhua Qin, Chun-Le Guo, and Ming-Ming Cheng. Towards an end-to-end framework for flow-guided video inpainting. In CVPR, pages 17562–17571, 2022.
* [16] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In CVPR, pages 2117–2125, 2017.
* [17] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick.
Microsoft COCO: Common objects in context. In ECCV, pages 740–755, 2014.
* [18] Rui Liu, Hanming Deng, Yangyi Huang, Xiaoyu Shi, Lewei Lu, Wenxiu Sun, Xiaogang Wang, Jifeng Dai, and Hongsheng Li. FuseFormer: Fusing fine-grained information in transformers for video inpainting. In ICCV, pages 14040–14049, 2021.
* [19] Erika Lu, Forrester Cole, Tali Dekel, Andrew Zisserman, William T. Freeman, and Michael Rubinstein. Omnimatte: Associating objects and their effects in video. In CVPR, pages 4507–4515, 2021.
* [20] Jiyang Qi, Yan Gao, Yao Hu, Xinggang Wang, Xiaoyu Liu, Xiang Bai, Serge Belongie, Alan Yuille, Philip Torr, and Song Bai. Occluded video instance segmentation: A benchmark. IJCV, 2022.
* [21] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, pages 10684–10695, 2022.
* [22] Zachary Teed and Jia Deng. RAFT: Recurrent all-pairs field transforms for optical flow. In ECCV, pages 402–419, 2020.
* [23] Zhi Tian, Hao Chen, Xinlong Wang, Yuliang Liu, and Chunhua Shen. AdelaiDet: A toolbox for instance-level recognition tasks. https://git.io/adelaidet, 2019.
* [24] Zhi Tian, Chunhua Shen, and Hao Chen. Conditional convolutions for instance segmentation. In ECCV, pages 282–298, 2020.
* [25] Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. FCOS: Fully convolutional one-stage object detection. In ICCV, pages 9627–9636, 2019.
* [26] Paul Voigtlaender, Michael Krause, Aljosa Osep, Jonathon Luiten, Berin Balachandar Gnana Sekar, Andreas Geiger, and Bastian Leibe. MOTS: Multi-object tracking and segmentation. In CVPR, pages 7942–7951, 2019.
* [27] Tianyu Wang*, Xiaowei Hu*, Chi-Wing Fu, and Pheng-Ann Heng. Single-stage instance shadow detection with bidirectional relation learning. In CVPR, pages 1–11, 2021.
* [28] Tianyu Wang*, Xiaowei Hu*, Pheng-Ann Heng, and Chi-Wing Fu. Instance shadow detection with a single-stage detector. IEEE TPAMI, pages 1–14, 2022.
* [29] Tianyu Wang*, Xiaowei Hu*, Qiong Wang, Pheng-Ann Heng, and Chi-Wing Fu. Instance shadow detection. In CVPR, pages 1880–1889, 2020.
* [30] Xiaolong Wang, Allan Jabri, and Alexei A. Efros. Learning correspondence from the cycle-consistency of time. In CVPR, pages 2566–2576, 2019.
* [31] Yuqing Wang, Zhaoliang Xu, Xinlong Wang, Chunhua Shen, Baoshan Cheng, Hao Shen, and Huaxia Xia. End-to-end video instance segmentation with transformers. In CVPR, pages 8741–8750, 2021.
* [32] Junfeng Wu, Yi Jiang, Song Bai, Wenqing Zhang, and Xiang Bai. SeqFormer: Sequential transformer for video instance segmentation. In ECCV, pages 553–569, 2022.
* [33] Jiarui Xu and Xiaolong Wang. Rethinking self-supervised correspondence learning: A video frame-level similarity perspective. In ICCV, pages 10075–10085, 2021.
* [34] Linjie Yang, Yuchen Fan, and Ning Xu. Video instance segmentation. In ICCV, pages 5188–5197, 2019.
* [35] Shusheng Yang, Yuxin Fang, Xinggang Wang, Yu Li, Chen Fang, Ying Shan, Bin Feng, and Wenyu Liu. Crossover learning for fast online video instance segmentation. In ICCV, pages 8043–8052, 2021.
# Self-organized PT-symmetry of exciton-polariton condensate in a double-well potential

P.A. Kalozoumis
Materials Science Department, School of Natural Sciences, University of Patras, GR-26504 Patras, Greece
Hellenic American University, 436 Amherst st, Nashua, NH 0306 USA
Institute of Electronic Structure and Laser, FORTH, GR-70013 Heraklion, Crete, Greece

D. Petrosyan
Institute of Electronic Structure and Laser, FORTH, GR-70013 Heraklion, Crete, Greece
A. Alikhanyan National Science Laboratory (YerPhI), 0036 Yerevan, Armenia

###### Abstract

We investigate the dynamics and stationary states of a semiconductor exciton-polariton condensate in a double-well potential. We find that, upon the population build-up of the polaritons by above-threshold laser pumping, coherence relaxation due to the phase fluctuations of the polaritons drives the system into a stable fixed point corresponding to a self-organized PT-symmetric phase.

## I Introduction

One of the prominent research directions in semiconductor optics is the study of exciton-polariton condensation in microcavities. Exciton-polaritons are hybrid quasi-particles of strongly coupled quantum-well excitons and cavity photons [1, 2] which retain the properties of both matter and light. The excitonic part mediates effective interactions between the polaritons, giving rise to interesting nonlinear properties, whereas the small effective mass of the photonic component enables Bose-Einstein condensation even at ambient temperatures [3, 4, 5], in contrast to their ultracold atomic counterparts [6]. The short lifetime of the polariton condensate renders it an open system that requires continuous replenishing from the excitonic reservoir via external pumping.

After their experimental realization [7, 8], polariton condensates have been shown to be ideal systems for studying many effects at the interface of non-equilibrium physics and nonlinear dynamics. The intrinsic nonlinear dynamics of polariton systems lead to a variety of effects, such as the appearance of a Mach-Cherenkov cone in a supersonic flow [9], the formation of quantized vortices [10], and dark solitons [11]. Moreover, polariton condensates can be engineered with high precision by external laser fields [8, 12, 13, 14]. Finally, such systems are promising candidates for various applications in photonic devices, such as switches, gates, and transistors [3], as well as for quantum simulators of interacting spin models [15].

The "open" nature of the system, featuring gain and loss, leads to interesting implications when the dissipative dynamics become pseudo-Hermitian. This is the case in parity-time (PT) symmetric setups, where dissipation losses are exactly balanced by the pumping gain. PT-symmetric systems have been a flourishing and broad research field, extending from quantum mechanics [16] and field theory [17] to optics [18] and acoustics [19]. Balancing the inherent losses against the laser pumping so as to preserve the PT symmetry of the system provides an effective framework in which a polariton system can exhibit coherent, Hermitian-like dynamics for relatively long times. Recent works have shown promising results, such as permanent Rabi oscillations [20], multistability and condensation below threshold [21], exceptional points in polaritonic cavities below the lasing threshold [22], and coherent oscillations of a two-species polariton mixture in a double well [23].
The latter has been shown to be able to simulate the dynamics of a pair of spin-1/2 particles (qubits) in the presence of an exchange interaction. Polariton structures in the framework of PT symmetry have not yet been extensively studied, however, and more effort is required to understand the rich landscape of phenomena that emerges from this framework.

In this work we study the dynamics of an exciton-polariton condensate in a double-well potential, in the presence of time-varying exciton populations and phase fluctuations. We consider the coupled-mode equations for the polaritons, supplemented by the rate equations for the laser-pumped exciton reservoirs, and derive analytically the steady-state solutions for the exciton and polariton populations as well as their coherence. We find that, when the total pumping rate is above threshold, the system automatically attains the PT-symmetric state, independently of the pumping rates of the individual sites. Employing numerical simulations for several different pumping rates and initial conditions, we verify our analytical findings. We also study the stability and robustness of our results in the presence of phase noise caused by the unavoidable phase fluctuations of the polaritons.

## II The exciton-polariton system

Figure 1: Schematic top view (left panel) and side view (right panel) of a polariton system in a double quantum well. Spatially shaped pumping lasers populate with rates $P_{L}$ and $P_{R}$ the reservoir excitons $n_{L,R}$, which decay via recombination with rates $\Gamma$ and energy-relax and scatter into the polariton condensate with rate $R$. The pumping lasers also create the confining potentials for the polaritons $\psi_{L}$ and $\psi_{R}$, which decay with rates $\kappa$, are continuously replenished by the reservoir excitons with rates $Rn_{L,R}$, and interact with each other via the Josephson coupling $J$.

The system under consideration is schematically illustrated in Fig. 1. One or more layers of semiconductor quantum wells are placed inside the semiconductor microcavity near the antinode of the resonant cavity field mode. Spatially shaped pumping lasers continuously replenish the exciton reservoirs and simultaneously create confining potentials for the polariton condensate. Assuming a tight-binding double-well potential, the exciton-polariton system can be described by the following set of equations for the polariton condensate wavefunctions $\psi_{L}$ and $\psi_{R}$ in the left ($L$) and right ($R$) wells [24]:

$i\partial_{t}\psi_{L}=\left[\epsilon_{L}+\eta|\psi_{L}|^{2}\right]\psi_{L}+\frac{i}{2}\left[Rn_{L}-\kappa\right]\psi_{L}-J\,\psi_{R},$ (1a)

$i\partial_{t}\psi_{R}=\left[\epsilon_{R}+\eta|\psi_{R}|^{2}\right]\psi_{R}+\frac{i}{2}\left[Rn_{R}-\kappa\right]\psi_{R}-J^{*}\psi_{L},$ (1b)

where $\epsilon_{L,R}$ are the single-particle energies, $\eta$ is the nonlinear interaction strength, $\kappa$ is the decay rate of the polaritons due to exciton recombination and cavity photon losses (assumed the same for both wells), and $J$ is the Josephson (tunnel) coupling between the wells.
The polariton equations are supplemented by the equations for the populations $n_{L,R}$ of the reservoir excitons,

$\partial_{t}n_{L}=P_{L}-\Gamma n_{L}-Rn_{L}|\psi_{L}|^{2},$ (2a)

$\partial_{t}n_{R}=P_{R}-\Gamma n_{R}-Rn_{R}|\psi_{R}|^{2},$ (2b)

which are created by laser pumping with rates $P_{L,R}$, decay with rate $\Gamma$, and scatter into the polariton condensate with rate $R$.

In Appendix A we briefly outline the PT-symmetry conditions for a condensate in a double-well potential. Neglecting for the moment the nonlinearity $\eta$, the PT-symmetry condition is satisfied when $\epsilon_{L,R}=\epsilon\,(=0)$ and the gain in one well exactly compensates the losses in the other,

$\gamma_{L}\equiv\frac{1}{2}[Rn_{L}-\kappa]=-\frac{1}{2}[Rn_{R}-\kappa]\equiv-\gamma_{R},$ (3)

as per Eqs. (1a) and (1b), which leads to $n_{L}+n_{R}=\frac{2\kappa}{R}$. The threshold pumping at which the polariton condensate starts to form can be obtained from the condition that the sum of the gain and loss in both wells is non-negative, $\gamma_{L}+\gamma_{R}\geq 0$. With Eq. (3), this condition is equivalent to

$n_{L}+n_{R}\geq\frac{2\kappa}{R}.$ (4)

Note that exactly at the threshold this is the same condition as for the PT-symmetry. If we consider the stationary regime for the reservoir excitons, $\partial_{t}n_{L,R}=0$, we find from Eqs. (2) the steady-state values

$n_{L,R}=\frac{P_{L,R}}{\Gamma+R|\psi_{L,R}|^{2}}.$ (5)

Exactly at the threshold for condensate formation, the polariton populations in both wells, $|\psi_{L,R}|^{2}$, are still vanishingly small, and we have $n_{L,R}\simeq P_{L,R}/\Gamma$. Substituting these values into Eq. (4), we find the threshold pumping condition

$P_{L}+P_{R}\geq\frac{2\kappa\Gamma}{R},$ (6)

above which the condensate begins to form, while for $P_{L}+P_{R}<2\kappa\Gamma/R$ the condensate decays to zero.

Figure 2: Dynamics of the polariton populations $|\psi_{L,R}|^{2}$ (upper panels), the reservoir excitons $n_{L,R}$ (insets), and the coherence $\Theta$ (lower panels), as obtained from the numerical solution of Eqs. (1-2) for the parameters $\epsilon=0$, $\kappa=10J$, $\Gamma=2J$, $R=0.02J$ (time is in units of $J^{-1}$), corresponding to the threshold values of the pumping $P_{L}+P_{R}=2\kappa\Gamma/R=2000J$ and of the steady-state exciton populations $n_{L}+n_{R}=2\kappa/R=1000$ (green dashed lines in the insets). The initial exciton populations are always taken as $n_{L,R}=P_{L,R}/\Gamma$, while the initial polariton amplitudes $\psi_{L,R}$ are seeded with small (complex) values. (a) Linear case ($\eta=0$) with the pumping rates $P_{L}=1000J$ and $P_{R}=990J$ slightly below threshold, leading to the decay of the initial polariton populations $|\psi_{L}(t)|^{2}$ and $|\psi_{R}(t)|^{2}$ and a steady-state exciton population $n_{L}+n_{R}<2\kappa/R$. (b) Same as in (a), but for stronger pumping rates $P_{L}=1080J$ and $P_{R}=1020J$ above threshold and initial conditions $\textrm{Re}\Theta(0)=0$, leading to the initial build-up of the polariton populations and their continuous Rabi-like oscillations, while $\textrm{Re}\Theta(t)=0\;\forall\;t$, and the exciton population $n_{L}+n_{R}$ oscillates slightly above the threshold value $2\kappa/R=1000$. (c) Same as in (b) but for the initial conditions $\textrm{Re}\Theta(0)\neq 0$, leading to a steady state of the system. (d) Same as in (b) with the initial conditions $\textrm{Re}\Theta(0)=0$ but in the presence of the nonlinearity $\eta=0.3J$ that couples $\textrm{Re}\Theta$ and $\textrm{Im}\Theta$, leading to a steady state of the system.
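The dynamics shown in Fig. 2 can be reproduced by directly integrating Eqs. (1)-(2). The following is a minimal sketch (our illustration, not the code used to produce the figures) for the above-threshold linear case of Fig. 2(c); the seed amplitudes and the integration horizon are assumptions, and the final prints check the steady-state values discussed in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters of Fig. 2 in units of J = 1; pumping above threshold, eta = 0.
J, kappa, Gamma, R, eta = 1.0, 10.0, 2.0, 0.02, 0.0
P_L, P_R = 1080.0, 1020.0

def rhs(t, s):
    """Right-hand side of Eqs. (1)-(2) with epsilon_{L,R} = 0."""
    psiL, psiR, nL, nR = s[0], s[1], s[2].real, s[3].real
    dpsiL = (-1j * eta * abs(psiL)**2 + 0.5 * (R * nL - kappa)) * psiL + 1j * J * psiR
    dpsiR = (-1j * eta * abs(psiR)**2 + 0.5 * (R * nR - kappa)) * psiR + 1j * J * psiL
    dnL = P_L - Gamma * nL - R * nL * abs(psiL)**2
    dnR = P_R - Gamma * nR - R * nR * abs(psiR)**2
    return [dpsiL, dpsiR, dnL, dnR]

# Small complex seeds with Re(Theta(0)) != 0; reservoirs start at P/Gamma.
y0 = np.array([1e-3 + 1e-3j, 2e-3 - 1e-3j, P_L / Gamma, P_R / Gamma], dtype=complex)
sol = solve_ivp(rhs, (0.0, 100.0), y0, max_step=0.01)

nL, nR = sol.y[2].real, sol.y[3].real
print(nL[-1] + nR[-1])                              # -> 2*kappa/R = 1000 (PT condition)
print(abs(sol.y[0, -1])**2, abs(sol.y[1, -1])**2)   # -> equal steady-state populations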
In the upper panels of Fig. 2 we show the polariton populations $|\psi_{L,R}|^{2}$ for different pumping rates and initial conditions, with and without the nonlinear interaction, as obtained from the numerical solution of Eqs. (1-2). The insets show the evolution of the exciton populations $n_{L}$ and $n_{R}$ and their sum $n_{L}+n_{R}$. For pumping below threshold, we observe a decay of the initial (seed) polariton populations with rate $\gamma_{L}+\gamma_{R}\simeq\frac{R}{2\Gamma}(P_{L}+P_{R})-\kappa<0$, accompanied by Rabi-like oscillations, while the exciton populations settle to $n_{L,R}=P_{L,R}/\Gamma$. For pumping above threshold, the polariton populations grow until reaching certain values $|\psi_{L,R}|^{2}$ at which $n_{L}+n_{R}\simeq\frac{2\kappa}{R}$, while the Rabi-like oscillations persist or are eventually damped, depending on the initial conditions or the presence of the nonlinear interaction, as discussed below. Remarkably, the polariton and exciton populations increase and decrease, respectively, reaching the stationary values which satisfy the PT-symmetry conditions, independently of the pumping rates, as long as the pumping is retained above threshold.

## III Equivalence of the PT-symmetry and steady state conditions

To understand the dynamics of the system, it is convenient to express Eqs. (1) in terms of the polariton populations $|\psi_{L,R}|^{2}$ and the coherence $\Theta\equiv\psi_{L}\psi_{R}^{*}$ as

$\partial_{t}|\psi_{L}|^{2}=2\gamma_{L}|\psi_{L}|^{2}+2J\,\textrm{Im}\Theta,$ (7a)

$\partial_{t}|\psi_{R}|^{2}=2\gamma_{R}|\psi_{R}|^{2}-2J\,\textrm{Im}\Theta,$ (7b)

$\partial_{t}\Theta=-i\left[\epsilon_{L}-\epsilon_{R}+\eta(|\psi_{L}|^{2}-|\psi_{R}|^{2})\right]\Theta+(\gamma_{L}+\gamma_{R})\Theta-iJ\left(|\psi_{L}|^{2}-|\psi_{R}|^{2}\right).$ (7c)

Note that below threshold, $(\gamma_{L}+\gamma_{R})<0$, both the polariton populations and their coherence decay to zero, as already mentioned above. Let us assume $\epsilon_{L,R}=0$ and consider first the case of vanishing nonlinearity $\eta=0$. Equation (7c) indicates that the coherence decays only if its real part is nonzero. In turn, the solution for the real part of the coherence is

$\textrm{Re}\Theta(t)=\textrm{Re}\Theta(0)\;e^{\int_{0}^{t}(\gamma_{L}+\gamma_{R})dt^{\prime}}.$ (8)

Hence, if initially $\textrm{Re}\Theta(0)=0$, it will remain so at later times, $\textrm{Re}\Theta(t)=0\;\forall\;t>0$. The dynamics of the system, if pumped above threshold, will then exhibit continuous Rabi-like oscillations with frequency $J$, while no steady state will be attained, as in Fig. 2(b). In practice, however, even if initially we have $\textrm{Re}\Theta(0)=0$ [e.g., either $\psi_{L}(0)=0$ or $\psi_{R}(0)=0$], the unavoidable phase fluctuations of the polaritons will eventually lead to the appearance of a finite $\textrm{Re}\Theta(t)\neq 0$, which in turn will result in the decay of the coherence and drive the system to the steady state. Equivalently, if initially $\textrm{Re}\Theta(0)\neq 0$, the system can still exhibit Rabi-like oscillations at first, but it will eventually attain the steady state, as in Fig. 2(c).
Finally, as seen from Eq. (7c), the nonlinear interaction couples the real and imaginary parts of the coherence $\Theta$ with the rate $\eta(|\psi_{L}|^{2}-|\psi_{R}|^{2})$. Hence, in the presence of the nonlinearity $\eta\neq 0$, we expect the eventual decay of the coherence, with the system attaining the steady state for any initial conditions and independently of the phase fluctuations, as in Fig. 2(d).

Setting the time derivative on the left-hand side of Eq. (7c) equal to zero, we find that the steady state is reached when

$R[n_{L}+n_{R}]-2\kappa=0,\quad\mathrm{and}\quad|\psi_{L}|^{2}=|\psi_{R}|^{2}.$ (9)

Remarkably, the first equation corresponds exactly to the PT-symmetry condition $n_{L}+n_{R}=\frac{2\kappa}{R}$ discussed above. Moreover, this condition is satisfied even in the presence of the nonlinear interaction $\eta\neq 0$, because the equal polariton populations as per the second equation lead to exactly the same energy shifts $\eta|\psi_{L,R}|^{2}$ of the polaritons in both wells. In other words, for any initial conditions, and provided the total pumping is above threshold as per Eq. (6) but otherwise arbitrary $P_{L}$ and $P_{R}$, the system attains a stable fixed point corresponding to the PT-symmetric state. Even when no steady state exists or is yet reached, the PT condition in Eq. (9) is approximately satisfied, as seen in the insets of Fig. 2.

Figure 3: Dynamics of the polariton populations $|\psi_{L,R}|^{2}$ (upper panels), the coherence $\Theta$ (middle panels), and the left-well polariton field correlation function $g^{(1)}(t)$ (lower panels), for the same parameters as in Fig. 2 with the addition of phase fluctuations causing decoherence with rate $\xi=0.05J$, as obtained from the ensemble-averaged solution of Eqs. (1-2). (a) Linear case ($\eta=0$) with the below-threshold pumping $P_{L}=1000J$ and $P_{R}=990J$, leading to the exponential decay of the correlation function $g^{(1)}(t)\propto e^{-(\xi-|\gamma_{L}+\gamma_{R}|/2)t}$. (b) Same as in (a), but for stronger pumping above threshold, $P_{L}=1080J$ and $P_{R}=1020J$. Now, independently of the initial value of $\textrm{Re}\Theta(0)$, the phase fluctuations cause exponential decay of $g^{(1)}(t)\propto e^{-\xi t}$ while the system approaches the steady state. (c) Same as in (b) but in the presence of the nonlinearity $\eta=0.3J$, causing an accelerated decay of $g^{(1)}(t)\propto e^{-0.057t}$ and a faster approach of the system to the steady state. In the lower panels, we also show the correlation functions $g^{(1)}(t)$ obtained from the long-time average of the system dynamics (the oscillating tail of $g^{(1)}$ is due to the finite length of the time series).

Combining Eqs. (5) and (9), we find that the steady-state polariton populations are

$|\psi_{L}|^{2}=|\psi_{R}|^{2}=\frac{P_{L}+P_{R}}{2\kappa}-\frac{\Gamma}{R},$ (10)

while the exciton populations are

$n_{L,R}=\frac{2\kappa P_{L,R}}{R(P_{L}+P_{R})}.$ (11)

Using these stationary values for $n_{L,R}$ and $|\psi_{L,R}|^{2}$ in Eqs. (7a) and (7b) in the steady state, we obtain

$\textrm{Im}(\Theta)=\frac{\kappa\Gamma(P_{L}-P_{R})}{2RJ(P_{L}+P_{R})}-\frac{P_{L}-P_{R}}{4J},$ (12)

and from $|\psi_{L}\psi_{R}^{*}|^{2}=[\textrm{Re}(\psi_{L}\psi_{R}^{*})]^{2}+[\textrm{Im}(\psi_{L}\psi_{R}^{*})]^{2}$ we obtain

$\textrm{Re}(\Theta)=\mathcal{D}\sqrt{4J^{2}(P_{L}+P_{R})^{2}-\kappa^{2}(P_{L}-P_{R})^{2}},$ (13)

where $\mathcal{D}=\frac{P_{L}+P_{R}-2\kappa\Gamma/R}{4J\gamma(P_{L}+P_{R})}$. These results are verified by the numerical simulations illustrated in Fig. 2 and hold equally for any value of the nonlinearity strength $\eta$.
## IV Phase fluctuations

As mentioned above, the coherence of the polariton condensate will decay due to the phase fluctuations that are always present in realistic quantum systems. We therefore incorporate phase noise in our numerical calculations and investigate how it modifies the dynamics of the polaritons and the coherence. We model the phase fluctuations as a standard Wiener process for stochastic differential equations. Thus, the single-particle energies $\epsilon_{L,R}$ in Eqs. (1) become Gaussian stochastic variables with mean $\braket{\epsilon_{L,R}}=0$ and variance $\sigma^{2}=2\xi/\delta t$, where $\xi$ is the decoherence rate and $\delta t$ is the time step for picking a new random energy.

In Fig. 3 we show the results of our numerical simulations, as obtained upon the ensemble average over $N=1000$ independent realizations of the system dynamics. We compute the first-order correlation functions $g^{(1)}(t)$, which quantify the coherence of the polaritonic fields, via

$g^{(1)}(t)=\frac{\langle\psi(t_{0})\psi^{*}(t)\rangle}{\sqrt{\langle|\psi(t_{0})|^{2}\rangle\langle|\psi(t)|^{2}\rangle}},$ (14)

where $\psi=\psi_{L}$ or $\psi_{R}$, and $\braket{\ldots}$ denotes the ensemble average. Below the pumping threshold, the polariton fields decay with rate $\gamma_{L}+\gamma_{R}<0$, but the phase fluctuations with rate $\xi$ cause an even faster decay of the coherence, $g^{(1)}(t)\propto e^{-(\xi-|\gamma_{L}+\gamma_{R}|/2)t}$, as seen in Fig. 3(a). For pumping above threshold, the phase noise causes exponential decay of the correlation function, $g^{(1)}(t)\propto e^{-\xi t}$, while the system approaches the steady state independently of the initial value of the coherence $\textrm{Re}\Theta(0)$, as seen in Fig. 3(b). Including also the nonlinear interaction $\eta\neq 0$ further accelerates the decay of the correlation function, and the system approaches the steady state even faster.

We finally note that for an ergodic system the ensemble-averaged and time-averaged correlation functions are equivalent. To verify whether our polariton system is ergodic, we also compute the field correlation function

$g^{(1)}(t)=\frac{\int_{t_{\textrm{i}}}^{t_{\textrm{f}}}d\tau\,\psi(\tau)\psi^{*}(\tau+t)}{\sqrt{\int_{t_{\textrm{i}}}^{t_{\textrm{f}}}d\tau|\psi(\tau)|^{2}\int_{t_{\textrm{i}}}^{t_{\textrm{f}}}d\tau|\psi(\tau+t)|^{2}}}$ (15)

resulting from a single, long-time trajectory with $t_{\textrm{f}}-t_{\textrm{i}}=3000/J$. As seen in Fig. 3 (lower panels), the computed ensemble-averaged and time-averaged correlation functions coincide to a very good approximation, attesting to the ergodicity of our system.
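The exponential coherence decay caused by this noise model can be illustrated in isolation with a few lines of code. The sketch below (ours, not the simulation code of this work) propagates pure-dephasing trajectories $\psi(t)=e^{-i\phi(t)}$ driven by the stochastic energies described above and recovers $|g^{(1)}(t)|\simeq e^{-\xi t}$; the ensemble size and time step are assumptions, while $\xi=0.05J$ matches Fig. 3.

```python
import numpy as np

rng = np.random.default_rng(1)
xi, dt, n_steps, n_traj = 0.05, 0.01, 4000, 1000   # units of J = 1

# Piecewise-constant stochastic energies with zero mean and variance 2*xi/dt.
eps = rng.normal(0.0, np.sqrt(2.0 * xi / dt), size=(n_traj, n_steps))
phase = np.cumsum(eps * dt, axis=1)    # accumulated random phase phi(t)
psi = np.exp(-1j * phase)              # pure-dephasing trajectories, |psi| = 1

# Ensemble-averaged g1(t) = <psi(t0) psi*(t)> with t0 = 0, cf. Eq. (14).
g1 = np.mean(np.conj(psi), axis=0)
t = np.arange(1, n_steps + 1) * dt
# Deviation from e^{-xi t} is statistical, of order 1/sqrt(n_traj).
print(np.max(np.abs(np.abs(g1) - np.exp(-xi * t))))
```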
## V Conclusions

To summarize, we have studied an exciton-polariton system in a double-well potential, taking into account the dynamics of the reservoir excitons and the polaritons. We have found that, for pumping of the excitons above the total threshold value for the formation of the polariton condensate, the exciton populations attain the values that satisfy the PT-symmetry condition for the polariton condensate, independently of the pumping rates of the individual wells. Employing the population-coherence equations, we interpreted the corresponding dynamics and revealed the stable fixed point, or steady state, that the system approaches. To make our analysis experimentally relevant, we have also taken into account the phase fluctuations present in any realistic system, and computed the first-order correlation functions for the polariton fields, which revealed the coherence decay with the corresponding rate.

We note that our results apply to moderate nonlinear interaction strengths and small differences in the pumping rates of the two wells. For a large difference in the pumping rates, the strong nonlinear shift of the polariton condensate energy may lead to self-trapping and break-up of the PT symmetry [25].

## VI Acknowledgments

We thank P.G. Savvidis, H. Ohadi, and A.F. Tzortzakakis for fruitful discussions. This work was co-financed by Greece (General Secretariat for Research and Technology) and the European Union (European Regional Development Fund), in the framework of the bilateral Greek-Russian Science and Technology collaboration on Quantum Technologies (POLISIMULATOR project).

## Appendix A Polariton condensate in a PT-symmetric double well

Consider a polariton condensate in a double-well potential described by the coupled-mode equations

$i\partial_{t}\psi_{L}=(\epsilon_{L}+i\gamma_{L})\psi_{L}+\eta|\psi_{L}|^{2}\psi_{L}-J\,\psi_{R},$ (16a)

$i\partial_{t}\psi_{R}=(\epsilon_{R}+i\gamma_{R})\psi_{R}+\eta|\psi_{R}|^{2}\psi_{R}-J^{*}\psi_{L},$ (16b)

where $\epsilon_{L,R}$ are the single-particle energies, $\gamma_{L,R}$ are the incoherent loss ($\gamma<0$) or gain ($\gamma>0$) rates at each well, $\eta$ is the nonlinear interaction strength, and $J$ is the Josephson coupling between the wells.

Figure 4: Imaginary part of the eigenvalues $\lambda_{\pm}$ in Eq. (18). For $\gamma<J(=1)$ we have a PT-symmetric phase with real eigenvalues. Above the bifurcation point at $\gamma=J$, the system enters the PT-broken phase with imaginary eigenvalues.

If we set $\epsilon_{L,R}=0$, assume negligibly weak nonlinearity, $\eta|\psi|^{2}\ll J$, and set $\gamma_{L}=-\gamma_{R}=\gamma$ so that the loss at the right well is exactly compensated by the gain at the left well, we obtain a PT-symmetric Hamiltonian matrix corresponding to Eqs. (16) [23]:

$\mathcal{H}=\begin{pmatrix}i\gamma&-J\\ -J^{*}&-i\gamma\end{pmatrix}.$ (17)

Its eigenvalues and the corresponding eigenvectors are given by

$\lambda_{\pm}=\pm\sqrt{|J|^{2}-\gamma^{2}}$ (18)

and

$|\pm\rangle=\left[\left(\sqrt{|J|^{2}-\gamma^{2}}\pm i\gamma\right)|L\rangle\mp J^{*}|R\rangle\right]/N_{\pm},$ (19)

with $N_{\pm}$ the normalization factors. For $\gamma<|J|$, the eigenvalue spectrum is real and the dynamics is Hermitian-like. For $\gamma>|J|$, the eigenvalues become imaginary and the system enters the PT-broken phase. The case $|J|=\gamma$ corresponds to the exceptional point of the system, where the eigenvalues become degenerate and the eigenstates coalesce. Figure 4 illustrates the dependence of the imaginary part of the eigenvalues on the loss/gain parameter $\gamma$.

## References

* [1] H. Deng, H. Haug, and Y. Yamamoto, Rev. Mod. Phys. 82, 1489 (2010).
* [2] I. Carusotto and C. Ciuti, Rev. Mod. Phys. 85, 299 (2013).
* [3] K. G. Lagoudakis, M. Wouters, M. Richard, A. Baas, I. Carusotto, R. André, Le Si Dang, and B. Deveaud-Pledran, Nature Phys. 4, 706 (2008).
* [4] M. Yamaguchi, K. Kamide, R. Nii, T. Ogawa, and Y. Yamamoto, Phys. Rev. Lett. 111, 026404 (2013).
* [5] C. Schneider, A. Rahimi-Iman, N. Y. Kim, J. Fischer, I. G. Savenko, M. Amthor, M. Lermer, A. Wolf, L. Worschech, V. D.
Kulakovskii, I. A. Shelykh, M. Kamp, S. Reitzenstein, A. Forchel, Y. Yamamoto, and S. Höfling, Nature 497, 348 (2013).
* [6] V. Goblot, H. S. Nguyen, I. Carusotto, E. Galopin, A. Lemaître, I. Sagnes, A. Amo, and J. Bloch, Phys. Rev. Lett. 117, 217401 (2016).
* [7] J. Kasprzak, M. Richard, S. Kundermann, A. Baas, P. Jeambrun, J. Keeling, F. M. Marchetti, M. H. Szymanska, R. André, J. L. Staehli, V. Savona, P. B. Littlewood, B. Deveaud, and L. S. Dang, Nature 443, 409 (2006).
* [8] R. Balili, V. Hartwell, D. Snoke, L. Pfeiffer, and K. West, Science 316, 1007 (2007).
* [9] A. Amo, J. Lefrère, S. Pigeon, C. Adrados, C. Ciuti, I. Carusotto, R. Houdré, E. Giacobino, and A. Bramati, Nat. Phys. 5, 805 (2009).
* [10] L. Dominici, R. Carretero-González, J. Cuevas-Maraver, A. Gianfrate, A. S. Rodrigues, D. J. Frantzeskakis, P. G. Kevrekidis, G. Lerario, D. Ballarini, M. De Giorgi, G. Gigli, and D. Sanvitto, Nature Comm. 9, 1467 (2018).
* [11] R. Carretero-González, J. Cuevas-Maraver, D. J. Frantzeskakis, T. P. Horikis, P. G. Kevrekidis, and A. S. Rodrigues, Phys. Lett. A 381, 3805 (2017).
* [12] H. Ohadi, A. J. Ramsay, H. Sigurdsson, Y. del Valle-Inclan Redondo, S. I. Tsintzos, Z. Hatzopoulos, T. C. H. Liew, I. A. Shelykh, Y. G. Rubo, P. G. Savvidis, and J. J. Baumberg, Phys. Rev. Lett. 119, 067401 (2017).
* [13] H. Ohadi, Y. del Valle-Inclan Redondo, A. J. Ramsay, Z. Hatzopoulos, T. C. H. Liew, P. R. Eastham, P. G. Savvidis, and J. J. Baumberg, Phys. Rev. B 97, 195109 (2018).
* [14] K. Orfanakis, A. F. Tzortzakakis, D. Petrosyan, P. G. Savvidis, and H. Ohadi, Phys. Rev. B (2021).
* [15] N. G. Berloff, M. Silva, K. Kalinin, A. Askitopoulos, J. D. Töpfer, P. Cilibrizzi, W. Langbein, and P. G. Lagoudakis, Nature Mat. 16, 1120 (2017).
* [16] C. M. Bender, D. C. Brody, and H. F. Jones, Phys. Rev. Lett. 89, 270401 (2002).
* [17] C. M. Bender, D. C. Brody, and H. F. Jones, Phys. Rev. D 70, 025001 (2004).
* [18] R. El-Ganainy, K. G. Makris, D. N. Christodoulides, and Z. H. Musslimani, Opt. Lett. 32, 2632 (2007).
* [19] R. Fleury, D. Sounas, and A. Alù, Nat. Comm. 6, 5905 (2015).
* [20] I. Y. Chestnov, S. S. Demirchyan, A. P. Alodjants, Y. G. Rubo, and A. V. Kavokin, Sci. Rep. 6, 19551 (2016).
* [21] J.-Y. Lien, Y.-N. Chen, N. Ishida, H.-B. Chen, C.-C. Hwang, and F. Nori, Phys. Rev. B 91, 024511 (2015).
* [22] J. B. Khurgin, Optica 7(8), 1015 (2020).
* [23] P. A. Kalozoumis, G. M. Nikolopoulos, and D. Petrosyan, EPL 129, 37003 (2020).
* [24] M. Wouters and I. Carusotto, Phys. Rev. Lett. 99, 140402 (2007).
* [25] A. A. Sukhorukov, Z. Xu, and Y. S. Kivshar, Phys. Rev. A 82, 043818 (2010).
* Hinton et al. (2009) Hinton, J. A., Skilton, J. L., Funk, S., et al. 2009, ApJ, 690, L101, doi: 10.1088/0004-637X/690/2/L101
* Johnston et al. (1992) Johnston, S., Manchester, R. N., Lyne, A. G., et al. 1992, ApJ, 387, L37, doi: 10.1086/186300
* Kambe et al. (2013) Kambe, E., Yoshida, M., Izumiura, H., et al. 2013, PASJ, 65, 15, doi: 10.1093/pasj/65.1.15
* Khangulyan et al. (2008) Khangulyan, D., Aharonian, F., & Bosch-Ramon, V. 2008, MNRAS, 383, 467, doi: 10.1111/j.1365-2966.2007.12572.x
* Khangulyan et al. (2011) Khangulyan, D., Aharonian, F. A., Bogovalov, S. V., & Ribó, M. 2011, ApJ, 742, 98, doi: 10.1088/0004-637X/742/2/98
* Khangulyan et al. (2007) Khangulyan, D., Hnatic, S., Aharonian, F., & Bogovalov, S. 2007, MNRAS, 380, 320, doi: 10.1111/j.1365-2966.2007.12075.x
* Kirk et al. (1999) Kirk, J. G., Ball, L., & Skjæraasen, O. 1999, Astroparticle Physics, 10, 31, doi: 10.1016/S0927-6505(98)00041-3
* Krause et al. (2017) Krause, M., Pueschel, E., & Maier, G. 2017, Astroparticle Physics, 89, 1
* Kravtsov et al. (2020) Kravtsov, V., Berdyugin, A. V., Piirola, V., et al. 2020, A&A, 643, A170, doi: 10.1051/0004-6361/202038745
* Li et al. (2017) Li, J., Torres, D. F., Cheng, K.-S., et al. 2017, ApJ, 846, 169, doi: 10.3847/1538-4357/aa7ff7
* Li et al. (2011) Li, J., Torres, D. F., Zhang, S., et al. 2011, ApJ, 744, L13, doi: 10.1088/2041-8205/744/1/l13
* Li & Ma (1983) Li, T.-P., & Ma, Y.-Q. 1983, ApJ, 272, 317, doi: 10.1086/161295
* Lomb (1976) Lomb, N. R. 1976, Ap&SS, 39, 447, doi: 10.1007/BF00648343
* Maier & Holder (2017) Maier, G., & Holder, J. 2017, International Cosmic Ray Conference, 35, 747. https://arxiv.org/abs/1708.04048
* Maier & VERITAS Collaboration (2017) Maier, G., & VERITAS Collaboration. 2017, International Cosmic Ray Conference, 35, 729. https://arxiv.org/abs/1708.04045
* Malyshev et al. (2019) Malyshev, D., Chernyakova, M., Santangelo, A., & Pühlhofer, G. 2019, Astronomische Nachrichten, 340, 465, doi: 10.1002/asna.201913605
* Manset & Donati (2003) Manset, N., & Donati, J.-F. 2003, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4843, ESPaDOnS: an echelle spectro-polarimetric device for the observation of stars, ed. S. Fineschi, 425–436, doi: 10.1117/12.458230
* Martí-Devesa & Reimer (2020) Martí-Devesa, G., & Reimer, O. 2020, A&A, 637, A23, doi: 10.1051/0004-6361/202037442
* Massi et al. (2001) Massi, M., Ribó, M., Paredes, J. M., Peracaula, M., & Estalella, R. 2001, A&A, 376, 217, doi: 10.1051/0004-6361:20010953
* Mochol & Kirk (2014) Mochol, I., & Kirk, J. G. 2014, Astronomische Nachrichten, 335, 256, doi: 10.1002/asna.201312028
* Moldón et al. (2011a) Moldón, J., Johnston, S., Ribó, M., Paredes, J. M., & Deller, A. T. 2011a, ApJ, 732, L10, doi: 10.1088/2041-8205/732/1/L10
* Moldón et al. (2011b) Moldón, J., Ribó, M., & Paredes, J. M. 2011b, A&A, 533, L7, doi: 10.1051/0004-6361/201117764
* Moldón et al. (2011) Moldón, J., Ribó, M., & Paredes, J. M. 2011, Astrophysics and Space Science Proceedings, 21, 1. https://arxiv.org/abs/1101.4153
* Morgan et al. (1955) Morgan, W. W., Code, A. D., & Whitford, A. E. 1955, ApJS, 2, 41, doi: 10.1086/190016
* Moritani et al. (2018) Moritani, Y., Kawano, T., Chimasu, S., et al. 2018, PASJ, 70, 61, doi: 10.1093/pasj/psy053
* Moritani et al. (2015) Moritani, Y., Okazaki, A. T., Carciofi, A. C., et al. 2015, ApJ, 804, L32, doi: 10.1088/2041-8205/804/2/L32
* Nievas & VERITAS Collaboration (2021) Nievas, M., & VERITAS Collaboration. 2021, International Cosmic Ray Conference
* Papitto et al. (2012) Papitto, A., Torres, D. F., & Rea, N. 2012, ApJ, 756, 188, doi: 10.1088/0004-637X/756/2/188
* Paredes-Fortuny et al. (2015a) Paredes-Fortuny, X., Bosch-Ramon, V., Perucho, M., & Ribó, M. 2015a, A&A, 574, A77, doi: 10.1051/0004-6361/201424672
* Paredes-Fortuny et al. (2015b) Paredes-Fortuny, X., Ribó, M., Bosch-Ramon, V., et al. 2015b, A&A, 575, L6, doi: 10.1051/0004-6361/201425361
* Park & VERITAS Collaboration (2015) Park, N., & VERITAS Collaboration. 2015, in International Cosmic Ray Conference, Vol. 34, 34th International Cosmic Ray Conference (ICRC2015), 771. https://arxiv.org/abs/1508.07070
* Parsons & Hinton (2014) Parsons, R. D., & Hinton, J. A. 2014, Astroparticle Physics, 56, 26, doi: 10.1016/j.astropartphys.2014.03.002
* Rea & Torres (2011) Rea, N., & Torres, D. F. 2011, ApJ, 737, L12, doi: 10.1088/2041-8205/737/1/L12
* Robertson et al. (2015) Robertson, D. R. S., Gallo, L. C., Zoghbi, A., & Fabian, A. C. 2015, MNRAS, 453, 3455, doi: 10.1093/mnras/stv1575
* SALT Transient Program (2020) SALT Transient Program. 2020, personal communication (L. Townsend)
* Scargle (1982) Scargle, J. D. 1982, ApJ, 263, 835, doi: 10.1086/160554
* Shafter et al. (1986) Shafter, A. W., Szkody, P., & Thorstensen, J. R. 1986, ApJ, 308, 765, doi: 10.1086/164549
* Skilton et al. (2009) Skilton, J. L., Pandey-Pommier, M., Hinton, J. A., et al. 2009, MNRAS, 399, 317, doi: 10.1111/j.1365-2966.2009.15272.x
* Stellingwerf (1978) Stellingwerf, R. F. 1978, ApJ, 224, 953, doi: 10.1086/156444
* Stoyanov et al. (2018) Stoyanov, K. A., Zamanov, R. K., & Iliev, I. K. 2018, The Astronomer’s Telegram, 11233, 1
* Sushch & van Soelen (2017) Sushch, I., & van Soelen, B. 2017, ApJ, 837, 175, doi: 10.3847/1538-4357/aa62ff
* Timmer & Koenig (1995) Timmer, J., & Koenig, M. 1995, A&A, 300, 707
* Tody (1986) Tody, D. 1986, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 627, The IRAF Data Reduction and Analysis System, ed. D. L. Crawford, 733, doi: 10.1117/12.968154
* Tokayer et al. (2021) Tokayer, Y., An, H., Halpern, J., et al. 2021, Contemporaneous Multi-Wavelength Campaign to Study HESS J0632+057’s Distinctive Light Curve, to be published in ApJ
* Torres et al. (2012) Torres, D. F., Rea, N., Esposito, P., et al. 2012, ApJ, 744, 106, doi: 10.1088/0004-637X/744/2/106
* Uttley et al. (2003) Uttley, P., Edelson, R., McHardy, I. M., Peterson, B. M., & Markowitz, A. 2003, ApJ, 584, L53, doi: 10.1086/373887
* Volkov et al. (2021) Volkov, I., Kargaltsev, O., Younes, G., Hare, J., & Pavlov, G. 2021, arXiv e-prints, arXiv:2103.04403. https://arxiv.org/abs/2103.04403
* Weekes et al. (2002) Weekes, T. C., Badran, H., Biller, S. D., et al. 2002, Astroparticle Physics, 17, 221, doi: 10.1016/S0927-6505(01)00152-9
* Weng et al. (2021) Weng, S.-S., Pan, Z., Qian, L., et al. 2021, The Astronomer’s Telegram, 14297, 1
* Yoneda et al. (2020) Yoneda, H., Makishima, K., Enoto, T., et al. 2020, Phys. Rev. Lett., 125, 111103, doi: 10.1103/PhysRevLett.125.111103
* Yudin et al. (2017) Yudin, R. V., Potter, S. B., & Townsend, L. J. 2017, MNRAS, 464, 4325, doi: 10.1093/mnras/stw2676
* Zamanov et al. (2021) Zamanov, R. K., Stoyanov, K. A., Martí, J., Marchev, V. D., & Nikolov, Y. M. 2021, Astronomische Nachrichten, 342, 531, doi: 10.1002/asna.202123856
* Zamanov et al. (2016) Zamanov, R. K., Stoyanov, K. A., Martí, J., et al. 2016, A&A, 593, A97, doi: 10.1051/0004-6361/201628735
* Zanin et al. (2013) Zanin, R., Carmona, E., J., S., et al. 2013, in Proc. of the 33rd International Cosmic Ray Conference (ICRC), 773, doi: http://inspirehep.net/record/1412925/files/icrc2013-0773.pdf
## Appendix A Orbital period determination

The literature provides a variety of methods for the periodicity analysis of sparse astronomical data (e.g., Graham et al., 2013). The performance of the different techniques depends on the quality, coverage, and shape of the given light curves, and there is no clear guidance on which technique is best for a given data set. For these reasons, the following methods are evaluated using Monte Carlo-generated light curves: discrete correlation functions (DCF, Edelson & Krolik, 1988; Robertson et al., 2015; the implementation in the python package pyDCF, https://github.com/astronomerdamo/pydcf, is used), correlation analysis comparing the light curves with a binned-average light curve (PCC, Malyshev et al., 2019), Lomb-Scargle periodograms (Lomb, 1976; Scargle, 1982; the implementation in astropy is used, https://docs.astropy.org/en/stable/timeseries/lombscargle.html), and the phase dispersion minimisation method (PDM, Stellingwerf, 1978; the implementation in the python package PyAstronomy, https://github.com/sczesla/PyAstronomy, is used).

Figure 11: Outcome of the analysis of 1000 Monte Carlo-generated light curves using the following data sets as templates for the toy MC. Top: _Swift_-XRT; Bottom: VERITAS and H.E.S.S. In all cases a true orbital period of 321 days is assumed. Different techniques for periodicity analysis, as indicated in the figure panels, have been applied to the same MC-generated light curves. Results are given both as the mean and as the 68% fiducial interval for all methods.

The Monte Carlo-generated light curves are based on the phase-binned average profiles of the observed fluxes in X- or gamma rays and are generated assuming the same distribution in time as the measurements, with flux values randomly altered according to the corresponding uncertainties. As the phase-binned average profile removes variability which is not due to the statistical uncertainties of the measurements, additional power-law noise following a $(1/f)^{1.2}$ spectrum is added (Timmer & Koenig, 1995).

Figure 12: Results of the periodicity analysis of _Swift_-XRT (a-d) and gamma-ray (e,f) measurements applying different methods: a) phase dispersion (PDM; _Swift_-XRT); b) discrete correlation coefficient (DCF; _Swift_-XRT); c) Pearson correlation coefficient (PCC; _Swift_-XRT); d) search for periods other than the orbital period in the range of 50 to 1000 days, Pearson correlation coefficient (PCC; _Swift_-XRT); the red-dashed line indicates the 99% containment level obtained from Monte Carlo sets of the XRT light curve (to take spectral leakage into account, an orbital periodicity of 317.3 days is assumed in the Monte Carlo sets); e) phase dispersion (PDM; gamma-ray energies); f) Pearson correlation coefficient (PCC; gamma-ray energies). All coefficients are plotted as a function of the orbital period. The green shaded areas indicate the 68% fiducial interval obtained by the analysis of MC-generated light curves (see text for details).

Figure 11 shows the results of the different techniques applied to 1000 Monte Carlo-generated light curves based on the _Swift_-XRT and gamma-ray data sets, assuming a true orbital period of 321 days. The search region is restricted to a $\pm 30$ day interval around the true orbital period.
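As a minimal illustration of such a toy MC check (our sketch, not the analysis code of this work), the snippet below injects a 321-day modulation into an irregularly sampled light curve with Gaussian flux errors and recovers the period with astropy's Lomb-Scargle implementation. The sampling pattern, the sinusoidal flux model, and the noise level are assumptions; the real, non-sinusoidal profiles additionally require the $(1/f)^{1.2}$ noise described above.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
true_period = 321.0  # days, as in the toy MC above

# Irregular sampling over ~15 years, mimicking sparse monitoring data.
t = np.sort(rng.uniform(0.0, 15 * 365.25, size=250))
flux_err = np.full_like(t, 0.2)
flux = 1.0 + np.sin(2.0 * np.pi * t / true_period) + rng.normal(0.0, flux_err)

# Search a +/-30 day window around the true period, as described above.
periods = np.linspace(true_period - 30.0, true_period + 30.0, 2000)
power = LombScargle(t, flux, flux_err).power(1.0 / periods)
print("recovered period: %.1f days" % periods[np.argmax(power)])
```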
This $\pm 30$-day prior is necessary, as the PDM method tends, for a significant fraction ($>20$%) of the light curves, to reconstruct an orbital period of roughly half the true orbital period. Applying the Lomb-Scargle method to the generated light curves leads to inconsistent results, as the largest peak in the periodograms of most light curves is at half of the true period. This result is also obtained when applying this method to the _Swift_-XRT data. Folding the light curve with such short periods leads to inconsistent solutions, as both the PDM and Lomb-Scargle methods ignore its non-sinusoidal shape and, in particular, the very low fluxes, or even non-detections, of the system visible at orbital phases of $\approx 0.4$ (assuming an orbital period of 317.3 days). All of the shown methods are able to reconstruct, for the majority of the MC light curves, the true orbital period with an uncertainty of less than 2 days for the _Swift_-XRT and gamma-ray data sets. The DCF method does not provide reliable estimates when applied to the gamma-ray measurements or to the toy MC based on the gamma-ray data. This is probably due to the sparsity of the gamma-ray data set and the larger uncertainties of the flux measurements.

Statistical uncertainties for the given data sets are derived from the 68% fiducial intervals of the corresponding toy MC analysis. Systematic uncertainties are derived from the largest difference to the expected values (0.6 days) and from the impact of the choice of the bin width on the calculation of the averaged light curves. Bin widths from 0.025 to 0.1 in orbital phase are tested, and the largest difference is used to estimate the contribution to the systematic uncertainties. The total systematic uncertainty for the orbital period determination is estimated to be $\delta_{sys}^{Swift\ \mathrm{XRT}}=1.5$ days and $\delta_{sys}^{\mathrm{Gamma}}=2.5$ days.

Confidence limits on the correlation coefficients are obtained in a similar fashion: 1000 toy MC light curves with a data structure similar to that obtained by observations are used to calculate the 95% and 99% quantiles. Two null hypotheses are distinguished: a constant flux is assumed for the determination of the statistical significance of the correlation coefficients for the orbital period analysis (see Figure 12, c)). For the search for modulation periods other than the orbital period, toy MC light curves as described above, including the average observed orbital modulation pattern, are generated and analysed with the different period determination methods (see Figure 12, d)).

Figure 12 shows the results of applying all three techniques to the _Swift_-XRT and gamma-ray light curve measurements (see Figure 2). A summary of all obtained orbital periods, together with those reported in the literature, is given in Table 3.

## Appendix B Impact of orbital period uncertainty on phase-folded light curves

Uncertainties in the orbital period determination might lead to significant differences in the shape of the phase-folded light curves, given the long total observation time of $\approx$15 years for the gamma-ray and $\approx$10 years for the X-ray measurements. To test this, four different orbital periods are assumed: $P_{+}=319.5$ days, $P_{-}=315.1$ days, $P_{M}=313$ days, and $P_{H\alpha}=308$ days. $P_{+}$ and $P_{-}$ correspond to a change of the orbital period by the 1$\sigma$ statistical error plus the systematic uncertainty; $P_{M}$ and $P_{H\alpha}$ correspond to the solutions presented in Moritani et al. (2018).
Figures 13 and 14 show the impact of the variation of the assumed orbital period: the three most prominent features, a peak around phase 0.3, a minimum around phase 0.4, and a second maximum region around phase 0.6, are clearly visible in all cases. This shows that the choice of $P_{\mathrm{orbit}}=317.3$ days does not influence the discussion of the physical properties presented in this work, except for the shortest period of 308 days. It should be noted that assuming such a short orbital period, as derived from optical H$\alpha$ observations (Moritani et al., 2018), would change this picture significantly, as most of the discussed features disappear (Figures 13 (d) and 14 (d)).

Figure 13: X-ray (0.3–10 keV) light curves as a function of orbital phase, assuming the orbital periods indicated below the figures. For further details, see Figure 3 and the text.

Figure 14: Gamma-ray ($>$350 GeV) light curves as a function of orbital phase, assuming the orbital periods indicated below the figures. For further details, see Figure 3 and the text.

## Appendix C Light curves per orbital cycle

Figure 15: X-ray (0.3–10 keV) light curves as a function of orbital phase for each of the observed orbital cycles. Orbits are numbered following the start of the gamma-ray observations (no X-ray observations are available for the first four orbits; empty panels are shown for easier comparison with Figure 16). The MJD given in each panel indicates the start of each orbit. An orbital period of 317.3 days is assumed. Vertical lines show statistical uncertainties; note that these are smaller than the marker size for all instruments but _Swift_-XRT. The thin blue line and gray-shaded band in each canvas indicate the average _Swift_-XRT light curve and its 68% containment region, calculated from all measurements and smoothed by applying a cubic spline fit.

Figure 16: Gamma-ray ($>$350 GeV) light curves as a function of orbital phase for each of the observed orbital cycles (see Figure 15 for further details). The thin blue line and gray-shaded band in each canvas indicate the average gamma-ray light curve and its 68% containment region, calculated from all measurements and smoothed by applying a cubic spline fit.

The gamma-ray observations with H.E.S.S., MAGIC, and VERITAS sum up to a total observation time of $\approx 450$ h, spanning 18 orbits of HESS J0632+057 and covering the period 2004–2019. The X-ray observations by _Swift_-XRT, Chandra, XMM-Newton, Suzaku, and NuSTAR were obtained during 14 orbits of the binary system. Despite this large amount of data, there is no good coverage along most of these orbital cycles, due to observational constraints and the long binary period of 317 days. This is illustrated in Figures 15 and 16, which show the light curves per orbital cycle at both X-ray and gamma-ray energies, assuming an orbital period of 317.3 days.

## Appendix D Contemporaneous spectral energy distributions

Contemporaneous X-ray and gamma-ray spectral energy distributions for 38 periods of VERITAS and _Swift_-XRT observations are available through the Zenodo data repository (https://doi.org/10.5281/zenodo.5157848).
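For completeness, the phase-folding used throughout Appendices B and C reduces to a one-line operation. The sketch below uses a placeholder reference epoch (no epoch is quoted in these appendices, so the value of t0_mjd is purely illustrative):

```python
import numpy as np

def phase_fold(t_mjd, period_days=317.3, t0_mjd=54857.0):
    """Map observation times (MJD) to orbital phase in [0, 1).

    t0_mjd is a placeholder reference epoch: changing it only shifts the
    phase origin, not the shape of the folded light curve.
    """
    return ((np.asarray(t_mjd) - t0_mjd) / period_days) % 1.0
```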
# Sequence-level self-learning with multiple hypotheses

###### Abstract

In this work, we develop new self-learning techniques with an attention-based sequence-to-sequence (seq2seq) model for automatic speech recognition (ASR). For untranscribed speech data, the hypothesis from an ASR system must be used as a label. However, imperfect ASR results make it difficult for unsupervised learning to consistently improve recognition performance, especially when multiple powerful teacher models are unavailable. In contrast to conventional unsupervised learning approaches, we adopt the _multi-task learning_ (MTL) framework, where the $n$-th best ASR hypothesis is used as the label of each task. The seq2seq network is updated through the MTL framework so as to find the common representation that can cover multiple hypotheses. By doing so, the effect of _hard-decision_ errors can be alleviated. We first demonstrate the effectiveness of our self-learning methods through ASR experiments in an accent adaptation task between US and British English speech. Our experimental results show that our method can reduce the WER on the British speech data from 14.55% to 10.36% compared to the baseline model trained with the US English data only. Moreover, we investigate the effect of our proposed methods in a federated learning scenario.

Index Terms: unsupervised learning, self-learning, encoder-decoder, multi-task learning, end-to-end acoustic modeling

## 1 Introduction

Unsupervised learning of an acoustic model (AM) has a long history, from Bayesian models to deep neural network (DNN) systems, in the field of automatic speech recognition (ASR) [1, 2, 3, 4, 5]. It is typically done by transferring knowledge from stronger teacher model(s) to the student model [6, 7, 8, 9, 10, 11, 12] or by adapting a seed model pre-trained with a sufficient amount of labeled data [1, 2, 3, 13, 14, 15]. Because it does not require heavy decoding passes with teacher models, the latter self-training approach is perhaps preferable in many situations, such as on-device personalization and federated learning scenarios [16, 17, 18, 19]. In this work, we thus address the self-training task for the AM without an additional, bigger teacher model and without parallel adaptation data. (The definitions of unsupervised learning and self-learning vary across applications and fields. For presentation purposes, we will refer to any model optimization method without manual transcripts as unsupervised learning, even if the initial seed model is pre-trained with labeled data. We will say self-learning is a special class of unsupervised learning techniques which optimizes the seed model without another teacher model.)

The optimization criteria of unsupervised learning for DNN-based ASR systems fall into two categories: frame-level [8, 10, 11, 12] and sequence-level criteria [7, 15, 9]. It is well known that direct sequence-level optimization [15] and sequence training from a model initialized with the frame-level cross-entropy objective [10, 11] provide better recognition accuracy. Many successful results have been reported by adapting the hybrid model of the DNN and hidden Markov model (HMM) in an unsupervised way. In hybrid DNN-HMM systems, the acoustic, pronunciation, and language models are trained separately, each with a different objective. While such an approach can use each type of training data efficiently, this disjoint modeling method leads to a sub-optimal solution, especially in terms of model size.
This issue has been addressed by designing _end-to-end_ networks, which directly generate a word or character sequence from a sequence of speech features. Two popular approaches in this area are the recurrent NN transducer (RNN-T) [20] and the attention-based encoder-decoder (AED) network [21, 22, 23]. Unlike connectionist temporal classification (CTC) [24], neither method makes the unreasonable assumption for ASR that the label outputs are conditionally independent of each other. It is shown in [25] that the AED has the potential to outperform the RNN-T, arguably because of a more flexible assumption on the alignment between input and output, given a large number of training samples [26, 27]. For these reasons, we focus on the development of unsupervised learning methods for the AED network.

For that, knowledge distillation (KD) [28, 29, 9] is a promising approach. In particular, sequence-level KD [29, 9] is more suitable for the AED network to improve modeling capability for sequential input and output. Kim and Rush indeed showed in [29] that the sequence-level KD technique can achieve better machine translation performance with lower complexity. Sequence-level KD has also been applied to ASR tasks: model compression [9] and domain adaptation with a slight modification of the information injected into the student [30]. Notice that the unsupervised learning method proposed in [30] requires parallel data for the teacher and student models, a pair of clean and noisy data. Again, the use of bigger teachers or additional parallel adaptation data is not possible in some application scenarios, such as federated learning [18].

In contrast to prior work on unsupervised learning for the AED [9, 30], we develop new self-learning techniques that require neither gigantic teacher models nor parallel data. In such a scenario, we must use the ASR output of the seed model as the target labels. The use of inaccurate ASR results as the ground truth may degrade recognition accuracy after adaptation. In order to alleviate this noisy transcript issue, we use the $N$-best hypotheses from the ASR output for adaptation. More specifically, we optimize the AED with multiple objective functions associated with the $N$-best hypotheses. We implement the optimization algorithm with the multi-label learning (MLL) or multi-task learning (MTL) framework [31, 32]. The techniques proposed here are evaluated on two kinds of ASR tasks: an accent adaptation task and a federated learning scenario.

The rest of the paper is organized as follows. Section 2 briefly reviews the sequence-to-sequence model with the AED and its optimization criteria in supervised and unsupervised settings. Section 3 describes our unsupervised learning algorithms for the attention-based encoder-decoder network. Section 4 describes the ASR experiments on two kinds of tasks: accent adaptation and federated transfer learning. Section 5 concludes this work.

## 2 Background

### 2.1 Attention-based encoder-decoder (AED)

Let $\mathbf{x}=[x_{1},...,x_{K}]$ and $\mathbf{y}=[y_{1},...,y_{M}]$ be the feature input and target word embeddings, with $K$ and $M$ being the source and target lengths, respectively. ASR involves finding the most probable target sequence given the feature input:

$\hat{\mathbf{y}}=\mathop{\rm argmax}_{\mathbf{y}\in\mathcal{Y}}\log p(\mathbf{y}|\mathbf{x})$ (1)

where $\mathcal{Y}$ is the set of all possible sequences.
As shown in Figure 1, $p(\mathbf{y}|\mathbf{x})$ is modeled with an encoder, a decoder, and an attention component. The encoder network reads the feature sequence, and the decoder network produces a distribution over the target words, one word embedding at a time, given the source [23, 26]. We employ the attention architecture from [21], replacing the scoring function with the ReLU unit.

Figure 1: Attention-based encoder-decoder (AED) network

### 2.2 Supervised learning

We briefly review supervised learning for the sequence-to-sequence model before formulating the unsupervised learning problem. First, consider the sequence-level distribution specified by the model parameters $\theta$ over all possible sequences $\mathbf{y}\in\mathcal{Y}$ in the log domain,

$\log p(\mathbf{y}|\mathbf{x})=\sum_{i=1}^{M}\log p(y_{i}|\mathbf{x},\mathbf{y}_{<i};\theta)$ (2)

where $\mathbf{y}_{<i}$ is either the ground truth of the previous characters or sampled from the model at a certain sampling rate in order to make the model robust against prediction errors during inference [23]. The sampling rate can be constant [23] or scheduled [33]. For the experiments, we use the scheduled sampling method. Sequence-level training then involves estimating $\theta$ to maximize the log-likelihood (2) over all sequences.

### 2.3 Unsupervised learning

The self-learning framework of the attention-based sequence-to-sequence model is very similar to sequence-level knowledge distillation (KD) [29]. The only difference is that in the self-learning scenario there is no better teacher model that transfers knowledge to the seed model [34]. In such a scenario, the seed model must adapt itself to new data without knowledge distillation from a more powerful teacher model; the sequence distribution of target words can be very noisy. Erroneous labels can severely degrade the accuracy of the seed model [5, 34]. In this section, we describe how we approximate the objective function in the unsupervised learning scenario.

First, let us consider the case of the semi-supervised setting with sequence-level KD. As shown in [29], the distribution from the data is replaced with a probability distribution derived from another teacher model $q(\mathbf{y}|\mathbf{x})$. The loss function of sequence-level KD can be written as

$\mathcal{L}_{\text{SEQ-KD}}=-\sum_{\mathbf{y}\in\mathcal{Y}}q(\mathbf{y}|\mathbf{x})\log p(\mathbf{y}|\mathbf{x})$ (3)

Calculating (3) is computationally intractable. Kim and Rush showed in [29] that the loss (3) can be well approximated with the best hypothesis from the beam search output of the teacher model in a machine translation task. With the first-best hypothesis $\hat{\mathbf{y}}_{\text{T,1}}$, they approximated (3) as

$\mathcal{L}_{\text{SEQ-KD}}\approx-\log p(\mathbf{y}=\hat{\mathbf{y}}_{\text{T,1}}|\mathbf{x})$ (4)

It is worth noting that the same algorithm was also applied to an ASR model compression task in [9], although the improvement was not as significant as that reported in [29]. Now consider the case of self-learning. Again, we only have the seed model in this case. Thus, we use the best hypothesis of the seed model $\hat{\mathbf{y}}_{\text{S,1}}$ instead of that of the teacher model $\hat{\mathbf{y}}_{\text{T,1}}$.
It is straightforward to express the approximated loss function:

$\mathcal{L}_{\text{SEQ-KD}}\approx-\log p(\mathbf{y}=\hat{\mathbf{y}}_{\text{S,1}}|\mathbf{x})$ (5)

The model can still be improved with self-learning since the seed model learns features from unseen data. However, the seed model needs to provide reasonably good accuracy on the unseen data. Otherwise, the model may degrade due to erroneous label estimates.

## 3 Proposed self-learning method

### 3.1 Loss function approximation with multiple hypotheses

Figure 2: MT network with encoder-decoder shared.
Figure 3: MT network with encoder shared.
Figure 4: Flow chart of unsupervised training with the $N$-best hypotheses

In many cases, the best ASR output contains errors. Thus, approximating (3) with only the best hypothesis may degrade recognition performance after unsupervised training. Instead, we approximate it with the $N$-best hypotheses weighted by their ASR confidence scores. First, let us denote the weight, normalized with the softmax function, for the $n$-th best candidate $\hat{\mathbf{y}}_{\text{S,n}}$ as

$q_{n}(\hat{\mathbf{y}}_{\text{S,n}})=\frac{\exp(s_{n}/T)}{\sum_{i=1}^{N}\exp(s_{i}/T)}$ (6)

where $s_{n}$ is the $n$-th best confidence score and $T$ is a temperature that controls the sharpness of the probability distribution over the classes [28]. We set $T=1$ for the experiments. We then rewrite the loss function (3) as

$\mathcal{L}_{\text{SEQ-KD}}\approx-\sum_{n=1}^{N}q_{n}(\hat{\mathbf{y}}_{\text{S,n}})\log p(\mathbf{y}=\hat{\mathbf{y}}_{\text{S,n}}|\mathbf{x})$ (7)

Minimizing this loss function can be done by creating multiple labels, each associated with a weight, for one utterance. We refer to this method as multi-label learning (MLL) in this paper. The loss function can also be optimized through multi-task learning (MTL) [31, 32], as explained in the next section.

### 3.2 Optimization with multi-task learning

Optimization of the loss function (7) can be done with a multi-task (MT) network where a separate softmax layer is associated with the loss of the $n$-th hypothesis. We consider two network architectures in this work. Figure 2 shows the first architecture proposed here. In the architecture shown in Figure 2, the parameters of the encoder, attention, and decoder networks are shared during the optimization process. Only the output layer is separated for each task associated with a term in (7). The weights of the new branch for the $n$-th best hypothesis are initialized with those of the main task. Figure 3 shows the second architecture investigated in this work. In contrast to the architecture in Figure 2, the second network architecture shares the encoder and attention module only; the set of the decoder and the output layer is separated for each task. After self-training, the encoder is expected to generate a better acoustic embedding that covers the multiple hypotheses. This can be viewed as acoustic feature-space adaptation with the AED network.

Figure 4 shows a flow chart of our unsupervised training scheme with the $N$-best hypotheses. As shown in Figure 4, we first decode the sequence of speech features with beam search in order to obtain the $N$-best hypotheses, $\hat{\mathbf{y}}_{\text{S,1}}$ $\cdots$ $\hat{\mathbf{y}}_{\text{S,N}}$. Then, the seed model is adapted with the $N$ hypotheses. This can be done by either the multi-label learning (MLL) or the MTL framework, as described before.
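To make Eqs. (6) and (7) concrete, here is a minimal sketch of the confidence weighting and the weighted sequence-level loss. The confidence scores and sequence log-likelihoods below are hypothetical stand-ins for the beam-search scores $s_{n}$ and the model's $\log p(\hat{\mathbf{y}}_{\text{S,n}}|\mathbf{x})$.

```python
import numpy as np

# Softmax-normalized confidence weights over the N-best hypotheses (Eq. (6))
# and the resulting weighted sequence-level self-learning loss (Eq. (7)).

def nbest_weights(scores, T=1.0):
    s = np.asarray(scores, dtype=float) / T
    s -= s.max()                         # shift for numerical stability
    w = np.exp(s)
    return w / w.sum()

def seq_kd_loss(seq_log_probs, scores, T=1.0):
    q = nbest_weights(scores, T)
    return -float(np.dot(q, seq_log_probs))

# Hypothetical 4-best beam-search scores and sequence log-likelihoods.
scores = [-1.2, -1.9, -2.4, -3.0]             # s_n
seq_log_probs = [-35.1, -36.8, -40.2, -41.5]  # log p(y_hat_n | x)
print(nbest_weights(scores))                  # weights q_n, summing to 1
print(seq_kd_loss(seq_log_probs, scores))     # scalar loss to minimize
```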
The normalized probability computed with (6) is used for weighting the loss of each utterance in MLL and of each task in MTL. After adapting the model with the $N$-best hypotheses, we train the model with the best hypothesis only. Only the single main-task branch is used in the case of MTL. These steps are repeated until there is no improvement in the character error rate on the validation data. During decoding, we can remove the sub-task branches and keep the main-task branch only.

## 4 ASR experiment

In this section, we describe the ASR experiments comparing the unsupervised learning techniques with the AED.

### 4.1 Accent Adaptation Task

First, we addressed the English accent adaptation task from American English to British English. The data used for the experiments were collected through live traffic and stored at a sampling rate of 16 kHz. For the experiments, we trained the seed model with approximately 75,000 hours of American English speech data. About 1,000 hours of British English speech data were used for adaptation. The adaptation data are treated as untranscribed in the unsupervised learning experiments. The British Cortana corpus was used as a test set. The test data were also collected through live traffic and consist of 14,000 utterances spoken by hundreds of users.

The feature extraction front-end computes 40 log mel-filterbank energy features at a frame rate of 10 milliseconds (msec), concatenates three adjacent feature frames, and then downsamples to a 30 msec frame rate. Each feature was rescaled to have zero mean and unit variance over the training set. The global mean and variance calculated on the training data were applied to the adaptation and evaluation data. In order to augment the adaptation data, we randomly scale the amplitude of a signal, change the speed in the same manner as implemented in the Sox software [35], and apply SpecAugment in the feature domain [36]. The network used here has 6 layers of 1024 bidirectional GRU nodes in the encoder and 2 layers of 1024 unidirectional GRU nodes in the decoder. All dropout types were applied with $0.1$ dropout probability. We used label smoothing and the Adam optimizer in the experiments.

Table 1 shows the word error rates (WERs) obtained with different models: the seed model trained with American English data, and British AMs adapted in supervised and unsupervised learning manners. In the self-learning scenario, we performed four types of sequence-level self-knowledge distillation: conventional training with the 1-best hypothesis (5), multi-label learning (MLL) with the 4-best hypotheses (7), multi-task learning (MTL) with the attention, encoder, and decoder (AED) shared, and MTL with the attention and encoder (AE) shared. For unsupervised MTL, we use the 4-best hypotheses.

It is clear from Table 1 that the WER of the American English model can be reduced from 14.55% to 7.8% by supervised learning with the manual transcripts. It is also clear from Table 1 that the WER of the seed model can be reduced by the self-learning methods, although the improvement is not as significant as that obtained with supervised learning. Table 1 also shows that the use of the 4-best hypotheses provides better recognition accuracy than the 1-best hypothesis only. The WERs of the 1-best and N-best results on the adaptation data obtained with the seed model were 22.6% and 13.7%, respectively. This explains why using multiple hypotheses provides better recognition accuracy.
We did not observe a significant difference among the different self-learning methods with the 4-best hypotheses after decoding the adaptation data twice, but it is worth mentioning that MTL with the shared AE network provided the lowest WER with the first decoding pass.

Adaptation Type | Adaptation Method | WER(%)
---|---|---
Seed model | US baseline model | 14.55
Supervised learning | Training with transcription | 7.8
Self-learning | Training with the 1-best hypo | 12.09
 | MLL with 4-best hypos | 10.66
 | MTL with shared AED | 10.83
 | MTL with shared AE | 10.36

Table 1: Word error rates (WER) on the British Cortana data for each training method

### 4.2 Librispeech Task

We also conducted ASR experiments with the Librispeech (LS) data [37] for this study. For the LS task, we used the same front-end as described in Section 4.1 but a slightly different model configuration. The AED network used here consists of 6 layers of 1024 bidirectional LSTM nodes and 2 layers of 1024 unidirectional LSTM nodes. For the LS task, we used byte-pair encoding (BPE) [38] to create 16,000 subword units. The optimizer settings are the same as described in [18].

The LS corpus contains approximately 1000 hours of read speech for training. We split the whole LS training set into two subsets. A seed model was first trained with the first half of the training set. For adaptation, we further divided the second half into per-speaker datasets. The adaptation dataset contains approximately 1100 speakers. Since the seed model does not see the LS vocabulary and word sequences of the adaptation dataset, the ASR results on the adaptation data will not be perfect; self-learning has to be done with noisy labels. The seed model was then adapted with the speaker-partitioned dataset through our federated learning simulator [18, 19]. Briefly, the federated learning algorithm iteratively performs the following processes on a server and clients (a toy sketch is given below): (1) broadcasting the global (seed) model from the server to the clients, (2) decoding speech with the global model at each client, (3) adapting the global model with each client's data and the ASR hypotheses, (4) encrypting each client's model so as to ensure privacy (we did not use encryption here for the sake of simplicity, although it is straightforward to apply multi-party computation (MPC) secure algorithms or differential privacy techniques such as [17]), (5) aggregating the linearly-weighted models from a pool of clients, and (6) optimizing the global model through the aggregated gradients. These processes are repeated until the loss on the validation data converges. We only decode speech every 512 aggregation steps for unsupervised training. We found that this avoids training divergence due to inaccurate ASR results from an intermediate model trained halfway. Notice that only the adapted model is uploaded to the server, while private data such as speech and transcripts remain secured on the client. The adapted models are sent to the server after processing all the utterances per speaker on each client. This saves a tremendous amount of network bandwidth.

Table 2 shows the WERs on the LS test clean set obtained with different models, in the same way as Table 1. We performed supervised and unsupervised learning algorithms on the seed model with the second half of the training data. The last row in Table 2 shows the upper-bound WER in the case that the model is trained from scratch with the full LS training data.
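As a toy illustration of steps (1)-(6), the following runnable sketch replaces the AED network with a linear least-squares model: the pseudo-labels are synthetic, "adaptation" is a few local gradient steps, and encryption is omitted, so this shows only the broadcast/adapt/weighted-aggregate skeleton rather than the actual system.

```python
import numpy as np

# Toy sketch of the federated self-learning loop: broadcast the global model,
# adapt locally on each client's (pseudo-labeled) data, then aggregate the
# linearly weighted client models on the server.
rng = np.random.default_rng(0)
true_w = rng.normal(size=5)

def make_client(n):
    X = rng.normal(size=(n, 5))
    y = X @ true_w + 0.1 * rng.normal(size=n)    # stand-in for N-best pseudo-labels
    return X, y

clients = [make_client(n) for n in (50, 80, 120)]
global_w = np.zeros(5)

for _ in range(20):                              # aggregation rounds
    local_models, sizes = [], []
    for X, y in clients:
        w = global_w.copy()                      # (1) broadcast the global model
        for _ in range(5):                       # (2)-(3) decode + local adaptation
            w -= 0.1 * (X.T @ (X @ w - y)) / len(y)
        local_models.append(w)                   # (4)-(5) upload the local model only;
        sizes.append(len(y))                     #         speech/labels stay on-device
    weights = np.asarray(sizes, dtype=float) / sum(sizes)
    global_w = sum(wi * w for wi, w in zip(weights, local_models))  # (6) update

print(np.linalg.norm(global_w - true_w))         # converges toward true_w
```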
It is clear from Table 2 that the WER of the seed model can be significantly reduced, from 5.66% to 4.40%, by supervised training with the remaining 50% of the training data. It is also clear from Table 2 that recognition accuracy can still be improved by self-learning with MTL even when the seed model does not know the vocabulary and word sequences of the adaptation data. We observed that the validation loss diverged in MLL, which very likely led to the recognition accuracy degradation. The results also show that convergence is improved by separating more of the layers relevant to the language model, the decoder and output layers, for each hypothesis. This suggests that the $N$-best hypotheses contain little language information shareable over the decoders and output layers. This may be because the language knowledge of the seed model on the adaptation dataset is poor.

In order to check how much the N-best results can potentially help recognition accuracy, we decoded the second half of the LS training data with our LS seed model. The WERs of the 1-best and N-best results on the LS adaptation data were 10.4% and 7.62%. The difference between the 1-best and N-best results was not as significant as in the accent adaptation task. We assume that many word errors in this LS task cannot be recovered by simply increasing the $N$-best hypotheses because the seed model has not observed the language of the adaptation dataset. On the other hand, the seed model used in the British Cortana task should cover the vocabulary and grammar of the adaptation data very well because it is trained with a large amount of American English data containing the Cortana task. Of course, there are some differences in word usage between American and British English. We conclude, however, that short-phrase ASR tasks such as the Cortana task are probably not affected significantly.

Adaptation Type | Adaptation Method | WER(%)
---|---|---
Seed model | Trained from scratch with 50% of data | 5.66
Supervised learning | Training with the rest 50% of data | 4.40
Self-learning | Training with the 1-best hypo | 5.58
 | MLL with 4-best hypos | 5.72
 | MTL with shared AED | 5.55
 | MTL with shared AE | 5.22
Reference in centralized training | The whole LS training data | 4.00

Table 2: WERs on the LS test clean dataset for each training method in the privacy-preserving scenario (where the seed model has no language information on the adaptation data)

## 5 Conclusions

In this work, we have investigated the ASR accuracy of self-learning methods with the attention-based sequence-to-sequence model. We proposed new self-learning methods using the $N$-best hypotheses; the seed model is adapted with MLL or MTL in an end-to-end manner. We conducted two kinds of ASR experiments: the English accent adaptation task and the federated learning scenario. In the accent adaptation task, our experimental results show that our best self-learning method can reduce the WER on the British speech data from 14.55% to 10.36% compared to the baseline model trained with the US English data only. In the federated learning scenario, the improvement on the LS dataset was not as significant as that on the Cortana dataset. In contrast to the Cortana task, the seed model used in the LS task does not cover the language information of the adaptation data well. Thus, misrecognition results may not be entirely recoverable by increasing the N-best candidates.
We plan to combine our AM self-learning with the language data augmentation method using a text-to-speech module [39].

## 6 Acknowledgements

The authors would like to thank Masaki Itagaki, Heiko Rahmel, Naoyuki Kanda, Lei He, Ziad Al Bawab, Jian Wu, and Xuedong Huang for their project support and technical discussions.

## References

* [1] P. C. Woodland, D. Pye, and M. J. F. Gales, “Iterative unsupervised adaptation using maximum likelihood linear regression,” in _The 4th International Conference on Spoken Language Processing, Philadelphia, PA, USA, October 3-6, 1996_. ISCA, 1996.
* [2] G. Zavaliagkos and T. Colthurst, “Utilizing untranscribed training data to improve performance,” in _DARPA Broadcast News Transcription and Understanding Workshop, Landsdowne_ , 1998, pp. 301–305.
* [3] L. Lamel, J.-L. Gauvain, and G. Adda, “Lightly supervised and unsupervised acoustic model training,” _Computer Speech and Language_ , vol. 16, pp. 115–129, 2002.
* [4] F. Wessel and H. Ney, “Unsupervised training of acoustic models for large vocabulary continuous speech recognition,” _IEEE Transactions on Speech and Audio Processing_ , vol. 13, no. 1, pp. 23–31, 2005.
* [5] Y. Huang, Y. Wang, and Y. Gong, “Semi-supervised training in deep learning acoustic model,” in _Proc. Interspeech_ , September 2016.
* [6] J. Li, R. Zhao, J.-T. Huang, and Y. Gong, “Learning small-size DNN with output-distribution-based criteria,” in _Proc. Interspeech_ , September 2014.
* [7] J. H. M. Wong and M. J. F. Gales, “Sequence student-teacher training of deep neural networks,” in _Proc. Interspeech_ , September 2016.
* [8] J. Li, M. L. Seltzer, X. Wang, R. Zhao, and Y. Gong, “Large-scale domain adaptation via teacher-student learning,” in _Proc. Interspeech_ , 2017, pp. 2386–2390.
* [9] R. M. Mun’im, N. Inoue, and K. Shinoda, “Sequence-level knowledge distillation for model compression of attention-based sequence-to-sequence speech recognition,” _CoRR_ , vol. abs/1811.04531, 2018.
* [10] S. H. K. Parthasarathi, N. Sivakrishnan, P. Ladkat, and N. Strom, “Realizing petabyte scale acoustic modeling,” _IEEE Journal on Emerging and Selected Topics in Circuits and Systems_ , vol. 9, no. 2, pp. 422–432, 2019.
* [11] L. Mosner, M. Wu, A. Raju, S. H. K. Parthasarathi, K. Kumatani, S. Sundaram, R. Maas, and B. Hoffmeister, “Improving noise robustness of automatic speech recognition via parallel data and teacher-student learning,” in _Proc. ICASSP_ , 2019, pp. 6475–6479.
* [12] Z. Meng, J. Li, Y. Zhao, and Y. Gong, “Conditional teacher-student learning,” _CoRR_ , vol. abs/1904.12399, 2019.
* [13] J. Huang, E. Marcheret, K. Visweswariah, V. Libal, and G. Potamianos, _The IBM Rich Transcription 2007 Speech-to-Text Systems for Lecture Meetings_. Berlin, Heidelberg: Springer, 2007, vol. 4625.
* [14] K. Kumatani, T. Arakawa, K. Yamamoto, J. W. McDonough, B. Raj, R. Singh, and I. Tashev, “Microphone array processing for distant speech recognition: Towards real-world deployment,” in _Proc. APSIPA ASC_ , 2012.
* [15] V. Manohar, P. Ghahremani, D. Povey, and S. Khudanpur, “A teacher-student learning approach for unsupervised domain adaptation of sequence-trained ASR models,” in _2018 IEEE Spoken Language Technology Workshop, SLT 2018, Athens, Greece, December 18-21, 2018_. IEEE, 2018, pp. 250–257.
* [16] J. Konecny, B. McMahan, and D. Ramage, “Federated Optimization: Distributed Optimization Beyond the Datacenter,” _arXiv preprint arXiv:1511.03575v1_ , 2015.
* [17] B. Jayaraman, L. Wang, D. Evans, and Q.
Gu, “Distributed learning without distress: Privacy-preserving empirical risk minimization,” in _Advances in Neural Information Processing Systems 31_ , S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds. Curran Associates, Inc., 2018, pp. 6343–6354.
* [18] D. Dimitriadis, K. Kumatani, R. Gmyr, Y. Gaur, and S. E. Eskimez, “A federated approach in training acoustic models,” in _Proc. Interspeech_ , 2020.
* [19] ——, “Federated transfer learning with dynamic gradient aggregation,” _arXiv preprint arXiv:2008.02452_ , 2020.
* [20] A. Graves, “Sequence transduction with recurrent neural networks,” _CoRR_ , vol. abs/1211.3711, 2012.
* [21] J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio, “Attention-based models for speech recognition,” in _Advances in Neural Information Processing Systems 28_ , C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, Eds. Curran Associates, Inc., 2015, pp. 577–585.
* [22] D. Bahdanau, J. Chorowski, D. Serdyuk, P. Brakel, and Y. Bengio, “End-to-end attention-based large vocabulary speech recognition,” in _ICASSP_ , 2016, pp. 4945–4949.
* [23] W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals, “Listen, attend and spell,” _CoRR_ , vol. abs/1508.01211, 2015.
* [24] A. Graves, S. Fernández, and F. Gomez, “Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks,” in _Proceedings of the International Conference on Machine Learning, ICML 2006_ , 2006, pp. 369–376.
* [25] R. Prabhavalkar, K. Rao, T. N. Sainath, B. Li, L. Johnson, and N. Jaitly, “A comparison of sequence-to-sequence models for speech recognition,” in _Proc. Interspeech_ , 2017, pp. 939–943.
* [26] E. Battenberg, J. Chen, R. Child, A. Coates, Y. Gaur, Y. Li, H. Liu, S. Satheesh, A. Sriram, and Z. Zhu, “Exploring neural transducers for end-to-end speech recognition,” in _2017 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2017, Okinawa, Japan, December 16-20, 2017_. IEEE, 2017, pp. 206–213.
* [27] C.-C. Chiu, W. Han, Y. Zhang, R. Pang, S. Kishchenko, P. Nguyen, A. Narayanan, H. Liao, S. Zhang, A. Kannan, R. Prabhavalkar, Z. Chen, T. N. Sainath, and Y. Wu, “A comparison of end-to-end models for long-form speech recognition,” 2019, pp. 889–896.
* [28] G. E. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” _ArXiv_ , vol. abs/1503.02531, 2015.
* [29] Y. Kim and A. M. Rush, “Sequence-level knowledge distillation,” _CoRR_ , vol. abs/1606.07947, 2016.
* [30] Z. Meng, J. Li, Y. Gaur, and Y. Gong, “Domain adaptation via teacher-student learning for end-to-end speech recognition,” in _ASRU_. IEEE, December 2019.
* [31] R. Caruana, “Multitask learning,” Ph.D. dissertation, Carnegie Mellon University, Pittsburgh, PA 15213, 9 1997.
* [32] S. Ruder, “An overview of multi-task learning in deep neural networks,” _ArXiv_ , vol. abs/1706.05098, 2017.
* [33] S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer, “Scheduled sampling for sequence prediction with recurrent neural networks,” _CoRR_ , vol. abs/1506.03099, 2015.
* [34] Q. Xie, E. H. Hovy, M.-T. Luong, and Q. V. Le, “Self-training with noisy student improves imagenet classification,” _ArXiv_ , vol. abs/1911.04252, 2019.
* [35] “Sox - sound exchange, audio manipulation tool,” Available at http://sox.sourceforge.net/ (2020/04/28).
* [36] D. S. Park, W. Chan, Y. Zhang, C.-C. Chiu, B. Zoph, E. D. Cubuk, and Q. V. Le, “SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition,” in _Proc.
Interspeech_ , 2019, pp. 2613–2617.
* [37] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, “Librispeech: An ASR corpus based on public domain audio books,” in _Proc. ICASSP_ , 2015, pp. 5206–5210.
* [38] R. Sennrich, B. Haddow, and A. Birch, “Neural machine translation of rare words with subword units,” in _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics_. Berlin, Germany: Association for Computational Linguistics, Aug. 2016, pp. 1715–1725.
* [39] N. Rossenbach, A. Zeyer, R. Schlüter, and H. Ney, “Generating synthetic audio data for attention-based speech recognition systems,” in _Proc. ICASSP_ , 2020, pp. 7069–7073.
# Bayesian tensor factorization for predicting clinical outcomes using integrated human genetics evidence

Onuralp Soylemez

###### Abstract

The approval success rate of drug candidates is very low, with the majority of failures due to safety and efficacy concerns. Increasingly available high-dimensional information on targets, drug molecules, and indications provides an opportunity for ML methods to integrate multiple data modalities and better predict clinically promising drug targets. Notably, drug targets with human genetics evidence are shown to have better odds of success. However, a recent tensor factorization-based approach found that additional information on targets and indications might not necessarily improve predictive accuracy. Here we revisit this approach by integrating different types of human genetics evidence, collated from publicly available sources, to support each target-indication pair. We use Bayesian tensor factorization to show that models incorporating all available human genetics evidence (rare disease, gene burden, common disease) modestly improve clinical outcome prediction over models using a single line of genetics evidence. We provide additional insight into the relative predictive power of different types of human genetics evidence for predicting the success of clinical outcomes.

Machine Learning, Drug Discovery, Human Genetics, Bayesian Tensor Factorization, CompBio, ICML

## 1 Motivation

The approval success rate of drug candidates is less than 10%, with most clinical trial failures attributed to safety concerns and a lack of clinical efficacy (Cook et al., 2014). Increasingly available high-dimensional information on targets, drug molecules, and indications provides an opportunity for ML methods to better predict clinically promising drug targets. Notably, drug targets with human genetics evidence are twice as likely to be approved [(Nelson et al., 2015), (King et al., 2019)], and the most recent drug approvals from the FDA corroborate this strong trend (Ochoa et al., 2022). However, a recent tensor factorization-based approach from (Yao et al., 2019) found that additional information on targets and indications might not necessarily improve predictive accuracy, underscoring the importance of feature selection and evidence data quality. Here we revisit this approach by integrating different lines of human genetics evidence collated from publicly available sources, and we assess the relative predictive performance of models incorporating different types of human genetics evidence.

## 2 Data curation

The Open Targets Platform curates and maintains target-disease evidence by harmonizing information on genetic diseases, genetic variation, clinical trial outcomes, gene expression data, and the biomedical literature (Ochoa et al., 2021). We used three lines of human genetics evidence, distinguished by disease variant frequency, to support the statistical and biological association between human genetic variation in a drug target and its impact on medical outcomes (see Table 1).

### 2.1 Rare genetic diseases

We used a list of curated genes with a reasonably well-established causal link with a disease. This class of genetics evidence is enriched for genes associated with rare Mendelian diseases, whereby very rare variants with large effect and high penetrance are the causative genetic alteration underlying the disease or clinical manifestation.
We leveraged expert manual curation of diagnostic-grade gene-disease relationships from ClinGen (Strande et al., 2017) and Genomics England (Martin et al., 2019) to annotate target-indication (gene-disease) pairs.

### 2.2 Gene-level burden association

Gene-level collapsing methods combine information from genetic variants found at appreciable frequencies in the general population and assess the statistical strength of association between the aggregate variation and health outcomes. We collated a list of significant gene burden associations across thousands of target-disease pairs from UK Biobank. Public-private partner institutes analyzed whole-exome sequencing data from more than 400,000 UK Biobank participants to identify genes with coding variants that are collectively enriched in individuals with a selected set of medical outcomes [(Backman et al., 2021), (Wang et al., 2021), (Karczewski et al., 2022)]. Any target-indication pair that did not reach the empirical significance threshold in the respective study was labeled as a negative association rather than missing.

### 2.3 Evidence from GWAS

Genome-wide association studies (GWAS) test the statistical association between common genetic variants and diseases, and have identified many disease-variant associations that recapitulate known disease biology as well as nominate novel therapeutic hypotheses. While GWAS have become an indispensable tool in drug discovery for novel drug target identification and validation, prioritization of causal genes among dozens of candidate genes with equally compelling biological explanations remains a significant challenge. We leveraged a recently developed prioritization model, the 'locus-to-gene' (L2G) model, that integrates human genetics from the GWAS Catalog and UK Biobank, known target and disease biology, and multi-omic datasets (Ghoussaini et al., 2020), (Mountjoy et al., 2021). We used the scoring predictions of the L2G model from the Open Targets Platform to annotate drug targets that are predicted to be the causal genetic factor for a disease or phenotype.

### 2.4 Clinical trial outcomes

For the clinical trial outcome data, we followed a label annotation procedure similar to that employed in (Yao et al., 2019). Specifically, we labeled a drug target-indication pair as "approved" (positive label) if at least one drug molecule has been approved for the corresponding indication. For the remaining target-indication pairs, a "failure" status (negative label) was assigned if there was at least one clinical trial for the pair that was either terminated or suspended. Additionally, we leveraged the data from the Open Targets Platform's NLP-based classification of clinical trials to annotate the reason for trial failure. Clinical trials that were inferred to be unfavorable due to safety or efficacy concerns were also assigned negative labels.

### 2.5 Disease ontology

To facilitate the integration of indication data from multiple sources, we mapped the EFO (experimental factor ontology) IDs for each indication to MeSH IDs using the EBI-EMBL ontology cross-reference database (OxO) (Malone et al., 2010).

Table 1: Description of the three lines of human genetics evidence used in this analysis.

Evidence type | Description
---|---
Rare disease | List of curated genes with an established causal link between gene and disease.
Gene burden | Gene-based rare variant associations in UK Biobank using whole-exome sequencing data.
GWAS | Prioritization of causal genes at GWAS loci based on genetic and functional genomics features using the locus-to-gene (L2G) model.
Combined evidence | Integration of human genetics evidence from all three types of evidence.

## 3 Model description

Given a binary matrix of drug targets (genes) and clinical outcomes (success, failure, unknown), our goal is to impute the unknown cells, or missing entries, using the inter-relationships among targets and indications. We considered four models incorporating human genetics evidence either individually or altogether. Specifically, we created rank-3 tensors with the three modes referring to drug targets, indications, and human genetics evidence, respectively, and used Bayesian probabilistic matrix factorization with MCMC (Salakhutdinov & Mnih, 2008) to factorize the binary matrices, as implemented in SMURFF, a highly optimized framework for Bayesian tensor factorization (Vander Aa et al., 2019). For each tensor factorization, we built a model with 32 latent dimensions and used a burn-in of 500 samples for the Gibbs sampler. We collected 3500 samples from the model, kept every 350th sample, and averaged the predictions from these samples for the final prediction.

## 4 Results

We evaluated the predictive performance of each model using AUROC; the model with combined evidence across the three lines of human genetics evidence performed slightly better than the other models (see Table 2). NLP-based classification of clinical trial stop reasons yielded a small, conservative set of negative outcomes, resulting in a significant class imbalance between clinical success and failure. To address the class imbalance, we also computed F1 scores for each model. In particular, the discrepancy between the AUROC and F1 scores for the gene burden model highlights the dramatic class imbalance for this model. It is very likely that a non-trivial fraction of target-indication pairs may reach statistical significance when larger sample sizes and more refined definitions of indications are considered. Alternatively, a more nuanced set of rare and common variants with overlapping burden signal can be considered (Weiner et al., 2022).

Table 2: Classification accuracies for the models considered in this study. The F1 score was calculated using a threshold of 0.5. Class imbalance shows the proportion of positive labels out of total labels for the respective model.

Model/Evidence | AUROC | F1 score | Imbalance
---|---|---|---
Rare disease | 93.2 $\pm$ 0.3 | 96.6 $\pm$ 0.2 | 87.2%
Gene burden | 92.6 $\pm$ 0.3 | 81 $\pm$ 0.6 | 2.5%
GWAS | 93.3 $\pm$ 0.2 | 95.4 $\pm$ 0.2 | 39.4%
Combined | 94.5 $\pm$ 0.2 | 98.1 $\pm$ 0.1 | 29.3%

We corroborate the previous finding that target-indication pairs in Phase 3 are enriched for validated or de-risked drug targets and therefore have a higher probability of success: clinical trials at later stages are more likely to succeed (Yao et al., 2019) (see Figure 1).

Figure 1: Bayesian tensor factorization model prediction scores from the best performing model (the 'combined' model). Each target-indication pair was grouped by the maximum clinical phase reached. Preclinical phase refers to research compounds that have not yet made it to Phase I clinical trials. P-values were calculated using a two-sided Mann-Whitney-Wilcoxon test with Bonferroni correction.
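For concreteness, the sampling-and-averaging scheme described in Section 3 can be sketched as follows. This is a simplified Bayesian probabilistic matrix factorization with fixed isotropic priors, not the full Normal-Wishart hyperpriors of (Salakhutdinov & Mnih, 2008) or the SMURFF implementation used in this work, and a matrix stands in for the rank-3 tensor.

```python
import numpy as np

def bpmf_gibbs(R, mask, num_latent=32, burnin=500, nsamples=3500, thin=350,
               alpha=2.0, beta=2.0, seed=0):
    """Impute R (targets x indications) by averaging thinned Gibbs samples."""
    rng = np.random.default_rng(seed)

    def sample_factors(U, V, R, mask):
        # Sample each row of U from its Gaussian posterior given V.
        k = U.shape[1]
        for i in range(U.shape[0]):
            obs = mask[i] > 0
            Vo = V[obs]
            prec = beta * np.eye(k) + alpha * Vo.T @ Vo       # posterior precision
            mean = np.linalg.solve(prec, alpha * Vo.T @ R[i, obs])
            U[i] = rng.multivariate_normal(mean, np.linalg.inv(prec))
        return U

    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, num_latent))
    V = 0.1 * rng.standard_normal((m, num_latent))
    pred, kept = np.zeros_like(R, dtype=float), 0
    for t in range(burnin + nsamples):
        U = sample_factors(U, V, R, mask)
        V = sample_factors(V, U, R.T, mask.T)
        if t >= burnin and (t - burnin) % thin == 0:          # keep every 350th sample
            pred += U @ V.T
            kept += 1
    return pred / kept
```

With `R` the binary outcome matrix and `mask` marking its observed entries, `bpmf_gibbs(R, mask)` returns scores for all cells, and unknown target-indication pairs can then be ranked by their imputed scores.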
Interestingly, we also find that registered trials at the preclinical stage (research compounds) appear to have better odds of success than trials that have progressed to Phase 1 or Phase 2, suggesting that sponsors increasingly consider validated drug-indication pairs in their early drug discovery research programs.

## 5 Discussion

Here we used publicly available data on approved drug indications and human genetics evidence for the corresponding drug targets to predict the outcome of ongoing clinical trials. Notably, we used Bayesian tensor factorization with additional information from different lines of human genetics evidence to support these targets. Our results show that the model with combined evidence (rare disease, gene burden, common disease) modestly improves the accuracy of predicting clinically promising drug targets when compared to alternative models with a single line of evidence. While this finding is encouraging given the increasing appreciation of the role of human genetics in drug target discovery and validation efforts, the substantial class imbalance, due to the necessity of expert manual curation, poses a significant challenge for model comparison and for establishing benchmarks for further method development.

While the approved drug indications may be considered true positive labels, there are numerous reasons why the outcome of a clinical trial may not be favorable. Even when a clinical trial meets its primary objectives, trial sponsors may choose not to move forward with the trial due to business reasons, a rapidly changing standard of care, or anticipated difficulty enrolling eligible patients. Here we relied on NLP classification for labeling the clinical trial outcome data; however, it is very likely that text-based classifications do not completely capture the nature of a particular trial failure. There is a significant need for better documentation and structure of clinical trial data to improve the effectiveness of text-based classification and semantic analysis.

Choosing the most appropriate strategy to integrate different lines of human genetics evidence remains an active area of research in drug discovery (e.g., (Stacey et al., 2019), (Dornbos et al., 2022)). Historically, Mendelian genetics evidence has proven to be a convenient source of strong causal evidence between drug target and indication, where the evidence implicates a single mutation or gene as the molecular cause underlying the disease. However, as DNA sequencing has become increasingly inexpensive, genomics studies in much larger populations and for more complex medical conditions have begun to yield human genetics evidence for gene-disease associations in the form of hundreds of mutations with individually marginal effects on health outcomes and complex traits. While there have been significant advances in statistical and computational approaches to combining these effects for patient stratification (see (Khera et al., 2018)), integrating evidence from genetic associations across rare and common diseases has proven difficult. Understanding the relative importance of different sources of human genetics evidence for predicting clinically promising drug targets will significantly help develop safe and effective therapies. In the experimental setup presented in this study, we built multiple models considering each source of human genetics evidence separately and combined, to compare the relative predictive performance of each model. Notably, the burden model performed the poorest.
It is conceivable that the poor predictive performance is largely due to the high class imbalance in this model as well as the relatively few available labels. Further research is necessary to probe whether this class of genes with burden evidence biologically represents difficult drug targets for therapeutic modulation (e.g., requiring highly selective targeting) or whether the empirical significance thresholds for these genes are too conservative.

Integration of matrix (or tensor) factorization approaches with neural networks has proven to be very useful for predicting gene expression from highly structured data modalities such as genomic and epigenomic data (Schreiber et al., 2020). If there are substantial non-linear relationships between different sources of human genetics data, this approach can be useful for predicting trial outcome data using an informative latent representation of genetics evidence.

## Data Availability

All the data used in this analysis are publicly available on the Open Targets Platform (Ochoa et al., 2021): https://platform.opentargets.org/downloads. Data on human genetics evidence and clinical trial outcomes were downloaded from the latest release of the platform (v22.06). Detailed information on each data source is available at https://github.com/cx0/icml-human-genetics.

## Acknowledgements

We are grateful to the Open Targets team and public/private partner institutions for their commitment to open data sharing.

## References

* Backman et al. (2021) Backman, J. D., Li, A. H., Marcketta, A., Sun, D., and Center, R. G. Exome sequencing and analysis of 454,787 UK Biobank participants. _Nature_ , 599:628–634, 2021.
* Cook et al. (2014) Cook, D., Brown, D., Alexander, R., March, R., Morgan, P., Satterthwaite, G., and Pangalos, M. N. Lessons learned from the fate of AstraZeneca’s drug pipeline: a five-dimensional framework. _Nature Reviews Drug Discovery_ , 13:419–431, 2014.
* Dornbos et al. (2022) Dornbos, P., Singh, P., Jang, D.-K., Mahajan, A., Biddinger, S. B., Rotter, J. I., McCarthy, M. I., and Flannick, J. Evaluating human genetic support for hypothesized metabolic disease genes. _Cell Metabolism_ , 34(5):661–666, 2022.
* Ghoussaini et al. (2020) Ghoussaini, M., Mountjoy, E., Carmona, M., Peat, G., Schmidt, E., Hercules, A., Fumis, L., Miranda, A., Carvalho-Silva, D., Buniello, A., Burdett, T., Hayhurst, J., Baker, J., Ferrer, J., Gonzalez-Uriarte, A., Jupp, S., Karim, M., Koscielny, G., Machlitt-Northen, S., Malangone, C., Pendlington, Z. M., Roncaglia, P., Suveges, D., Wright, D., Vrousgou, O., Papa, E., Parkinson, H., MacArthur, J. A. L., Todd, J., Barrett, J. C., Schwartzentruber, J., Hulcoop, D., Ochoa, D., McDonagh, E. M., and Dunham, I. Open Targets Genetics: systematic identification of trait-associated genes using large-scale genetics and functional genomics. _Nucleic Acids Research_ , 49:1311–1320, 2020.
* Karczewski et al. (2022) Karczewski, K. J., Solomonson, M., Chao, K. R., Goodrich, J. K., Tiao, G., Lu, W., Riley-Gillis, B. M., Tsai, E. A., Kim, H. I., Zheng, X., Rahimov, F., Esmaeeli, S., Grundstad, A. J., Reppell, M., Waring, J., Jacob, H., Sexton, D., Bronson, P. G., Chen, X., Hu, X., Goldstein, J. I., King, D., Vittal, C., Poterba, T., Palmer, D. S., Churchhouse, C., Howrigan, D. P., Zhou, W., Watts, N. A., Nguyen, K., Nguyen, H., Mason, C., Farnham, C., Tolonen, C., Gauthier, L. D., Gupta, N., MacArthur, D. G., Rehm, H. L., Seed, C., Philippakis, A. A., Daly, M. J., Davis, J. W., Runz, H., Miller, M. R., and Neale, B. M.
Systematic single-variant and gene-based association testing of thousands of phenotypes in 426,370 UK Biobank exomes. _medRxiv_ , 2022.
* Khera et al. (2018) Khera, A. V., Chaffin, M., Aragam, K. G., Haas, M. E., Roselli, C., Choi, S. H., Natarajan, P., Lander, E. S., Lubitz, S. A., Ellinor, P. T., and Kathiresan, S. Genome-wide polygenic scores for common diseases identify individuals with risk equivalent to monogenic mutations. _Nature Genetics_ , 50:1219–1224, 2018.
* King et al. (2019) King, E., Davis, W., and Degner, J. Are drug targets with genetic support twice as likely to be approved? Revised estimates of the impact of genetic support for drug mechanisms on the probability of drug approval. _PLOS Genetics_ , 15(12), 2019.
* Malone et al. (2010) Malone, J., Holloway, E., Adamusiak, T., Kapushesky, M., Zheng, J., Kolesnikov, N., Zhukova, A., Brazma, A., and Parkinson, H. Modeling sample variables with an experimental factor ontology. _Bioinformatics_ , 26(8):1112–1118, 2010.
* Martin et al. (2019) Martin, A. R., Williams, E., Foulger, R. E., Leigh, S., Daugherty, L. C., Niblock, O., Leong, I. U., Smith, K. R., Gerasimenko, O., Haraldsdottir, E., et al. PanelApp crowdsources expert knowledge to establish consensus diagnostic gene panels. _Nature Genetics_ , 51(11):1560–1565, 2019.
* Mountjoy et al. (2021) Mountjoy, E., Schmidt, E. M., Carmona, M., Schwartzentruber, J., Peat, G., Miranda, A., Fumis, L., Hayhurst, J., Buniello, A., Karim, M. A., et al. An open approach to systematically prioritize causal variants and genes at all published human GWAS trait-associated loci. _Nature Genetics_ , 53(11):1527–1533, 2021.
* Nelson et al. (2015) Nelson, M. R., Tipney, H., Painter, J. L., Shen, J., Nicoletti, P., Shen, Y., Floratos, A., Sham, P. C., Li, M. J., Wang, J., Cardon, L. R., Whittaker, J. C., and Sanseau, P. The support of human genetic evidence for approved drug indications. _Nature Genetics_ , 47:856–860, 2015.
* Ochoa et al. (2021) Ochoa, D., Hercules, A., Carmona, M., Suveges, D., Gonzalez-Uriarte, A., Malangone, C., Miranda, A., Fumis, L., Carvalho-Silva, D., Spitzer, M., Baker, J., Ferrer, J., Raies, A., Razuvayevskaya, O., Faulconbridge, A., Petsalaki, E., Mutowo, P., Machlitt-Northen, S., Peat, G., McAuley, E., Ong, C. K., Mountjoy, E., Ghoussaini, M., Pierleoni, A., Papa, E., Pignatelli, M., Koscielny, G., Karim, M., Schwartzentruber, J., Hulcoop, D. G., Dunham, I., and McDonagh, E. M. Open Targets Platform: supporting systematic drug–target identification and prioritisation. _Nucleic Acids Research_ , 49:1302–1310, 2021.
* Ochoa et al. (2022) Ochoa, D., Karim, M., Ghoussaini, M., Hulcoop, D. G., McDonagh, E. M., and Dunham, I. Human genetics evidence supports two-thirds of the 2021 FDA-approved drugs. _Nature Reviews Drug Discovery_ , 2022.
* Salakhutdinov & Mnih (2008) Salakhutdinov, R. and Mnih, A. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In _Proceedings of the 25th International Conference on Machine Learning_ , ICML ’08, pp. 880–887, 2008. URL https://doi.org/10.1145/1390156.1390267.
* Schreiber et al. (2020) Schreiber, J., Durham, T., Bilmes, J., and Noble, W. S. Avocado: a multi-scale deep tensor factorization method learns a latent representation of the human epigenome. _Genome Biology_ , 21(1):1–18, 2020.
* Stacey et al. (2019) Stacey, D., Fauman, E., Ziemek, D., Sun, B., Harshfield, E., and Wood, A. ProGeM: a framework for the prioritization of candidate causal genes at molecular quantitative trait loci.
_Nucleic Acids Research_ , 47, 2019.
* Strande et al. (2017) Strande, N. T., Riggs, E. R., Buchanan, A. H., Ceyhan-Birsoy, O., DiStefano, M., Dwight, S. S., Goldstein, J., Ghosh, R., Seifert, B. A., Sneddon, T. P., et al. Evaluating the clinical validity of gene-disease associations: an evidence-based framework developed by the Clinical Genome Resource. _The American Journal of Human Genetics_ , 100(6):895–906, 2017.
* Vander Aa et al. (2019) Vander Aa, T., Chakroun, I., Ashby, T. J., Simm, J., Arany, A., Moreau, Y., Le Van, T., Golib Dzib, J. F., Wegner, J., Chupakhin, V., Ceulemans, H., Wuyts, R., and Verachtert, W. SMURFF: a high-performance framework for matrix factorization, 2019. URL https://arxiv.org/abs/1904.02514.
* Wang et al. (2021) Wang, Q., Dhindsa, R., Carss, K., Harper, A., and Initiative, A. G. Rare variant contribution to human disease in 281,104 UK Biobank exomes. _Nature_ , 597:527–532, 2021.
* Weiner et al. (2022) Weiner, D. J., Nadig, A., Jagadeesh, K. A., Dey, K. K., Neale, B. M., Robinson, E. B., Karczewski, K. J., and O’Connor, L. J. Polygenic architecture of rare coding variation across 400,000 exomes. _medRxiv_ , 2022. URL https://www.medrxiv.org/content/early/2022/07/07/2022.07.06.22277335.
* Yao et al. (2019) Yao, J., Hurle, M. R., Nelson, M. R., and Agarwal, P. Predicting clinically promising therapeutic hypotheses using tensor factorization. _BMC Bioinformatics_ , 20(69), 2019.
# Cutoff phenomenon and entropic uncertainty for random quantum circuits

Sangchul Oh, Sabre Kais†

Department of Chemistry, Department of Physics and Astronomy, and Purdue Quantum Science and Engineering Institute, Purdue University, West Lafayette, IN, USA

$^†$ Corresponding author: <EMAIL_ADDRESS>

###### Abstract

How fast the state of a system converges to a stationary state is one of the fundamental questions in science. Some Markov chains and random walks on finite groups are known to exhibit non-asymptotic convergence to a stationary distribution, called the cutoff phenomenon. Here, we examine how quickly a random quantum circuit can transform a quantum state into a Haar-measure random quantum state. We find that random quantum states, as stationary states of random walks on a unitary group, are invariant under the quantum Fourier transform. Thus the entropic uncertainty of random quantum states has balanced Shannon entropies for the computational basis and the quantum Fourier transform basis. By calculating the Shannon entropy of random quantum states and the Wasserstein distances for the eigenvalues of random quantum circuits, we show that the cutoff phenomenon occurs for the random quantum circuit. It is also demonstrated that the Dyson-Brownian motion for the eigenvalues of a random unitary matrix, as a continuous random walk, exhibits the cutoff phenomenon. These results imply that random quantum states can be generated with shallow random circuits.

††: Electronic Structures: Focus Issue on “Quantum Chemistry in the Ages …”

* Keywords: Random circuits, quantum computing, cutoff phenomenon, random walks

## 1 Introduction

How many shuffles are enough to ensure that a deck of 52 cards is mixed randomly? The answer is $\tfrac{3}{2}\log_{2}52$, approximately 8.55, for the riffle shuffling case, as shown by Aldous and Diaconis [1] and by Bayer and Diaconis [2]. Fewer shuffles are not enough to mix the deck of cards, and more do not significantly improve the randomness. This non-asymptotic convergence to a steady or equilibrium state is called the cutoff phenomenon [3] and has been discovered in various fields. Some finite Markov chains exhibit the cutoff phenomenon. These include the Ehrenfest urn model as a simplified diffusion model [4], random walks on a hypercube [5], some Metropolis algorithms [3], Glauber dynamics of Ising models [6], and certain quantum Markov chains [7]. The card shuffling problem is considered a random walk on the symmetric group $S_{52}$.

The cutoff phenomenon also occurs for random walks on a compact Lie group. Rosenthal [8] considered the random walk on the special orthogonal group ${\rm SO}(N)$ given by repeated rotations by a fixed angle through randomly chosen planes. Rosenthal showed that this random walk on ${\rm SO}(N)$, after $1/(2(1-\cos\theta))N\log N+cN$ rotations with a fixed angle $\theta$ and a constant $c$, converges rapidly to the Haar measure in total variation distance. Porod [9, 10] showed that random walks on ${\rm O}(N)$, ${\rm U}(N)$, and ${\rm Sp}(N)$ given by random reflections converge to the Haar measure in total variation distance, and that the cutoff phenomenon occurs at $\frac{1}{2}N\log N$ steps.

Random circuit sampling is the task of sampling bit-strings from a probability distribution defined by random quantum circuits, and it is considered a good candidate to demonstrate quantum advantage with noisy intermediate-scale quantum computers.
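Returning to the opening card-shuffling example, the cutoff can be checked numerically: Bayer and Diaconis [2] showed that, under the Gilbert-Shannon-Reeds model, the probability of obtaining a permutation with $r$ rising sequences after $t$ riffle shuffles is $\binom{2^{t}+n-r}{n}/2^{nt}$, so the total variation distance to the uniform distribution is a finite sum over Eulerian numbers. A minimal sketch using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

def eulerian(n):
    """E[k] = number of permutations of n with k descents (r = k+1 rising sequences)."""
    E = [1]
    for m in range(2, n + 1):
        E = [(k + 1) * (E[k] if k < len(E) else 0)
             + (m - k) * (E[k - 1] if k >= 1 else 0) for k in range(m)]
    return E

def tv_distance(n, t):
    """Total variation distance to uniform after t riffle shuffles of n cards."""
    E = eulerian(n)
    uniform = Fraction(1, factorial(n))
    total = Fraction(0)
    for r in range(1, n + 1):
        p = Fraction(comb(2**t + n - r, n), 2**(n * t))   # Bayer-Diaconis probability
        total += E[r - 1] * abs(p - uniform)
    return float(total / 2)

for t in range(1, 13):
    print(t, round(tv_distance(52, t), 4))   # sharp drop near t ~ 8.55
```

Running this for $n=52$ reproduces the sharp drop of the distance around $t\approx 8.55$: the distance stays near 1 for small $t$ and falls quickly once the cutoff is reached.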
In 2019, random circuit sampling was implemented on the Sycamore processor with 53 qubits [11] and recently on the Zuchongzhi processor with 56 and 60 qubits [12, 13]. In both experiments, the random circuit was implemented by repeatedly applying, for up to 20 and 24 cycles respectively, randomly chosen single-qubit gates and two-qubit gates acting on nearest-neighbor qubits. The number of cycles plays the same role as the number of times a deck of cards is shuffled or the number of random rotations on Lie groups. So it is natural to ask the same questions that have been answered for the card shuffling problem. What circuit depth, i.e., how many cycles, is needed to obtain a Haar-measure random unitary operator or to transform an initial quantum state into a pure random quantum state? Does the cutoff phenomenon occur? What is going on in a system after a steady state has been reached? Is there any tool, in addition to the total variation distance, to measure closeness to a steady state?

In this paper, we present partial answers to these questions using the random circuit implemented on the Sycamore processor and a time-dependent random Hamiltonian model. The former is considered a discrete random walk on a unitary group, and the latter a continuous random walk, called the Dyson-Brownian motion, for the eigenvalues of a unitary operator. A pure random quantum state at the steady state will be analyzed with the Shannon entropy. Instead of the total variation distance $||\mu_{k}-\mu_{\rm Haar}||_{\rm TV}$ between the probability distribution $\mu_{k}$ at the $k$-th time step and the Haar measure distribution $\mu_{\rm Haar}$, we employ the Wasserstein distance for the distribution of eigenvalues of a unitary operator.

The paper is organized as follows. In Sec. 2, we discuss the cutoff phenomenon for random quantum circuits by calculating the Shannon entropy and the Wasserstein distance. We show that the Shannon entropy of a random quantum state is invariant under the quantum Fourier transform, and we discuss the entropic uncertainty relation of random quantum states for the computational basis and the quantum Fourier transform basis. In Sec. 3, we investigate the cutoff phenomenon of the Dyson-Brownian motion generated by a time-dependent random Hamiltonian. Finally, in Sec. 4, we summarize the results and present the discussion.

## 2 Cutoff Phenomenon for Random Quantum Circuits

Let us begin with an introduction to the random circuit sampling implemented on the Sycamore and Zuchongzhi quantum processors [11, 12]. The task is to sample bit-strings $x=a_{1}a_{2}\cdots a_{n}\in\{0,1\}^{n}$ from the probability $p(x)=|\left\langle{x}\right|U\left|{0}\right\rangle^{\otimes n}|^{2}$ given by a random quantum circuit $U$ acting on $n$ qubits, where $\left|{0}\right\rangle^{\otimes n}=\left|{0_{1}\cdots 0_{n}}\right\rangle$ is the initial state and $\left|{x}\right\rangle\equiv\left|{a_{1}a_{2}\cdots a_{n}}\right\rangle$ is a computational basis state.
The random quantum circuit $U$ implemented on both the Sycamore and Zuchongzhi processors is given by repeatedly applying random unitary operators $U_{k}$ and finally the single-qubit gates $S$ before the measurement,

$U=SU_{m}\cdots U_{2}U_{1}\,,$ (1)

where each random quantum circuit $U_{k}$ is composed of single-qubit gates chosen randomly from the set $\{\sqrt{X},\sqrt{Y},\sqrt{W}\}$ on all qubits and two-qubit gates on the pairs of qubits selected in the sequence of the coupler activation patterns of a 2-dimensional array of qubits. Millions of bit-strings were sampled from the Sycamore and Zuchongzhi processors. The distributions of bit-strings obtained from these noisy quantum processors deviate from the ideal distribution. Oh and Kais [14, 15, 16] investigated this deviation using random matrix theory and the Wasserstein distances. Eq. (1) can be considered as a random walk, or random rotations, on a unitary group. If the number of cycles $k$ is large, $U^{(k)}\equiv U_{k}\cdots U_{1}$ approaches a random unitary operator $U_{\rm Haar}$ sampled from the Haar probability measure on the unitary group ${\rm U}(2^{n})$. Typically, the convergence to a stationary state is measured by the total variation distance $||v^{*(k)}-\mu_{\rm Haar}||_{\rm TV}$, where $v^{*(k)}$ is the distribution of the random walk at step $k$ and $\mu_{\rm Haar}$ is the Haar measure [8, 9, 10]. For the Sycamore processor, a sub-linear convergence, i.e., a depth proportional to $\sqrt{n}$, was claimed by calculating the average entropy of random quantum states [17]. Emerson et al. [18] studied a pseudo-random circuit given by repeated applications of $n$ single-qubit random gates sampled from the Haar measure on ${\rm U}(2)$ and simultaneous two-qubit interactions, and employed a measure of entanglement for a multipartite system as the measure of convergence. It was shown that this random quantum circuit converges to the Haar measure if the circuit depth $m$ is larger than $m_{c}\equiv{\cal O}(n^{3}N^{2})$ with $N=2^{n}$. This is larger than the cutoff step $k_{c}\equiv\frac{1}{2}N\ln N={\cal O}(nN)$ for random rotations or random reflections on Lie groups shown by Rosenthal [8] and Porod [9, 10]. Different measures have been used to quantify the convergence to the Haar measure distribution, and they give rise to different cutoff steps. Here, we employ the Shannon entropy of quantum states and the Wasserstein distance between the eigenvalue distribution of a random unitary operator sampled from the Haar measure and those of random quantum circuits. Random unitary matrices drawn from the Haar measure on a unitary group are called the circular unitary ensemble. The properties of random quantum states and the eigenvalue distribution of the circular unitary ensemble are well known from random matrix theory [19]. Let us first investigate how close a quantum state at the $k$-th step, $\left|{\psi^{(k)}}\right\rangle=U^{(k)}\left|{0}\right\rangle=U_{k}\cdots U_{1}\left|{0}\right\rangle$, is to a random quantum state, a stationary state of random walks on a unitary group. An immediate question is what a pure random quantum state is and how to generate it. A random pure state can be generated in several ways; that is, there are several ways of drawing a random unitary operator from the Haar measure [20, 21, 22]. A random unitary operator can be sampled from the Haar measure through the Euler angle method or the QR decomposition of a complex Gaussian random matrix.
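To make this concrete, a minimal NumPy sketch of the QR-based sampling is given below, with the standard phase correction discussed by Mezzadri [20]; the matrix size and seed are illustrative, and the final print anticipates the entropy diagnostic introduced in the next paragraph.

```python
import numpy as np

def haar_random_unitary(N, rng):
    """Sample U from the Haar measure on U(N) via QR of a complex Gaussian matrix.

    Dividing out the phases of diag(R) (Mezzadri's correction) makes the QR
    factorization unique, so Q is exactly Haar-distributed.
    """
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(A)
    return Q * (np.diag(R) / np.abs(np.diag(R)))  # rescale the columns of Q

rng = np.random.default_rng(0)
N = 20
H = []
for _ in range(2000):
    psi = haar_random_unitary(N, rng)[:, 0]       # |psi> = U_Haar |0>
    p = np.abs(psi) ** 2
    H.append(-np.sum(p * np.log(p)))              # Shannon entropy of one draw
print(np.mean(H), np.log(N) - 1 + 0.5772156649)   # both ~ 2.573 for N = 20
```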
Basically, a random quantum state $\left|{\psi}\right\rangle=\sum_{i=0}^{N-1}a_{i}\left|{i}\right\rangle$ can be viewed as a random vector on the $(2N-1)$-sphere whose expansion coefficients $a_{i}=x_{i}+iy_{i}$ are drawn from the normal distributions, i.e., $x_{i},y_{i}\sim{\cal N}(0,1)$. The distribution $P=\{p_{0},p_{1},\dots,p_{N-1}\}$ of the probabilities $p_{i}=|a_{i}|^{2}$ of a random quantum state obeys the $\chi^{2}$ distribution with two degrees of freedom [19]

${\rm Pr}(p)=(N-1)(1-p)^{N-2}\,.$ (2)

This distribution of the $p_{i}$ of random quantum states makes it possible to calculate the average Shannon entropy. The Shannon entropy for the probability distribution $P=\{p_{0},p_{1},\dots,p_{N-1}\}$ is

$\displaystyle H(P)=H(\left|{\psi}\right\rangle)=-\sum_{i=0}^{N-1}p_{i}\ln p_{i}\,,$ (3)

where $\sum_{i}p_{i}=1$ and $p_{i}=|a_{i}|^{2}$. The Shannon entropy $H(\left|{\psi}\right\rangle)$ of a quantum state measures the amount of uncertainty, or the concentration, of $P=(p_{0},\dots,p_{N-1})$. The average Shannon entropy over random quantum states can be calculated with Eqs. (2) and (3) and is given by

$\langle H(\left|{\psi}\right\rangle)\rangle=\ln N-1+\gamma\,,$ (4)

where $\gamma\approx 0.5772$ is the Euler constant and $\langle\cdots\rangle$ denotes the average over random quantum states. So the Shannon entropy of a quantum state can be used as a measure of convergence of random walks on a unitary group.

Figure 1: The distributions of $p_{i}$ for (a) a uniform superposition of all basis states with $N=20$, (b) a random quantum state generated by a random unitary matrix in ${\rm U}(N)$ with $N=20$, and (c) a random quantum state of $n=12$ qubits generated by a random quantum circuit implemented on the Sycamore processor [11, 23]. Plots (d), (e), and (f) are the distributions of $q_{k}$ after the quantum Fourier transform of (a), (b), and (c), respectively. The red lines indicate $p=1/N$, and the Shannon entropy of each state is indicated by the label $H$.

Note that the Shannon entropy $H(\left|{\psi}\right\rangle)$ is defined with respect to a specific basis set $\{\left|{i}\right\rangle\}$. If the same quantum state is expanded in terms of another basis set $\{\left|{k}\right\rangle\}$, $\left|{\psi}\right\rangle=\sum_{k=0}^{N-1}b_{k}\left|{k}\right\rangle$, its Shannon entropy $H(\left|{\psi}\right\rangle)$ will change. For two orthonormal basis sets, $\{\left|{i}\right\rangle\}$ and $\{\left|{k}\right\rangle\}$, the entropic uncertainty relation [24, 25, 26, 27] is written as

$H(P)+H(Q)\geq-2\ln c$ (5)

where $c=\max|\langle i|k\rangle|$, $P=(p_{0},\dots,p_{N-1})$ with $p_{i}=|a_{i}|^{2}$, and $Q=(q_{0},\dots,q_{N-1})$ with $q_{k}=|b_{k}|^{2}$. We consider the computational basis and the quantum Fourier transformed basis, which are mutually unbiased. The quantum Fourier transform (QFT) is the discrete Fourier transform of the amplitude vector $(x_{0},x_{1},\dots,x_{N-1})$ to another amplitude vector $(y_{0},y_{1},\dots,y_{N-1})$, defined by

$y_{k}=\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}e^{i2\pi\frac{jk}{N}}\,x_{j}\,.$ (6)

The QFT acting on $\left|{\psi_{P}}\right\rangle$ is written as

$\left|{\psi_{P}}\right\rangle=\sum_{i=0}^{N-1}x_{i}\left|{i}\right\rangle\quad\longrightarrow\quad\left|{\psi_{Q}}\right\rangle=\sum_{k=0}^{N-1}y_{k}\left|{k}\right\rangle\,.$ (7)

Fig. 1 illustrates the distributions of the probabilities $p_{i}$ for three kinds of quantum states in the computational basis and of the probabilities $q_{k}$ in the QFT basis.
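These distributions are easy to reproduce numerically. The sketch below evaluates the QFT of Eq. (6) as a unitary DFT (NumPy's `ifft` with `norm="ortho"` matches the $e^{+i2\pi jk/N}$ convention) for a random state and for the uniform superposition; the state size is an illustrative choice.

```python
import numpy as np

def shannon(v):
    p = np.abs(v) ** 2
    p = p[p > 1e-15]              # drop numerically zero entries (0 log 0 -> 0)
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(1)
N = 4096                                         # N = 2**12, as in Fig. 1 (c)
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)
psi = a / np.linalg.norm(a)                      # random state on the (2N-1)-sphere

phi = np.fft.ifft(psi, norm="ortho")             # Eq. (6) as a unitary DFT
print(shannon(psi), shannon(phi))                # both ~ ln N - 1 + gamma ~ 7.895

u = np.full(N, 1 / np.sqrt(N), dtype=complex)    # uniform superposition
fu = np.fft.ifft(u, norm="ortho")                # localized at one site
print(shannon(u), shannon(fu))                   # ln N and 0, respectively
```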
First, consider a uniform superposition of all possible basis states in the computational basis, given by $\left|{\psi}\right\rangle=\frac{1}{\sqrt{N}}(\left|{0}\right\rangle+\left|{1}\right\rangle+\cdots+\left|{N-1}\right\rangle)$. As shown in Fig. 1 (a), its $p_{i}$ is distributed uniformly, $p_{i}={1}/{N}$ with $i=0,1,\dots,N-1$, and its Shannon entropy has the maximum value $H(P)=\ln N$. The equally likely distribution among $N$ possible states implies the largest randomness and the greatest entropy [28]. However, $\left|{\psi}\right\rangle$ in the QFT basis is localized at one site, so its entropy $H(Q)$ is zero, as depicted in Fig. 1 (d). The entropic uncertainty is given by $H(P)+H(Q)=\ln N$. Next, consider a random quantum state $\left|{\phi}\right\rangle=U_{\rm Haar}\left|{0}\right\rangle$. Fig. 1 (b) plots the distribution of $p_{i}$ of a random quantum state, where $U_{\rm Haar}$ is generated by the QR decomposition of an $N\times N$ complex Gaussian random matrix. The entropy of a random quantum state is approximately given by $H(P)=\ln N-1+\gamma\approx 2.573$ for $N=20$. Fig. 1 (e) plots the distribution of $q_{k}$ in the QFT basis. Interestingly, the entropy $H(P)$ of a random quantum state in the computational basis is almost equal to its entropy $H(Q)$ in the QFT basis. We observe that the average entropy of random quantum states in the computational basis, generated by a random circuit $U$, is equal to that in the QFT basis. A random quantum state may have the balanced entropic uncertainty $H(P)=H(Q)=\ln N-1+\gamma$. It is analogous to a coherent state in the sense that the latter has a balanced, or symmetric, minimum uncertainty: $\Delta x=\Delta p$. It may be interesting to understand why a random quantum state is invariant under the QFT. Fig. 1 (c) depicts the distribution of $p_{i}$ for a random quantum state generated by a random quantum circuit implemented on the Sycamore processor for $n=12$ qubits and $m=14$ cycles [23]. Fig. 1 (f) plots the distribution of $q_{k}$ in the QFT basis. One can see that the Shannon entropy of a random quantum state in the computational basis is almost the same as that of its QFT state. Note that the random circuit here is simulated on a classical computer without any noise. The Shannon entropy in Fig. 1 (c) is close to the theoretical value $H=\ln N-1+\gamma\approx 7.8949$. The Shannon entropy calculated from the Sycamore data for $n=12$ and $m=14$ is $H\approx 8.217$, close to $\ln(4096)\approx 8.31$ [29].

Figure 2: For a random quantum circuit of $n=14$ qubits and up to $m=14$ cycles [11], (a) the Shannon entropy of quantum states and (b) the Wasserstein distance between the eigenvalues of random circuits and those of the Haar random unitary operator are plotted as a function of the number of gates applied, i.e., the depth of the quantum circuit.

Since a random quantum state is characterized by its Shannon entropy, $H(P)\approx\ln N-(1-\gamma)$, the Shannon entropy $H(\left|{\psi^{(k)}}\right\rangle)$ of the quantum state $\left|{\psi^{(k)}}\right\rangle=U_{k}\cdots U_{1}\left|{0}\right\rangle$ at the $k$-th step of the random walk can be used as a convergence measure to see whether the cutoff phenomenon happens. A sub-linear convergence proportional to $\sqrt{n}$ was claimed by calculating the average entropy of quantum states [17]. As shown in Fig. 2 (a), the Shannon entropy of a quantum state converges to $\ln N-(1-\gamma)$ as the number of gates increases and remains there.
An interesting question is what happens to a quantum state after the Shannon entropy converges. The eigenvalue distribution of random unitary operators drawn from the Haar measure is well known [21], so the distance between eigenvalue distributions can be used to measure the convergence of a random walk. We consider the Wasserstein distance defined by

$W_{p}(P,Q)=\left(\inf_{J\in{\cal J}(P,Q)}\int||x-y||^{p}\,dJ(x,y)\right)^{1/p}\,,$ (8)

where ${\cal J}(P,Q)$ denotes the set of all joint distributions $J$ with marginals $P$ and $Q$. If $P$ and $Q$ are the empirical distributions of data sets $\{x_{1},\dots,x_{n}\}$ and $\{y_{1},\dots,y_{n}\}$, respectively, then the Wasserstein distance is given by the distance between order statistics,

$W_{p}(P,Q)=\left(\frac{1}{n}\sum_{i=1}^{n}||x_{(i)}-y_{(i)}||^{p}\right)^{1/p}\,,$ (9)

where $x_{(i)}$ is the $i$-th order statistic of a sample, i.e., its $i$-th smallest value. Fig. 2 (b) plots the Wasserstein distance of order 1 between the eigenvalue distribution of the circular unitary ensemble and that of a random quantum circuit as a function of the number of quantum gates applied. The calculation of the Wasserstein distance supports the cutoff phenomenon for random quantum circuits, as the Shannon entropy does.

## 3 Dyson-Brownian Motions on a Unitary Group

In Sec. 2, the random walk on the unitary group was implemented by applying the sequence of random quantum circuits $\{U_{1},U_{2},\dots\}$. A random quantum state after the $k$-th step is $\left|{\psi}\right\rangle=U_{k}\cdots U_{1}\left|{0}\right\rangle$. This may be considered a discrete process. In this section, we consider a continuous random walk given by a time-dependent random Hamiltonian to see how quickly a quantum state converges to a random quantum state. The time evolution operator at $t+\delta t$ is given by

$\displaystyle U(t+\delta t)$ $\displaystyle=e^{i\frac{\delta t}{\hbar}H(t)}\,U(t)$ (10) $\displaystyle\approx\Bigl[1+i\frac{\delta t}{\hbar}H(t)-\frac{1}{2}\left(\frac{\delta t}{\hbar}\right)^{2}H^{2}(t)\Bigr]U(t)\,,$ (11)

where $U(0)=I$. The time-dependent random Hamiltonian $H(t)$ at time $t$ is obtained as follows. We draw a complex random matrix $A$ whose real and imaginary parts of a matrix element $A_{ij}$ are sampled independently from the normal distribution ${\cal N}(0,\sigma^{2}/2N)$ with variance $\sigma^{2}/2N$. The Hermitian property of the Hamiltonian $H$ is fulfilled by summing $A$ and its conjugate transpose $A^{\dagger}$, $H=\frac{1}{2}(A+A^{\dagger})$. $H$ is then an element of the Gaussian unitary ensemble.

Figure 3: For the Dyson-Brownian random walk, (a) the trajectories of the eigenvalues of a random unitary operator and (b) the Shannon entropy $H(\left|{\psi}\right\rangle)$ of a quantum state are plotted. Here, we take $20\times 20$ random Hamiltonian matrices, i.e., $N=20$, and the time step $\delta t=0.01$. In (b) the red dotted horizontal line represents the Shannon entropy of a random quantum state, $\ln N-1+\gamma\approx 2.573$ for $N=20$.

The trajectories of the eigenvalues of a random unitary operator $U$ are known as Dyson-Brownian motion and do not overlap with each other: the eigenvalues repel each other. We simulate the time evolution with a time-dependent random Hamiltonian for $N=20$ and $\sigma=1$, so $\sigma^{2}/2N=1/2N$. For simplicity, we take $\hbar=1$ and set the time step $\delta t=0.01$. The eigenvalues of $U(t)$ from Eq. (11) are obtained by diagonalizing it, and their trajectories as a function of time are plotted in Fig. 3 (a).
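The continuous walk just described can be sketched in a few lines of NumPy. The sketch below implements Eqs. (10)-(11) with $\hbar=1$; the seed and step count are illustrative, and the state is renormalized because the truncated expansion is only approximately unitary.

```python
import numpy as np

def dyson_step(U, sigma, dt, rng):
    """One step U(t+dt) ~ [1 + i dt H - (dt^2/2) H^2] U(t), Eqs. (10)-(11).

    H(t) = (A + A^dagger)/2, with the real and imaginary parts of A_ij drawn
    from N(0, sigma^2/2N), as described above.
    """
    N = U.shape[0]
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) \
        * np.sqrt(sigma**2 / (2 * N))
    H = 0.5 * (A + A.conj().T)
    M = 1j * dt * H
    return (np.eye(N) + M + 0.5 * (M @ M)) @ U

rng = np.random.default_rng(3)
N, dt = 20, 0.01
U = np.eye(N, dtype=complex)
for _ in range(5000):                       # total time t = 50
    U = dyson_step(U, sigma=1.0, dt=dt, rng=rng)
p = np.abs(U[:, 0]) ** 2                    # |psi(t)> = U(t)|0>
p /= p.sum()                                # renormalize (truncated expansion)
print(-np.sum(p * np.log(p)), np.log(N) - 1 + 0.5772156649)  # ~2.573 for N=20
```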
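For completeness, the order-statistics form of the empirical Wasserstein distance, Eq. (9), used as the convergence measure in Fig. 2 (b), is equally short; the CUE phases below again come from the QR construction, and the sizes are illustrative.

```python
import numpy as np

def wasserstein_p(x, y, p=1):
    """Empirical W_p between equal-size samples via order statistics, Eq. (9)."""
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)

def cue_phases(N, rng):
    """Eigenvalue phases of a Haar (CUE) unitary via QR of a Ginibre matrix."""
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    Q, R = np.linalg.qr(A)
    Q = Q * (np.diag(R) / np.abs(np.diag(R)))
    return np.angle(np.linalg.eigvals(Q))

rng = np.random.default_rng(2)
print(wasserstein_p(cue_phases(128, rng), cue_phases(128, rng)))  # small: CUE vs CUE
```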
The time evolution of the Shannon entropy of a quantum state $\left|{\psi(t)}\right\rangle=U(t)\left|{0}\right\rangle$ is shown in Fig. 3 (b). We observe the non-asymptotic convergence to $\ln N-1+\gamma\approx 2.573$ for $N=20$. It is interesting to see that the eigenvalues and the Shannon entropy fluctuate after arriving at the steady state. In particular, the fluctuation of the Shannon entropy in Fig. 3 (b) is in contrast to the absence of fluctuation in the Shannon entropy for the finite random walk of a random quantum circuit, shown in Fig. 2 (a).

## 4 Summary

In this paper, we examined some properties of random quantum states generated by discrete and continuous random walks on a unitary group. It is found that the Shannon entropy of a random quantum state generated by random quantum circuits is invariant under the QFT, in the sense that the Shannon entropy does not change before and after applying the QFT. The entropic uncertainty relation of a random quantum state for the computational basis and the QFT basis is balanced, i.e., $H(P)=H(Q)$. This may remind us of the coherent state with the balanced minimum uncertainty relation $\Delta x=\Delta p$. We showed that the cutoff phenomenon for a random quantum circuit occurs by calculating the Shannon entropy and the Wasserstein distance for the eigenvalue distributions. It is an open question whether the cutoff of random walks on a unitary group scales with the number of qubits $n$ as sub-linear ${\cal O}(\sqrt{n})$ or as ${\cal O}({n}2^{n})$, while the numerical calculations here seem to support the sub-linear scaling of the cutoff. In addition to the demonstration of quantum advantage, random quantum circuits may be applicable to solving interesting problems, for example, in randomized linear algebra. The set of random quantum states $\left|{\psi_{i}}\right\rangle=U_{\rm Haar}\left|{i}\right\rangle$ with $i=0,1,\dots,2^{n}-1$ forms an orthonormal basis set. The trace of a matrix $A$ can be estimated using ${\rm Tr}(A)\approx\frac{N}{M}\sum_{i=1}^{M}\left\langle{\psi_{i}}\right|A\left|{\psi_{i}}\right\rangle$, where $\left|{\psi_{i}}\right\rangle$ is a random quantum state and $N=2^{n}$ [30].

## Acknowledgment

This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers. We also acknowledge the National Science Foundation under award number 1955907.
## References

* [1] Aldous D and Diaconis P 1986 The American Mathematical Monthly 93 333–348 ISSN 00029890, 19300972 URL http://www.jstor.org/stable/2323590
* [2] Bayer D and Diaconis P 1992 The Annals of Applied Probability 2 294–313 URL https://doi.org/10.1214/aoap/1177005705
* [3] Diaconis P 1996 Proceedings of the National Academy of Sciences 93 1659–1664 URL https://www.pnas.org/doi/pdf/10.1073/pnas.93.4.1659
* [4] Ehrenfest P and Ehrenfest-Afanassjewa T 1907 Über zwei bekannte Einwände gegen das Boltzmannsche H-Theorem (Hirzel)
* [5] Diaconis P, Graham R L and Morrison J A 1990 Random Structures & Algorithms 1 51–72 URL https://onlinelibrary.wiley.com/doi/abs/10.1002/rsa.3240010105
* [6] Lubetzky E and Sly A 2013 Inventiones mathematicae 191 719–755 ISSN 1432-1297 URL https://doi.org/10.1007/s00222-012-0404-5
* [7] Kastoryano M J, Reeb D and Wolf M M 2012 Journal of Physics A: Mathematical and Theoretical 45 075307 URL https://dx.doi.org/10.1088/1751-8113/45/7/075307
* [8] Rosenthal J S 1994 The Annals of Probability 22 398–423 URL https://doi.org/10.1214/aop/1176988864
* [9] Porod U 1996 The Annals of Probability 24 74–96 ISSN 00911798 URL http://www.jstor.org/stable/2244833
* [10] Porod U 1996 Probability Theory and Related Fields 104 181–209 ISSN 1432-2064 URL https://doi.org/10.1007/BF01247837
* [11] Arute F, Arya K, Babbush R, Bacon D, Bardin J C, Barends R, Biswas R, Boixo S, Brandao F G S L, Buell D A, Burkett B, Chen Y, Chen Z, Chiaro B, Collins R, Courtney W, Dunsworth A, Farhi E, Foxen B, Fowler A, Gidney C, Giustina M, Graff R, Guerin K, Habegger S, Harrigan M P, Hartmann M J, Ho A, Hoffmann M, Huang T, Humble T S, Isakov S V, Jeffrey E, Jiang Z, Kafri D, Kechedzhi K, Kelly J, Klimov P V, Knysh S, Korotkov A, Kostritsa F, Landhuis D, Lindmark M, Lucero E, Lyakh D, Mandrà S, McClean J R, McEwen M, Megrant A, Mi X, Michielsen K, Mohseni M, Mutus J, Naaman O, Neeley M, Neill C, Niu M Y, Ostby E, Petukhov A, Platt J C, Quintana C, Rieffel E G, Roushan P, Rubin N C, Sank D, Satzinger K J, Smelyanskiy V, Sung K J, Trevithick M D, Vainsencher A, Villalonga B, White T, Yao Z J, Yeh P, Zalcman A, Neven H and Martinis J M 2019 Nature 574 505–510 ISSN 0028-0836, 1476-4687 URL http://www.nature.com/articles/s41586-019-1666-5
* [12] Wu Y, Bao W S, Cao S, Chen F, Chen M C, Chen X, Chung T H, Deng H, Du Y, Fan D, Gong M, Guo C, Guo C, Guo S, Han L, Hong L, Huang H L, Huo Y H, Li L, Li N, Li S, Li Y, Liang F, Lin C, Lin J, Qian H, Qiao D, Rong H, Su H, Sun L, Wang L, Wang S, Wu D, Xu Y, Yan K, Yang W, Yang Y, Ye Y, Yin J, Ying C, Yu J, Zha C, Zhang C, Zhang H, Zhang K, Zhang Y, Zhao H, Zhao Y, Zhou L, Zhu Q, Lu C Y, Peng C Z, Zhu X and Pan J W 2021 Phys. Rev. Lett. 127(18) 180501 URL https://link.aps.org/doi/10.1103/PhysRevLett.127.180501
* [13] Zhu Q, Cao S, Chen F, Chen M C, Chen X, Chung T H, Deng H, Du Y, Fan D, Gong M, Guo C, Guo C, Guo S, Han L, Hong L, Huang H L, Huo Y H, Li L, Li N, Li S, Li Y, Liang F, Lin C, Lin J, Qian H, Qiao D, Rong H, Su H, Sun L, Wang L, Wang S, Wu D, Wu Y, Xu Y, Yan K, Yang W, Yang Y, Ye Y, Yin J, Ying C, Yu J, Zha C, Zhang C, Zhang H, Zhang K, Zhang Y, Zhao H, Zhao Y, Zhou L, Lu C Y, Peng C Z, Zhu X and Pan J W 2022 Science Bulletin 67 240–245 ISSN 2095-9273 URL https://www.sciencedirect.com/science/article/pii/S2095927321006733
* [14] Oh S and Kais S 2022 The Journal of Physical Chemistry Letters 13 7469–7475 pMID: 35939529 URL https://doi.org/10.1021/acs.jpclett.2c02045
* [15] Oh S and Kais S 2022 Phys. Rev. A 106(3) 032433 URL https://link.aps.org/doi/10.1103/PhysRevA.106.032433
* [16] Oh S and Kais S 2023 Phys. Rev. A 107(2) 022610 URL https://link.aps.org/doi/10.1103/PhysRevA.107.022610
* [17] Boixo S, Isakov S V, Smelyanskiy V N, Babbush R, Ding N, Jiang Z, Bremner M J, Martinis J M and Neven H 2018 Nature Physics 14 595–600 ISSN 1745-2481 URL https://doi.org/10.1038/s41567-018-0124-x
* [18] Emerson J, Weinstein Y S, Saraceno M, Lloyd S and Cory D G 2003 Science 302 2098–2100 ISSN 0036-8075 URL https://science.sciencemag.org/content/302/5653/2098
* [19] Haake F, Gnutzmann S and Kuś M 2010 Quantum Signatures of Chaos (Springer-Verlag Berlin Heidelberg) ISBN 978-3-642-05428-0
* [20] Mezzadri F 2007 Notices of the American Mathematical Society 54 592–604 ISSN 0002-9920
* [21] Meckes E S 2019 The Random Matrix Theory of the Classical Compact Groups Cambridge Tracts in Mathematics (Cambridge University Press)
* [22] Ozols M 2009 unpublished essay on http://home.lu.lv/sd20008
* [23] Martinis J M et al 2022 Quantum supremacy using a programmable superconducting processor, Dryad, Dataset https://doi.org/10.5061/dryad.k6t1rj8
* [24] Deutsch D 1983 Phys. Rev. Lett. 50(9) 631–633 URL https://link.aps.org/doi/10.1103/PhysRevLett.50.631
* [25] Kraus K 1987 Phys. Rev. D 35(10) 3070–3075 URL https://link.aps.org/doi/10.1103/PhysRevD.35.3070
* [26] Maassen H and Uffink J B M 1988 Phys. Rev. Lett. 60(12) 1103–1106 URL https://link.aps.org/doi/10.1103/PhysRevLett.60.1103
* [27] Coles P J, Berta M, Tomamichel M and Wehner S 2017 Rev. Mod. Phys. 89(1) 015002 URL https://link.aps.org/doi/10.1103/RevModPhys.89.015002
* [28] Ambegaokar V and Clerk A A 1999 American Journal of Physics 67 1068–1073 ISSN 0002-9505 URL https://doi.org/10.1119/1.19084
* [29] Oh S and Kais S (unpublished)
* [30] Oh S and Kais S (in preparation) Estimating trace of a matrix with random quantum states
# Wave based damage detection in solid structures using artificial neural networks

Frank Wuttke1,∗, Hao Lyu1, Amir S. Sattari1, Zarghaam H. Rizvi1
1 Geomechanics and Geotechnics Group, Kiel University, Germany
∗ Corresponding Author: Frank Wuttke, Email: <EMAIL_ADDRESS>

###### Abstract

The identification of structural damage plays an increasingly important role in the modern economy, where monitoring is often the last resort for keeping an infrastructure in public use. Conventional monitoring methods require specialized engineers and are time-consuming. This research paper considers the ability of neural networks to recognize the initiation or alteration of structural properties based on training processes. The work presented here is based on Convolutional Neural Networks (CNN) for wave field pattern recognition, or more specifically, wave field change recognition. The CNN model is used to identify the change within propagating wave fields after a crack initiates within the structure. The paper describes the implemented method and the training procedure required to reach a successful crack detection accuracy, where the training data are based on the dynamic lattice model. Although the training of the model is still time-consuming, the proposed new method has enormous potential to become a new crack detection or structural health monitoring approach alongside the conventional monitoring methods.

Keywords— Damage Detection, Structural Health Monitoring, Artificial Neural Networks, Convolutional Neural Networks, Lattice Element Method

## 1 Introduction

For the permanent use of existing structures in urban areas, as well as for lifelines and for safety structures such as dams, the successful monitoring of structures is of the highest priority. Usually, conventional methods of structural dynamics are used to analyse the state of structures and to find existing or propagating damage, while wave-based methods are less used in ordinary structural dynamics. These methods are more common in the field of non-destructive testing (NDT) (Kaewunruen and Remennikov, 2006; Farhangdoust and Mehrabi, 2019), which uses very high excitation frequencies and short wavelengths. Regardless of the type of analysis method - structural dynamics with long wavelengths or ultrasound methods in NDT with extremely short wavelengths - the structural analysis is based on the active analysis of excited vibrations or wave fields until the structural damage is detected. Without any knowledge of pre-existing damage zones, the analysis can take a lot of time. The present development shows a new strategy, derived from a numerical case study, to analyse structures using artificial neural networks. The artificial neural networks (ANN) (Sha and Edwards, 2007) are used to learn the change of structural response patterns of damaged structures in contrast to non-damaged structures. With the booming development in large-scale data generation and computation power, deep learning algorithms, especially deep convolutional networks (known as CNNs or ConvNets), are developing by leaps and bounds. Deep learning methods have been applied to many fields, such as computer vision and natural language processing, and often outperform conventional methods (LeCun et al., 2015). The multi-layer structured deep models can learn patterns from data at multiple levels of abstraction. The convolution operation plays an important role in CNN layers.
ConvNets are particularly suited to learning from array-like data, such as audio, images, and video. These data are uniformly sampled from signals in spatial and/or temporal domains. In many applications, ConvNets have been used as the backbone for pattern extraction, e.g., recognising objects from images (Girshick, 2015), understanding text (Kim, 2014), and synthesising audio (van den Oord et al., 2016). In recent years, attempts have been made to apply ConvNets to damage detection as well (Abdeljaber et al., 2017; Rautela and Gopalakrishnan, 2019). Deep learning methods have long been expected to be promising for wave-based damage detection (Avci et al., 2021). Compared to hand-engineered-feature-based methods, deep-learning-based methods use deep neural networks as a feature extractor to learn representations from wave fields (Guo et al., 2020). 1D CNNs and RNNs are two popular structures for recognizing patterns from 1D signals. Abdeljaber et al. trained multiple 1D CNNs to detect whether damage exists at specific locations (joints) (Abdeljaber et al., 2017). Their model uses the acceleration signal at each joint as input and requires an extra workload to segment the signal into frames. Considering damage detection as a binary prediction problem, i.e., predicting whether a crack exists from input data, 1D CNN, RNN, and LSTM models can all achieve high accuracy (Rautela and Gopalakrishnan, 2019). In a follow-up paper, the authors developed a two-stage damage detection method (Rautela and Gopalakrishnan, 2020). The method determines whether a sample is damaged or not at the first stage and then predicts the location and length of the damage with another regressor network. However, the regressor network deals only with damage that is orthogonal to the sample’s surface. Khan et al. transformed the structural vibration responses in the time domain into a two-dimensional spectral frame representation and then applied a 2D CNN to distinguish between the undamaged and various damaged states (Khan et al., 2019). Besides using CNNs for wave-based damage detection, there are also methods using different input data or a combination with other learning schemes. For example, in (Gulgec et al., 2019), the authors proposed to use a 2D CNN to predict the bounding box of a crack from raw strain fields. By adding noise and changing loading patterns to augment the data, the model could achieve some robustness. Nunes and her colleagues developed a hybrid strategy to detect undefined structural changes (Nunes et al., 2020). They proposed to apply an unsupervised k-means clustering method to the CNN-learned features, so that features extracted from samples with undefined changes are expected to fall out of these clusters. Our work differs from the above-mentioned works by training an end-to-end model to predict both crack shapes and locations from large amounts of simulated wave fields. The numerical treatments for this paper are done using a meso-scale method, as it is better suited to capture effects such as initial cracking and crack propagation depending on the material parameters and the initial and boundary conditions, without pre-definition of damaged patches, and it is applicable to both 2D and 3D problems. The Lattice Element Method is a class of discrete models in which the structural solid is represented as a 3D assembly of one-dimensional elements (Wong et al., 2014; Rizvi et al., 2019; Sattari et al., 2017).
This idea allows one to provide robust models for the propagation of discontinuities, multiple crack interactions, or crack coalescence, even under dynamic loads and wave fields. Different computational procedures for lattice element methods representing a linear elastic continuum have been developed. Besides different mechanical, hydro-mechanical and multi-physical developments, the extension and basics of a new dynamic Lattice Element Method were presented in (Rizvi et al., 2018, 2020). This development will be used in the given paper for the health monitoring of structures. To perform the damage detection, a suitable numerical software, pattern indicators and specifically designed neural networks are needed. The numerical simulation is realized using the Dynamic Lattice-Element Method; the advantage of the discontinuum method over continuum methods with respect to damage detection is discussed in the methodology section. The implemented artificial neural networks are also described in this section. Based on the considered numerical and DNN (deep neural network) models, a case study of a 2D plane is performed to show the developments and results of the new approach.

## 2 Methodology

### 2.1 Dynamic Lattice Approach

The assembly of the heterogeneous and homogeneous material is generated by specific meshing algorithms in LEM. In Lattice Element Models, the lattice nodes can be considered as the centers of the unit cells, which are connected by beams that can carry normal force, shear force and bending moment. If the strain energy stored in an element exceeds a given threshold, the element is either removed, representing cracking, or assigned a lower stiffness value. The method is based on minimizing the stored energy of the system. The size of the localized fracture process zone around the static or propagating crack plays a key role in the failure mechanism, which is observed in various models of linear elastic fracture mechanics and multi-scale theories or homogenization techniques. Normally, this crack propagation process needs a regularization; however, an efficient way of dealing with this kind of numerical problem is to introduce the embedded strong discontinuity into the lattice elements, resulting in mesh-independent computations of the failure response. The lattice elements are generated by Voronoi cells and the Delaunay tessellation (Sattari et al. (2017); Moukarzel and Herrmann (1992)). This procedure yields a simple algebraic equation for the static case. To develop the dynamic LEM for the simulation of a propagating wave field, a more complex extension of the LEM is needed. In the following, the dynamic LEM is solved as a transient problem in the time domain.

#### 2.1.1 Equation of motion

To solve the dynamic LEM, the static LEM needs an extension to the equation of motion. The general equation of motion without the damping term is defined by

$\mathbf{M}\ddot{u}+\mathbf{K}u=F(t)$ (1)

where $\mathbf{M}$ and $\mathbf{K}$ are the mass and stiffness matrices and $F(t)$ is the applied time-dependent force. Both the mass and the stiffness matrix have to be defined within the LEM framework.

#### 2.1.2 Mass Matrix generation

The mass matrix, in particular the consistent mass matrix (CMM), is generated either by lumping the mass at the nodes or by following the variational mass lumping (VMM) scheme. The VMM scheme is also implemented in the finite element method for dynamic simulations.
The element mass matrix $M^{e}$ is computed using the following equation

$M^{e}=\int_{\Omega}\rho\left[N_{v}^{e}\right]^{T}N_{v}\,d\Omega$ (2)

If the shape functions are identical, that is, $N_{v}^{e}=N^{e}$, the mass matrix is called the consistent mass matrix (CMM), $M_{c}^{e}$:

$M_{c}^{e}=\int_{0}^{l}\rho A\left[N^{e}\right]^{T}N^{e}\,dx=\frac{1}{8}\rho lA\int_{-1}^{1}\begin{bmatrix}1-\epsilon\\ 1+\epsilon\end{bmatrix}\begin{bmatrix}1-\epsilon&1+\epsilon\end{bmatrix}d\epsilon$ (3)

where $\rho$ is the density assigned to the Voronoi cells, and $A$ and $l$ are the area and the length of the lattice element. The elemental mass matrix is symmetric and complies with the conditions of mass conservation and positivity. To obtain the global mass matrix, a congruent transformation is applied. In contrast to the stiffness matrix, translational masses never vanish; all translational masses are retained in the local mass matrix. The global transformation is achieved through the following equation:

$\bar{M}_{c}^{e}=\left[T^{e}\right]^{T}\left[M_{c}^{e}\right]\left[T^{e}\right]$ (4)

$M_{c}^{e}=\frac{1}{2}m^{e}\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix}$ (5)

#### 2.1.3 Element Stiffness Matrix

The force-displacement relation of a truss element is given by the spring relation

$\left\{F\right\}=\left[K\right]\left\{U\right\}$ (6)

The vectors $\left\{F\right\}$ and $\left\{U\right\}$ are the member joint forces and member joint displacements, respectively. The member stiffness matrix, or local stiffness matrix, is $[K]$. For a truss element it is given by

$\left[K\right]=\frac{EA}{L}\begin{bmatrix}1&0&-1&0\\ 0&0&0&0\\ -1&0&1&0\\ 0&0&0&0\end{bmatrix}$ (7)

After applying the congruent transformation, the member stiffness matrix in global coordinates is given as

$\left[K^{e}\right]=\left[T^{e}\right]^{T}\left[K\right]\left[T^{e}\right]$ (8)

$K^{e}=\frac{E^{e}A^{e}}{L^{e}}\begin{bmatrix}l^{2}&lm&-l^{2}&-lm\\ lm&m^{2}&-lm&-m^{2}\\ -l^{2}&-lm&l^{2}&lm\\ -lm&-m^{2}&lm&m^{2}\end{bmatrix}$ (9)

where $l=\cos\phi^{e}$, $m=\sin\phi^{e}$, and $\phi^{e}$ is the orientation angle of the element.

#### 2.1.4 Time domain solution of the Equation of Motion

The equation of motion for the linear system of equations is solved with the Newmark beta method due to its unconditional stability. The displacement and velocity terms for the next time step are calculated as follows:

$u_{t}=u_{t-\Delta t}+\Delta t\,\dot{u}_{t-\Delta t}+\left(\frac{1}{2}-\beta\right)\Delta t^{2}\,\ddot{u}_{t-\Delta t}+\beta\Delta t^{2}\,\ddot{u}_{t}$ (10)

$\dot{u}_{t}=\dot{u}_{t-\Delta t}+(1-\gamma)\Delta t\,\ddot{u}_{t-\Delta t}+\gamma\Delta t\,\ddot{u}_{t}$ (11)

We follow the average acceleration approach with $\beta=\frac{1}{4}$ and $\gamma=\frac{1}{2}$. The Newmark beta method solves the algebraic form of the equation of motion (EOM) of undamped forced vibration at the end of the time interval $t+\Delta t$

$F_{t+\Delta t}=\mathbf{M}\ddot{u}_{t+\Delta t}+\mathbf{K}u_{t+\Delta t}$ (12)

The stiffness and mass matrices are combined in the following fashion to reduce the EOM to the algebraic form of equation (15):

$\mathbf{\hat{K}}=\mathbf{K}+a_{0}\mathbf{M}$ (13)

where $\hat{K}$ is the effective stiffness matrix and $a_{0}=\frac{6}{\gamma\Delta t^{2}}$. Similarly, the effective load vector at time $t+\Delta t$ is calculated as

$\hat{F}_{t+\Delta t}=F_{t+\Delta t}+\mathbf{M}(a_{0}u_{t}+a_{2}\dot{u}_{t}+a_{3}\ddot{u}_{t})$ (14)

Here, $a_{2}=\frac{1}{\gamma\Delta t}$ and $a_{3}=\frac{1}{2\gamma}$.
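Collecting the stepping scheme into code, a minimal sketch of an undamped Newmark-beta integrator is given below. It uses the textbook average-acceleration constants, which play the role of the $a_0$, $a_2$, $a_3$ above, and assumes that $\mathbf{M}$ and $\mathbf{K}$ come from the LEM assembly; the toy usage at the end is purely illustrative.

```python
import numpy as np

def newmark_undamped(M, K, F, dt, u0, v0, beta=0.25, gamma=0.5):
    """Transient solution of M u'' + K u = F(t), Eq. (12), by Newmark-beta.

    Textbook average-acceleration form; the constants below play the role
    of a0, a2, a3 in Eqs. (13)-(14). F has shape (n_steps, n_dof).
    """
    a0 = 1.0 / (beta * dt**2)
    a2 = 1.0 / (beta * dt)
    a3 = 1.0 / (2.0 * beta) - 1.0
    K_eff = K + a0 * M                                    # effective stiffness, Eq. (13)
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, F[0] - K @ u)                  # consistent initial acceleration
    U = np.empty_like(F)
    for t in range(F.shape[0]):
        F_eff = F[t] + M @ (a0 * u + a2 * v + a3 * a)     # effective load, Eq. (14)
        u_new = np.linalg.solve(K_eff, F_eff)             # solve the effective system
        a_new = a0 * (u_new - u) - a2 * v - a3 * a        # back out the acceleration
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)  # velocity update, Eq. (11)
        u, a = u_new, a_new
        U[t] = u
    return U

# two-DOF toy usage: a short rectangular impulse on the first DOF
M = np.eye(2)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
F = np.zeros((1000, 2)); F[:10, 0] = 1.0
U = newmark_undamped(M, K, F, dt=0.01, u0=np.zeros(2), v0=np.zeros(2))
```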
Combining the effective stiffness matrix of Eq. (13) with the effective load vector of Eq. (14), the equation of motion reduces to the algebraic form

$\left\{\hat{F}_{t+\Delta t}\right\}=\left[\mathbf{\hat{K}}\right]\left\{U_{t+\Delta t}\right\}$ (15)

From this equation, the displacement of each node is calculated for every time step. The natural frequencies of the system are calculated as given below:

$\omega^{2}[M]\Phi=[K]\Phi$ (16)

$\omega^{2}={\rm eig}([M]^{-1}[K])$ (17)

A detailed description of the theory and implementation of the dynamic Lattice-Element Method, with validation and verification against analytical and numerical benchmarks, is given in (Rizvi et al., 2018, 2020).

### 2.2 Wave field identification by Convolutional Neural Networks

The basic idea of deep-learning-based damage detection in the given case of propagating wave fields is the identification of wave field patterns, or more precisely of the change in wave field patterns, during the damage evolution. To apply the idea, the excitation and receiver points are kept constant during the monitoring process. The damage evolution process starts from the initial static case of the given plate. After a change of the surrounding static stress conditions, damage can be created at different positions in the plate, depending on the stress conditions and the material parameters. Before and after a damage scenario, a small-strain wave field is excited that propagates through the plate. Because of the damage/crack within the plate, the pattern of the propagating wave field is modified. The interaction of the wave field with the crack is essential for identifying the correct wave field. Under the assumption of an open crack, neglecting shear slipping and crack growth under dynamic loads, the crack will produce a mode conversion and a scattering of the propagating wave field. These phenomena are studied in (Rizvi et al., 2018, 2020), where it becomes obvious that the transient solution captures them. For future application of the DNN damage detection to real structures, the virtual tool has to be optimized and needs sufficient training data. The numerical data should provide the basis of the training data for the CNN.

### 2.3 Training of 1D CNN-Based Detector for Damage Detection

#### 2.3.1 The Damage Detection Dataset

##### An Overview of the Training Data Set - The Wave Propagation Data

The wave field data were produced by numerical simulations on virtual 2D plates as study examples. After the application of a Dirac input, the elastic wave propagates from the source point into the plate. Adding source points at different locations of the plate generates different wave patterns, which are recorded by a set of receiver points. Particularly when the plate has a crack inside, the wave patterns are very different from those in crack-free plates. Because of the crack, the wave field shadow as well as the reflection become visible and change the pattern of the wave field around the damaged region. The plate has fixed boundary conditions at the bottom and no constraints on the other sides. The wave source can change its location along the plate boundary. Receivers that record the wave displacement are assumed to be along the free boundary as well as on the inner surface of the plate. The sequential measurements of time-dependent displacement amplitudes are used as the data source for the planned damage detection network. The instantaneous load applied at a chosen excitation point causes high displacement amplitudes in the wave field, which decrease rapidly after several time steps to the wave coda at a smaller strain level.
The observable surface wave front, as well as the interference of back-scattered and reflected wave fields, is the result of the wave propagation and reflection in the plate. The described phenomena are clearly visible in the example in Section 3.1. In this paper, the whole content of the wave field - the initial wave front, the coherent part and the diffusive part - is used for the damage detection in the time domain. No selection or analysis of harmonic wave modes in the coherent part of the wave field (Wuttke et al., 2012b), and no application of the interferometric method to the coda in the diffusive part of the wave field (Wuttke et al., 2012a), is performed yet.

##### The Damage Detection Dataset

The dataset is generated by running the numerical simulation repeatedly for randomly generated plates with or without a crack. In this study, the size of all plates is set to $0.01\,{\rm m}\times 0.01\,{\rm m}$ and the lower-left plate corner is defined as the origin of the Cartesian coordinate system. The wave field is generated by an initial Dirac impulse (spanning one time step). The simulation runs for 2000 time steps of $10^{-9}$ seconds each. With the excitation of the wave field, the recording at all receiver points starts, forming the data basis for the deep detection network. During the method development, possible negative effects of large displacement values on the deep learning model were analysed. The plate, receivers and excitation points are shown in Figure 1. The resulting displacement wave field consists of time histories in the X- and Y-directions at 81 receiver positions with 2000 time steps. To validate the damage detection method with randomly generated cracks, the crack itself is described by 3 parameters, i.e., crack length $l$, orientation $\alpha$, and start position $(x,y)$, with their values being randomly chosen from the following ranges,

$\displaystyle l$ $\displaystyle\in(0,\frac{1}{2}\min(e_{x},\,e_{y})],$ (18) $\displaystyle\alpha$ $\displaystyle\in[0,360],$ (19) $\displaystyle x$ $\displaystyle\in[s_{x},\,e_{x}-s_{x}],$ (20) $\displaystyle y$ $\displaystyle\in[s_{y},\,e_{y}-s_{y}],$ (21)

where $e_{x}$ and $e_{y}$ are the lengths of the sample edges along the x-axis and y-axis, and $s_{x}$ and $s_{y}$ are the distances between two receivers in the X- and Y-directions (see Figure 1). If a randomly generated crack stretches out of the sample plate, the excess part is discarded; a sketch of this sampling is given below. The plate particles that correspond to the crack are marked as removed for the Lattice-Element model calculation. To start the identification scenario, i.e., detecting the damage in a given plate, a binary image of the damage is provided as its label for the comparison between the identified structure and the original structure. The binary image covers the plate’s surface and indicates the locations where a crack exists. The label image is originally obtained at a 100x100 resolution, where each pixel covers an area similar to the size of a particle. When the model is adjusted to refine or enlarge predictions, the resolution of the label image can be changed accordingly. Figure 2 shows two label images of different resolutions for the same plate. The image of 16x16 pixels is resized and binarized from the image of 100x100 pixels. We use the image of the reduced resolution (16x16) as the supervision signal for training the detector network. As the proposed model makes predictions for each pixel, the label image of 16x16 resolution restricts the problem scale while still maintaining the model’s applicability.
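For illustration, the sampling of Eqs. (18)-(21) can be written as below; the variable names, the receiver-spacing value in the usage line, and the clipping of the excess crack part are our assumptions about the described procedure.

```python
import numpy as np

def sample_crack(ex, ey, sx, sy, rng):
    """Draw crack length, orientation, and start position per Eqs. (18)-(21).

    ex, ey: edge lengths of the plate; sx, sy: receiver spacings.
    """
    l = rng.uniform(0.0, 0.5 * min(ex, ey))        # Eq. (18), half-open approximation
    alpha = rng.uniform(0.0, 360.0)                # Eq. (19), degrees
    x = rng.uniform(sx, ex - sx)                   # Eq. (20)
    y = rng.uniform(sy, ey - sy)                   # Eq. (21)
    # end point; the part stretching out of the plate is clipped, as in the text
    x2 = np.clip(x + l * np.cos(np.radians(alpha)), 0.0, ex)
    y2 = np.clip(y + l * np.sin(np.radians(alpha)), 0.0, ey)
    return (x, y), (x2, y2), alpha, l

rng = np.random.default_rng(4)
print(sample_crack(0.01, 0.01, 0.00125, 0.00125, rng))  # spacing assumed: 0.01 m / 8
```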
Figure 1: The receiver arrangement on the surface of a plate.

Figure 2: Plate with crack and its binarised labels at different resolutions. (A) The generated sample with one crack inside, in particle view. (B) The 100x100 binary label image for the plate (white pixels indicate the existence of the crack). (C) The 16x16 binary label image for the plate.

The numerical simulation randomly generates plate particles and cracks within the plate domain using pre-defined boundary conditions. Samples with these randomnesses in plate and crack generation are called Type-N samples. Additionally, plates without any crack inside are also generated randomly as reference samples, marked as Type-R. It is also possible to generate samples of different plates with similar cracks (Type-S) or of the same plates with different cracks (Type-C). In this work, 3040 samples for training and 320 samples for testing were generated. In the training dataset, nearly half of the cases are Type-N; Type-R, Type-S, and Type-C cases make up the remaining portion of the training data. The testing data consist of 144 Type-N samples, 8 Type-R samples, 160 Type-C samples, and 8 Type-S samples. The Type-C samples are generated from 10 different samples with 16 different cracks for each sample (see Figure 20 in the Appendix). Among all test samples, we intentionally generated 7 random samples, each of which has its counterpart in the training dataset in terms of the same crack (no-crack cases are excluded here). It is worth emphasizing that these samples are not repeated ones: because of the randomness in the generation process, the diversity of the interior particles and their wave field patterns is ensured.

#### 2.3.2 Crack Detection Models with CNN Detector

The proposed damage detection model is trained to detect the existing crack in the 2D plate and the damage location based on the wave field pattern. The training covers a series of wave fields in randomly generated plates with or without cracks. The receivers are placed inside the plate and along the free boundaries to record the coordinate-dependent wave field time histories. Here, the displacements in the x- and y-directions are recorded at the receiver points. These typical 1D Euclidean time series are the data basis for the 1D convolution filters within the feature extractors. The proposed deep network model consists of three components: a set of 1D-CNN layers acting as a wave pattern (WP) extractor to extract WPs from the input displacement time histories; two fully convolutional layers operating on the time dimension and on the receiver dimension to fuse the wave patterns; and a predictor module taking the fused features and making predictions of crack existence (Fig. 3).

Figure 3: Conceptual explanation of the proposed crack detection model with 1D-CNN detector. (A) Diagram of a 1D-CNN that transforms a discretized wave input and produces a discrete feature sequence. (B) Internal structure of a 1D-CNN layer, consisting of a trainable weight $W^{(h)}$ and a bias $b^{(h)}$; the activation function is represented by $\sigma$. (C) Diagram of the complete structure of the proposed model.

Let $N$ denote the number of receivers that are placed on the plate; they record displacements for $T$ time steps after a load excitation is applied. The reading of the $k^{\mathrm{th}}$ receiver at time $t$ is denoted as $s_{k}^{(t)}$, where $k\in\{1,2,\cdots,N\}$ and $t\in\{1,2,\cdots,T\}$. The surface of the plate is decomposed into a regular grid.
Each cell within the grid covers a small area of the plate surface. The number of cells determines the spatial resolution of the prediction model. The cell size was chosen to be 10 times smaller than the wavelength. A finer resolution requires a larger number of cells, with each cell covering a smaller area on the sample surface. In contrast, a coarser resolution results in fewer cells and a larger coverage for a single cell. The resulting cells are denoted by their column and row indices as $c_{i,j}$. The model makes a binary classification on every grid cell to decide whether damage exists in the cell. To summarize, the model can be written as the following equation, where $f$ represents the proposed model and $\theta$ are all trainable parameters; $p_{i,j}$ is the probability of damage existence in $c_{i,j}$.

$f_{\theta}(s_{1}^{(1)},\cdots,s_{k}^{(t)},\cdots,s_{N}^{(T)})=\{p_{i,j}\}$ (22)

##### CNN

A CNN is a feed-forward network that consists of trainable multistage feature extractors. The feature extractors are trained in an optimization procedure, where the gradient of an objective function with respect to the weights of the feature extractors is calculated and back-propagated (Rumelhart et al., 1986). CNNs are particularly useful for analysing natural signals, which can be represented as arrays of different dimensionalities: sequential signals, including language and audio, as 1D arrays; images as 2D arrays; and video and volumetric images as 3D arrays. The core operation of a CNN is to calculate the convolution of input signals with a set of trainable filters. The transformation produces different kinds of features from the input signals in the spatial (e.g., images) or temporal (e.g., audio) domain (LeCun et al., 1995). In its implementation, a CNN differentiates itself from other ANNs by using the local connection and weight sharing strategy. In CNNs, one “neuron” connects locally with only a restricted number of “neurons” in its previous layer, and the connection weights are shared among all the “neurons”. Local connectivity, weight sharing, and pooling are the CNN's three key properties for dealing with natural signals (LeCun et al., 2015). A typical CNN architecture is composed first of some convolutional layers (ConvLayers), followed by further ConvLayers or fully connected layers. A ConvLayer detects local features from the previous layer by transforming the input signal with a set of filters. It produces different kinds of feature maps with respect to its filters; then an activation operation is applied to the feature maps. The non-linear activation functions “squeeze” the values of a feature map into a certain range, mostly $[0,1]$, $[-0.5,0.5]$ or $[0,+\infty)$. Sometimes a pooling layer is used for down-sampling the feature maps by taking local average or local maximum values. The pooling layer merges semantically similar features into a higher level (LeCun et al., 2015). The pooling layer can be intentionally replaced by setting a larger stride in the convolution layer (Springenberg et al., 2015). Figure 4 depicts a schematic drawing of applying n 1D convolutions to N receiver inputs of T steps. The kernel size is denoted as m. For each receiver’s input, the convolution produces n features. The red mark indicates the data patch used for convolution and the corresponding results in the feature maps.

Figure 4: The schematic drawing of 1D convolution on N receiver data in T steps.
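To make the shapes in Figure 4 concrete, the small TensorFlow sketch below applies a time-only convolution to $N=81$ receiver records of $T=2000$ steps; the filter count n and kernel size m are arbitrary illustrative choices, and the 1D convolution is realized as a 2D convolution whose kernel spans a single receiver.

```python
import tensorflow as tf

# One plate: N receivers x T time steps x 2 displacement components (u_x, u_y).
N, T = 81, 2000
x = tf.random.normal((1, N, T, 2))                    # batch size 1

# Time-only convolution as in Figure 4: kernel spans 1 receiver and m time
# steps; n filters yield n feature sequences per receiver.
n, m = 16, 9                                          # illustrative choices
conv = tf.keras.layers.Conv2D(filters=n, kernel_size=(1, m), padding="same")
print(conv(x).shape)                                  # (1, 81, 2000, 16)
```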
##### The Network Structure

The detailed implementation of the proposed model is described in this part. The WP-Extractor consists of three 1D-ConvLayer blocks. Each block has two 1D-ConvLayers, each followed by a BatchNormalization layer (Ioffe and Szegedy, 2015) and an activation layer using the LeakyReLU function (He et al., 2015). The BatchNormalization layer standardizes the output of the ConvLayer by re-scaling according to the samples in a batch. It prevents dramatic changes of the spread and distribution of inputs during training, and has the effect of stabilizing and speeding up the training process. LeakyReLU is a variation of the Rectified Linear Unit activation function (ReLU). The ReLU function results in zero when the input is less than zero and keeps the input unchanged when the input is above or equal to zero. The LeakyReLU function scales down the value when the input is less than zero and thus allows a small, non-zero gradient when the unit is not active. The configurations of the two ConvLayers in the same block are identical, while the kernel size and number of filters vary between blocks. To reduce the size of the extracted wave patterns, the output of each 1D-ConvBlock is passed through a MaxPooling layer. Because the 1D-ConvLayers operate only on the time dimension, the WP-Extractor extracts the wave patterns of each receiver separately. For each case, the input data have shape $N\times T$, where $N$ represents the number of receivers and $T$ the time steps. The output of the WP-Extractor has shape $N\times\hat{T}\times C$, where $C$ is the filter number of the last ConvBlock. The first fusion layer transforms each receiver's data into a 1D vector by passing it through a fully convolutional layer. The convolution kernel of this layer is $1\times\hat{T}$, so the receptive field of the ConvLayer covers the whole time domain. The resulting data transformations are given by an $N\times 1\times C^{\prime}$ array. Afterwards, the transformed data are reshaped according to the positions of the receivers into a $9\times 9\times C^{\prime}$ array ($N=9\times 9$, as shown in Figure 1). The second fusion layer, a 2D fully convolutional layer, is used to fuse the information of all receivers. This 2D ConvLayer employs a kernel of $9\times 9$ and thus produces a $1\times 1\times C^{\prime\prime}$ array. The core module of the predictor is composed of two Transpose-Convolutional layers (TransConvLayers, sometimes called deconvolutions). The two TransConvLayers are used to up-sample the fused data. TransConvLayers are widely used in image generation tasks, such as DCGAN (Radford et al., 2016). The fused data are first reshaped to $4\times 4\times c$ and then passed through the two TransConvLayers. The ready-for-prediction data shape is $16\times 16\times 4$. Finally, the data are passed through a 2D ConvLayer with $1\times 1$ kernels and a single output channel. The sigmoid function is used in this ConvLayer as activation to make sure the prediction ranges from 0 to 1. The layer configuration and wave pattern shapes are shown in Figure 5, where K refers to the kernel size, S to the stride, and F to the number of filters. The stride of the MaxPooling layers is set to 4. The 1D convolution can be implemented using a 2D convolution by fixing the kernel size of the receiver dimension to 1.

Figure 5: Detailed design of the damage detection model. (A) The network architecture. (B) Shapes of the input and output of each component.
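A hedged Keras sketch of the described architecture is given below. Only the shape flow ($81\times 2000\times 2$ input, $9\times 9$ receiver grid, $4\times 4\times c$ reshape, $16\times 16\times 4$ up-sampled maps, sigmoid $1\times 1$ output) is taken from the text; the filter counts, kernel sizes, and the activations inside the TransConvLayers are illustrative assumptions, for which Figure 5 holds the exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters, kernel):
    """1D-ConvBlock: two time-only convolutions, each with BatchNorm + LeakyReLU."""
    for _ in range(2):
        x = layers.Conv2D(filters, (1, kernel), padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU()(x)
    return layers.MaxPooling2D(pool_size=(1, 4))(x)       # stride 4 on the time axis

inp = layers.Input(shape=(81, 2000, 2))                   # N receivers x T steps x (ux, uy)
x = conv_block(inp, 16, 9)                                # filter counts / kernel sizes
x = conv_block(x, 32, 9)                                  #   are illustrative guesses;
x = conv_block(x, 64, 9)                                  # time: 2000 -> 500 -> 125 -> 31
x = layers.Conv2D(128, (1, x.shape[2]))(x)                # fusion 1: kernel 1 x T_hat
x = layers.Reshape((9, 9, 128))(x)                        # receivers on a 9 x 9 grid
x = layers.Conv2D(256, (9, 9))(x)                         # fusion 2: all receivers
x = layers.Reshape((4, 4, 16))(x)
x = layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(4, 3, strides=2, padding="same", activation="relu")(x)
out = layers.Conv2D(1, 1, activation="sigmoid")(x)        # 16 x 16 crack probabilities
model = tf.keras.Model(inp, out)
model.summary()
```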
##### The Loss Function

Training the proposed model is an optimization procedure and relies on an objective function (also known as the loss function). In this work, the proposed model makes a prediction of crack existence for each single patch; in other words, the model makes multiple binary predictions. For single binary classification problems, the cross entropy (CE) loss (see Equation 23) is probably the most commonly used loss function. However, considering that the “has crack” patches constitute only a small portion of the total patches, the CE loss can introduce a bias towards “no crack” predictions, i.e., simply predicting all patches as “no crack” already results in a rather low loss value. To tackle such extreme class imbalance, we select the Focal Loss (FL) as our loss function. FL was originally proposed to address the extreme class imbalance in object detection (Lin et al., 2017). FL is a variation of the CE loss that adds a penalty term to reduce the loss value of already correctly predicted training cases. The penalty term $(1-p_{t})^{\gamma}$ ($\gamma\geq 0$) re-weights between difficult and easy examples. During training, if a sample is already predicted correctly with a high probability, it is called an “easy” case. The penalty term reduces its loss and thus focuses the training on “hard” cases, where correct predictions are made with a much lower probability.

$L_{\mathrm{CE}}=-\frac{1}{N}\sum^{N}_{i=1}\log(p_{t}^{(i)}),\quad\mathrm{where}\;p_{t}=\begin{cases}p&\;\mathrm{if}\;y=1,\\ 1-p&\;\mathrm{otherwise}\end{cases}$ (23)

where $y$ is the label (1 or 0). To adjust the loss values of the two binary classes, a weighting factor $\alpha\in[0,1]$ can be added. Similar to the definition of $p_{t}$, $\alpha_{t}$ is defined as $\alpha$ for class 1 and $1-\alpha$ for class 0. The focal loss is written as:

$\mathrm{FL}(p_{t})=-\alpha_{t}(1-p_{t})^{\gamma}\log(p_{t}),$ (24)

For the crack detection case, let $\boldsymbol{y}$ denote the binary label image and $\hat{\boldsymbol{y}}=\{p_{i,j}\}$ a predicted image. The average FL over N cases is calculated by:

$\mathrm{FL}=\frac{1}{N}\sum_{n=1}^{N}\sum_{i=1}^{U}\sum_{j=1}^{V}-\alpha_{t}\left(1-p_{t}^{(n,i,j)}\right)^{\gamma}\log\left(p_{t}^{(n,i,j)}\right),$ (25)

where $U$ and $V$ are the numbers of columns and rows of the grid. Two hyper-parameters are introduced by the focal loss, $\alpha$ and $\gamma$. When $\gamma=0$, FL is equivalent to the weighted CE. When $\gamma$ increases, the modulating factor uses a lower standard to define easy examples and has greater power to down-weight well-classified examples. In practice, the optimal $\alpha$ and $\gamma$ are found by empirical studies.
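A direct TensorFlow implementation of Eqs. (24)-(25) could read as follows; the default $\alpha$ and $\gamma$ values are the common choices from Lin et al. (2017), not the tuned values of Section 3.3.1.

```python
import tensorflow as tf

def focal_loss(alpha=0.25, gamma=2.0):
    """Binary focal loss, Eqs. (24)-(25).

    y_true / y_pred are per-cell labels and probabilities, e.g. 16x16 grids.
    """
    def loss(y_true, y_pred):
        y_true = tf.cast(y_true, y_pred.dtype)
        eps = tf.keras.backend.epsilon()
        p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)       # Eq. (23)
        alpha_t = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)
        fl = -alpha_t * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t + eps)
        return tf.reduce_mean(fl)          # mean over cells and batch, Eq. (25)
    return loss

# usage sketch, matching the paper's Adam setup with learning rate 0.0002:
# model.compile(optimizer=tf.keras.optimizers.Adam(2e-4),
#               loss=focal_loss(alpha=0.25, gamma=2.0))
```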
The training epochs were set to 150 for all experiments to ensure sufficient training steps for the models to converge. The best model with respect to the evaluation metric is saved for evaluation. As mentioned in the previous section, the focal loss has two hyper-parameters, $\alpha$ and $\gamma$. To determine suitable hyper-parameters, a set of models was trained with different $\alpha$ and $\gamma$ values. The detailed selection of $\alpha$ and $\gamma$ is given in Section 3.2.1. The model and simulation code are implemented in Python with TensorFlow 2.1 (Keras). The simulations are performed on a Windows 10 workstation with an Nvidia GPU.

##### Evaluation Metrics

In this section, we introduce the Dice similarity coefficient (DSC) metric (defined as $M_{\mathrm{DSC}}=2\cdot\frac{\mathrm{recall}\cdot \mathrm{precision}}{\mathrm{recall}+\mathrm{precision}}$) and the IoU-based accuracy for model evaluation. Although the loss value indicates the quality of the predictions on a per-patch basis, the evaluation should focus on how well cracks are detected. We expect the predicted "has crack" patches to cover as many damaged patches as possible and to include as few undamaged patches as possible. In the damage detection study, if the model correctly predicts that a cell has (has no) damage, the result is marked as TP (TN); if the model makes a wrong prediction on damage existence, the result is either FP or FN, as shown in Table 1. Based on this, we can calculate the precision ($TP/(TP+FP)$) and the recall ($TP/(TP+FN)$);

| | Right Prediction (T) | Wrong Prediction (F)
---|---|---
Has Damage (P) | TP | FP
No Damage (N) | TN | FN

Table 1: Prediction typology in a binary-classification-based damage detection.

then the DSC metric can be calculated with Equation 26. Similarly, the IoU is the ratio between the intersection and the union of the ground-truth damage area (TDA) and the predicted damage area (PDA). As illustrated in Figure 6, TDA is bounded by dashed red lines and PDA is marked by solid red lines. Their intersection is the TP set, where the model makes correct predictions. The remaining part of PDA is wrongly identified as damaged area, i.e., the FP set; the remaining part of TDA that contains damage but has not been found forms the FN set. For a single case, the IoU can be calculated with Equation 27 by counting the number of elements in each set. Both the DSC and IoU metrics range between 0 and 1. If there is no overlap between PDA and TDA, both metrics equal 0. When PDA is closer to TDA, the intersection area becomes larger and the union area becomes smaller, resulting in values closer to 1. When PDA covers TDA exactly, the DSC and IoU metrics reach their upper limit of 1. When a sample has no damage and the model makes correct predictions, both the intersection and the union become 0; in this special case, both the DSC and IoU metrics are assigned the value 1.

$M_{\mathrm{DSC}}=2\cdot\frac{\mathrm{area}(TP)}{2\cdot\mathrm{area}(TP)+\mathrm{area}(FP)+\mathrm{area}(FN)}$ (26)

$\mathrm{IoU}=\frac{\mathrm{area}(TP)}{\mathrm{area}(TP)+\mathrm{area}(FP)+\mathrm{area}(FN)}$ (27)

Figure 6: Conceptual example of calculating the IoU.

One underlying assumption of the data generation is that each sample contains at most one crack. Based on this assumption, we can define the accuracy using IoU values. For the prediction of a single case, we consider it a "correct" prediction if its IoU is greater than a given threshold.
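For reference, Equations (26) and (27), including the no-damage special case, can be computed per sample along the following lines (a sketch; the function and array names are illustrative, not taken from the released code):

```python
import numpy as np

def iou_and_dsc(y_true, y_pred, t_bin=0.5):
    """Per-sample IoU and DSC from a binary label grid and predicted
    probabilities; t_bin is the binarizing threshold for crack existence."""
    pred = (y_pred >= t_bin)
    true = (y_true > 0.5)
    tp = np.sum(pred & true)
    fp = np.sum(pred & ~true)
    fn = np.sum(~pred & true)
    if tp + fp + fn == 0:            # no damage, correctly predicted:
        return 1.0, 1.0              # both metrics assigned the value 1
    iou = tp / (tp + fp + fn)                  # Equation (27)
    dsc = 2.0 * tp / (2.0 * tp + fp + fn)      # Equation (26)
    return iou, dsc
```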
Given the threshold, the accuracy on the whole dataset is calculated as the ratio of the number of samples whose IoU value is greater than the threshold to the total number of evaluated samples.

## 3 Results

### 3.1 Simulated Displacement Wave Field Using the Dynamic Lattice

The dynamic lattice method described in Section 2.1 is used to simulate the wave fields in a 2D plate. The considered boundary conditions with different excitation points and crack conditions are shown in Fig. 7. Fig. 8 shows the simulated wave fields in the lateral direction for the boundary condition of Fig. 7a. The simulated wave fields with a generated crack (Fig. 7b) are shown in Fig. 9. For the condition shown in Fig. 7c (excitation point at the upper middle boundary), the wave field is plotted in Fig. 10. The results clearly show wave shadows behind the crack as well as the reflection of the wave field from the defined crack surface.

Figure 7: Boundary conditions: (a) horizontal excitation without a generated crack, (b) horizontal excitation with a generated crack, and (c) vertical excitation with a generated crack.

Figure 8: Six frames (at intervals of 100 time steps, from left to right) of the displacement ($u_{x}$) wave propagation inside the plate defined in Fig. 7a.

Figure 9: Six frames (at intervals of 100 time steps, from left to right) of the displacement ($u_{x}$) wave propagation inside the plate defined in Fig. 7b.

Figure 10: Six frames (at intervals of 100 time steps, from left to right) of the displacement ($u_{y}$) wave propagation inside the plate defined in Fig. 7c.

The time histories ($u_{x}$) of the reference points ($R_{1:9}$) inside the plate (Fig. 11) are shown in Fig. 12. Two boundary conditions are considered: one with a discontinuity (crack) and one without. The plate dimension is $10\times 10$ cm and the load excitation is at the left middle boundary. The applied rectangular impulse load with a magnitude of 1 kN is kept for 10 time steps, where $\Delta t=1\times 10^{-8}\,s$. The Young's modulus of the plate is set to 5 GPa. Fig. 12 clearly shows the arrival time of the wave fields at each reference point. The closest reference point ($R_{2}$) has the maximum amplitude and the minimum arrival time. In Fig. 12a the arrival times of the wave field at $R_{1}$ and $R_{3}$, at $R_{4}$ and $R_{6}$, and at $R_{7}$ and $R_{9}$ are approximately equal. Due to the generated discontinuity (crack) in Fig. 12b, the first arrival times of the wave field at $R_{3}$, $R_{6}$ and $R_{9}$ are delayed. These reference points are located in the shadow field behind the generated discontinuity. Taking a closer look at $R_{2}$, it is evident that, due to the wave reflection from the generated discontinuity, the second wave field arrives approximately $1.45\times 10^{-6}\,s$ sooner than in the first boundary condition. The length, location and orientation of the discontinuities affect the wave field in the domain. The simulated wave fields at the reference points are used for training and developing the artificial neural network model.

Figure 11: The boundary conditions and assigned reference points: (a) without a crack, (b) with a generated crack.

Figure 12: The time histories ($u_{x}$) of the reference points inside the plate: (a) without a crack, (b) with a generated crack.

### 3.2 The Trained Damage Detection Model

#### 3.2.1 Identified Optimal Hyper-parameters for the Focal Loss

FL introduces two hyper-parameters, $\gamma$ and $\alpha$. They are used to adjust the loss value of a prediction during training, so that the training can focus on specific types of training cases.
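A quick numerical check of Equation (24) illustrates how $\gamma$ down-weights well-classified predictions (values rounded; this small illustration is ours, not part of the original experiments):

```python
import math

def focal_loss_value(p_t, gamma=0.0, alpha_t=1.0):
    # FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t), Equation (24)
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

for p_t in (0.4, 0.6, 0.9):
    print(p_t,
          round(focal_loss_value(p_t, gamma=1), 4),
          round(focal_loss_value(p_t, gamma=5), 4))
# at p_t = 0.6: gamma=1 gives ~0.204 while gamma=5 keeps only ~0.005,
# so a large gamma makes already-correct predictions nearly loss-free
```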
Figure 13 shows the FL value for different $\gamma$ and $\alpha$ values. Figure 13 (A) is a remake of Figure 1 in (Lin et al., 2017). With the penalty term, the loss value decreases as the probability of making a correct prediction increases. $\gamma$ controls the decay strength, and a larger $\gamma$ makes the loss decrease faster. For example, when $\gamma=5$, predictions with $p_{t}>0.4$ hardly contribute to the loss, whereas when $\gamma=1$, predictions with $p_{t}>0.6$ still contribute to the loss. Meanwhile, $\alpha$ can be used to re-weight the two binary classes (has crack and has no crack) (Figure 13 (B)-(D)). When $\alpha$ is used for one class, the other class is re-weighted by $(1-\alpha)$. Choosing a small $\alpha$ for a class obviously decreases the contribution of the whole class to the loss. For example, if $\alpha=0.1$ is chosen for the "has crack" class and $\gamma=5$, the predictions can hardly be improved further once $p_{t}$ is greater than $0.3$. In particular, when $\gamma=0$ and $\alpha=0.5$, the FL is equivalent to the cross-entropy (CE) loss.

Figure 13: The focal loss values for different $\gamma$ and $\alpha$ values. (A) Focal loss without the weight $\alpha$. (B) Focal loss with different $\alpha$ values and fixed $\gamma=0$. (C) Focal loss with different $\alpha$ values and fixed $\gamma=2$. (D) Focal loss with different $\alpha$ values and fixed $\gamma=5$.

In this paper, $\gamma$ and $\alpha$ control the model's learning strength on the "no crack" and "has crack" classes. As $\alpha$ increases, the model is driven to focus on damaged cells, because false predictions on damaged cells contribute more to the overall loss. As $\gamma$ increases, the model is trained to focus on "hard" cells, where the model cannot make predictions with high confidence, because a higher $\gamma$ value forces the model to pay more attention to these "hard" cells. The evaluations of the models trained with different $\alpha$ values are given in Table 2. When assigning larger weights (larger $\alpha$) to the "has crack" class for the CE loss, the trained model tends to have lower precision and higher recall. This can be interpreted as the model's tendency to give more "has crack" predictions. On the contrary, using smaller weights (smaller $\alpha$) for the "has crack" class results in higher precision but lower recall, meaning the models tend to give fewer "has crack" predictions. When $\alpha=0.9$, the CE loss yields the model with the highest accuracy; however, this model is also characterised by low precision and high recall, and its DSC value is high but not optimal. The results using the focal loss are shown in Table 3. By adding the penalty term with a carefully chosen $\gamma$, the trained models balance precision and recall, and thus achieve an increase in the IoU and DSC metrics. The accuracy is also improved compared to the models trained with the CE loss. We consider two combinations of $\alpha$ and $\gamma$, namely $\alpha=0.35$, $\gamma=0.2$ and $\alpha=0.9$, $\gamma=0.4$, to balance precision and recall and to achieve high DSC and accuracy at the same time.

$\alpha$ | prec. | recall | IoU | DSC | accu.
---|---|---|---|---|---
0.1 | 0.852 | 0.606 | 0.549 | 0.709 | 0.600
0.2 | 0.917 | 0.666 | 0.629 | 0.772 | 0.697
0.25 | 0.864 | 0.603 | 0.551 | 0.710 | 0.609
0.3 | 0.895 | 0.645 | 0.600 | 0.750 | 0.672
0.35 | 0.906 | 0.705 | 0.656 | 0.793 | 0.706
0.5 | 0.882 | 0.675 | 0.619 | 0.765 | 0.666
0.75 | 0.756 | 0.762 | 0.611 | 0.759 | 0.706
0.9 | 0.839 | 0.734 | 0.643 | 0.783 | 0.738
0.95 | 0.834 | 0.737 | 0.643 | 0.783 | 0.691

Table 2: The evaluation results (including IoU, DSC, and accuracy) of models trained with varying $\alpha$ for the CE loss ($\gamma=0$).

$\gamma$ | $\alpha$ | prec. | recall | IoU | DSC | accu.
---|---|---|---|---|---|---
0.1 | 0.25 | 0.916 | 0.668 | 0.630 | 0.773 | 0.703
0.2 | 0.35 | 0.880 | 0.730 | 0.664 | 0.798 | 0.756
0.4 | 0.9 | 0.803 | 0.779 | 0.654 | 0.791 | 0.769
1 | 0.5 | 0.884 | 0.711 | 0.650 | 0.788 | 0.731
2 | 0.75 | 0.846 | 0.746 | 0.657 | 0.793 | 0.725
4 | 0.95 | 0.816 | 0.748 | 0.640 | 0.781 | 0.722

Table 3: The evaluation results (including IoU, DSC, and accuracy) of models trained with varying $\gamma$ for the FL loss (with the corresponding optimal $\alpha$).

#### 3.2.2 The Selected Thresholds

The accuracy depends on two threshold settings: the threshold for crack existence in a pixel, and the threshold for a correct prediction of a sample. The first threshold defines the probability value above which a pixel is considered to have a crack inside; in this work, it is also referred to as the binarizing threshold ($T_{bin}$). The second threshold is set as a "tolerance" (marked as $T_{tol}$) for the prediction. The "tolerance" allows a prediction to be "correct" when the predicted "has crack" pixels cover a certain area of the crack, i.e., when its IoU score is greater than the threshold. The strictest criterion requires the predicted "has crack" patches to cover the true crack-existing area exactly, i.e., $\mathrm{IoU}=1$, to count as a "correct prediction". The FL function pushes the predicted probabilities of "has crack" and "no crack" towards opposite extremes, because a prediction with a probability close to 0.5 incurs a large penalty during training. This is also illustrated in Figures 14 and 15, which result from the recommended models trained with $\alpha=0.9$, $\gamma=0.4$ and $\alpha=0.35$, $\gamma=0.2$. Sub-figures A of both figures suggest that most damaged cells are correctly predicted with a probability above 0.5, while a minor set of "hard cases" receive borderline predictions around 0.5, with about 45 cases that neither model can handle properly. In both sub-figures B, the accumulated histograms give a clearer comparison of the prediction quality for different $T_{bin}$ values. They show that different $T_{bin}$ values produce similar accumulative histogram curves, which suggests that most "no crack" cells and many "has crack" cells are predicted with very high confidence. We choose $T_{bin}=0.5$, as it also fits the configuration of the FL loss. The curves begin to rise when the IoU value reaches 0.5. This guides the choice of $T_{tol}$ for the evaluation: the number of cases with IoU values between 0 and 0.5 is relatively small, and the counts accumulate quickly once $\mathrm{IoU}>0.5$.

Figure 14: The distribution of IoU values among the 320 test cases. The IoU values are calculated by the model trained with $\gamma=0.4$ and $\alpha=0.9$.
A: the 6 histograms of IoU values calculated with 6 different binarization thresholds ($T_{bin}$ = 0.3, 0.4, 0.5, 0.6, 0.7, 0.8); B: the accumulated histograms of IoU values calculated with the same 6 thresholds.

Figure 15: The distribution of IoU values among the 320 test cases. The IoU values are calculated by the model trained with $\gamma=0.2$ and $\alpha=0.35$. A: the 6 histograms of IoU values calculated with 6 different binarization thresholds ($T_{bin}$ = 0.3, 0.4, 0.5, 0.6, 0.7, 0.8); B: the accumulated histograms of IoU values calculated with the same 6 thresholds.

#### 3.2.3 Discussion on Model Performance

The histograms of the distribution of the test data (Figures 14 and 15) indicate that both models are not good at detecting a minor set of damaged cases in the test data. We first examine the distribution of crack size (the crack size is the pixel count, or the percentage of "has crack" pixels, in the 100x100 labeling image) in the training and test data. As shown in Figure 16, samples with small cracks constitute a larger portion of the test data than of the training data. To further explore the relation between crack size and model performance, we plot the histogram of crack size against IoU values for the test data (Figure 17); the IoU values are calculated from the model predictions with $T_{bin}=0.5$. We can easily see that the IoU values can be very low for samples with tiny cracks, whereas the IoU values for samples with larger cracks are mostly above 0.5. In other words, most low-quality predictions are made for samples with tiny cracks, while cases with larger cracks generally receive better predictions. This means the developed model can easily distinguish between damaged and undamaged cases for large cracks, but is not good at detecting tiny cracks. The accuracy, adjusted by excluding samples with small cracks from the test data, is shown in Figure 18; it confirms that the proposed model is particularly good at identifying larger cracks. If we only count the cases with a crack size greater than 0.002, the accuracy leaps by around 0.1. When excluding the cases with small cracks (crack size less than 0.003), the accuracy of the proposed model already exceeds 0.9. If we only count the cases with larger cracks (crack size greater than 0.004), the accuracy of both models reaches 0.95.

Figure 16: Histogram of the crack size distribution in the training and testing data. A: crack size distribution in the training data; B: crack size distribution in the testing data.

Figure 17: Histogram of the crack size distribution and IoU values for the test data. A: results from the model trained with $\alpha=0.9$, $\gamma=0.4$; B: results from the model trained with $\alpha=0.35$, $\gamma=0.2$.

Figure 18: The adjusted accuracy calculated for the test data after excluding tiny-crack cases. A: predictions made by the model with $\alpha=0.9$, $\gamma=0.4$; B: predictions made by the model with $\alpha=0.35$, $\gamma=0.2$. The line plot presents the accuracy re-calculated when excluding cases with a crack size less than 0.001, 0.002, 0.003, and 0.004; the bar chart shows the number of remaining samples after excluding the samples with tiny cracks against the total number of samples in the test dataset.

## 4 Conclusion

The paper presents a new approach to detect damages with wave pattern recognition models.
The major development is a CNN that learns to detect the visible wave pattern of the damaged zone within a solid structure. To generate the cracked structures, a new dynamic lattice element method was used. The major advantage of this method is its applicability to heterogeneous structures under mechanical, hydraulic and thermal field influences as well as local chemical changes, to describe the evolution of damage in solid structures. The use of a new generation of deep CNNs to analyse the time dependency within the changed wave patterns is promising. With the described method, a stable detection of 90 percent of the generated large cracks was possible. The next steps will be the reduction of the number of receivers used and the improvement of the model's ability to detect tiny cracks.

## Acknowledgement

The presented work is supported by the Kompetenzzentrum Geo-Energie (KGE) (https://www.kge.uni-kiel.de/, last access: 14 February 2021). We gratefully acknowledge the funding of KGE by the European Union - European Regional Development Fund (EFRE). We also thank the funding of the project 'Seismic identification of concrete structures' funded by the Federal Ministry of Economic Affairs and Industry - BMWI and the German Federation of Industrial Research Associations - ZIM/AIF under funding code ZF4016813FF8. Furthermore, we would like to acknowledge the thoughtful reviews of the reviewers and the editor, and their constructive comments supporting the manuscript revision.

## References

* Abdeljaber et al. [2017] Osama Abdeljaber, Onur Avci, Serkan Kiranyaz, Moncef Gabbouj, and Daniel J Inman. Real-time vibration-based structural damage detection using one-dimensional convolutional neural networks. _Journal of Sound and Vibration_, 388:154–170, 2017.
* Avci et al. [2021] Onur Avci, Osama Abdeljaber, Serkan Kiranyaz, Mohammed Hussein, Moncef Gabbouj, and Daniel J. Inman. A review of vibration-based damage detection in civil structures: From traditional methods to machine learning and deep learning applications. _Mechanical Systems and Signal Processing_, 147:107077, 2021. ISSN 0888-3270. doi: https://doi.org/10.1016/j.ymssp.2020.107077. URL http://www.sciencedirect.com/science/article/pii/S0888327020304635.
* Farhangdoust and Mehrabi [2019] Saman Farhangdoust and Armin Mehrabi. Health monitoring of closure joints in accelerated bridge construction: A review of non-destructive testing application. _Journal of Advanced Concrete Technology_, 17(7):381–404, 2019. doi: 10.3151/jact.17.381.
* Girshick [2015] Ross Girshick. Fast R-CNN. In _Proceedings of the IEEE international conference on computer vision_, pages 1440–1448, 2015.
* Gulgec et al. [2019] Nur Sila Gulgec, Martin Takáč, and Shamim N. Pakzad. Convolutional neural network approach for robust structural damage detection and localization. _Journal of Computing in Civil Engineering_, 33(3):04019005, 2019. doi: 10.1061/(ASCE)CP.1943-5487.0000820.
* Guo et al. [2020] Tian Guo, Lianping Wu, Cunjun Wang, and Zili Xu. Damage detection in a novel deep-learning framework: a robust method for feature extraction. _Structural Health Monitoring_, 19(2):424–442, 2020. doi: 10.1177/1475921719846051. URL https://doi.org/10.1177/1475921719846051.
* He et al. [2015] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In _Proceedings of the IEEE international conference on computer vision_, pages 1026–1034, 2015.
* Ioffe and Szegedy [2015] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In _International Conference on Machine Learning_, pages 448–456, 2015.
* Kaewunruen and Remennikov [2006] S. Kaewunruen and A. Remennikov. Non-destructive testing (NDT): A tool for dynamic health monitoring of railway track structures. 2006.
* Khan et al. [2019] Asif Khan, Dae-Kwan Ko, Soo Chul Lim, and Heung Soo Kim. Structural vibration-based classification and prediction of delamination in smart composite laminates using deep learning neural network. _Composites Part B: Engineering_, 161:586–594, 2019. ISSN 1359-8368. doi: https://doi.org/10.1016/j.compositesb.2018.12.118. URL http://www.sciencedirect.com/science/article/pii/S1359836818325411.
* Kim [2014] Yoon Kim. Convolutional neural networks for sentence classification. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 1746–1751, 2014.
* Kingma and Ba [2015] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_, 2015.
* LeCun et al. [1995] Yann LeCun, Yoshua Bengio, et al. Convolutional networks for images, speech, and time series. _The handbook of brain theory and neural networks_, 3361(10):1995, 1995.
* LeCun et al. [2015] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. _Nature_, 521(7553):436–444, 2015.
* Lin et al. [2017] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollar. Focal loss for dense object detection. In _The IEEE International Conference on Computer Vision (ICCV)_, October 2017.
* Moukarzel and Herrmann [1992] C. Moukarzel and H. J. Herrmann. A vectorizable random lattice. _J. Stat. Phys._, 68:911–923, 1992.
* Nunes et al. [2020] Lorena Andrade Nunes, Rafaelle Piazzaroli Finotti Amaral, Flávio de Souza Barbosa, and Alexandre Abrahão Cury. A hybrid learning strategy for structural damage detection. _Structural Health Monitoring_, 2020. doi: 10.1177/1475921720966943.
* Radford et al. [2016] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In Yoshua Bengio and Yann LeCun, editors, _4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings_, 2016.
* Rautela and Gopalakrishnan [2019] Mahindra Rautela and S. Gopalakrishnan. Deep learning frameworks for wave propagation-based damage detection in 1d-waveguides. In _Proceedings of the 11th International Symposium NDT in Aerospace_, 2019.
* Rautela and Gopalakrishnan [2020] Mahindra Rautela and S. Gopalakrishnan. Ultrasonic guided wave based structural damage detection and localization using model assisted convolutional and recurrent neural networks. _Expert Systems with Applications_, page 114189, 2020. ISSN 0957-4174. doi: https://doi.org/10.1016/j.eswa.2020.114189. URL http://www.sciencedirect.com/science/article/pii/S0957417420309234.
* Rizvi et al. [2018] Zarghaam Haider Rizvi, Frank Wuttke, and Amir Shorian Sattari. Dynamic analysis by lattice element method simulation. In Wei Wu and Hai-Sui Yu, editors, _Proceedings of China-Europe Conference on Geotechnical Engineering_, pages 405–409, Cham, 2018. Springer International Publishing.
* Rizvi et al.
[2019] Zarghaam Haider Rizvi, Mijo Nikolić, and Frank Wuttke. Lattice element method for simulations of failure in bio-cemented sands. _Granular Matter_, 21, 2019. doi: https://doi.org/10.1007/s10035-019-0878-6.
* Rizvi et al. [2020] Zarghaam Haider Rizvi, Syed Husain Mustafa, Amir Shorian Sattari, Shahbaz Ahmad, Peter Furtner, and Frank Wuttke. Dynamic lattice element modelling of cemented geomaterials. In Amit Prashant, Ajanta Sachan, and Chandrakant S. Desai, editors, _Advances in Computer Methods and Geomechanics_, pages 655–665, Singapore, 2020. Springer Singapore.
* Ruder [2016] Sebastian Ruder. An overview of gradient descent optimization algorithms, 2016.
* Rumelhart et al. [1986] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. _Nature_, 323(6088):533–536, 1986.
* Sattari et al. [2017] A.S. Sattari, Z.H. Rizvi, H.B. Motra, and F. Wuttke. Meso-scale modeling of heat transport in a heterogeneous cemented geomaterial by lattice element method. _Granular Matter_, 19(66), 2017.
* Sha and Edwards [2007] W. Sha and K.L. Edwards. The use of artificial neural networks in materials science based research. _Materials & Design_, 28(6):1747–1752, 2007. ISSN 0261-3069. doi: https://doi.org/10.1016/j.matdes.2007.02.009. URL http://www.sciencedirect.com/science/article/pii/S0261306907000520.
* Springenberg et al. [2015] J. Springenberg, Alexey Dosovitskiy, Thomas Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. In _ICLR (workshop track)_, 2015.
* van den Oord et al. [2016] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio, 2016.
* Wong et al. [2014] John Kam-wing Wong, Kenichi Soga, Xiaomin Xu, and Jean-Yves Delenne. Modelling fracturing process of geomaterial using lattice element method. 2014.
* Wuttke et al. [2012a] F. Wuttke, M. Asslan, and T. Schanz. Time-lapse monitoring of fabric changes in granular materials by coda wave interferometry. _Geotechnical Testing Journal_, 35(2):353–362, February 2012a. ISSN 0149-6115. doi: 10.1520/GTJ103546. URL https://www.astm.org/DIGITAL_LIBRARY/JOURNALS/GEOTECH/PAGES/GTJ103546.htm.
* Wuttke et al. [2012b] F. Wuttke, K. Markwardt, and T. Schanz. Dispersion analysis in geotechnical laboratory tests: Time-frequency and time-scale signal transforms. _Geotechnical Testing Journal_, 35(5):703–714, August 2012b. ISSN 0149-6115. doi: 10.1520/GTJ103727. URL https://www.astm.org/DIGITAL_LIBRARY/JOURNALS/GEOTECH/PAGES/GTJ103727.htm.

## Appendix A Example of Predictions

The following figures show the true crack occurrence and the predictions made by our best performing models for 320 valid cases arranged in 16 columns and 20 rows. In Figure 19, the 5 different case types (see Section 2.3 "The Damage Detection Dataset") are marked with differently coloured letters. Figure 20 shows the true occurrence of cracks in full resolution (100 x 100) and Figure 21 gives the true occurrence in reduced resolution (16 x 16). The models in this work are trained on the labels of reduced resolution. The outputs in Figures 22-25 are produced by the two models trained with the focal loss with $\alpha=0.9$, $\gamma=0.4$ and $\alpha=0.35$, $\gamma=0.2$, the two recommended combinations found in our experiments. Figures 22 and 24 show the predicted probability of crack existence for each pixel, where a brighter pixel indicates a higher probability of a crack inside.
Figures 23 and 25 show the corresponding binary predictions, where a pixel with a probability greater than the threshold of 0.5 is considered to have a crack inside.

Figure 19: The categories of the 320 test cases. The test cases are categorised into four types: 1) randomly generated samples with randomly generated cracks (Type-N), 2) randomly generated samples with no crack (Type-R), 3) randomly generated samples with similar cracks (Type-S), and 4) the same sample with different cracks (Type-C). They are marked by the coloured marks. The special cases (marked as "SC" in yellow) are the 7 cases we intentionally generated with the same cracks as in the training data but from different samples.

Figure 20: The true crack occurrence in full resolution (100 x 100) for the 320 testing cases.

Figure 21: The true crack occurrence in reduced resolution (16 x 16) for the 320 testing cases.

Figure 22: The predicted probability of crack existence per pixel for the 320 testing cases; a brighter pixel indicates a higher probability of a crack inside. Produced by the model trained with $\alpha=0.9$ and $\gamma=0.4$.

Figure 23: The binary predictions made by the model trained with $\alpha=0.9$ and $\gamma=0.4$, where a pixel with a probability greater than a certain threshold (0.5) is considered to have a crack inside.

Figure 24: The predicted probability of crack existence per pixel for the 320 testing cases; a brighter pixel indicates a higher probability of a crack inside. Produced by the model trained with $\alpha=0.35$ and $\gamma=0.2$.

Figure 25: The binary predictions made by the model trained with $\alpha=0.35$ and $\gamma=0.2$, where a pixel with a probability greater than a certain threshold (0.5) is considered to have a crack inside.
# Distance-based regression analysis for measuring associations∗

SHI Yuke $\cdot$ ZHANG Wei $\cdot$ LIU Aiyi $\cdot$ LI Qizhai

SHI Yuke _LSC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China._<EMAIL_ADDRESS>

ZHANG Wei _LSC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China._<EMAIL_ADDRESS>

LIU Aiyi _Biostatistics and Bioinformatics Branch, Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, Bethesda, MD 20847, USA._<EMAIL_ADDRESS>

LI Qizhai (Corresponding author) _LSC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China._<EMAIL_ADDRESS>

∗This work was partially supported by Beijing Natural Science Foundation (Z180006).

DOI: Received: x x 20xx / Revised: x x 20xx ©The Editorial Office of JSSC & Springer-Verlag GmbH Germany 2018

The distance-based regression model, as a nonparametric multivariate method, has been widely used to detect the association between variation in a distance or dissimilarity matrix for outcomes and predictor variables of interest in genetic association studies, genomic analyses, and many other research areas. Based on it, a pseudo-$F$ statistic that partitions the variation in the distance matrix is often constructed to achieve this aim. To the best of our knowledge, the statistical properties of the pseudo-$F$ statistic have not yet been well established in the literature. To fill this gap, we study the asymptotic null distribution of the pseudo-$F$ statistic and show that it is asymptotically equivalent to a mixture of chi-squared random variables. Given that the pseudo-$F$ test statistic has unsatisfactory power when the correlations of the response variables are large, we propose a square-root $F$-type test statistic which replaces the similarity matrix with its square root. The asymptotic null distribution of the new test statistic and the power of both tests are also investigated. Simulation studies are conducted to validate the asymptotic distributions of the tests and demonstrate that the proposed test has more robust power than the pseudo-$F$ test. Both test statistics are exemplified with a gene expression dataset for a prostate cancer pathway.

Asymptotic distribution, Chi-squared-type mixture, Nonparametric test, Pseudo-$F$ test, Similarity matrix.

## 1 Introduction

The distance-based regression model, as a multivariate method for testing the relationship between variation in a distance or dissimilarity matrix of outcomes and predictor variables of interest, has been widely used in a variety of applications including genetics [8], genomics [26, 15], microbiome studies [24, 16, 12, 20], and other research areas such as geoscience [14], environmental science [22], and oceanography [1, 4]. Unlike conventional multivariate methods, the starting point of the distance-based regression model is the pairwise distances or dissimilarities between subjects instead of the individual observations. This model was originally introduced by McArdle and Anderson [13] to analyze the simultaneous responses of multiple species to several factors in ecological experimental designs. Subsequently, with the development of high-throughput technologies, it has received considerable attention in high-dimensional data analysis.
For example, Wessel and Schork [21] applied the distance-based regression model to relate variation measured by a genomic distance for multiple phenotypes to multiple variants; Zapala and Schork [26] proposed a multivariate regression approach based on a distance matrix to detect associations between the expression patterns of groups of genes and related predictor variables; Chen et al. [3] developed a distance-based statistical test based on generalized UniFrac distances to detect the association between microbiome composition and environmental covariates; and Gambi et al. [5] applied it to detect the impact of historical sulfide mine tailings discharge on meiofaunal assemblages.

By partitioning the variation in a distance or dissimilarity matrix, a pseudo-$F$ test statistic, analogous to the conventional univariate $F$ statistic in ANOVA or general linear models, is constructed to detect the association between multiple response and predictor variables [18]. However, to the best of our knowledge, the distribution or asymptotic distribution of the pseudo-$F$ test statistic is unknown, with only some results under normality assumptions and Euclidean or Mahalanobis distance measures [9]. For the general case, it has not yet been studied in the literature. Resampling procedures are often used to calculate the statistical significance of the pseudo-$F$ test statistic [13, 21, 26, 11, 8, 3]. However, they are generally computationally intensive, especially for high-dimensional data. To address this problem, we study the asymptotic distribution of the pseudo-$F$ statistic. We show that under the null hypothesis, the pseudo-$F$ statistic is asymptotically equivalent to a mixture of chi-squared random variables, with weights proportional to the eigenvalues of the similarity matrix. Such a chi-squared-type mixture can have a large critical value when the correlation coefficients of the response variables are large, since in this case the eigenvalues of the similarity matrix vary widely. Thus the pseudo-$F$ statistic may suffer from a substantial power loss. We therefore propose a square-root $F$-type test statistic, which reduces the differences among the eigenvalues of the similarity matrix by replacing the similarity matrix with its square root. The asymptotic null distribution is also established for the new test statistic.

The rest of the article is organized as follows. We introduce the pseudo-$F$ test statistic and the proposed test statistic in Section 2. The main theoretical results for the test statistics are given in Section 3. Simulation studies are conducted to validate the asymptotic distributions of the pseudo-$F$ and square-root $F$-type test statistics in Section 4. In Section 5, we illustrate the two tests with gene expression data from a prostate cancer pathway study. Some conclusions are drawn in Section 6. Technical details are relegated to the Appendix.

## 2 Test statistics

We introduce the pseudo-$F$ test statistic by taking a similarity matrix as the starting point, since a distance matrix can easily be transformed into a similarity matrix. Let $S=(s_{ij})_{n\times n}$ be a similarity matrix for $k$ response variables among $n$ subjects, where $s_{ij}$ is the pairwise similarity between subjects $i$ and $j$ constructed from a positive definite kernel $\psi(y_{i},y_{j})$, and $y_{i}$ and $y_{j}$ are two independent high-dimensional observations of the response variable $\mathbb{Y}$, $i,j=1,\cdots,n$.
Let $\mathbb{X}=(\mathbb{X}_{1},\cdots,\mathbb{X}_{m})^{\top}$ be $m$ predictor variables of interest. Denote the observation of $\mathbb{X}$ for the $i$th subject by $x_{i}=(x_{i1},\cdots,x_{im})^{\top}$, $i=1,\cdots,n$, and write $X=(x_{1},\cdots,x_{n})^{\top}$. We refer to the regression model relating $S$ and $X$ as a distance-based regression model, denoted by $S\sim X$. Our interest is in testing the null hypothesis $H_{0}$: there is no association between $\mathbb{Y}$ and $\mathbb{X}$. An $F$-type test statistic proposed by McArdle and Anderson [13, p. 3] can be used for this purpose, which is given by

$T_{pseudo}=\frac{\text{tr}\big(H_{X}{\bf H}S{\bf H}/m\big)}{\text{tr}\big(({\bf I}_{n}-H_{X}){\bf H}S{\bf H}/(n-m)\big)},$

where $H_{X}=X(X^{\top}X)^{-1}X^{\top}$ is the traditional “hat” matrix, ${\bf H}={\bf I}_{n}-{\bf 1}_{n}{\bf 1}_{n}^{\top}/n$, ${\bf I}_{n}$ is the $n\times n$ identity matrix, ${\bf 1}_{n}$ is an $n$-dimensional column vector with all elements being 1, and $\text{tr}(\cdot)$ denotes the trace of a matrix. Since the distribution or asymptotic distribution of $T_{pseudo}$ is unknown, its statistical significance is conventionally calculated using resampling procedures, which are usually computationally intensive, especially when $n$ is large. One of the goals of this article is to establish the asymptotic distribution of the pseudo-$F$ test statistic $T_{pseudo}$.

The numerical results in Section 4 below show that $T_{pseudo}$ is sensitive to the correlation matrix of the response variables. To boost the power when the response variables are moderately or highly correlated, we propose a square-root $F$-type test statistic, which uses the square root of the similarity matrix. The square-root $F$-type statistic is defined as

$T_{sqrt}=\frac{\text{tr}\big(H_{X}({\bf H}S{\bf H})^{1/2}/m\big)}{\text{tr}\big(({\bf I}_{n}-H_{X})({\bf H}S{\bf H})^{1/2}/(n-m)\big)},$

where the square root $B$ of a matrix $A$ satisfies $A=BB$. Consider the eigenvalues $\{\lambda_{i}\}_{i=1}^{n}$ and eigenvectors $\{v_{i}\}_{i=1}^{n}$ of $\tilde{S}={\bf H}S{\bf H}$, so that $\tilde{S}^{1/2}=(v_{1},\cdots,v_{n})\,\mathrm{diag}(\lambda_{1}^{1/2},\cdots,\lambda_{n}^{1/2})\,(v_{1},\cdots,v_{n})^{\top}$. The numerators of $T_{pseudo}$ and $T_{sqrt}$ can then be intuitively expressed as $\sum_{i=1}^{n}\lambda_{i}v_{i}^{\top}\tilde{v}_{i}/m$ and $\sum_{i=1}^{n}\lambda_{i}^{1/2}v_{i}^{\top}\tilde{v}_{i}/m$, where the nonzero eigenvalues of $H_{X}$ are all $1$ and $\{\tilde{v}_{i}\}_{i=1}^{n}$ are the eigenvectors of $H_{X}$. Intuitively, when the response variables are moderately correlated, our square-root method may increase the weight of informative factors and thus boost the power.

## 3 Theoretical properties and implementation

In this section, we derive the asymptotic distributions of the pseudo-$F$ and the proposed test statistics. Assume that $E(\mathbb{X})=\mu$ and $\text{cov}(\mathbb{X})=\Delta=(\delta_{ij})_{m\times m}$. It thus follows that $E(\mathbb{X}\mathbb{X}^{\top})=\Delta+\mu\mu^{\top}\triangleq\tilde{\Delta}=(\tilde{\delta}_{ij})_{m\times m}$. Denote the eigenvalues of $\tilde{S}={\bf H}S{\bf H}$ by $\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\geq 0$.
Besides, let $\lambda^{*}_{1}\geq\cdots\geq\lambda^{*}_{n}\geq 0$ be the solutions to the equation

$\int\psi_{0}(y_{1},y_{2})u_{i}(y_{1})p(y_{1})dy_{1}=\lambda^{*}_{i}u_{i}(y_{2}),$

where $\psi_{0}(y_{1},y_{2})=\psi(y_{1},y_{2})-E_{y_{1}}(\psi(y_{1},y_{2}))-E_{y_{2}}(\psi(y_{1},y_{2}))+E_{y_{1},y_{2}}(\psi(y_{1},y_{2}))$ and $u_{i}(y_{1})$ is the eigenfunction of the kernel $\psi_{0}(y_{1},y_{2})$ corresponding to $\lambda^{*}_{i}$, $i=1,\cdots,n$. Throughout, we assume that there exist positive constants $c_{0}$ and $c_{1}$ such that $E(\mathbb{X}_{j}^{4})\leq c_{0}$ and $E(s_{ii}^{2})\leq c_{1}$ for $j=1,\cdots,m$ and $i=1,\cdots,n$. The symbol $``\stackrel{d}{\longrightarrow}"$ denotes “converges in distribution to” and $``\stackrel{p}{\longrightarrow}"$ denotes “converges in probability to”.

### 3.1 Asymptotic null distributions

In the following two lemmas, we give the asymptotic null distributions of the numerator and denominator of the pseudo-$F$ test statistic, respectively.

###### Lemma 3.1.

Under the null hypothesis $H_{0}$, the numerator of $T_{pseudo}$, $\text{tr}\big(H_{X}{\bf H}S{\bf H}\big)$, has the same asymptotic distribution as $\frac{1}{n}\sum_{i=1}^{n}\lambda_{i}\xi_{i}$ when $n\rightarrow\infty$, where $\xi_{i}=\varpi_{i}+\varrho_{i}/(1+\mu^{\top}\Delta^{-1}\mu)$, $\varpi_{1},\ldots,\varpi_{n}$ are independent and identically distributed (iid) chi-squared random variables with $m-1$ degrees of freedom, $\varrho_{1},\cdots,\varrho_{n}$ are iid chi-squared random variables with 1 degree of freedom, i.e., $\varpi_{i}\stackrel{\text{iid}}{\sim}\chi^{2}_{m-1}$ and $\varrho_{i}\stackrel{\text{iid}}{\sim}\chi^{2}_{1}$, and $\varpi_{i}$ and $\varrho_{i}$ are independent, $i=1,\cdots,n$. In particular, when $\mu=\bm{0}_{m}$, an $m$-dimensional column vector of zeros, $\xi_{i}\stackrel{\text{iid}}{\sim}\chi^{2}_{m}$.

###### Lemma 3.2.

As $n\rightarrow\infty$, the denominator of $T_{pseudo}$ satisfies

$\text{tr}\big(({\bf I}_{n}-H_{X}){\bf H}S{\bf H}/(n-m)\big)\stackrel{p}{\longrightarrow}E(s_{11})-E(s_{12}),$

where $E(s_{11})=E(\psi(y_{1},y_{1}))$, $E(s_{12})=E(\psi(y_{1},y_{2}))$, and $y_{1}$ and $y_{2}$ are two independent observations of $\mathbb{Y}$.

All proofs are presented in the Appendix. The two lemmas lead to the following theorem concerning the asymptotic distribution of the pseudo-$F$ test statistic.

###### Theorem 3.3.

Under the null hypothesis $H_{0}$, $T_{pseudo}$ has the same asymptotic distribution as $m^{-1}\sum_{i=1}^{n}w_{i}\xi_{i}$ when $n\rightarrow\infty$, where $\xi_{i}=\varpi_{i}+\varrho_{i}/(1+\mu^{\top}\Delta^{-1}\mu)$, $\varpi_{i}\stackrel{\text{iid}}{\sim}\chi^{2}_{m-1}$ and $\varrho_{i}\stackrel{\text{iid}}{\sim}\chi^{2}_{1}$ are mutually independent, $w_{i}=\lambda_{i}/\sum_{j=1}^{n}\lambda_{j}$, and $\{\lambda_{1},\cdots,\lambda_{n}\}$ are the eigenvalues of ${\bf H}S{\bf H}$ in descending order, $i=1,\cdots,n$. In particular, when $\mu={\bm{0}}_{m}$, $\xi_{i}\stackrel{\text{iid}}{\sim}\chi^{2}_{m}$, $i=1,\cdots,n$.

This theorem shows that the pseudo-$F$ test statistic has the same asymptotic distribution as a mixture of chi-squared random variables. From the proofs presented in the Appendix, we can see that the asymptotic results do not depend on the dimension of the response variables and are therefore valid for high-dimensional data.
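To make the statistics and Theorem 3.3 concrete, the following NumPy sketch computes $T_{pseudo}$ and $T_{sqrt}$ from a similarity matrix $S$ and design matrix $X$, and approximates the mixture null of Theorem 3.3 by Monte Carlo in the special case $\mu=\bm{0}_{m}$, where $\xi_{i}\sim\chi^{2}_{m}$. The function names are illustrative; Section 3.2 below gives the general parameter bootstrap:

```python
import numpy as np

def pseudo_f_tests(S, X):
    """T_pseudo and T_sqrt from similarity matrix S (n x n) and design
    matrix X (n x m), following the definitions in Section 2."""
    n, m = X.shape
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix H
    HX = X @ np.linalg.solve(X.T @ X, X.T)       # hat matrix H_X
    S_c = H @ S @ H                              # HSH
    # symmetric square root of HSH via its eigendecomposition
    lam, V = np.linalg.eigh(S_c)
    lam = np.clip(lam, 0.0, None)                # guard tiny negatives
    S_half = (V * np.sqrt(lam)) @ V.T
    t_pseudo = (np.trace(HX @ S_c) / m) / (np.trace((np.eye(n) - HX) @ S_c) / (n - m))
    t_sqrt = (np.trace(HX @ S_half) / m) / (np.trace((np.eye(n) - HX) @ S_half) / (n - m))
    return t_pseudo, t_sqrt, lam

def mixture_pvalue(t_obs, lam, m, B=1000, sqrt_weights=False, seed=None):
    """Monte Carlo p-value from the chi-squared mixtures of
    Theorems 3.3 (weights w_i) and 3.6 (weights eta_i), mu = 0 case."""
    rng = np.random.default_rng(seed)
    w = np.sqrt(lam) if sqrt_weights else lam
    w = w / w.sum()
    xi = rng.chisquare(m, size=(B, len(w)))      # xi_i ~ chi^2_m, iid
    T = (xi @ w) / m
    return float(np.mean(T >= t_obs))
```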
To derive the asymptotic distribution of the proposed test statistic $T_{sqrt}$, we need the following assumption.

Assumption 3.1. $\sum_{i=1}^{\infty}(\lambda^{*}_{i})^{1/2}<\infty$ and $\sum_{i=1}^{\infty}\big|(\lambda_{i}/n)^{1/2}-(\lambda^{*}_{i})^{1/2}\big|\stackrel{p}{\longrightarrow}0$.

Note that Gretton et al. [7] also assumed that $\sum_{i=1}^{\infty}(\lambda^{*}_{i})^{1/2}<\infty$, and Zhang [27] showed that $(\lambda_{i}/n)^{1/2}\stackrel{p}{\longrightarrow}(\lambda^{*}_{i})^{1/2}$ for $i=1,2,\cdots$. This implies that the assumption is easily satisfied when the number of eigenvalues of the kernel $\psi(\cdot,\cdot)$ is finite, such as for a linear kernel. By Theorem 1 of Gretton et al. [7] and Assumption 3.1, we have the following proposition.

###### Proposition 3.4.

Assume that $\{z_{1},z_{2},\cdots\}$ is an infinite sequence of iid standard Gaussian variables and Assumption 3.1 holds. Then $\sum_{i=1}^{\infty}\big((\lambda_{i}/n)^{1/2}-(\lambda^{*}_{i})^{1/2}\big)z_{i}^{2}\stackrel{p}{\longrightarrow}0$ as $n\rightarrow\infty$.

###### Lemma 3.5.

For any symmetric positive semidefinite matrices ${\bf D}_{1}$ and ${\bf D}_{2}$ of dimension $n$, the eigenvalues of ${\bf D}={\bf D}_{1}{\bf D}_{2}$ are nonnegative, so that $\text{tr}({\bf D}^{\top}{\bf D})\leq\text{tr}({\bf D})^{2}$.

When ${\bf D}$ itself is a symmetric positive semidefinite matrix, the result in Lemma 3.5 holds trivially, since the sum of the squares of the nonnegative eigenvalues is no larger than the square of their sum. Lemma 3.5 is vital for showing that a matrix formed as the product of two symmetric positive semidefinite matrices still has this property. Based on it and Proposition 3.4, the asymptotic null distribution of the proposed test statistic follows.

###### Theorem 3.6.

When Assumption 3.1 holds, $T_{sqrt}$ has the same asymptotic distribution as $m^{-1}\sum_{i=1}^{n}\eta_{i}\xi_{i}$ when $n\rightarrow\infty$, where $\xi_{i}=\varpi_{i}+\varrho_{i}/(1+\mu^{\top}\Delta^{-1}\mu)$, $\varpi_{i}\stackrel{\text{iid}}{\sim}\chi^{2}_{m-1}$ and $\varrho_{i}\stackrel{\text{iid}}{\sim}\chi^{2}_{1}$ are mutually independent, $\eta_{i}=\lambda_{i}^{1/2}/\sum_{j=1}^{n}\lambda_{j}^{1/2}$, and $\{\lambda_{1},\cdots,\lambda_{n}\}$ are the eigenvalues of ${\bf H}S{\bf H}$ in descending order, $i=1,\cdots,n$. In particular, when $\mu=\bm{0}_{m}$, $\xi_{i}\stackrel{\text{iid}}{\sim}\chi^{2}_{m}$, $i=1,\cdots,n$.

### 3.2 Computation issue

$T_{pseudo}$ and $T_{sqrt}$ have asymptotic null distributions of the same form, namely mixtures of chi-squared random variables, whose density functions involve multiple integrations. To ease computation and increase efficiency, we employ a parameter bootstrap procedure to approximate them in finite-sample cases, since the explicit formulas of the distributions of $T_{pseudo}$, $T_{sqrt}$ and the two mixtures are all intricate.

A parameter bootstrap procedure:

Step 1. Set a large $B$, for example $B=1000$.

Step 2. Randomly generate $n$ pairs of observations from the chi-squared distributions with $m-1$ and $1$ degrees of freedom, denoted by $\{(\varpi_{i},\varrho_{i})\,|\,i=1,\cdots,n\}$. Let $T_{1}=m^{-1}\sum_{i=1}^{n}w_{i}[\varpi_{i}+\varrho_{i}/(1+\mu^{\top}\Delta^{-1}\mu)]$ and $T_{2}=m^{-1}\sum_{i=1}^{n}\eta_{i}[\varpi_{i}+\varrho_{i}/(1+\mu^{\top}\Delta^{-1}\mu)]$.

Step 3. Repeat Step 2 $B$ times and denote the obtained statistics $T_{1}$ and $T_{2}$ by $T_{11},\cdots,T_{1B}$ and $T_{21},\cdots,T_{2B}$.
Step 4. By the law of large numbers, the p-values of $T_{pseudo}$ and $T_{sqrt}$ can be empirically estimated by

$p_{pseudo}=\frac{1}{B}\sum\limits_{i=1}^{B}I\{T_{1i}\geq t_{pseudo}\}$ and $p_{sqrt}=\frac{1}{B}\sum\limits_{i=1}^{B}I\{T_{2i}\geq t_{sqrt}\},$

where $t_{pseudo}$ and $t_{sqrt}$ are the observed values of $T_{pseudo}$ and $T_{sqrt}$, respectively, and $I\{\cdot\}$ is an indicator function.

It is worth pointing out that the proposed parameter bootstrap procedure is much faster than the original bootstrap procedure. For a sample of size $n$, the original bootstrap procedure needs to generate $n$ individual observations, perform $n(n-1)/2$ calculations to obtain the similarity matrix, and conduct two matrix multiplications, while the proposed procedure only needs to generate $2n$ random samples and calculate some summations, which runs very fast. For example, it takes 0.57 seconds to run the parameter bootstrap with $B=1000$ and $n=500$ on an Intel Core(TM) i9-9900 CPU, while it takes 246.85 seconds to run the original bootstrap procedure under the same setting.

In addition to the parameter bootstrap procedure, one can use the Box scaled $\chi^{2}$-approximation [2] or the generalized gamma distribution approximation [10] by matching several of their cumulants. Based on Theorems 3.3 and 3.6, the cumulants of $T_{pseudo}$ and $T_{sqrt}$ can be estimated easily. In particular, the $l$th cumulants of $T_{pseudo}$ and $T_{sqrt}$ are estimated by $b_{l}=2^{l-1}(l-1)!m_{0}\sum_{i=1}^{n}(w_{i}/m)^{l}$ and $h_{l}=2^{l-1}(l-1)!m_{0}\sum_{i=1}^{n}(\eta_{i}/m)^{l}$, respectively, $l=1,2,3,4$, where $m_{0}=m-1+1/(1+\mu^{\top}\Delta^{-1}\mu)$, $w_{i}=\lambda_{i}/\sum_{j=1}^{n}\lambda_{j}$, $\eta_{i}=\lambda_{i}^{1/2}/\sum_{j=1}^{n}\lambda_{j}^{1/2}$, and $\{\lambda_{1},\cdots,\lambda_{n}\}$ are the eigenvalues of ${\bf H}S{\bf H}$ in descending order, $i=1,\cdots,n$.

Let a random variable $X_{g}$ follow a generalized gamma distribution with density function

$f_{g}(x)=\frac{v_{g}x^{v_{g}w_{g}-1}}{\sigma_{g}^{v_{g}w_{g}}\Gamma(w_{g})}\exp\Big\{-\Big(\frac{x}{\sigma_{g}}\Big)^{v_{g}}\Big\},\quad x>0,$

where $v_{g}$, $w_{g}$ and $\sigma_{g}$ are parameters. We recommend using the distribution of $X_{g}+\theta_{g}$ to approximate those of $T_{pseudo}$ and $T_{sqrt}$, where $\theta_{g}$ is a location parameter. The unknown parameters $v_{g}$, $w_{g}$, $\sigma_{g}$, and $\theta_{g}$ can be obtained by solving the equations

$\left\{\begin{array}{ccl}b_{1}~(\hbox{or}~h_{1})&=&m_{1}+\theta_{g}\\ b_{2}~(\hbox{or}~h_{2})&=&m_{2}-m_{1}^{2}\\ b_{3}~(\hbox{or}~h_{3})&=&m_{3}-3m_{2}m_{1}+2m_{1}^{3}\\ b_{4}~(\hbox{or}~h_{4})&=&m_{4}-4m_{3}m_{1}-3m_{2}^{2}+12m_{2}m_{1}^{2}-6m_{1}^{4},\end{array}\right.$

where $m_{l}=E(X_{g}^{l})=\sigma_{g}^{l}\Gamma(v_{g}^{-1}l+w_{g})/\Gamma(w_{g})$ is the $l$th moment of $X_{g}$, $l=1,2,3,4$, and $\Gamma(\cdot)$ is the Gamma function.

### 3.3 Power analysis

To study the asymptotic power of the pseudo-$F$ test, we consider the multivariate linear model $Y=X\bm{\beta}+\varepsilon$, where $Y=(y_{1},\ldots,y_{n})^{\top}$ with $y_{i}=(y_{i1},\cdots,y_{iq})^{\top}$, ${\bm{\beta}}=(\beta_{ij})_{m\times q}$ is the effect of the predictor variables $\mathbb{X}$ on the response variables $\mathbb{Y}$, and $\varepsilon=(\varepsilon_{1},\cdots,\varepsilon_{n})^{\top}$ are random errors with $E(\varepsilon_{i})={\bm{0}}_{q}$ and $\text{cov}(\varepsilon_{i})=\Delta_{\varepsilon}$ for $i=1,\cdots,n$.
Then for a linear kernel $\psi$, i.e., $s_{ij}=\psi(\bm{y}_{i},\bm{y}_{j})$, we have the following theorem.

###### Theorem 3.7.

Assume that for some $0<\iota<0.5$, $\max_{1\leq i\leq m,1\leq j\leq q}n^{0.5-\iota}|\beta_{ij}|=\tilde{c}\neq 0$ as $n\rightarrow\infty$. Then for the linear kernel $\psi(\cdot,\cdot)$ and any $\tilde{\tau}_{0}>0$, we have

$\lim_{n\rightarrow\infty}P\Big(\Big|\frac{T_{pseudo}}{n^{2\iota}}-\frac{1}{m}\frac{\text{tr}(\tilde{\Delta}^{-1}\Delta\tilde{\bm{\beta}}\tilde{\bm{\beta}}^{\top}\Delta^{\top})}{\text{tr}(\Delta_{\varepsilon})}\Big|>\tilde{\tau}_{0}\Big)=0.$

It thus follows that $P\big(T_{pseudo}>F_{1}^{-1}(1-\alpha)\big)\stackrel{p}{\longrightarrow}1$ as $n\rightarrow\infty$, where $F_{1}$ is the asymptotic distribution function of $T_{pseudo}$, $F_{1}^{-1}(1-\alpha)$ is the $(1-\alpha)$-quantile of $F_{1}$, and $\alpha$ is the nominal significance level.

Theorem 3.7 shows that even when the effect $\bm{\beta}$ is relatively small, as small as $n^{\iota-0.5}$ with a tiny $\iota$, the power of the pseudo-$F$ test still converges to one in probability as $n\rightarrow\infty$. When $\bm{\beta}$ is fixed, the square-root $F$-type test statistic has the same asymptotic power behavior.

###### Theorem 3.8.

For the linear kernel $\psi(\cdot,\cdot)$, $P\big(T_{sqrt}>F_{2}^{-1}(1-\alpha)\big)\stackrel{p}{\longrightarrow}1$ as $n\rightarrow\infty$, where $F_{2}$ is the asymptotic distribution function of $T_{sqrt}$, $F_{2}^{-1}(1-\alpha)$ is the $(1-\alpha)$-quantile of $F_{2}$, and $\alpha$ is the nominal significance level.

## 4 Simulation studies

### 4.1 Accuracy of the p-value calculation

We evaluate through simulation studies the accuracy of the p-value calculations based on the asymptotic distributions of the pseudo-$F$ ($T_{pseudo}$) and square-root $F$-type ($T_{sqrt}$) test statistics for various correlation matrices. The similarity matrix for the response variables is constructed from the inner product, that is, $S=YY^{\top}$. The observations for the predictor variables $\mathbb{X}$ are generated from the $m$-dimensional normal distribution $N({\bf 1}_{m},{\it\bm{\Theta}}_{x})$ with mean vector ${\bf 1}_{m}=(1,\cdots,1)^{\top}$ and covariance matrix ${\it\bm{\Theta}}_{x}=\big(\theta_{ij}^{(x)}\big)_{m\times m}$, where $\theta_{ij}^{(x)}=\rho_{x}^{|i-j|}$ with $\rho_{x}=0.5$. The observations for $\mathbb{Y}$ are generated from $N_{k}({\bf 0}_{k},{\it\bm{\Theta}}_{y})$. We consider the following two correlation models for ${\it\bm{\Theta}}_{y}=\big(\theta_{ij}^{(y)}\big)_{k\times k}$:

* Model 1 (AR(1) correlation): $\theta_{ij}^{(y)}=\rho_{y}^{|i-j|}$ for $1\leq i,j\leq k$, where $\rho_{y}=0.3,0.8$.
* Model 2 (Equal correlation): $\theta_{ij}^{(y)}=\rho_{y}$ for $1\leq i\neq j\leq k$ and $\theta_{ii}^{(y)}=1$ for $1\leq i\leq k$, where $\rho_{y}=0.3,0.8$.

Thus there are a total of four correlation matrices for the outcome variables $\mathbb{Y}$. We set the sample size $n$ to 500, and let $m=5$ and $k=10$. For each correlation setting, 10000 simulation replicates are performed to evaluate the empirical sizes of the tests at significance levels ranging from 0 to 1. In each simulation, the p-values of $T_{pseudo}$ and $T_{sqrt}$ are calculated based on the asymptotic distributions via the Monte Carlo method with $B=2000$ replicates.
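Under the stated settings, the simulated data can be generated along the following lines (a sketch; the helper name is illustrative):

```python
import numpy as np

def corr_matrix(k, rho, model="ar1"):
    """Correlation matrices used in the simulations: AR(1) with
    theta_ij = rho**|i-j| (Model 1) or equal correlation (Model 2)."""
    if model == "ar1":
        idx = np.arange(k)
        return rho ** np.abs(idx[:, None] - idx[None, :])
    Theta = np.full((k, k), rho)
    np.fill_diagonal(Theta, 1.0)
    return Theta

rng = np.random.default_rng(0)
n, m, k = 500, 5, 10
X = rng.multivariate_normal(np.ones(m), corr_matrix(m, 0.5, "ar1"), size=n)
Y = rng.multivariate_normal(np.zeros(k), corr_matrix(k, 0.8, "equal"), size=n)
S = Y @ Y.T                    # inner-product similarity matrix
```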
Figures 1 and 2 below display the empirical sizes of $T_{pseudo}$ and $T_{sqrt}$ based on the asymptotic distributions, plotted against the significance levels under the various correlation matrix settings. The figures show that the empirical sizes of $T_{pseudo}$ and $T_{sqrt}$ are always very close to the corresponding significance levels, even at small significance levels. This indicates good accuracy of the asymptotic distributions derived in Theorems 3.3 and 3.6, and such accuracy is not sensitive to the correlation structure or magnitude.

Figure 1: Empirical sizes ($-\log_{10}$) of the pseudo-$F$ test statistic ($T_{pseudo}$) based on the asymptotic distribution against the significance levels ($-\log_{10}$) for the two correlation models.

Figure 2: Empirical sizes ($-\log_{10}$) of the proposed square-root $F$-type test statistic ($T_{sqrt}$) based on the asymptotic distribution against the significance levels ($-\log_{10}$) for the two correlation models.

### 4.2 Power comparison

Next we compare the type I error rates and powers of $T_{pseudo}$ and $T_{sqrt}$. The multivariate response data are generated from the linear model $Y=X{\it\bm{\beta}}+\varepsilon$, where $\varepsilon\sim N_{k}({\bf 0}_{k},{\it\bm{\Theta}}_{\varepsilon})$. The correlation matrix ${\it\bm{\Theta}}_{\varepsilon}=\big(\theta_{ij}^{(\varepsilon)}\big)_{k\times k}$ is set to have one of the following two correlation structures:

* Model 1 (AR(1) correlation): $\theta_{ij}^{(\varepsilon)}=\rho_{\varepsilon}^{|i-j|}$ for $1\leq i,j\leq k$, where $\rho_{\varepsilon}=0.3,0.8$.
* Model 2 (Equal correlation): $\theta_{ij}^{(\varepsilon)}=\rho_{\varepsilon}$ for $1\leq i\neq j\leq k$ and $\theta_{ii}^{(\varepsilon)}=1$ for $1\leq i\leq k$, where $\rho_{\varepsilon}=0.3,0.8$.

The observations $X$ for the predictor variables are generated from the multivariate normal distribution $N({\bf 1}_{m},{\it\bm{\Theta}}_{x})$ with ${\it\bm{\Theta}}_{x}=\big(\theta_{ij}^{(x)}\big)_{m\times m}$ and $\theta_{ij}^{(x)}=0.5^{|i-j|}$. For the two tests, we define the similarity matrix as $S=YY^{\top}$. To investigate the performance of the two tests under different sparsity levels of the “signals”, we choose the percentage $\tau$ of nonzero elements of ${\it\bm{\beta}}$ from $\{0\%,20\%,40\%,60\%,80\%,100\%\}$. The null hypothesis corresponds to the case $\tau=0\%$. The signals (i.e., the nonzero elements of ${\it\bm{\beta}}$) are set to $\{\log(k)/(25\tau km)\}^{1/2}+(1/k)\times N(0,0.01)$ to make the powers comparable across different settings, where $N(0,0.01)$ is the normal distribution with mean 0 and variance 0.01 and $\tau km$ is the total number of signals. We set $k=10$, $m=5$ and $n=500$. The empirical type I error rates and powers of the tests are calculated based on 1000 simulation replicates; in each simulation, 2000 Monte Carlo samples are drawn to calculate the p-values of the tests based on the asymptotic distributions. The nominal significance level is 0.05.

Table 1: Type I error rates and powers of the pseudo-$F$ ($T_{pseudo}$) and square-root $F$-type ($T_{sqrt}$) test statistics under the two correlation models.
Correlation Model | Percentage of nonzeros | $T_{pseudo}$ ($\rho=0.3$) | $T_{sqrt}$ ($\rho=0.3$) | $T_{pseudo}$ ($\rho=0.8$) | $T_{sqrt}$ ($\rho=0.8$)
---|---|---|---|---|---
1 | 0% | 0.057 | 0.054 | 0.051 | 0.048
1 | 20% | 0.909 | 0.886 | 0.507 | 0.940
1 | 40% | 0.825 | 0.741 | 0.395 | 0.663
1 | 60% | 0.874 | 0.794 | 0.463 | 0.603
1 | 80% | 0.884 | 0.778 | 0.478 | 0.597
1 | 100% | 0.881 | 0.803 | 0.469 | 0.493
2 | 0% | 0.051 | 0.053 | 0.051 | 0.049
2 | 20% | 0.787 | 0.957 | 0.306 | 1.000
2 | 40% | 0.669 | 0.810 | 0.249 | 0.980
2 | 60% | 0.706 | 0.778 | 0.280 | 0.930
2 | 80% | 0.683 | 0.652 | 0.305 | 0.718
2 | 100% | 0.710 | 0.571 | 0.305 | 0.390

Table 1 presents the empirical type I error rates and powers of the two tests for various ${\it\bm{\Theta}}_{\varepsilon}$ and $\tau$. It can be seen from the table that both tests control the type I error rates adequately and that the proposed test $T_{sqrt}$ is generally more powerful than $T_{pseudo}$, especially when the correlation coefficients of the response variables are large. For example, under the AR(1) correlation model with $\rho=0.8$, the powers of $T_{pseudo}$ and $T_{sqrt}$ for $\tau=20\%$ are 0.507 and 0.940, respectively. The superiority of $T_{sqrt}$ diminishes as the percentage of nonzero signals increases, but $T_{sqrt}$ can still outperform $T_{pseudo}$ when the correlation coefficient is large. This implies that the proposed test tends to gain more power than $T_{pseudo}$ when the signals are sparse. $T_{pseudo}$ is slightly more powerful than $T_{sqrt}$ when the response variables are weakly dependent with decaying correlation (e.g., the AR(1) correlation model with $\rho=0.3$). For example, under the AR(1) correlation model with $\rho=0.3$, the powers of $T_{pseudo}$ and $T_{sqrt}$ for $\tau=60\%$ are 0.874 and 0.794, respectively. In summary, the proposed test has more robust power than the pseudo-$F$ statistic with respect to various sparsity levels of the signals and correlation magnitudes; it performs consistently well across all settings.
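The signal construction in Section 4.2 can be reproduced along these lines (a sketch; the helper name is illustrative, and the paper does not specify which entries of ${\it\bm{\beta}}$ are nonzero, so they are chosen at random here):

```python
import numpy as np

def make_beta(m, k, tau, rng):
    """Effect matrix with a fraction tau of nonzero entries, each set to
    {log(k)/(25*tau*k*m)}**0.5 + (1/k)*N(0, 0.01), as in Section 4.2."""
    beta = np.zeros((m, k))
    n_sig = int(round(tau * k * m))            # total number of signals
    if n_sig == 0:                             # tau = 0: the null case
        return beta
    idx = rng.choice(k * m, size=n_sig, replace=False)
    base = np.sqrt(np.log(k) / (25.0 * tau * k * m))
    # N(0, 0.01) has standard deviation 0.1
    beta.flat[idx] = base + rng.normal(0.0, 0.1, size=n_sig) / k
    return beta
```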
Table 2 presents the p-values of the pseudo-$F$ and square-root $F$-type test statistics for testing the association between gene expression patterns and prostate tumor; the p-values are calculated based on 10,000 Monte Carlo replicates. The table shows that the p-values of the pseudo-$F$ test are always larger than those of the square-root $F$-type test for the 10 pathways, suggesting that the proposed test is more powerful than the pseudo-$F$ test. Moreover, for some pathways, such as map05130 and map04810, the pseudo-$F$ test fails to detect any expression pattern difference between the normal and tumor samples at the significance level of 0.05, while the proposed test detects a difference.

Table 2: P-values of the pseudo-$F$ ($T_{pseudo}$) and square-root $F$-type ($T_{sqrt}$) test statistics for the association between gene expression patterns and prostate tumor.

| Pathway | Number of genes | $T_{pseudo}$ | $T_{sqrt}$ |
|---|---|---|---|
| map00250 | 33 | 0.0396 | 0.0130 |
| map05130 | 80 | 0.1797 | 0.0134 |
| map05416 | 98 | 0.0738 | 0.0155 |
| map03320 | 66 | 0.0055 | 0.0025 |
| map05323 | 118 | 0.0097 | 0.0010 |
| map04530 | 160 | 0.0564 | 0.0072 |
| map04810 | 279 | 0.2297 | 0.0121 |
| map05410 | 118 | 0.2385 | 0.0185 |
| map04260 | 72 | 0.0007 | 0.0002 |
| map04350 | 128 | 0.0478 | 0.0034 |

## 6 Conclusion

The distance-based regression model is very effective for detecting relationships between high-dimensional response variables and predictor variables of interest, and it has wide applications in many research fields related to statistics. One drawback is the intensive computation required to assess statistical significance, owing to the lack of an asymptotic null distribution. In this work, we establish the asymptotic distribution of the pseudo-$F$ test statistic based on the distance-based regression model and propose a new test that is more powerful than the original pseudo-$F$ test when the correlation among the outcomes used to measure similarity is relatively large. The proposed theory is anticipated to further broaden the application of the distance-based regression model.

The asymptotic property of the pseudo-$F$ test only requires the kernel for the similarity matrix to be positive definite and does not impose any restriction on the dimension of the outcomes. It can therefore readily handle high-dimensional data and is expected to find wide application in high-dimensional association studies.

## Acknowledgement

We would like to thank the editor and two anonymous reviewers for their insightful comments.

## References

* Bertocci et al. [2012] Bertocci I, Araújo R, Incera M, Arenas F, Pereira R, Abreu H, Larsen K and Sousa-Pinto I, Benthic assemblages of rock pools in northern Portugal: seasonal and between-pool variability, Scientia Marina, 2012, 76(4): 781-789.
* Box [1954] Box G E P, Some theorems on quadratic forms applied in the study of analysis of variance problems, I. Effect of inequality of variance in the one-way classification, The Annals of Mathematical Statistics, 1954, 25: 290-302.
* Chen et al. [2012] Chen J, Bittinger K, Charlson E S, Hoffmann C, Lewis J, Wu G D, Collman R G, Bushman F D and Li H, Associating microbiome composition with environmental covariates using generalized UniFrac distances, Bioinformatics, 2012, 28(16): 2106-2113.
* Consoli et al. [2013] Consoli P, Romeo T, Ferraro M, Sarà G and Andaloro F, Factors affecting fish assemblages associated with gas platforms in the Mediterranean Sea, Journal of Sea Research, 2013, 77: 45-52.
* Gambi et al. [2020] Gambi C, Canals M, Corinaldesi C, Dell'Anno A, Manea E, Pusceddu A, Sanchez-Vidal A and Danovaro R, Impact of historical sulfide mine tailings discharge on meiofaunal assemblages (Portmán Bay, Mediterranean Sea), Science of The Total Environment, 2020, 736: 139641.
* Gower [1966] Gower J C, Some distance properties of latent root and vector methods used in multivariate analysis, Biometrika, 1966, 53: 325-338.
* Gretton et al. [2009] Gretton A, Fukumizu K, Harchaoui Z and Sriperumbudur B, A fast, consistent kernel two-sample test, Advances in Neural Information Processing Systems, 2009, 23: 673-681.
* Han and Pan [2010] Han F and Pan W, Powerful multi-marker association tests: unifying genomic distance-based regression and logistic regression, Genetic Epidemiology, 2010, 34(7): 680-688.
* Li et al. [2019] Li J, Zhang W, Zhang S and Li Q, A theoretic study of a distance-based regression model, Science in China Series A: Mathematics, 2019, 62(5): 979-998.
* Li et al. [2014] Li Q, Hu J, Ding J and Zheng G, Fisher's method of combining dependent statistics using generalizations of the gamma distribution: with applications to genetic pleiotropic associations, Biostatistics, 2014, 15: 284-295.
* Li et al. [2009] Li Q, Wacholder S, Hunter D J, Hoover R N, Chanock S, Thomas G and Yu K, Genetic background comparison using distance-based regression, with applications in population stratification evaluation and adjustment, Genetic Epidemiology, 2009, 33(5): 432-441.
* Liang, Bushman and FitzGerald [2015] Liang X, Bushman F D and FitzGerald G A, Rhythmicity of the intestinal microbiota is regulated by gender and the host circadian clock, Proceedings of the National Academy of Sciences, 2015, 112(33): 10479-10484.
* McArdle and Anderson [2001] McArdle B and Anderson M, Fitting multivariate models to community data: a comment on distance-based redundancy analysis, Ecology, 2001, 82: 290-297.
* Molari et al. [2018] Molari M, Guilini K, Lott C, Weber M, de Beer D, Meyer S, Ramette A, Wegener G, Wenzhöfer F, Martin D et al., CO2 leakage alters biogeochemical and ecological functions of submarine sands, Science Advances, 2018, 4(2): eaao2040.
* Nievergelt, Libiger and Schork [2007] Nievergelt C M, Libiger O and Schork N J, Generalized analysis of molecular variance, PLoS Genetics, 2007, 3(4): e51.
* Norman et al. [2015] Norman J M, Handley S A, Baldridge M T, Droit L, Liu C Y, Keller B C, Kambal A, Monaco C L, Zhao G, Fleshner P et al., Disease-specific alterations in the enteric virome in inflammatory bowel disease, Cell, 2015, 160(3): 447-460.
* Pinaud et al. [2018] Pinaud L, Sansonetti P J and Phalipon A, Host cell targeting by enteropathogenic bacteria T3SS effectors, Trends in Microbiology, 2018, 26(4): 266-283.
* Reiss et al. [2010] Reiss P T, Stevens M H H, Shehzad Z, Petkova E and Milham M P, On distance-based permutation tests for between-group comparisons, Biometrics, 2010, 66(2): 636-643.
* Singh et al. [2002] Singh D, Febbo P G, Ross K, Jackson D G, Manola J, Ladd C, Tamayo P, Renshaw A A, D'Amico A V, Richie J P et al., Gene expression correlates of clinical prostate cancer behavior, Cancer Cell, 2002, 1(2): 203-209.
* Wang, Yang and Zhao [2019] Wang T, Yang C and Zhao H, Prediction analysis for microbiome sequencing data, Biometrics, 2019, 75(3): 875-884.
* Wessel and Schork [2006] Wessel J and Schork N J, Generalized genomic distance-based regression methodology for multilocus association analysis, The American Journal of Human Genetics, 2006, 79(5): 792-806.
* White et al. [2020] White L, O'Connor N, Yang Q, Emmerson M and Donohue I, Individual species provide multifaceted contributions to the stability of ecosystems, Nature Ecology & Evolution, 2020, 12(4): 1594-1601.
* Wu [1998] Wu G, Intestinal mucosal amino acid catabolism, Journal of Nutrition, 1998, 128(8): 1249-1252.
* Wu et al. [2011] Wu G D, Chen J, Hoffmann C, Bittinger K, Chen Y, Keilbaugh S A, Bewtra M, Knights D, Walters W A, Knight R et al., Linking long-term dietary patterns with gut microbial enterotypes, Science, 2011, 334(6052): 105-108.
* Xu et al. [2016] Xu G, Lin L, Wei P and Pan W, An adaptive two-sample test for high-dimensional means, Biometrika, 2016, 103(3): 609-624.
* Zapala and Schork [2006] Zapala M A and Schork N J, Multivariate regression analysis of distance matrices for testing associations between gene expression patterns and related variables, Proceedings of the National Academy of Sciences, 2006, 103(51): 19430-19435.
* Zhang [2012] Zhang K, Peters J, Janzing D and Schölkopf B, Kernel-based conditional independence test and application in causal discovery, In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2012, 804-813.
* Zihni et al. [2016] Zihni C, Mills C, Matter K and Balda M S, Tight junctions: from simple barriers to multifunctional molecular gates, Nature Reviews Molecular Cell Biology, 2016, 17(9): 564-580.

## Appendix

### Proof of Lemma 3.1.

Denote ${\it\Omega}=X\tilde{\Delta}^{-1}X^{\top}$. Note that ${\it\Omega}$ can be regarded as a random variable constructed from a weighted inner product, which is a positive definite kernel. Then the numerator of $T_{pseudo}$ can be written as

$\text{tr}\big(H_{X}\tilde{S}\big)=\frac{1}{n}\text{tr}\big({\it\Omega}\tilde{S}\big)+\text{tr}\big((H_{X}-\tfrac{1}{n}{\it\Omega})\tilde{S}\big).$

We first show that $\frac{1}{n}\text{tr}\big({\it\Omega}{\bf H}S{\bf H}\big)$ has the same asymptotic distribution as $\frac{1}{n}\sum_{i=1}^{n}\lambda_{i}\xi_{i}$ under $H_{0}$. To this end, denote the eigenvalues of $X\tilde{\Delta}^{-1}X^{\top}$ by $\tilde{\lambda}_{1}\geq\cdots\geq\tilde{\lambda}_{m}\geq 0$ and $\tilde{\lambda}_{m+1}=\cdots=\tilde{\lambda}_{n}=0$. Let $\tilde{\lambda}^{*}_{1}\geq\cdots\geq\tilde{\lambda}^{*}_{n}\geq 0$ be the solutions to the equation

$\int\tilde{\psi}(x_{1},x_{2})\tilde{u}_{i}(x_{1})p(x_{1})dx_{1}=\tilde{\lambda}^{*}_{i}\tilde{u}_{i}(x_{2})$ (1)

with $\tilde{\psi}(x_{1},x_{2})=x_{1}^{\top}\tilde{\Delta}^{-1}x_{2}-E_{x_{1}}(x_{1}^{\top}\tilde{\Delta}^{-1}x_{2})-E_{x_{2}}(x_{1}^{\top}\tilde{\Delta}^{-1}x_{2})+E_{x_{1},x_{2}}(x_{1}^{\top}\tilde{\Delta}^{-1}x_{2})=(x_{1}-\mu)^{\top}\tilde{\Delta}^{-1}(x_{2}-\mu)$, where $\tilde{u}_{i}(x_{1})$ is the eigenfunction of the kernel $\tilde{\psi}(x_{1},x_{2})$ corresponding to $\tilde{\lambda}^{*}_{i}$. Note that the eigenfunctions are orthonormal with respect to the probability measure $p(x_{1})$, i.e.,

$\int\tilde{u}_{i}(x_{1})\tilde{u}_{j}(x_{1})p(x_{1})dx_{1}=\mathbb{I}_{i=j},$ (2)

where $\mathbb{I}_{A}$ denotes the indicator function of the event $A$. Zhang [27] shows that $\frac{1}{n}\tilde{\lambda}_{i}\stackrel{p}{\longrightarrow}\tilde{\lambda}^{*}_{i}$ for $i=1,\cdots,n$.
Then $\tilde{\lambda}^{*}_{m+1}=\cdots=\tilde{\lambda}^{*}_{n}=0$. When $\mu=\bm{0}$, it follows that $E_{x_{1}}(e_{i}^{\top}\Delta^{-1/2}x_{1}x_{1}^{\top}\tilde{\Delta}^{-1}x_{2})=e_{i}^{\top}\Delta^{-1/2}x_{2}$ and $E_{x_{1}}(e_{i}^{\top}\Delta^{-1/2}x_{1}x_{1}^{\top}\Delta^{-1/2}e_{j})=\mathbb{I}_{i=j}$, where $\Delta^{-1/2}\Delta^{-1/2}=\Delta^{-1}$. This implies that $\tilde{\lambda}^{*}_{i}=1$ and $\tilde{u}_{i}(x_{1})=e_{i}^{\top}\Delta^{-1/2}x_{1}$ for $i=1,\cdots,m$. When $\mu\neq\bm{0}$, for $i,j=1,\cdots,m-1$, it can be shown that $\tilde{\lambda}^{*}_{i}=1$ and $\tilde{\lambda}^{*}_{m}=1/(1+\mu^{\top}\Delta^{-1}\mu)$, with the corresponding $\tilde{u}_{i}(x_{1})=v_{i}^{\top}(x_{1}-\mu)$ and $\tilde{u}_{m}(x_{1})=v_{m}^{\top}(x_{1}-\mu)$, satisfy equations (1) and (2), where $v_{m}=\Delta^{-1}\mu/\mu^{\top}\Delta^{-1}\mu$ and $v_{1},\cdots,v_{m-1}$ are the solutions to the equations $v_{i}^{\top}\mu=0$ and $v_{i}^{\top}\Delta v_{j}=\mathbb{I}_{i=j}$. We have thus shown that $\tilde{\lambda}^{*}_{1}=\cdots=\tilde{\lambda}^{*}_{m-1}=1$, $\tilde{\lambda}^{*}_{m}=1/(1+\mu^{\top}\Delta^{-1}\mu)$, and $\tilde{\lambda}^{*}_{m+1}=\cdots=\tilde{\lambda}^{*}_{n}=0$.

By Theorem 3 in Zhang [27], under $H_{0}$, $\frac{1}{n}\text{tr}\big({\it\Omega}{\bf H}S{\bf H}\big)$ has the same asymptotic distribution as $\sum_{i,j=1}^{n}\lambda^{*}_{i}\tilde{\lambda}^{*}_{j}z^{2}_{ij}=\sum_{i=1}^{n}\lambda^{*}_{i}\xi_{i}$, where the $z_{ij}$ are i.i.d. standard Gaussian variables and $\xi_{i}=\sum_{j=1}^{m}\tilde{\lambda}^{*}_{j}z^{2}_{ij}=\varpi_{i}+\varrho_{i}/(1+\mu^{\top}\Delta^{-1}\mu)$. In addition, by Theorem 1 in Gretton et al. [7], we have $\sum_{i=1}^{\infty}(\frac{1}{n}\lambda_{i}-\lambda^{*}_{i})\xi_{i}\stackrel{p}{\longrightarrow}0$. That is, $\sum_{i=1}^{n}\frac{1}{n}\lambda_{i}\xi_{i}$ has the same asymptotic distribution as $\sum_{i=1}^{n}\lambda^{*}_{i}\xi_{i}$. It thus follows that $\frac{1}{n}\text{tr}\big({\it\Omega}{\bf H}S{\bf H}\big)$ has the same asymptotic distribution as $\frac{1}{n}\sum_{i=1}^{n}\lambda_{i}\xi_{i}$.

Next we show that $\text{tr}\big((H_{X}-\frac{1}{n}{\it\Omega})\tilde{S}\big)$ converges to 0 in probability. Write

$\text{tr}\big((H_{X}-\tfrac{1}{n}{\it\Omega})\tilde{S}\big)=\text{tr}(AB)=\sum_{i=1}^{m}\sum_{j=1}^{m}a_{ij}b_{ij},$

where $A=(a_{ij})_{m\times m}=(\frac{1}{n}X^{\top}X)^{-1}-\tilde{\Delta}^{-1}$ and $B=(b_{ij})_{m\times m}=\frac{1}{n}X^{\top}\tilde{S}X$. By the law of large numbers, the $(i,j)$th entry of $\frac{1}{n}X^{\top}X=\frac{1}{n}\sum_{i=1}^{n}x_{i}x_{i}^{\top}$ converges in probability to $\tilde{\delta}_{ij}$, so that $a_{ij}\stackrel{p}{\longrightarrow}0$, $i,j=1,\cdots,m$. Note that $(e_{i}-e_{j})^{\top}S(e_{i}-e_{j})\geq 0$ and $(e_{i}+e_{j})^{\top}S(e_{i}+e_{j})\geq 0$ for a positive definite kernel matrix $S$. Then we have $2|s_{ij}|\leq s_{ii}+s_{jj}$ and $|E(s_{ij})|\leq E(s_{ii})$, $i,j=1,\cdots,m$. It follows that

$E(s_{ij}^{2})\leq\frac{1}{4}E((s_{ii}+s_{jj})^{2})=\frac{1}{2}E(s_{ii}^{2})+\frac{1}{2}E(s_{ii}s_{jj})\leq E(s_{ii}^{2}),$

$|E(s_{ij}s_{lk})|\leq\frac{1}{2}E(s_{ij}^{2})+\frac{1}{2}E(s_{lk}^{2})\leq E(s_{ii}^{2})=c_{1},$

for $i,j,l,k=1,\cdots,m$.
Similarly, we can obtain $|E(x_{j_{1}i_{1}}x_{j_{2}i_{1}}x_{j_{3}i_{2}}x_{j_{4}i_{2}})|\leq\frac{1}{4}E\big(x_{j_{1}i_{1}}^{4}+x_{j_{2}i_{1}}^{4}+x_{j_{3}i_{2}}^{4}+x_{j_{4}i_{2}}^{4}\big)\leq c_{0}$ for $i_{1},i_{2}=1,\cdots,m$ and $j_{1},j_{2},j_{3},j_{4}=1,\cdots,n$. Define $\overline{x}=\frac{1}{n}\sum_{j=1}^{n}x_{j}$ and write $\overline{x}=(\overline{x}_{1},\cdots,\overline{x}_{m})^{\top}$. Then we can write

$B=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}(x_{i}-\overline{x})s_{ij}(x_{j}-\overline{x})^{\top}.$

Since $x_{i}-\overline{x}=(x_{i}-\mu)-(\overline{x}-\mu)$, we may assume $E(\mathbb{X})=\bm{0}_{m}$ when estimating the expectation and variance of $b_{ij}$ under $H_{0}$:

$|E(B)|=\Big|\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}E(s_{ij})E\big((x_{i}-\overline{x})(x_{j}-\overline{x})^{\top}\big)\Big|\leq E(s_{ii})\frac{1}{n}\sum_{i=1}^{n}\big|E\big((x_{i}-\overline{x})(x_{i}-\overline{x})^{\top}\big)\big|+E(s_{ii})\frac{1}{n}\sum_{i\neq j=1}^{n}\big|E\big((x_{i}-\overline{x})(x_{j}-\overline{x})^{\top}\big)\big|\leq E(s_{ii})|\Delta|+E(s_{ii})|\Delta|=2E(s_{ii})|\Delta|,$

implying that $E(b_{i_{1}i_{2}})$ is finite, $i_{1},i_{2}=1,\cdots,m$. Through some algebraic manipulation, it can be obtained that

$E(b_{i_{1}i_{2}}^{2})=\frac{1}{n^{2}}\sum_{j_{1},j_{2},j_{3},j_{4}=1}^{n}c_{s}E\big((x_{j_{1}i_{1}}-\overline{x}_{i_{1}})(x_{j_{2}i_{1}}-\overline{x}_{i_{1}})(x_{j_{3}i_{2}}-\overline{x}_{i_{2}})(x_{j_{4}i_{2}}-\overline{x}_{i_{2}})\big)$
$=\frac{1}{n^{2}}\sum_{j_{1},j_{2},j_{3},j_{4}=1}^{n}c_{s}E\big(x_{j_{1}i_{1}}x_{j_{2}i_{1}}x_{j_{3}i_{2}}x_{j_{4}i_{2}}\big)-\frac{4}{n^{2}}\sum_{j_{1},j_{2},j_{3},j_{4}=1}^{n}c_{s}E\big(x_{j_{1}i_{1}}x_{j_{2}i_{1}}x_{j_{3}i_{2}}\overline{x}_{i_{2}}\big)$
$+\frac{2}{n^{2}}\sum_{j_{1},j_{2},j_{3},j_{4}=1}^{n}c_{s}E\big(x_{j_{1}i_{1}}x_{j_{2}i_{1}}\overline{x}_{i_{2}}\overline{x}_{i_{2}}\big)+\frac{4}{n^{2}}\sum_{j_{1},j_{2},j_{3},j_{4}=1}^{n}c_{s}E\big(x_{j_{1}i_{1}}\overline{x}_{i_{1}}x_{j_{3}i_{2}}\overline{x}_{i_{2}}\big)$
$-\frac{4}{n^{2}}\sum_{j_{1},j_{2},j_{3},j_{4}=1}^{n}c_{s}E\big(x_{j_{1}i_{1}}\overline{x}_{i_{1}}\overline{x}_{i_{2}}\overline{x}_{i_{2}}\big)+\frac{1}{n^{2}}\sum_{j_{1},j_{2},j_{3},j_{4}=1}^{n}c_{s}E\big(\overline{x}_{i_{1}}\overline{x}_{i_{1}}\overline{x}_{i_{2}}\overline{x}_{i_{2}}\big)$
$\triangleq\kappa_{1}+\kappa_{2}+\kappa_{3}+\kappa_{4}+\kappa_{5}+\kappa_{6},$

where $c_{s}=E(s_{j_{1}j_{2}}s_{j_{3}j_{4}})$.
Then, using the assumptions $E(\mathbb{X}_{1}^{4})\leq c_{0}$ and $E(s_{11}^{2})\leq c_{1}$, we have

$|\kappa_{1}|\leq\frac{c_{1}}{n^{2}}\sum_{j_{1},j_{2},j_{3},j_{4}=1}^{n}\big|E\big(x_{j_{1}i_{1}}x_{j_{2}i_{1}}x_{j_{3}i_{2}}x_{j_{4}i_{2}}\big)\big|\leq\frac{c_{1}}{n}\max_{i_{1},i_{2}}E(x^{2}_{1i_{1}}x^{2}_{1i_{2}})+c_{1}\max_{i_{1},i_{2}}E(x^{2}_{1i_{1}}x^{2}_{2i_{2}})+2c_{1}\max_{i_{1},i_{2}}|E(x_{1i_{1}}x_{2i_{1}}x_{1i_{2}}x_{2i_{2}})|\leq\big(\tfrac{1}{n}c_{0}c_{1}+c_{0}c_{1}+2c_{0}c_{1}\big)\leq 4c_{0}c_{1}.$

Similarly,

$|\kappa_{2}|\leq\frac{4}{n^{2}}\sum_{j_{1},j_{2},j_{3},j_{4}=1}^{n}|c_{s}||E(x_{j_{1}i_{1}}x_{j_{2}i_{1}}x_{j_{3}i_{2}}\overline{x}_{i_{2}})|\leq\frac{4c_{1}}{n^{2}}\sum_{j_{1},j_{2},j_{3},j_{4}=1}^{n}|E(x_{j_{1}i_{1}}x_{j_{2}i_{1}}x_{j_{3}i_{2}}x_{j_{4}i_{2}})|\leq 16c_{0}c_{1},$

and, bounding the remaining terms in the same way, $|\kappa_{3}|\leq 8c_{0}c_{1}$, $|\kappa_{4}|\leq 16c_{0}c_{1}$, $|\kappa_{5}|\leq 16c_{0}c_{1}$, and $|\kappa_{6}|\leq 4c_{0}c_{1}$. It thus follows that $E(b_{i_{1}i_{2}}^{2})\leq 4c_{0}c_{1}(1+4+2+4+4+1)=64c_{0}c_{1}$ for $i_{1},i_{2}=1,\cdots,m$. Then $E(b_{i_{1}i_{2}})$ and $\text{var}(b_{i_{1}i_{2}})$ are finite. Further, by Chebyshev's inequality, $b_{i_{1}i_{2}}$ is bounded in probability. Therefore, $\text{tr}\big((H_{X}-\frac{1}{n}{\it\Omega})\tilde{S}\big)$ converges to 0 in probability. This completes the proof.

### Proof of Lemma 3.2.

Let $\mathbb{H}$ be a reproducing kernel Hilbert space on $\mathbb{X}$ with a continuous feature mapping $\phi(x)$.
With the positive definite kernel function $s_{ij}=\psi(y_{i},y_{j})=\langle\phi(y_{i}),\phi(y_{j})\rangle_{\mathbb{H}}$, we obtain

$\frac{1}{n^{2}}{\bf 1}^{\top}_{n}S{\bf 1}_{n}=\Big\langle\frac{1}{n}\sum_{i=1}^{n}\phi(y_{i}),\frac{1}{n}\sum_{j=1}^{n}\phi(y_{j})\Big\rangle_{\mathbb{H}}\stackrel{p}{\longrightarrow}E(\langle\phi(y_{i}),\phi(y_{j})\rangle_{\mathbb{H}})=E(s_{ij}).$

By the law of large numbers, we have

$\frac{1}{n}\sum_{i=1}^{n}\lambda_{i}=\frac{1}{n}\text{tr}(\tilde{S})=\frac{1}{n}\text{tr}(S)-\frac{1}{n^{2}}{\bf 1}^{\top}_{n}S{\bf 1}_{n}\stackrel{p}{\longrightarrow}E(s_{ii})-E(s_{ij}).$

By Lemma 3.1, $\text{tr}\big(H_{X}\tilde{S}\big)/\sum_{i=1}^{n}\frac{\lambda_{i}}{n}$ has the same asymptotic distribution as $\sum_{i=1}^{n}w_{i}\xi_{i}$, where $w_{i}=\lambda_{i}/\sum_{j=1}^{n}\lambda_{j}$. Since $\sum_{i=1}^{n}w_{i}=1$ and $0\leq w_{i}\leq 1$, $i=1,\cdots,n$, it then follows that

$E\big(\sum_{i=1}^{n}w_{i}\xi_{i}\big)=m_{0}\sum_{i=1}^{n}w_{i}=m_{0}~~\text{and}~~\text{var}\big(\sum_{i=1}^{n}w_{i}\xi_{i}\big)=2m_{0}\sum_{i=1}^{n}w_{i}^{2}\leq 2m_{0}\sum_{i=1}^{n}w_{i}=2m_{0},$

where $m_{0}=1/(1+\mu^{\top}\Delta^{-1}\mu)+m-1$. Hence, by Chebyshev's inequality, for any $\tau_{0}>0$,

$P\Big(\Big|\frac{1}{n}\sum_{i=1}^{n}w_{i}\xi_{i}\Big|\geq\tau_{0}+\frac{m_{0}}{n}\Big)\leq P\Big(\Big|\frac{1}{n}\sum_{i=1}^{n}w_{i}\xi_{i}-\frac{m_{0}}{n}\Big|\geq\tau_{0}\Big)\leq\frac{2m_{0}}{n^{2}\tau_{0}^{2}}.$

Then for any $\tau_{1}=2\tau_{0}>0$,

$\lim_{n\rightarrow\infty}P\Big(\Big|\frac{1}{n}\sum_{i=1}^{n}w_{i}\xi_{i}\Big|\geq\tau_{1}\Big)=0.$

It follows that $\text{tr}\big(H_{X}\tilde{S}\big)/\sum_{i=1}^{n}\lambda_{i}$ converges in distribution to zero and hence converges in probability to zero. Then we conclude that

$\frac{1}{n-m}\text{tr}\big(({\bf I}_{n}-H_{X})\tilde{S}\big)=\frac{\text{tr}\big(({\bf I}_{n}-H_{X})\tilde{S}\big)}{\sum_{i=1}^{n}\lambda_{i}}\cdot\frac{1}{n-m}\sum_{i=1}^{n}\lambda_{i}\stackrel{p}{\longrightarrow}E(s_{ii})-E(s_{ij}).$

### Proof of Theorem 3.3.

Theorem 3.3 is a direct consequence of Lemmas 3.1 and 3.2.

### Proof of Proposition 3.4.

When Assumption 3.1 holds, by Theorem 1 of Gretton et al. [7] and Chebyshev's inequality, both $\sum_{i=1}^{\infty}(\frac{\lambda_{i}}{n})^{1/2}z_{i}^{4}$ and $\sum_{i=1}^{\infty}(\lambda_{i}^{*})^{1/2}z_{i}^{4}$ are bounded in probability as $n\rightarrow\infty$.
Combining this with the Cauchy–Schwarz inequality leads to

$\Big|\sum_{i=1}^{\infty}\big((\tfrac{\lambda_{i}}{n})^{1/2}-(\lambda^{*}_{i})^{1/2}\big)z_{i}^{2}\Big|\leq\Big\{\sum_{i=1}^{\infty}(\tfrac{\lambda_{i}}{n})^{1/2}z_{i}^{4}\Big\}^{1/2}\Big\{\sum_{i=1}^{\infty}\Big|(\tfrac{\lambda_{i}}{n})^{1/4}-(\lambda^{*}_{i})^{1/4}\Big|^{2}\Big\}^{1/2}+\Big\{\sum_{i=1}^{\infty}(\lambda_{i}^{*})^{1/2}z_{i}^{4}\Big\}^{1/2}\Big\{\sum_{i=1}^{\infty}\Big|(\tfrac{\lambda_{i}}{n})^{1/4}-(\lambda^{*}_{i})^{1/4}\Big|^{2}\Big\}^{1/2}$
$\leq\Big\{\sum_{i=1}^{\infty}(\tfrac{\lambda_{i}}{n})^{1/2}z_{i}^{4}\Big\}^{1/2}\Big\{\sum_{i=1}^{\infty}\Big|(\tfrac{\lambda_{i}}{n})^{1/2}-(\lambda^{*}_{i})^{1/2}\Big|\Big\}^{1/2}+\Big\{\sum_{i=1}^{\infty}(\lambda_{i}^{*})^{1/2}z_{i}^{4}\Big\}^{1/2}\Big\{\sum_{i=1}^{\infty}\Big|(\tfrac{\lambda_{i}}{n})^{1/2}-(\lambda^{*}_{i})^{1/2}\Big|\Big\}^{1/2}\stackrel{P}{\longrightarrow}0.$

### Proof of Lemma 3.5.

For any symmetric positive semidefinite matrix ${\bf D}_{1}$, there exists a symmetric matrix ${\bf P}$ such that ${\bf D}_{1}={\bf P}{\bf P}^{\top}$. For any $n$-dimensional vector ${\bf x}$, we have ${\bf x}^{\top}{\bf P}^{\top}{\bf D}_{2}{\bf P}{\bf x}\geq 0$, so ${\bf P}^{\top}{\bf D}_{2}{\bf P}$ is a symmetric positive semidefinite matrix. Thus ${\bf D}={\bf D}_{1}{\bf D}_{2}={\bf P}{\bf P}^{\top}{\bf D}_{2}$ has the same eigenvalues as ${\bf P}^{\top}{\bf D}_{2}{\bf P}$, whose eigenvalues are nonnegative. Consequently, $\text{tr}({\bf D}^{\top}{\bf D})=\sum_{i=1}^{n}\lambda_{i,D}^{2}\leq\big(\sum_{i=1}^{n}\lambda_{i,D}\big)^{2}=\text{tr}({\bf D})^{2}$, where $\lambda_{i,D}$, $i=1,\cdots,n$, are the eigenvalues of ${\bf D}$.
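The eigenvalue statement in Lemma 3.5 is easy to check numerically. The following is a small NumPy sanity check of our own (not part of the paper): the eigenvalues of a product of two random positive semidefinite matrices are real and nonnegative, and the sum of their squares is bounded by the square of their sum.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)); D1 = A @ A.T   # random symmetric PSD matrix
B = rng.standard_normal((n, n)); D2 = B @ B.T   # another random symmetric PSD matrix

lam = np.linalg.eigvals(D1 @ D2)                # eigenvalues of the product
assert np.max(np.abs(lam.imag)) < 1e-8          # real up to numerical error ...
lam = lam.real
assert lam.min() > -1e-8                        # ... and nonnegative
# sum of squares of nonnegative eigenvalues <= square of their sum
assert np.sum(lam**2) <= np.sum(lam)**2 + 1e-8
```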
### Proof of Theorem 3.6.

For ${\it\Omega}=X\tilde{\Delta}^{-1}X^{\top}$, it can be obtained that

$n^{1/2}\text{tr}\big(H_{X}\tilde{S}^{1/2}\big)=\text{tr}\big({\it\Omega}(\tilde{S}/n)^{1/2}\big)+n^{1/2}\text{tr}\big((H_{X}-\tfrac{1}{n}{\it\Omega})\tilde{S}^{1/2}\big).$

We first show that $\text{tr}\big({\it\Omega}(\tilde{S}/n)^{1/2}\big)$ has the same asymptotic distribution as $\sum_{i=1}^{n}(\lambda_{i}/n)^{1/2}\xi_{i}$ under $H_{0}$. By extending the proof of Theorem 3 in Zhang [27], under $H_{0}$, $\text{tr}\big({\it\Omega}(\tilde{S}/n)^{1/2}\big)$ has the same asymptotic distribution as $\sum_{i,j=1}^{n}(\lambda^{*}_{i})^{1/2}\tilde{\lambda}^{*}_{j}z^{2}_{ij}=\sum_{i=1}^{n}(\lambda^{*}_{i})^{1/2}\xi_{i}$. In addition, by Proposition 3.4, we have $\sum_{i=1}^{n}\big((\lambda_{i}/n)^{1/2}-(\lambda^{*}_{i})^{1/2}\big)\xi_{i}\stackrel{p}{\longrightarrow}0$. It thus follows that $\text{tr}\big({\it\Omega}(\tilde{S}/n)^{1/2}\big)$ has the same asymptotic distribution as $\sum_{i=1}^{n}(\lambda_{i}/n)^{1/2}\xi_{i}$.

Next we show that $n^{1/2}\text{tr}\big((H_{X}-\frac{1}{n}{\it\Omega})\tilde{S}^{1/2}\big)$ converges to 0 in probability. Write

$n^{1/2}\text{tr}\big((H_{X}-\tfrac{1}{n}{\it\Omega})\tilde{S}^{1/2}\big)=\text{tr}(A\tilde{B})=\sum_{i=1}^{m}\sum_{j=1}^{m}a_{ij}\tilde{b}_{ij},$

where $A=(a_{ij})_{m\times m}=(\frac{1}{n}X^{\top}X)^{-1}-\tilde{\Delta}^{-1}\stackrel{p}{\longrightarrow}0$ and $\tilde{B}=(\tilde{b}_{ij})_{m\times m}=X^{\top}(\tilde{S}/n)^{1/2}X$. Denote $B_{0}=\tilde{\Delta}^{-1/2}\tilde{B}\tilde{\Delta}^{-1/2}$. By Lemma 3.5,

$\text{tr}(B_{0}^{\top}B_{0})=\text{tr}\big(\tilde{\Delta}^{-1}\tilde{B}\tilde{\Delta}^{-1}\tilde{B}\big)=\text{tr}\big({\it\Omega}(\tilde{S}/n)^{1/2}{\it\Omega}(\tilde{S}/n)^{1/2}\big)\leq\Big(\text{tr}\big({\it\Omega}(\tilde{S}/n)^{1/2}\big)\Big)^{2},$

where $\text{tr}\big({\it\Omega}(\tilde{S}/n)^{1/2}\big)$ tends to $\sum_{i=1}^{\infty}(\lambda^{*}_{i})^{1/2}\xi_{i}$, which is bounded in probability. This implies that $\text{tr}(B_{0}^{\top}B_{0})$, and hence each $(i,j)$th entry of $B_{0}$, is bounded in probability, so that each $(i,j)$th entry of $\tilde{B}=\tilde{\Delta}^{1/2}B_{0}\tilde{\Delta}^{1/2}$ is bounded in probability as well, $i,j=1,\cdots,m$. One can then derive that $n^{1/2}\text{tr}\big((H_{X}-\frac{1}{n}{\it\Omega})\tilde{S}^{1/2}\big)$ converges to 0 in probability. It follows that $\text{tr}\big(H_{X}(n\tilde{S})^{1/2}\big)$ has the same asymptotic distribution as $\sum_{i=1}^{n}(\lambda_{i}/n)^{1/2}\xi_{i}$ as $n\rightarrow\infty$. By the extension of Lemma 3.2, we have $\text{tr}\big(({\bf I}_{n}-H_{X})\tilde{S}^{1/2}\big)/\sum_{i=1}^{n}\lambda_{i}^{1/2}\stackrel{p}{\longrightarrow}1$ as $n\rightarrow\infty$. Consequently, $T_{sqrt}$ has the same asymptotic distribution as $m^{-1}\sum_{i=1}^{n}\eta_{i}\xi_{i}$.

### Proof of Theorem 3.7.

For the linear kernel $\psi(\cdot,\cdot)$, it follows that

$\frac{1}{n}X^{\top}\tilde{S}X=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}(x_{i}-\overline{x})s_{ij}(x_{j}-\overline{x})^{\top}=\Big[n^{-1/2}\sum_{i=1}^{n}(x_{i}-\overline{x})(x_{i}^{\top}\bm{\beta}+\varepsilon_{i}^{\top})\Big]\Big[n^{-1/2}\sum_{j=1}^{n}(\bm{\beta}^{\top}x_{j}+\varepsilon_{j})(x_{j}-\overline{x})^{\top}\Big].$

Let $G=n^{-1/2}\sum_{i=1}^{n}(x_{i}-\overline{x})(x_{i}^{\top}\bm{\beta}+\varepsilon_{i}^{\top})=\frac{1}{n^{1-\iota}}\sum_{i=1}^{n}(x_{i}-\overline{x})x_{i}^{\top}\tilde{\bm{\beta}}_{n}+n^{-1/2}\sum_{i=1}^{n}(x_{i}-\overline{x})\varepsilon_{i}^{\top}\triangleq G_{1}+G_{2}$, where $G_{2}$ is bounded in probability by Chebyshev's inequality, so that $G_{2}/n^{\iota}$ converges in probability to zero. By the law of large numbers, it can be obtained that

$Q_{1}=\frac{1}{n}\sum_{i=1}^{n}y_{i}y_{i}^{\top}=\frac{1}{n}\sum_{i=1}^{n}(\bm{\beta}^{\top}x_{i}+\varepsilon_{i})(x_{i}^{\top}\bm{\beta}+\varepsilon_{i}^{\top})\stackrel{p}{\longrightarrow}\Delta_{\varepsilon}~~\text{and}~~Q_{2}=\frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}y_{i}y_{j}^{\top}\stackrel{p}{\longrightarrow}0,$

and the $(i,j)$th entry of $G_{1}/n^{\iota}$ converges in probability to the $(i,j)$th entry of $\Delta\tilde{\bm{\beta}}$ for $i=1,\cdots,m$ and $j=1,\cdots,q$.
Then we have, for the numerator of $T_{pseudo}$,

$\frac{1}{n^{2\iota}}\text{tr}\big(H_{X}\tilde{S}\big)=\frac{1}{n^{2\iota}}\text{tr}\big((\tfrac{1}{n}X^{\top}X)^{-1}GG^{\top}\big)\stackrel{p}{\longrightarrow}\text{tr}(\tilde{\Delta}^{-1}\Delta\tilde{\bm{\beta}}\tilde{\bm{\beta}}^{\top}\Delta^{\top})>0,$

and, for the denominator of $T_{pseudo}$,

$\text{tr}\big(\tilde{S}/n\big)=\text{tr}(Q_{1}-Q_{2})\stackrel{p}{\longrightarrow}\text{tr}(\Delta_{\varepsilon})>0.$

It follows that

$\frac{T_{pseudo}}{n^{2\iota}}=\frac{n-m}{mn}\frac{\text{tr}\big(H_{X}\tilde{S}/n^{2\iota}\big)}{\text{tr}\big(({\bf I}_{n}-H_{X})\tilde{S}/n\big)}\stackrel{p}{\longrightarrow}\frac{1}{m}\frac{\text{tr}(\tilde{\Delta}^{-1}\Delta\tilde{\bm{\beta}}\tilde{\bm{\beta}}^{\top}\Delta^{\top})}{\text{tr}(\Delta_{\varepsilon})}>0,$

and then for any $\tilde{\tau}_{0}>0$,

$\lim_{n\rightarrow\infty}P\Big(\Big|\frac{T_{pseudo}}{n^{2\iota}}-\frac{1}{m}\frac{\text{tr}(\tilde{\Delta}^{-1}\Delta\tilde{\bm{\beta}}\tilde{\bm{\beta}}^{\top}\Delta^{\top})}{\text{tr}(\Delta_{\varepsilon})}\Big|>\tilde{\tau}_{0}\Big)=0.$

Consequently, $P\big(T_{pseudo}>F_{1}^{-1}(1-\alpha)\big)=P\big(T_{pseudo}/n^{2\iota}>F_{1}^{-1}(1-\alpha)/n^{2\iota}\big)\longrightarrow 1$ as $n\rightarrow\infty$.

### Proof of Theorem 3.8.

Denote $A_{0}=H_{X}(\tilde{S}/n)^{1/2}$, whose eigenvalues are nonnegative by Lemma 3.5. Then $\text{tr}\big(H_{X}(\tilde{S}/n)^{1/2}\big)=\text{tr}(A_{0})\geq\text{tr}(A_{0}^{\top}A_{0})^{1/2}=\big(\text{tr}(H_{X}\tilde{S}/n)\big)^{1/2}$, where $\text{tr}\big(H_{X}\tilde{S}/n\big)$ converges to a positive constant in probability by Theorem 3.7. It follows that

$\frac{T_{sqrt}}{n}=\frac{\text{tr}\big(H_{X}\tilde{S}^{1/2}/(mn)\big)}{\text{tr}\big(({\bf I}_{n}-H_{X})\tilde{S}^{1/2}/(n-m)\big)}\geq\frac{n-m}{mn}\frac{\text{tr}(A_{0}^{\top}A_{0})^{1/2}}{\sum_{i=1}^{n}(\lambda_{i}/n)^{1/2}}>0.$

Consequently, we have $P\big(T_{sqrt}>F_{2}^{-1}(1-\alpha)\big)=P\big(T_{sqrt}/n>F_{2}^{-1}(1-\alpha)/n\big)\longrightarrow 1$ as $n\rightarrow\infty$.
# Federated Transfer-Ordered-Personalized Learning for Driver Monitoring Application

Liangqi Yuan, Lu Su, and Ziran Wang

Manuscript received January 11, 2023. L. Yuan, L. Su, and Z. Wang are with the College of Engineering, Purdue University, West Lafayette, IN 47907, USA (e-mail: <EMAIL_ADDRESS>; <EMAIL_ADDRESS>; [email protected]).

###### Abstract

Federated learning (FL) shines in the internet of things (IoT) with its ability to realize collaborative learning and improve learning efficiency by sharing client model parameters trained on local data. Although FL has been successfully applied to various domains, including driver monitoring application (DMA) on the internet of vehicles (IoV), its usage still faces open issues, such as data and system heterogeneity, large-scale parallelism and communication resource constraints, malicious attacks, and data poisoning. This paper proposes a federated transfer-ordered-personalized learning (FedTOP) framework to address the above problems, tested on two real-world datasets with and without system heterogeneity. The performance of the three extensions, transfer, ordered, and personalized, is compared through an ablation study; the framework achieves 92.32$\%$ and 95.96$\%$ accuracy on the test clients of the two datasets, respectively. Compared to the baseline, this is a 462$\%$ improvement in accuracy together with a 37.46$\%$ reduction in communication resource consumption. The results demonstrate that the proposed FedTOP can serve as a highly accurate, streamlined, privacy-preserving, cybersecurity-oriented, personalized framework for DMA.

###### Index Terms:

Federated learning, internet of things (IoT), driver monitoring, privacy protection, personalization.

## I Introduction

With the rapid development of sensing, computing, and communication technologies, the internet of things (IoT) is a popular solution to problems in industry, agriculture, energy, transportation, etc. However, privacy in IoT is a significant concern that has often been raised due to the intrusive behavior of sensors [1]. Specifically, the internet of vehicles (IoV) massively parallelizes vehicles and the various sensors they carry, including global positioning system (GPS), radar, camera, light detection and ranging (LiDAR), etc., enabling pedestrian detection [2], automated driving [3], mobility digital twins [4], and other transportation applications. Federated learning (FL) has received extensive attention for protecting user privacy by sharing only model weights and not users' raw data. FL is widely known for its successful business case in Google mobile keyboard prediction [5], and it has become one of the mainstream and thriving solutions for privacy protection and efficient learning.

### I-A Federated Learning and Related Work

FL is a potentially feasible solution to the privacy problem in IoT: it avoids the proliferation, distribution, and exchange of local client data by sharing model parameters after the model is trained on local client data. FL frameworks are widely used in healthcare [6, 7], industry [8, 9], IoV [10, 11], etc., due to their use of large-scale and personalized data in an efficient and privacy-preserving way. Although FL has made significant contributions to massively parallel devices and computations, it still has a notable drawback: it cannot efficiently handle non-independent and identically distributed (non-i.i.d.) data.
The FL framework must therefore be customized according to the features, resources, and constraints of the users, data, clients, and servers involved. Non-i.i.d. data and heterogeneity have always been a challenge and a key research topic in FL [12, 13, 14]. Non-i.i.d. data is a common phenomenon for real-world clients that are scattered and not interoperable: taking IoV as an example, each driver, as a client, is heterogeneous. FedAvg [15], one of the first feasible methods, has been a central subject of research. FedAvg averages all local models to obtain the global model, so a local model may deviate far from the global optimum in the parameter space, which leads to some limitations. It is necessary to ensure that the local model does not deviate from the global model (to prevent overfitting) and, simultaneously, that the local model can effectively learn the local client dataset (to prevent underfitting). Building on FedAvg, FedProx [16] was proposed to limit the deviation of the local model from the global model by adding a proximal term.

Besides accuracy, an FL framework in IoT should not underestimate communication and training resource constraints, cybersecurity, and ubiquity. Several recent surveys summarized the challenges, threats, and solutions of the FL decentralization paradigm for IoT, including limited computing power, unreliable and limited availability, local training, accuracy, communication overhead, etc. [17, 18, 19, 20, 21, 22]. Transfer and edge learning are popular solutions for reducing communication resource consumption in FL frameworks. Zhang et al. [23] developed a federated transfer learning framework to detect driver drowsiness, where transfer learning was employed to save communication cost in the FL framework. Su et al. [24] introduced edge servers as a collaborative mechanism, where local models were aggregated at the edge server and then sent to the global server for global aggregation. The benefit of the additional edge server was that communication between the massively parallel clients and the edge server was inexpensive, because the edge server was geographically close to the clients; high latency and intermittent connections could thus be mitigated. In addition, the edge server could provide personalized aggregated local models due to the similarity of geographically adjacent clients.

Cyber attacks are a problem that cannot be ignored in FL frameworks. Sun et al. [25] developed an attack method for FL frameworks in IoT, in which a bi-level optimization framework was proposed to compute optimal poisoning attacks on the FL framework, including direct, indirect, and hybrid attacks. Meanwhile, Zhang et al. [26] utilized a generative adversarial network (GAN)-based approach to attack the FL framework; notably, the attacker did not need any prior knowledge to carry out the attack. Personalization is a common approach for FL frameworks to improve applicability for diverse users [27]. Fallah et al. [28] proposed a personalized variant of FL, which allowed clients to perform several gradient descent iterations on an initial global model using local data to obtain a personalized local model. Wu et al. [29] explored a cloud-edge-based personalized FL framework for in-home health monitoring, which addressed the problem that a single global model performs poorly on a specific client.
Since the global model could only capture the common features of all clients, it lacked the ability to analyze fine-grained information of specific clients.

### I-B Federated Learning in Driver Monitoring Applications

Driver monitoring application (DMA) in IoV is adopted as the research direction of this paper due to its real and visual image data, valuable application scenarios, and relatively unexplored research area. DMA also poses challenges in terms of driver privacy, communication, and diverse and personalized driver behavior. The related DMA literature covers a wide variety of devices and algorithms for different purposes, such as dangerous state detection [30], driver emotion recognition [31], driver lane change inference [32], etc. Compared to other methods [33, 34, 35], FL not only enables efficient learning but also effectively protects the privacy of driver, passenger, and pedestrian biometric information, driving routes, and confidential driving areas such as military installations.

In this paper, we introduce and adapt FL to DMA. Although some FL frameworks exist for DMA, they all suffer from critical problems. Doshi et al. [36] proposed an FL edge-device framework to obtain a global model by aggregating feature representations and achieved considerable accuracy in recognizing driver activities. For the i.i.d. setting, the dataset was partitioned randomly across edge nodes, while for the non-i.i.d. setting, the dataset was assigned selectively. Zhao et al. [37] proposed an FL framework to monitor fatigue driving, where the non-i.i.d. setting was simulated by controlling the number of images per client. These FL frameworks for DMA did not reflect the actual application setting but instead artificially created simulation scenarios. Therefore, there is an urgent need for realistic analysis and research on real-world DMA, in which each user (driver) exists independently and data is not interoperable across clients (vehicles). Moreover, in addition to test datasets, the test client is also a critical evaluation criterion, as it reflects the universality of the FL framework. We summarize the neglected issues and challenges in current FL frameworks for DMA as follows.

* Clients in FL frameworks for DMA are often defined in unreasonable and incomprehensible forms. A realistic and natural definition of a client is a driver or a vehicle.
* No prior work tests on a testing client (one not involved in the training process), so the universality of the FL framework goes unexamined.
* In the DMA scenario, there is great diversity and individuality in driver behaviors, postures, and facial expressions, which calls for more personalized studies than other general IoV scenarios.
* Similarly, DMA also involves diverse settings, including different vehicle models, interior colors, seat positions, etc., which greatly increase the learning difficulty.

### I-C Proposed Solution and Contribution

In this paper, we aim to propose an FL framework applicable and specific to practical applications in IoV, especially DMA; an envisioned FL framework for IoV is illustrated in Fig. 1. Each local client, i.e., vehicle, includes a training module and a perception module. The training module uploads the model parameters to the server after training on the local data.
After aggregating and optimizing the parameters of the local client models, the server sends the global model parameters down to the perception module of each local client. Moreover, transfer learning can be used to reduce the number of trainable parameters, resulting in reduced communication consumption. The server can maintain different global models for different scenarios, such as road types, weather types, and vehicle types, so that the model has better applicability.

Figure 1: Structural illustration of an FL framework for IoV. The server interacts with the local clients and saves different scenarios as different models. Transparent neurons are non-trainable parameters, and non-transparent neurons are trainable parameters.

Therefore, a federated transfer-ordered-personalized learning (FedTOP) framework is proposed to address the problems of accuracy, cybersecurity, communication resources, and diversified scenarios. In addition to the transfer-extension shown in Fig. 1, the FedTOP framework also enhances robustness and cybersecurity by orderly dropping out clients whose data may be overfitted or poisoned. Furthermore, the FedTOP framework remarkably improves accuracy by adapting to all clients through the personalized-extension. The contributions of this paper are:

* For realistic problems and usage scenarios in DMA, we propose a feasible FL framework, FedTOP, realizing privacy protection, high accuracy, low communication requirements, cybersecurity, and pervasiveness. To the best of our knowledge, this is one of the first papers to establish a feasible FL framework for DMA.
* The proposed FedTOP framework is tested on two real-world driver monitoring datasets with and without system heterogeneity, systematically characterizing system heterogeneity in real-world datasets and achieving considerable accuracies of 92.32$\%$ and 95.96$\%$, respectively.
* The experiments highlight a realistic and natural client setup, i.e., drivers and vehicles naturally form clients. Moreover, we innovatively propose evaluation criteria for training and testing clients to test the generalization ability of the proposed FedTOP on different clients.
* Through an ablation study, we demonstrate the performance and utility of the transfer, ordered, and personalized extensions. These detachable extensions can be selectively installed according to the task description, and the FL framework combined with different extensions can effectively adapt to different IoT application scenarios.

The remainder of this paper is organized as follows. The problem statement and proposed solution are described in Section II. The experimental setup, heterogeneity analysis, and results are presented in Section III. Section IV discusses the performance of the three extensions of the proposed framework, followed by Section V, which summarizes the paper and expounds on future work.

## II Methodologies

### II-A Problem Statement

The FL framework protects privacy, increases training efficiency, and saves communication resources by sharing only model parameters in IoT. In this paper, the FL framework is used to solve a driver activity classification task in DMA. Clients in real-world IoT are independent and heterogeneous because each client has only a minimal number of users. Considering more general application scenarios, the global model $\omega$ aggregated from the training clients $C$ needs to be compatible not only with $C$ but also with non-training clients $C^{\prime}$.
The data $D_{c}$ of each client are non-i.i.d. since the data are not interoperable across clients. We can consider a nested model

$L_{c}=\omega_{c}(D_{c}),$ (1)

where $\omega_{c}$ is the classifier model corresponding to client $c\in C$, $D_{c}\in\mathbb{R}^{n_{c}\times i\times j\times d}$ is the image set with $n_{c}$ samples, $i$ rows, $j$ columns, and $d$ channels, and $L_{c}\in\mathbb{Z}^{n_{c}}$ is the corresponding label set. The global model $\omega$ is obtained by aggregating, e.g., averaging, the weights of the local models,

$\omega=\sum_{c\in C}p_{c}\omega_{c}=\mathbb{E}[\omega_{c}|c\in C],$ (2)

where $p_{c}\in[0,1]$ is a weight density function over clients, with $\sum p_{c}=1$; $p_{c}$ is assigned according to the number of samples. Therefore, the optimization problem of the FL algorithm can be formulated as minimizing the global loss, which is equivalent to minimizing the weighted sum of the local losses,

$\min_{\omega}\mathcal{L}(\omega)=\sum_{c\in C}p_{c}\mathcal{L}(\omega_{c})=\mathbb{E}[\mathcal{L}(\omega_{c})|c\in C],$ (3)

where $\mathcal{L}$ is the loss function, to be specified later.

For real-world classification tasks, we assume that the distribution of the local models in the parameter space follows a multivariate Normal distribution $\omega_{c}\sim\mathcal{N}\left(\mu_{\omega},\sigma^{2}_{\omega}\right)$, where $\mu_{\omega}$ is the mean of all local models and $\sigma^{2}_{\omega}$ is the variance of all local models. Fig. 2 shows the process by which the FL algorithm finds the optimal global model in the parameter space. After the initial model is trained locally, communicated, and aggregated globally, the final global model is obtained by averaging and can be estimated as $\hat{\omega}=\mu_{\omega}$. Especially in the large-scale parallel application scenarios of IoT, by the law of large numbers, $\hat{\omega}=\mu_{\omega}=\omega^{\ast}$ is an unbiased estimate. However, there are still some defects in obtaining the global model through average aggregation. First, there is enormous system heterogeneity in IoT, so the global model cannot ensure high accuracy for all clients. Second, we inevitably need a measure to guard against system heterogeneity and potential attacks and poisoning. As shown in Fig. 2, the farther the optimal local model is from the global model, the lower the accuracy, and vice versa. Therefore, it is conceivable that in an FL problem with heterogeneity, the clients' accuracies will also follow a Normal distribution.

Figure 2: Illustration of how the FL algorithm finds the optimal global model in the parameter space. The shaded areas are accuracy contour regions. The farther the optimal local model dissociates from the global model, the lower the client accuracy. Local models enclosed by the same shaded area have similar accuracies.
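To make the aggregation in (2) concrete, the following is a minimal PyTorch-style sketch (our own illustration, not the authors' code) of sample-size-weighted averaging of client model parameters:

```python
import torch

def aggregate(client_states, client_sizes):
    """Weighted average of client state_dicts per Eq. (2): w = sum_c p_c * w_c,
    with p_c proportional to the number of local samples n_c."""
    total = float(sum(client_sizes))
    weights = [n / total for n in client_sizes]
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(
            p * state[key].float() for p, state in zip(weights, client_states)
        )
    return global_state
```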
### II-B Proposed Solution

Following the problem statement, we propose the FedTOP algorithm to address all of the following issues. First, the aggregation of the global model needs to be more stable, which can be achieved by preventing overfitting of the local models. Second, considering the actual communication conditions in IoT, we propose transfer learning to reduce the number of trainable parameters and hence the communication requirements. Third, the global model should be able to resist interference, attacks, and data poisoning, which can be achieved by orderly dropping out local models with large losses. Fourth, a single global model cannot account for the situation of all clients, especially in the presence of data and system heterogeneity. Therefore, we personalize the global model to suit all the training and testing clients.

We follow FedProx [16] in using a proximal term to prevent the local models $\omega_{c}$ from deviating from the global model $\omega$. The proximal term $\mathcal{L}_{p}$, which computes the distance between the local and global models, is added to the loss function,

$\mathcal{L}_{p}=\frac{\mu}{2}\|\omega_{c}-\omega\|^{2},$ (4)

where $\mu$ is the deviation coefficient, $\omega_{c}$ denotes the local client model parameters, and $\omega$ denotes the global model parameters. The overall loss function is then

$\mathcal{L}=\mathcal{L}_{l}+\mathcal{L}_{p},$ (5)

where $\mathcal{L}_{l}$ is the loss between the true labels and the predicted labels, such as the negative log-likelihood loss used in our experiments.

Figure 3: The global model is shared with training and testing clients after iterative training and optimization on massively parallel training clients. Both training and testing clients are personalized locally and then evaluated on their respective testing sets. Some attacking or poisoning clients are discarded, e.g., Client 2, which has a large loss.

Transfer-extension is a common and popular solution in many learning frameworks. It is particularly favored in FL frameworks because it effectively reduces local client training resources and communication resources. In our experiments, the base model is ResNet34 [38] pre-trained on ImageNet, where only the last residual block and the fully connected layer are trainable parameters. Although ImageNet is a large object classification dataset far from DMA images, the lower layers of convolutional neural networks (CNNs) are similar across tasks and are used to extract generic image features; more attention is therefore given to the upper layers, which produce high-level features and representations. The ratio of the reduced communication resource requirement is approximately equal to the ratio of non-trainable parameters to total parameters,

$\text{Commun}_{\downarrow}\approx\frac{|\omega_{\text{non-trainable}}|}{|\omega|}=37.46\%,$ (6)

where $\text{Commun}_{\downarrow}$ is the reduced communication resource requirement, $|\omega_{\text{non-trainable}}|$ is the number of non-trainable model parameters, and $|\omega|$ is the total number of model parameters. The transfer-extension thus reduces the communication requirement by 37.46$\%$ by decreasing the trainable parameters.
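As an illustration of the transfer-extension, the following is a minimal PyTorch/torchvision sketch of the frozen-backbone model described above (our own reconstruction; the layer choices follow the text, while the head construction matches the LogSoftmax/NLL setup described later in this section):

```python
import torch.nn as nn
import torchvision.models as models

def build_transfer_model(num_classes):
    """ResNet34 pre-trained on ImageNet; only the last residual block
    (layer4) and the fully connected head remain trainable."""
    model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    for p in model.parameters():
        p.requires_grad = False                    # freeze everything ...
    for p in model.layer4.parameters():
        p.requires_grad = True                     # ... except the last block
    # additional fully connected layer + LogSoftmax for the NLL loss
    model.fc = nn.Sequential(
        nn.Linear(model.fc.in_features, num_classes),
        nn.LogSoftmax(dim=1),
    )
    frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"non-trainable / total parameters: {frozen / total:.2%}")  # cf. Eq. (6)
    return model
```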
Ordered-extension orderly drops out clients with enormous variance, which may stem from malicious attacks and poisoning, extensive data and system heterogeneity, or model underfitting. Local clients with large losses are discarded to enhance the applicability of the global model. The ordered-extension not only enhances accuracy and robustness but also secures the global model. After all clients upload their local model parameters and final training losses to the server, the server aggregates only the $q\in\mathbb{N}$, $q\leq|C|$, local models with the lowest losses into the global model. The set of $q$ local models can be expressed as

$C_{q}\in q\text{-}\arg\min_{c\in C}\mathcal{L}(\omega_{c}).$ (7)

Algorithm 1 FedTOP
Input: Communication rounds ($T$), training client set ($C$), training epochs ($E$), initial global model ($\omega^{1}$), loss function ($\mathcal{L}_{l}$), deviation coefficient ($\mu$), number of ordered clients ($q$)
Output: Trained global model ($\omega^{T}$)
for $t=1$ to $T-1$ do
  for $c\in C$ in parallel do
    for $e=1$ to $E-1$ do
      Backpropagate the loss and update the local model $\omega_{c}^{t^{e+1}}\leftarrow\arg\min_{\omega_{c}^{t^{e}}}\mathcal{L}_{l}(\omega_{c}^{t^{e}})+\frac{\mu}{2}\|\omega_{c}^{t^{e}}-\omega^{t}\|^{2}$.
    end for
    Update the local model $\omega_{c}^{t}\leftarrow\omega_{c}^{t^{E}}$.
    The client sends $\omega^{t}_{c}$ to the server.
  end for
  Find the set $C^{t}_{q}$ of top-$q$ clients in $C$ in terms of loss values: $C^{t}_{q}\in q\text{-}\arg\min_{c\in C}\mathcal{L}(\omega^{t}_{c})$.
  The server aggregates $\omega^{t+1}\leftarrow\frac{1}{q}\sum_{c\in C^{t}_{q}}\omega^{t}_{c}$.
end for
Send $\omega^{T}$ to clients $c\in\{C,C^{\prime}\}$ for personalization.

Algorithm 2 Personalized-extension
Input: Training client set ($C$), testing client set ($C^{\prime}$), personalization epochs ($E$), trained global model ($\omega^{T}$), loss function ($\mathcal{L}_{l}$)
Output: Personalized local models ($\omega_{c}$)
for $c\in\{C,C^{\prime}\}$ do
  for $e=1$ to $E-1$ do
    Backpropagate the loss and update the local model $\omega_{c}^{T^{e+1}}\leftarrow\arg\min_{\omega_{c}^{T^{e}}}\mathcal{L}_{l}(\omega_{c}^{T^{e}})$.
  end for
  Update the personalized local model $\omega_{c}\leftarrow\omega_{c}^{T^{E}}$.
end for

Personalized-extension promotes, popularizes, and adapts the global model to the heterogeneity of all clients. As shown in Fig. 2, the global model cannot be applied to all clients due to ubiquitous heterogeneity. The region of interest (ROI) of the model may vary with system heterogeneity, such as different camera angles, seat positions, and vehicle structures, resulting in differences in the relative position of the driver in the image. The personalized-extension therefore trains the global model for several epochs on each client to obtain a more personalized local model and improve accuracy. On the one hand, compared with the traditional FL algorithm, the personalized-extension can significantly and effectively improve accuracy and confidence. On the other hand, compared to training only locally, the personalized FL algorithm improves training efficiency and avoids overfitting of the local model. In particular, the personalized FL algorithm generalizes to non-training clients $C^{\prime}$, which may have minimal training resources: after receiving the global model, the non-training clients $C^{\prime}$ can obtain a highly accurate and reliable local model with minimal training.

The system diagram of the proposed FedTOP is shown in Fig. 3. In the proposed FedTOP framework, the clients communicate with the server for $T$ rounds, and between communications all clients $C$ train for $E$ epochs in parallel. For our preliminary experiments, we set $T=10$ and $E=5$. For the transfer-extension, the local model is a transfer learning model based on ResNet34 pre-trained on ImageNet; only the last residual block and the fully connected layer are set as trainable parameters. In addition, we add an extra fully connected layer to match the number of our classification categories. Based on FedProx, the activation function of the last layer is LogSoftmax, and the loss function $\mathcal{L}_{l}$ is the negative log-likelihood loss.
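A compact sketch of one communication round of Algorithm 1 is given below. It is our own illustration (optimizer settings and data-loader handling are assumptions) and reuses the `aggregate` helper from Section II-A; the proximal term of (4) is added to the NLL loss, and only the top-$q$ clients by final loss are aggregated, per (7).

```python
import copy
import torch
import torch.nn.functional as F

def local_update(model, global_state, loader, epochs, mu, lr=1e-3):
    """Algorithm 1 inner loop: E local epochs of L_l + (mu/2)||w_c - w||^2."""
    opt = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=lr)
    last_loss = 0.0
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = F.nll_loss(model(x), y)           # L_l (model outputs log-probs)
            prox = 0.0                               # L_p, Eq. (4)
            for name, p in model.named_parameters():
                if p.requires_grad:
                    prox = prox + (p - global_state[name].detach()).pow(2).sum()
            loss = loss + 0.5 * mu * prox
            loss.backward()
            opt.step()
            last_loss = loss.item()
    return model.state_dict(), last_loss

def fedtop_round(global_model, client_loaders, q, epochs=5, mu=1.0):
    """One round: train all clients in turn, keep the top-q by loss, aggregate."""
    global_state = copy.deepcopy(global_model.state_dict())
    results = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        results.append(local_update(local, global_state, loader, epochs, mu))
    results.sort(key=lambda r: r[1])                 # order clients by final loss
    kept = [state for state, _ in results[:q]]       # Eq. (7): q-argmin
    return aggregate(kept, [1] * len(kept))          # Algorithm 1 uses a plain mean
```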
Here $\omega^{1}$ is the initial model parameter. The proposed FedTOP is described in Algorithm 1, and the personalization process is described in Algorithm 2.

Figure 4: Example activities of four drivers in each of the SFDDD (texting - right, panels a-d) and DriveAct (magazine reading, panels e-h) datasets.

Figure 5: Sample client image histograms of the SFDDD (a) and DriveAct (b) datasets.

## III Experiment and Results

Considering data and system heterogeneity, experiments are conducted on two open real-world driver monitoring datasets: State Farm Distracted Driver Detection (SFDDD) [39] and DriveAct [40]. In addition to comparing with FedProx as a baseline, this paper also compares the performance of the transfer, ordered, and personalized extensions through an ablation study.

### III-A Experiment Setup

To compare the impact of system heterogeneity on FL frameworks, the proposed FedTOP is tested on driver monitoring datasets with and without system heterogeneity. The SFDDD dataset includes 26 drivers and 10 activities, and the DriveAct dataset includes 15 drivers and 12 activities. The SFDDD dataset involves system heterogeneity; that is, different drivers have different vehicles, seat positions, camera angles, etc., as shown in Fig. 4a-4d. The DriveAct dataset does not involve system heterogeneity; i.e., all subjects had their data collected in the same system: recorded from the same camera angle, different drivers read the same magazine in the same vehicle, as shown in Fig. 4e-4h.

To show the heterogeneity between clients in the two datasets more clearly and visually, Fig. 5 presents histograms of sample images from the two datasets. The SFDDD dataset, with system heterogeneity, exhibits considerably larger differences in histogram distributions than the DriveAct dataset without system heterogeneity, and the mean pixel value of the SFDDD images is larger. A possible reason is that the vehicle interiors in the DriveAct views are darker, so most pixel values are lower. Therefore, the FL framework may be more challenged by scene information, such as different vehicle interiors, when training on the SFDDD dataset.

Clients are naturally divided based on the drivers. To better demonstrate the role of the personalized-extension, the datasets are first divided into training clients and testing clients at a ratio of about 0.8 to 0.2, with $|C_{\text{SFDDD}}|=20$, $|C^{\prime}_{\text{SFDDD}}|=6$, $|C_{\text{DriveAct}}|=12$, and $|C^{\prime}_{\text{DriveAct}}|=3$. Each client's data are then divided into a training set, validation set, and testing set at ratios of 0.7, 0.15, and 0.15, respectively. After the global model is trained on the training sets of the training clients, the final trained global model is shared with all clients for personalization. Personalization uses only the training sets, while the personalized local models are tested on the unseen testing sets. The FL architectures are built on PyTorch and trained on an Intel(R) Core(TM) i9-10850K CPU @ 3.60GHz and an Nvidia GeForce RTX(TM) 3080 GPU.
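The client and data partitioning just described can be summarized by the following short sketch (ours; the function names are placeholders):

```python
import random

def split_clients(driver_ids, test_frac=0.2, seed=0):
    """Split drivers (clients) into training clients C and testing clients C'."""
    ids = sorted(driver_ids)
    random.Random(seed).shuffle(ids)
    n_test = round(test_frac * len(ids))
    return ids[n_test:], ids[:n_test]                # C, C'

def split_samples(samples, ratios=(0.7, 0.15, 0.15)):
    """Per-client split into training / validation / testing sets."""
    n = len(samples)
    n_train, n_val = int(ratios[0] * n), int(ratios[1] * n)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])
```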
### III-B Ablation Study and Results

We explore the role of each FedTOP extension on the two real-world datasets through an ablation study, with FedProx as the baseline for comparison. Under the experimental setup described in the previous subsection, the experimental results are shown in Table I.

TABLE I: Performance of FedTOP and ablation study on the SFDDD and DriveAct datasets.

Dataset | Method 1 | $|C|$ | $q$ | $\mu$ | Transfer | Training Acc. ($\%$) 2 | Testing Acc. ($\%$) 2 | $\text{Time}_{\downarrow}$ ($\%$) 3 | $\text{Commun}_{\downarrow}$ ($\%$) 4 | Cybersecurity
---|---|---|---|---|---|---|---|---|---|---
SFDDD | FedProx (baseline) | 20 | 20 | 1 | No | 54.63 | 16.44 | $\sim$ | $\sim$ | $\sim$
SFDDD | FedOP | 20 | 15 | 1 | No | 97.69 | 96.37 | 1.45 $\downarrow$ | $\sim$ | $\uparrow$
SFDDD | FedTP | 20 | 20 | 1 | Yes | 94.76 | 92.8 | 17.3 $\downarrow$ | 37.46 $\downarrow$ | $\sim$
SFDDD | FedTO | 20 | 15 | 1 | Yes | 46.16 | 16.43 | 18.91 $\downarrow$ | 37.46 $\downarrow$ | $\uparrow$
SFDDD | FedTOP | 20 | 15 | 1 | Yes | 94.65 | 92.32 | 18.91 $\boldsymbol{\downarrow}$ | 37.46 $\boldsymbol{\downarrow}$ | $\boldsymbol{\uparrow}$
DriveAct | FedProx (baseline) | 12 | 12 | 1 | No | 73.18 | 23.96 | $\sim$ | $\sim$ | $\sim$
DriveAct | FedOP | 12 | 10 | 1 | No | 98.07 | 97.97 | 0.44 $\downarrow$ | $\sim$ | $\uparrow$
DriveAct | FedTP | 12 | 12 | 1 | Yes | 97.00 | 95.71 | 16.83 $\downarrow$ | 37.46 $\downarrow$ | $\sim$
DriveAct | FedTO | 12 | 10 | 1 | Yes | 62.30 | 22.89 | 19.18 $\downarrow$ | 37.46 $\downarrow$ | $\uparrow$
DriveAct | FedTOP | 12 | 10 | 1 | Yes | 97.04 | 95.96 | 19.18 $\boldsymbol{\downarrow}$ | 37.46 $\boldsymbol{\downarrow}$ | $\boldsymbol{\uparrow}$

1 FedOP, FedTP, and FedTO refer to ablating the transfer, ordered, and personalized extensions of the FL framework, respectively. 2 Accuracy refers to the testing sets of the training clients and of the testing clients, as described in Section III-A. 3 $\text{Time}_{\downarrow}$ is the ratio of reduced per-client training time to the baseline. 4 $\text{Commun}_{\downarrow}$ is the ratio of reduced communication consumption to the baseline, as described in (6).

Figure 6: Accuracy and loss curves of the FL framework and its extensions (panels FedProx, FedT, FedO, and FedTO for each of the SFDDD and DriveAct datasets), i.e., the training process of Algorithm 1. Personalization does not affect the convergence of the global model in the FL framework.

Figure 7: Testing accuracy of the training and testing clients on the (a) SFDDD and (b) DriveAct datasets as a function of personalization epoch, i.e., the testing results of Algorithm 2.

The results and comparisons for the two datasets and three extensions are shown in Fig. 6, which demonstrates Algorithm 1. Observing the accuracy and loss curves on the two datasets, we conclude that the SFDDD dataset with system heterogeneity behaves fundamentally differently from the DriveAct dataset without system heterogeneity. The SFDDD dataset clearly requires more communication rounds to converge, while the DriveAct dataset converges quickly, especially at the first communication. Therefore, for real-world datasets, system heterogeneity can be mitigated by more communication rounds. Observing Fig. 6c, 6d, 6g, and 6h, we find that the ordered-extension diminishes the stability of the system.
Although discarding the anomalous large-loss local model reduces the bias of the global model, it also increases the variance of the global model, resulting in reduced generalizability. Observing Fig. 6b, 6d, 6f, and 6h, we see that the effect of the transfer-extension differs between datasets with and without system heterogeneity. On the one hand, the transfer-extension increases the variance of the model on the SFDDD dataset and leads to reduced and unstable model convergence. On the other hand, the transfer-extension improves the speed of model convergence on DriveAct, and the convergence is more stable. A possible reason is that the transfer-extension retains only a small number of trainable parameters, so the neural network cannot effectively learn human behavioral features in the SFDDD dataset with system heterogeneity. For the DriveAct dataset without system heterogeneity, however, all factors except the driver are constant, and the local model does not need to attend to these identical pixels, but only to the changing ones, including objects such as drivers, computers, and magazines. Therefore, for the DriveAct dataset, the transfer-extension can effectively improve convergence and stability.

The proposed FedTOP framework obtains 92.32$\%$ and 95.96$\%$ accuracy on the SFDDD and DriveAct datasets, respectively, after five epochs of personalization training. Compared to the FedProx baseline, FedTOP effectively improves accuracy by 462$\%$ while also reducing communication resources by 37.46$\%$. The results demonstrate the feasibility of the proposed FedTOP in terms of communication resource saving, accuracy improvement, robustness, and cybersecurity.

Figure 8: CAMs of the test clients in the SFDDD and DriveAct datasets during the personalization process, showing the trained global model $\omega^{T}$ and personalization epochs 1, 3, and 5 ($\omega^{T^{1}}$, $\omega^{T^{3}}$, $\omega^{T^{5}}$). Panels (a)–(d) show a test client in the SFDDD dataset, the same as Fig. 4a; panels (e)–(h) show a test client in the DriveAct dataset, the same as Fig. 4e.

### III-C Performance of Personalized-Extension

The personalized-extension merits further discussion and analysis as the most effective approach to improving accuracy. Based on the division of training and testing clients in Section III-A, in this subsection we further discuss how the trained and aggregated global model adapts to both training and testing clients. The results of the personalized-extension on the two datasets are shown in Fig. 7 for different personalization epochs, which demonstrates Algorithm 2. The personalization process differs significantly between the datasets with and without system heterogeneity, consistent with the results in Fig. 6. The clients in the DriveAct dataset show faster convergence, smaller accuracy variance, and higher final accuracy. In contrast, the clients in the SFDDD dataset not only converge more slowly but also include an anomalous client with relatively low accuracy. A possible reason is that the anomalous client has large data and system heterogeneity, causing its optimal model to deviate significantly from the aggregated global model. A minimal sketch of the per-client fine-tuning step behind these curves is given below.
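The following is a hedged sketch of the personalization step (Algorithm 2), assuming a ResNet-style backbone whose classification head is named `fc` and whose last activation is LogSoftmax as described in Section II; the function name and hyperparameter defaults are ours.

```python
import copy
import torch

def personalize(global_model, train_loader, epochs=5, lr=1e-3):
    """Fine-tune a copy of the trained global model on one client's local
    training set only, keeping the low-level (ImageNet-pretrained) layers
    frozen as in the transfer-extension."""
    model = copy.deepcopy(global_model)      # each client personalizes its own copy
    for p in model.parameters():
        p.requires_grad = False
    for p in model.fc.parameters():          # assumption: only the head trains
        p.requires_grad = True
    opt = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=lr)
    nll = torch.nn.NLLLoss()                 # pairs with a LogSoftmax output layer
    for _ in range(epochs):                  # e.g. the five personalization epochs
        for x, y in train_loader:
            opt.zero_grad()
            nll(model(x), y).backward()      # model outputs log-probabilities
            opt.step()
    return model
```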
Fig. 8 further demonstrates that the trained global model repositions the region of interest (ROI) during the personalized training process, visualized via class activation maps (CAMs) [41]. The test client of the SFDDD dataset can be seen struggling with the personalization process. The trained global model focuses the ROI on the seat backrest, the driver’s chest, hand, and knee, and the vehicle door. Due to the system heterogeneity present in the SFDDD dataset, the positions of the driver, seat, and steering wheel shown in Fig. 8a differ from those of other clients, shown in Fig. 4b, 4c, and 4d. Therefore, the initial ROI likely corresponds to a driver’s position among the other clients. Over the five personalization training epochs, the local model is able to effectively reposition the ROI onto the driver, which is exactly what the personalized-extension is intended to show. Moreover, the personalization process also reduces the number of ROIs while concentrating more attention on a specific area. In contrast, for the test clients in the DriveAct dataset, the adjustment of the ROI is negligible. Note that the ROI does not necessarily have to cover the driver’s body or an object such as the magazine. The ROI should cover those pixels that can distinguish between different activities, such as static activities like reading the magazine and dynamic activities like fastening a seatbelt in the DriveAct activity setting. These ROIs focus on areas where large differences are likely to occur. The fact that the ROIs in the DriveAct dataset cover almost the same pixels throughout the personalization process further evidences the negative impact of system heterogeneity on the FL framework. A hedged sketch of how such CAM heatmaps can be produced is given below.
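The cited visualization method is Smooth Grad-CAM++ [41]; as a simpler stand-in, the sketch below computes a vanilla Grad-CAM heatmap with PyTorch hooks. The choice of target layer is an assumption (e.g. the last block of a ResNet backbone), and none of this is taken from the paper's implementation.

```python
import torch

def grad_cam(model, layer, image, class_idx):
    """Plain Grad-CAM sketch: weight the target layer's activations by the
    spatial average of the class-score gradients, then ReLU the weighted sum."""
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    score = model(image.unsqueeze(0))[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    w = grads["g"].mean(dim=(2, 3), keepdim=True)     # per-channel weights
    cam = torch.relu((w * acts["a"]).sum(dim=1))[0]   # (H, W) heatmap
    return cam / (cam.max() + 1e-8)

# Hypothetical usage for a ResNet-style model:
#   heatmap = grad_cam(model, model.layer4[-1], image_tensor, predicted_class)
```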
## IV Discussion

The two datasets used, SFDDD and DriveAct, still have some limitations. First, although the SFDDD dataset takes system heterogeneity into account, quite a few drivers collected data in the same vehicle; that is, the number of clients is greater than the number of distinct collection setups. Therefore, some differences remain between the dataset and real-world data, which means the proposed FedTOP may need more communication rounds to achieve similar accuracy on a real-world dataset. Second, no driver monitoring dataset with real poisoned data currently exists, so the effect of the ordered-extension cannot be fully demonstrated. Different camera modalities, positions, and angles, or methods of generating fake data, may serve as proxies for poisoned data, but they cannot be treated as real. Moreover, due to road safety guidelines, the current datasets cover only driving on safe roads or simulated driving. Therefore, the driver’s posture, demeanor, facial concentration, etc., are far from real driving behavior. There is thus an urgent need for a more realistic dataset that includes camera images from different positions and angles, different vehicle scenes, and more drivers driving on real roads.

For an FL framework in IoT, in addition to accuracy as an evaluation criterion, factors like communication requirements, robustness, fairness, and cybersecurity also need to be considered. Although the transfer and ordered extensions may not improve accuracy, and may even reduce it, in the current experimental results, they can potentially improve the overall performance of the FL framework. Therefore, we keep both extensions as one of our future directions. The personalized-extension is an approach similar to transfer learning and incremental learning. On the one hand, the local client model is learned incrementally from the trained global model, though it does not intentionally retain previously learned knowledge. On the other hand, the global model is transferred to the client dataset as in the transfer-extension, while the low-level non-trainable weights remain pre-trained on ImageNet. Therefore, the proposed personalized-extension effectively uses the trained global model weights to fit different client data, e.g., by repositioning the ROIs. Although the personalized-extension requires additional local training for each client, it brings many benefits, including high accuracy, applicability to non-training clients, and customization. Conceivably, the personalized-extension can effectively address the problem of system heterogeneity, e.g., it can adapt to different cameras, camera angles, vehicle interiors, etc.

## V Conclusion

In this paper, we propose an FL framework, FedTOP, for DMA to address the issues of privacy preservation, efficient training, communication resource saving, poisoned data, and diversified scenarios. Through the ablation study, the impact, role, and performance of the three extensions (transfer, ordered, and personalized) are disclosed. Moreover, the experiments demonstrate dramatic differences between datasets with and without system heterogeneity. In addition to exhibiting 92.32$\%$ and 95.96$\%$ accuracy for testing clients on the two datasets, the proposed FedTOP also reduces communication consumption by 37.46$\%$ and potentially improves cybersecurity. The experimental results show that the proposed FedTOP is a highly accurate, lightweight, privacy-preserving, robust, cybersecure, and universally applicable FL framework for potential DMA.

Future work lies in the continued research of the extensions. For the ordered-extension, a possible plan is to introduce some malicious local clients that attack and poison the global model. For example, subjects may not place the camera on the side as instructed but place it in front or behind instead. Such outliers may cause the global model to deviate significantly from the optimal solution; in this case, the ordered-extension can prevent the deviation of the global model by discarding the larger losses. For the transfer-extension, there is currently a lack of a general driver monitoring model, so we used a model pre-trained on ImageNet. Future work could pre-train a dedicated driver model as the base model, which should yield better performance in DMA. Fig. 1 shows the FL framework for foresight in IoV, but the datasets used do not contain scenario information such as road, weather, vehicle models, etc. Therefore, we expect a well-developed real-world dataset to include such scenario information, data and system heterogeneity, etc.

## References

* [1] Y. Yang, L. Wu, G. Yin, L. Li, and H. Zhao, “A survey on security and privacy issues in internet-of-things,” _IEEE Internet Things J._, vol. 4, no. 5, pp. 1250–1258, Apr. 2017.
* [2] J. Cao, Y. Pang, J. Xie, F. S. Khan, and L. Shao, “From handcrafted to deep features for pedestrian detection: a survey,” _IEEE Trans. Pattern Anal. Mach. Intell._, Apr. 2021.
* [3] S. Kuutti, S. Fallah, K. Katsaros, M. Dianati, F. Mccullough, and A. Mouzakitis, “A survey of the state-of-the-art localization techniques and their potentials for autonomous vehicle applications,” _IEEE Internet Things J._, vol. 5, no. 2, pp. 829–846, Mar. 2018.
* [4] Z. Wang, R. Gupta, K. Han, H.
Wang, A. Ganlath, N. Ammar, and P. Tiwari, “Mobility digital twin: Concept, architecture, case study, and future challenges,” _IEEE Internet Things J._, Mar. 2022.
* [5] A. Hard, K. Rao, R. Mathews, S. Ramaswamy, F. Beaufays, S. Augenstein, H. Eichner, C. Kiddon, and D. Ramage, “Federated learning for mobile keyboard prediction,” _arXiv preprint arXiv:1811.03604_, 2018.
* [6] I. Dayan, H. R. Roth, A. Zhong, A. Harouni, A. Gentili, A. Z. Abidin, A. Liu, A. B. Costa, B. J. Wood, C.-S. Tsai _et al._, “Federated learning for predicting clinical outcomes in patients with covid-19,” _Nat. Med._, vol. 27, no. 10, pp. 1735–1743, Oct. 2021.
* [7] N. Rieke, J. Hancox, W. Li, F. Milletari, H. R. Roth, S. Albarqouni, S. Bakas, M. N. Galtier, B. A. Landman, K. Maier-Hein _et al._, “The future of digital health with federated learning,” _NPJ Digit. Med._, vol. 3, no. 1, pp. 1–7, Sep. 2020.
* [8] M. Hao, H. Li, X. Luo, G. Xu, H. Yang, and S. Liu, “Efficient and privacy-enhanced federated learning for industrial artificial intelligence,” _IEEE Trans. Industr. Inform._, vol. 16, no. 10, pp. 6532–6542, Oct. 2019.
* [9] Y. Lu, X. Huang, Y. Dai, S. Maharjan, and Y. Zhang, “Blockchain and federated learning for privacy-preserved data sharing in industrial iot,” _IEEE Trans. Industr. Inform._, vol. 16, no. 6, pp. 4177–4186, Sep. 2019.
* [10] Z. Du, C. Wu, T. Yoshinaga, K.-L. A. Yau, Y. Ji, and J. Li, “Federated learning for vehicular internet of things: Recent advances and open issues,” _IEEE Open J. Comput. Soc._, vol. 1, pp. 45–61, May 2020.
* [11] X. Kong, K. Wang, M. Hou, X. Hao, G. Shen, X. Chen, and F. Xia, “A federated learning-based license plate recognition scheme for 5g-enabled internet of vehicles,” _IEEE Trans. Industr. Inform._, vol. 17, no. 12, pp. 8523–8530, Mar. 2021.
* [12] F. Sattler, S. Wiedemann, K.-R. Müller, and W. Samek, “Robust and communication-efficient federated learning from non-iid data,” _IEEE Trans. Neural Netw. Learn. Syst._, vol. 31, no. 9, pp. 3400–3413, Nov. 2019.
* [13] S. P. Karimireddy, S. Kale, M. Mohri, S. Reddi, S. Stich, and A. T. Suresh, “Scaffold: Stochastic controlled averaging for federated learning,” in _International Conference on Machine Learning_. PMLR, 2020, pp. 5132–5143.
* [14] S. Horvath, S. Laskaridis, M. Almeida, I. Leontiadis, S. Venieris, and N. Lane, “Fjord: Fair and accurate federated learning under heterogeneous targets with ordered dropout,” _Adv. Neural Inf. Process. Syst._, vol. 34, pp. 12876–12889, 2021.
* [15] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in _Artificial Intelligence and Statistics_. PMLR, 2017, pp. 1273–1282.
* [16] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated optimization in heterogeneous networks,” _Proceedings of Machine Learning and Systems_, vol. 2, pp. 429–450, 2020.
* [17] B. Ghimire and D. B. Rawat, “Recent advances on federated learning for cybersecurity and cybersecurity for federated learning for internet of things,” _IEEE Internet Things J._, Feb. 2022.
* [18] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated learning: Challenges, methods, and future directions,” _IEEE Signal Process. Mag._, vol. 37, no. 3, pp. 50–60, May 2020.
* [19] S. Niknam, H. S. Dhillon, and J. H. Reed, “Federated learning for wireless communications: Motivation, opportunities, and challenges,” _IEEE Commun. Mag._, vol. 58, no. 6, pp. 46–51, Jul. 2020.
* [20] L. Lyu, H. Yu, and Q.
Yang, “Threats to federated learning: A survey,” _arXiv preprint arXiv:2003.02133_, Mar. 2020.
* [21] P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings _et al._, “Advances and open problems in federated learning,” _Found. Trends Mach. Learn._, vol. 14, no. 1–2, pp. 1–210, Jun. 2021.
* [22] Q. Li, Z. Wen, Z. Wu, S. Hu, N. Wang, Y. Li, X. Liu, and B. He, “A survey on federated learning systems: vision, hype and reality for data privacy and protection,” _IEEE Trans. Knowl. Data Eng._, Nov. 2021.
* [23] L. Zhang, H. Saito, L. Yang, and J. Wu, “Privacy-preserving federated transfer learning for driver drowsiness detection,” _IEEE Access_, vol. 10, pp. 80565–80574, Jul. 2022.
* [24] Z. Su, Y. Wang, T. H. Luan, N. Zhang, F. Li, T. Chen, and H. Cao, “Secure and efficient federated learning for smart grid with edge-cloud collaboration,” _IEEE Trans. Industr. Inform._, vol. 18, no. 2, pp. 1333–1344, Jul. 2021.
* [25] G. Sun, Y. Cong, J. Dong, Q. Wang, L. Lyu, and J. Liu, “Data poisoning attacks on federated machine learning,” _IEEE Internet Things J._, Nov. 2021.
* [26] J. Zhang, B. Chen, X. Cheng, H. T. T. Binh, and S. Yu, “Poisongan: Generative poisoning attacks against federated learning in edge computing systems,” _IEEE Internet Things J._, vol. 8, no. 5, pp. 3310–3322, Sep. 2020.
* [27] A. Z. Tan, H. Yu, L. Cui, and Q. Yang, “Towards personalized federated learning,” _IEEE Trans. Neural Netw. Learn. Syst._, Mar. 2022.
* [28] A. Fallah, A. Mokhtari, and A. Ozdaglar, “Personalized federated learning: A meta-learning approach,” _arXiv preprint arXiv:2002.07948_, Feb. 2020.
* [29] Q. Wu, X. Chen, Z. Zhou, and J. Zhang, “Fedhome: Cloud-edge based personalized federated learning for in-home health monitoring,” _IEEE Trans. Mob. Comput._, Dec. 2020.
* [30] A. Kashevnik, I. Lashkov, and A. Gurtov, “Methodology and mobile application for driver behavior analysis and accident prevention,” _IEEE Trans. Intell. Transp. Syst._, vol. 21, no. 6, pp. 2427–2436, Jun. 2019.
* [31] S. Zepf, J. Hernandez, A. Schmitt, W. Minker, and R. W. Picard, “Driver emotion recognition for intelligent vehicles: A survey,” _ACM Comput. Surv._, vol. 53, no. 3, pp. 1–30, Jun. 2020.
* [32] Y. Xing, C. Lv, H. Wang, H. Wang, Y. Ai, D. Cao, E. Velenis, and F.-Y. Wang, “Driver lane change intention inference for intelligent vehicles: framework, survey, and challenges,” _IEEE Trans. Veh. Technol._, vol. 68, no. 5, pp. 4377–4390, Mar. 2019.
* [33] A. Masood, D. S. Lakew, and S. Cho, “Security and privacy challenges in connected vehicular cloud computing,” _IEEE Commun. Surv._, vol. 22, no. 4, pp. 2725–2764, Jul. 2020.
* [34] S. Kuutti, R. Bowden, Y. Jin, P. Barber, and S. Fallah, “A survey of deep learning applications to autonomous vehicle control,” _IEEE Trans. Intell. Transp. Syst._, vol. 22, no. 2, pp. 712–733, Jan. 2020.
* [35] M. Ramzan, H. U. Khan, S. M. Awan, A. Ismail, M. Ilyas, and A. Mahmood, “A survey on state-of-the-art drowsiness detection techniques,” _IEEE Access_, vol. 7, pp. 61904–61919, May 2019.
* [36] K. Doshi and Y. Yilmaz, “Federated learning-based driver activity recognition for edge devices,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 3338–3346.
* [37] C. Zhao, Z. Gao, Q. Wang, K. Xiao, Z. Mo, and M. J. Deen, “Fedsup: A communication-efficient federated learning fatigue driving behaviors supervision approach,” _Future Gener. Comput. Syst._, vol. 138, pp.
52–60, Jan. 2023.
* [38] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2016, pp. 770–778.
* [39] State Farm, “State farm distracted driver detection,” Apr. 2016. [Online]. Available: https://www.kaggle.com/competitions/state-farm-distracted-driver-detection/overview/description
* [40] M. Martin, A. Roitberg, M. Haurilet, M. Horne, S. Reiß, M. Voit, and R. Stiefelhagen, “Drive&act: A multi-modal dataset for fine-grained driver behavior recognition in autonomous vehicles,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 2801–2810.
* [41] D. Omeiza, S. Speakman, C. Cintas, and K. Weldermariam, “Smooth grad-cam++: An enhanced inference level visualization technique for deep convolutional neural network models,” _arXiv preprint arXiv:1908.01224_, 2019.

Liangqi Yuan (S’22) received the B.E. degree from the Beijing Information Science and Technology University, Beijing, China, in 2020, and the M.Sc. degree from Oakland University, Rochester, MI, USA, in 2022. He is currently pursuing the Ph.D. degree with the School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA. His research interests are in the areas of sensors, the internet of things, human–computer interaction, signal processing, and machine learning.

Lu Su (M’15) is an associate professor in the School of Electrical and Computer Engineering at Purdue University. His research interests are in the general areas of Internet of Things and Cyber-Physical Systems, with a current focus on wireless, mobile, and crowd sensing systems. He received his Ph.D. in Computer Science and M.S. in Statistics, both from the University of Illinois at Urbana-Champaign, in 2013 and 2012, respectively. He has also worked at IBM T. J. Watson Research Center and the National Center for Supercomputing Applications. He has published more than 100 papers in refereed journals and conferences, and serves as an associate editor of ACM Transactions on Sensor Networks. He is the recipient of the NSF CAREER Award, the University at Buffalo Young Investigator Award, the ICCPS’17 best paper award, and the ICDCS’17 best student paper award. He is a member of ACM and IEEE.

Ziran Wang (S’16-M’19) received the Ph.D. degree from the University of California, Riverside in 2019. He is an Assistant Professor in the College of Engineering at Purdue University, and was a Principal Researcher at Toyota Motor North America. He serves as Founding Chair of the IEEE Technical Committee on Internet of Things in Intelligent Transportation Systems, and Associate Editor of four academic journals, including IEEE Internet of Things Journal and IEEE Transactions on Intelligent Vehicles. His research focuses on automated driving, human-autonomy teaming, and digital twin.
# No-resonance conditions, random matrices, and quantum chaotic models

Jonathon Riddell<EMAIL_ADDRESS>School of Physics and Astronomy, University of Nottingham, Nottingham, NG7 2RD, UK Nathan Pagliaroli<EMAIL_ADDRESS>Department of Mathematics, Western University, 1151 Richmond St, London ON N6A 3K7, Canada

###### Abstract

In this article we investigate no-resonance conditions for quantum chaotic and random matrix models. No-resonance conditions are properties of the spectrum of a model, usually employed as a theoretical tool in the analysis of late time dynamics. The first order no-resonance condition holds when a spectrum is non-degenerate, while higher order no-resonance conditions require that sums of an equal number of energies are non-degenerate outside of permutations of the indices. The condition is usually assumed to hold for quantum chaotic models. In this work we use several tests from random matrix theory to demonstrate that no-resonance conditions are likely to be violated for all equal sums containing greater than one energy. This is due to the presence of level-attraction in the spectra after resolving appropriate symmetries. This result is produced for both a quantum chaotic Hamiltonian and two random matrix models. We then generalize important bounds in quantum equilibration theory to a case where the conditions are violated, and to the case of random matrix models.

One of the most ubiquitous observations in many body physics is the connection between the spectral statistics of many body quantum systems and those of random matrices. Quantum systems are not chaotic in the classical sense, since unitary time evolution guarantees that the overlap between two states is constant in time. This excludes the classical notion of chaos in quantum systems, for which we would observe exponential sensitivity to small differences in initial conditions. However, the spectral statistics of quantum systems behave qualitatively differently depending on whether their corresponding classical limit is integrable or chaotic. If the classical limit is chaotic, the spectral statistics of the quantum Hamiltonian agree with the predictions of Random Matrix Theory (RMT) and we refer to these models as quantum chaotic Berry (1987). The notion of quantum chaos can be extended to quantum systems that do not have a well-defined classical limit D’Alessio _et al._ (2016). An extremely important property of the spectral statistics of a quantum chaotic Hamiltonian is the presence of level-repulsion amongst neighboring energies. This level-repulsion was first modeled for heavy atomic nuclei by Wigner using Gaussian ensembles of random matrices. Since Wigner’s work, it has been established that features of the spectrum of classically chaotic quantum systems are accurately described by various ensembles of random matrices Porter (1965); Brody _et al._ (1981); Guhr _et al._ (1998); Berry and Tabor (1976, 1977).
The connection between the spectrum of quantum chaotic systems and random matrices has been well studied in single particle systems Jalabert _et al._ (1990); Marcus _et al._ (1992); Milner _et al._ (2001); Friedman _et al._ (2001); Stockmann and Stein (1990); Sridhar (1991); Moore _et al._ (1994); Steck _et al._ (2001); Hensinger _et al._ (2001); Chaudhury _et al._ (2009); Weinstein _et al._ (2002); Zhang _et al._ (2022); Łydżba _et al._ (2021a, b), along with many body systems Santos and Rigol (2010a, b); Rigol (2010); Kollath _et al._ (2010); Santos _et al._ (2012); Richter _et al._ (2020); Atas _et al._ (2013a, b); Šuntajs _et al._ (2020), and recently has seen a surge of interest in the case of circuit or periodically driven type models Chan _et al._ (2018a); D’Alessio and Rigol (2014); Bertini _et al._ (2021a, 2018). The first to extend Wigner’s work were Dyson and Mehta in the series of papers Dyson (1962a, b, c); Dyson and Mehta (1963); Mehta and Dyson (1963). In particular, Dyson classified the three most immediately relevant ensembles: the Gaussian Unitary Ensemble, the Gaussian Orthogonal Ensemble, and the Gaussian Symplectic Ensemble, in what is known as the “threefold way” Dyson (1962d). Of the most immediate interest to this work is the Gaussian Orthogonal Ensemble (GOE). The Bohigas, Giannoni, and Schmit (BGS) conjecture Bohigas _et al._ (1984) states that the GOE has the same level-spacing as a wide class of quantum systems with classical limits Bohrdt _et al._ (2017); Andreev _et al._ (1996); Müller _et al._ (2004). Let $E_{0}\leq E_{1}\leq E_{2}\leq\dots$ be a sequence of unfolded energy eigenvalues of the GOE; then Wigner surmised that the distribution of consecutive level-spacings $s_{k}=E_{k+1}-E_{k}$ is $p(s)=\frac{\pi s}{2}e^{-\pi s^{2}/4}.$ (1) To see how to unfold a spectrum see Chapter 6 of Mehta (2004) or, for example, Bruus and Anglès d’Auriac (1997). It is important to note that Wigner’s surmise is an approximation Mehta (1960) to the actual distribution, originally derived in Jimbo _et al._ (1980). This was further simplified in terms of Painlevé transcendents Forrester and Witte (2000). In contrast to level-repulsion, if one considers the level-spacings of i.i.d. random variables, not only does one not see repulsion, but rather one sees attraction Livan _et al._ (2018), which has been used as a marker for non-chaotic systems D’Alessio _et al._ (2016). In particular, after unfolding, the spacing distribution of such systems is Poisson $p(s)=e^{-s}.$ (2) The presence of level-repulsion and GOE spectral statistics is a hallmark test of quantum chaos, while Poisson statistics are associated with integrable or non-chaotic models. A key consequence of the presence of level-repulsion is that the value of the probability density at zero is zero, meaning that we can assume with high probability that we will not find degeneracies in a quantum chaotic spectrum. This observation is useful, for example, when considering dephasing arguments, which have recently been particularly popular in the quantum equilibration community Alhambra _et al._ (2020); Riddell and Sørensen (2020); Gogolin and Eisert (2016); Masanes _et al._ (2013); Wilming _et al._ (2018); Heveling _et al._ (2020); Campos Venuti and Zanardi (2010); Knipschild and Gemmer (2020); Carvalho _et al._ (2023).
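The contrast between equations 1 and 2 is easy to reproduce numerically; a minimal sketch (ours, using NumPy) follows, exploiting the fact that Wigner's surmise is exact for $2\times 2$ GOE matrices once the spacing is rescaled to unit mean.

```python
import numpy as np

rng = np.random.default_rng(1)

def goe(n):
    """One n x n GOE matrix: a symmetrized standard Gaussian matrix."""
    a = rng.normal(size=(n, n))
    return (a + a.T) / 2

# Collect the single spacing of many 2 x 2 samples; rescale to unit mean.
s = np.array([np.diff(np.linalg.eigvalsh(goe(2)))[0] for _ in range(50000)])
s /= s.mean()
hist, edges = np.histogram(s, bins=40, range=(0, 3), density=True)
mid = (edges[:-1] + edges[1:]) / 2
surmise = np.pi * mid / 2 * np.exp(-np.pi * mid**2 / 4)   # Eq. 1
print(np.max(np.abs(hist - surmise)))   # small, up to sampling noise
```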
If we consider the time-evolution of many dynamical functions under unitary dynamics, time-dependent terms in the series will often appear as the following: $z\,e^{-i(E_{m}-E_{n})t},$ (3) where $z$ is a complex number and $t$ is time. Terms such as these survive the infinite time average if and only if $E_{m}=E_{n}$. In the case of quantum chaotic Hamiltonians it is a safe assumption that any surviving term would imply that $m=n$, since we do not expect degeneracy due to the presence of level-repulsion. The cases where $E_{m}=E_{n}$ and $m\not=n$ are referred to as resonances. However, in general dynamical functions can be more complex, with terms such as $z\,e^{-i(E_{m_{1}}-E_{n_{1}}+E_{m_{2}}-E_{n_{2}}+...)t}.$ (4) Such terms can, for example, appear in out-of-time-ordered correlators or other higher order correlation functions Riddell and Sørensen (2020); Riddell _et al._ (2021); Riddell and Sørensen (2019); Yoshida and Yao (2019); Fortes _et al._ (2019); Shukla _et al._ (2022); Fortes _et al._ (2020). To discuss the terms that survive the infinite time average in equation 4 we introduce the $q$th order no-resonance condition.

###### Definition 1.

Let $H$ be a Hamiltonian with spectrum $H=\sum_{j}E_{j}\ket{E_{j}}\bra{E_{j}}$, and let $\Lambda_{q},\Lambda^{\prime}_{q}$ be two arbitrary sets of $q$ energy levels $\{E_{j}\}$. $H$ satisfies the $q$ no-resonance condition if for all $\Lambda_{q},\Lambda^{\prime}_{q}$, the equality $\sum_{j\in\Lambda_{q}}E_{j}=\sum_{j\in\Lambda^{\prime}_{q}}E_{j}$ (5) implies that $\Lambda_{q}=\Lambda^{\prime}_{q}$.

By definition 1, the terms that satisfy the $q$ no-resonance condition form the minimum set of terms that survive the infinite time average as in equation 4. Terms that fall outside of definition 1 are referred to as $q$-resonances (a brute-force numerical check of this condition is sketched below). Typically in the literature it is suggested that quantum chaotic Hamiltonians satisfy definition 1 Mark _et al._ (2022); Srednicki (1999); Riddell _et al._ (2022). This greatly simplifies arguments involving infinite time averages in quantum chaotic models. Despite this condition being somewhat common in the literature, studies only test this condition for the $q=1$ case, where one finds level-repulsion governed by the Wigner-Dyson distribution Richter _et al._ (2020); D’Alessio _et al._ (2016). As for the $q=2$ case, an explicit formula is known for the density of states Khalkhali and Pagliaroli (2022), but as far as the authors can tell nothing is known about the level-spacing distribution. However, as we will see, the numerical simulations performed in this paper strongly suggest that for the GOE the $q=2$ level-spacing distribution is Poisson. In the appendix we numerically demonstrate that $q=3,4$ also appear Poisson and have level-attraction. We then conjecture that all level spacing distributions for $q\geq 2$ have level-attraction and appear Poissonian.
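Before turning to specific models, the following sketch (ours) makes Definition 1 operational for a small spectrum. It restricts to sets of distinct indices, and because floating-point energies never coincide exactly, a tolerance is used; with a finite `tol` the check flags near-resonances as well as exact ones, at a cost growing like $\binom{\dim\mathcal{H}}{q}$.

```python
from itertools import combinations

def find_q_resonances(energies, q, tol=1e-12):
    """Brute-force test of the q no-resonance condition (Definition 1):
    sort all sums over sets of q distinct levels and flag distinct index
    sets whose sums agree within tol (equal sums become adjacent)."""
    sums = sorted((sum(energies[i] for i in idx), idx)
                  for idx in combinations(range(len(energies)), q))
    return [(i1, i2) for (s1, i1), (s2, i2) in zip(sums, sums[1:])
            if abs(s1 - s2) < tol]

# A harmonic (equally spaced) spectrum violates the q = 2 condition,
# e.g. E_0 + E_3 = E_1 + E_2:
print(find_q_resonances([0.0, 1.0, 2.0, 3.0], q=2))   # [((0, 3), (1, 2))]
```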
## I Spectral statistics for a quantum chaotic Hamiltonian

In this section we first investigate what the spectral statistics look like for a specific quantum chaotic model. In particular we study a Heisenberg-type model with nearest and next-nearest neighbour interactions, $\displaystyle{H}=$ $\displaystyle\sum_{j=1}^{L}J_{1}\left({S}_{j}^{+}S_{j+1}^{-}+\text{h.c.}\right)+\gamma_{1}\,{S}_{j}^{Z}{S}_{j+1}^{Z}$ (6) $\displaystyle+J_{2}\left({S}_{j}^{+}{S}_{j+2}^{-}+\text{h.c.}\right)+\gamma_{2}{S}_{j}^{Z}{S}_{j+2}^{Z},$ (7) where $(J_{1},\gamma_{1},J_{2},\gamma_{2})=(-1,1,-0.2,0.5)$ gives us a non-integrable model. This model has a free limit for $(J_{1},0,0,0)$ and an interacting integrable limit for $(J_{1},\gamma_{1},0,0)$. Recently this model was confirmed to obey the eigenstate thermalization hypothesis LeBlond _et al._ (2019). We perform full-spectrum exact diagonalization in the maximally symmetric sector of this model. In particular, the model conserves the total magnetization $m_{z}=\sum_{j}S_{j}^{Z}$ and is translation invariant. We work in the sector with $\langle m_{z}\rangle=0$ and quasi-momentum $k=0$, which allows us to further block-diagonalize the model using the spatial reflection symmetry $P$ and the spin inversion symmetry $Z$. In this section we focus on the spectral statistics for the cases $q=1$, as a benchmark, and $q=2$, the first no-resonance condition that is unexplored in the literature. As we will show in the appendix, the behavior for $q>2$ is qualitatively similar to $q=2$.

First, let us establish that our model satisfies the usual tests for quantum chaos in the $q=1$ case. Perhaps the most common test is to investigate the level spacing distribution $s_{j}=E_{j+1}-E_{j}$. The act of unfolding allows us to have a universal scale for the comparison of spectra of different Hamiltonians. The distribution of $s_{j}$ for a quantum chaotic model should be a Wigner surmise. To unfold the spectrum we use Gaussian broadening. Namely, we map our energies $E_{k}$ to $\epsilon_{k}$ in the following way Bruus and Anglès d’Auriac (1997), $\epsilon_{k}=N(E_{k}),$ (8) $N(E)=\int_{-\infty}^{E}\sum_{k}\frac{1}{\sigma_{k}\sqrt{2\pi}}e^{-\frac{(e-E_{k})^{2}}{2\sigma_{k}^{2}}}de,$ (9) where we use the same convention as in Bruus and Anglès d’Auriac (1997) and take $\sigma_{k}=0.608\alpha\Delta_{k},$ (10) where $\Delta_{k}=(E_{k+\alpha}-E_{k-\alpha})/{2\alpha}$ and we find that $\alpha=20$ is quite suitable for our spectrum.

Figure 1: (a) Level spacing for $q=1$, $L=24$ unfolded data, which looks approximately like a Wigner surmise exhibiting level repulsion. In black we plot the Wigner surmise and in purple we plot the Poisson distribution. (b) Ratio test for spectral statistics at $L=24$. In black we plot the corresponding GOE distribution.

Fig. 1 demonstrates that our model for $q=1$ has level-repulsion and appears to have a level spacing distribution well approximated by the Wigner surmise. While this result shows us that our spectrum strongly resembles the predictions of RMT, the unfolding procedure is usually chosen to find such agreement, so it is desirable to perform a test that does not need unfolding. Such a test is given by investigating the distribution of ratios between successive gaps Łydżba _et al._ (2021c); Oganesyan and Huse (2007); Atas _et al._ (2013c). We introduce the ratios $r_{j}=\frac{\min\{s_{j},s_{j+1}\}}{\max\{s_{j},s_{j+1}\}},$ (11) which tells us that $r_{j}\in[0,1]$. We emphasize that the $s_{j}$ used here need not be unfolded gaps, so this test can be done with the model’s physical spectrum. For the GOE, it was analytically shown in Atas _et al._ (2013c) that the distribution of the $r_{j}$ for $3\times 3$ matrices is given by $p(r)=\frac{27}{4}\frac{r+r^{2}}{\left(1+r+r^{2}\right)^{\frac{5}{2}}}.$ (12) If instead our energy levels were independent randomly distributed variables, we would get level-attraction, $p(r)=\frac{2}{(1+r)^{2}}.$ (13) We see in Fig. 1 (b) that our result experiences level-repulsion, agreeing with the distribution in equation 12. Both the unfolding map and the ratio test are summarized in the sketch below.
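The following is a minimal implementation sketch (ours, not the authors' code) of the Gaussian-broadening unfolding of equations 8–10 and the gap-ratio statistic of equation 11; the spectrum edges, where the $\pm\alpha$ window is truncated, are handled crudely by clipping.

```python
import numpy as np
from scipy.stats import norm

def unfold_gaussian(E, alpha=20):
    """Gaussian-broadening unfolding (Eqs. 8-10): epsilon_k = N(E_k), where
    N(E) is the level staircase smoothed by Gaussians of width
    sigma_k = 0.608 * alpha * Delta_k (the alpha's cancel in the bulk)."""
    E = np.sort(E)
    k = np.arange(len(E))
    lo = np.clip(k - alpha, 0, len(E) - 1)
    hi = np.clip(k + alpha, 0, len(E) - 1)
    sigma = 0.608 * (E[hi] - E[lo]) / 2
    return np.array([norm.cdf(e, loc=E, scale=sigma).sum() for e in E])

def mean_gap_ratio(E):
    """Ratio test (Eq. 11), computed on the physical (not unfolded) spectrum."""
    s = np.diff(np.sort(E))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()   # ~0.5359 for the GOE, ~0.3863 for Poisson levels
```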
Next we consider the case for $q=2$. The spectrum we are now interested in is equivalent to the spectrum of the Hamiltonian, $\hat{H}_{2}=\hat{H}\otimes\mathbb{I}+\mathbb{I}\otimes\hat{H},$ (14) which has the spectrum $\Lambda_{k,l}=E_{k}+E_{l}$. This construction introduces an unwanted symmetry in the spectrum of $\hat{H}_{2}$, namely that $\Lambda_{k,l}=\Lambda_{l,k}$; that is, the spectrum is invariant under permutations of the individual energies’ indices. For $q=2$ this might be understood as a spatial reflection symmetry of a larger two-component non-interacting system. Addressing this symmetry is simple: we only consider unique pairs $(k,l)$ with $l>k$, and we also ignore the portion of the spectrum where $k=l$. Ignoring $k=l$ does not appear to significantly alter the results but allows us to eliminate trivial multiples of the $q=1$ spectrum. In fact, the contribution of the $k=l$ portion of the spectrum is vanishingly small compared to the total size of our spectrum. We further introduce a new index $\alpha=1,2,\dots$ that orders the spectrum such that $\Lambda_{\alpha}<\Lambda_{\alpha+1}$. With this new spectrum we can analyze the level spacing and ratio distributions.

Figure 2: (a) Level spacing for $q=2$, $L=20$ unfolded data, which looks approximately like a Poisson distribution. (b) Ratio test for spectral statistics at $L=20$ for $q=2$. In both plots we draw the GOE prediction in black and the independent random variable (Poisson) prediction in purple.

Fig. 2 indicates that the spectrum of $\hat{H}_{2}$ experiences level-attraction. This is contrary to the $q=1$ case, which has level-repulsion. Importantly, this indicates that the spectrum of $\hat{H}_{2}$ behaves like that of an integrable model, with gaps clustered around $s=0$. While this does not guarantee violations of the $q=2$ no-resonance condition, it does make violations more likely. Likewise, we expect a large number of pseudo-violations such that $s_{j}=\Lambda_{j+1}-\Lambda_{j}\approx 0$, meaning that unless very large (potentially non-physically large) time scales are considered, these would appear as resonances in the spectrum. Considering this fact, results such as Short (2011); Riddell _et al._ (2022); Srednicki (1999); Mark _et al._ (2022) should be revisited to understand the effects of resonances. In appendix B we demonstrate that the Poisson statistics and level-attraction persist for higher values of $q$ and conjecture that level-attraction persists for all values of $q>1$.

One further test we can perform is to compute the actual average value of $r$ observed in the ratio distribution: $\langle r\rangle=2\ln 2-1\approx 0.38629436112$ for Poisson systems and $\langle r\rangle=4-2\sqrt{3}\approx 0.535898384$ for the GOE. Testing this quantity allows us to clearly observe convergence to the predictions of random matrix theory as a function of system size. We see this test in Fig. 3. In the right panel we see the test for $q=2$, which reveals strong convergence to the Poisson prediction. The data at $L=22$ give $\langle r\rangle=0.386294325894$, which agrees with the Poisson value to seven decimal places. Therefore, from the perspective of short-range correlations in the spectrum, we conclude that $\hat{H}_{2}$ obeys Poisson statistics and, importantly, that the $q=2$ case experiences level-attraction.

Figure 3: Plotted data for the convergence of $\langle r\rangle$ to the RMT predictions. In black we plot the GOE prediction and in purple the corresponding Poisson prediction.
Left: We see the $q=1$ data converge to the GOE prediction as a function of system size. Right: We see the $q=2$ data.

In appendix B, we demonstrate that this level-attraction persists for higher values of $q$ and speculate that for all $q>1$ the spectrum must experience level-attraction. In appendix A we repeat our numerical studies for random matrices, showing that our results for a quantum chaotic Hamiltonian agree with the results of RMT. Importantly, our tests here are local tests on the spectrum. It remains an open question whether the symmetry-resolved Hamiltonian $\hat{H}_{2}$ still obeys Poisson statistics under more complex tests, such as the spectral form factor Bertini _et al._ (2021b); Chan _et al._ (2018b). We leave this question to future work.

An important observation from this work is that when studying $q>1$ we never find cases where the spectral statistics are not Poisson. In some models, such as the tight-binding model of free fermions, we expect $p(s)$ to become a delta function in the thermodynamic limit. Crucially, this implies the $q$ no-resonance condition is robustly violated for free fermions. We do not observe any such behavior in the spectral statistics for $q>1$ when starting from a chaotic Hamiltonian, implying violations should be rare or non-existent. We emphasize that the presence of level-attraction does not imply violations of the $q>1$ no-resonance condition. It does, however, imply that the gaps in the spectrum of $\hat{H}_{2}$ cluster close to zero. If we compute the probability of finding a gap in the range $0<s<\epsilon$, where $\epsilon$ is small, we have for the GOE, $\int_{0}^{\epsilon}\frac{\pi s}{2}e^{-\pi s^{2}/4}\,ds=1-e^{-\pi\epsilon^{2}/4}\approx\frac{\pi\epsilon^{2}}{4}-\frac{\pi^{2}\epsilon^{4}}{32}+\dots,$ (15) so the probability is proportional to $\epsilon^{2}$ for small gaps. On the contrary, for the Poisson distribution one finds something much larger, $\int_{0}^{\epsilon}e^{-s}\,ds=1-e^{-\epsilon}\approx\epsilon-\frac{\epsilon^{2}}{2}+\dots,$ (16) giving only linear scaling for small gaps. While both probabilities are of course small, the GOE one is significantly smaller, giving a significantly stronger case for assuming definition 1 is satisfied in a chaotic model. In the case of Poisson statistics, one might expect to find one or many gaps that are essentially zero due to level-attraction. Infinite time averages are theoretical tools in which we average over times significantly longer than the Heisenberg time $\tau_{H}\sim e^{S}$, where $S$ is the thermodynamic entropy at the appropriate energy $E$ Srednicki (1999). The presence of essentially zero gaps leads to terms $e^{i(E_{k}-E_{k-1})t}$ which are stationary on time scales proportional to $\tau_{H}$. Despite the presence of such violators, we expect the set of problematic gaps to be small relative to the total Hilbert space dimension. Since some violations, or cases indistinguishable from violations of definition 1, are likely inevitable, especially for $q>1$, it is instructive to revisit past results keeping in mind that a small number of violations will most likely be present. Below we discuss modifying key results in the field of quantum equilibration theory to accommodate the presence of violations of definition 1; first, the sketch below illustrates the $q=2$ construction and its gap statistics for a random matrix.
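A minimal sketch (ours) of the symmetry-resolved $q=2$ construction of equation 14 applied to a GOE sample, together with the unfolding-free $\langle r\rangle$ test; the matrix size is an arbitrary choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_gap_ratio(E):
    s = np.diff(np.sort(E))
    return np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:]))

def pair_spectrum(E):
    """Symmetry-resolved q = 2 spectrum of H_2 = H x I + I x H (Eq. 14):
    one representative Lambda_{k,l} = E_k + E_l per unordered pair, l > k,
    with the k = l diagonal dropped as in the text."""
    k, l = np.triu_indices(len(E), k=1)
    return E[k] + E[l]

a = rng.normal(size=(400, 400))
E = np.linalg.eigvalsh((a + a.T) / 2)      # GOE spectrum
print(mean_gap_ratio(E))                   # near 4 - 2*sqrt(3) ~ 0.5359
print(mean_gap_ratio(pair_spectrum(E)))    # drifts toward 2 ln 2 - 1 ~ 0.3863
```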
## II Equilibration and recurrence

### II.1 Physical models

In this section we tackle the problem of equilibration in light of our investigation of the higher order no-resonance conditions and the presence of level-attraction. First, let us review a basic setup. Consider a time-independent system with the Hamiltonian $\hat{H}$, where we label the energy eigenbasis as $\hat{H}|E_{k}\rangle=E_{k}|E_{k}\rangle$. For simplicity, we take the spectrum of $\hat{H}$ to be discrete and finite. We will initialize our system in some pure state $|\psi(t=0)\rangle=\sum_{m}c_{m}|E_{m}\rangle.$ (17) To track equilibration, we study properties of the expectation value of an observable $\hat{A}$. This observable is general, but we demand that its largest singular value $||A||$ is independent of system size or saturates to a finite value in the thermodynamic limit. In what follows we will assume our spectrum has level-repulsion, so that we may safely assume $E_{m}=E_{l}\implies m=l.$ (18) If our observable equilibrates, its finite time value $\langle\hat{A}(t)\rangle=\langle\psi(t)|\hat{A}|\psi(t)\rangle$ must relax to its infinite time average value, i.e. $\bar{A}=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\langle\hat{A}(t)\rangle dt=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\sum_{m,n}\bar{c}_{m}c_{n}A_{m,n}e^{i(E_{m}-E_{n})t}dt=\sum_{m}|c_{m}|^{2}A_{m,m}.$ (19) $\bar{A}$ is usually written in terms of the diagonal ensemble $\omega=\sum_{m}|c_{m}|^{2}|E_{m}\rangle\langle E_{m}|$ as $\bar{A}=\operatorname{Tr}\left(\omega\hat{A}\right)$. A typical quantity to study in quantum equilibration is the variance of the expectation value around $\bar{A}$. This was studied and bounded in Short (2011); Reimann and Kastner (2012) assuming that the $q=2$ no-resonance condition was satisfied. The variance is written as $\mu_{2}=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\left(\langle\hat{A}(t)\rangle-\bar{A}\right)^{2}dt.$ (20) It was famously found in Short (2011) that this variance can be bounded by the purity of the diagonal ensemble, $\mu_{2}\leq||A||^{2}\operatorname{Tr}\left(\omega^{2}\right).$ (21) Note that equation 21 holds as a consequence of the $q=2$ no-resonance condition holding. The purity of the diagonal ensemble usually decays exponentially fast with respect to the system size (see for example Fig. 2 in Riddell _et al._ (2022)). If one assumes higher order $q$ no-resonance conditions, it was recently found that, for the higher moments, $\mu_{q}=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\left(\langle\hat{A}(t)\rangle-\bar{A}\right)^{q}dt,$ (22) a similar bound can be found Riddell _et al._ (2022), $|\mu_{q}|\leq\left(q||A||\sqrt{\operatorname{Tr}\left(\omega^{2}\right)}\right)^{q}.$ (23) In light of section I and the presence of level-attraction for higher order $q$, these results should be updated to reflect the high probability of a violation of the $q$ no-resonance condition (a small numerical illustration of the bound in equation 21 is given below).
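The bound in equation 21 is easy to illustrate numerically; the sketch below (ours) does so for a GOE-like Hamiltonian, whose generic spectrum should effectively satisfy the $q=2$ no-resonance condition, with the infinite time average replaced by a long finite-time average.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
h = rng.normal(size=(n, n)); H = (h + h.T) / 2        # GOE-like Hamiltonian
E, V = np.linalg.eigh(H)
a = rng.normal(size=(n, n)); A = (a + a.T) / 2        # Hermitian observable
A /= np.linalg.norm(A, 2)                             # set ||A|| = 1
psi = rng.normal(size=n); psi /= np.linalg.norm(psi)  # initial pure state
c = V.T @ psi                                         # c_m = <E_m|psi>
A_e = V.T @ A @ V                                     # A in the energy basis

Abar = np.sum(np.abs(c)**2 * np.diag(A_e))            # diagonal ensemble, Eq. 19
purity = np.sum(np.abs(c)**4)                         # Tr(omega^2)

def expect_A(t):
    d = c * np.exp(-1j * E * t)                       # coefficients of |psi(t)>
    return np.real(np.conj(d) @ A_e @ d)

ts = np.linspace(0, 5000, 2000)
mu2 = np.mean([(expect_A(t) - Abar)**2 for t in ts])  # finite-T proxy for Eq. 20
print(mu2, purity)                                    # mu2 <= purity (Eq. 21)
```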
###### Theorem 1.

Suppose we have a model that has violations of the $q$ no-resonance condition. Then the moments $\mu_{q}$ can be bounded as $|\mu_{q}|\leq||A||^{q}\left(q^{q}+\frac{\mathcal{N}_{q,L}}{2q}\right)\sqrt{\operatorname{Tr}\left(\omega^{2}\right)}^{q},$ (24) where $\mathcal{N}_{q,L}$ is the maximum number of times any one $E_{m}$ appears in violations of the $q$ no-resonance condition for a given system size $L$. We call the $E_{m}$’s that appear in more than one violation of the no-resonance condition exceptional violators.

###### Proof.

Terms that contribute to $\mu_{q}$ are sums of energies that are equal. Let $\Lambda_{q}$ and $\Lambda_{q}^{\prime}$ be sets of indices corresponding to particular energies, $\sum_{m\in\Lambda_{q}}E_{m}=\sum_{m\in\Lambda_{q}^{\prime}}E_{m}.$ (25) The no-resonance condition picks out the trivial set of energies that satisfy this equality, which is when $\Lambda_{q}=\Lambda_{q}^{\prime}$. These contributions were bounded in Short (2011); Riddell _et al._ (2022). We collect the remaining violations in a set $\mathcal{S}$ and write, $|\mu_{q}|\leq\left(q||A||\sqrt{\operatorname{Tr}\left(\omega^{2}\right)}\right)^{q}+\left|\sum_{\Lambda_{q}\in\mathcal{S}}\prod_{j=1}^{q}\bar{c}_{m_{j}}c_{n_{j}}A_{m_{j},n_{j}}\right|,$ (26) where we have identified each violation $\Lambda_{q}\in\mathcal{S}$ with its index pairs $\{m_{j},n_{j}\}$. The second term can be bounded as follows: $\left|\sum_{\Lambda_{q}\in\mathcal{S}}\prod_{j=1}^{q}\bar{c}_{m_{j}}c_{n_{j}}A_{m_{j},n_{j}}\right|\leq||A||^{q}\sum_{\Lambda_{q}\in\mathcal{S}}\prod_{j=1}^{q}|c_{m_{j}}||c_{n_{j}}|.$ (27) Since all $|c_{m_{j}}|$ are non-negative, we may use the inequality of arithmetic and geometric means, giving $\leq\frac{||A||^{q}}{2q}\sum_{\Lambda_{q}\in\mathcal{S}}\sum_{j=1}^{q}\left(|c_{m_{j}}|^{2q}+|c_{n_{j}}|^{2q}\right).$ (28) We know that $\operatorname{Tr}\left(\omega^{q}\right)=\sum_{m}|c_{m}|^{2q}$. Assuming an individual $|c_{m_{j}}|^{2q}$ contributes at most $\mathcal{N}_{q,L}$ times, we have that $\leq\frac{||A||^{q}\mathcal{N}_{q,L}}{2q}\operatorname{Tr}\left(\omega^{q}\right).$ (29) We lastly recall that $\operatorname{Tr}\left(\omega^{q}\right)\leq\operatorname{Tr}\left(\omega^{2}\right)^{q/2}$, which completes the proof. ∎

Accommodating the presence of degenerate gaps for the $q=2$ case has been considered before in Short and Farrelly (2012). Our bound reads $|\mu_{2}|\leq||A||^{2}\left(1+\frac{\mathcal{N}_{2,L}}{4}\right)\operatorname{Tr}\left(\omega^{2}\right).$ (30) Instead, one can likewise write Short and Farrelly (2012) as $|\mu_{2}|\leq N(\epsilon)||A||^{2}\operatorname{Tr}\left(\omega^{2}\right),$ (31) where $N(\epsilon)$ is the maximum number of energy gaps in any interval of width $\epsilon>0$, i.e. $N(\epsilon)=\max_{E}|\{(k,l)|\enspace E_{k}-E_{l}\in[E,E+\epsilon)\}|.$ (32) One can recover the maximum degeneracy of the gaps by considering $\lim_{\epsilon\to 0^{+}}N(\epsilon)$. In the limit of non-degenerate gaps these bounds are identical, and they only differ by a constant factor for a small number of degeneracies in the gaps. Our result might in theory give better constant factors than the result in Short and Farrelly (2012); however, $N(\epsilon)$ is likely a more intuitive quantity and easier to work with numerically. We next wish to understand the properties of $\mathcal{N}_{q,L}$, which in practice is challenging to study numerically. The worst scaling it could have is the total number of violations, i.e. $0\leq\mathcal{N}_{q,L}\leq|\mathcal{S}|$. As we have noted earlier, the presence of level-attraction does not imply $|\mathcal{S}|>0$. An easy property to understand, however, is that $\mathcal{N}_{q,L}\geq 2$ implies at the very least that $\mathcal{N}_{q+1,L}\geq 1$. To see this, consider $q=2$ and an exceptional violator $E_{m}$ that appears at least twice.
For instance, $E_{m}$ might appear as an exceptional violator via $E_{m}+E_{n}=E_{p}+E_{l},\enspace E_{m}+E_{k}=E_{r}+E_{h}.$ (33) This implies, for $q=3$, the violation of the no-resonance condition $E_{p}+E_{l}+E_{k}=E_{r}+E_{h}+E_{n}.$ (34) Despite two exceptional violations for $q=2$ implying at least one for $q=3$, this does not imply that $\mathcal{N}_{q,L}$ is decreasing in $q$. To get a handle on the size of $\mathcal{N}_{q,L}$ we can attempt to quantify its expected or average behavior. First, let us assume we randomly generate the set $\mathcal{S}$. We will assume that the indices which appear are uniformly generated, so each element of $\mathcal{S}$ can be understood to be a tuple of $2q$ indices, $(m_{1},\dots,m_{2q})$. These indices are not necessarily independent; for example, they cannot be equal to each other under our assumptions. Despite this, in the large $L$ limit this dependence cannot affect the results, due to the smallness of $q$ and the corresponding exponential number of possible indices $2^{L}$. We can therefore focus on the first index of each tuple, $m_{1}$. Our goal will be to predict the average number of times $m_{1}$ ends up being the same index. It can at most appear $|\mathcal{S}|$ times, and thus we wish to compute $\langle\mathcal{N}_{q,L}\rangle=\sum_{n=1}^{|\mathcal{S}|}np(n),$ (35) where $p(n)$ is the probability of the same index appearing $n$ times. The total number of configurations possible for the first index of each tuple in $\mathcal{S}$ is $2^{|\mathcal{S}|L}$, and therefore we must simply count the number of configurations where $n$ copies of the same $m_{1}$ appear. This is given by ${|\mathcal{S}|\choose n}2^{L\left(|\mathcal{S}|-n\right)},$ (36) which gives the following formula for our expected value, $\langle\mathcal{N}_{q,L}\rangle=\sum_{n=1}^{|\mathcal{S}|}\frac{n{|\mathcal{S}|\choose n}}{2^{Ln}}=\frac{|\mathcal{S}|}{2^{L}}(2^{-L}+1)^{|\mathcal{S}|-1}.$ (37) We now have some special limiting cases to consider. Suppose that $|\mathcal{S}|\propto c2^{L}$ for some constant $c$. Then the expected value $\langle\mathcal{N}_{q,L}\rangle$ tends to $c\,e^{c}$ as $L$ goes to infinity. However, if $|\mathcal{S}|$ has sub-exponential growth, for example if it scales as $L$, then the expected value goes to zero for large system size as $\mathcal{O}(L/2^{L})=\mathcal{O}(|\mathcal{S}|/2^{L})$. Therefore we expect that in most cases, even with modest violations of the no-resonance condition, $\lim_{L\to\infty}\mathcal{N}_{q,L}$ is finite and quite small (the two limiting behaviors are checked numerically below).
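A quick numerical evaluation of equation 37 (ours) confirms the two stated limits; the constant $c=0.5$ is an arbitrary illustrative choice.

```python
import numpy as np

def expected_N(L, S):
    """Closed form Eq. 37: <N_{q,L}> = |S| 2^{-L} (1 + 2^{-L})^{|S| - 1}."""
    return S * 2.0**-L * (1.0 + 2.0**-L)**(S - 1)

c = 0.5
for L in (12, 18, 24):
    print(L,
          expected_N(L, int(c * 2**L)),   # |S| ~ c 2^L: tends to c e^c
          expected_N(L, L))               # sub-exponential |S|: tends to 0 like L / 2^L
print(c * np.exp(c))                      # limiting value ~ 0.8244
```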
### II.2 A Random Matrix Theory Approach

In this section we show how one can compute $\mu_{q}$ for the GUE and GOE with an unfolded spectrum in the large $N$ limit. We can rewrite equation 22 for finite $T$ as $\mu_{q}(T)=\sum_{i_{1},j_{1},...,i_{q},j_{q}}\left(\prod_{k=1}^{q}c_{i_{k}}\overline{c}_{j_{k}}\langle i_{k}|A|j_{k}\rangle\right)\frac{1}{T}\int_{0}^{T}e^{i\sum_{k=1}^{q}(\lambda_{i_{k}}-\lambda_{j_{k}})t}dt,$ (38) where the eigenvalues are unfolded and drawn from an $N\times N$ GUE or GOE distributed matrix. We define its moments as its expectation value. Define the $n$-level spectral form factor as $\mathcal{K}_{2n}^{\overline{\beta}}(t)=\frac{1}{N^{2n}}\left\langle\sum_{i_{1},j_{1},...,i_{n},j_{n}=1}^{N}e^{it\sum_{k=1}^{n}(\lambda_{i_{k}}-\lambda_{j_{k}})}\right\rangle_{\overline{\beta}},$ where the subscript $\overline{\beta}=1$ or $2$ denotes the GOE and GUE expectation values, respectively. Then we may express the expectation value of $\mu_{q}$ in terms of the $q$-level spectral form factor, $\displaystyle\langle\mu_{q}(T)\rangle_{\overline{\beta}}$ $\displaystyle=\lim_{N\rightarrow\infty}\sum_{i_{1},j_{1},...,i_{q},j_{q}}\left(\prod_{k=1}^{q}c_{i_{k}}\overline{c}_{j_{k}}\langle i_{k}|A|j_{k}\rangle\right)\frac{1}{T}\int_{0}^{T}\langle e^{i\sum_{k=1}^{q}(\lambda_{i_{k}}-\lambda_{j_{k}})t}\rangle_{\overline{\beta}}dt$ (39) $\displaystyle=\left(\frac{1}{T}\int_{0}^{T}\mathcal{K}_{2q}^{\overline{\beta}}(t)dt\right)\sum_{i_{1}\not=j_{1},...,i_{q}\not=j_{q}}\left(\prod_{k=1}^{q}c_{i_{k}}\overline{c}_{j_{k}}\langle i_{k}|A|j_{k}\rangle\right)$ (40) $\displaystyle=\left(\frac{1}{T}\int_{0}^{T}\mathcal{K}_{2q}^{\overline{\beta}}(t)dt\right)\left(\operatorname{Tr}\left(A(\rho-\omega)\right)\right)^{q}.$ (41) It is also worth noting that this equation is so general that it applies to any random matrix ensemble. Usually the GUE and GOE are of interest, but progress has been made in studying the spectral form factor for other matrix ensembles; for example see Forrester (2021a, b); Cipolloni _et al._ (2023). The $q$-level spectral form factor can be computed explicitly, but it is a computationally heavy task; see for example Liu (2018), where it is computed for ensembles that are not unfolded. In particular, for the GOE and GUE, the 2-level spectral form factor of the unfolded spectrum has a well-known explicit formula in the large $N$ limit Mehta (2004). This leads to the following result.

###### Theorem 2.

For any fixed $T$ greater than zero and for both the GOE and GUE, the expectation value of $\mu_{2}(T)$ goes to zero as $1/N^{2}$ in the large $N$ limit. Furthermore, if $T$ goes to infinity at the same rate as $N$ (i.e. $N=T$), then $\mu_{2}(T)$ goes to zero as $1/T$.

###### Proof.

From Mehta (2004); Cipolloni _et al._ (2023), we know that for large $N$ the spectral form factors can be approximated by $\mathcal{K}_{2}^{1}(t)\approx\left\{\begin{array}[]{lr}\frac{4t}{\pi N^{2}}+\frac{2t}{\pi N^{2}}\ln(1+\frac{4t}{\pi N})&\text{if }0\leq t\leq\frac{\pi N}{2}\\ \frac{2}{N}+\frac{2t}{\pi N^{2}}\ln\left(\frac{\frac{4t}{\pi N}+1}{\frac{4t}{\pi N}-1}\right)&\text{if }t\geq\frac{\pi N}{2}\end{array}\right.$ (42) and $\mathcal{K}_{2}^{2}(t)\approx\left\{\begin{array}[]{lr}\frac{2t}{\pi N^{2}}&\text{if }0\leq t\leq\frac{\pi N}{2}\\ \frac{1}{N}&\text{if }t\geq\frac{\pi N}{2}\end{array}\right..$ (43) Clearly, the first part of each piecewise function dominates for large $N$, completing the first claim. Next, set $T=N$. Taking the time averages of the above quantities we get $\frac{1}{T}\int_{0}^{T}\mathcal{K}_{2}^{1}(t)dt\approx\frac{1}{T}\left(\frac{3}{2\,\pi}-{\frac{\pi}{16}\ln\left(1+{\frac{4}{\pi}}\right)}+{\frac{1}{\pi}\ln\left(1+{\frac{4}{\pi}}\right)}+{\frac{3\,\pi}{32}}+{\frac{1}{4}}\right)$ (44) and $\frac{1}{T}\int_{0}^{T}\mathcal{K}_{2}^{2}(t)dt\approx\frac{1}{\pi\,T}.$ (45) This proves the second claim. ∎

As we demonstrate in appendix A, the spectrum of the random matrix Hamiltonian likewise experiences level-attraction for $q\geq 2$. However, despite the presence of level-attraction, the above RMT result indicates that we should still expect $\mu_{q}\to 0$, indicating equilibration on average of our observable. A minimal numerical check of the GUE time average is sketched below.
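The GUE part of the second claim is easy to verify numerically; the following sketch (ours) evaluates the time average of equation 43 with $T=N$ and compares it to equation 45. The value $N=10^{4}$ is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.integrate import quad

def K2_gue(t, N):
    """Large-N unfolded GUE two-level form factor (Eq. 43):
    a linear ramp up to t = pi N / 2, then a plateau at 1/N."""
    return 2 * t / (np.pi * N**2) if t <= np.pi * N / 2 else 1.0 / N

N = T = 10**4                             # let T grow with N, as in Theorem 2
avg = quad(K2_gue, 0, T, args=(N,))[0] / T
print(avg, 1 / (np.pi * T))               # both ~ 3.18e-5, matching Eq. 45
```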
This was demonstrated numerically for both a chaotic spin Hamiltonian and the GOE. The presence of level-attraction leads one to believe that accounting for potential degeneracies or “resonances” in infinite time averages of some dynamical quantities is necessary. We applied this observation to the theory of equilibration, where we generalized known bounds to accommodate degeneracies. Assuming the number of degeneracies is not exponentially large in system size, we demonstrated that the bounds can be easily generalized to accommodate the presence of resonances. We further used techniques from RMT to prove that, for the GOE, moments of equilibration go to zero in the thermodynamic limit. ## IV Acknowledgements J.R. would like to thank Bruno Bertini, Marcos Rigol and Alvaro Alhambra for fruitful conversations. J.R. would like to extend special thanks in particular to Bruno, who gave valuable feedback at various stages of the project. J.R. acknowledges the support of the Royal Society through University Research Fellowship No. 201101. N.J.P. acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC). ## References * Berry (1987) M. V. Berry, Proc. R. Soc. A 413, 183 (1987). * D’Alessio _et al._ (2016) L. D’Alessio, Y. Kafri, A. Polkovnikov, and M. Rigol, Advances in Physics 65, 239 (2016). * Porter (1965) C. E. Porter, _Statistical theories of spectra: fluctuations_, Tech. Rep. (1965). * Brody _et al._ (1981) T. A. Brody, J. Flores, J. B. French, P. Mello, A. Pandey, and S. S. Wong, Reviews of Modern Physics 53, 385 (1981). * Guhr _et al._ (1998) T. Guhr, A. Müller-Groeling, and H. A. Weidenmüller, Physics Reports 299, 189 (1998). * Berry and Tabor (1976) M. V. Berry and M. Tabor, Proc. R. Soc. A 349, 101 (1976). * Berry and Tabor (1977) M. V. Berry and M. Tabor, Proc. R. Soc. A 356, 375 (1977). * Jalabert _et al._ (1990) R. A. Jalabert, H. U. Baranger, and A. D. Stone, Phys. Rev. Lett. 65, 2442 (1990). * Marcus _et al._ (1992) C. M. Marcus, A. J. Rimberg, R. M. Westervelt, P. F. Hopkins, and A. C. Gossard, Phys. Rev. Lett. 69, 506 (1992). * Milner _et al._ (2001) V. Milner, J. L. Hanssen, W. C. Campbell, and M. G. Raizen, Phys. Rev. Lett. 86, 1514 (2001). * Friedman _et al._ (2001) N. Friedman, A. Kaplan, D. Carasso, and N. Davidson, Phys. Rev. Lett. 86, 1518 (2001). * Stöckmann and Stein (1990) H.-J. Stöckmann and J. Stein, Phys. Rev. Lett. 64, 2215 (1990). * Sridhar (1991) S. Sridhar, Phys. Rev. Lett. 67, 785 (1991). * Moore _et al._ (1994) F. L. Moore, J. C. Robinson, C. Bharucha, P. E. Williams, and M. G. Raizen, Phys. Rev. Lett. 73, 2974 (1994). * Steck _et al._ (2001) D. A. Steck, W. H. Oskay, and M. G. Raizen, Science 293, 274 (2001). * Hensinger _et al._ (2001) W. K. Hensinger, H. Häffner, A. Browaeys, N. R. Heckenberg, K. Helmerson, C. McKenzie, G. J. Milburn, W. D. Phillips, S. L. Rolston, H. Rubinsztein-Dunlop, and B. Upcroft, Nature 412, 52 (2001). * Chaudhury _et al._ (2009) S. Chaudhury, A. Smith, B. E. Anderson, S. Ghose, and P. S. Jessen, Nature 461, 768 (2009). * Weinstein _et al._ (2002) Y. S. Weinstein, S. Lloyd, J. Emerson, and D. G. Cory, Phys. Rev. Lett. 89, 157902 (2002). * Zhang _et al._ (2022) Y. Zhang, L. Vidmar, and M. Rigol, Phys. Rev. E 106, 014132 (2022). * Łydżba _et al._ (2021a) P. Łydżba, Y. Zhang, M. Rigol, and L. Vidmar, Phys. Rev. B 104, 214203 (2021a). * Łydżba _et al._ (2021b) P. Łydżba, M. Rigol, and L. Vidmar, Phys. Rev. B 103, 104206 (2021b). * Santos and Rigol (2010a) L. F. Santos and M. Rigol, Phys. Rev. 
E 81, 036206 (2010a). * Santos and Rigol (2010b) L. F. Santos and M. Rigol, Phys. Rev. E 82, 031130 (2010b). * Rigol (2010) M. Rigol, ArXiv e-prints (2010), arXiv:1008.1930 [cond-mat.stat-mech]. * Kollath _et al._ (2010) C. Kollath, G. Roux, G. Biroli, and A. M. Läuchli, Journal of Statistical Mechanics: Theory and Experiment 2010, P08011 (2010). * Santos _et al._ (2012) L. F. Santos, A. Polkovnikov, and M. Rigol, Physical Review E 86 (2012), 10.1103/physreve.86.010102. * Richter _et al._ (2020) J. Richter, A. Dymarsky, R. Steinigeweg, and J. Gemmer, Physical Review E 102 (2020), 10.1103/physreve.102.042127. * Atas _et al._ (2013a) Y. Y. Atas, E. Bogomolny, O. Giraud, and G. Roux, Phys. Rev. Lett. 110, 084101 (2013a). * Atas _et al._ (2013b) Y. Y. Atas, E. Bogomolny, O. Giraud, P. Vivo, and E. Vivo, Journal of Physics A: Mathematical and Theoretical 46, 355204 (2013b). * Šuntajs _et al._ (2020) J. Šuntajs, J. Bonča, T. Prosen, and L. Vidmar, Phys. Rev. E 102, 062144 (2020). * Chan _et al._ (2018a) A. Chan, A. De Luca, and J. Chalker, Phys. Rev. X 8, 041019 (2018a). * D’Alessio and Rigol (2014) L. D’Alessio and M. Rigol, Phys. Rev. X 4, 041048 (2014). * Bertini _et al._ (2021a) B. Bertini, P. Kos, and T. Prosen, Communications in Mathematical Physics 387, 597 (2021a). * Bertini _et al._ (2018) B. Bertini, P. Kos, and T. Prosen, Phys. Rev. Lett. 121, 264101 (2018). * Dyson (1962a) F. J. Dyson, Journal of Mathematical Physics 3, 140 (1962a). * Dyson (1962b) F. J. Dyson, Journal of Mathematical Physics 3, 157 (1962b). * Dyson (1962c) F. J. Dyson, Journal of Mathematical Physics 3, 166 (1962c). * Dyson and Mehta (1963) F. J. Dyson and M. L. Mehta, Journal of Mathematical Physics 4, 701 (1963). * Mehta and Dyson (1963) M. L. Mehta and F. J. Dyson, Journal of Mathematical Physics 4, 713 (1963). * Dyson (1962d) F. J. Dyson, Journal of Mathematical Physics 3, 1199 (1962d). * Bohigas _et al._ (1984) O. Bohigas, M.-J. Giannoni, and C. Schmit, Physical Review Letters 52, 1 (1984). * Bohrdt _et al._ (2017) A. Bohrdt, C. B. Mendl, M. Endres, and M. Knap, New. J. Phys. 19, 063001 (2017). * Andreev _et al._ (1996) A. Andreev, O. Agam, B. Simons, and B. Altshuler, Physical Review Letters 76, 3947 (1996). * Müller _et al._ (2004) S. Müller, S. Heusler, P. Braun, F. Haake, and A. Altland, Physical Review Letters 93, 014103 (2004). * Mehta (2004) M. L. Mehta, _Random matrices_ (Elsevier, 2004). * Bruus and Anglès d’Auriac (1997) H. Bruus and J.-C. Anglès d’Auriac, Phys. Rev. B 55, 9142 (1997). * Mehta (1960) M. L. Mehta, Nuclear Physics 18, 395 (1960). * Jimbo _et al._ (1980) M. Jimbo, T. Miwa, Y. Mori, and M. Sato, Physica D: Nonlinear Phenomena 1, 80 (1980). * Forrester and Witte (2000) P. Forrester and N. Witte, Letters in Mathematical Physics 53, 195 (2000). * Livan _et al._ (2018) G. Livan, M. Novaes, and P. Vivo, _Introduction to Random Matrices: Theory and Practice_ (Springer, 2018). * Alhambra _et al._ (2020) Á. M. Alhambra, J. Riddell, and L. P. García-Pintos, Phys. Rev. Lett. 124, 110605 (2020). * Riddell and Sørensen (2020) J. Riddell and E. S. Sørensen, Phys. Rev. B 101, 024202 (2020). * Gogolin and Eisert (2016) C. Gogolin and J. Eisert, Rep. Prog. Phys. 79, 056001 (2016). * Masanes _et al._ (2013) L. Masanes, A. J. Roncaglia, and A. Acín, Phys. Rev. E 87, 032137 (2013). * Wilming _et al._ (2018) H. Wilming, T. R. de Oliveira, A. J. Short, and J. Eisert, in _Thermodynamics in the Quantum Regime_ (Springer, 2018) pp. 435–455. * Heveling _et al._ (2020) R. Heveling, L. 
Knipschild, and J. Gemmer, Journal of Physics A: Mathematical and Theoretical 53, 375303 (2020). * Campos Venuti and Zanardi (2010) L. Campos Venuti and P. Zanardi, Phys. Rev. A 81 (2010), 10.1103/physreva.81.022113. * Knipschild and Gemmer (2020) L. Knipschild and J. Gemmer, Physical Review E 101 (2020), 10.1103/physreve.101.062205. * Carvalho _et al._ (2023) G. D. Carvalho, L. F. dos Prazeres, P. S. Correia, and T. R. de Oliveira, (2023), arXiv:2305.11985 [quant-ph]. * Riddell _et al._ (2021) J. Riddell, W. Kirkby, D. H. J. O’Dell, and E. S. Sørensen, “Scaling at the OTOC wavefront: Integrable versus chaotic models,” (2021), arXiv:2111.01336 [cond-mat.stat-mech]. * Riddell and Sørensen (2019) J. Riddell and E. S. Sørensen, Physical Review B 99, 054205 (2019). * Yoshida and Yao (2019) B. Yoshida and N. Y. Yao, Phys. Rev. X 9, 011006 (2019). * Fortes _et al._ (2019) E. M. Fortes, I. García-Mata, R. A. Jalabert, and D. A. Wisniacki, Phys. Rev. E 100, 042201 (2019). * Shukla _et al._ (2022) R. K. Shukla, A. Lakshminarayan, and S. K. Mishra, Phys. Rev. B 105, 224307 (2022). * Fortes _et al._ (2020) E. M. Fortes, I. García-Mata, R. A. Jalabert, and D. A. Wisniacki, Europhysics Letters 130, 60001 (2020). * Mark _et al._ (2022) D. K. Mark, J. Choi, A. L. Shaw, M. Endres, and S. Choi, “Benchmarking quantum simulators using quantum chaos,” (2022). * Srednicki (1999) M. Srednicki, Journal of Physics A: Mathematical and General 32, 1163 (1999). * Riddell _et al._ (2022) J. Riddell, N. Pagliaroli, and Á. M. Alhambra, arXiv preprint arXiv:2206.07541 (2022). * Khalkhali and Pagliaroli (2022) M. Khalkhali and N. Pagliaroli, Journal of Mathematical Physics 63, 053504 (2022). * LeBlond _et al._ (2019) T. LeBlond, K. Mallayya, L. Vidmar, and M. Rigol, Phys. Rev. E 100, 062134 (2019). * Łydżba _et al._ (2021c) P. Łydżba, M. Rigol, and L. Vidmar, Phys. Rev. B 103, 104206 (2021c). * Oganesyan and Huse (2007) V. Oganesyan and D. A. Huse, Physical Review B 75, 155111 (2007). * Atas _et al._ (2013c) Y. Atas, E. Bogomolny, O. Giraud, and G. Roux, Physical Review Letters 110, 084101 (2013c). * Short (2011) A. J. Short, New. J. Phys. 13, 053009 (2011). * Bertini _et al._ (2021b) B. Bertini, P. Kos, and T. Prosen, Communications in Mathematical Physics 387, 597 (2021b). * Chan _et al._ (2018b) A. Chan, A. De Luca, and J. T. Chalker, Phys. Rev. X 8, 041019 (2018b). * Reimann and Kastner (2012) P. Reimann and M. Kastner, New Journal of Physics 14, 043020 (2012). * Short and Farrelly (2012) A. J. Short and T. C. Farrelly, New. J. Phys. 14, 013063 (2012). * Forrester (2021a) P. J. Forrester, Journal of Statistical Physics 183, 33 (2021a). * Forrester (2021b) P. J. Forrester, Communications in Mathematical Physics 387, 215 (2021b). * Cipolloni _et al._ (2023) G. Cipolloni, L. Erdős, and D. Schröder, Communications in Mathematical Physics, 1 (2023). * Liu (2018) J. Liu, Physical Review D 98, 086026 (2018). ## Appendix A RMT predictions for $q=2,3$ In this section we demonstrate that random matrix models have level-attraction for $q=2,3$. We accomplish this by simply looking at the ratio test outlined in the main text. First let us define a random matrix Hamiltonian, $\hat{H}=A+A^{T},$ (46) where $A$ is an $N\times N$ matrix whose entries are i.i.d. random numbers drawn from a normal distribution with zero mean and unit variance. 
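A minimal numerical sketch of the ratio test for this model (ours, not part of the original text). The $q=2$ spectrum studied below consists of all pairwise sums $E_{k}+E_{l}$ with $l>k$, so it can be formed directly from the eigenvalues of $\hat{H}$ without diagonalizing the tensor-product Hamiltonian; the benchmarks $\langle r\rangle\approx 0.5307$ (GOE) and $\langle r\rangle=2\ln 2-1\approx 0.3863$ (Poisson) are the standard values from Atas et al. (2013a):

```python
import numpy as np

def mean_gap_ratio(levels):
    """Mean consecutive-gap ratio <r>, with r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1})."""
    E = np.sort(levels)
    s = np.diff(E)
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(1)
N = 400
A = rng.standard_normal((N, N))
H = A + A.T                        # the random matrix Hamiltonian of Eq. (46)
E = np.linalg.eigvalsh(H)

# q = 2: the symmetry-resolved sum spectrum {E_k + E_l : l > k}
ki, li = np.triu_indices(N, 1)
Lam2 = E[ki] + E[li]

print("single spectrum  <r> =", mean_gap_ratio(E))     # GOE:     ~0.5307
print("q=2 sum spectrum <r> =", mean_gap_ratio(Lam2))  # Poisson: ~0.3863
```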
Similar to the physical Hamiltonian, we can study the $q=2$ case by first constructing the Hamiltonian, $\hat{H}_{2}=\hat{H}\otimes\mathbb{I}+\mathbb{I}\otimes\hat{H},$ (47) with the spectrum $\Lambda_{k,l}$. Again this spectrum is symmetric under permutations of the indices, so we resolve this symmetry and only treat eigenvalues with unique $(k,l)$ such that $l>k$. We investigate the spectral properties of this Hamiltonian, after resolving our symmetry, in the left panel of Fig. 4. Here we clearly see agreement with a Poisson distribution: the random matrix model experiences level-attraction. Figure 4: (a) Ratio test for the $q=2$ symmetry resolved random matrix Hamiltonian. Side length is $N=1200$. (b) Ratio test for the $q=3$ symmetry resolved random matrix Hamiltonian. Side length is $N=200$. The construction of a new Hamiltonian for $q=3$ is similar. We have $\hat{H}_{3}=\hat{H}\otimes\mathbb{I}\otimes\mathbb{I}+\mathbb{I}\otimes\hat{H}\otimes\mathbb{I}+\mathbb{I}\otimes\mathbb{I}\otimes\hat{H}.$ (48) This gives us a new spectrum $\Lambda_{k,l,m}=E_{k}+E_{l}+E_{m}$, which is also invariant under permutations of its indices. We resolve this symmetry by considering terms such that $m>l>k$. The result of the ratio test on this new spectrum is given in the right panel of Fig. 4, indicating again level-attraction and agreement with Poisson statistics. This is similarly found for higher values of $q$, which leads us to conjecture that this will be true for all $q\geq 2$. ## Appendix B Physical Hamiltonian $q=3,4$ spectral statistics In this section we provide numerical evidence for level-attraction for $q=3,4$ in the physical Hamiltonian. We repeat the $q=3$ case as was covered in the RMT appendix, and also investigate the $q=4$ statistics. Both cases will be covered with the ratio test, and we will use the same physical model as the main text, where we resolve all relevant symmetries. For the $q=4$ case, we must work with the Hamiltonian, $\hat{H}_{4}=\hat{H}\otimes\mathbb{I}\otimes\mathbb{I}\otimes\mathbb{I}+\mathbb{I}\otimes\hat{H}\otimes\mathbb{I}\otimes\mathbb{I}+\mathbb{I}\otimes\mathbb{I}\otimes\hat{H}\otimes\mathbb{I}+\mathbb{I}\otimes\mathbb{I}\otimes\mathbb{I}\otimes\hat{H}.$ (49) This gives us a new spectrum, again invariant under index permutations. We can resolve this symmetry with a strategy identical to the $q=2,3$ cases, and study the corresponding symmetry-resolved spectrum. The results for $q=3,4$ in the physical Hamiltonian are given in Fig. 5. These results again indicate that the spectrum has level-attraction and obeys Poisson statistics. The left panel of Fig. 5 also serves as evidence that the statistics of the Hamiltonian agree with RMT, as seen in the right panel of Fig. 4. Figure 5: (a) Ratio test for the $q=3$ symmetry resolved physical Hamiltonian. The physical Hamiltonian $\hat{H}$ is generated with $L=16$. (b) Ratio test for the $q=4$ symmetry resolved physical Hamiltonian. The Hamiltonian $\hat{H}$ here has $L=14$.
# On Support Recovery with Sparse CCA: Information Theoretic and Computational Limits Nilanjana Laha<EMAIL_ADDRESS> Department of Biostatistics Harvard University Boston, MA 02115, USA Rajarshi Mukherjee<EMAIL_ADDRESS> Department of Biostatistics Harvard University Boston, MA 02115, USA ###### Abstract In this paper we consider asymptotically exact support recovery in the context of high dimensional and sparse Canonical Correlation Analysis (CCA). Our main results describe four regimes of interest based on information theoretic and computational considerations. In regimes of “low” sparsity we describe a simple, general, and computationally easy method for support recovery, whereas in a regime of “high” sparsity, it turns out that support recovery is information theoretically impossible. As part of our information theoretic lower bounds, our results also demonstrate a non-trivial requirement on the “minimal” size of the non-zero elements of the canonical vectors that is required for asymptotically consistent support recovery. Subsequently, the regime of “moderate” sparsity is further divided into two sub-regimes. In the lower of the two sparsity regimes, using a sharp analysis of a coordinate thresholding (Deshpande and Montanari, 2014) type method, we show that polynomial time support recovery is possible. In contrast, in the higher end of the moderate sparsity regime, appealing to the “Low Degree Polynomial” Conjecture (Kunisky et al., 2019), we provide evidence that polynomial time support recovery methods are inconsistent. Finally, we carry out numerical experiments to compare the efficacy of various methods discussed. Keywords: Sparse Canonical Correlation Analysis, Minimax Support Recovery, Low Degree Polynomials ## 1 Introduction Canonical Correlation Analysis (CCA) is a highly popular technique for performing initial dimension reduction while exploring relationships between two multivariate objects. Due to its natural interpretability and success in finding latent information, CCA has found enthusiasm across a vast canvas of disciplines, which include, but are not limited to, psychology and agriculture, information retrieval (Hardoon et al., 2004; Rasiwasia et al., 2010; Gong et al., 2014), brain-computer interfaces (Bin et al., 2009), neuroimaging (Avants et al., 2010), genomics (Witten et al., 2009), organizational research (Bagozzi, 2011), natural language processing (Dhillon et al., 2011; Faruqui and Dyer, 2014), fMRI data analysis (Friman et al., 2003), computer vision (Kim et al., 2007), and speech recognition (Arora and Livescu, 2013; Wang et al., 2015). Early developments in the theory and applications of CCA have now been well documented in the statistical literature, and we refer the interested reader to Anderson (2003) and references therein for further details. However, the modern surge of interest in CCA, often motivated by data from high throughput biological experiments (Lê Cao et al., 2009; Lee et al., 2011; Waaijenborg et al., 2008), requires re-thinking several aspects of the traditional theory and methods. A natural structural constraint that has gained popularity in this regard is that of sparsity, i.e. the phenomenon that only a small (unknown) collection of variables are related to each other. In order to formally introduce the framework of sparse CCA, we present our statistical set up next. We shall consider $n$ i.i.d. 
samples $(X_{i},Y_{i})\sim\mathbb{P}$ with $X_{i}\in\mathbb{R}^{p}$ and $Y_{i}\in\mathbb{R}^{q}$ being multivariate mean zero random variables with joint variance covariance matrix $\Sigma=\begin{bmatrix}{\Sigma}_{x}&{\Sigma}_{xy}\\ {\Sigma}_{yx}&{\Sigma}_{y}\end{bmatrix}.$ (1) The first canonical correlation $\Lambda_{1}$ is then defined as the maximum possible correlation between two linear combinations of $X$ and $Y$. This definition interprets $\Lambda_{1}$ as the optimal value of the following maximization problem: $\begin{aligned}&\underset{u\in\mathbb{R}^{p},\,v\in\mathbb{R}^{q}}{\mathrm{maximize}}&&u^{T}{\Sigma}_{xy}v\\ &\mathrm{subject~to}&&u^{T}{\Sigma}_{x}u=v^{T}{\Sigma}_{y}v=1.\end{aligned}$ (2) The solutions to (2) are the vectors which maximize the correlation of the projections of $X$ and $Y$ in those respective directions. Higher order canonical correlations can thereafter be defined in a recursive fashion (cf. Anderson, 1999). In particular, for $j\geq 1$, we define the $j^{\rm th}$ canonical correlation $\Lambda_{j}$ and the corresponding directions $u_{j}$ and $v_{j}$ by maximizing (2) with the additional constraint $u^{T}{\Sigma}_{x}u_{l}=v^{T}{\Sigma}_{y}v_{l}=0,\quad 1\leq l\leq j-1.$ (3) As mentioned earlier, in many modern data examples, the sample size $n$ is typically at most comparable to or much smaller than $p$ or $q$ – rendering the classical CCA inconsistent and inadequate without further structural assumptions (Cai et al., 2018; Ma et al., 2020; Bao et al., 2019). The framework of Sparse Canonical Correlation Analysis (SCCA) (Witten et al., 2009; Mai and Zhang, 2019), where the $u_{i}$’s and the $v_{i}$’s are sparse vectors, was subsequently developed to target low dimensional structures (that allow consistent estimation) when $p,q$ are potentially larger than $n$. The corresponding sparse estimates of the leading canonical directions naturally perform variable selection, thereby leading to recovery of their support (Witten et al., 2009; Mai and Zhang, 2019; Waaijenborg et al., 2008; Solari et al., 2019). It is unknown, however, under what settings this naïve method of support recovery, or any other method for that matter, is consistent. The support recovery of the leading canonical directions serves the important purpose of identifying groups of variables which explain the most linear dependence among the high dimensional random objects ($X$ and $Y$) under study – and thereby renders crucial interpretability. Asymptotically optimal support recovery is yet to be explored systematically in the context of SCCA – both theoretically and from the computational viewpoint. In fact, despite the renewed enthusiasm for CCA, both the theoretical and applied communities have mainly focused on the estimation of the leading canonical directions, and relevant scalable algorithms – see e.g. Chen et al. (2013); Gao et al. (2015, 2017); Ma et al. (2020); Mai and Zhang (2019). This paper is motivated by exploring the crucial question of support recovery in the context of SCCA 111In this paper, by support recovery, we refer to the exact recovery of the combined support of the $u_{i}$’s (or the $v_{i}$’s).. 
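For concreteness, at the population level the program (2)–(3) can be solved in closed form by whitening and a singular value decomposition. The following is a minimal sketch of this classical computation (ours, not part of the paper; function and variable names are illustrative):

```python
import numpy as np

def population_cca(Sx, Sxy, Sy):
    """Solve (2)-(3) at the population level: whiten, SVD, map back.
    Returns the canonical correlations Lambda_1 >= Lambda_2 >= ... and
    direction matrices U, V satisfying U^T Sx U = V^T Sy V = I."""
    def inv_sqrt(S):
        # inverse symmetric square root; S assumed symmetric positive definite
        w, Q = np.linalg.eigh(S)
        return Q @ np.diag(w ** -0.5) @ Q.T
    Wx, Wy = inv_sqrt(Sx), inv_sqrt(Sy)
    P, lam, Qt = np.linalg.svd(Wx @ Sxy @ Wy)
    k = lam.size                      # = min(p, q)
    return lam, Wx @ P[:, :k], Wy @ Qt.T[:, :k]
```

The whitened matrix ${\Sigma}_{x}^{-1/2}{\Sigma}_{xy}{\Sigma}_{y}^{-1/2}$ has the $\Lambda_{j}$ as its singular values, and mapping the singular vectors back through ${\Sigma}_{x}^{-1/2}$ and ${\Sigma}_{y}^{-1/2}$ enforces the normalization and orthogonality constraints in (2) and (3).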
The problem of support recovery for SCCA naturally connects to a vast class of variable selection problems (Wainwright, 2009; Amini and Wainwright, 2009; Butucea et al., 2015; Butucea and Stepanova, 2017; Meinshausen and Bühlmann, 2010). The problem closest in terms of complexity turns out to be the sparse PCA (SPCA) problem (Johnstone and Lu, 2009). Support recovery in the latter problem is known to present interesting information theoretic and computational bottlenecks (cf. Krauthgamer et al., 2015; Amini and Wainwright, 2009; Ding et al., 2019; Arous et al., 2020). Moreover, information theoretic and computational issues also arise in the context of the SCCA estimation problem (Chen et al., 2013; Gao et al., 2015, 2017; Mai and Zhang, 2019). In view of the above, it is natural to expect that such information theoretic and computational issues exist in the context of the SCCA support recovery problem as well. However, the techniques used in the SPCA support recovery analysis are not directly applicable to the SCCA problem, which poses additional challenges due to the presence of the high dimensional nuisance parameters ${\Sigma}_{x}$ and ${\Sigma}_{y}$. The main focus of our work is therefore retrieving the complete picture of the information theoretic and computational limitations of SCCA support recovery. Before going into details, we next present a brief summary of our contributions, and defer the discussions of the main subtleties to Section 3. ### 1.1 Summary of Main Results We say a method successfully recovers the support if it achieves exact recovery with probability tending to one uniformly over the sparse parameter spaces defined in Section 2. In the sequel, we denote the cardinality of the combined support of the $u_{i}$’s and the $v_{i}$’s by $s_{x}$ and $s_{y}$, respectively. Thus $s_{x}$ and $s_{y}$ will be our respective sparsity parameters. Our main contributions are listed below. #### 1.1.1 General methodology In Section 3.1, we construct a general algorithm called RecoverSupp, which leads to successful support recovery whenever the latter is information theoretically tractable. This also serves as the first step in creating a polynomial time procedure for recovering the support in one of the difficult regimes of the problem – see e.g. Corollary 17, which shows that RecoverSupp accompanied by a coordinate thresholding type method recovers the support in polynomial time in a regime that requires subtle analysis. Moreover, Theorem 2 shows that the minimal signal strength required by RecoverSupp matches the information theoretic limit whenever the nuisance precision matrices ${\Sigma}_{x}^{-1}$ and ${\Sigma}_{y}^{-1}$ are sufficiently sparse. #### 1.1.2 Information Theoretic and Computational Hardness as a Function of Sparsity As the sparsity level increases, we show that the CCA support recovery problem transitions from being efficiently solvable, to NP hard (conjectured), and to information theoretically impossible. According to this hardness pattern, the sparsity domain can be partitioned into the following three regimes: (i) $s_{x},s_{y}\lesssim\sqrt{n}$, (ii) $\sqrt{n}\lesssim s_{x},s_{y}\lesssim n/\log(p+q)$, and (iii) $s_{x},s_{y}\gtrsim n/\log(p+q)$. We describe below the distinguishing behaviour of these three regimes, which is consistent with the sparse PCA scenario. 
* • We show that when $s_{x},s_{y}\lesssim\sqrt{n/\log(p+q)}$ (“easy regime”), polynomial time support recovery is possible, and well-known consistent estimators of the canonical correlates (Mai and Zhang, 2019; Gao et al., 2017) can be utilized to that end. When $\sqrt{n/\log(p+q)}\lesssim s_{x},s_{y}\lesssim\sqrt{n}$ (“difficult regime”), we show that a coordinate thresholding type algorithm (inspired by Deshpande and Montanari, 2014) succeeds provided $p+q\asymp n$. We call the last regime “difficult” because existing estimation methods like COLAR (Gao et al., 2017) or SCCA (Mai and Zhang, 2019) are yet to be shown to have valid statistical guarantees in this regime – see Section 3.1 and Section 3.4 for more details. * • In Section 3.3, we show that when $\sqrt{n}\lesssim s_{x},s_{y}\lesssim n/\log(p+q)$ (“hard regime”), support recovery is computationally hard subject to the so-called “low degree polynomial conjecture” recently popularized by Hopkins and Steurer (2017); Hopkins (2018); Kunisky et al. (2019). Of course this phenomenon is observable only when $p,q\gtrsim n$, because otherwise the problem would be solvable by ordinary CCA analysis (Bao et al., 2019; Ma and Yang, 2021). Our findings are consistent with the conjectured computational barrier in the context of the SCCA estimation problem (Gao et al., 2017). * • When $s_{x},s_{y}\gtrsim n/\log(p+q)$, we show that support recovery is information theoretically impossible (see Section 3.2). #### 1.1.3 Information Theoretic Hardness as a Function of Minimal Signal Strength In the context of support recovery, the signal strength is quantified by $\texttt{Sig}_{x}=\min_{k\in D(U)}\max_{i\in[r]}|(u_{i})_{k}|\quad\text{and}\quad\texttt{Sig}_{y}=\min_{k\in D(V)}\max_{i\in[r]}|(v_{i})_{k}|,$ where $D(U)$ and $D(V)$ denote the supports defined below. Generally, support recovery algorithms require the signal strength to be above some threshold. As a concrete example, the detailed analyses provided in Amini and Wainwright (2009); Deshpande and Montanari (2014); Krauthgamer et al. (2015) are all based on the non-zero principal component elements being $\pm 1/\sqrt{\rm sparsity}$. To the best of our knowledge, prior to our work, there was no result in the PCA/CCA literature on the information theoretic limit of the minimal signal strength. * • In Section 3.2, we show that $\texttt{Sig}_{x}\gtrsim\sqrt{\log(p-s_{x})/n}$ and $\texttt{Sig}_{y}\gtrsim\sqrt{\log(q-s_{y})/n}$ are necessary requirements for successful support recovery. ### 1.2 Notation For a vector $x\in\mathbb{R}^{p}$, we denote its support by $D(x)=\\{i:x_{i}\neq 0\\}$. We will overload notation, and for a matrix $A\in\mathbb{R}^{p\times q}$, we will denote by $D(A)$ the indexes of the non-zero rows of $A$. By an abuse of notation, sometimes we will refer to $D(A)$ as the support of $A$ as well. When $A\in\mathbb{R}^{p\times q}$ and $\alpha\in\mathbb{R}^{p}$ are unknown parameters, the estimators of their supports will generally be denoted by $\widehat{D}(A)$ and $\widehat{D}(\alpha)$, respectively. We let $\mathbb{N}$ denote the set of all positive integers, and write $\mathbb{Z}$ for the set of all natural numbers $\\{0,1,2,\ldots\\}$. For any $n\in\mathbb{N}$, we let $[n]$ denote the set $\\{1,\ldots,n\\}$. For any finite set $\mathcal{A}$, we denote its cardinality by $|\mathcal{A}|$. Also, we let $1\\{\mathcal{A}\\}$ be the indicator of the event $\mathcal{A}$. We let $\|\cdot\|_{k}$ be the usual $l_{k}$ norm on $\mathbb{R}^{p}$ for $k\in\mathbb{Z}$. 
In particular, we let $\|x\|_{0}$ denote the number of non-zero elements of a vector $x\in\mathbb{R}^{p}$. For any probability measure $\mathbb{P}$ on the Borel sigma field of $\mathbb{R}^{p}$, we take $L_{2}(\mathbb{P})$ to be the set of all measurable functions $f:\mathbb{R}^{p}\mapsto\mathbb{R}$ such that $\|f\|_{L_{2}(\mathbb{P})}=\sqrt{\int f^{2}d\mathbb{P}}<\infty$. The corresponding $L_{2}(\mathbb{P})$ inner product will be denoted by $\langle\cdot,\cdot\rangle_{L_{2}(\mathbb{P})}$. We denote the operator norm and the Frobenius norm of a matrix $A\in\mathbb{R}^{p\times q}$ by $\|A\|_{op}$ and $\|A\|_{F}$, respectively. We let $A_{i*}$ and $A_{j}$ denote the $i$-th row and $j$-th column of $A$, respectively. For $k\in\mathbb{N}$, we define the norms $\|A\|_{k,\infty}=\max_{j\in[q]}\|A_{j}\|_{k}$ and $\|A\|_{\infty,k}=\max_{i\in[p]}\|A_{i*}\|_{k}$. The maximum and minimum eigenvalues of a square matrix $A$ will be denoted respectively by $\Lambda_{max}(A)$ and $\Lambda_{min}(A)$. Also, we let $s(A)$ denote the maximum number of non-zero entries in any column of $A$, i.e. $s(A)=\max_{j\in[q]}\|A_{j}\|_{0}$. The results in this paper are mostly asymptotic (in $n$) in nature and thus require some standard asymptotic notations. If $a_{n}$ and $b_{n}$ are two sequences of real numbers, then $a_{n}\gg b_{n}$ (and $a_{n}\ll b_{n}$) means that ${a_{n}}/{b_{n}}\rightarrow\infty$ (and ${a_{n}}/{b_{n}}\rightarrow 0$) as $n\rightarrow\infty$, respectively. Similarly, $a_{n}\gtrsim b_{n}$ (and $a_{n}\lesssim b_{n}$) means that $\liminf_{n\rightarrow\infty}{{a_{n}}/{b_{n}}}=C$ for some $C\in(0,\infty]$ (and $\limsup_{n\rightarrow\infty}{{a_{n}}/{b_{n}}}=C$ for some $C\in[0,\infty)$). Alternatively, $a_{n}=o(b_{n})$ will also imply $a_{n}\ll b_{n}$, and $a_{n}=O(b_{n})$ will imply that $\limsup_{n\rightarrow\infty}\ a_{n}/b_{n}=C$ for some $C\in[0,\infty)$. We will write $a_{n}=\tilde{\Theta}(b_{n})$ to indicate that $a_{n}$ and $b_{n}$ are asymptotically of the same order up to a poly-log term. Finally, in our mathematical statements, $C$ and $c$ will be two different generic constants which can vary from line to line. ## 2 Mathematical Formalism We denote the rank of ${\Sigma}_{xy}$ by $r$. It can be shown that exactly $r$ canonical correlations are positive and the rest are zero in the model (2). We will consider the matrices $U=[u_{1},\ldots,u_{r}]$ and $V=[v_{1},\ldots,v_{r}]$. From (2) and (3), it is not hard to see that $U^{T}{\Sigma}_{x}U=I_{r}$ and $V^{T}{\Sigma}_{y}V=I_{r}$. The indexes of the nonzero rows of $U$ and $V$, respectively, are the combined supports of the $u_{i}$’s and the $v_{i}$’s. Since we are interested in the recovery of the latter, it will be useful for us to study $U$ and $V$. To that end, we often make use of the following representation connecting ${\Sigma}_{xy}$ to $U$ and $V$ (Anderson, 2003): ${\Sigma}_{xy}={\Sigma}_{x}U\Lambda V^{T}{\Sigma}_{y}=\sum_{i=1}^{r}\Lambda_{i}{\Sigma}_{x}u_{i}v_{i}^{T}{\Sigma}_{y}.$ (4) To keep our results straightforward, we restrict our attention to a particular model $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$ throughout, defined as follows. ###### Definition 1 Suppose $(X,Y)\sim\mathbb{P}$. Let $\mathcal{B}>1$ be a constant. We say $\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$ if * A1 (Sub-Gaussian) $X$ and $Y$ are sub-Gaussian random vectors 222See e.g. Vershynin (2018)., with joint covariance matrix $\Sigma$ as defined in (1). Also $\text{rank}({\Sigma}_{xy})=r$. * A2 Recall the definition of the canonical correlations $\Lambda_{i}$ from (3). 
Note that by definition, $0\leq\Lambda_{r}\leq\ldots\leq\Lambda_{1}$. For $\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$, $\Lambda_{r}$ additionally satisfies $\Lambda_{r}\geq 1/\mathcal{B}$. * A3 (Sparsity) The numbers of nonzero rows of $U$ and $V$ are $s_{x}$ and $s_{y}$, respectively, that is $s_{x}=|\cup_{i=1}^{r}D(u_{i})|$ and $s_{y}=|\cup_{i=1}^{r}D(v_{i})|$. Here $U$ and $V$ are as defined in (4). * A4 (Bounded eigenvalue) $1/\mathcal{B}<\Lambda_{min}({\Sigma}_{x}),\Lambda_{min}({\Sigma}_{y}),\Lambda_{max}({\Sigma}_{x}),\Lambda_{max}({\Sigma}_{y})<\mathcal{B}$. * A5 (Positive eigen-gap) $\Lambda_{i-1}-\Lambda_{i}\geq\mathcal{B}^{-1}$ for $i=2,\ldots,r$. Sometimes we will consider the sub-model of $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$ consisting of its Gaussian members. This model will be denoted by $\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})$, where “$G$” stands for the Gaussian assumption. Some remarks on the modelling assumptions A1–A5 are in order, which we provide next. 1. A1. We begin by noting that we do not require $X$ and $Y$ to be jointly sub-Gaussian. Moreover, the individual sub-Gaussian assumption itself is common in the $s_{x},s_{y}\lesssim\sqrt{n/\log(p+q)}$ regime in the SCCA literature (Mai and Zhang, 2019; Gao et al., 2017). For the sharper analysis in the difficult regime ($\sqrt{n/\log{(p+q)}}\lesssim s_{x},s_{y}\lesssim\sqrt{n}$), our proof techniques require the Gaussian model $\mathcal{P}_{G}$ – which parallels Deshpande and Montanari (2014)’s treatment of sparse PCA in the corresponding difficult regime. In general, the Gaussian spiked model assumption in sparse PCA goes back to Johnstone (2001), and is common in the PCA literature (Amini and Wainwright, 2009; Krauthgamer et al., 2015). 2. A2-A4. These assumptions are standard in the analysis of canonical correlations (Mai and Zhang, 2019; Gao et al., 2017). 3. A5. This assumption concerns the gap between consecutive canonical correlation strengths. We refer to this gap as an “eigengap” because of its similarity with the eigengap in the sparse PCA literature (cf. Deshpande and Montanari, 2014; Janková and van de Geer, 2018). This assumption is necessary for the estimation of the $i$-th canonical covariates. Indeed, if $\Lambda_{i}=\Lambda_{i+1}$ then there is no hope of estimating the $i$-th canonical covariates because they are not identifiable, and so support recovery also becomes infeasible. This assumption can be relaxed to requiring a strict gap only among the first $k$ canonical correlations, where $k\leq r$; in this case, we can recover the support of only the first $k$ canonical covariates. In the following sections, we will denote the preliminary estimators of $U$ and $V$ by $\widehat{U}$ and $\widehat{V}$, respectively. The columns of $\widehat{U}$ and $\widehat{V}$ will be denoted by $\widehat{u}_{n,i}$ and $\widehat{v}_{n,i}$ ($i\in[r]$), respectively. Therefore $\widehat{u}_{n,i}$ and $\widehat{v}_{n,i}$ will stand for the corresponding preliminary estimators of $u_{i}$ and $v_{i}$. In the case of CCA, the $u_{i}$’s and $v_{i}$’s are identifiable only up to a sign flip. Hence, they are also estimable only up to a sign flip. 
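To make the model class concrete, the following is a small synthetic sketch (ours, not part of the paper) of a Gaussian member of $\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})$. For simplicity it takes ${\Sigma}_{x}=I_{p}$ and ${\Sigma}_{y}=I_{q}$, so that the representation (4) reduces to ${\Sigma}_{xy}=U\Lambda V^{T}$ with $U,V$ having orthonormal, row-sparse columns; all names and parameter values are illustrative:

```python
import numpy as np

def make_sparse_cca(p=60, q=50, r=2, s_x=5, s_y=5, lams=(0.9, 0.6), seed=0):
    """Synthetic instance with Sx = I_p, Sy = I_q and Sxy = U Lam V^T (Eq. (4)).
    U (resp. V) has s_x (resp. s_y) nonzero rows and orthonormal columns,
    so U^T Sx U = V^T Sy V = I_r.  Requires s_x, s_y >= r."""
    rng = np.random.default_rng(seed)
    def sparse_orth(dim, s, r):
        B = np.zeros((dim, r))
        rows = rng.choice(dim, size=s, replace=False)
        Q, _ = np.linalg.qr(rng.standard_normal((s, r)))  # s x r, orthonormal cols
        B[rows] = Q
        return B, np.sort(rows)
    U, Dx = sparse_orth(p, s_x, r)
    V, Dy = sparse_orth(q, s_y, r)
    Sxy = U @ np.diag(lams) @ V.T          # singular values < 1 keep Sigma PSD
    Sigma = np.block([[np.eye(p), Sxy], [Sxy.T, np.eye(q)]])
    return Sigma, U, V, Dx, Dy

Sigma, U, V, Dx, Dy = make_sparse_cca()
XY = np.random.default_rng(1).multivariate_normal(np.zeros(110), Sigma, size=500)
X, Y = XY[:, :60], XY[:, 60:]              # n = 500 i.i.d. samples
```

Here $\Lambda_{1}=0.9>\Lambda_{2}=0.6$, so the eigengap assumption A5 holds, and $D_x$, $D_y$ are the true supports $D(U)$, $D(V)$ to be recovered.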
Finally, we denote the empirical estimates of ${\Sigma}_{x}$, ${\Sigma}_{y}$, and ${\Sigma}_{xy}$ by $\widehat{\Sigma}_{n,x}$, $\widehat{\Sigma}_{n,y}$, and $\widehat{\Sigma}_{n,xy}$, respectively – which will often be appended with superscripts to denote their estimation through suitable sub-samples of the data 333e.g. $\widehat{\Sigma}_{n,x}^{(j)}$, $\widehat{\Sigma}_{n,y}^{(j)}$, and $\widehat{\Sigma}_{n,xy}^{(j)}$ will stand for the empirical estimators created from the $j^{th}$ equal split of the data.. Throughout, we let $C_{\mathcal{B}}$ denote a positive constant which depends on $\mathbb{P}$ only through $\mathcal{B}$, but can vary from line to line. ## 3 Main Results We divide our main results into the following parts based on both the statistical and computational difficulties of different regimes. First, in Section 3.1 we present a general method and associated sufficient conditions for support recovery. This allows us to elicit a sequence of questions regarding the necessity of the conditions and the remaining gaps, both from statistical and computational perspectives. Our subsequent sections are devoted to answering these very questions. In particular, in Section 3.2 we discuss information theoretic lower bounds, followed by evidence for statistical-computational gaps in Section 3.3. Finally, we close a final computational gap in the asymptotic regime through a sharp analysis of a special coordinate-thresholding type method in Section 3.4. ### 3.1 A Simple and General Method: We begin with a simple method for estimating the support, which readily establishes the result for the easy regime, and sets the directions for the investigation into other more subtle regimes. Since the estimation of $D(U)$ and $D(V)$ are similar, we focus only on the estimation of $D(V)$ for the time being. Suppose $\widehat{V}$ is a row sparse estimator of $V$. The set of non-zero row indexes of $\widehat{V}$ is the most intuitive estimator of $D(V)$. Such a $\widehat{V}$ is also easily attainable because most estimators of the canonical directions in high dimension are sparse (cf. Chen et al., 2013; Gao et al., 2017; Mai and Zhang, 2019, among others). Although we have not yet been able to show the validity of this apparently “naïve” method, we provide numerical results in Section 4 to explore its finite sample performance. However, a simple method can refine these initial estimators to recover the support $D(V)$, often optimally. We now provide the details of this method and derive its asymptotic properties. To that end, suppose we have at our disposal an estimating procedure for ${\Sigma}_{y}^{-1}$, which we generically denote by $\widehat{\Omega}_{n}$, and an estimator $\widehat{U}\in\mathbb{R}^{p\times r}$ of $U$. We split the sample into two equal parts, and compute $\widehat{U}^{(1)}$ and $\widehat{\Omega}_{n}^{(1)}$ from the first part of the sample, and the estimator $\widehat{\Sigma}_{n,xy}^{(2)}$ from the second part of the sample. Define $\widehat{V}^{clean}=\widehat{\Omega}_{n}^{(1)}\widehat{\Sigma}_{n,yx}^{(2)}\widehat{U}^{(1)}$. Our estimator of $D(V)$ is then given by $\widehat{D}(V):=\\{i\in[q]:|\widehat{V}^{clean}_{ij}|>\texttt{cut}\text{ for some }j\in[r]\\},$ (5) where cut is a pre-specified cut-off or threshold. We will discuss later how to choose cut efficiently. The resulting algorithm, detailed as Algorithm 1 for convenience, will be referred to as RecoverSupp from now on. RecoverSupp is similar in spirit to the “cleaning” step in the sparse PCA support recovery literature (cf. Deshpande and Montanari, 2014). 
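A minimal numerical sketch of this procedure (ours; the paper formalizes it as Algorithm 1 below). The preliminary estimators are passed in as functions, since any method with suitable guarantees (e.g. the SCCA or COLAR estimators cited above for $\widehat{U}$, or a nodewise Lasso for $\widehat{\Omega}_{n}$) can be plugged in:

```python
import numpy as np

def recover_supp(X, Y, U_hat_fn, Omega_hat_fn, cut, r):
    """Sketch of RecoverSupp: sample splitting, cleaning, thresholding.
    U_hat_fn(X, Y, r) -> p x r preliminary estimate of U;
    Omega_hat_fn(Y)   -> q x q preliminary estimate of Sy^{-1}."""
    n = X.shape[0]
    X1, Y1 = X[: n // 2], Y[: n // 2]          # first half: preliminary estimators
    X2, Y2 = X[n // 2 :], Y[n // 2 :]          # second half: cross-covariance
    U1 = U_hat_fn(X1, Y1, r)
    Omega1 = Omega_hat_fn(Y1)
    Syx2 = Y2.T @ X2 / X2.shape[0]             # empirical Sigma_{yx}, q x p
    V_clean = Omega1 @ Syx2 @ U1               # the cleaning step, q x r
    # threshold as in (5): keep rows with a large entry in some column
    return np.where(np.abs(V_clean).max(axis=1) > cut)[0]
```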
Algorithm 1 RecoverSupp $(\widehat{U}^{(1)},\widehat{\Omega}_{n}^{(1)},\widehat{\Sigma}_{n,xy}^{(2)},\texttt{cut},r)$: support recovery of $V$. Input: 1. Preliminary estimators $\widehat{U}^{(1)}$ and $\widehat{\Omega}_{n}^{(1)}$ of $U$ and ${\Sigma}_{y}^{-1}$, respectively, based on the sample $O_{1}=(x_{i},y_{i})_{i=1}^{[n/2]}$. 2. An estimator $\widehat{\Sigma}_{n,xy}^{(2)}$ of ${\Sigma}_{xy}$ based on the sample $O_{2}=(x_{i},y_{i})_{i=[n/2]+1}^{n}$. 3. A threshold level $\texttt{cut}>0$ and rank $r\in\mathbb{N}$. Output: $\widehat{D}(V)$, an estimator of ${D(V)}$. 1. Cleaning: $\widehat{V}^{clean}\leftarrow\widehat{\Omega}_{n}^{(1)}{{\widehat{\Sigma}_{n,yx}}}^{(2)}\widehat{U}^{(1)}$. 2. Thresholding: compute $\widehat{D}(V)$ as in (5). Return: $\widehat{D}(V)$. It turns out that, albeit simple, RecoverSupp has desirable statistical guarantees provided $\widehat{U}^{(1)}$ and $\widehat{\Omega}_{n}^{(1)}$ are reasonable estimators of $U$ and ${\Sigma}_{y}^{-1}$, respectively. These theoretical properties of RecoverSupp, and the hypotheses and queries generated thereof, lay out the roadmap for the rest of our paper. However, before getting into the detailed theoretical analysis of RecoverSupp, we state an $l_{2}$-consistency condition on the $\widehat{u}_{n,i}$’s and $\widehat{v}_{n,i}$’s, where we remind the readers that $\widehat{u}_{n,i}$ and $\widehat{v}_{n,i}$ denote the $i$-th columns of $\widehat{U}$ and $\widehat{V}$, respectively. Recall also that the $i$-th columns of $U$ and $V$ are denoted by $u_{i}$ and $v_{i}$, respectively. ###### Condition 1 ($l_{2}$ consistency) There exists a function $\texttt{Err}\equiv\texttt{Err}:(n,p,q,s_{x},s_{y},\mathcal{B})\mapsto\mathbb{R}$ so that $|\texttt{Err}|<1/(2\mathcal{B}\sqrt{r})$ and the estimators $\widehat{u}_{n,i}$ and $\widehat{v}_{n,i}$ of $u_{i}$ and $v_{i}$ satisfy $\max_{i\in[r]}\min_{w\in\\{\pm 1\\}}\bigg{|}(w\widehat{u}_{n,i}-u_{i})^{T}{\Sigma}_{x}(w\widehat{u}_{n,i}-u_{i})\bigg{|}<\texttt{Err}^{2},$ $\max_{i\in[r]}\min_{w\in\\{\pm 1\\}}\bigg{|}(w\widehat{v}_{n,i}-v_{i})^{T}{\Sigma}_{y}(w\widehat{v}_{n,i}-v_{i})\bigg{|}<\texttt{Err}^{2}$ with $\mathbb{P}$ probability $1-o(1)$ uniformly over $\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$. We will discuss estimators which satisfy Condition 1 later. Theorem 2 also requires the signal strength $\texttt{Sig}_{y}$ to be at least of the order $\epsilon_{n}=\xi_{n}\sqrt{\log(p+q)s({\Sigma}_{y}^{-1})/n}$, where the parameter $\xi_{n}$ depends on the type of $\widehat{\Omega}_{n}$ as follows: * A. $\widehat{\Omega}_{n}$ is of type A if there exists $C_{\text{pre}}>0$ so that $\widehat{\Omega}_{n}$ satisfies $\|\widehat{\Omega}_{n}-{\Sigma}_{y}^{-1}\|_{\infty,1}\leq C_{\text{pre}}s({\Sigma}_{y}^{-1})\sqrt{(\log q)/n}$ with $\mathbb{P}$ probability $1-o(1)$ uniformly over $\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$. Here we remind the readers that $s({\Sigma}_{y}^{-1})=\max_{j\in[q]}\|({\Sigma}_{y}^{-1})_{j}\|_{0}$. In this case, $\xi_{n}=C_{\text{pre}}\sqrt{s({\Sigma}_{y}^{-1})}$. * B. $\widehat{\Omega}_{n}$ is of type B if $\|\widehat{\Omega}_{n}-{\Sigma}_{y}^{-1}\|_{\infty,2}\leq C_{\text{pre}}\sqrt{s({\Sigma}_{y}^{-1})\log(q)/n}$ with $\mathbb{P}$ probability $1-o(1)$ uniformly over $\mathbb{P}\in\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})$ for some $C_{\text{pre}}>0$. In this case, $\xi_{n}=C_{\text{pre}}\max\\{\sqrt{r(\log q)/n},1\\}$. * C. $\widehat{\Omega}_{n}$ is of type C if $\widehat{\Omega}_{n}={\Sigma}_{y}^{-1}$. In this case, $\xi_{n}=1$. 
The estimation error of $\widehat{\Omega}_{n}$ clearly decays from type A to C, with the error being zero at type C. Because $\sqrt{r(\log q)/n}$ is generally much smaller than $s({\Sigma}_{y}^{-1})$, $\xi_{n}$ shrinks monotonically from Case A to Case C as well. Thus it is fair to say that $\xi_{n}$ reflects the precision of the estimator $\widehat{\Omega}_{n}$, in that $\xi_{n}$ is smaller if $\widehat{\Omega}_{n}$ is a sharper estimator. We are now ready to state Theorem 2. This theorem is proved in Appendix B. ###### Theorem 2 Suppose $\log(p\vee q)=o(n)$ and the estimators $\widehat{u}_{n,i}$ satisfy Condition 1. Further suppose $\widehat{\Omega}_{n}$ is of type A, B, or C, as stated above. Let $\epsilon_{n}=\xi_{n}\sqrt{\log(p+q)s({\Sigma}_{y}^{-1})/n}$, where $\xi_{n}$ depends on the type of $\widehat{\Omega}_{n}$ as outlined above. Then there exists a constant $C^{\prime}_{\mathcal{B}}>0$, depending only on $\mathcal{B}>0$, so that if $\texttt{Sig}_{y}>2C_{\mathcal{B}}^{\prime}\epsilon_{n},$ (6) and $\texttt{cut}\in[C_{\mathcal{B}}^{\prime}\epsilon_{n}/(2\mathcal{B}),{(\theta_{n}-1)}C_{\mathcal{B}}^{\prime}\epsilon_{n}/(2\mathcal{B})]$ with $\theta_{n}=\texttt{Sig}_{y}/(C_{\mathcal{B}}^{\prime}\epsilon_{n})$, then the algorithm RecoverSupp fully recovers $D(V)$ with $\mathbb{P}$ probability $1-o(1)$ uniformly over $\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$ (for $\widehat{\Omega}_{n}$ of type A and C), or uniformly over $\mathbb{P}\in\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})$ (for $\widehat{\Omega}_{n}$ of type B). The assumption that $\log p$ and $\log q$ are $o(n)$ appears in all theoretical works on CCA (Gao et al., 2017; Mai and Zhang, 2019). A requirement of this type is generally unavoidable. Note that Theorem 2 implies that a more precise estimator $\widehat{\Omega}_{n}$ requires a smaller signal strength for full support recovery. Before going into subtler implications of Theorem 2, we make two important remarks. ###### Remark 3 Although the estimation of the high dimensional precision matrix ${\Sigma}_{y}^{-1}$ is potentially complicated, it is often unavoidable owing to the inherent subtlety of the CCA framework due to the presence of the high dimensional nuisance parameters ${\Sigma}_{x}$ and ${\Sigma}_{y}$. Chen et al. (2013) also used a precision matrix estimator for partial recovery of the support. In the case of sparse CCA, to the best of our knowledge, there does not exist an algorithm which can recover the support, partially or completely, without estimating the precision matrix. However, our requirements on $\widehat{\Omega}_{n}$ are not strict, in that many common precision matrix estimators, e.g. the nodewise Lasso (Theorem 2.4, van de Geer et al., 2014), the thresholding estimator (cf. Theorem 1 and Section 2.3, Bickel and Levina, 2008), and the CLIME estimator (Theorem 6, Cai et al., 2011), exhibit the decay rates of types A and B under standard sparsity assumptions on ${\Sigma}_{y}^{-1}$. We will not get into the details of the sparsity requirements on ${\Sigma}_{y}^{-1}$ because they are unrelated to the sparsity of $U$ or $V$, and hence are irrelevant to the primary goal of the current paper. ###### Remark 4 In the easy regime $s_{y}\lesssim\sqrt{n/\log(p+q)}$, estimators satisfying Condition 1 are already available, e.g. COLAR (cf. Theorem 4.2, Gao et al., 2017) or SCCA (cf. Condition C4, Mai and Zhang, 2019). Thus it is easily seen that polynomial time support recovery is possible in the easy regime provided (6) is satisfied. 
Note that $r=O(n/\log(p+q))$ and $s({\Sigma}_{y}^{-1})=O(1)$ are sufficient conditions for the latter in this regime. The implications of Theorem 2 in the context of the sparsity requirements on $D(U)$ and $D(V)$ for full support recovery are somewhat implicit through the assumptions and conditions. However, the restriction on the sparsity is indirectly imposed by two different sources – which we elaborate on now. To keep the interpretations simple, throughout the following discussion, we assume that (a) $r=O(n/\log q)$, (b) $p$ and $q$ are of the same order, and (c) $s_{x}$ and $s_{y}$ are also of the same order. Since we separate the task of estimating the nuisance parameter ${\Sigma}_{y}^{-1}$ from the support recovery of $V$, we also assume that $s({\Sigma}_{y}^{-1})=O(1)$, which reduces the minimal signal strength condition (6) to $\texttt{Sig}_{y}\geq C_{\mathcal{B}}\sqrt{\log(p+q)/n}$. In light of the discussion above, the first source of sparsity restriction is the minimal signal strength condition on $\texttt{Sig}_{y}$ mentioned above. It is easily seen that $\texttt{Sig}_{y}\leq s_{y}^{-1/2}$. Therefore, implicit in Theorem 2 lies the condition $\displaystyle s_{y}\leq\frac{C_{\mathcal{B}}^{2}n}{\log(p+q)}.$ (7) Thus Theorem 2 does not hold for $s_{y}\gg n/\log(p+q)$ even when $s({\Sigma}_{y}^{-1})$ and $r$ are small. This regime requires some attention because in the case of sparse PCA (Amini and Wainwright, 2009) and linear regression (Wainwright, 2009), support recovery at $s\gg n/\log(p-s)$ 444Here and later we will use $s$ to generically denote the sparsity of relevant parameter vectors in parallel problems like Sparse PCA or Sparse Linear Regression. is proven to be information theoretically impossible. However, although a parallel result can be intuited to hold for CCA, the nuances of SCCA support recovery in this regime are yet to be explored in detail. Therefore, the sparsity requirement in (7) raises the question whether support recovery for CCA is at all possible when $s_{y}\gg n/\log(p+q)$, even if ${\Sigma}_{x}$ and ${\Sigma}_{y}$ are known. ###### Question 1 Does there exist any decoder $\widehat{D}$ such that $\sup_{\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})}\mathbb{P}(\widehat{D}(V)\neq D(V))\to 0$ when $s_{y}\gg n/\log(q-s_{y})$? A related question is whether the minimal signal strength requirement (6) is necessary. To the best of our knowledge, there is no formal study on the information theoretic limit of the minimal signal strength even in the context of sparse PCA support recovery. Indeed, as we noted before, the detailed analyses of support recovery for SPCA provided in Amini and Wainwright (2009); Deshpande and Montanari (2014); Krauthgamer et al. (2015) are all based on the non-zero principal component elements being $\pm 1/\sqrt{\rm sparsity}$. Finally, although this question is not directly related to the sparsity conditions, it indeed probes the sharpness of the results in Theorem 2. ###### Question 2 What is the minimum signal strength required for the recovery of $D(V)$? We will discuss Question 1 and Question 2 at greater length in Section 3.2. In particular, Theorem 6(A) shows that there exists $C>0$ so that support recovery at $s_{y}\geq C\mathcal{B}^{-2}n/\log(q-s_{y})$ is indeed information theoretically intractable. On the other hand, in Theorem 6(B), we show that the minimal signal strength has to be of the order $\mathcal{B}\sqrt{\log(q-s_{y})/n}$ for full recovery of $D(V)$. 
Thus when $p\asymp q$, (6) is indeed necessary from an information theoretic perspective. The second source of restriction on the sparsity lies in Condition 1. Condition 1 is an $l_{2}$-consistency condition, which itself carries a sparsity requirement owing to the inherent hardness of the estimation of $U$. Indeed, Theorem 3.3 of Gao et al. (2017) entails that it is impossible to estimate the canonical directions $u_{i}$ consistently if $s_{x}>Cn/(r+\log(ep/s_{x}))$ for some large $C>0$. Hence, Condition 1 indirectly imposes the restriction $s_{x}\lesssim n/\max\\{\log(p/s_{x}),r\\}$. However, when $s_{x}\asymp s_{y}$, $p\asymp q$, and $r=O(1)$, the above restriction is already absorbed into the condition $s_{y}\lesssim\mathcal{B}^{-2}n/\log(q-s_{y})$ elicited in the last paragraph. In fact, there exist consistent estimators of $U$ whenever $s_{x}\lesssim n/\max\\{\log(p/s_{x}),r\\}$ and $s_{y}\lesssim n/\max\\{\log(q/s_{y}),r\\}$ (see Gao et al. (2015) or Section 3 of Gao et al. (2017)). Therefore, in the latter regime, RecoverSupp coupled with the above-mentioned estimators succeeds. In view of the above, it might be tempting to think that Condition 1 does not impose significant additional restrictions. The restriction due to Condition 1, however, is rather subtle and manifests itself through computational challenges. Note that when support recovery is information theoretically possible, the computational hardness of recovery by RecoverSupp will be at least as much as that of the estimation of $U$. Indeed, the estimators of $U$ which work in the regime $s_{x}\asymp n/\log(p/s_{x})$, $s_{y}\asymp n/\log(q/s_{y})$ are not adaptive to the sparsity, and they require a search over exponentially many sets of size $s_{x}$ and $s_{y}$. Furthermore, under $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$, all polynomial time consistent estimators of $U$ in the literature, e.g. COLAR (cf. Theorem 4.2, Gao et al., 2017) or SCCA (cf. Condition C4, Mai and Zhang, 2019), require $s_{x}$, $s_{y}$ to be of the order $\sqrt{n/\log(p+q)}$. In fact, Gao et al. (2017) indicates that estimation of $U$ or $V$ for much larger sparsity will be NP hard. The above raises the question whether RecoverSupp (or any other method) can succeed in polynomial time when $\sqrt{n/\log(p+q)}\ll s_{x},s_{y}\lesssim n/\log(p+q)$. We turn to the landscape of sparse PCA for intuition. Indeed, in the case of sparse PCA, different scenarios are observed in the regime $s\lesssim n/\log p$ depending on whether $\sqrt{n}\ll s\lesssim n/\log p$, or $s\lesssim\sqrt{n}$ (we recall that for SPCA we denote the sparsity of the leading principal component direction generically by $s$). We focus on the sub-regime $\sqrt{n}\ll s\lesssim n/\log p$ first. In this case, both estimation and support recovery for sparse PCA are conjectured to be NP hard, which means no polynomial time method succeeds; see Section 3.3 for more details. The above hints that the regime $s_{x},s_{y}\gg\sqrt{n}$ is NP hard for sparse CCA as well. ###### Question 3 Is there any polynomial time method which can recover the support $D(V)$ when $s_{x},s_{y}\gg\sqrt{n}$? We dedicate Section 3.3 to answering this question. Appealing to recent advances around the low degree polynomial conjecture, we establish computational hardness of the regime $s_{x},s_{y}\gg\sqrt{n}$ (up to a logarithmic factor gap) provided $n\lesssim p,q$. Our results are consistent with Gao et al. (2017)’s findings in the estimation case and cover a broader regime; see Remark 12 for a comparison. 
When the sparsity is of the order $\sqrt{n}$ and $p\asymp n$, however, polynomial time support recovery and estimation are possible in the sparse PCA case. Deshpande and Montanari (2014) showed that a coordinate thresholding type spectral algorithm works in this regime. Thus the following question is immediate. ###### Question 4 Is there any polynomial time method which can recover the support $D(V)$ when $s_{x},s_{y}\in[\sqrt{n/\log(p+q)},\sqrt{n}]$? We give an affirmative answer to Question 4 in Section 3.4, which parallels the observations for sparse PCA. In fact, Corollary 17 shows that when ${\Sigma}_{x}$ and ${\Sigma}_{y}$ are known, $p+q\asymp n$, and $s_{x},s_{y}\lesssim\sqrt{n}$, estimation is possible in polynomial time. Since estimation is possible, RecoverSupp suffices for polynomial time support recovery in this regime, where $\sqrt{n}$ is well below the information theoretic limit of $n/\log(p+q)$. The main tool used in Section 3.4 is coordinate thresholding, which is originally a method for high dimensional matrix estimation (Bickel and Levina, 2008), and apparently has nothing to do with the estimation of canonical directions. However, under our set up, if the covariance matrix is consistently estimated in operator norm, then by Wedin’s $\sin\theta$ theorem (Yu et al., 2015), an SVD is enough to get a consistent estimator of $U$ and $V$ suitable for further precise analysis. ###### Remark 5 RecoverSupp uses sample splitting, which can reduce efficiency. One can swap the two halves of the sample and compute two estimators of the supports. One can easily show that both the intersection and the union of the resulting supports enjoy the asymptotic guarantees of Theorem 2. This section can be best summarized by Figure 1, which gives the information theoretic and computational landscape of sparse CCA analysis in terms of the sparsity. It can be seen that our contributions (colored in red) complete the picture, which was initiated by Gao et al. (2017). Figure 1: State of the art for the information theoretic and computational limits of estimation and support recovery in sparse CCA. We have taken $s_{x}=s_{y}$ here. COLAR corresponds to the estimation method of Gao et al. (2017), and CT to coordinate thresholding. Our contributions are colored in red. See Gao et al. (2017) for more details on the regions colored in blue. ### 3.2 Information Theoretic Lower Bounds: Answers to Questions 1 and 2 Theorem 6 establishes the information theoretic limits on the sparsity levels $s_{x}$, $s_{y}$, and the signal strengths $\texttt{Sig}_{x}$ and $\texttt{Sig}_{y}$. The proof of Theorem 6 is deferred to Appendix C. ###### Theorem 6 Suppose $\widehat{D}(U)$ and $\widehat{D}(V)$ are estimators of $D(U)$ and $D(V)$, respectively. Let $s_{x}$, $s_{y}>1$, and $p-s_{x},q-s_{y}>16$. Then the following assertions hold: * A. If $s_{x}>16n/\\{(\mathcal{B}^{2}-1)\log(p-s_{x})\\}$, then $\inf_{\widehat{D}}\sup_{\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})}\mathbb{P}\bigg{(}\widehat{D}(U)\neq D(U)\bigg{)}>1/2.$ On the other hand, if $s_{y}>16n/\\{(\mathcal{B}^{2}-1)\log(q-s_{y})\\}$, then $\inf_{\widehat{D}}\sup_{\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})}\mathbb{P}\bigg{(}\widehat{D}(V)\neq D(V)\bigg{)}>1/2.$ * B. 
Let $\mathcal{P}_{\texttt{Sig}}(r,s_{x},s_{y},\mathcal{B})$ be the class of distributions $\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$ satisfying $\texttt{Sig}^{2}_{x}\leq(\mathcal{B}^{2}-1)(\log(p-s_{x}))/(8n)$. Then $\inf_{\widehat{D}}\sup_{\mathbb{P}\in\mathcal{P}_{\texttt{Sig}}(r,s_{x},s_{y},\mathcal{B})}\mathbb{P}\bigg{(}\widehat{D}(U)\neq D(U)\bigg{)}>1/2.$ On the other hand, if $\texttt{Sig}^{2}_{y}\leq(\mathcal{B}^{2}-1)(\log(q-s_{y}))/(8n)$, then $\inf_{\widehat{D}}\sup_{\mathbb{P}\in\mathcal{P}_{\texttt{Sig}}(r,s_{x},s_{y},\mathcal{B})}\mathbb{P}\bigg{(}\widehat{D}(V)\neq D(V)\bigg{)}>1/2.$ In both cases, the infimum is over all possible decoders $\widehat{D}(U)$ and $\widehat{D}(V)$. First we discuss the implications of part A of Theorem 6. This part entails that for full support recovery of $V$, the minimum sample size requirement is of the order $s_{y}\log(q-s_{y})$. This requirement is consistent with the traditional lower bound on $n$ in the context of support recovery for sparse PCA (Amini and Wainwright, 2009, Theorem 3) and $L_{1}$ regression (cf. Corollary 1 of Wainwright, 2009). However, when $r=O(1)$, the sample size requirement for estimation of $V$ is slightly relaxed, that is, $n\gg s_{y}\log(q/s_{y})$ (cf. Theorem 3.2, Gao et al., 2017). Therefore, from an information theoretic point of view, the task of full support recovery appears to be slightly harder than the task of estimation. The scenario for partial support recovery might be different, and we do not pursue it here. Moreover, as mentioned earlier, in the regime $s_{y}\lesssim C_{\mathcal{B}}n/\log(p+q)$, RecoverSupp works with Gao et al. (2017)’s estimator of $U$ (see Section 3 therein). Thus part A of Theorem 6 implies that $n/\log(p+q)$ is the information theoretic upper bound on the sparsity for full support recovery in sparse CCA. Part B of Theorem 6 implies that it is not possible to push the minimum signal strength below the level $O(\sqrt{\log(q-s_{y})/n})$. Thus the minimal signal strength requirement (6) of Theorem 2 is indeed minimal up to a factor of $\xi_{n}\sqrt{s({\Sigma}_{y}^{-1})}$. The last statement can be refined further. To that end, we remind the readers that for a good estimator of ${\Sigma}_{y}^{-1}$, i.e. a type B estimator, $\xi_{n}=O(1)$ if $r=O(n/\log q)$. However, the latter always holds if support recovery is at all possible, because in that case $s_{y}\lesssim n/\log(p+q)$, and elementary linear algebra gives $s_{y}\geq r$. Thus, it is fair to say that, provided a good estimator of ${\Sigma}_{y}^{-1}$ is available, the requirement (6) is minimal up to a factor of $\sqrt{s({\Sigma}_{y}^{-1})}$. Indeed, this implies that for banded inverses with finite band-width our results are rate optimal. It is further worth comparing this part of the result to the SPCA literature. In the SPCA support recovery literature, generally, the lower bound on the signal strength is depicted in terms of the sparsity $s$, and usually a signal strength of order $O(1/\sqrt{s})$ is postulated (Deshpande and Montanari, 2014; Amini and Wainwright, 2009; Krauthgamer et al., 2015). Using our proof strategies, it can be easily shown that for SPCA, the analogous lower bound on the signal strength would be $\sqrt{\log(p-s)/n}$. The latter is generally much smaller than $1/\sqrt{s}$, and only when $s\asymp n/\log p$ is the requirement of $1/\sqrt{s}$ close to the lower bound. Thus, in the regime $s\lesssim\sqrt{n/\log p}$, the actual lower bound is clearly at most of the order $1/s$. 
Therefore the signal strength requirement of $O(1/\sqrt{s})$ typically assumed in the literature seems much larger than necessary even in the SPCA context.

### 3.3 Computational Limits and Low Degree Polynomials: Answer to Question 3

We have so far explored the information theoretic upper and lower bounds for recovering the true support of the leading canonical correlation directions. However, as indicated in the discussion preceding Question 3, the statistically optimal procedures in the regime where $\sqrt{n}\lesssim s_{x},s_{y}\lesssim n/\log{(p+q)}$ are computationally intensive and of exponential complexity (as a function of $p,q$) in this regime. In particular, Gao et al. (2017) have already shown that when $s_{x}$ and $s_{y}$ belong to parts of this regime, estimation of the canonical correlates is computationally hard, subject to a computational complexity based “Planted Clique Conjecture”. For the case of support recovery, SPCA has been explored in detail and the corresponding computational hardness has been established in analogous regimes; see e.g. Amini and Wainwright (2009); Deshpande and Montanari (2014); Krauthgamer et al. (2015) for details. A similar phenomenon of computational hardness is observed in the case of the SPCA spike detection problem (Berthet and Rigollet, 2013).

In light of the above, it is natural to believe that SCCA support recovery is also computationally hard in the regime $\sqrt{n}\lesssim s_{x},s_{y}\lesssim n/\log{(p+q)}$ and as a result yields a statistical-computational gap. Although several paths exist to provide evidence towards such gaps (e.g. the Planted Clique Conjecture (Berthet and Rigollet, 2013; Gao et al., 2017; Brennan et al., 2018), Statistical Query based lower bounds (Kearns, 1998; Feldman and Kanade, 2012; Brennan et al., 2020; Dudeja and Hsu, 2021), and Overlap Gap Property based analysis (Gamarnik and Zadik, 2017; Gamarnik et al., 2019; Arous et al., 2020)), the recent developments using “Predictions from Low Degree Polynomials” (Hopkins and Steurer, 2017; Hopkins, 2018; Kunisky et al., 2019) are particularly appealing due to their simplicity of exposition. In order to show computational hardness of the SCCA support recovery problem in the $s\in(\sqrt{n},n/\log(p+q))$ regime, we shall resort to this very style of ideas, which has so far been applied successfully to explore statistical-computational gaps under sparse PCA (Ding et al., 2019), Stochastic Block Models, and tensor PCA (Hopkins, 2018), among others. This will allow us to explore the computational hardness of the problem in the entire regime where

$s_{x}+s_{y}\gtrsim\sqrt{n}(\log n)^{c},$ (8)

compared to the somewhat partial results (see Remark 12 for a detailed comparison) in the earlier literature.

We organize our discussion of the statistical-computational gap in this regime as follows. Starting with a brief background on the statistical literature on such gaps, we first present a natural reduction of our problem to a suitable hypothesis testing problem in Section 3.3.1. Subsequently, in Section 3.3.2 we present the main idea of the “low degree polynomial conjecture” by appealing to the recent developments in Hopkins and Steurer (2017); Hopkins (2018); Kunisky et al. (2019). Finally, we present our main result for this regime in Section 3.3.3, thereby providing evidence of the aforementioned gap modulo the Low Degree Polynomial Conjecture presented in Conjecture 8.
#### 3.3.1 Reduction to a Testing Problem

Denote by $\mathbb{Q}$ the distribution of a $N_{p+q}(0,I_{p+q})$ random vector. Therefore $(X,Y)\sim\mathbb{Q}$ corresponds to the case when $X$ and $Y$ are uncorrelated. We first show that support recovery in $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$ is possible only if $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$ is distinguishable from $\mathbb{Q}$, i.e. only if the test of $H_{0}:(X,Y)\sim\mathbb{Q}$ vs $H_{1}:(X,Y)\sim\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$ has asymptotically zero error. To formalize the ideas, suppose we observe i.i.d. random vectors $\\{X_{i},Y_{i}\\}_{i=1}^{n}$ which are distributed either as $\mathbb{P}$ or $\mathbb{Q}$. We denote the $n$-fold product measures corresponding to $\mathbb{P}$ and $\mathbb{Q}$ by $\mathbb{P}_{n}$ and $\mathbb{Q}_{n}$, respectively. Note that if $\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$, then $\mathbb{P}_{n}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})^{n}$. We overload notation and denote the combined samples $\\{X_{i}\\}_{i=1}^{n}$ and $\\{Y_{i}\\}_{i=1}^{n}$ by $\mathbf{X}$ and $\mathbf{Y}$, respectively. In this section, $\mathbf{X}$ and $\mathbf{Y}$ should be viewed as unordered sets.

The test $\Phi_{n}:\mathbb{R}^{pn+qn}\mapsto\\{0,1\\}$ for testing the null $H_{0}:(\mathbf{X},\mathbf{Y})\sim\mathbb{Q}_{n}$ vs the alternative $H_{1}:(\mathbf{X},\mathbf{Y})\sim\mathbb{P}_{n}$ is said to strongly distinguish $\mathbb{P}_{n}$ and $\mathbb{Q}_{n}$ if

$\lim_{n}\mathbb{Q}_{n}(\Phi_{n}(\mathbf{X},\mathbf{Y})=1)+\lim_{n}\mathbb{P}_{n}(\Phi_{n}(\mathbf{X},\mathbf{Y})=0)=0.$

The above implies that both the type I error and the type II error of $\Phi_{n}$ converge to zero as $n\to\infty$. In the case of the composite alternative $H_{1}:(\mathbf{X},\mathbf{Y})\sim\mathbb{P}_{n}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})^{n}$, the test strongly distinguishes $\mathbb{Q}_{n}$ from $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})^{n}$ if

$\liminf_{n\to\infty}\bigg{\\{}\mathbb{Q}_{n}(\Phi_{n}(\mathbf{X},\mathbf{Y})=1)+\sup_{\mathbb{P}_{n}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})^{n}}\mathbb{P}_{n}(\Phi_{n}(\mathbf{X},\mathbf{Y})=0)\bigg{\\}}=0.$

Now we explain how support recovery and the testing framework are connected. Suppose there exist decoders which exactly recover $D(U)$ and $D(V)$ under $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$ for $\mathcal{B}\geq 0$. Then the trivial test, which rejects the null if either of the estimated supports is non-empty, strongly distinguishes $\mathbb{Q}_{n}$ from $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})^{n}$. The above can be formalized as the following lemma.

###### Lemma 7

Suppose there exist polynomial time decoders $\hat{D}_{x}$ and $\hat{D}_{y}$ of $D(U)$ and $D(V)$ so that

$\liminf_{n\to\infty}\sup_{\mathbb{P}_{n}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})^{n}}\mathbb{P}_{n}\bigg{(}\hat{D}_{x}(\mathbf{X},\mathbf{Y})=D(U)\text{ and }\hat{D}_{y}(\mathbf{X},\mathbf{Y})=D(V)\bigg{)}=1.$ (9)

Further assume that $\mathbb{Q}_{n}(\hat{D}_{x}(\mathbf{X},\mathbf{Y})=\emptyset)\to 1$ and $\mathbb{Q}_{n}(\hat{D}_{y}(\mathbf{X},\mathbf{Y})=\emptyset)\to 1$. Then there exists a polynomial time test which strongly distinguishes $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})^{n}$ and $\mathbb{Q}_{n}$.

Thus, if a regime does not allow any polynomial time test for distinguishing $\mathbb{Q}_{n}$ from $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})^{n}$, there can be no polynomial time computable consistent decoder for $D(U)$ and $D(V)$.
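To make the reduction concrete, the following is a minimal Python sketch (all names are ours) of the trivial test described above, together with a Monte Carlo estimate of the combined error that must vanish for strong distinguishability; `decoder_x`, `decoder_y`, `sample_null`, and `sample_alt` are hypothetical callables standing in for the decoders and the two data generating mechanisms.

```python
import numpy as np

def trivial_test(X, Y, decoder_x, decoder_y):
    """The test behind Lemma 7: reject H0 iff either estimated support is non-empty."""
    return int(len(decoder_x(X, Y)) > 0 or len(decoder_y(X, Y)) > 0)

def combined_error(sample_null, sample_alt, test, n_rep=1000):
    """Monte Carlo estimate of (type I error) + (type II error) of `test`,
    the quantity that must converge to zero for strong distinguishability."""
    type_1 = np.mean([test(*sample_null()) for _ in range(n_rep)])
    type_2 = np.mean([1 - test(*sample_alt()) for _ in range(n_rep)])
    return type_1 + type_2

# Usage (with hypothetical decoders dx, dy and samplers draw_Q, draw_P):
# err = combined_error(draw_Q, draw_P, lambda X, Y: trivial_test(X, Y, dx, dy))
```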
Therefore, it suffices to show that there is no polynomial time test which distinguishes $\mathbb{Q}_{n}$ from $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})^{n}$ in the regime $s_{x},s_{y}\gg\sqrt{n}$. To be more explicit, we want to show that if $s_{x},s_{y}\gg\sqrt{n}$, then

$\displaystyle\liminf_{n\to\infty}\bigg{\\{}\mathbb{Q}_{n}(\Phi_{n}(\mathbf{X},\mathbf{Y})=1)+\sup_{\mathbb{P}_{n}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})^{n}}\mathbb{P}_{n}(\Phi_{n}(\mathbf{X},\mathbf{Y})=0)\bigg{\\}}>0$ (10)

for any $\Phi_{n}$ that is computable in polynomial time. The testing problem under concern is commonly known as the CCA detection problem, owing to its alternative formulation as $H_{0}:\Lambda_{1}=0$ vs $H_{1}:\Lambda_{1}>0$. In other words, the test tries to detect if there is any signal in the data. Note that Lemma 7 also implies that detection is an easier problem than support recovery, in that the former is always possible whenever the latter is feasible. The opposite direction may not be true, however, since detection does not reveal much information on the support.

#### 3.3.2 Background on the Low-degree Framework

We shall provide a brief introduction to the low-degree polynomial conjecture, which forms the basis of our analyses here, and refer the interested reader to Hopkins and Steurer (2017); Hopkins (2018); Kunisky et al. (2019) for in-depth discussions on the topic. We will apply this method in the context of the test $H_{0}:(\mathbf{X},\mathbf{Y})\sim\mathbb{Q}_{n}$ vs $H_{1}:(\mathbf{X},\mathbf{Y})\sim\mathbb{P}_{n}$. The low-degree method centers around the likelihood ratio $\mathbb{L}_{n}$, which takes the form $\frac{d\mathbb{P}_{n}}{d\mathbb{Q}_{n}}$ in the above framework. Our key tool here will be the Hermite polynomials, which form a basis system of $L_{2}(\mathbb{Q}_{n})$ (Szegö, 1939). Central to the low-degree approach lies the projection of $\mathbb{L}_{n}$ onto the subspace (of $L_{2}(\mathbb{Q}_{n})$) formed by the Hermite polynomials of degree at most $D_{n}\in\mathbb{N}$. The latter projection, to be denoted by $\mathbb{L}_{n}^{\leq D_{n}}$ from now on, is important because it measures how well polynomials of degree $\leq D_{n}$ can distinguish $\mathbb{P}_{n}$ from $\mathbb{Q}_{n}$. In particular,

$\|\mathbb{L}_{n}^{\leq D_{n}}\|_{L_{2}(\mathbb{Q}_{n})}:=\max_{f\text{ deg }{\leq D_{n}}}\frac{\mathbb{E}_{\mathbb{P}_{n}}[f(\mathbf{X},\mathbf{Y})]}{\sqrt{\mathbb{E}_{\mathbb{Q}_{n}}[f(\mathbf{X},\mathbf{Y})^{2}]}},$ (11)

where the maximization is over polynomials $f:\mathbb{R}^{n(p+q)}\mapsto\mathbb{R}$ of degree at most $D_{n}$ (Ding et al., 2019). The $L_{2}(\mathbb{Q}_{n})$ norm of the un-truncated likelihood ratio $\mathbb{L}_{n}$ has long held an important place in the theory of hypothesis testing, since $\|\mathbb{L}_{n}\|_{L_{2}(\mathbb{Q}_{n})}=O(1)$ implies $\mathbb{P}_{n}$ and $\mathbb{Q}_{n}$ are asymptotically indistinguishable. While the un-truncated likelihood ratio $\mathbb{L}_{n}$ is connected to the existence of _any_ distinguishing test, degree $D_{n}$ projections of $\mathbb{L}_{n}$ are connected to the existence of polynomial time distinguishing tests. The implications of the above heuristics are made precise by the following conjecture (cf. Hypothesis 2.1.5 of Hopkins, 2018).

###### Conjecture 8 (Informal)

Suppose $t:\mathbb{N}\mapsto\mathbb{N}$.
For “nice” sequences of distributions $\mathbb{P}_{n}$ and $\mathbb{Q}_{n}$, if $\|\mathbb{L}_{n}^{\leq D_{n}}\|_{L_{2}(\mathbb{Q}_{n})}=O(1)$ as $n\to\infty$ whenever $D_{n}\leq t(n)\text{polylog}(n)$, then there is no time-$n^{t(n)}$ test $\Phi_{n}:\mathbb{R}^{n(p+q)}\mapsto\\{0,1\\}$ that strongly distinguishes $\mathbb{P}_{n}$ and $\mathbb{Q}_{n}$.

Thus Conjecture 8 implies that the degree-$D_{n}$ polynomial $\mathbb{L}_{n}^{\leq D_{n}}$ is a proxy for time-$n^{t(n)}$ algorithms (Kunisky et al., 2019). If we can show that $\|\mathbb{L}_{n}^{\leq D_{n}}\|_{L_{2}(\mathbb{Q}_{n})}=O(1)$ for a $D_{n}$ of the order $(\log n)^{1+\epsilon}$ for some $\epsilon>0$, then the low degree conjecture says that no polynomial time test can strongly distinguish $\mathbb{P}_{n}$ and $\mathbb{Q}_{n}$ (Conjecture 1.16 of Kunisky et al., 2019). Conjecture 8 is informal in the sense that we do not specify the “nice” distributions, which are defined in Section 4.2.4 of Kunisky et al. (2019) (see also Conjecture 2.2.4 of Hopkins, 2018). Niceness requires $\mathbb{P}_{n}$ to be sufficiently symmetric, which is generally guaranteed by naturally occurring high dimensional problems like ours. The condition of “niceness” is intended to eliminate pathological cases where the testing can be made easier by methods like Gaussian elimination. See Hopkins (2018) for more details.

#### 3.3.3 Main Result

Similar to Ding et al. (2019), we will consider a Bayesian framework. It might not be immediately clear how a Bayesian formulation fits into the low-degree framework and leads to (10). However, the connection will be clear soon. We put independent Rademacher priors $\pi_{x}$ and $\pi_{y}$ on $\alpha$ and $\beta$. We say $\alpha\sim\pi_{x}$ if $\alpha_{1},\ldots,\alpha_{p}$ are i.i.d., and for each $i\in[p]$,

$\displaystyle\alpha_{i}=\begin{cases}1/\sqrt{s_{x}}&w.p.\quad s_{x}/(2p)\\\ -1/\sqrt{s_{x}}&w.p.\quad s_{x}/(2p)\\\ 0&w.p.\quad 1-s_{x}/p.\end{cases}$ (12)

The Rademacher prior $\pi_{y}$ can be defined similarly. We will denote the product measure $\pi_{x}\times\pi_{y}$ by $\pi$. Let us define

$\displaystyle\Sigma(\alpha,\beta,\rho)=\begin{bmatrix}I_{p}&\rho\alpha\beta^{T}\\\ \rho\beta\alpha^{T}&I_{q}\end{bmatrix},\quad\alpha\in\mathbb{R}^{p},\ \beta\in\mathbb{R}^{q},\ \rho>0.$ (13)

When $\rho\|\alpha\|_{2}\|\beta\|_{2}<1$, $\Sigma(\alpha,\beta,\rho)$ is the covariance matrix corresponding to $X\sim N_{p}(0,I_{p})$ and $Y\sim N_{q}(0,I_{q})$ with covariance $\text{cov}(X,Y)=\rho\alpha\beta^{T}$. Hence, $\|\alpha\|_{2}\|\beta\|_{2}<1/\rho$ is a sufficient condition for $\Sigma(\alpha,\beta,\rho)$ to be positive definite. The priors $\pi_{x}$ and $\pi_{y}$ put positive weight on $\alpha$ and $\beta$ that do not lead to a positive definite $\Sigma(\alpha,\beta,\rho)$, and hence call for extra care during the low-degree analysis. This subtlety is absent in the sparse PCA analogue (Ding et al., 2019). Let us define

$\mathbb{P}_{\alpha,\beta}=\begin{cases}N(0,\Sigma(\alpha,\beta,1/\mathcal{B}))&\text{when }\|\alpha\|_{2}\|\beta\|_{2}<\mathcal{B}\\\ \mathbb{Q}&\text{o.w.}\end{cases}$ (14)

We denote the $n$-fold product measure corresponding to $\mathbb{P}_{\alpha,\beta}$ by $\mathbb{P}_{n,\alpha,\beta}$. If $(\mathbf{X},\mathbf{Y})\mid\alpha,\beta\sim\mathbb{P}_{n,\alpha,\beta}$, then the marginal density of $(\mathbf{X},\mathbf{Y})$ is $\mathbb{E}_{\alpha\sim\pi_{x},\beta\sim\pi_{y}}d\mathbb{P}_{n,\alpha,\beta}$.
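For concreteness, the prior (12) and the construction (13)-(14) are straightforward to simulate. The following is a minimal numpy sketch (variable names are ours); the fallback to $\mathbb{Q}$ mirrors the definition of $\mathbb{P}_{\alpha,\beta}$ in (14).

```python
import numpy as np

def sample_rademacher_prior(dim, s, rng):
    """Draw a vector from the prior (12): entries are +/- 1/sqrt(s)
    with probability s/(2*dim) each, and 0 otherwise."""
    u = rng.random(dim)
    vec = np.zeros(dim)
    vec[u < s / (2 * dim)] = 1 / np.sqrt(s)
    vec[(u >= s / (2 * dim)) & (u < s / dim)] = -1 / np.sqrt(s)
    return vec

def joint_covariance(alpha, beta, rho):
    """Sigma(alpha, beta, rho) of (13)."""
    p, q = len(alpha), len(beta)
    return np.block([[np.eye(p), rho * np.outer(alpha, beta)],
                     [rho * np.outer(beta, alpha), np.eye(q)]])

rng = np.random.default_rng(0)
p, q, s_x, s_y, B = 200, 200, 20, 20, 4.0
alpha = sample_rademacher_prior(p, s_x, rng)
beta = sample_rademacher_prior(q, s_y, rng)
# (14): fall back to Q = N(0, I_{p+q}) when positive definiteness may fail.
if np.linalg.norm(alpha) * np.linalg.norm(beta) < B:
    Sigma = joint_covariance(alpha, beta, 1 / B)
else:
    Sigma = np.eye(p + q)
sample = rng.multivariate_normal(np.zeros(p + q), Sigma, size=100)
```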
The following lemma, which is proved in Appendix G.3, explains how the Bayesian framework is connected to (10).

###### Lemma 9

Suppose $\mathcal{B}>2$ and $s_{x},s_{y}\to\infty$. Then

$\liminf_{n}\sup_{\mathbb{P}_{n}\in\mathcal{P}_{G}(r,2s_{x},2s_{y},\mathcal{B})^{n}}\mathbb{P}_{n}\Big{(}\Phi_{n}(\mathbf{X},\mathbf{Y})=0\Big{)}\geq\liminf_{n}\mathbb{E}_{\pi}\mathbb{P}_{n,\alpha,\beta}\Big{(}\Phi_{n}(\mathbf{X},\mathbf{Y})=0\Big{)},$

where $\mathbb{E}_{\pi}$ is the shorthand for $\mathbb{E}_{\alpha\sim\pi_{x},\beta\sim\pi_{y}}$.

Note that a similar result holds for $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$ as well because $\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})\subset\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$. Lemma 9 implies that to show (10), it suffices to show that a polynomial time computable $\Phi_{n}$ fails to strongly distinguish the marginal distribution of $\mathbf{X}$ and $\mathbf{Y}$ from $\mathbb{Q}_{n}$. However, the latter falls within the realm of the low degree framework. To see this, note that the corresponding likelihood ratio takes the form

$\mathbb{L}_{n}=\frac{\mathbb{E}_{\alpha\sim\pi_{x},\beta\sim\pi_{y}}d\mathbb{P}_{n,\alpha,\beta}}{d\mathbb{Q}_{n}(\mathbf{X},\mathbf{Y})}.$ (15)

If we can show that $\|\mathbb{L}_{n}^{\leq D_{n}}\|^{2}_{L_{2}(\mathbb{Q}_{n})}=O(1)$ for some $D_{n}=O(\log n)$, then Conjecture 8 would indicate that an $n^{\tilde{\Theta}(D_{n})}$-time computable $\Phi_{n}$ fails to distinguish the marginal distribution $\mathbb{E}_{\alpha\sim\pi_{x},\beta\sim\pi_{y}}d\mathbb{P}_{n,\alpha,\beta}$ from $\mathbb{Q}_{n}$. Theorem 10 accomplishes the above under some additional conditions on $p$, $q$, and $n$, which we will discuss shortly. Theorem 10 is proved in Appendix D.

###### Theorem 10

Suppose $D_{n}\leq\min(\sqrt{p},\sqrt{q},n)$,

$s_{x},s_{y}\geq\sqrt{enD_{n}}/\mathcal{B}\quad\text{and}\quad p,q\geq 3en/\mathcal{B}^{2}.$ (16)

Then $\|\mathbb{L}_{n}^{\leq D_{n}}\|^{2}_{L_{2}(\mathbb{Q}_{n})}$ is $O(1)$, where $\mathbb{L}_{n}$ is as defined in (15).

The following corollary results from combining Lemma 9 with Theorem 10.

###### Corollary 11

Suppose

$s_{x},s_{y}\geq 2\sqrt{enD_{n}}/\mathcal{B}\quad\text{and}\quad p,q\geq 3en/\mathcal{B}^{2}.$ (17)

If Conjecture 8 is true, then for $D_{n}\leq\min(\sqrt{p},\sqrt{q},n)$, there is no time-$n^{\tilde{\Theta}(D_{n})}$ test that strongly distinguishes $\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})^{n}$ and $\mathbb{Q}_{n}$.

Corollary 11 conjectures that polynomial time algorithms cannot strongly distinguish $\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})^{n}$ and $\mathbb{Q}_{n}$ provided $s_{x},s_{y}$, $p$, and $q$ satisfy (17). Therefore, under (17), Lemma 7 suggests that support recovery is NP hard. Now we briefly discuss condition (17). The first constraint in (17) is expected because it ensures $s_{x},s_{y}\gg\sqrt{n}$, which indicates that the sparsity is in the hard regime. We also need to explain why the other constraint $p,q>3en/\mathcal{B}^{2}$ is needed. If $n\gg p,q$, the sample canonical correlations are consistent, and therefore strong separation is possible in polynomial time without any restriction on the sparsity (Bao et al., 2019; Ma and Yang, 2021). Even if $p/n\to c_{1}\in(0,1)$ and $q/n\to c_{2}\in(0,1)$, strong separation is still possible in model (13) provided the canonical correlation $\rho$ is larger than some threshold depending on $c_{1}$ and $c_{2}$ (Bao et al., 2019).
The restriction $p,q>3en/\mathcal{B}^{2}$ ensures that the problem is hard enough so that vanilla CCA does not lead to successful detection. The constant $3e$ is not sharp and can possibly be improved. The necessity of the condition $p,q\gtrsim n/\mathcal{B}^{2}$ is unknown for support recovery, however. Since support recovery is a harder problem than detection, in the hard regime, polynomial time support recovery algorithms may fail under a weaker condition on $n$, $p$, and $q$.

###### Remark 12 (Comparison with previous work)

As mentioned earlier, Gao et al. (2017) were the first to discover the existence of a computational gap in the context of sparse CCA. In their seminal work, Gao et al. (2017) established the computational hardness of the sparse CCA estimation problem in a particular subregime of $s_{x},s_{y}\gg\sqrt{n}/(\mathcal{B}\sqrt{\log(p+q)})$ provided $\mathcal{B}\to\infty$ is allowed. In view of the above, it was hinted that sparse CCA becomes computationally hard when $s_{x},s_{y}\gg\sqrt{n}/(\mathcal{B}\sqrt{\log(p+q)})$. However, when $\mathcal{B}$ is bounded, the entire regime $s_{x},s_{y}\gg\sqrt{n}/(\mathcal{B}\sqrt{\log(p+q)})$ is probably not computationally hard. In Section 3.4, we show that if $p+q\asymp n$, then both polynomial time estimation and support recovery are possible even if $s_{x}+s_{y}\lesssim\sqrt{n}$, at least in the known ${\Sigma}_{x}$ and ${\Sigma}_{y}$ case. The latter sparsity regime can be considerably larger than $s_{x},s_{y}\lesssim\sqrt{n/\log(p+q)}$. Together, Section 3.4 and the current section indicate that in the bounded $\mathcal{B}$ case, the transition of computational hardness for sparse CCA probably happens at the sparsity level $\sqrt{n}$, not $\sqrt{n/\log(p+q)}$, which is consistent with sparse PCA. Also, the low-degree polynomial conjecture allowed us to explore almost the entire targeted regime $s_{x},s_{y}\gg\sqrt{n}$, whereas Gao et al. (2017), who used the planted clique conjecture, considered only a subregime of $s_{x},s_{y}\gg\sqrt{n}/(\mathcal{B}\sqrt{\log(p+q)})$.

### 3.4 A Polynomial Time Algorithm for the $\sqrt{n/\log{(p+q)}}\ll s_{x},s_{y}\ll\sqrt{n}$ Regime: Answer to Question 4

In this subsection, we show that in the difficult regime $s_{x}+s_{y}\in[\sqrt{n/\log(p+q)},\sqrt{n}]$, using a soft coordinate thresholding (CT) type algorithm, we can estimate the canonical directions consistently for our purpose when $p+q\asymp n$. CT was introduced by the seminal work of Bickel and Levina (2008) for the purpose of estimating high dimensional covariance matrices. For SPCA, Deshpande and Montanari (2014)'s CT is the only algorithm which provably recovers the full support in the difficult regime (Krauthgamer et al., 2015). In the context of CCA, Chen et al. (2013) use CT for partial support recovery in the rank one model under what we referred to as the easy regime. However, Chen et al. (2013)'s main goal was the estimation of the leading canonical vectors, not support recovery. As a result, Chen et al. (2013) detect the support of the relatively large elements of the leading canonical directions, which are subsequently used to obtain consistent preliminary estimators of the leading canonical directions. Our thresholding level and theoretical analysis are different from those of Chen et al. (2013) because the analytical tools used in the easy regime do not work in the difficult regime.
#### 3.4.1 Methodology: Estimation via CT

By thresholding a matrix $A$ coordinate-wise, we will roughly mean the process of assigning the value zero to any element of $A$ which is below a certain threshold in absolute value. Similar to Deshpande and Montanari (2014), we will consider the soft thresholding operator, which, at threshold level $t$, takes the form

$\eta(x,t)=\begin{cases}x-t&x>t\\\ 0&|x|\leq t\\\ x+t&x<-t.\end{cases}$

It is worth noting that the soft thresholding operator $x\mapsto\eta(x,t)$ is continuous.

Algorithm 2 Coordinate Thresholding (CT) for CCA

Input:
1. Sample covariance matrices $\widehat{\Sigma}_{n,xy}^{(1)}$ and $\widehat{\Sigma}_{n,xy}^{(2)}$ based on the samples $O_{1}=(x_{i},y_{i})_{i=1}^{[n/2]}$ and $O_{2}=(x_{i},y_{i})_{i=[n/2]+1}^{n}$, respectively.
2. Variances ${\Sigma}_{x}$ and ${\Sigma}_{y}$.
3. Parameters Thr and cut.
4. $r$, i.e. the rank of ${\Sigma}_{xy}$.

Output: $\widehat{D}(V)$.

1. Peeling: calculate $\tilde{\Sigma}_{xy}={\Sigma}_{x}^{-1}\widehat{\Sigma}_{n,xy}^{(1)}{\Sigma}_{y}^{-1}$.
2. Threshold: perform soft thresholding $x\mapsto\eta(x;\texttt{Thr}/\sqrt{n})$ entrywise on $\tilde{\Sigma}_{xy}$ to obtain the thresholded matrix $\eta(\tilde{\Sigma}_{xy})$.
3. Sandwich: $\eta(\tilde{\Sigma}_{xy})\mapsto{\Sigma}_{x}^{1/2}\eta(\tilde{\Sigma}_{xy}){\Sigma}_{y}^{1/2}$.
4. SVD: find $\widehat{U}_{pre}$, the matrix of the leading $r$ singular vectors of ${\Sigma}_{x}^{1/2}\eta(\tilde{\Sigma}_{xy}){\Sigma}_{y}^{1/2}$.
5. Premultiply: set $\widehat{U}^{(1)}={\Sigma}_{x}^{-1/2}\widehat{U}_{pre}$.

Return: RecoverSupp $(\widehat{U}^{(1)},\texttt{cut},{\Sigma}_{y}^{-1},\widehat{\Sigma}_{n,xy}^{(2)},r)$, where RecoverSupp is given by Algorithm 1.

We will also assume that the covariance matrices ${\Sigma}_{x}$ and ${\Sigma}_{y}$ are known. To understand the difficulty of unknown ${\Sigma}_{x}$ and ${\Sigma}_{y}$, we remind the readers that ${\Sigma}_{xy}={\Sigma}_{x}U\Lambda V^{T}{\Sigma}_{y}$. Because the matrices $U$ and $V$ are sandwiched by the matrices ${\Sigma}_{x}$ and ${\Sigma}_{y}$, their sparsity pattern does not get reflected in the sparsity pattern of ${\Sigma}_{xy}$. Therefore, if one blindly applies CT to $\widehat{\Sigma}_{n,xy}$, they can at best hope to recover the sparsity pattern of the outer matrices ${\Sigma}_{x}$ and ${\Sigma}_{y}$. If the supports of the matrices $U$ and $V$ are of main concern, CT should rather be applied to the matrix $\tilde{\Sigma}_{xy}={\Sigma}_{x}^{-1}\widehat{\Sigma}_{n,xy}{\Sigma}_{y}^{-1}$. If ${\Sigma}_{x}$ and ${\Sigma}_{y}$ are unknown, one needs to efficiently estimate $\tilde{\Sigma}_{xy}$ before the application of CT. Although under certain structural conditions it is possible to find rate optimal estimators $\widehat{\Sigma}_{n,x}^{-1}$ and $\widehat{\Sigma}_{n,y}^{-1}$ of ${\Sigma}_{x}^{-1}$ and ${\Sigma}_{y}^{-1}$, at least in theory, the errors $\|(\widehat{\Sigma}_{n,x}^{-1}-{\Sigma}_{x}^{-1})\widehat{\Sigma}_{n,xy}{\Sigma}_{y}^{-1}\|_{op}$ and $\|{\Sigma}_{x}^{-1}\widehat{\Sigma}_{n,xy}(\widehat{\Sigma}_{n,y}^{-1}-{\Sigma}_{y}^{-1})\|_{op}$ may still blow up due to the presence of the high dimensional matrix $\widehat{\Sigma}_{n,xy}$, which can be as big as $O(\sqrt{(p+q)/n})$ in operator norm. One may be tempted to replace $\widehat{\Sigma}_{n,xy}$ by a sparse estimator of ${\Sigma}_{xy}$ to facilitate faster estimation, but that does not work because we explicitly require the formulation of $\widehat{\Sigma}_{n,xy}$ as the sum of Wishart matrices (see equation 28 in the proof).
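To make the pipeline concrete, here is a minimal numpy sketch of steps 1-5 of Algorithm 2, assuming ${\Sigma}_{x}$ and ${\Sigma}_{y}$ are known and positive definite; the function names are ours, the theoretical choice of Thr is deferred to Theorem 14 in Section 3.4.2, and the final RecoverSupp call (Algorithm 1) is not reproduced here.

```python
import numpy as np

def soft_threshold(A, t):
    """Entrywise soft thresholding eta(., t)."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def sym_power(S, power):
    """Fractional power of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * w ** power) @ V.T

def ct_preliminary_estimator(Sig_hat_xy, Sig_x, Sig_y, thr, n, r):
    """Steps 1-5 of Algorithm 2; returns the preliminary estimator U_hat^(1)."""
    # Step 1 (peeling): Sigma_tilde = Sigma_x^{-1} Sigma_hat_xy Sigma_y^{-1}.
    S_tilde = sym_power(Sig_x, -1.0) @ Sig_hat_xy @ sym_power(Sig_y, -1.0)
    # Step 2 (threshold): entrywise soft thresholding at level Thr / sqrt(n).
    S_eta = soft_threshold(S_tilde, thr / np.sqrt(n))
    # Step 3 (sandwich): Sigma_x^{1/2} eta(Sigma_tilde) Sigma_y^{1/2}.
    M = sym_power(Sig_x, 0.5) @ S_eta @ sym_power(Sig_y, 0.5)
    # Step 4 (SVD): leading r left singular vectors.
    U_pre, _, _ = np.linalg.svd(M)
    # Step 5 (premultiply): U_hat^(1) = Sigma_x^{-1/2} U_pre.
    return sym_power(Sig_x, -0.5) @ U_pre[:, :r]
```

The returned matrix is then passed to RecoverSupp together with cut, ${\Sigma}_{y}^{-1}$, and the second-half covariance $\widehat{\Sigma}_{n,xy}^{(2)}$, exactly as in the Return step of Algorithm 2.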
The sum-of-Wishart representation just mentioned, which is critical for the sharp analysis, may not be preserved by a CLIME (Cai et al., 2011) or nodewise Lasso estimator (van de Geer et al., 2014) of ${\Sigma}_{xy}$. We remark in passing that it is possible to obtain an estimator $\widehat{A}$ so that $|\widehat{A}-\tilde{\Sigma}_{xy}|_{\infty}=o_{p}(1)$. Although the latter does not provide much control over the operator norm of $\widehat{A}-\tilde{\Sigma}_{xy}$, it is sufficient for partial support recovery, e.g. the recovery of the rows of $U$ or $V$ with the strongest signals. (See Appendix B of Chen et al., 2013, for example, for some results in this direction under the easy regime when $r=1$.)

As indicated by the previous paragraph, we apply coordinate thresholding to the matrix $\tilde{\Sigma}_{xy}={\Sigma}_{x}^{-1}\widehat{\Sigma}_{n,xy}{\Sigma}_{y}^{-1}$, which directly targets the matrix ${\Sigma}_{x}^{-1}{\Sigma}_{xy}{\Sigma}_{y}^{-1}=U\Lambda V^{T}$. We call this step the peeling step because it extracts the matrix $\tilde{\Sigma}_{xy}$ from the sandwiched matrix $\widehat{\Sigma}_{n,xy}={\Sigma}_{x}\tilde{\Sigma}_{xy}{\Sigma}_{y}$. We then perform the entry-wise coordinate thresholding algorithm on the peeled form $\tilde{\Sigma}_{xy}$ with threshold Thr so as to obtain $\eta(\tilde{\Sigma}_{xy};\texttt{Thr}/\sqrt{n})$. We postpone the discussion on Thr to Section 3.4.2. The thresholded matrix is a good estimator of ${\Sigma}_{x}^{-1}{\Sigma}_{xy}{\Sigma}_{y}^{-1}$, but we need an estimator of ${\Sigma}_{x}^{-1/2}{\Sigma}_{xy}{\Sigma}_{y}^{-1/2}$. Therefore, we again sandwich the thresholded matrix between ${\Sigma}_{x}^{1/2}$ and ${\Sigma}_{y}^{1/2}$. The motivation behind this sandwiching is that if $\|\tilde{\Sigma}_{xy}-{\Sigma}_{x}^{-1}{\Sigma}_{xy}{\Sigma}_{y}^{-1}\|_{op}=\epsilon_{n}$, then ${\Sigma}_{x}^{1/2}\tilde{\Sigma}_{xy}{\Sigma}_{y}^{1/2}$ is a good estimator of ${\Sigma}_{x}^{-1/2}{\Sigma}_{xy}{\Sigma}_{y}^{-1/2}$ in that

$\|{\Sigma}_{x}^{1/2}\tilde{\Sigma}_{xy}{\Sigma}_{y}^{1/2}-{\Sigma}_{x}^{-1/2}{\Sigma}_{xy}{\Sigma}_{y}^{-1/2}\|_{op}\leq\sqrt{\|{\Sigma}_{x}\|_{op}\|{\Sigma}_{y}\|_{op}}\epsilon_{n}\leq\mathcal{B}\epsilon_{n}.$

Moreover, ${\Sigma}_{x}^{1/2}U\Lambda V^{T}{\Sigma}_{y}^{1/2}$ is an SVD of ${\Sigma}_{x}^{-1/2}{\Sigma}_{xy}{\Sigma}_{y}^{-1/2}$. Therefore, using the Davis-Kahan $\sin\theta$ theorem (Yu et al., 2015), one can easily see that the SVD of ${\Sigma}_{x}^{1/2}\tilde{\Sigma}_{xy}{\Sigma}_{y}^{1/2}$ produces estimators $\widehat{U}^{\prime}$ and $\widehat{V}^{\prime}$ whose columns are $\epsilon_{n}$-consistent in $l_{2}$ norm for the columns of ${\Sigma}_{x}^{1/2}U$ and ${\Sigma}_{y}^{1/2}V$ up to a sign flip (cf. Theorem 2, Yu et al., 2015). Pre-multiplying the resulting $\widehat{U}^{\prime}$ by ${\Sigma}_{x}^{-1/2}$ yields an estimator $\widehat{U}$ of $U$ up to a sign flip of the columns. We do not worry about the sign flip because Condition 1 allows for sign flips of the columns. Therefore, we feed this $\widehat{U}$ into RecoverSupp as our final step. See Algorithm 2 for more details.

###### Remark 13

In the case of electronic health records data, it is possible to obtain large surrogate datasets on $X$ and $Y$ separately, which might allow relaxing the known precision matrices assumption above. We do not pursue such semi-supervised setups here.

#### 3.4.2 Analysis of the CT Algorithm

For the asymptotic analysis of the CT algorithm, we will assume the underlying distribution to be Gaussian, i.e. $\mathbb{P}\in\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})$.
This Gaussian assumption will be used to perform a crucial decomposition of the sample covariance matrix which typically holds for Gaussian random vectors (see equation 28). Deshpande and Montanari (2014), who used similar devices for obtaining the sharp rate results in SPCA, also required a similar Gaussian assumption. We do not yet know how to extend these results to sub-Gaussian random vectors.

Let us consider the threshold $\texttt{Thr}/\sqrt{n}$, where Thr is explicitly given in Theorem 14. Unfortunately, tuning Thr requires the knowledge of the underlying sparsity levels $s_{x}$ and $s_{y}$. Similar to Deshpande and Montanari (2014), our thresholding level is different from the traditional choice of order $O(\sqrt{\log(p+q)/n})$ in the easy regime (Bickel and Levina, 2008; Cai et al., 2012; Chen et al., 2013). The latter level is too large to successfully recover all the non-zero elements in the difficult regime. We threshold $\tilde{\Sigma}_{xy}$ at a lower level at higher sparsity, which, in turn, complicates the analysis to a greater degree. Our main result in this direction, stated in Theorem 14, is proved in Appendix E.

###### Theorem 14

Suppose $(X_{i},Y_{i})\sim\mathbb{P}\in\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})$. Further suppose $s_{x}+s_{y}<\sqrt{n}$, $\log(p+q)=o(n)$, and $\log n=o(\sqrt{p}\vee\sqrt{q})$. Let $K$ and $C_{1}$ be constants so that $K\geq 1288\mathcal{B}^{4}$ and $C_{1}\geq C\mathcal{B}^{4}$, where $C>0$ is an absolute constant. Suppose the threshold level Thr is defined by

$\displaystyle\texttt{Thr}=\begin{cases}\sqrt{C_{1}\log(p+q)}&\text{ if }\quad(s_{x}+s_{y})^{2}<2^{1/4}(p+q)^{3/4}\text{ (case i)}\\\ \Big{(}K\log(\frac{p+q}{(s_{x}+s_{y})^{2}})\Big{)}^{1/2}&\text{ if }\quad 2^{1/4}(p+q)^{3/4}\leq(s_{x}+s_{y})^{2}\leq(p+q)/e\text{ (case ii)}\\\ 0&\text{ o.w. (case iii).}\end{cases}$

Suppose $c_{\mathcal{B}}$ is a constant that takes the value $C_{1}$, $K$, or one in cases (i), (ii), and (iii), respectively. Then there exists an absolute constant $C>0$ so that the following holds with probability $1-o(1)$ for $\tilde{\Sigma}_{xy}={\Sigma}_{x}^{-1}\widehat{\Sigma}_{n,xy}{\Sigma}_{y}^{-1}$:

$\|\eta(\tilde{\Sigma}_{xy};\texttt{Thr}/\sqrt{n})-{\Sigma}_{x}^{-1}{\Sigma}_{xy}{\Sigma}_{y}^{-1}\|_{op}\leq C\mathcal{B}^{2}\frac{(s_{x}+s_{y})}{\sqrt{n}}\max\bigg{\\{}\bigg{(}c_{\mathcal{B}}\log(\frac{p+q}{(s_{x}+s_{y})^{2}})\bigg{)}^{1/2},1\bigg{\\}}.$

To disentangle the implications of the theorem above, let us assume $p+q\asymp n$ for the time being. Then case (ii) in the theorem corresponds to $n^{3/4}\lesssim(s_{x}+s_{y})^{2}\leq n$. Thus, CT works in the difficult regime provided $p+q\asymp n$. It should be noted that the threshold for this case is almost of the order $O(1/\sqrt{n})$, which is much smaller than $O(\sqrt{\log(p+q)/n})$, the traditional threshold for the easy regime. Next, observe that case (i) is an easy case because $s_{x}+s_{y}$ is much smaller than $\sqrt{n}$. Therefore, in this case, the traditional threshold of the easy regime works. Case (iii) includes the hard regime, where polynomial time support recovery is probably impossible. Because it is unlikely that CT can improve over the vanilla estimator $\tilde{\Sigma}_{xy}$ in this regime, a threshold of zero is set.

###### Remark 15

Theorem 14 requires $\log n=o(\sqrt{p}\vee\sqrt{q})$ because one of our concentration inequalities in the analysis of case (ii) needs this technical condition (see Lemma 27).
The omitted regime $\log n>C(\sqrt{p}\vee\sqrt{q})$ is indeed an easier one, where special methods like CT are not even required. In fact, it is well known that subgaussian $X$ and $Y$ satisfy (cf. Theorem 4.7.1 of Vershynin, 2018)

$\|\widehat{\Sigma}_{n,xy}-{\Sigma}_{xy}\|_{op}\leq C\bigg{(}\bigg{(}\frac{p+q}{n}\bigg{)}^{1/2}+\frac{p+q}{n}\bigg{)},$

which is $O(\log n/\sqrt{n})$ in the regime under concern. We decided not to include this result in the statement of Theorem 14 since it would unnecessarily lengthen the exposition. Therefore, in this section, we exclude this regime from our consideration to focus more on the $s_{x}+s_{y}\approx\sqrt{p+q}$ regime.

###### Remark 16

The statement of Theorem 14 is not explicit about the lower bound on the constant $C_{1}$. However, our simulations show that setting $C_{1}\geq 50\mathcal{B}^{4}$ works. Both threshold parameters $C_{1}$ and $K$ in Theorem 14 depend on the unknown $\mathcal{B}>0$. The proof actually shows that $\mathcal{B}$ can be replaced by $\max\\{\Lambda_{max}({\Sigma}_{x}),\Lambda_{max}({\Sigma}_{y}),\Lambda_{max}({\Sigma}_{x}^{-1}),\Lambda_{max}({\Sigma}_{y}^{-1})\\}$. In practice, estimating the latter can also be difficult. Therefore, our suggestion is to set $K$ and $C_{1}$ to be some large numbers, or to use cross validation to choose them.

Finally, Theorem 14 leads to the following corollary, which establishes that in the difficult regime, there exist estimators which satisfy Condition 1, and Algorithm 2 succeeds with probability tending to one provided $p+q\asymp n$. This answers Question 4 in the affirmative for Gaussian distributions.

###### Corollary 17

Instate the conditions of Theorem 14. Then there exists $C_{\mathcal{B}}>0$ so that if

$n\geq C_{\mathcal{B}}r(s_{x}+s_{y})^{2}\max\bigg{\\{}\log\bigg{(}\frac{p+q}{(s_{x}+s_{y})^{2}}\bigg{)},1\bigg{\\}},$ (18)

then the $\widehat{U}^{(1)}$ defined in Algorithm 2 satisfies Condition 1, and $\inf_{\mathbb{P}\in\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})}\mathbb{P}($Algorithm 2 correctly recovers $D(V))\to_{n}1$.

We defer the proof of Corollary 17 to Appendix F.

## 4 Numerical Experiments

This section illustrates the performance of different polynomial time CCA support recovery methods when the sparsity transitions from the easy to the difficult regime. We base our demonstration on a Gaussian rank one model, i.e. $(X,Y)$ are jointly Gaussian with cross-covariance matrix ${\Sigma}_{xy}=\rho{\Sigma}_{x}\alpha\beta^{T}{\Sigma}_{y}$. For simplicity, we take $p=q$ and $s_{x}=s_{y}=s$. In all our simulations, $\rho$ is set to be $0.5$, and $\alpha=\alpha^{*}/\sqrt{(\alpha^{*})^{T}{\Sigma}_{x}\alpha^{*}}$, $\beta=\beta^{*}/\sqrt{(\beta^{*})^{T}{\Sigma}_{y}\beta^{*}}$, where

$\displaystyle\alpha^{*}=$ $\displaystyle\ (1/\sqrt{s},\ldots,1/\sqrt{s},0,\ldots,0),$ $\displaystyle\beta^{*}=$ $\displaystyle\ \Big{(}\sqrt{1-(s-1)s^{-4/3}},s^{-2/3},\dots,s^{-2/3},0,\ldots,0\Big{)}$

are unit norm vectors. Note that the order of most elements of $\beta$ is $O(s^{-2/3})$, whereas a typical element of $\alpha$ is $O(s^{-1/2})$. Therefore, we will refer to $\alpha$ and $\beta$ as the moderate and the small signal case, respectively. For the population covariance matrices ${\Sigma}_{x}$ and ${\Sigma}_{y}$ of $X$ and $Y$, we consider the following two scenarios:

A (Identity): ${\Sigma}_{x}=I_{p}$ and ${\Sigma}_{y}=I_{q}$. Since $p=q$, they are essentially the same.

B (Sparse inverse): This example is taken from Gao et al. (2017).
In this case, ${\Sigma}_{x}^{-1}={\Sigma}_{y}^{-1}$ are banded matrices, whose entries are given by

$({\Sigma}_{x}^{-1})_{i,j}=1\\{i=j\\}+0.65\times 1\\{|i-j|=1\\}+0.4\times 1\\{|i-j|=2\\}.$

Now we explain our common simulation scheme. We take the sample size $n$ to be $1000$, and consider three values for $p$: $100$, $200$, and $300$. The highest value of $p+q$ is thus $600$, which is smaller than, but of the same order as, $n$. Our simulations indicate that all of the methods considered here require $n$ to be considerably larger than $p+q$ for the asymptotics to kick in at $\rho=0.5$, and we comment on this as required below. We further let $s/\sqrt{n}$ vary in the set $[0.01,2]$. To be more specific, we consider $16$ equidistant points in the set $[0.01,2]$ for the ratio $s/\sqrt{n}$.

Now we discuss the error metrics used here to compare the performance of different support recovery methods. Type I and type II errors are commonly used tools to measure the performance of support recovery (Deshpande and Montanari, 2014). In the case of support recovery of $\alpha$, we define the type I error to be the proportion of zero elements of $\alpha$ that appear in the estimated support $\widehat{D}(\alpha)$. Thus, we quantify the type I error of $\alpha$ by $|\widehat{D}(\alpha)\setminus D(\alpha)|/(p-s)$. On the other hand, the type II error for $\alpha$ is the proportion of elements in $D(\alpha)$ which are absent in $\widehat{D}(\alpha)$, i.e. the type II error is quantified by $|D(\alpha)\setminus\widehat{D}(\alpha)|/s$. One can define the type I and type II errors corresponding to $\beta$ similarly. Our simulations demonstrate that often the methods with low type I error exhibit high type II error, and vice versa. In such situations, comparison between the corresponding methods becomes difficult if one uses the type I and type II errors separately. Therefore, we consider a scaled Hamming loss type metric which suitably combines the type I and type II errors. The symmetric Hamming error of estimating $D(\alpha)$ by $\widehat{D}(\alpha)$ is (cf. Section 2.1 of Wang et al., 2021)

$\bigg{(}1-\frac{|D(\alpha)\cap\widehat{D}(\alpha)|}{\sqrt{|D(\alpha)||\widehat{D}(\alpha)|}}\bigg{)}.$

Note that the above quantity is always bounded above by one. We can similarly define the symmetric Hamming distance between $D(\beta)$ and $\widehat{D}(\beta)$. Finally, the estimates of these three errors (type I, type II, and scaled Hamming loss) are obtained based on $1000$ Monte Carlo replications.

Now we discuss the support recovery methods we compare here.

Naïve SCCA: We estimate $\alpha$ and $\beta$ using the SCCA method of Mai and Zhang (2019), and set $\widehat{D}(\alpha)=\\{i\in[p]:\widehat{\alpha}_{i}\neq 0\\}$ and $\widehat{D}(\beta)=\\{i\in[q]:\widehat{\beta}_{i}\neq 0\\}$, where $\widehat{\alpha}$ and $\widehat{\beta}$ are the corresponding SCCA estimators. To implement the SCCA method of Mai and Zhang (2019), we use the R code referred to therein with default tuning parameters.

Cleaned SCCA: This method implements RecoverSupp with the above mentioned SCCA estimators of $\alpha$ and $\beta$ as the preliminary estimators.

CT: This is the method outlined in Algorithm 2, which is RecoverSupp coupled with the CT estimators of $\alpha$ and $\beta$. Our CT method requires the knowledge of the population covariance matrices ${\Sigma}_{x}$ and ${\Sigma}_{y}$. Therefore, to keep the comparison fair, in the case of the cleaned SCCA method as well, we implement RecoverSupp with the population covariance matrices.
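As an illustration, the three error metrics above can be computed as follows. This is a small Python helper with our own naming; the value assigned to an empty estimated support is our convention, since the symmetric Hamming error is otherwise undefined.

```python
import numpy as np

def support_errors(D_true, D_hat, p):
    """Type I, type II, and symmetric Hamming errors of an estimated support."""
    D_true, D_hat = set(D_true), set(D_hat)
    s = len(D_true)
    type_1 = len(D_hat - D_true) / (p - s)   # |D_hat \ D| / (p - s)
    type_2 = len(D_true - D_hat) / s         # |D \ D_hat| / s
    if not D_hat:                            # our convention for an empty estimate
        hamming = 1.0
    else:
        hamming = 1 - len(D_true & D_hat) / np.sqrt(s * len(D_hat))
    return type_1, type_2, hamming

# Example: p = 10, true support {0, 1, 2}, estimated support {0, 1, 5}.
print(support_errors({0, 1, 2}, {0, 1, 5}, p=10))  # (1/7, 1/3, 1/3)
```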
Because of their reliance on RecoverSupp, both cleaned SCCA and CT depend on the threshold cut, whose tuning seems to be a non-trivial task. We set $\texttt{cut}=C\sqrt{\log(p+q)s({\Sigma}_{x}^{-1})/n}$, where $C$ is the thresholding constant. Our simulations show that a large $C$ results in high type II error, whereas insufficient thresholding inflates the type I error. Taking the Hamming loss into account, we observe that $C\approx 1$ leads to a better performance in case A in an overall sense. On the other hand, case B requires a smaller value of the thresholding parameter. In particular, we let $C$ be one in case A, and set $C=0.05$ and $0.2$, respectively, for the support recovery of $\alpha$ and $\beta$ in case B. The CT algorithm requires an extra threshold parameter, namely the parameter Thr in Algorithm 2, which corresponds to the coordinate thresholding step. We set Thr in accordance with Theorem 14 and Remark 16, with $K$ being $1288\mathcal{B}^{4}$ and $C_{1}$ being $50\mathcal{B}^{4}$. We set $\mathcal{B}$ as in Remark 16, that is, $\mathcal{B}=\max\\{\Lambda_{max}({\Sigma}_{x}),\Lambda_{max}({\Sigma}_{y}),\Lambda_{max}({\Sigma}_{x}^{-1}),\Lambda_{max}({\Sigma}_{y}^{-1})\\}.$

The errors incurred by our methods in case A are displayed in Figure 2 (for $\alpha$) and Figure 3 (for $\beta$). Figures 4 and 5, on the other hand, display the errors in the recovery of $\alpha$ and $\beta$, respectively, in case B. Now we discuss the main observations from the above plots. When the sparsity parameter $s$ is considerably low (less than ten in the current settings), the naïve SCCA method is sufficient in the sense that the specialized methods do not perform any better. Moreover, the naïve method is the most conservative one among all three methods. As a consequence, the associated type I error is always small, although the type II error of the naïve method grows faster than that of any other method. The specialized methods are able to improve the type II error at the cost of a higher type I error. At a higher sparsity level, the specialized methods can outperform the naïve method in terms of the Hamming error, however. This is most evident when the setting is also complex, i.e. the signal is small or the underlying covariance matrices are not identity. In particular, Figures 2 and 4 show that when the signal strength is moderate and the sparsity is high, the cleaned SCCA has the lowest Hamming error. In the small signal case, however, CT exhibits the best Hamming error as $s/\sqrt{n}$ increases; cf. Figures 3 and 5. The type I error of CT can be slightly improved if the sparsity information can be incorporated during the thresholding step. We simply replace cut by the maximum of cut and the $s$-th largest element of $\widehat{V}^{clean}$, where the latter is as in Algorithm RecoverSupp. See for example Figure 6, which shows that this modification reduces the Hamming error of the CT algorithm in case A.

Our empirical analysis hints that the CT algorithm has potential for improvement from the implementation perspective. In particular, it may be desirable to obtain a more efficient procedure for choosing cut in a systematic way. However, such a detailed numerical analysis is beyond the scope of the current paper and will require further modifications of the initial methods for estimation of $\alpha,\beta$, both for scalability and finite sample performance reasons. We leave these explorations as important future directions. It is natural to wonder about the effect of cleaning via RecoverSupp on SCCA.
As mentioned earlier, during our simulations we observed that a cleaning step generally improves the type II error of the naïve SCCA, but it also increases the type I error. In terms of the combined measure, i.e. the Hamming error, it turns out that cleaning does have an edge at higher sparsity levels in case B; cf. Figure 4 and Figure 5. However, the scenario is different in case A. Although Figures 2 and 3 indicate that almost no cleaning occurs at the set threshold level of one, we saw that cleaning happens at lower threshold levels. However, the latter does not improve the overall Hamming error of naïve SCCA. The consequence of cleaning may be different for other SCCA methods. To summarize, when the sparsity is low, support recovery using the naïve SCCA is probably as good as the specialized methods. However, at higher sparsity levels, specialized support recovery methods may be preferable. Consequently, the precise analysis of the apparently naïve SCCA will indeed be an interesting future direction.

Figure 2: Support recovery for $\alpha$ when ${\Sigma}_{x}=I_{p}$ and ${\Sigma}_{y}=I_{q}$. Panels: (a) type I error, (b) type II error, and (c) symmetrized Hamming error.

Figure 3: Support recovery for $\beta$ when ${\Sigma}_{x}=I_{p}$ and ${\Sigma}_{y}=I_{q}$. Panels: (a) type I error, (b) type II error, and (c) symmetrized Hamming error.

Figure 4: Support recovery for $\alpha$ when ${\Sigma}_{x}$ and ${\Sigma}_{y}$ are the sparse covariance matrices. Panels: (a) type I error, (b) type II error, and (c) symmetrized Hamming error.

Figure 5: Support recovery for $\beta$ when ${\Sigma}_{x}$ and ${\Sigma}_{y}$ are the sparse covariance matrices. Panels: (a) type I error, (b) type II error, and (c) symmetrized Hamming error.

Figure 6: Support recovery by the CT algorithm when we use the information on sparsity to improve the type I error. Here ${\Sigma}_{x}$ and ${\Sigma}_{y}$ are $I_{p}$ and $I_{q}$, respectively. Panels: (a) errors for support recovery of $\alpha$ and (b) errors for support recovery of $\beta$. To see the decrease in type I error, compare the errors with those of Figure 2 and Figure 3.

## 5 Discussion

In this paper we have discussed the rate optimal behavior of the information theoretic and computational limits of the joint support recovery problem in sparse canonical correlation analysis. Inspired by recent results in the estimation theory of sparse CCA, a flurry of results in sparse PCA, and related developments based on the low-degree polynomial conjecture, we are able to paint a complete picture of the landscape of support recovery for SCCA. For future directions, it is worth noting that our results are so far not designed to recover $D(\beta_{i})$ for individual $i\in[r]$ separately (and hence the term joint recovery). Although this is also the case for most of the state of the art in the sparse PCA problem (results often exist only for the combined support (Deshpande and Montanari, 2014) or the single spike model where $r=1$ (Wainwright, 2009)), we believe that this is an interesting question for deeper exploration in the future. Moreover, moving beyond asymptotically exact recovery of the support to more nuanced metrics (e.g.
Hamming loss) will also require new ideas worth studying. Finally, it remains an interesting question whether polynomial time support recovery is possible in the $\sqrt{n/\log{(p+q)}}\ll s_{x},s_{y}\ll\sqrt{n}$ regime using a CT type idea, but for unknown yet structured high dimensional nuisance parameters $\Sigma_{x},\Sigma_{y}$.

Acknowledgments

This work was supported by National Institutes of Health grant P42ES030990.

## A Proof preliminaries

The Appendix collects the proofs of all our theorems and lemmas. This section introduces some new notation and collects some facts, which are used repeatedly in our proofs.

### A.1 New Notations

Since the columns of ${\Sigma}_{x}^{1/2}U$, i.e. $[{\Sigma}_{x}^{1/2}U_{1},\ldots,{\Sigma}_{x}^{1/2}U_{r}]$, are orthogonal, we can extend them to an orthogonal basis of $\mathbb{R}^{p}$, which can also be expressed in the form $[{\Sigma}_{x}^{1/2}u_{1},\ldots,{\Sigma}_{x}^{1/2}u_{p}]$ since ${\Sigma}_{x}$ is non-singular. Let us denote the matrix $[u_{1},\ldots,u_{p}]$ by $\tilde{U}$, whose first $r$ columns form the matrix $U$. Along the same line, we can define $\tilde{V}$, whose first $r$ columns constitute the matrix $V$. Suppose $A\in\mathbb{R}^{p\times q}$ is a matrix. We define the projection of $A$ onto $D\subset[p]\times[q]$ by

$\bigg{(}\mathcal{P}_{D}\\{A\\}\bigg{)}_{i,j}=\begin{cases}A_{i,j}&\text{ if }(i,j)\in D,\\\ 0&\text{otherwise.}\end{cases}$

Also, for any $S\subset[p]$, we let $A_{S*}$ denote the matrix $\mathcal{P}_{S\times[q]}\\{A\\}$. Similarly, for $F\subset[q]$, we let $A_{F}$ be the matrix $\mathcal{P}_{[p]\times F}\\{A\\}$. For $k\in\mathbb{N}$, we define the norms $\|A\|_{k,\infty}=\max_{j\in[q]}\|A_{j}\|_{k}$ and $\|A\|_{\infty,k}=\max_{i\in[p]}\|A_{i*}\|_{k}$. We will use the notation $|A|_{\infty}$ to denote the quantity $\sup_{i\in[p],j\in[q]}|A_{i,j}|$. The Kullback-Leibler (KL) divergence between two probability distributions $P_{1}$ and $P_{2}$ will be denoted by $KL(P_{1}\mid P_{2})$. For $x\in\mathbb{R}$, we let $\left\lfloor x\right\rfloor$ denote the greatest integer less than or equal to $x$.

### A.2 Facts on $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$

First note that since $v_{i}^{T}{\Sigma}_{y}v_{i}=1$ by (2) for all $i\in[q]$, we have $\|v_{i}\|_{2}\leq\sqrt{\mathcal{B}}$. Similarly, we can also show that $\|u_{i}\|_{2}\leq\sqrt{\mathcal{B}}$. Second, we note that $\|{\Sigma}_{x}^{1/2}U\|_{op}=\|{\Sigma}_{y}^{1/2}V\|_{op}=1$, and

$\displaystyle|{\Sigma}_{yx}|_{\infty}\leq\|{\Sigma}_{yx}\|_{op}=$ $\displaystyle\ \|{\Sigma}_{y}V\Lambda U^{T}{\Sigma}_{x}\|_{op}\leq\|{\Sigma}_{y}^{1/2}\|_{op}\|{\Sigma}_{y}^{1/2}V\|_{op}\|\Lambda\|_{op}\|{\Sigma}_{x}^{1/2}U\|_{op}\|{\Sigma}_{x}^{1/2}\|_{op}\leq\mathcal{B}$ (19)

because the largest element of $\Lambda$ is not larger than one. Since the $X_{i}$'s and $Y_{i}$'s are subgaussian, for any random vector $v$ independent of $\mathbf{X}$ and $\mathbf{Y}$, it follows that (cf. Lemma 7 of Janková and van de Geer, 2018)

$|(\widehat{\Sigma}_{n,yx}-{\Sigma}_{yx})v|_{\infty}\leq C_{\mathcal{B}}\|v\|_{2}\sqrt{\frac{\log(p+q)}{n}}$ (20)

with $\mathbb{P}$ probability $1-o(1)$ uniformly over $\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$. Also, we can show that $\Phi_{0}={\Sigma}_{y}^{-1}$ satisfies

$\|\Phi_{0}\|_{1,\infty}\leq\sqrt{s({\Sigma}_{y}^{-1})}\|\Phi_{0}\|_{2,\infty}\leq\sqrt{s({\Sigma}_{y}^{-1})}\|\Phi_{0}\|_{op}\leq\sqrt{s({\Sigma}_{y}^{-1})}\mathcal{B},$

where the Cauchy-Schwarz inequality was used in the first step.
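To fix ideas, the projection operator and the mixed matrix norms defined above can be expressed in a few lines of numpy; this is an illustrative sketch with our own function names.

```python
import numpy as np

def project_onto(A, D):
    """P_D{A}: keep the entries of A with indices in D, zero out the rest."""
    out = np.zeros_like(A)
    for i, j in D:
        out[i, j] = A[i, j]
    return out

def norm_k_inf(A, k):
    """||A||_{k,inf}: maximum l_k norm over the columns A_j."""
    return max(np.linalg.norm(A[:, j], ord=k) for j in range(A.shape[1]))

def norm_inf_k(A, k):
    """||A||_{inf,k}: maximum l_k norm over the rows A_{i*}."""
    return max(np.linalg.norm(A[i, :], ord=k) for i in range(A.shape[0]))
```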
### A.3 General Technical Facts

###### Fact 18

For two matrices $A\in\mathbb{R}^{m\times n}$ and $B\in\mathbb{R}^{n\times q}$, we have

$\|AB\|_{F}^{2}\leq\|A\|_{op}^{2}\|B\|_{F}^{2},\quad\|AB\|_{F}^{2}\leq\|A\|_{F}^{2}\|B\|_{op}^{2}.$

###### Fact 19 (Lemma 11 of Deshpande and Montanari (2014))

Let $\mathbf{Z}\in\mathbb{R}^{n\times p}$ be a matrix with i.i.d. standard normal entries, i.e. $Z_{i,j}\sim N(0,1)$. Then for every $t>0$,

$\mathbb{P}(\|\mathbf{Z}\|_{op}\geq\sqrt{p}+\sqrt{n}+t)\leq\exp(-t^{2}/2).$

As a consequence, there exists an absolute constant $C>0$ such that

$\mathbb{P}\Big{(}\|\mathbf{Z}\|_{op}\geq\sqrt{2}(\sqrt{p}+\sqrt{n})\Big{)}\leq\exp(-C(p+n)).$

Recall that for $A\in\mathbb{R}^{p\times q}$, in Appendix A.1, we defined $\|A\|_{1,\infty}$ and $\|A\|_{\infty,1}$ to be the matrix norms $\max_{j\in[q]}\|A_{j}\|_{1}$ and $\max_{i\in[p]}\|A_{i*}\|_{1}$, respectively. The following fact is a corollary of (20).

###### Fact 20

Suppose $X$ and $Y$ are jointly subgaussian. Then $|\widehat{\Sigma}_{n,xy}-{\Sigma}_{xy}|_{\infty}=O_{p}(\sqrt{\log(p+q)/n})$.

###### Fact 21 (Chi-square tail bound)

Suppose $\mathbb{Z}_{1},\ldots,\mathbb{Z}_{k}\stackrel{{\scriptstyle iid}}{{\sim}}N(0,1)$. Then for any $y>5$, we have

$\mathbb{P}\Big{(}\sum_{l=1}^{k}\mathbb{Z}_{l}^{2}\geq yk\Big{)}\leq\exp(-yk/5).$

Proof [Proof of Fact 21] Since the $\mathbb{Z}_{l}$'s are independent standard Gaussian random variables, by tail bounds on Chi-squared random variables (the form below is from Lemma 12 of Deshpande and Montanari, 2014),

$\mathbb{P}\Big{(}\sum_{l=1}^{k}\mathbb{Z}_{l}^{2}\geq k+2\sqrt{kx}+2x\Big{)}\leq\exp(-x).$

Plugging in $x=yk$, we obtain that

$\mathbb{P}\Big{(}\sum_{l=1}^{k}\mathbb{Z}_{l}^{2}\geq(1+2\sqrt{y}+2y)k\Big{)}\leq\exp(-yk),$

which implies, for $y>1$,

$\mathbb{P}\Big{(}\sum_{l=1}^{k}\mathbb{Z}_{l}^{2}\geq 5yk\Big{)}\leq\exp(-yk),$

which can be rewritten as $\mathbb{P}\Big{(}\sum_{l=1}^{k}\mathbb{Z}_{l}^{2}\geq yk\Big{)}\leq\exp(-yk/5)$ as long as $y>5$.

## B Proof of Theorem 2

For the sake of simplicity, we denote $\widehat{U}^{(1)}$, $\widehat{\Sigma}_{n,xy}^{(2)}$, and $\widehat{\Omega}_{n}^{(1)}$ by $\widehat{U}$, $\widehat{\Sigma}_{n,xy}$, and $\widehat{\Omega}_{n}$, respectively. The reader should keep in mind that $\widehat{U}$ is independent of $\widehat{\Sigma}_{n,xy}$ and $\widehat{\Omega}_{n}$ because it is constructed from a different sample. Next, using Condition 1, we can show that there exists $(w_{1},\ldots,w_{r})\in\\{\pm 1\\}^{r}$ so that

$\inf_{\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})}\mathbb{P}\Big{(}\max_{i\in[r]}\Big{|}(w_{i}\widehat{u}_{n,i}-u_{i})^{T}{\Sigma}_{x}(w_{i}\widehat{u}_{n,i}-u_{i})\Big{|}<\texttt{Err}^{2}\Big{)}\to 1$

as $n\to\infty$. Without loss of generality, we assume $w_{i}=1$ for all $i\in[r]$. The proof will be similar for general $w_{i}$'s. Thus

$\displaystyle\inf_{\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})}\mathbb{P}\Big{(}\max_{i\in[r]}\Big{|}(\widehat{u}_{n,i}-u_{i})^{T}{\Sigma}_{x}(\widehat{u}_{n,i}-u_{i})\Big{|}<\texttt{Err}^{2}\Big{)}\to 1.$ (21)

Therefore $\|\widehat{u}_{n,i}-u_{i}\|_{2}\leq\texttt{Err}\sqrt{\mathcal{B}}$ for all $i\in[r]$ with $\mathbb{P}$ probability tending to one. Now we will collect some facts which will be used during the proof.
Because $\widehat{u}_{n,i}$ and $\widehat{\Sigma}_{n,yx}$ are independent, (20) implies that

$|(\widehat{\Sigma}_{n,yx}-{\Sigma}_{yx})\widehat{u}_{n,i}|_{\infty}\leq C_{\mathcal{B}}\|\widehat{u}_{n,i}\|_{2}\sqrt{\frac{\log(p+q)}{n}}.$

Using (21), we obtain that $\|\widehat{u}_{n,i}\|_{2}\leq\|\widehat{u}_{n,i}-u_{i}\|_{2}+\|u_{i}\|_{2}\leq\sqrt{\mathcal{B}}(\texttt{Err}+1)$. Because $\texttt{Err}<\mathcal{B}^{-1}\leq 1$, we have

$\inf_{\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})}\mathbb{P}\bigg{(}\max_{i\in[r]}|(\widehat{\Sigma}_{n,yx}-{\Sigma}_{yx})\widehat{u}_{n,i}|_{\infty}\leq C_{\mathcal{B}}\sqrt{\frac{\log(p+q)}{n}}\bigg{)}=1-o(1).$ (22)

Noting that (19) implies $|{\Sigma}_{yx}\widehat{u}_{n,i}|_{\infty}\leq\|{\Sigma}_{yx}\|_{op}\|\widehat{u}_{n,i}\|_{2}\leq 2\mathcal{B}^{3/2}$, and that $\log(p+q)=o(n)$, using (22), we obtain that

$\displaystyle\max_{i\in[r]}|\widehat{\Sigma}_{n,yx}\widehat{u}_{n,i}|_{\infty}\leq|(\widehat{\Sigma}_{n,yx}-{\Sigma}_{yx})\widehat{u}_{n,i}|_{\infty}+|{\Sigma}_{yx}\widehat{u}_{n,i}|_{\infty}\leq 3\mathcal{B}^{3/2}$ (23)

with $\mathbb{P}$ probability $1-o(1)$. Now we are ready to prove Theorem 2. Because $\Lambda_{i}(v_{i})_{k}=e_{k}^{T}{\Sigma}_{y}^{-1}{\Sigma}_{yx}u_{i}$, it holds that

$\displaystyle(\widehat{v}_{n,i}^{clean})_{k}-\Lambda_{i}(v_{i})_{k}=$ $\displaystyle\ e_{k}^{T}(\widehat{\Omega}_{n}-\Phi_{0})\widehat{\Sigma}_{n,yx}\widehat{u}_{n,i}+e_{k}^{T}\Phi_{0}(\widehat{\Sigma}_{n,yx}-{\Sigma}_{yx})\widehat{u}_{n,i}+e_{k}^{T}\Phi_{0}{\Sigma}_{yx}(\widehat{u}_{n,i}-u_{i}),$

leading to

$\displaystyle|(\widehat{v}_{n,i}^{clean})_{k}-\Lambda_{i}(v_{i})_{k}|\leq$ $\displaystyle\ \underbrace{|e_{k}^{T}(\widehat{\Omega}_{n}-\Phi_{0})\widehat{\Sigma}_{n,yx}\widehat{u}_{n,i}|}_{T_{1}(i,k)}+\underbrace{|e_{k}^{T}\Phi_{0}(\widehat{\Sigma}_{n,yx}-{\Sigma}_{yx})\widehat{u}_{n,i}|}_{T_{2}(i,k)}+\underbrace{|e_{k}^{T}\Phi_{0}{\Sigma}_{yx}(\widehat{u}_{n,i}-u_{i})|}_{T_{3}(i,k)}.$

Handling the term $T_{2}$ is the easiest because

$\displaystyle\max_{i\in[r],k\in[q]}T_{2}(i,k)\leq\|\Phi_{0}\|_{1,\infty}\max_{i\in[r]}|(\widehat{\Sigma}_{n,yx}-{\Sigma}_{yx})\widehat{u}_{n,i}|_{\infty}\leq C_{\mathcal{B}}\sqrt{\frac{s({\Sigma}_{y}^{-1})\log(p+q)}{n}}$

with $\mathbb{P}$ probability $1-o(1)$ uniformly over $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$, where we used (22) and the fact that $\|\Phi_{0}\|_{1,\infty}\leq\sqrt{s({\Sigma}_{y}^{-1})}\mathcal{B}$. The difference between cases (A), (B), and (C) arises only due to the different bounds on $T_{1}(i,k)$ in these cases. We demonstrate the whole proof only for case (A). For the other two cases, we only discuss the analysis of $T_{1}(i,k)$ because the rest of the proof remains identical in these cases.

#### B.0.1 Case (A)

Since we have shown in (23) that $|\widehat{\Sigma}_{n,yx}\widehat{u}_{n,i}|_{\infty}\leq 3\mathcal{B}^{3/2}$, we calculate

$\displaystyle\max_{i\in[r],k\in[q]}T_{1}(i,k)\leq\|\widehat{\Omega}_{n}-\Phi_{0}\|_{1,\infty}\max_{i\in[r]}|\widehat{\Sigma}_{n,yx}\widehat{u}_{n,i}|_{\infty}\leq 3\mathcal{B}^{3/2}C_{\text{pre}}s({\Sigma}_{y}^{-1})\sqrt{\frac{\log q}{n}}$

with $\mathbb{P}$ probability tending to one, uniformly over $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$, where to get the last inequality, we also used the bound on $\|\widehat{\Omega}_{n}-\Phi_{0}\|_{1,\infty}$ in case (A).
Finally, for $T_{3}$, we notice that

$T_{3}(i,k)=\big{|}e_{k}^{T}\Phi_{0}{\Sigma}_{yx}(\widehat{u}_{n,i}-u_{i})\big{|}=\bigg{|}e_{k}^{T}\sum_{j=1}^{r}\Lambda_{j}v_{j}u_{j}^{T}{\Sigma}_{x}(\widehat{u}_{n,i}-u_{i})\bigg{|}\leq\max_{j\in[r]}\big{|}(v_{j})_{k}\big{|}\bigg{|}\sum_{j=1}^{r}u_{j}^{T}{\Sigma}_{x}(\widehat{u}_{n,i}-u_{i})\bigg{|}$

since $\Lambda_{1}\leq 1$. Since $(v_{j})_{k}=V_{kj}$, it is clear that $T_{3}(i,k)$ is identically zero if $k\notin D(V)$. Otherwise, the Cauchy-Schwarz inequality implies

$\bigg{|}\sum_{j=1}^{r}u_{j}^{T}{\Sigma}_{x}(\widehat{u}_{n,i}-u_{i})\bigg{|}\leq\sqrt{r}\bigg{(}\sum_{j=1}^{r}(u_{j}^{T}{\Sigma}_{x}(\widehat{u}_{n,i}-u_{i}))^{2}\bigg{)}^{1/2}\leq\sqrt{r}\|{\Sigma}_{x}^{1/2}(\widehat{u}_{n,i}-u_{i})\|_{2}$

because the ${\Sigma}_{x}^{1/2}u_{j}$'s are orthogonal. Thus

$\displaystyle\max_{i\in[r],k\in D(V)}|T_{3}(i,k)|\leq\sqrt{r}\max_{j\in[r]}\big{|}(v_{j})_{k}\big{|}\texttt{Err}.$

Now we combine the above pieces. Note that

$\displaystyle\max_{i\in[r]}\max_{k\in[q]}(|T_{1}(i,k)|+|T_{2}(i,k)|)\leq C_{\mathcal{B}}\underbrace{C_{\text{pre}}s({\Sigma}_{y}^{-1})\sqrt{\frac{\log(p+q)}{n}}}_{\epsilon_{n}}.$ (24)

For $k\notin D(V)$, denoting the $i$-th column of $\widehat{V}^{clean}$ by $\widehat{v}_{n,i}^{clean}$, we observe that

$\displaystyle\max_{k\notin D(V)}\max_{i\in[r]}|\widehat{V}_{ki}^{clean}|=\max_{k\notin D(V)}\max_{i\in[r]}|(\widehat{v}_{n,i}^{clean})_{k}|\leq\max_{i\in[r]}\max_{k\in[q]}(|T_{1}(i,k)|+|T_{2}(i,k)|)\leq C_{\mathcal{B}}{\epsilon_{n}}$ (25)

with $\mathbb{P}$ probability $1-o(1)$ uniformly over $\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$. On the other hand, if $k\in D(v_{i})$, then we have for all $i\in[r]$,

$|(\widehat{v}_{n,i}^{clean})_{k}|>\Lambda_{i}|(v_{i})_{k}|-\sqrt{r}\max_{j\in[r]}\big{|}(v_{j})_{k}\big{|}\texttt{Err}-\max_{i\in[r]}\max_{k\in[q]}(|T_{1}(i,k)|+|T_{2}(i,k)|),$

which implies

$\max_{i\in[r]}|\widehat{V}_{ki}^{clean}|>\max_{i\in[r]}\Lambda_{i}|(v_{i})_{k}|-\sqrt{r}\max_{i\in[r]}\big{|}(v_{i})_{k}\big{|}\texttt{Err}-C_{\mathcal{B}}\epsilon_{n}.$

Since $\texttt{Err}<\mathcal{B}^{-1}/(2\sqrt{r})$ and $\mathcal{B}^{-1}<\min_{i\in[r]}\Lambda_{i}$, we have

$\max_{i\in[r]}\Lambda_{i}|(v_{i})_{k}|-\sqrt{r}\max_{i\in[r]}\big{|}(v_{i})_{k}\big{|}\texttt{Err}>(\mathcal{B}^{-1}-\sqrt{r}\texttt{Err})\max_{i\in[r]}\big{|}(v_{i})_{k}\big{|}>\mathcal{B}^{-1}\max_{i\in[r]}\big{|}(v_{i})_{k}\big{|}/2.$

Thus, noting $V_{ki}=(v_{i})_{k}$, we obtain that

$\min_{k\in D(V)}\max_{i\in[r]}|(\widehat{v}_{n,i}^{clean})_{k}|=\min_{k\in D(V)}\max_{i\in[r]}|\widehat{V}_{ki}^{clean}|>\min_{k\in D(V)}\max_{i\in[r]}\big{|}V_{ki}\big{|}/(2\mathcal{B})-C_{\mathcal{B}}\epsilon_{n}$

with $\mathbb{P}$ probability $1-o(1)$ uniformly over $\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$. Suppose $C_{\mathcal{B}}^{\prime}=2\mathcal{B}C_{\mathcal{B}}$. Note that $\min_{k\in D(V)}\max_{i\in[r]}\big{|}(v_{i})_{k}\big{|}=\theta_{n}C_{\mathcal{B}}^{\prime}\epsilon_{n}$, where $\theta_{n}>2$. Then

$\min_{k\in D(V)}\max_{i\in[r]}|\widehat{V}_{ki}^{clean}|>(\theta_{n}-1)C_{\mathcal{B}}^{\prime}\epsilon_{n}/(2\mathcal{B})$

with $\mathbb{P}$ probability $1-o(1)$ uniformly over $\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$. This, combined with (25), implies that setting $\texttt{cut}\in[C_{\mathcal{B}}^{\prime}\epsilon_{n}/(2\mathcal{B}),(\theta_{n}-1)C_{\mathcal{B}}^{\prime}\epsilon_{n}/(2\mathcal{B})]$ leads to full support recovery with $\mathbb{P}$ probability $1-o(1)$. The proof of the first part follows.
#### B.0.2 Case (B) In the Gaussian case, we resort to the hidden variable representation of $X$ and $Y$ due to Bach and Jordan (2005), which enables a sharper bound on the term $T_{1}(i,k)$. Suppose $Z\sim N_{r}(0,I_{r})$ where $r$ is the rank of ${\Sigma}_{xy}$. Consider $Z_{1}\sim N_{p}(0,I_{p})$ and $Z_{2}\sim N_{q}(0,I_{q})$ independent of $Z$. Then $X$ and $Y$ can be represented as $X=\mathcal{W}_{1}Z+\mathcal{H}_{1}Z_{1}\quad\text{ and }\quad Y=\mathcal{W}_{2}Z+\mathcal{H}_{2}Z_{2},$ (26) where $\mathcal{W}_{1}={\Sigma}_{x}U\Lambda^{1/2},\ \mathcal{W}_{2}={\Sigma}_{y}V\Lambda^{1/2},\ \mathcal{H}_{1}=({\Sigma}_{x}-\mathcal{W}_{1}\mathcal{W}_{1}^{T})^{1/2},\ \ \text{and}\ \mathcal{H}_{2}=({\Sigma}_{y}-\mathcal{W}_{2}\mathcal{W}_{2}^{T})^{1/2}.$ Here $({\Sigma}_{x}-\mathcal{W}_{1}\mathcal{W}_{1}^{T})^{1/2}$ is well defined because ${\Sigma}_{x}-\mathcal{W}_{1}\mathcal{W}_{1}^{T}={\Sigma}_{x}\tilde{U}(I_{p}-\Lambda_{x})\tilde{U}^{T}{\Sigma}_{x}$, where $\Lambda_{x}$ is a $p\times p$ diagonal matrix whose first $r$ elements are $\Lambda_{1},\ldots,\Lambda_{r}$, and the rest are zero. Because $\Lambda_{1}\leq 1$, we have $({\Sigma}_{x}-\mathcal{W}_{1}\mathcal{W}_{1}^{T})^{1/2}={\Sigma}_{x}\tilde{U}(I_{p}-\Lambda_{x})^{1/2}\tilde{U}^{T}{\Sigma}_{x}$. Similarly, we can show that $({\Sigma}_{y}-\mathcal{W}_{2}\mathcal{W}_{2}^{T})^{1/2}={\Sigma}_{y}\tilde{V}(I_{q}-\Lambda_{y})^{1/2}\tilde{V}^{T}{\Sigma}_{y}$ where $\Lambda_{y}$ is the $q\times q$ diagonal matrix whose first $r$ elements are $\Lambda_{1},\ldots,\Lambda_{r}$, and the rest are zero. It can be easily verified that $Var(X)=\mathcal{W}_{1}\mathcal{W}_{1}^{T}+\mathcal{H}_{1}\mathcal{H}_{1}^{T}={\Sigma}_{x},\quad Var(Y)=\mathcal{W}_{2}\mathcal{W}_{2}^{T}+\mathcal{H}_{2}\mathcal{H}_{2}^{T}={\Sigma}_{y},\quad\text{and}\quad{\Sigma}_{xy}=\mathcal{W}_{1}\mathcal{W}_{2}^{T}={\Sigma}_{x}U\Lambda V^{T}{\Sigma}_{y},$ which ensures that the joint variance of $(X,Y)$ is still $\Sigma$. Also, some linear algebra leads to $\displaystyle\max\bigg{\\{}\|\mathcal{H}_{1}\|^{2}_{op},\|\mathcal{H}_{2}\|^{2}_{op},\|\mathcal{W}_{1}\|_{op},\|\mathcal{W}_{2}\|_{op}\bigg{\\}}<\mathcal{B}.$ (27) Suppose we have $n$ independent realizations of the pseudo-observations $Z_{1}$, $Z_{2}$, and $Z$. Denote by $\mathbf{Z}_{1}$, $\mathbf{Z}_{2}$, and $\mathbf{Z}$ the stacked data matrices whose $i$-th rows are $(Z_{1})_{i}$, $(Z_{2})_{i}$, and $Z_{i}$, respectively, where $i\in[n]$. Here we use the term data matrix although we do not observe $\mathbf{Z}$, $\mathbf{Z}_{1}$ and $\mathbf{Z}_{2}$ directly.
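As a quick sanity check of (26) (an elementary verification, using only the independence and zero means of $Z$, $Z_{1}$, $Z_{2}$): the cross terms vanish and $Cov(X,Y)=\mathbb{E}\big{[}(\mathcal{W}_{1}Z+\mathcal{H}_{1}Z_{1})(\mathcal{W}_{2}Z+\mathcal{H}_{2}Z_{2})^{T}\big{]}=\mathcal{W}_{1}\mathbb{E}[ZZ^{T}]\mathcal{W}_{2}^{T}=\mathcal{W}_{1}\mathcal{W}_{2}^{T}={\Sigma}_{x}U\Lambda V^{T}{\Sigma}_{y},$ recovering the expression for ${\Sigma}_{xy}$ displayed above.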
Due to the representation in (26), the data matrices $\mathbf{X}$ and $\mathbf{Y}$ have the form $\mathbf{X}=\mathbf{Z}\mathcal{W}_{1}^{T}+\mathbf{Z}_{1}\mathcal{H}_{1},\quad\mathbf{Y}=\mathbf{Z}\mathcal{W}_{2}^{T}+\mathbf{Z}_{2}\mathcal{H}_{2}.$ We can write the covariance matrix $\widehat{\Sigma}_{n,xy}=\mathbf{X}^{T}\mathbf{Y}/n$ as $\widehat{\Sigma}_{n,xy}=\frac{1}{n}\bigg{\\{}\mathcal{W}_{1}\mathbf{Z}^{T}\mathbf{Z}\mathcal{W}_{2}^{T}+\mathcal{W}_{1}\mathbf{Z}^{T}\mathbf{Z}_{2}\mathcal{H}_{2}+\mathcal{H}_{1}^{T}\mathbf{Z}_{1}^{T}\mathbf{Z}\mathcal{W}_{2}^{T}+\mathcal{H}_{1}^{T}\mathbf{Z}_{1}^{T}\mathbf{Z}_{2}\mathcal{H}_{2}\bigg{\\}}.$ (28) Therefore, for any vectors $\theta_{1}\in\mathbb{R}^{p}$ and $\theta_{2}\in\mathbb{R}^{q}$, we have $\theta_{1}^{T}(\widehat{\Sigma}_{n,xy}-{\Sigma}_{xy})\theta_{2}=\theta_{1}^{T}\mathcal{W}_{1}^{T}\Big{(}\frac{\mathbf{Z}^{T}\mathbf{Z}}{n}-I_{r}\Big{)}\mathcal{W}_{2}\theta_{2}+\frac{1}{n}\theta_{1}^{T}\Big{(}\mathcal{W}_{1}\mathbf{Z}^{T}\mathbf{Z}_{2}\mathcal{H}_{2}+\mathcal{H}_{1}^{T}\mathbf{Z}_{1}^{T}\mathbf{Z}\mathcal{W}_{2}^{T}+\mathcal{H}_{1}^{T}\mathbf{Z}_{1}^{T}\mathbf{Z}_{2}\mathcal{H}_{2}\Big{)}\theta_{2}.$ (29) By the Bai-Yin law on the eigenvalues of Wishart matrices (Bai and Yin, 1993), there exists an absolute constant $C>0$ so that for any $t>1$, $P\bigg{(}\norm{\frac{\mathbf{Z}^{T}\mathbf{Z}}{n}-I_{r}}_{op}<t\sqrt{r/n}\bigg{)}\geq 1-2\exp(-Ct^{2}r),$ which, combined with (27), implies $\inf_{\mathbb{P}\in\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})}\mathbb{P}\Big{(}\Big{|}\theta_{1}^{T}\mathcal{W}_{1}^{T}({\mathbf{Z}^{T}\mathbf{Z}}/{n}-I_{r})\mathcal{W}_{2}\theta_{2}\Big{|}\leq t\mathcal{B}^{2}\|\theta_{1}\|_{2}\|\theta_{2}\|_{2}\sqrt{r/n}\Big{)}\geq 1-2\exp(-Ct^{2}r).$ Now we state a lemma which will be required to control the other terms on the right-hand side of (29). ###### Lemma 22 Suppose $\mathbf{Z}_{1}\in\mathbb{R}^{n\times p}$ and $\mathbf{Z}_{2}\in\mathbb{R}^{n\times q}$ are independent Gaussian data matrices. Further suppose $x\in\mathbb{R}^{p}$ and $y\in\mathbb{R}^{q}$ are either deterministic or independent of both $\mathbf{Z}_{1}$ and $\mathbf{Z}_{2}$. Then there exists a constant $C>0$ so that for any $t>1$, $P\Big{(}\absolutevalue{x^{T}\mathbf{Z}_{1}^{T}\mathbf{Z}_{2}y}>t\|x\|_{2}\|y\|_{2}\sqrt{n}\Big{)}\leq\exp(-Cn)+\exp(-t^{2}/2).$ The proof of Lemma 22 follows directly by setting $b=1$ in the following lemma, which is proved in Appendix G.4. ###### Lemma 23 Suppose $\mathbf{Z}_{1}\in\mathbb{R}^{n\times p}$ and $\mathbf{Z}_{2}\in\mathbb{R}^{n\times q}$ are independent standard Gaussian data matrices, and $D\in\mathbb{R}^{p\times k_{1}}$ and $B\in\mathbb{R}^{q\times k_{2}}$ are deterministic matrices with ranks $a$ and $b$, respectively. Let $a\leq b\leq n$.
Then there exists an absolute constant $C>0$ so that for any $t\geq 0$, the following holds with probability at least $1-\exp(-Cn)-\exp(-t^{2}/2)$: $\|D^{T}\mathbf{Z}_{1}^{T}\mathbf{Z}_{2}B\|_{op}\leq C\|D\|_{op}\|B\|_{op}\sqrt{n}\max\\{\sqrt{b},t\\}.$ Lemma 22, in conjunction with (27), implies that there exists an absolute constant $C>0$ so that $\frac{1}{n}\Big{|}\theta_{1}^{T}\Big{(}\mathcal{W}_{1}\mathbf{Z}^{T}\mathbf{Z}_{2}\mathcal{H}_{2}+\mathcal{H}_{1}^{T}\mathbf{Z}_{1}^{T}\mathbf{Z}\mathcal{W}_{2}^{T}+\mathcal{H}_{1}^{T}\mathbf{Z}_{1}^{T}\mathbf{Z}_{2}\mathcal{H}_{2}\Big{)}\theta_{2}\Big{|}\leq t\mathcal{B}^{2}\|\theta_{1}\|_{2}\|\theta_{2}\|_{2}n^{-1/2}$ with $\mathbb{P}$ probability at least $1-\exp(-Cn)-\exp(-t^{2}/2)$ for all $\mathbb{P}\in\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})$. Therefore, there exists $C>0$ so that $\inf_{\mathbb{P}\in\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})}\mathbb{P}\Big{(}|\theta_{1}^{T}(\widehat{\Sigma}_{n,xy}-{\Sigma}_{xy})\theta_{2}|\leq t\sqrt{r}\mathcal{B}^{2}\|\theta_{1}\|_{2}\|\theta_{2}\|_{2}n^{-1/2}\Big{)}\geq 1-\exp(-Cn)-\exp(-Ct^{2}).$ (30) Note that $T_{1}(i,k)\leq\underbrace{\Big{|}\Big{(}(\widehat{\Omega}_{n})_{k*}-({\Sigma}_{y}^{-1})_{k*}\Big{)}^{T}(\widehat{\Sigma}_{n,yx}-{\Sigma}_{yx})\widehat{u}_{n,i}\Big{|}}_{T_{11}(i,k)}+\underbrace{\Big{|}\Big{(}(\widehat{\Omega}_{n})_{k*}-({\Sigma}_{y}^{-1})_{k*}\Big{)}^{T}{\Sigma}_{yx}\widehat{u}_{n,i}\Big{|}}_{T_{12}(i,k)}.$ Now suppose $\theta_{1}=(\widehat{\Omega}_{n})_{k*}-({\Sigma}_{y}^{-1})_{k*}$ and $\theta_{2}=\widehat{u}_{n,i}$. By our assumption, $\|\theta_{1}\|_{2}\leq C_{\text{pre}}\sqrt{s({\Sigma}_{y}^{-1})(\log q)/n}$ with $\mathbb{P}$ probability $1-o(1)$ uniformly across $\mathbb{P}\in\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})$. We also showed that $\|\widehat{u}_{n,i}\|_{2}\leq 2\sqrt{\mathcal{B}}$. It is not hard to see that $\displaystyle\sup_{i\in[r],k\in[q]}T_{12}(i,k)\leq 2\mathcal{B}^{3/2}C_{\text{pre}}\sqrt{s({\Sigma}_{y}^{-1})(\log q)/n}$ (31) with $\mathbb{P}$ probability $1-o(1)$ uniformly across $\mathbb{P}\in\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})$. For $T_{11}$, observe that (30) applies because $\theta_{1}=(\widehat{\Omega}_{n})_{k*}-({\Sigma}_{y}^{-1})_{k*}$ and $\theta_{2}=\widehat{u}_{n,i}$ are independent of $\widehat{\Sigma}_{n,xy}$. Thus we can write that for any $t>1$, there exists $C_{\mathcal{B}}>1$ such that $\sup_{\mathbb{P}\in\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})}\mathbb{P}\Big{(}|T_{11}(i,k)|>tC_{\mathcal{B}}C_{\text{pre}}\sqrt{rs({\Sigma}_{y}^{-1})\log q}/n\Big{)}\leq\exp(-Cn)+\exp(-Ct^{2}).$ Applying a union bound, we obtain that $\displaystyle\sup_{\mathbb{P}\in\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})}\mathbb{P}\Big{(}\max_{i\in[r]}\max_{k\in[q]}|T_{11}(i,k)|>tC_{\mathcal{B}}C_{\text{pre}}\sqrt{rs({\Sigma}_{y}^{-1})\log q}/n\Big{)}$ $\displaystyle\leq\exp(-Cn+\log(qr))+\exp(-Ct^{2}+\log(qr)).$ Since $r<q$ and $\log q=o(n)$, setting $t=2\sqrt{\log q}/C$, we obtain that $\sup_{\mathbb{P}\in\mathcal{P}_{G}(r,s_{x},s_{y},\mathcal{B})}\mathbb{P}\Big{(}\max_{i\in[r]}\max_{k\in[q]}|T_{11}(i,k)|>C_{\mathcal{B}}C_{\text{pre}}\sqrt{rs({\Sigma}_{y}^{-1})}\log q/n\Big{)}=o(1).$ Using (24) and (31), one can show that $\epsilon_{n}=C_{\text{pre}}\sqrt{s({\Sigma}_{y}^{-1})(\log(p+q))/n}\max\\{\sqrt{r(\log q)/n},1\\}$ in this case. #### B.0.3 Case (C) Note that when $\widehat{\Omega}_{n}={\Sigma}_{y}^{-1}$, $T_{1}(i,k)=0$. Therefore, (24) implies $\epsilon_{n}=\sqrt{{s({\Sigma}_{y}^{-1})\log(p+q)}/{n}}$ in this case.
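To summarize the three cases (collecting the expressions derived above, nothing new): in case (A), $\epsilon_{n}=C_{\text{pre}}s({\Sigma}_{y}^{-1})\sqrt{\log(p+q)/n}$; in case (B), $\epsilon_{n}=C_{\text{pre}}\sqrt{s({\Sigma}_{y}^{-1})(\log(p+q))/n}\max\\{\sqrt{r(\log q)/n},1\\}$; and in case (C), $\epsilon_{n}=\sqrt{s({\Sigma}_{y}^{-1})\log(p+q)/n}$, in each instance up to the multiplicative constant $C_{\mathcal{B}}$ appearing in (24).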
## C Proof of Theorem 6 Since the proofs for $U$ and $V$ follow in a similar way, we will only consider the support recovery of $U$. The proof for both cases follows a common structure, which we elaborate on first. Since the model $\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$ is fairly large, we will work with a smaller submodel. Specifically, we will consider a subclass of the single spike models, i.e., $r=1$. Because we are concerned with only the support recovery of the left singular vectors, we fix $\beta_{0}$ in $\mathbb{R}^{q}$ so that $\|\beta_{0}\|_{2}=1$. We also fix $\rho\in(0,1)$ and consider the subset $\mathcal{E}\subset\\{\alpha\in\mathbb{R}^{p}:\|\alpha\|_{2}=1\\}$. Both $\rho$ and $\mathcal{E}$ will be chosen later. We restrict our attention to the submodel $\mathcal{M}(s_{x},s_{y},\rho,\mathcal{E})$ given by $\bigg{\\{}\mathbb{P}\in\mathcal{P}(1,s_{x},s_{y},\mathcal{B})\ :\ \mathbb{P}\equiv N_{p+q}(0,\Sigma)\text{ where }\Sigma\text{ is of the form }(32)\text{ with }\alpha\in\mathcal{E},\beta=\beta_{0}\bigg{\\}},$ where (32) is as follows: $\Sigma=\begin{bmatrix}I_{p}&\rho\alpha\beta^{T}\\\ \rho\beta\alpha^{T}&I_{q}\end{bmatrix}.$ (32) That $\Sigma$ is positive definite for $\rho\in(0,1)$ can be shown either using elementary linear algebra or the hidden variable representation (26). During the proof of part (B), we will choose $\mathcal{E}$ so that $\texttt{Sig}_{x}^{2}\leq(\mathcal{B}^{2}-1)(\log(p-s_{x}))/(8n)$, which will ensure that $\mathcal{M}(s_{x},s_{y},\rho,\mathcal{E})\subset\mathcal{P}_{\texttt{Sig}}(r,s_{x},s_{y},\mathcal{B})$ as well. Note that for $\mathbb{P}\in\mathcal{M}(s_{x},s_{y},\rho,\mathcal{E})$, $U$ corresponds to $\alpha$, and hence $D(U)=D(\alpha)$. Therefore for the proof of both parts, it suffices to show that for any decoder $\widehat{D}_{\alpha}$ of $D(\alpha)$, $\inf_{\widehat{D}_{\alpha}}\sup_{\mathbb{P}\in\mathcal{M}(s_{x},s_{y},\rho,\mathcal{E})}\mathbb{P}\Big{(}\widehat{D}_{\alpha}\neq D(\alpha)\Big{)}>1/2.$ (33) In both of the proofs, our $\mathcal{E}$ will be a finite set. Our goal is to choose $\mathcal{E}$ so that $\mathcal{M}(s_{x},s_{y},\rho,\mathcal{E})$ is structurally rich enough to guarantee (33), yet lends itself to easy computations. The guidance for choosing $\mathcal{E}$ comes from our main technical tool for this proof, which is Fano's inequality. We use the version of Fano's inequality in Yatracos (1988) (Fano's lemma). Applied to our problem, this inequality yields $\inf_{\widehat{D}_{\alpha}}\sup_{\mathbb{P}\in\mathcal{M}(s_{x},s_{y},\rho,\mathcal{E})}\mathbb{P}\Big{(}\widehat{D}_{\alpha}\neq D(\alpha)\Big{)}\geq 1-\dfrac{\frac{\sum_{\mathbb{P}_{1},\mathbb{P}_{2}\in\mathcal{M}(s_{x},s_{y},\rho,\mathcal{E})}KL(\mathbb{P}_{1}^{n}|\mathbb{P}_{2}^{n})}{|\mathcal{M}(s_{x},s_{y},\rho,\mathcal{E})|^{2}}+\log 2}{\log(|\mathcal{M}(s_{x},s_{y},\rho,\mathcal{E})|-1)},$ (34) where $\mathbb{P}^{n}$ denotes the product measure corresponding to $n$ i.i.d. observations from $\mathbb{P}$. We also have the following identity for product measures: $KL(\mathbb{P}_{1}^{n}|\mathbb{P}_{2}^{n})=nKL(\mathbb{P}_{1}|\mathbb{P}_{2})$.
Moreover, when $\mathbb{P}_{1},\mathbb{P}_{2}\in\mathcal{M}(s_{x},s_{y},\rho,\mathcal{E})$ with left singular vectors $\alpha_{1}$ and $\alpha_{2}$, respectively, $KL(\mathbb{P}_{1}|\mathbb{P}_{2})=\log\frac{\text{det}(\Sigma_{2})}{\text{det}(\Sigma_{1})}-(p+q)+Tr(\Sigma_{2}^{-1}\Sigma_{1}),$ where $\text{det}(\Sigma_{1})=\text{det}(\Sigma_{2})=1-\rho^{2}$ by Lemma 32, and $-(p+q)+Tr(\Sigma_{2}^{-1}\Sigma_{1})=\frac{2\rho^{2}}{1-\rho^{2}}\bigg{(}1-(\alpha_{1}^{T}\alpha_{2})\|\beta_{0}\|_{2}^{2}\bigg{)}$ by Lemma 33. Noting that $\alpha_{1}$, $\alpha_{2}$, and $\beta_{0}$ are unit vectors, we derive $KL(\mathbb{P}_{1}|\mathbb{P}_{2})=\rho^{2}(\|\alpha_{1}-\alpha_{2}\|^{2}_{2})/(1-\rho^{2})$. Therefore, in our case, (34) reduces to $\inf_{\widehat{D}_{\alpha}}\sup_{\mathbb{P}\in\mathcal{M}(s_{x},s_{y},\rho,\mathcal{E})}\mathbb{P}\Big{(}\widehat{D}_{\alpha}\neq D(\alpha)\Big{)}\geq 1-\dfrac{n\rho^{2}\sup_{\alpha_{1},\alpha_{2}\in\mathcal{E}}\|\alpha_{1}-\alpha_{2}\|^{2}/(1-\rho^{2})+\log 2}{\log(|\mathcal{E}|-1)}.$ (35) Thus, to ensure the right-hand side of (35) is non-negligible, the key is to choose $\mathcal{E}$ so that the $\alpha$'s in $\mathcal{E}$ are close in $\ell_{2}$ norm, but $|\mathcal{E}|$ is sufficiently large. Note that the above ensures that distinguishing the $\alpha$'s in $\mathcal{E}$ is difficult. ### C.1 Proof of part (A) Note that our main job is to choose $\mathcal{E}$ and $\rho$ suitably. Let us denote $\alpha_{0}=(\underbrace{1/\sqrt{s_{x}},\ldots,1/\sqrt{s_{x}}}_{s_{x}\text{ many }},\underbrace{0,\ldots,0}_{p-s_{x}\text{ many }}).$ We generate a class of $\alpha$'s by replacing one of the $1/\sqrt{s_{x}}$'s in $\alpha_{0}$ by $0$, and one of the zeros in $\alpha_{0}$ by $1/\sqrt{s_{x}}$. A typical $\alpha$ obtained this way looks like $\alpha=\Big{(}\underbrace{1/\sqrt{s_{x}},\ldots,\mathbf{0},\ldots,1/\sqrt{s_{x}}}_{s_{x}\text{ many }},\underbrace{0,\ldots,\mathbf{1/\sqrt{s_{x}}},\ldots,0}_{p-s_{x}\text{ many }}\Big{)},$ where the bold entries mark the two modified coordinates. Let $\mathcal{E}$ be the class consisting of $\alpha_{0}$ and all such resulting $\alpha$'s. Note that $|\mathcal{E}|=s_{x}(p-s_{x})$, and $\alpha_{1},\alpha_{2}\in\mathcal{E}$ satisfy $\|\alpha_{1}-\alpha_{2}\|^{2}_{2}\leq\|\alpha_{1}-\alpha_{0}\|^{2}_{2}+\|\alpha_{2}-\alpha_{0}\|^{2}_{2}\leq 4s_{x}^{-1}.$ Because $p>s_{x}>1$, we have $\log(s_{x}(p-s_{x})-1)\geq\log(p-s_{x})$. Therefore, (35) leads to $\inf_{\widehat{D}_{\alpha}}\sup_{\mathbb{P}\in\mathcal{M}(s_{x},s_{y},\rho,\mathcal{E})}\mathbb{P}\Big{(}\widehat{D}_{\alpha}\neq D(\alpha)\Big{)}\geq 1-\frac{4\rho^{2}ns_{x}^{-1}/(1-\rho^{2})+\log 2}{\log(p-s_{x})},$ which is bounded below by $1/2$ whenever $s_{x}>\frac{8\rho^{2}n}{(1-\rho^{2})\\{\log(p-s_{x})-\log 4\\}},\ \text{ which follows if }\ s_{x}>\frac{16\rho^{2}n}{(1-\rho^{2})\log(p-s_{x})}$ because $4=\sqrt{16}<\sqrt{p-s_{x}}$. To get the best bound on $s_{x}$, we choose the value of $\rho$ which minimizes $\rho^{2}/(1-\rho^{2})$ for $\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$, that is, $\rho=1/\mathcal{B}$. Plugging in $\rho=1/\mathcal{B}$, the proof follows.
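To verify the distance bound used above (simple arithmetic with the construction of $\mathcal{E}$): each $\alpha\in\mathcal{E}\setminus\\{\alpha_{0}\\}$ differs from $\alpha_{0}$ in exactly two coordinates, each by magnitude $1/\sqrt{s_{x}}$, so $\|\alpha-\alpha_{0}\|_{2}^{2}=2/s_{x}$; moreover, any two elements of $\mathcal{E}$ differ in at most four coordinates, with every entry equal to $0$ or $1/\sqrt{s_{x}}$, whence $\|\alpha_{1}-\alpha_{2}\|_{2}^{2}\leq 4/s_{x}$, matching the display above.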
### C.2 Proof of part (B) Suppose each $\alpha\in\mathcal{E}$ is of the following form: $\alpha=\Big{(}\underbrace{b,\ldots,b}_{s_{x}-1\text{ many }},\underbrace{0,\ldots,0,\mathbf{z},0,\ldots,0}_{p-s_{x}+1\text{ many }}\Big{)},$ where the bold entry marks the coordinate carrying the value $z$. We fix $z\in(0,1)$, and hence $b=\sqrt{(1-z^{2})/(s_{x}-1)}$ is also fixed. We will choose the values of $\rho$ and $z$ later so that $\mathcal{P}_{\texttt{Sig}}(r,s_{x},s_{y},\mathcal{B})\supset\mathcal{M}(s_{x},s_{y},\rho,\mathcal{E})$. Since $z$ is fixed, such an $\alpha$ can be chosen in $p-s_{x}+1$ ways. Therefore $|\mathcal{E}|=p-s_{x}+1$. Also note that for $\alpha,\alpha^{\prime}\in\mathcal{E}$, $\|\alpha-\alpha^{\prime}\|_{2}^{2}\leq 2z^{2}$. Therefore (35) implies $\inf_{\widehat{D}_{\alpha}}\sup_{P\in\mathcal{M}(s_{x},s_{y},\rho,\mathcal{E})}P\Big{(}\widehat{D}_{\alpha}\neq D(\alpha)\Big{)}\geq 1-\dfrac{2n\rho^{2}z^{2}/(1-\rho^{2})+\log 2}{\log(p-s_{x})},$ (36) which is greater than $1/2$ whenever $z^{2}<\frac{1-\rho^{2}}{4n\rho^{2}}\log(\frac{p-s_{x}}{4}),\quad\text{which holds if }\quad z^{2}=\frac{1-\rho^{2}}{8n\rho^{2}}\log(p-s_{x})$ because $16<p-s_{x}$. To get the best bound on $z$, we choose the value of $\rho$ for $\mathbb{P}\in\mathcal{P}(r,s_{x},s_{y},\mathcal{B})$ which maximizes $(1-\rho^{2})/\rho^{2}$, that is, $\rho=1/\mathcal{B}$. Thus (33) is satisfied when $\rho=1/\mathcal{B}$, and $\mathcal{E}$ corresponds to $z^{2}=(\mathcal{B}^{2}-1)\log(p-s_{x})/(8n)$. Since the minimal signal strength $\texttt{Sig}_{x}$ for any $\mathbb{P}\in\mathcal{M}(s_{x},s_{y},\mathcal{B}^{-1},\mathcal{E})$ equals $\min(z,b)\leq z$, we have $\mathcal{P}_{\texttt{Sig}}(r,s_{x},s_{y},\mathcal{B})\supset\mathcal{M}(s_{x},s_{y},\mathcal{B}^{-1},\mathcal{E})$, which completes the proof. ## D Proof of Theorem 10 We first introduce some notation and terminology required for the proof. For $w\in\mathbb{Z}^{m}$ and $x\in\mathbb{R}^{m}$, we denote $w!=\prod_{i=1}^{m}w_{i}!$ and $x^{w}=\prod_{i=1}^{m}x_{i}^{w_{i}}$. In the low-degree polynomial literature, when $w\in\mathbb{Z}^{m}$, the notation $|w|$ is commonly used to denote the sum $\sum_{i=1}^{m}w_{i}$ for the sake of simplicity. We also follow this convention. Here the notation $|\cdot|$ should not be confused with the absolute value of real numbers. Also, for any function $f:\mathbb{R}^{m}\mapsto\mathbb{R}$, $w\in\mathbb{Z}^{m}$, and $t=(t_{1},\ldots,t_{m})$, we denote $\partial_{t}^{w}f(t)=\frac{\partial^{|w|}}{\partial t_{1}^{w_{1}}\ldots\partial t_{m}^{w_{m}}}f(t).$ We will also sometimes use the shorthand notation $\mathbb{E}_{\pi}$ to denote $\mathbb{E}_{\alpha\sim\pi_{x},\beta\sim\pi_{y}}$. Our analysis relies on Hermite polynomials, which we discuss here very briefly. For a detailed account of Hermite polynomials, see Chapter V of Szegö (1939). The univariate Hermite polynomial of degree $k$ will be denoted by $h_{k}$. For $k\geq 0$, the univariate Hermite polynomials $h_{k}:\mathbb{R}\mapsto\mathbb{R}$ are defined recursively as follows: $\displaystyle h_{0}(x)=1,\quad h_{1}(x)=xh_{0}(x),\quad\ldots,\quad h_{k+1}(x)=xh_{k}(x)-h_{k}^{\prime}(x).$ The normalized univariate Hermite polynomials are given by $\widehat{h}_{k}(x)=h_{k}(x)/\sqrt{k!}$. The univariate Hermite polynomials form an orthogonal basis of $L_{2}(N(0,1))$. For $w\in\mathbb{Z}^{m}$, the $m$-variate Hermite polynomials are given by $H_{w}(y)=\prod_{i=1}^{m}h_{w_{i}}(y_{i})$, where $y\in\mathbb{R}^{m}$.
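For concreteness, the recursion produces (a routine computation, included only for the reader's convenience) $h_{2}(x)=xh_{1}(x)-h_{1}^{\prime}(x)=x^{2}-1,\quad h_{3}(x)=xh_{2}(x)-h_{2}^{\prime}(x)=x^{3}-3x,\quad h_{4}(x)=x^{4}-6x^{2}+3,$ so that, for instance, $\widehat{h}_{3}(x)=(x^{3}-3x)/\sqrt{6}$.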
The normalized version $\widehat{H}_{w}$ of $H_{w}$ equals $H_{w}/\sqrt{w!}$. The polynomials $\widehat{H}_{w}$ form an orthonormal basis of $L_{2}(N_{m}(0,I_{m}))$. We denote by $\Pi_{n}^{\leq D_{n}}$ the linear span of all $n(p+q)$-variate Hermite polynomials of degree at most $D_{n}$. Since $\mathbb{L}_{n}^{\leq D_{n}}$ is the projection of $\mathbb{L}_{n}$ on $\Pi_{n}^{\leq D_{n}}$, it then follows that $\|\mathbb{L}_{n}^{\leq D_{n}}\|^{2}_{L_{2}(\mathbb{Q}_{n})}=\sum_{\begin{subarray}{c}w\in\mathbb{Z}^{n(p+q)}\\\ |w|\leq D_{n}\end{subarray}}\langle\mathbb{L}_{n},\widehat{H}_{w}\rangle_{L^{2}(\mathbb{Q}_{n})}^{2}.$ (37) From now on, the degree-index vector $w$ of $\widehat{H}_{w}$ or $H_{w}$ will be assumed to lie in $\mathbb{Z}^{n(p+q)}$. We will partition $w$ into $n$ components, which gives $w=(w_{1},\ldots,w_{n})$, where $w_{i}\in\mathbb{Z}^{p+q}$ for each $i\in[n]$. Clearly, $i$ here corresponds to the $i$-th observation. We also separate each $w_{i}$ into two parts $w_{i}^{x}\in\mathbb{Z}^{p}$ and $w_{i}^{y}\in\mathbb{Z}^{q}$ so that $w_{i}=(w_{i}^{x},w_{i}^{y})$. We will also denote $w^{x}=(w^{x}_{1},\ldots,w^{x}_{n})$ and $w^{y}=(w^{y}_{1},\ldots,w^{y}_{n})$. Note that $w^{x}\in\mathbb{Z}^{np}$ and $w^{y}\in\mathbb{Z}^{nq}$, but $w\neq(w^{x},w^{y})$ in general, although $|w|=|w^{x}|+|w^{y}|$. Now we state the main lemmas which yield the value of $\|\mathbb{L}_{n}^{\leq D_{n}}\|^{2}_{L_{2}(\mathbb{Q}_{n})}$. The first lemma, proved in Appendix G.3, gives the form of the inner products $\langle\mathbb{L}_{n},\widehat{H}_{w}\rangle_{L^{2}(\mathbb{Q}_{n})}$. ###### Lemma 24 Suppose $w$ is as defined above and $\mathbb{L}_{n}$ is as in (15). Then it holds that $\langle\mathbb{L}_{n},\widehat{H}_{w}\rangle_{L^{2}(\mathbb{Q}_{n})}^{2}=\begin{cases}\frac{\mathcal{B}^{-|w|}}{w!}\bigg{\\{}\mathbb{E}_{\pi}\bigg{[}1\\{\|\alpha\|_{2}\|\beta\|_{2}<\mathcal{B}\\}\alpha^{\sum_{i=1}^{n}w^{x}_{i}}\beta^{\sum_{i=1}^{n}w^{y}_{i}}\bigg{]}\bigg{\\}}^{2}\bigg{(}\prod_{i=1}^{n}{|w^{x}_{i}|!}\bigg{)}^{2}&\text{ if }|w^{x}_{i}|=|w^{y}_{i}|\text{ for all }i\in[n],\\\ 0&\text{ otherwise.}\end{cases}$ Here the priors $\pi_{x}$ and $\pi_{y}$ are the Rademacher priors defined in (12). Our next lemma uses Lemma 24 to give the form of $\|\mathbb{L}_{n}^{\leq D_{n}}\|^{2}_{L_{2}(\mathbb{Q}_{n})}$. This lemma uses replicas of $\alpha$ and $\beta$. Suppose $\alpha_{1},\alpha_{2}\sim\pi_{x}$ and $\beta_{1},\beta_{2}\sim\pi_{y}$ are all independent, where the Rademacher priors $\pi_{x}$ and $\pi_{y}$ are defined as in (12). We overload notation, and use $\mathbb{E}_{\pi}$ to denote the expectation under $\alpha_{1}$, $\alpha_{2}$, $\beta_{1}$, and $\beta_{2}$. ###### Lemma 25 Suppose $W$ is the indicator function of the event $\\{\|\alpha_{1}\|_{2}\|\beta_{1}\|_{2}<\mathcal{B},\ \|\alpha_{2}\|_{2}\|\beta_{2}\|_{2}<\mathcal{B}\\}$. Then for any $D_{n}\in\mathbb{N}$, $\|\mathbb{L}_{n}^{\leq D_{n}}\|^{2}_{L_{2}(\mathbb{Q}_{n})}=\mathbb{E}_{\pi}\bigg{[}W\sum_{d=0}^{\left\lfloor D_{n}/2\right\rfloor}{d+n-1\choose d}\bigg{\\{}\mathcal{B}^{-2}(\alpha_{1}^{T}\alpha_{2})(\beta_{1}^{T}\beta_{2})\bigg{\\}}^{d}\bigg{]}.$ The proof of Lemma 25 is also deferred to Appendix G.3.
We remark in passing that the negative binomial series expansion yields $(1-x)^{-n}=\sum_{d=0}^{\infty}{n+d-1\choose d}x^{d},\quad\text{for }|x|<1,$ (38) whose $D_{n}$-th order truncation equals $\bigg{(}(1-x)^{-n}\bigg{)}^{\leq D_{n}}=\sum_{d=0}^{D_{n}}{n+d-1\choose d}x^{d}.$ Note that $W$ is non-zero if and only if $\|\alpha_{1}\|_{2}\|\beta_{1}\|_{2}<\mathcal{B}$ and $\|\alpha_{2}\|_{2}\|\beta_{2}\|_{2}<\mathcal{B}$, which, by the Cauchy-Schwarz inequality, implies $|(\alpha_{1}^{T}\alpha_{2})(\beta_{1}^{T}\beta_{2})|<\mathcal{B}^{2}.$ Thus $|\mathcal{B}^{-2}(\alpha_{1}^{T}\alpha_{2})(\beta_{1}^{T}\beta_{2})|<1$ when $W=1$. Hence Lemma 25 can also be written as $\|\mathbb{L}_{n}^{\leq D_{n}}\|^{2}_{L_{2}(\mathbb{Q}_{n})}=\mathbb{E}_{\pi}\left[W\bigg{\\{}\bigg{(}1-\mathcal{B}^{-2}(\alpha_{1}^{T}\alpha_{2})(\beta_{1}^{T}\beta_{2})\bigg{)}^{-n}\bigg{\\}}^{\leq\left\lfloor D_{n}/2\right\rfloor}\right].$ Now we are ready to prove Theorem 10. Our first task is to get rid of $W$ from the expression of $\|\mathbb{L}_{n}^{\leq D_{n}}\|_{L_{2}(\mathbb{Q}_{n})}$ in Lemma 25. However, we cannot directly bound $W$ by one, since the term $(\alpha_{1}^{T}\alpha_{2})^{d}(\beta_{1}^{T}\beta_{2})^{d}W$ may be negative for odd $d\in\mathbb{N}$. We claim that $\mathbb{E}[(\alpha_{1}^{T}\alpha_{2})^{d}(\beta_{1}^{T}\beta_{2})^{d}W]=0$ if $d\in\mathbb{N}$ is odd. To see this, first we write $\displaystyle\mathbb{E}\Big{[}(\alpha_{1}^{T}\alpha_{2})^{d}(\beta_{1}^{T}\beta_{2})^{d}W\Big{]}=\mathbb{E}\Big{[}\mathbb{E}\Big{[}(\alpha_{1}^{T}\alpha_{2})^{d}W\Big{|}\beta_{1},\beta_{2}\Big{]}(\beta_{1}^{T}\beta_{2})^{d}\Big{]}.$ (39) Note that
# Asymptotic Cohomology and Uniform Stability for Lattices in Semisimple Groups Lev Glebsky, Alexander Lubotzky, Nicolas Monod, Bharatram Rangarajan ###### Abstract It is, by now, classical that lattices in higher rank semisimple groups have various rigidity properties. In this work, we add another such rigidity property to the list, namely uniform stability with respect to the family of unitary operators on finite-dimensional Hilbert spaces equipped with submultiplicative norms. Towards this goal, we first build an elaborate cohomological theory capturing the obstruction to such stability, and show that the vanishing of second cohomology implies uniform stability in this setting. This cohomology can be roughly thought of as an asymptotic version of bounded cohomology, and sheds light on a question raised in [Mon06] about a possible connection between vanishing of second bounded cohomology and Ulam stability. Along the way, we use this criterion to provide a short conceptual (re)proof of the classical result of Kazhdan [Kaz82] that discrete amenable groups are Ulam stable. We then use this machinery to establish our main result, that lattices in a class of higher rank semisimple groups (which are known to have vanishing bounded cohomology) are uniformly stable. Dedicated to Robert J. Zimmer with admiration and affection ## Introduction Consider a semisimple group $G=\prod^{k}_{i=1}\mathbf{G}_{i}(K_{i})$, where for $1\leq i\leq k$, $K_{i}$ is a local field, and $\mathbf{G}_{i}$ is an almost $K_{i}$-simple group. If the rank $\sum^{k}_{i=1}rk_{K_{i}}(\mathbf{G}_{i})\geq 2$, such a $G$ is referred to as a _higher rank_ semisimple group. The irreducible lattices $\Gamma$ in such groups $G$ (referred to as _higher rank lattices_) form an interesting class of groups, which, over the years, have been shown to satisfy many rigidity properties, such as local rigidity, Mostow strong-rigidity, Margulis super-rigidity (implying that they are arithmetic groups), Zimmer cocycle rigidity, quasi-isometric rigidity, first-order rigidity, etc. (see [ES05], [ALM19], [Mar91], [BFH20] and the references therein). A common feature of the classical rigidity results is that such a higher rank lattice $\Gamma$ has some clear family of representations, and all other representations are just easy variants of them. The goal of this paper is to demonstrate another type of rigidity phenomenon for these lattices. Before stating the exact formulation, let us recall that Margulis super-rigidity, while usually not formulated this way, also gives a full classification of all the finite dimensional unitary representations of a higher rank lattice $\Gamma$ as above. Margulis super-rigidity implies that all such irreducible representations come from a combination of those that factor through finite quotients (and these are the only ones if $\Gamma$ is a non-uniform lattice) and from the representations of $\Gamma$ appearing naturally in its definition as an arithmetic group by Galois twisting (see §$1.3$ in [Mar91]). The rigidity phenomenon we study here, which is called _uniform stability_, is the property that every unitary almost-representation of $\Gamma$ is a small deformation of a unitary representation. ### Uniform Stability of Groups Let $\Gamma$ be a discrete group and $(G,d_{G})$ be a metric group (where $d_{G}$ is a bi-invariant metric on $G$).
For $\epsilon>0$, a map $\phi:\Gamma\to G$ is said to be an $\epsilon$-almost homomorphism (or _$\epsilon$-homomorphism_) if $d_{G}(\phi(xy),\phi(x)\phi(y))\leq\epsilon$ for every $x,y\in\Gamma$. The value $\sup_{x,y\in\Gamma}d_{G}(\phi(xy),\phi(x)\phi(y))$ is called the _defect_ of $\phi$. Let $\mathcal{G}$ be a family of metric groups. We say that $\Gamma$ is _uniformly stable_ with respect to $\mathcal{G}$ if for any $\epsilon>0$, there exists $\delta=\delta(\epsilon)$ with $\lim_{\epsilon\to 0}\delta(\epsilon)=0$ such that for any $\epsilon$-homomorphism $\phi:\Gamma\to G$ (for $G\in\mathcal{G}$), there exists a homomorphism $\psi\in Hom(\Gamma,G)$ with $\sup_{x\in\Gamma}\>d_{G}(\phi(x),\psi(x))\leq\delta$. In other words, $\Gamma$ is uniformly stable with respect to $\mathcal{G}$ if any almost homomorphism of $\Gamma$ to any group in the family $\mathcal{G}$ is close to a (true) homomorphism. Questions of this nature were first raised and studied in [Tur38], [vN29] and [Ula60], and of particular interest is the case when $\mathcal{G}$ is the family of unitary operators on Hilbert spaces and the metric is given by a norm (on the space of bounded operators), as studied in [Kaz82] and [BOT13]. Note that in this work, we will be interested solely in uniform stability, as opposed to _pointwise_ stability, as studied in [DCGLT20], [AP15] and the references therein. The notion of uniform stability with respect to unitary operators on Hilbert spaces equipped with the operator norm is referred to in [BOT13] as _strong Ulam stability_, while if we restrict the family to unitary operators on _finite-dimensional_ Hilbert spaces, it is referred to as _Ulam stability_. In the pioneering work of Kazhdan [Kaz82] (and clarified further in [Sht13] and [Joh88]), it is shown that ###### Theorem 0.0.1 ([Kaz82]). Every (discrete) amenable group $\Gamma$ is Ulam stable (in fact, even strongly Ulam stable). It is worth noting that the only known examples of strongly Ulam stable groups are amenable, and it is natural to ask if strong Ulam stability characterizes amenability. Let us mention at this point that one of the (innumerable) equivalent characterizations of amenability is given in terms of the vanishing of bounded cohomology with dual coefficients: $\Gamma$ is amenable iff $\operatorname{H}_{b}^{n}(\Gamma,V)=0$ for every dual Banach $\Gamma$-module $V$ and $n>0$. Here $\operatorname{H}_{b}^{n}(\Gamma,V)$ denotes the $n$-th _bounded_ cohomology group of $\Gamma$ with coefficients in the Banach $\Gamma$-module $V$. Kazhdan's proof does not use this result explicitly but does use a notion of $\epsilon$-cocycles and approximate cohomology in degree $2$. Ulam stability was further studied in [BOT13] where they show more examples (and non-examples) of Ulam stable groups. It is shown there that if a group contains a non-abelian free subgroup, then it is _not_ strongly Ulam stable. In particular, this means that higher rank lattices are not strongly Ulam stable. On the positive side, they show: ###### Theorem 0.0.2 ([BOT13]). Let $\mathcal{O}$ be the ring of integers of a number field, $S$ a finite set of primes, and $\mathcal{O}_{S}$ the corresponding localization. Then for every $n\geq 3$, $SL(n,\mathcal{O}_{S})$ is Ulam stable.
The proof of this result in [BOT13] uses the fact that $SL(n,\mathcal{O}_{S})$ (for $n\geq 3$) is boundedly generated by elementary matrices, and makes no reference to bounded cohomology (this result is further extended in [Gam11] in the case of $n=2$ when $\mathcal{O}_{S}$ has infinitely many units). However, note that for $\Gamma=SL(n,\mathcal{O}_{S})$, $\operatorname{H}_{b}^{2}\left(\Gamma,V\right)=0$ for every dual separable $\Gamma$-module $V$. In fact, it is shown in [BM99] that for every higher rank lattice $\Gamma$ and any dual, separable Banach $\Gamma$-module $V$ with $V^{\Gamma}=\\{0\\}$, $\operatorname{H}_{b}^{2}(\Gamma,V)=0$. All this hints at a possible connection between bounded cohomology and Ulam stability, as raised by Monod in his ICM talk [Mon06, Problem F], and serves as one of the starting points for our current work. ## Main Results and Methods In this paper, we generalize Theorem 0.0.1 and Theorem 0.0.2 to a wider class of groups and metrics. We shall consider the question of uniform stability with respect to the family $\mathfrak{U}$ of groups of unitary operators on _finite-dimensional_ Hilbert spaces, with the metrics induced from submultiplicative norms on matrices (which we shall denote uniform $\mathfrak{U}$-stability). These include the $p$-Schatten norms for $1\leq p\leq\infty$ (and in particular, uniform $\mathfrak{U}$-stability subsumes Ulam stability). Furthermore, we shall show uniform $\mathfrak{U}$-stability _with a linear estimate_, which means that the distance of an almost homomorphism from a homomorphism is linearly bounded by its defect; all our results are proved for this stronger notion of stability. To this end, we build a new type of bounded cohomological theory that can capture obstructions to uniform $\mathfrak{U}$-stability, so that uniform $\mathfrak{U}$-stability follows as a consequence of the vanishing of the second cohomology group in this theory. While we shall develop this in full detail in §4, our technique involves the following two main steps: * • Defect Diminishing: Expressing the problem of uniform stability as a homomorphism lifting problem, we can treat it as a culmination of intermediate lifts so that at each step, the lifting kernel is abelian. This is a uniform variant of the defect diminishing introduced in [DCGLT20] in the non-uniform setting, and is applicable when the relevant norms in the target groups are submultiplicative. * • Asymptotic Cohomology: Such a homomorphism lifting problem with abelian kernel naturally leads to a cohomological reformulation (as in [DCGLT20]). However, unlike in the non-uniform setting where ordinary group cohomology comes up, in our uniform setting we need to carefully construct a new cohomology theory such that the vanishing of the second cohomology group in this model implies (uniform) defect diminishing, and hence uniform stability. The cohomological theory we construct is an asymptotic variant of the bounded cohomology of the ultrapower ${}^{*}\Gamma$ with coefficients in a suitable ultraproduct Banach space $\mathcal{W}$, but _restricted to the internal objects in this universe_, which we shall call the _asymptotic cohomology_ of $\Gamma$, denoted $\operatorname{H}_{a}^{\bullet}(\Gamma,\mathcal{W})$. ###### Theorem 0.0.3. Suppose $\operatorname{H}_{a}^{2}(\Gamma,\mathcal{W})=0$; then $\Gamma$ is uniformly $\mathfrak{U}$-stable with a linear estimate.
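Spelled out (our paraphrase of the definitions above, not an additional result), uniform $\mathfrak{U}$-stability with a linear estimate means that there exist constants $L,\epsilon_{0}>0$ such that for every unitary group in $\mathfrak{U}$ with a submultiplicative norm $\|\cdot\|$ and every map $\phi$ from $\Gamma$ to that group whose defect $\operatorname{def}(\phi)=\sup_{x,y\in\Gamma}\|\phi(xy)-\phi(x)\phi(y)\|$ is at most $\epsilon_{0}$, there is a homomorphism $\psi$ with $\sup_{x\in\Gamma}\|\phi(x)-\psi(x)\|\leq L\cdot\operatorname{def}(\phi).$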
This new cohomology theory bears some similarity to the theory of bounded cohomology, and sometimes we can easily adapt arguments there to our model (for instance, we can show that $\operatorname{H}_{a}^{2}(\Gamma,\mathcal{W})=0$ for an amenable group $\Gamma$, immediately implying that amenable groups are Ulam stable as in [Kaz82], [Sht13], [Joh88]), though other times serious difficulties arise (which are responsible for the length of this paper). The groups $\Gamma$ that we will be particularly interested in are lattices in higher rank semisimple groups. Unlike some of the other rigidity results, which sometimes also hold for lattices in rank one simple groups, uniform $\mathfrak{U}$-stability genuinely requires higher rank; we first show the following result: ###### Proposition 0.0.4. If $\Gamma$ is a lattice in a semisimple group of rank $1$, then $\Gamma$ is not uniformly $\mathfrak{U}$-stable. It is shown in [Fuj98] that for such a $\Gamma$, $\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})$ is infinite dimensional. More precisely, Fujiwara constructs (many) non-trivial quasimorphisms witnessing that the comparison map $c:\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})\to H^{2}(\Gamma,\mathbf{R})$ is not injective. By exponentiation, such quasimorphisms yield almost homomorphisms to $U(1)$ that are not close to any homomorphism (we shall discuss this in more detail in §1). In particular, a lattice of rank one is not uniformly $\mathfrak{U}$-stable, and to hope for uniform $\mathfrak{U}$-stability, the condition that the rank of $\Gamma$ is at least $2$ is necessary. For the main result of the paper, we need some definitions capturing properties of the class of semisimple groups we will be interested in. For a locally compact group $G$, we denote by $\operatorname{H}_{b}^{\bullet}(G,\mathbf{R})$ the _continuous_ bounded cohomology of $G$ with trivial coefficients. * • A locally compact group $G$ is said to have the 2½-property (of vanishing bounded cohomology) if $\operatorname{H}_{b}^{2}(G,\mathbf{R})=0$ and $\operatorname{H}_{b}^{3}(G,\mathbf{R})$ is Hausdorff. * • Let $G$ be a non-compact simple Lie group, and fix a minimal parabolic subgroup $P\leq G$. The group $G$ is said to have Property-$G(\mathcal{Q}_{1},\mathcal{Q}_{2})$ if there exist two proper parabolic subgroups $Q_{1}$ and $Q_{2}$ containing $P$, both having the 2½-property, such that $G$ is boundedly generated by the union $Q_{1}\cup Q_{2}$. A semisimple group $G$ is said to have Property-$G(\mathcal{Q}_{1},\mathcal{Q}_{2})$ if all its simple factors have Property-$G(\mathcal{Q}_{1},\mathcal{Q}_{2})$. Note that if a semisimple group has Property-$G(\mathcal{Q}_{1},\mathcal{Q}_{2})$, then, by definition, it must be of rank at least $2$. But note that not all simple groups have the property (for instance, $SL_{3}(\mathbf{R})$). However, in §6.3, we will show that many classes of groups do have this property, for example, all simple groups (of rank at least $2$) over $\mathbf{C}$ or over a non-archimedean field, and $SL_{n}(\mathbf{R})$ for $n\geq 4$. We can now state our main result: ###### Theorem 0.0.5. Let $\Gamma$ be a lattice in a semisimple group $G$ that has Property-$G(\mathcal{Q}_{1},\mathcal{Q}_{2})$. Then $\operatorname{H}_{a}^{2}(\Gamma,\mathcal{W})=0$, so in particular, $\Gamma$ is uniformly $\mathfrak{U}$-stable. The main result of our paper is thus concerned with showing that $\operatorname{H}_{a}^{2}(\Gamma,\mathcal{W})=0$ for $\Gamma$ a lattice in a higher rank Lie group (satisfying certain conditions).
For this, we take inspiration from the results of [BM99] about the vanishing of bounded cohomology for such lattices. More specifically, our approach is inspired by a proof of [MS04] in degree two, and a version of that result and proof technique are outlined below. ###### Theorem 0.0.6 ([MS04]). Let $G$ be a higher rank simple group, and $P\leq G$ be a minimal parabolic subgroup. Suppose $G$ contains two proper parabolic subgroups $Q_{1}$ and $Q_{2}$ such that $P\subseteq Q_{1}\cap Q_{2}$, $G$ is generated by $Q_{1}\cup Q_{2}$, and $\operatorname{H}_{b}^{2}(Q_{1},\mathbf{R})=\operatorname{H}_{b}^{2}(Q_{2},\mathbf{R})=0$. Then for any lattice $\Gamma$ in $G$ and any dual separable Banach $\Gamma$-module $W$, $\operatorname{H}_{b}^{2}(\Gamma,W)=0$. The proof of Theorem 0.0.6 proceeds in several steps briefly sketched below, where we also mention the corresponding steps and difficulties in the proof of Theorem 0.0.5 even when $G$ is simple: * • The first step is to use an Eckmann-Shapiro induction to construct a dual, separable, continuous Banach $G$-module $V$ so that $\operatorname{H}_{b}^{2}(\Gamma,W)=\operatorname{H}_{b}^{2}(G,V)$, thus reducing the problem to showing that $\operatorname{H}_{b}^{2}(G,V)=0$. A similar inductive procedure, described in §5, allows us to construct an ultraproduct Banach space $\mathcal{V}$ with an asymptotic action of $G$ so that $\operatorname{H}_{a}^{2}(\Gamma,\mathcal{W})=\operatorname{H}_{a}^{2}(G,\mathcal{V})$. Note that in the setting of asymptotic cohomology, we actually work with ultrapowers ${}^{*}\Gamma$ and ${}^{*}G$, and so ${}^{*}G/{}^{*}\Gamma$ is not locally compact. However, the restriction to internal objects allows us to carefully work out an induction procedure as needed. The induced module $\mathcal{V}$ also has an internal continuity property (defined in §4.1) that we establish in §5. * • Since $P$ is amenable, the bounded cohomology $\operatorname{H}_{b}^{\bullet}(G,V)$ can be computed as the cohomology of the complex $0\to V^{G}\xrightarrow{\ \epsilon\ }L^{\infty}(G/P,V)^{G}\xrightarrow{\ d^{0}\ }L^{\infty}((G/P)^{2},V)^{G}\xrightarrow{\ d^{1}\ }L^{\infty}((G/P)^{3},V)^{G}\xrightarrow{\ d^{2}\ }\cdots$ Furthermore, for a parabolic subgroup $P\leq Q\leq G$, the bounded cohomology $\operatorname{H}_{b}^{\bullet}(Q,V)$ can be computed as the cohomology of the complex $0\to V^{Q}\xrightarrow{\ \epsilon\ }L^{\infty}(G/P,V)^{Q}\xrightarrow{\ d^{0}\ }L^{\infty}((G/P)^{2},V)^{Q}\xrightarrow{\ d^{1}\ }L^{\infty}((G/P)^{3},V)^{Q}\xrightarrow{\ d^{2}\ }\cdots$ These steps too can be reworked in the asymptotic setting, analogous to the procedure in bounded cohomology theory, again thanks to the restriction to internal objects, and this is described in §4. * • The motivation behind the preceding step is that we have at our disposal the following double ergodicity theorem: let $V$ be a continuous $G$-module and $\alpha\in L^{\infty}\left((G/P)^{2},V\right)^{G}$; then $\alpha$ is essentially constant. This theorem follows from Mautner's lemma: let $V$ be a continuous $G$-module and $N\leq G$ be a non-compact subgroup; then $V^{N}=V^{G}$. Both these results are particularly useful in the context of $\operatorname{H}_{b}^{2}(G,V)$. In our setting, there are particular difficulties in obtaining an analogous Mautner lemma due to the asymptotic nature of our model.
We overcome them for our specific Banach module $\mathcal{V}$ by applying a suitable correction to exact cocycles using structure results for the semisimple group $G$; this is worked out in §6. * • For the parabolic subgroups $Q_{1}$ and $Q_{2}$ as in the hypothesis, an inflation-restriction sequence argument implies that $\operatorname{H}_{b}^{2}(Q_{i},V)=\operatorname{H}_{b}^{2}(Q_{i}/N_{i},V^{N_{i}})$ for $N_{i}$ being the (amenable) radical of $Q_{i}$ for $i\in\\{1,2\\}$. By Mautner's lemma and the hypothesis that $\operatorname{H}_{b}^{2}(Q_{i},\mathbf{R})=0$, one concludes that $\operatorname{H}_{b}^{2}(Q_{i},V)=0$. The analogous hypothesis in our asymptotic setting is Property-$G(\mathcal{Q}_{1},\mathcal{Q}_{2})$, where the conditions that $\operatorname{H}_{b}^{2}(Q_{i},\mathbf{R})=0$ and $\operatorname{H}_{b}^{3}(Q_{i},\mathbf{R})$ is Hausdorff together are used to conclude that $\operatorname{H}_{a}^{2}(Q_{i},\mathcal{V})=0$ in §6.1. * • Let $\omega\in L^{\infty}\left((G/P)^{3},V\right)^{G}$ be a bounded $2$-cocycle for $G$. Since $\operatorname{H}_{b}^{2}(Q_{1},V)=\operatorname{H}_{b}^{2}(Q_{2},V)=0$, there exist $\alpha_{1}\in L^{\infty}\left((G/P)^{2},V\right)^{Q_{1}}$ and $\alpha_{2}\in L^{\infty}\left((G/P)^{2},V\right)^{Q_{2}}$ such that $\omega=d\alpha_{1}=d\alpha_{2}$. In particular, $\alpha_{1}-\alpha_{2}$ is a $1$-cocycle for $Q_{1}\cap Q_{2}$. Since $P\leq Q_{1}\cap Q_{2}$, $\alpha_{1}-\alpha_{2}$ is a $1$-cocycle for $P$ as well. But since $\operatorname{H}_{b}^{1}(P,V)=0$, one can show, using the double ergodicity theorem, that $\alpha_{1}=\alpha_{2}$ ($=\alpha$, say), implying that $\alpha$ is equivariant with respect to both $Q_{1}$ and $Q_{2}$, and hence is $G$-equivariant. Thus $\omega=d\alpha$ for $\alpha\in L^{\infty}\left((G/P)^{2},V\right)^{G}$, proving that $\operatorname{H}_{b}^{2}(G,V)=0$. This step too goes through in our setting once we have our asymptotic variant of the ergodicity theorem used in the classical case. While in this paper we focus on using the theory of asymptotic cohomology to prove uniform $\mathfrak{U}$-stability for lattices in semisimple groups, the framework and tools developed here have also been used in subsequent work [FFR23] to prove uniform $\mathfrak{U}$-stability for other classes of groups such as lamplighter groups $\Gamma\wr\Lambda$ where $\Lambda$ is infinite and amenable, as well as several groups of dynamical origin such as Thompson's group $F$. The techniques there too are analogous to corresponding vanishing results of bounded cohomology in [Mon22], yet again highlighting the connections between the theories of bounded cohomology and asymptotic cohomology. ### Outline of the Paper We begin with the much simpler setting of uniform stability with respect to the fixed group $U(1)$ (equipped with the absolute value metric) in §1. In this case, we can reduce the question of uniform $U(1)$-stability of $\Gamma$ to the injectivity of the comparison map $c:\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})\to H^{2}(\Gamma,\mathbf{R})$. After recalling how classical results from [Fuj98] imply that lattices in Lie groups of rank $1$ are not uniformly $U(1)$-stable, we then show that lattices in higher rank Lie groups are uniformly $U(1)$-stable.
While the connection between uniform $U(1)$-stability of a group $\Gamma$ and the second bounded cohomology $\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})$ is classical, it motivates the idea of using the logarithm map on an almost homomorphism to construct a bounded $2$-cocycle of $\Gamma$ in a Banach space, which we develop in §3.2 for a more general setting. In §2, we begin by defining the basic notions in full detail and rigor in §2.1. In particular we focus on interpreting stability in terms of sequences of maps and asymptotic homomorphisms, which we then refine further in §2.2 in the language of non-standard analysis. This formulation will allow us to reinterpret the question of uniform stability as a _homomorphism lifting problem_ along the lines of the approach used in [DCGLT20], [AP15]. While such a lifting problem motivates the attempt at constructing a cohomology, an obstacle here is that the kernel of the extension is not abelian. This issue is resolved in [DCGLT20] by considering lifts in small increments so that the kernel at each step is abelian. This idea, known as _defect diminishing_, is explored in §2.3, and can be shown to imply uniform stability. In §3 we begin by focusing on a particular family of metric groups for which defect diminishing corresponds to a homomorphism lifting problem with an abelian kernel. This is the family of unitary groups equipped with submultiplicative norms, discussed in §3.1. In [DCGLT20], defect diminishing combined with ordinary group cohomology with coefficients in the abelian kernel (which turns out to be a Banach $\Gamma$-module) is sufficient to study non-uniform stability. But in our setting, the uniformity condition involves subtleties that necessitate transferring to the internal Lie algebra with an internal _asymptotic_ action of ${}^{*}\Gamma$ (the ultrapower of $\Gamma$), and the formulation of an internal and asymptotic bounded cohomology with coefficients in that internal space, denoted $\mathcal{W}$. This is motivated in §3.2, and we conclude the section by demonstrating the machinery built so far in (re)proving the result of Kazhdan [Kaz82] that discrete amenable groups are Ulam stable. §4 begins with the rigorous definitions of internal Banach spaces and asymptotic ${}^{*}G$-modules for a locally compact group $G$ (defining our notions in the category of topological groups is necessary in order to deal with lattices in Lie groups, which shall involve an Eckmann-Shapiro induction of cohomologies explored in §5) in §4.1. In §4.2, we formally define $\operatorname{H}_{a}^{\bullet}(G,\mathcal{V})$ using internal $L^{\infty}$-spaces, and study some functorial properties and different cochain complexes that can be used to compute $\operatorname{H}_{a}^{\bullet}(G,\mathcal{V})$ in §4.3. Many of the techniques used here have parallels in the theory of continuous bounded cohomology as in [Mon01]. In §5, we restrict our attention to a lattice $\Gamma$ in a Lie group $G$, and begin by studying an intermediate structure $\mathcal{L}_{b}^{\infty}({}^{*}G,\mathcal{W})^{\sim{}^{*}\Gamma}$ that is not quite the induction module $\mathcal{V}=\mathcal{L}^{\infty}({}^{*}D,\mathcal{W})$ in our machinery, but comes with an internal (true) ${}^{*}G$-action (as opposed to an action up to infinitesimals). This structure leads to useful results proved in §5.1, and the actual induction module and an Eckmann-Shapiro induction procedure are discussed in §5.2.
We conclude the section with a continuity property of our module $\mathcal{V}$, which we use to define contracting elements, laying the groundwork for a Mautner lemma to be proved in the next section. Finally, in §6, we come to the higher rank semisimple groups of interest, and begin by discussing an analogue of Mautner's lemma in our setting, along with double ergodicity lemmas in §6.1, and use these to prove that $\operatorname{H}_{a}^{2}(Q,\mathcal{V})=0$ for $Q\leq G$ being a proper parabolic subgroup. All these techniques come together in §6.2 where we prove that $\operatorname{H}_{a}^{2}(G,\mathcal{V})=0$ for semisimple groups $G$ that have Property-$G(\mathcal{Q}_{1},\mathcal{Q}_{2})$, thus implying uniform stability with a linear estimate for lattices in such groups. In §6.3, we list classes of simple groups $G$ that have Property-$G(\mathcal{Q}_{1},\mathcal{Q}_{2})$, and conclude in §7 with some discussion on the limitations of our method and related open questions.

### Acknowledgements The second author acknowledges with gratitude the hospitality and support of the Fields Institute (Toronto) and the Institute for Advanced Study (Princeton), where part of this work was carried out, as well as a grant by the European Research Council (ERC) under the European Union's Horizon 2020 (grant agreement No $882751$). The results presented here are part of the fourth author's PhD thesis, also supported by the same grant. The authors would like to thank Andrei Rapinchuk for his guidance in the proof of Proposition 6.1.1. This paper is dedicated to Bob Zimmer for his 75th birthday in honor of his remarkable achievements and influence on the study of the rigidity of lattices in semisimple Lie groups. That influence is also apparent in the current paper.

###### Contents
1. 1 $U(1)$-Stability of Groups
2. 2 Preliminaries and Basic Constructions
   1. 2.1 Uniform Stability and Asymptotic Homomorphisms
   2. 2.2 Ultraproducts and Internal Maps
   3. 2.3 Internal Liftings and Defect Diminishing
3. 3 A Cohomological Interpretation of Stability
   1. 3.1 Lifting with an Abelian Kernel
   2. 3.2 Linearization and the Lie Algebra
   3. 3.3 Uniform Stability of Amenable groups
4. 4 Asymptotic Cohomology of Groups
   1. 4.1 Basic Definitions and Some Cohomological Algebra
   2. 4.2 The $L^{\infty}$-cohomology and $\operatorname{H}_{a}^{\bullet}(G,\mathcal{V})$
   3. 4.3 Amenable Actions and Cohomology of Subgroups
5. 5 The Induction Module
   1. 5.1 The ${}^{*}G$-action on $\mathcal{L}^{\infty}_{b}({}^{*}G,\mathcal{W})^{\sim{}^{*}\Gamma}$
   2. 5.2 $\mathcal{L}^{\infty}({}^{*}D,\mathcal{W})$ and the Eckmann-Shapiro Induction
   3. 5.3 Internal Contraction and Fixed Points
6. 6 Vanishing of $\operatorname{H}_{a}^{2}(\Gamma,\mathcal{W})$
   1. 6.1 Asymptotic Mautner Property and Ergodicity
   2. 6.2 Property-$G(\mathcal{Q}_{1},\mathcal{Q}_{2})$ and the Main Theorem
   3. 6.3 Groups with Property-$G(\mathcal{Q}_{1},\mathcal{Q}_{2})$
7. 7 Conclusions and Discussion

## 1 $U(1)$-Stability of Groups We begin with a simpler setting, namely uniform stability of a discrete group $\Gamma$ with respect to the abelian group $U(1)$. This section can be read independently of the rest. ###### Definition 1.0.1. For $\epsilon>0$, a map $\phi:\Gamma\to U(1)$ is said to be an $\epsilon$-homomorphism if $\sup_{x,y\in\Gamma}\lvert\phi(xy)-\phi(x)\phi(y)\rvert\leq\epsilon.$ The value $\sup_{x,y\in\Gamma}\lvert\phi(xy)-\phi(x)\phi(y)\rvert$ is called the _defect_ of $\phi$. ###### Definition 1.0.2.
A group $\Gamma$ is said to be uniformly $U(1)$-stable if for every $\delta>0$, there exists $\epsilon>0$ such that if $\phi:\Gamma\to U(1)$ is an $\epsilon$-homomorphism, there exists a homomorphism $\psi:\Gamma\to U(1)$ such that $\sup_{x\in\Gamma}\lvert\phi(x)-\psi(x)\rvert<\delta$. A closely related notion is that of a quasimorphism. A _quasimorphism_ is a map $f:\Gamma\to\mathbf{R}$ for which there exists $D\geq 0$ such that for every $x,y\in\Gamma$, $\lvert f(x)+f(y)-f(xy)\rvert\leq D.$ Let $QM(\Gamma)$ denote the space of all quasimorphisms of $\Gamma$. A trivial example of a quasimorphism is obtained by perturbing a homomorphism by a bounded function, and such quasimorphisms form the subspace $Hom(\Gamma,\mathbf{R})\oplus C_{b}(\Gamma,\mathbf{R})$ of $QM(\Gamma)$ (where $C_{b}(\Gamma,\mathbf{R})$ denotes the space of all bounded functions from $\Gamma$ to $\mathbf{R}$). A quasimorphism that is not at a bounded distance from any homomorphism is called a _non-trivial quasimorphism_. In this setting, a question analogous to uniform stability is whether every quasimorphism is at a bounded distance from a homomorphism. That is, is $QM(\Gamma)=Hom(\Gamma,\mathbf{R})\oplus C_{b}(\Gamma,\mathbf{R})$? It is known that every quasimorphism class contains a unique _homogeneous_ quasimorphism (a quasimorphism $f$ is said to be homogeneous if for every $g\in\Gamma$ and $n\in\mathbf{N}$, $f(g^{n})=nf(g)$). Suppose $f$ is a homogeneous quasimorphism of $\Gamma$ that is not a homomorphism. Then its exponential $\mu\coloneq e^{2\pi i\epsilon f}:\Gamma\to U(1)$ is a function whose defect can be made arbitrarily small (by choosing $\epsilon\to 0$), but whose distance from homomorphisms is bounded below by a positive constant. ###### Proposition 1.0.3 ([BOT13], [MS06]). If $\Gamma$ admits a non-trivial quasimorphism, then $\Gamma$ is not uniformly $U(1)$-stable. Quasimorphisms are also closely related to group cohomology (this is well-known and classical, see [Fri17], [Mon06] for references). Observe that given a quasimorphism $f:\Gamma\to\mathbf{R}$, the map $df:\Gamma\times\Gamma\to\mathbf{R}$, $df(x,y)=f(x)+f(y)-f(xy),$ is a $2$-coboundary for $\Gamma$ in $\mathbf{R}$, while also being a bounded function that satisfies the $2$-cocycle condition (and hence a _bounded $2$-cocycle_). This leads to the following characterization of quasimorphism classes ([Fri17], [Mon06]): ###### Proposition 1.0.4. The kernel, denoted $E\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})$, of the comparison map $c:\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})\to H^{2}(\Gamma,\mathbf{R})$ is isomorphic to the space of quasimorphisms modulo the subspace of trivial quasimorphisms: $E\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})\cong\frac{QM(\Gamma)}{C_{b}(\Gamma,\mathbf{R})\oplus Hom(\Gamma,\mathbf{R})}.$ Hence to show that $\Gamma$ is not uniformly $U(1)$-stable, it is sufficient to show that the comparison map $c:\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})\to H^{2}(\Gamma,\mathbf{R})$ is not injective. In [Fuj98], it is shown that the comparison map of a lattice in a rank one semisimple Lie group has non-zero kernel, and hence ###### Theorem 1.0.5. Let $H$ be a semisimple group (as in the introduction) of rank one, and let $\Gamma$ be a lattice in $H$. Then $\Gamma$ is not uniformly $U(1)$-stable. The rest of this section is devoted to showing that higher rank lattices are uniformly $U(1)$-stable. For this goal, bounded cohomology plays a central role.
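For completeness, let us also recall how the homogeneous representative mentioned above is obtained (a classical construction, see e.g. [Fri17]; stated here for the reader's convenience): if $f$ is a quasimorphism with defect $D$, then $\bar{f}(g)=\lim_{n\to\infty}\frac{f(g^{n})}{n}$ exists for every $g\in\Gamma$ by a Fekete-type subadditivity argument, $\bar{f}$ is a homogeneous quasimorphism, and $\sup_{g\in\Gamma}\lvert\bar{f}(g)-f(g)\rvert\leq D$, so $\bar{f}$ is the unique homogeneous quasimorphism in the class of $f$.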
For simplicity, we endow $U(1)$ with the distance coming from seeing it as $\mathbf{R}/\mathbf{Z}$ (so we just have to apply a trigonometric formula if we prefer the norm distance). Recall the long exact sequences associated to $\mathbf{Z}\to\mathbf{R}\to\mathbf{R}/\mathbf{Z}$ for $H^{n}$ and $\operatorname{H}_{b}^{n}$. Together with the comparison maps, we get a commutative diagram whose rows are $\cdots\to H^{1}(\Gamma,\mathbf{R}/\mathbf{Z})\to\operatorname{H}_{b}^{2}(\Gamma,\mathbf{Z})\to\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})\to H^{2}(\Gamma,\mathbf{R}/\mathbf{Z})\to\cdots$ $\cdots\to H^{1}(\Gamma,\mathbf{R}/\mathbf{Z})\to H^{2}(\Gamma,\mathbf{Z})\to H^{2}(\Gamma,\mathbf{R})\to H^{2}(\Gamma,\mathbf{R}/\mathbf{Z})\to\cdots$ with the vertical maps from the first row to the second given by the identity on the outer terms and the comparison maps on the middle terms. Consider the subset $K\subseteq\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})$ given by $K=\mathrm{Ker}\Big{(}\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})\to H^{2}(\Gamma,\mathbf{R}/\mathbf{Z})\Big{)}=\mathrm{Image}\Big{(}\operatorname{H}_{b}^{2}(\Gamma,\mathbf{Z})\to\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})\Big{)}.$ Recall that $\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})$ is a Banach space; we can therefore consider the norm of elements in $K$. ###### Proposition 1.0.6. The following are equivalent for a group $\Gamma$. 1. 1. (Lipschitz $U(1)$ stability): Every $\epsilon$-representation to $U(1)$ with $\epsilon$ small enough is at distance less than $\epsilon$ from a representation. 2. 2. (Linear $U(1)$ stability): There is a constant $c$ such that every $\epsilon$-representation to $U(1)$ is at distance less than $c\epsilon$ from a representation. 3. 3. ($U(1)$ stability): For each $\delta>0$ there is $\epsilon>0$ so that every $\epsilon$-representation to $U(1)$ is at distance less than $\delta$ from a representation. 4. 4. ($U(1)$ $1/3$-stability): There is $\delta<1/3$ such that every $\epsilon$-representation to $U(1)$ with $\epsilon$ small enough is at distance less than $\delta$ from a representation. 5. 5. (Cohomology gap): Non-zero elements of $K$ have norm bounded below by a positive constant. ###### Remark 1.0.7. The kernel of the comparison map $\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})\to H^{2}(\Gamma,\mathbf{R})$ is contained in $K$, see the above diagram. Since this kernel is a vector space, point (5) fails as soon as this kernel is non-zero. This explains why quasimorphisms imply non-stability: it is a special case of the above as quasimorphisms live in $K$. ###### Proof. Trivially (1)$\Rightarrow$(2)$\Rightarrow$(3)$\Rightarrow$(4). We prove (4)$\Rightarrow$(5) for the positive constant $\epsilon$ as in (4) which, without loss of generality, can be made to satisfy $\epsilon<1-3\delta$. Consider a class $[\omega]$ in $K$ with $\|[\omega]\|<\epsilon$. We can choose the representative $\omega\in\ell^{\infty}(\Gamma^{2})$ such that $\|\omega\|_{\infty}<\epsilon$. Since $[\omega]$ is in the kernel of the map to $H^{2}(\Gamma,\mathbf{R}/\mathbf{Z})$, there is $f\colon\Gamma\to\mathbf{R}$ such that $\omega\equiv df$ modulo $\mathbf{Z}$. The map $\pi=\exp(2\pi if)$ is an $\epsilon$-representation. Thus $\pi$ is at distance less than $\delta$ from an actual representation, which means that there is $b\colon\Gamma\to[-\delta,\delta]$ with $d(f+b)\equiv 0$ modulo $\mathbf{Z}$. This means that $\omega+db$ is integer-valued. On the other hand, $\|db\|_{\infty}$ is at most $3\delta$. Thus, since $\epsilon+3\delta<1$, we deduce that $\omega+db$ actually vanishes and thus $[\omega]$ is zero in $K$. We now prove (5)$\Rightarrow$(1).
We show it for $0<\epsilon<1/4$ smaller than the constant given by (5). For any $\pi\colon\Gamma\to U(1)$, choose $f\colon\Gamma\to\mathbf{R}$ such that $\pi=\exp(2\pi if)$. If $\pi$ is an $\epsilon$-representation, then there is $\omega\colon\Gamma^{2}\to[-\epsilon,\epsilon]$ such that $\omega\equiv df$ modulo $\mathbf{Z}$. In particular, $d\omega\equiv 0$ modulo $\mathbf{Z}$, i.e. $d\omega$ is integer-valued. Given that $\|d\omega\|_{\infty}\leq 4\epsilon$, we have in fact $d\omega=0$ and hence we obtain a class $[\omega]$ in $K$ of norm less than $\epsilon$. Since $\epsilon$ is smaller than the constant given by (5), the class $[\omega]$ is trivial, which means $\omega=db$ for some $b\in\ell^{\infty}(\Gamma)$. In fact the operator $d$ on $\ell^{\infty}(\Gamma)$ has norm one: this was first observed by [Mit84, p. 468] in a special case; the short proof is given in general in (the proof of) Corollary 2.7 in [MM85]. We can therefore choose $b$ such that $\|b\|_{\infty}\leq\epsilon$. Now $\exp(2\pi i(f-b))$ is a representation at distance less than $\epsilon$ from $\pi$. ∎ ###### Theorem 1.0.8. Let $\Gamma$ be a group such that $\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})$ is finite-dimensional and injects into $H^{2}(\Gamma,\mathbf{R})$. Then $\Gamma$ is uniformly $U(1)$-stable. ###### Lemma 1.0.9. Let $A$ be an abelian group and let $B$ be a subgroup of $Hom_{\mathbf{Z}}(A,\mathbf{Z})$. If the image of $B$ in $Hom_{\mathbf{Z}}(A,\mathbf{R})$ spans a subspace of finite $\mathbf{R}$-dimension $d$, then $B$ is free abelian of rank $d$. ###### Proof of Lemma 1.0.9. We can consider $B$ as a subgroup spanning a space of finite $\mathbf{Q}$-dimension $d$ in $Hom_{\mathbf{Z}}(A,\mathbf{Q})$. Viewing $A$ as a quotient of a free abelian group on some set $X$, the group $Hom_{\mathbf{Z}}(A,\mathbf{Z})$ is contained in $\mathbf{Z}^{X}$. While $\mathbf{Z}^{X}$ is not free abelian in general, Specker proved (Satz I, p. 133 in [Spe50]) that countable subgroups of $\mathbf{Z}^{X}$ are free abelian. To be more precise, Specker proved it for $X$ countable, but his proof works in general; alternatively, the statement immediately reduces to the case $X$ countable by taking a subset of $X$ separating the points of $B$. Our $B$, being a subgroup of a finite dimensional $\mathbf{Q}$-space, is countable and hence free (by the above-mentioned result from [Spe50]). Spanning a $d$-dimensional $\mathbf{Q}$-vector space, its rank must also be $d$. ∎ ###### Proof of Theorem 1.0.8. It suffices to show that $\Gamma$ satisfies the cohomology gap, i.e., condition (5) of Proposition 1.0.6. Let $\tilde{B}$ denote the image of $\operatorname{H}_{b}^{2}(\Gamma,\mathbf{Z})$ in $H^{2}(\Gamma,\mathbf{Z})$. Thus the image of $\tilde{B}$ in $H^{2}(\Gamma,\mathbf{R})$ is precisely the image of $K$, as in the commutative square $\begin{matrix}\operatorname{H}_{b}^{2}(\Gamma,\mathbf{Z})&\to&K\subseteq\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})\\\ \downarrow&&\downarrow\\\ \tilde{B}\subseteq H^{2}(\Gamma,\mathbf{Z})&\to&H^{2}(\Gamma,\mathbf{R})\end{matrix}$ Let $d<\infty$ be the dimension of the space spanned by $K$ in $\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})$. To prove the cohomology gap (5), we first claim that $K$ is free abelian of rank $d$. This claim implies that $K$ is discrete in the finite dimensional space $\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})$ since it spans a space of dimension $d$ (we use here the comparison with the standard lattice $\mathbf{Z}^{d}$ in $\mathbf{R}^{d}$ and the fact that linear maps are continuous in finite dimensions). Then, discreteness implies the desired gap.
To prove the claim that $K$ is free abelian of rank $d$, we can work with $H^{2}(\Gamma,\mathbf{R})$ since $\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})$ injects there. The universal coefficient theorem gives a commutative diagram with exact rows $0\to Ext_{\mathbf{Z}}^{1}(H_{1}(\Gamma,\mathbf{Z}),\mathbf{Z})\to H^{2}(\Gamma,\mathbf{Z})\to Hom_{\mathbf{Z}}(H_{2}(\Gamma,\mathbf{Z}),\mathbf{Z})\to 0$ $0\to H^{2}(\Gamma,\mathbf{R})\to Hom_{\mathbf{Z}}(H_{2}(\Gamma,\mathbf{Z}),\mathbf{R})\to 0$ where the vertical maps are induced by the inclusion $\mathbf{Z}\hookrightarrow\mathbf{R}$. Therefore it suffices to show that the image $B$ of $\tilde{B}$ in $Hom_{\mathbf{Z}}(H_{2}(\Gamma,\mathbf{Z}),\mathbf{Z})$ is free abelian of rank $d$, which follows from Lemma 1.0.9 applied to $A=H_{2}(\Gamma,\mathbf{Z})$. ∎ Since finitely presented groups have finite-dimensional $H^{2}$, we obtain the following clean necessary and sufficient condition: ###### Corollary 1.0.10. Let $\Gamma$ be a finitely presented group. Then $\Gamma$ is uniformly $U(1)$-stable if and only if every quasimorphism of $\Gamma$ is at bounded distance from a homomorphism. ###### Proof. As explained above, if $\Gamma$ admits a non-trivial quasimorphism (i.e., not every quasimorphism of $\Gamma$ is at bounded distance from a homomorphism), then $\Gamma$ is not uniformly $U(1)$-stable. On the other hand, if every quasimorphism of $\Gamma$ is at bounded distance from a homomorphism, then by Proposition 1.0.4, $Ker(\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})\to H^{2}(\Gamma,\mathbf{R}))=0$, i.e., $\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})$ injects into $H^{2}(\Gamma,\mathbf{R})$. If in addition $\Gamma$ is finitely presented, $H^{2}(\Gamma,\mathbf{R})$ is of finite dimension, and hence so is $\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})$. Thus all the assumptions of Theorem 1.0.8 are satisfied and $\Gamma$ is uniformly $U(1)$-stable. ∎ Now, back to the case in which $\Gamma$ is a lattice in a higher rank group. For such $\Gamma$, Burger and Monod [BM99] showed that $\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})$ injects into $H^{2}(\Gamma,\mathbf{R})$. Such $\Gamma$ is usually finitely presented (and then we can apply the last corollary), except when $\Gamma$ is a lattice in a (rank $2$) semisimple group of positive characteristic. But in this case $\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})$ is anyway zero, by another result of Burger and Monod [BM99], so Theorem 1.0.8 applies and we can deduce: ###### Theorem 1.0.11. If $\Gamma$ is a higher rank lattice then it is uniformly $U(1)$-stable. The main result of this paper will be a far-reaching extension of the above theorem (with some additional assumptions on the lattice $\Gamma$) to a larger family of metric groups (namely any unitary group $U(n)$ equipped with any submultiplicative matrix norm $\|\cdot\|$). In this general case, bounded cohomology theory would not suffice, so we will motivate and build a more appropriate cohomology theory in the upcoming sections. We conclude this section with an observation in the case of groups of Hermitian type: ###### Proposition 1.0.12. Let $G$ be a split Lie group of Hermitian type (for example, $Sp(2m,\mathbf{R})$), with universal central extension $\tilde{G}$. Let $\Gamma$ be a cocompact lattice in $G$, and $\tilde{\Gamma}$ be its preimage in $\tilde{G}$. Then $\tilde{\Gamma}$ is not uniformly $U(1)$-stable (and hence, not uniformly $\mathfrak{U}$-stable). ###### Proof. Consider the non-trivial $2$-cocycle $\alpha\in H^{2}(\Gamma,\mathbf{Z})$ corresponding to the central extension $\tilde{\Gamma}$ of $\Gamma$. From [GW78], we know that $\alpha$ is actually an element of $\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})$ that does not vanish in $H^{2}(\Gamma,\mathbf{R})$.
In particular, $\alpha$ is not contained in the kernel of the comparison map $c:\operatorname{H}_{b}^{2}(\Gamma,\mathbf{R})\to H^{2}(\Gamma,\mathbf{R})$. Since the extension $\tilde{\Gamma}$ of $\Gamma$ is central (in particular, has amenable kernel), $\alpha$ has a pullback $\tilde{\alpha}\in\operatorname{H}_{b}^{2}(\tilde{\Gamma},\mathbf{R})$ which is a non-trivial bounded $2$-cocycle. However, $\tilde{\alpha}$ is trivial in ordinary cohomology, hence $\tilde{\alpha}$ is an element of the kernel of the comparison map $c:\operatorname{H}_{b}^{2}(\tilde{\Gamma},\mathbf{R})\to H^{2}(\tilde{\Gamma},\mathbf{R})$. Thus, from Propositions 1.0.3 and 1.0.4, we conclude that $\tilde{\Gamma}$ is not uniformly $U(1)$-stable. ∎ ###### Remark 1.0.13. A non-trivial quasimorphism in the above proof of Proposition 1.0.12 can be explicitly described: let $j:\Gamma\to\tilde{\Gamma}$ be a section corresponding to the cocycle $\alpha\in H^{2}(\Gamma,\mathbf{Z})$ so that, as a set, $\tilde{\Gamma}=\mathbf{Z}\times j(\Gamma)$. Consider the map $\phi:\tilde{\Gamma}\to\mathbf{R}$ defined by $\phi(m,j(\gamma))\coloneqq m$. One can check that this is a non-trivial quasimorphism of $\tilde{\Gamma}$. Note that this map is trivial on $j(\Gamma)$; indeed, we know from Theorem 1.0.11 that $\Gamma$ itself is uniformly $U(1)$-stable whenever $G$ has rank at least $2$, even though $\tilde{\Gamma}$ is not. ## 2 Preliminaries and Basic Constructions In this section, we establish the connection between uniform stability and a homomorphism lifting problem, which is then further explored in §3 for discrete groups, and in §4 for topological groups. In §2.1, we define our central notion of uniform stability, and describe it using sequences of maps. This then allows for a reformulation of the notion of stability as a homomorphism lifting problem using the language of ultrafilters in §2.2. Finally, in §2.3, we introduce the idea of defect diminishing, which is a relaxation of the lifting problem with abelian kernels. This property can naturally be related to a cohomological problem that is then studied in the subsequent §3. ### 2.1 Uniform Stability and Asymptotic Homomorphisms Let $\Gamma$ be a countable discrete group, and let $(G,d_{G})$ be a metric group (that is, a group $G$ equipped with a bi-invariant metric $d_{G}$). We use the metric to define the (uniform) distance between maps from $\Gamma$ to $G$ as follows: for $f_{1},f_{2}:\Gamma\to G$, the distance between $f_{1}$ and $f_{2}$, denoted $dist_{\Gamma,G}(f_{1},f_{2})$, is defined as $dist_{\Gamma,G}(f_{1},f_{2})\coloneq\sup_{x\in\Gamma}d_{G}\left(f_{1}(x),f_{2}(x)\right)$ This allows us to define the distance of a function $f:\Gamma\to G$ from a homomorphism as follows: ###### Definition 2.1.1. The homomorphism distance of a function $f:\Gamma\to G$, denoted $D_{\Gamma,G}(f)$, is defined as $D_{\Gamma,G}(f)\coloneq\inf\\{dist_{\Gamma,G}(f,\psi):\psi\in Hom(\Gamma,G)\\}$ The function $f$ is said to be $\delta$-close to a homomorphism if $D_{\Gamma,G}(f)\leq\delta$. There is another invariant of a function $f$ that also quantifies its distance from being a homomorphism. ###### Definition 2.1.2. For a function $f:\Gamma\to G$, we define its (uniform) defect $def_{\Gamma,G}(f)$ as $def_{\Gamma,G}(f)\coloneq\sup_{x,y\in\Gamma}d_{G}\left(f(xy),f(x)f(y)\right)$ The function $f$ is said to be an $\epsilon$-homomorphism if $def_{\Gamma,G}(f)\leq\epsilon$. Note that a priori both $def_{\Gamma,G}(f)$ and $D_{\Gamma,G}(f)$ could be $\infty$.
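To make these quantities concrete, here is a toy example (our own illustration, with $U(1)=\mathbf{R}/\mathbf{Z}$ metrized as in §1): fix $\alpha\in\mathbf{R}$ and a function $g:\mathbf{Z}\to\mathbf{R}$ with $\sup_{n}\lvert g(n)\rvert\leq\eta$, and define $f:\mathbf{Z}\to U(1)$ by $f(n)=n\alpha+g(n)\pmod{\mathbf{Z}}$. Since $f(m+n)-f(m)-f(n)=g(m+n)-g(m)-g(n)$, we get $def_{\mathbf{Z},U(1)}(f)\leq 3\eta$, while the homomorphism $\psi(n)=n\alpha$ witnesses $D_{\mathbf{Z},U(1)}(f)\leq\eta$. This also illustrates the general inequality $def_{\Gamma,G}(f)\leq 3D_{\Gamma,G}(f)$ recorded at the start of the next paragraph.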
It is easy to show (by the triangle inequality) that $def_{\Gamma,G}(f)\leq 3D_{\Gamma,G}(f)$ for any function $f$, so if $f$ is close to a homomorphism, then it has small defect. Uniform stability is the question of whether the converse is true: is any function with small defect necessarily close to a homomorphism? Uniform stability is usually studied with respect to not just one metric group $(G,d_{G})$ but a family $\mathcal{G}$ of metric groups. In this case, we can define $F_{\Gamma,\mathcal{G}}:[0,\infty]\to[0,\infty]$ $F_{\Gamma,\mathcal{G}}(\epsilon)\coloneq\sup_{G\in\mathcal{G}}\sup_{f}\\{D_{\Gamma,G}(f):def_{\Gamma,G}(f)\leq\epsilon\\}$ ###### Definition 2.1.3. The group $\Gamma$ is said to be uniformly $\mathcal{G}$-stable if $\lim\limits_{\epsilon\to 0^{+}}F_{\Gamma,\mathcal{G}}(\epsilon)=0$ A further refinement of the above definition involves a quantification of the above convergence: ###### Definition 2.1.4. The group $\Gamma$ is uniformly $\mathcal{G}$-stable _with a linear estimate_ if $\exists\epsilon_{0}>0$ and $\exists M\geq 0$ such that $\forall\epsilon<\epsilon_{0}$ and $\forall G\in\mathcal{G}$, every $\epsilon$-homomorphism $\phi:\Gamma\to G$ is $M\epsilon$-close to a homomorphism. It is helpful to rephrase these notions also in terms of sequences of maps, especially since we shall further refine this view in §2.2 and §2.3. Consider a sequence of functions $\\{\phi_{n}:\Gamma\to G_{n}\\}_{n\in\mathbf{N}}$ where $G_{n}\in\mathcal{G}$ for every $n\in\mathbf{N}$. * • A sequence $\\{\phi_{n}:\Gamma\to G_{n}\\}_{n\in\mathbf{N}}$ is said to be a (uniform) asymptotic homomorphism of $\Gamma$ to $\mathcal{G}$ if $\lim\limits_{n\to\infty}def_{\Gamma,G_{n}}(\phi_{n})=0$. * • A sequence $\\{\phi_{n}:\Gamma\to G_{n}\\}_{n\in\mathbf{N}}$ is said to be (uniformly) asymptotically close to a homomorphism if $\lim\limits_{n\to\infty}D_{\Gamma,G_{n}}(\phi_{n})=0$. It is easy to see that a group $\Gamma$ is uniformly $\mathcal{G}$-stable iff every asymptotic homomorphism of $\Gamma$ to $\mathcal{G}$ is asymptotically close to a homomorphism. Uniform $\mathcal{G}$-stability with a linear estimate too can be rephrased in terms of sequences of maps. Recall the Landau big-O notation: for sequences $\\{x_{n}\\}_{n\in\mathbf{N}}$ and $\\{y_{n}\\}_{n\in\mathbf{N}}$ of positive real numbers, we write $x_{n}=O(y_{n})$ if there exists a constant $C\geq 0$ and $N\in\mathbf{N}$ such that for all $n\geq N$, $x_{n}\leq Cy_{n}$. We write $x_{n}=o(y_{n})$ if there exists a sequence $\\{\epsilon_{n}\\}_{n\in\mathbf{N}}$ with $\lim_{n\to\infty}\epsilon_{n}=0$ such that $x_{n}=\epsilon_{n}y_{n}$. Firstly, note that if $\Gamma$ is uniformly $\mathcal{G}$-stable with a linear estimate, then for any asymptotic homomorphism $\\{\phi_{n}:\Gamma\to G_{n}\\}_{n\in\mathbf{N}}$ of $\Gamma$ to $\mathcal{G}$, $D_{\Gamma,G_{n}}(\phi_{n})=O\left(def_{\Gamma,G_{n}}(\phi_{n})\right)$. The following lemma shows that the converse is also true: ###### Lemma 2.1.5. The group $\Gamma$ is uniformly $\mathcal{G}$-stable with a linear estimate iff for every asymptotic homomorphism $\\{\phi_{n}:\Gamma\to G_{n}\\}_{n\in\mathbf{N}}$ of $\Gamma$ to $\mathcal{G}$, $D_{\Gamma,G_{n}}(\phi_{n})=O\left(def_{\Gamma,G_{n}}(\phi_{n})\right)$ ###### Proof. Suppose $\Gamma$ is not uniformly $\mathcal{G}$-stable with a linear estimate. Then for any $M>0$ and any $\epsilon>0$, there exists a map $\phi:\Gamma\to G$ (for some $G\in\mathcal{G}$) such that $def_{\Gamma,G}(\phi)\leq\epsilon$ and $D_{\Gamma,G}(\phi)>M\epsilon$.
Now consider a sequence $\\{M_{n}\\}_{n\in\mathbf{N}}$ with $\lim_{n\to\infty}M_{n}=\infty$ and a sequence $\\{\epsilon_{n}\\}_{n\in\mathbf{N}}$ with $\lim_{n\to\infty}\epsilon_{n}=0$. For each $n\in\mathbf{N}$, let $\phi_{n}:\Gamma\to G_{n}$ be a map with $def_{\Gamma,G_{n}}(\phi_{n})\leq\epsilon_{n}$ but $D_{\Gamma,G_{n}}(\phi_{n})>M_{n}\epsilon_{n}$. Then $\\{\phi_{n}\\}_{n\in\mathbf{N}}$ is an asymptotic homomorphism of $\Gamma$ to $\mathcal{G}$ such that $D_{\Gamma,G_{n}}(\phi_{n})$ is clearly not $O\left(def_{\Gamma,G_{n}}(\phi_{n})\right)$. ∎ We shall now conclude this section by informally introducing the following relaxation of the hypothesis of Lemma 2.1.5, which we shall call _asymptotic defect diminishing_. The idea is to look for improvements to the asymptotic homomorphism, rather than a true homomorphism. ###### Definition 2.1.6. The group $\Gamma$ is said to have the asymptotic defect diminishing property with respect to the family $\mathcal{G}$ if for any asymptotic homomorphism $\\{\phi_{n}:\Gamma\to G_{n}\\}_{n\in\mathbf{N}}$, there exists an asymptotic homomorphism $\\{\psi_{n}:\Gamma\to G_{n}\\}_{n\in\mathbf{N}}$ such that * • The defect $def_{\Gamma,G_{n}}(\psi_{n})=o\left(def_{\Gamma,G_{n}}(\phi_{n})\right)$. * • The distance $dist_{\Gamma,G_{n}}(\phi_{n},\psi_{n})=O\left(def_{\Gamma,G_{n}}(\phi_{n})\right)$. The following results motivate this notion, which we shall formally study in more detail in §2.2 in an ultraproduct setting. ###### Lemma 2.1.7. Suppose $\Gamma$ has the asymptotic defect diminishing property with respect to $\mathcal{G}$. Then there exists $\epsilon_{0}>0$ and $M>0$ such that for $\epsilon<\epsilon_{0}$, any $G\in\mathcal{G}$ and any $\epsilon$-homomorphism $\phi:\Gamma\to G$, there exists a map $\psi:\Gamma\to G$ with defect $def_{\Gamma,G}(\psi)<\frac{1}{2}def_{\Gamma,G}(\phi)$ and $dist_{\Gamma,G}(\phi,\psi)<Mdef_{\Gamma,G}(\phi)$. ###### Proof. We shall prove this by contradiction, so suppose for every $\epsilon>0$ and every $M>0$, there exists an $\epsilon$-homomorphism $\phi:\Gamma\to G$ such that for any $\psi:\Gamma\to G$, either $def_{\Gamma,G}(\psi)\geq\frac{1}{2}def_{\Gamma,G}(\phi)$ or $dist_{\Gamma,G}(\phi,\psi)\geq Mdef_{\Gamma,G}(\phi)$. Consider a sequence $\\{M_{n}\\}_{n\in\mathbf{N}}$ with $\lim_{n\to\infty}M_{n}=\infty$ and a sequence $\\{\epsilon_{n}\\}_{n\in\mathbf{N}}$ with $\lim_{n\to\infty}\epsilon_{n}=0$. For each $n\in\mathbf{N}$, let $\phi_{n}:\Gamma\to G_{n}$ (with $G_{n}\in\mathcal{G}$) be a map with $def_{\Gamma,G_{n}}(\phi_{n})\leq\epsilon_{n}$ such that for any map $\psi:\Gamma\to G_{n}$, either $def_{\Gamma,G_{n}}(\psi)\geq\frac{1}{2}def_{\Gamma,G_{n}}(\phi_{n})$ or $dist_{\Gamma,G_{n}}(\phi_{n},\psi)\geq M_{n}def_{\Gamma,G_{n}}(\phi_{n})$. Then the asymptotic homomorphism $\\{\phi_{n}\\}_{n\in\mathbf{N}}$ as constructed shows that $\Gamma$ does not have the asymptotic defect diminishing property with respect to $\mathcal{G}$. ∎ ###### Theorem 2.1.8. Suppose $\mathcal{G}$ is such that every $G\in\mathcal{G}$ is a complete metric space (with respect to its metric $d_{G}$). Then the group $\Gamma$ is uniformly $\mathcal{G}$-stable with a linear estimate iff $\Gamma$ has the asymptotic defect diminishing property with respect to $\mathcal{G}$. ###### Proof. Suppose $\Gamma$ is uniformly $\mathcal{G}$-stable with a linear estimate; then the implication is immediate. Conversely, suppose $\Gamma$ has the asymptotic defect diminishing property with respect to $\mathcal{G}$.
From the previous lemma, this means that there exists $\epsilon_{0}>0$ and $M>0$ such that for any $\epsilon$-homomorphism $\phi:\Gamma\to G$ (with $\epsilon<\epsilon_{0}$), there exists a map $\psi:\Gamma\to G$ with defect $def_{\Gamma,G}(\psi)<\frac{1}{2}def_{\Gamma,G}(\phi)$ and $dist_{\Gamma,G}(\phi,\psi)<Mdef_{\Gamma,G}(\phi)$. Set $\phi^{(0)}\coloneq\phi$ and $\phi^{(1)}\coloneq\psi$. Applying Lemma 2.1.7 inductively on $\phi^{(i)}$ to get $\phi^{(i+1)}$, we obtain a sequence of maps $\\{\phi^{(j)}:\Gamma\to G\\}_{j\in\mathbf{N}}$ where for each $j\in\mathbf{N}$, $\phi^{(j)}$ has defect at most $\epsilon/2^{j}$ and is at distance at most $M\epsilon/2^{j-1}$ from $\phi^{(j-1)}$. This gives us a Cauchy sequence of maps from $\Gamma$ to $G$. Since $G$ is complete, the sequence $\\{\phi^{(j)}\\}_{j\in\mathbf{N}}$ has a limit $\phi^{\infty}:\Gamma\to G$ with defect $def(\phi^{\infty})=0$ (hence it is a homomorphism). As for its distance from $\phi$, $dist_{\Gamma,G}(\phi,\phi^{\infty})\leq\sum\limits_{j=1}^{\infty}\frac{M\epsilon}{2^{j-1}}=2M\epsilon$ Hence $\Gamma$ is uniformly $\mathcal{G}$-stable with a linear estimate. ∎ ### 2.2 Ultraproducts and Internal Maps We can quantify the asymptotic rates succinctly using ultraproducts. We shall now briefly review some concepts from the theory of ultraproducts and non-standard analysis that would be of relevance to our constructions (for more details, refer to [Gol12] and [ACH12]). Let $\mathcal{U}$ be a non-principal ultrafilter on $\mathbf{N}$, which is fixed throughout. A subset $S\subseteq\mathbf{N}$ is said to be _large_ if $S\in\mathcal{U}$. ###### Definition 2.2.1. The (algebraic) ultraproduct $\prod_{\mathcal{U}}X_{n}$ (or alternately, $\\{X_{n}\\}_{\mathcal{U}}$) of an indexed collection $\\{X_{n}\\}_{n\in\mathbf{N}}$ of sets is defined to be $\prod_{\mathcal{U}}X_{n}\coloneq\prod_{n\in\mathbf{N}}X_{n}/\sim$ where for $\\{x_{n}\\}_{n\in\mathbf{N}},\\{y_{n}\\}_{n\in\mathbf{N}}\in\prod_{n\in\mathbf{N}}X_{n}$, $\\{x_{n}\\}_{n\in\mathbf{N}}\sim\\{y_{n}\\}_{n\in\mathbf{N}}$ if $\\{n:x_{n}=y_{n}\\}\in\mathcal{U}$. In other words, we identify two sequences $\\{x_{n}\\}_{n\in\mathbf{N}},\\{y_{n}\\}_{n\in\mathbf{N}}\in\prod_{n\in\mathbf{N}}X_{n}$ if they agree on a large set of indices. The image of a sequence $\\{x_{n}\\}_{n\in\mathbf{N}}\in\prod_{n\in\mathbf{N}}X_{n}$ under this equivalence relation shall be denoted $\\{x_{n}\\}_{\mathcal{U}}$. Conversely, given an element of $\prod_{\mathcal{U}}X_{n}$, we shall always regard it as $\\{x_{n}\\}_{\mathcal{U}}$ for some sequence $\\{x_{n}\\}_{n\in\mathbf{N}}\in\prod_{n\in\mathbf{N}}X_{n}$. If $X_{n}=X$ for every $n\in\mathbf{N}$, then $\prod_{\mathcal{U}}X$ is called the (algebraic) _ultrapower_ of $X$, denoted ${}^{*}X$. Note that $X$ can be embedded in ${}^{*}X$ via a diagonal embedding (for $x\in X$, $x\mapsto\\{x\\}_{\mathcal{U}}\in{}^{*}X$). Ultraproducts can be made to inherit algebraic structures of their building blocks. More precisely, let $\\{X_{n}\\}_{n\in\mathbf{N}}$, $\\{Y_{n}\\}_{n\in\mathbf{N}}$, and $\\{Z_{n}\\}_{n\in\mathbf{N}}$ be indexed families of sets with operations $*_{n}:X_{n}\times Y_{n}\to Z_{n}$ for every $n\in\mathbf{N}$. This naturally defines an operation $*:\prod_{\mathcal{U}}X_{n}\times\prod_{\mathcal{U}}Y_{n}\to\prod_{\mathcal{U}}Z_{n}$ by $\\{x_{n}\\}_{\mathcal{U}}*\\{y_{n}\\}_{\mathcal{U}}=\\{x_{n}*_{n}y_{n}\\}_{\mathcal{U}}$ We shall frequently encounter the following examples of ultraproducts: * • The ultrapower ${}^{*}G$ of a group $G$ is itself a group.
This can be seen by noting that ${}^{*}G$ is the quotient of the direct product group $\prod_{n\in\mathbf{N}}G$ by the normal subgroup comprising elements $\\{g_{n}\\}_{n\in\mathbf{N}}$ with $\\{n:g_{n}=1\\}\in\mathcal{U}$. * • The ultrapower ${}^{*}\mathbf{R}$ of $\mathbf{R}$ is a non-archimedean ordered field called the _hyperreals_. * • Let $\\{W_{n}\\}_{n\in\mathbf{N}}$ be a family of Banach spaces. Then $\prod_{\mathcal{U}}W_{n}$ can be given the structure of a ${}^{*}\mathbf{R}$-vector space. In fact, it also comes equipped with a ${}^{*}\mathbf{R}$-valued norm. One of the standard tools of non-standard analysis is the transfer principle, which relates the truth of statements concerning objects and their counterparts in the ultraproduct universe. Intuitively, our standard universe comprises all objects under normal consideration like $\mathbf{R}$, $\mathbf{C}$, etc., but may be formally modeled as follows: define $V_{0}(\mathbf{R})=\mathbf{R}$ $V_{n+1}(\mathbf{R})=V_{n}(\mathbf{R})\cup\mathcal{P}\left(V_{n}(\mathbf{R})\right)$ where $\mathcal{P}\left(V_{n}(\mathbf{R})\right)$ denotes the power set of $V_{n}(\mathbf{R})$. Then $V(\mathbf{R})=\cup_{n\geq 0}V_{n}(\mathbf{R})$ is called the superstructure over $\mathbf{R}$, and can be interpreted as comprising all the natural structures we study in mathematics. This shall comprise our standard universe $Univ$. We can construct a mapping ${}^{*}:V(\mathbf{R})\to V({}^{*}\mathbf{R})$ that takes an object in the superstructure of $\mathbf{R}$ (our standard universe $Univ$) to an object in the superstructure of ${}^{*}\mathbf{R}$ satisfying the following: * • (Extension Principle) The mapping $*$ maps $\mathbf{R}$ to ${}^{*}\mathbf{R}$. * • (Transfer Principle) For any first-order formula $\phi$ involving $k$ variables, and $A_{1},\dots,A_{k}\in V(\mathbf{R})$, the statement $\phi(A_{1},\dots,A_{k})$ is true in $V(\mathbf{R})$ iff $\phi({}^{*}A_{1},\dots,{}^{*}A_{k})$ is true in $V({}^{*}\mathbf{R})$. * • (Countable Saturation) Suppose $\\{X_{n}\\}_{n\in\mathbf{N}}$ is a collection of sets in ${}^{*}\left(V(\mathbf{R})\right)$ such that the intersection of any finite subcollection is non-empty. Then $\cap X_{n}$ is non-empty. It is a basic result of non-standard analysis that such a mapping $*$ exists, and can be constructed using a non-principal ultrafilter on $\mathbf{N}$. The image of this mapping shall be referred to as our non-standard universe ${}^{*}Univ$, which is ${}^{*}Univ=\\{\\{X_{n}\\}_{\mathcal{U}}\lvert X_{n}\in Univ\\}$ comprising ultraproducts of elements of $Univ$. Note that ${}^{*}\mathbf{R},{}^{*}\mathbf{C},{}^{*}\Gamma$ are all contained in ${}^{*}Univ$. Objects contained in $Univ$ are called standard, while objects contained in ${}^{*}Univ$ are called internal. In particular, a subset $S$ of an ultraproduct $\prod_{\mathcal{U}}X_{n}$ is an _internal subset_ if there exist subsets $S_{n}\subseteq X_{n}$ for every $n\in\mathbf{N}$ such that $S=\prod_{\mathcal{U}}S_{n}$. A function $f:\prod_{\mathcal{U}}X_{n}\to\prod_{\mathcal{U}}Y_{n}$ is said to be an internal function if there exists a sequence $\\{f_{n}:X_{n}\to Y_{n}\\}_{n\in\mathbf{N}}$ such that $f=\\{f_{n}\\}_{\mathcal{U}}$. Objects that are not contained in ${}^{*}Univ$ are called _external_. Note that even standard objects like $\mathbf{R}$ and $\mathbf{C}$ are external. We can always consider objects that are neither in $Univ$ nor in ${}^{*}Univ$.
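As a standard illustration of these notions (our own example, not from the constructions above): consider $\omega\coloneq\\{n\\}_{\mathcal{U}}\in{}^{*}\mathbf{N}$. For any fixed $k\in\mathbf{N}$, the set $\\{n\in\mathbf{N}:n>k\\}$ is cofinite and hence belongs to the non-principal ultrafilter $\mathcal{U}$, so $\omega>k$ in ${}^{*}\mathbf{N}$; thus ${}^{*}\mathbf{N}$ contains elements larger than every standard natural number. Moreover, the diagonal copy of $\mathbf{N}$ inside ${}^{*}\mathbf{N}$ is external: by transfer, every non-empty internal subset of ${}^{*}\mathbf{N}$ has a least element, but ${}^{*}\mathbf{N}\setminus\mathbf{N}$ is non-empty (it contains $\omega$) and has no least element, since $x-1$ again lies in ${}^{*}\mathbf{N}\setminus\mathbf{N}$ whenever $x$ does. Hence ${}^{*}\mathbf{N}\setminus\mathbf{N}$, and therefore $\mathbf{N}$ itself, is external.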
Two important examples of non-standard external subsets are * • The set of bounded hyperreals, denoted ${}^{*}\mathbf{R}_{b}$, is the subset comprising elements $\\{x_{n}\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}$ for which there exists $C\in\mathbf{R}_{\geq 0}$ such that $\lvert x_{n}\rvert\leq C$ for every $n\in\mathbf{N}$. * • The set of infinitesimal hyperreals, denoted ${}^{*}\mathbf{R}_{inf}$, is the subset comprising elements $\\{x_{n}\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}$ such that for _every_ real $\epsilon>0$, there exists a large set $S\in\mathcal{U}$ such that $\lvert x_{n}\rvert<\epsilon$ for every $n\in S$. * • For $x,y\in{}^{*}\mathbf{R}$, we denote $x=O_{\mathcal{U}}(y)$ if $x/y\in{}^{*}\mathbf{R}_{b}$, and $x=o_{\mathcal{U}}(y)$ if $x/y\in{}^{*}\mathbf{R}_{inf}$ (in particular, any bounded element $x\in{}^{*}\mathbf{R}_{b}$ is $x=O_{\mathcal{U}}(1)$ while any infinitesimal element $\epsilon\in{}^{*}\mathbf{R}_{inf}$ is $\epsilon=o_{\mathcal{U}}(1)$). Note that the preimage of ${}^{*}\mathbf{R}_{b}$ under the map $\prod_{n\in\mathbf{N}}\mathbf{R}\to{}^{*}\mathbf{R}$ includes the subset of all bounded sequences, while the preimage of ${}^{*}\mathbf{R}_{inf}$ includes the subset of infinitesimal sequences (that is, sequences that converge to $0$). The subset ${}^{*}\mathbf{R}_{b}$ forms a valuation ring with ${}^{*}\mathbf{R}_{inf}$ being the unique maximal ideal, with quotient ${}^{*}\mathbf{R}_{b}/{}^{*}\mathbf{R}_{inf}\cong\mathbf{R}$. The quotient map $st:{}^{*}\mathbf{R}_{b}\to\mathbf{R}$ is known as the _standard part_ map or _limit along the ultrafilter $\mathcal{U}$_. The previous construction can also be replicated for metric spaces with specified base points. Let $\\{(X_{n},d_{n},p_{n})\\}_{n\in\mathbf{N}}$ be an indexed family of metric spaces (where the space $X_{n}$ comes with the metric $d_{n}$ and the base point $p_{n}$). The ultraproduct $\prod_{\mathcal{U}}X_{n}$ comes equipped with an ${}^{*}\mathbf{R}$-valued metric $\prod_{\mathcal{U}}d_{n}$ and base point $\prod_{\mathcal{U}}p_{n}$. Consider the subset, denoted $\left(\prod_{\mathcal{U}}X_{n}\right)_{b}$, of $\prod_{\mathcal{U}}X_{n}$ comprising $\\{x_{n}\\}_{\mathcal{U}}$ such that $\\{d_{n}(x_{n},p_{n})\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}_{b}$ (such elements are referred to as bounded or _admissible_), and a subset, denoted $\left(\prod_{\mathcal{U}}X_{n}\right)_{inf}$, comprising $\\{x_{n}\\}_{\mathcal{U}}$ such that $\\{d_{n}(x_{n},p_{n})\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}_{inf}$. Define an equivalence relation $\sim$ on $\left(\prod_{\mathcal{U}}X_{n}\right)_{b}$ by $\\{x_{n}\\}_{\mathcal{U}}\sim\\{y_{n}\\}_{\mathcal{U}}$ if $\\{d_{n}(x_{n},y_{n})\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}_{inf}$. The set of equivalence classes $\left(\prod_{\mathcal{U}}X_{n}\right)_{b}/\sim$ is called the _ultralimit_ of $\\{(X_{n},d_{n},p_{n})\\}_{n\in\mathbf{N}}$. We will return to this notion in the context of Banach spaces in §4. ###### Remark 2.2.2. For convenience, we shall henceforth write "for $n\in\mathcal{U}$" to mean "for every $n\in S$, for some $S\in\mathcal{U}$". Returning to our setting, consider a sequence $\\{\phi_{n}:\Gamma\to G_{n}\\}_{n\in\mathbf{N}}$ where $G_{n}\in\mathcal{G}$. This can be used to construct an internal map $\\{\phi_{n}\\}_{\mathcal{U}}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$, and allows us to redefine asymptotic homomorphisms and closeness to homomorphisms in the ultraproduct.
* • Given two internal maps $\phi^{(1)}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ and $\phi^{(2)}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$, we denote $dist(\phi^{(1)},\phi^{(2)})\coloneq\\{dist_{\Gamma,G_{n}}(\phi^{(1)}_{n},\phi^{(2)}_{n})\\}_{\mathcal{U}}$. The maps $\phi^{(1)}$ and $\phi^{(2)}$ are said to be (internally) asymptotically close to each other if $dist(\phi^{(1)},\phi^{(2)})\in{}^{*}\mathbf{R}_{inf}$. * • An internal map $\psi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ is said to be an internal homomorphism if there exists a sequence $\\{\psi_{n}\\}_{n\in\mathbf{N}}$ of homomorphisms such that $\psi=\\{\psi_{n}\\}_{\mathcal{U}}$. * • An internal map $\phi\coloneq\\{\phi_{n}\\}_{\mathcal{U}}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ is said to be (internally) asymptotically close to an internal homomorphism if there exists an internal homomorphism $\psi=\\{\psi_{n}:\Gamma\to G_{n}\\}_{n\in\mathbf{N}}$ such that $dist(\phi,\psi)\in{}^{*}\mathbf{R}_{inf}$ (in other words, $\\{D_{\Gamma,G_{n}}(\phi_{n})\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}_{inf}$). * • An internal map $\phi\coloneq\\{\phi_{n}\\}_{\mathcal{U}}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ is called an internal asymptotic homomorphism if $def(\phi)\coloneq\\{def(\phi_{n})\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}_{inf}$. For $\epsilon\in{}^{*}\mathbf{R}_{inf}$, we shall call an internal asymptotic homomorphism $\phi$ an internal $\epsilon$-homomorphism if $def(\phi)=\epsilon$ (we similarly define internal $O_{\mathcal{U}}(\epsilon)$-homomorphisms and internal $o_{\mathcal{U}}(\epsilon)$-homomorphisms). The following lemmas are variants of Lemma 2.1.5 and Theorem 2.1.8 in the ultraproduct setting: ###### Lemma 2.2.3. The group $\Gamma$ is uniformly $\mathcal{G}$-stable iff every internal asymptotic homomorphism $\phi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ for $G_{n}\in\mathcal{G}$ is asymptotically close to an internal homomorphism. ###### Proof. We shall prove both directions by contradiction. Let $\phi=\\{\phi_{n}\\}_{\mathcal{U}}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ be an internal asymptotic homomorphism that is _not_ asymptotically close to any internal homomorphism. This means that $\\{def(\phi_{n})\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}_{inf}$, but there exists some real $c>0$ and $S\in\mathcal{U}$ such that $D_{\Gamma,G_{n}}(\phi_{n})\geq c$ for $n\in S$. In particular, for this $c$ and any $\epsilon>0$, there exists a map $\phi_{n}:\Gamma\to G_{n}$ with defect $def(\phi_{n})\leq\epsilon$ and $D_{\Gamma,G_{n}}(\phi_{n})\geq c$, thus proving that $\Gamma$ is _not_ uniformly $\mathcal{G}$-stable. Conversely, suppose $\Gamma$ is _not_ uniformly $\mathcal{G}$-stable. This means, by definition, that there exists $\delta>0$ such that for every $\epsilon>0$, there exists $G\in\mathcal{G}$ and $\phi:\Gamma\to G$ such that $def(\phi)\leq\epsilon$ and $D_{\Gamma,G}(\phi)>\delta$. Let us fix this $\delta$, and consider a sequence $\\{\epsilon_{n}\\}_{n\in\mathbf{N}}$ with $\lim_{n\to\infty}\epsilon_{n}=0$. For each such $\epsilon_{n}$, there exists $G_{n}\in\mathcal{G}$ and $\phi_{n}:\Gamma\to G_{n}$ with $def(\phi_{n})\leq\epsilon_{n}$ and $D_{\Gamma,G_{n}}(\phi_{n})>\delta$. Consider the internal asymptotic homomorphism $\\{\phi_{n}\\}_{\mathcal{U}}$. Clearly it is not asymptotically close to any internal homomorphism. ∎ ###### Lemma 2.2.4.
The group $\Gamma$ is uniformly $\mathcal{G}$-stable with a linear estimate iff for every internal asymptotic homomorphism $\phi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$, there exists an internal homomorphism $\psi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ with $dist(\phi,\psi)=O_{\mathcal{U}}\left(def(\phi)\right)$. ###### Proof. (The proof is similar to Lemma 2.1.5.) Suppose $\Gamma$ is not uniformly $\mathcal{G}$-stable with a linear estimate. Consider a sequence $\\{M_{n}\\}_{n\in\mathbf{N}}$ with $\lim_{n\to\infty}M_{n}=\infty$ and a sequence $\\{\epsilon_{n}\\}_{n\in\mathbf{N}}$ with $\lim_{n\to\infty}\epsilon_{n}=0$. For each $n\in\mathbf{N}$, let $\phi_{n}:\Gamma\to G_{n}$ be a map with $def_{\Gamma,G_{n}}(\phi_{n})\leq\epsilon_{n}$ but $D_{\Gamma,G_{n}}(\phi_{n})>M_{n}\epsilon_{n}$. Then the ultraproduct $\phi=\\{\phi_{n}\\}_{\mathcal{U}}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ is an internal asymptotic homomorphism with $def(\phi)\leq\epsilon\coloneq\\{\epsilon_{n}\\}_{\mathcal{U}}$ such that for any internal homomorphism $\psi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$, $dist(\phi,\psi)/def(\phi)$ is not in ${}^{*}\mathbf{R}_{b}$. Conversely, suppose $\Gamma$ is uniformly $\mathcal{G}$-stable with a linear estimate, and let $\phi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ be an internal asymptotic homomorphism. There exists $\epsilon_{0}>0$, $M>0$ and a large subset $S_{\epsilon_{0}}\in\mathcal{U}$ such that for every $n\in S_{\epsilon_{0}}$, $\phi_{n}:\Gamma\to G_{n}$ has defect $def(\phi_{n})\leq\epsilon_{0}$ and $D_{\Gamma,G_{n}}(\phi_{n})\leq Mdef(\phi_{n})$, allowing us to construct an internal homomorphism $\psi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ with $dist(\phi,\psi)=O_{\mathcal{U}}\left(def(\phi)\right)$. ∎ We now reformulate the notion of defect diminishing in this ultraproduct setting: ###### Definition 2.2.5. The group $\Gamma$ is said to have the defect diminishing property with respect to the family $\mathcal{G}$ if for any internal asymptotic homomorphism $\phi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$, there exists an internal asymptotic homomorphism $\psi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ such that * • The defect $def(\psi)=o_{\mathcal{U}}\left(def(\phi)\right)$. * • The distance $dist(\phi,\psi)=O_{\mathcal{U}}\left(def(\phi)\right)$. The following theorem reformulates Theorem 2.1.8 in the ultraproduct setting, and the proof is on the same lines. ###### Theorem 2.2.6. The group $\Gamma$ is uniformly $\mathcal{G}$-stable with a linear estimate iff $\Gamma$ has the defect diminishing property with respect to $\mathcal{G}$. ### 2.3 Internal Liftings and Defect Diminishing In this §, we shall reinterpret an internal asymptotic homomorphism as a (true) homomorphism to a quotient group. This will allow us to describe defect diminishing as a homomorphism lifting problem. In §3.1 we will restrict to a family of metric groups of interest such that this homomorphism lifting problem has an abelian kernel, allowing us to build a cohomological theory capturing the obstruction to uniform stability. Let $\phi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ be an internal asymptotic homomorphism. Consider the external subset $\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ defined as $\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}\coloneq\Big{\\{}\\{g_{n}\\}_{\mathcal{U}}:\\{d_{n}(g_{n},1)\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}_{inf}\Big{\\}}$ In other words, $\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ comprises all elements that are infinitesimally close to the identity in $\prod_{\mathcal{U}}G_{n}$.
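An elementary verification, using the bi-invariance of each metric $d_{n}$, shows that this subset is in fact a normal subgroup (as used in the next paragraph): for $h=\\{h_{n}\\}_{\mathcal{U}},h^{\prime}=\\{h^{\prime}_{n}\\}_{\mathcal{U}}\in\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ and $g=\\{g_{n}\\}_{\mathcal{U}}\in\prod_{\mathcal{U}}G_{n}$, we have $d_{n}(h_{n}h^{\prime}_{n},1)\leq d_{n}(h_{n}h^{\prime}_{n},h_{n})+d_{n}(h_{n},1)=d_{n}(h^{\prime}_{n},1)+d_{n}(h_{n},1)$ and $d_{n}(h_{n}^{-1},1)=d_{n}(1,h_{n})$, so $\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ is a subgroup; moreover $d_{n}(g_{n}h_{n}g_{n}^{-1},1)=d_{n}(h_{n}g_{n}^{-1},g_{n}^{-1})=d_{n}(h_{n},1)$, so it is normal.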
As checked above, $\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ is not just a subset but a normal subgroup of $\prod_{\mathcal{U}}G_{n}$. Since $\phi$ has defect $def(\phi)\in{}^{*}\mathbf{R}_{inf}$, this means that for every $x,y\in{}^{*}\Gamma$, $\phi(xy)^{-1}\phi(x)\phi(y)\in\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$, making $\tilde{\phi}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ a homomorphism. Thus, ###### Lemma 2.3.1. Given an internal asymptotic homomorphism $\phi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$, its composition, denoted $\tilde{\phi}$, with the canonical quotient homomorphism $\prod_{\mathcal{U}}G_{n}\to\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ is a homomorphism from ${}^{*}\Gamma$ to the group $\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$. Note that a homomorphism from ${}^{*}\Gamma$ to $\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ does not necessarily arise as the composition of an internal map ${}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ and the quotient homomorphism $\prod_{\mathcal{U}}G_{n}\to\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$. However, we will be specifically interested in homomorphisms that arise this way. ###### Definition 2.3.2. Let $F:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ be a homomorphism. We say that $F$ has an internal lift $\hat{F}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ if $\hat{F}$ is internal, and its composition with the canonical quotient homomorphism $\prod_{\mathcal{U}}G_{n}\to\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ is $F$. We say that $F$ has an internal lift homomorphism if there exists an internal lift $\hat{F}$ of $F$ that is also a homomorphism from ${}^{*}\Gamma$ to $\prod_{\mathcal{U}}G_{n}$. Observe that for an internal asymptotic homomorphism $\phi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$, $\phi$ itself is an internal lift of the homomorphism $\tilde{\phi}$. Conversely, ###### Lemma 2.3.3. Suppose a homomorphism $F:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ has an internal lift $\hat{F}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$. Then $\hat{F}$ is an internal asymptotic homomorphism. Furthermore, suppose $\hat{F}^{(1)}$ and $\hat{F}^{(2)}$ are two internal lifts of a homomorphism $F:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$; then $\hat{F}^{(1)}$ and $\hat{F}^{(2)}$ are asymptotically close to each other. ###### Proof. The map $\hat{F}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$, being internal, is of the form $\hat{F}=\\{\hat{F}_{n}\\}_{\mathcal{U}}$ for $\hat{F}_{n}:\Gamma\to G_{n}$ for every $n\in\mathbf{N}$. Since $F$ is a homomorphism, for every $x=\\{x_{n}\\}_{\mathcal{U}},y=\\{y_{n}\\}_{\mathcal{U}}\in{}^{*}\Gamma$, $\\{\hat{F}_{n}(x_{n}y_{n})^{-1}\hat{F}_{n}(x_{n})\hat{F}_{n}(y_{n})\\}_{\mathcal{U}}\in\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ which means that $\\{def(\hat{F}_{n})\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}_{inf}$, making $\hat{F}$ an internal asymptotic homomorphism. Since both $\hat{F}^{(1)}$ and $\hat{F}^{(2)}$ are internal lifts of the homomorphism $F:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$, for every $x=\\{x_{n}\\}_{\mathcal{U}}\in{}^{*}\Gamma$, $\\{d_{n}(\hat{F}^{(1)}_{n}(x_{n}),\hat{F}^{(2)}_{n}(x_{n}))\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}_{inf}$.
∎ This motivates the following equivalent condition for uniform $\mathcal{G}$-stability in terms of internal lifts. ###### Lemma 2.3.4. The group $\Gamma$ is uniformly $\mathcal{G}$-stable iff every homomorphism $\tilde{\phi}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ that has an internal lift also has an internal lift homomorphism. ###### Proof. Let $\phi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ be an internal asymptotic homomorphism, and $\tilde{\phi}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ be the homomorphism obtained by composing $\phi$ with the quotient map $\prod_{\mathcal{U}}G_{n}\to\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$. Let $\psi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ be an internal lift homomorphism of $\tilde{\phi}$. Then by the previous Lemma 2.3.3, $\phi$ is asymptotically close to the internal homomorphism $\psi$. By Lemma 2.2.3, we conclude that $\Gamma$ is uniformly $\mathcal{G}$-stable. The converse follows by definition of uniform $\mathcal{G}$-stability. ∎ ###### Remark 2.3.5. The quotient group $\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ is called the _metric ultraproduct_ of the sequence $\\{G_{n}\\}_{n\in\mathbf{N}}$ of groups. For (pointwise) stability of groups, it is shown in [AP15] that $\Gamma$ is (pointwise) $\mathcal{G}$-stable if any homomorphism $\tilde{\phi}:\Gamma\to\prod_{\mathcal{U}}G_{n}/\left(\prod_{\mathcal{U}}G_{n}\right)_{inf}$ from $\Gamma$ to the metric ultraproduct can be lifted to a homomorphism $\psi:\Gamma\to\prod_{\mathcal{U}}G_{n}$. In our setting, the uniformity requirement forces us to work with internal maps from the ultrapower ${}^{*}\Gamma$ as opposed to just maps from $\Gamma$. We shall further refine the internal lifting property by parametrizing it by the precise defect. Let $\phi=\\{\phi_{n}\\}_{\mathcal{U}}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ be an internal asymptotic homomorphism with defect $def(\phi)=\epsilon\in{}^{*}\mathbf{R}_{inf}$. Consider the subset $B(\epsilon)$ (elements _bounded_ by $\epsilon$) of $\prod_{\mathcal{U}}G_{n}$ defined as follows: $B(\epsilon)\coloneqq\\{\\{g_{n}\\}_{\mathcal{U}}\in\prod_{\mathcal{U}}G_{n}:\\{d_{n}(g_{n},1_{n})\\}_{\mathcal{U}}=O_{\mathcal{U}}(\epsilon)\\}$ Note that $B(\epsilon)$ is an externally defined subset. Since the metric on $G_{n}$ is bi-invariant, the subset $B(\epsilon)$ is a normal subgroup of $\prod_{\mathcal{U}}G_{n}$. Let $q_{B(\epsilon)}:\prod_{\mathcal{U}}G_{n}\to\prod_{\mathcal{U}}G_{n}/B(\epsilon)$ be the canonical quotient homomorphism, and denote by $\tilde{\phi}$ the composition map $q_{B(\epsilon)}\cdot\phi:{}^{*}\Gamma\to\left(\prod_{\mathcal{U}}G_{n}\right)/B(\epsilon)$. The following lemma is a parametrized reformulation of Lemma 2.3.3, and the proof is on the same lines, which we omit here: ###### Lemma 2.3.6. The map $\tilde{\phi}:{}^{*}\Gamma\to\left(\prod_{\mathcal{U}}G_{n}\right)/B(\epsilon)$ is a group homomorphism. Conversely, let $F:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/B(\epsilon)$ be a homomorphism. An internal map $\hat{F}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ is said to be an internal lift of $F$ if $q_{B(\epsilon)}\cdot\hat{F}=F$. We again have an analogue of Lemma 2.3.3 here too, whose proof is similar: ###### Lemma 2.3.7. Suppose a homomorphism $F:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/B(\epsilon)$ has an internal lift $\hat{F}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$.
Then $\hat{F}$ is an internal $O_{\mathcal{U}}(\epsilon)$-homomorphism. Furthermore, if $\hat{F}^{(1)}$ and $\hat{F}^{(2)}$ are two such internal lifts of $F$, then $\hat{F}^{(1)}$ and $\hat{F}^{(2)}$ are internally $O_{\mathcal{U}}(\epsilon)$-close to each other. ###### Lemma 2.3.8. The group $\Gamma$ is uniformly $\mathcal{G}$-stable with a linear estimate iff for every $\epsilon\in{}^{*}\mathbf{R}_{inf}$, every homomorphism $\tilde{\phi}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/B(\epsilon)$ that has an internal lift also has an internal lift homomorphism. ###### Proof. Suppose $\phi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ is an internal asymptotic homomorphism with defect $\epsilon\in{}^{*}\mathbf{R}_{inf}$. Then $\tilde{\phi}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/B(\epsilon)$ is a homomorphism which has internal lift $\phi$. Thus, it also has an internal lift homomorphism $\psi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$, which by Lemma 2.3.7 is internally $O_{\mathcal{U}}(\epsilon)$-close to $\phi$. Hence, by Lemma 2.2.4, $\Gamma$ is uniformly $\mathcal{G}$-stable with a linear estimate. The converse is immediate. ∎ Thus, for an infinitesimal $\epsilon\in{}^{*}\mathbf{R}_{inf}$, if every homomorphism $\tilde{\phi}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/B(\epsilon)$ that has an internal lift can be internally lifted to an internal homomorphism $\psi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$, then $\Gamma$ is uniformly $\mathcal{G}$-stable with a linear estimate. We shall now try to obtain such a lift by a sequence of intermediate lifts. For $\epsilon\in{}^{*}\mathbf{R}_{inf}$, denote by $I(\epsilon)$ the subset of $\prod_{\mathcal{U}}G_{n}$ (elements infinitesimal with respect to $\epsilon$) defined as $I(\epsilon)\coloneq\\{\\{g_{n}\\}_{\mathcal{U}}\in\prod_{\mathcal{U}}G_{n}:\\{d_{n}(g_{n},1_{n})\\}_{\mathcal{U}}=o_{\mathcal{U}}(\epsilon)\\}$ Note that by the bi-invariance of the metric, $I(\epsilon)\subseteq B(\epsilon)$ is a normal subgroup of $\prod_{\mathcal{U}}G_{n}$. Let $q_{I(\epsilon)}:\prod_{\mathcal{U}}G_{n}\to\prod_{\mathcal{U}}G_{n}/I(\epsilon)$ be the canonical quotient homomorphism. The following lemma is similar to Lemma 2.3.3, and the proof too is on the same lines: ###### Lemma 2.3.9. Suppose $\phi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ is an internal $o_{\mathcal{U}}(\epsilon)$-homomorphism; then $q_{I(\epsilon)}\cdot\phi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/I(\epsilon)$ is a homomorphism. Conversely, suppose a homomorphism $F:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/I(\epsilon)$ has an internal lift $\hat{F}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$; then $\hat{F}$ is an internal $o_{\mathcal{U}}(\epsilon)$-homomorphism, and any two internal lifts of $F$ are internally $o_{\mathcal{U}}(\epsilon)$-close to one another. We can now reformulate the defect diminishing property in terms of internal lifts. ###### Lemma 2.3.10. The group $\Gamma$ has the defect diminishing property with respect to $\mathcal{G}$ iff for every $\epsilon\in{}^{*}\mathbf{R}_{inf}$ and every homomorphism $F:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/B(\epsilon)$ that has an internal lift, $F$ has an internal lift $\hat{F}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ such that $q_{I(\epsilon)}\cdot\hat{F}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/I(\epsilon)$ is a homomorphism. ###### Proof.
Suppose $\Gamma$ has the defect diminishing property; then it is immediate from Definition 2.2.5 that for every $\epsilon\in{}^{*}\mathbf{R}_{inf}$ and every homomorphism $F:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/B(\epsilon)$ that has an internal lift, $F$ has an internal lift $\hat{F}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ such that $q_{I(\epsilon)}\cdot\hat{F}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/I(\epsilon)$ is a homomorphism. Conversely, consider an internal asymptotic homomorphism $\phi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ with defect $def(\phi)=\epsilon$, and the induced homomorphism $\tilde{\phi}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/B(\epsilon)$. By the hypothesis of the lemma (and Lemma 2.3.9), there exists an internal $o_{\mathcal{U}}(\epsilon)$-homomorphism $\psi:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ which is $O_{\mathcal{U}}(\epsilon)$-close to $\phi$. This shows that $\Gamma$ has the defect diminishing property with respect to $\mathcal{G}$. ∎ We now recap the results obtained so far: ###### Theorem 2.3.11. Let $\Gamma$ be a discrete group, and $\mathcal{G}$ be a family of metric groups such that for every group $G\in\mathcal{G}$, its metric $d_{G}$ is complete. Then the following are equivalent: 1. The group $\Gamma$ is uniformly $\mathcal{G}$-stable with a linear estimate. 2. The group $\Gamma$ has the defect diminishing property with respect to $\mathcal{G}$. 3. For every $\epsilon\in{}^{*}\mathbf{R}_{inf}$ and every homomorphism $F:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/B(\epsilon)$ that has an internal lift, $F$ has an internal lift $\hat{F}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}$ such that $q_{I(\epsilon)}\cdot\hat{F}:{}^{*}\Gamma\to\prod_{\mathcal{U}}G_{n}/I(\epsilon)$ is a homomorphism. ## 3 A Cohomological Interpretation of Stability We concluded the previous section by noting that in order to prove that $\Gamma$ is uniformly $\mathcal{G}$-stable with a linear estimate, it is sufficient to show that it has the defect diminishing property with respect to $\mathcal{G}$. Furthermore, we interpreted this property in terms of internal lifts of homomorphisms to quotient groups. Recall that a matrix norm $\|\cdot\|$ on $M_{n}(\mathbf{C})$ is said to be submultiplicative if for every $A,B\in M_{n}(\mathbf{C})$, $\|AB\|\leq\|A\|\cdot\|B\|$. From now on, we shall work exclusively with the family of unitary groups, each equipped with a unitarily bi-invariant _submultiplicative_ matrix norm, and shall denote this family by $\mathfrak{U}$. $\mathfrak{U}\coloneq\Big{\\{}\left(U(n),\|\cdot\|\right):n\in\mathbf{N}\text{ and }\|\cdot\|\text{ is submultiplicative}\Big{\\}}$ In particular, these include the $p$-Schatten norms given by $\|A\|_{p}=\begin{cases}(Tr|A|^{p})^{1/p}&1\leq p<\infty\\\ \sup_{\|\nu\|=1}\|A\nu\|&p=\infty\end{cases}$ Note that for $p=\infty$, this is the operator norm (as studied in [Kaz82] and [BOT13]), while for $p=2$, this is the _Frobenius_ norm or the (unnormalized) _Hilbert-Schmidt_ norm (as studied in [DCGLT20] and [AD22]). Before we proceed further, we state and sketch the proof of the following useful transference lemma for uniform $\mathfrak{U}$-stability with a linear estimate, which we shall use in §6. The proof is on the lines of a relative version of [BOT13, Theorem 3.2] (which reproves Kazhdan’s result on the Ulam stability of amenable groups), which we can adapt here in the simpler setting of finite index. ###### Lemma 3.0.1. Let $\Lambda\leq\Gamma$ be a subgroup of finite index.
Then $\Gamma$ is uniformly $\mathfrak{U}$-stable with a linear estimate iff $\Lambda$ is uniformly $\mathfrak{U}$-stable with a linear estimate. ###### Sketch. Suppose $\Gamma$ is uniformly $\mathfrak{U}$-stable with a linear estimate. Then the proof of [BOT13, Corollary 2.7] (further explained in [Gam11, Lemma II.22]) implies that $\Lambda$ too is uniformly $\mathfrak{U}$-stable with a linear estimate. Conversely, suppose $\Lambda$ is uniformly $\mathfrak{U}$-stable with a linear estimate, and let $\phi:\Gamma\to U(n)$ be an $\epsilon$-homomorphism. Since the finite index subgroup $\Lambda$ is uniformly $\mathfrak{U}$-stable with a linear estimate, we can assume that the restriction of $\phi$ to $\Lambda$ is a homomorphism, and furthermore, $\phi(g\delta)=\phi(g)\phi(\delta)$ for every $g\in\Gamma$, $\delta\in\Lambda$. Now define $\phi^{\prime}:\Gamma\to M(n)$ as $\phi^{\prime}(g)\coloneq\frac{1}{|\Gamma:\Lambda|}\sum_{x\in\Gamma/\Lambda}\phi(gx)\phi(x)^{*}$ Note that $\phi^{\prime}(g)$ is just the average over coset representatives in $\Gamma/\Lambda$, and $M(n)$ is the space of $n\times n$ complex matrices. Just as in the proof of [BOT13, Theorem 3.2], this $\phi^{\prime}$ can be normalized to obtain $\phi_{1}:\Gamma\to U(n)$ such that $\phi_{1}$ has defect $C\epsilon^{2}$ (for a universal constant $C$ not depending on $n$), and we repeat the process to obtain a (true) homomorphism as the limit. ∎ ###### Remark 3.0.2. In fact, it is further shown in [FFR23, Proposition 1.5] that for a subgroup $\Lambda\leq\Gamma$ that is _co-amenable_ in $\Gamma$, if $\Lambda$ is uniformly $\mathfrak{U}$-stable with a linear estimate, then $\Gamma$ too is uniformly $\mathfrak{U}$-stable with a linear estimate. This generalizes one direction of Lemma 3.0.1, though the converse is not true in this level of generality. ###### Remark 3.0.3. In particular, if $\Gamma_{1}$ and $\Gamma_{2}$ are commensurable, then $\Gamma_{1}$ is uniformly $\mathfrak{U}$-stable with a linear estimate iff $\Gamma_{2}$ is uniformly $\mathfrak{U}$-stable with a linear estimate. Given a sequence of unitary groups $\\{U(k_{n})\\}_{n\in\mathbf{N}}$, we denote its ultraproduct by $\prod_{\mathcal{U}}U(k_{n})$, and given an element $u=\\{u_{n}\\}_{\mathcal{U}}\in\prod_{\mathcal{U}}U(k_{n})$ (where for each $n\in\mathbf{N}$, $u_{n}\in U(k_{n})$), we denote its distance from the identity $1\in\prod_{\mathcal{U}}U(k_{n})$ by $\|u-1\|\coloneq\\{\|u_{n}-I\|\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}_{b}$, for notational convenience. Note that for $\epsilon\in{}^{*}\mathbf{R}_{inf}$ and an ultraproduct $\prod_{\mathcal{U}}U(k_{n})$, the subsets $B(\epsilon)\subseteq\prod_{\mathcal{U}}U(k_{n})$ and $I(\epsilon)\subseteq B(\epsilon)$ can now be written as $B(\epsilon)=\Big{\\{}u\in\prod_{\mathcal{U}}U(k_{n}):\|u-I\|=O_{\mathcal{U}}(\epsilon)\Big{\\}}$ $I(\epsilon)=\Big{\\{}u\in\prod_{\mathcal{U}}U(k_{n}):\|u-I\|=o_{\mathcal{U}}(\epsilon)\Big{\\}}$ Furthermore, the submultiplicativity of the norms implies that: ###### Lemma 3.0.4. The group $B(\epsilon)/I(\epsilon)$ is abelian. ###### Proof. Let $a,b\in B(\epsilon)$, and consider the commutator $aba^{*}b^{*}$. We shall prove that $aba^{*}b^{*}\in I(\epsilon)$. Observe that $\|aba^{*}b^{*}-I\|=\|ab-ba\|$ since the norm is unitarily invariant. Moreover, by subadditivity and submultiplicativity of the norm, $\|ab-ba\|=\|(a-I)(b-I)-(b-I)(a-I)\|\leq 2\|a-I\|\|b-I\|$ Since $a,b\in B(\epsilon)$, we conclude that $\|aba^{*}b^{*}-I\|=O_{\mathcal{U}}(\epsilon^{2})=o_{\mathcal{U}}(\epsilon)$.
∎ The fact that $B(\epsilon)/I(\epsilon)$ is an abelian group will allow us to rephrase the lifting property discussed in §2.3 in terms of the vanishing of a cohomology theory which we shall develop in detail in §3.2 and §4. In §3.1, we explicitly work out the cocycles corresponding to possible lifts of a homomorphism, which are then transferred to the linearized setting in §3.2, where a cohomological theory begins to reveal itself. Finally, in §3.3, we demonstrate the notions discussed in the case of discrete abelian groups, showing that the vanishing of our second cohomology implies uniform stability. ### 3.1 Lifting with an Abelian Kernel Let $\phi:{}^{*}\Gamma\to\prod_{\mathcal{U}}U(k_{n})$ be an internal $\epsilon$-homomorphism that induces a homomorphism $\tilde{\phi}:{}^{*}\Gamma\to\prod_{\mathcal{U}}U(k_{n})/B(\epsilon)$. To an internal lift $\psi:{}^{*}\Gamma\to\prod_{\mathcal{U}}U(k_{n})$ of $\tilde{\phi}$, we associate an internal map $\rho_{\psi}:{}^{*}\Gamma\times\prod_{\mathcal{U}}U(k_{n})\to\prod_{\mathcal{U}}U(k_{n})$ $\rho_{\psi}(g)(u)\coloneq\psi(g)\cdot u\cdot\psi(g)^{-1}$ (3.1) For every $g\in{}^{*}\Gamma$ and $u\in B(\epsilon)$, $\rho_{\psi}(g)(u)\in B(\epsilon)$, while for $u\in I(\epsilon)$, $\rho_{\psi}(g)(u)\in I(\epsilon)$. ###### Lemma 3.1.1. The internal map $\rho_{\psi}:{}^{*}\Gamma\times\prod_{\mathcal{U}}U(k_{n})\to\prod_{\mathcal{U}}U(k_{n})$ induces an action, denoted $\tilde{\rho}_{\psi}$, of ${}^{*}\Gamma$ on the abelian group $B(\epsilon)/I(\epsilon)$. ###### Proof. For $g_{1},g_{2}\in{}^{*}\Gamma$ and $u\in B(\epsilon)$, we want to show that $\rho_{\psi}(g_{1})(\rho_{\psi}(g_{2})(u))-\rho_{\psi}(g_{1}g_{2})(u)\in I(\epsilon)$. Note that for every $g_{1},g_{2}\in{}^{*}\Gamma$, $\psi(g_{1}g_{2})^{-1}\psi(g_{1})\psi(g_{2})\in B(\epsilon)$, so $\|\psi(g_{1})\psi(g_{2})u\psi(g_{2})^{-1}\psi(g_{1})^{-1}-\psi(g_{1}g_{2})u\psi(g_{1}g_{2})^{-1}\|=\|\psi(g_{1}g_{2})^{-1}\psi(g_{1})\psi(g_{2})u-u\psi(g_{1}g_{2})^{-1}\psi(g_{1})\psi(g_{2})\|$ Since the elements $\psi(g_{1}g_{2})^{-1}\psi(g_{1})\psi(g_{2})$ and $u$ are in $B(\epsilon)$, their commutator is in $I(\epsilon)$ (as in the proof of Lemma 3.0.4). ∎ We shall call $\tilde{\rho}_{\psi}$ the _action induced from $\psi$_. Observe that we have defined $\rho_{\psi}$ using a given internal lift $\psi$. However, this induced action of ${}^{*}\Gamma$ on $B(\epsilon)/I(\epsilon)$ is independent of the choice of internal lift of $\tilde{\phi}$. ###### Lemma 3.1.2. For two internal lifts $\psi_{1}$ and $\psi_{2}$ of $\tilde{\phi}$, $\tilde{\rho}_{\psi_{1}}=\tilde{\rho}_{\psi_{2}}$. ###### Proof. Note that for $g\in{}^{*}\Gamma$, $\psi_{2}(g)^{-1}\psi_{1}(g)\in B(\epsilon)$ since they are both internal lifts of $\tilde{\phi}$. Hence for $u\in B(\epsilon)$, again as in the proof of Lemma 3.0.4, $\psi_{2}(g)^{-1}\psi_{1}(g)u-u\psi_{2}(g)^{-1}\psi_{1}(g)\in I(\epsilon)$, which implies that $\psi_{1}(g)u\psi_{1}(g)^{-1}-\psi_{2}(g)u\psi_{2}(g)^{-1}\in I(\epsilon)$. ∎ So while the internal map $\rho_{\psi}:{}^{*}\Gamma\times\prod_{\mathcal{U}}U(k_{n})\to\prod_{\mathcal{U}}U(k_{n})$ depends on $\psi$, the induced action $\tilde{\rho}_{\psi}$ is independent of the choice of internal lift of $\tilde{\phi}$. Hence we can denote this action by $\tilde{\rho}_{\phi}$.
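To spell out the invariance claims made after (3.1) (an elementary check, included for completeness): since each norm is unitarily invariant, for every $g\in{}^{*}\Gamma$ and $u\in\prod_{\mathcal{U}}U(k_{n})$ we have $\|\rho_{\psi}(g)(u)-1\|=\|\psi(g)(u-1)\psi(g)^{-1}\|=\|u-1\|$, so each $\rho_{\psi}(g)$ is an isometry fixing the identity, and in particular preserves both $B(\epsilon)$ and $I(\epsilon)$.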
Define the internal map $\alpha_{\psi}:{}^{*}\Gamma\times{}^{*}\Gamma\to\prod_{\mathcal{U}}U(k_{n})$ $\alpha_{\psi}(g_{1},g_{2})\coloneq\psi(g_{1})\psi(g_{2})\psi(g_{1}g_{2})^{-1}$ Note that by construction $\alpha_{\psi}$ takes values in $B(\epsilon)$ (and our goal is to find some lift $\psi$ for which $\alpha_{\psi}$ takes values only in $I(\epsilon)$). Observe that ###### Lemma 3.1.3. For $g_{1},g_{2},g_{3}\in{}^{*}\Gamma$, $\alpha_{\psi}(g_{1},g_{2})\cdot\alpha_{\psi}(g_{1}g_{2},g_{3})\cdot\alpha_{\psi}(g_{1},g_{2}g_{3})^{-1}=\psi(g_{1})\cdot\alpha_{\psi}(g_{2},g_{3})\cdot\psi(g_{1})^{-1}$ Let $q_{B/I}:B(\epsilon)\to B(\epsilon)/I(\epsilon)$ be the canonical quotient homomorphism. Since $\alpha_{\psi}$ takes values in $B(\epsilon)$, let $\tilde{\alpha}_{\psi}\coloneq q_{B/I}\cdot\alpha_{\psi}:{}^{*}\Gamma\times{}^{*}\Gamma\to B(\epsilon)/I(\epsilon)$. Since $B(\epsilon)/I(\epsilon)$ is abelian, Lemma 3.1.3 implies the following corollary: ###### Corollary 3.1.4. For $g_{1},g_{2},g_{3}\in{}^{*}\Gamma$, $\tilde{\rho}_{\phi}(g_{1})\cdot\tilde{\alpha}_{\psi}(g_{2},g_{3})-\tilde{\alpha}_{\psi}(g_{1}g_{2},g_{3})+\tilde{\alpha}_{\psi}(g_{1},g_{2}g_{3})-\tilde{\alpha}_{\psi}(g_{1},g_{2})=0$ While we noted that the action $\tilde{\rho}_{\psi}$ of ${}^{*}\Gamma$ on $B(\epsilon)/I(\epsilon)$ does not depend on the choice of lift $\psi$, the map $\tilde{\alpha}_{\psi}:{}^{*}\Gamma\times{}^{*}\Gamma\to B(\epsilon)/I(\epsilon)$ does depend on the choice of lift $\psi$. Our hope is to prove the existence of some choice of lift $\psi$ such that $\tilde{\alpha}_{\psi}$ is trivial. Consider $\tilde{\alpha}_{\psi_{1}}$ and $\tilde{\alpha}_{\psi_{2}}$ for two different internal lifts $\psi_{1}$ and $\psi_{2}$ of $\tilde{\phi}$. Define an internal map $\beta_{\psi_{1},\psi_{2}}:{}^{*}\Gamma\to\prod_{\mathcal{U}}U(k_{n})$ $\beta_{\psi_{1},\psi_{2}}(g)\coloneq\psi_{2}(g)\psi_{1}(g)^{-1}$ Since $\beta_{\psi_{1},\psi_{2}}$ takes values in $B(\epsilon)$, denote by $\tilde{\beta}_{\psi_{1},\psi_{2}}:{}^{*}\Gamma\to B(\epsilon)/I(\epsilon)$ its composition with the quotient map $B(\epsilon)\to B(\epsilon)/I(\epsilon)$. Then for $g_{1},g_{2}\in{}^{*}\Gamma$, a careful computation shows that $\tilde{\alpha}_{\psi_{2}}(g_{1},g_{2})-\tilde{\alpha}_{\psi_{1}}(g_{1},g_{2})=\tilde{\beta}_{\psi_{1},\psi_{2}}(g_{1})+\tilde{\rho}_{\phi}(g_{1})\cdot\tilde{\beta}_{\psi_{1},\psi_{2}}(g_{2})-\tilde{\beta}_{\psi_{1},\psi_{2}}(g_{1}g_{2})$ (3.2) Suppose there exists an internal map $\beta:{}^{*}\Gamma\to\prod_{\mathcal{U}}U(k_{n})$ that takes values in $B(\epsilon)$ such that for the lift $\psi$, the following equation holds for all $g_{1},g_{2}\in{}^{*}\Gamma$: $\tilde{\alpha}_{\psi}(g_{1},g_{2})=\tilde{\beta}(g_{1})+\tilde{\rho}_{\phi}(g_{1})\cdot\tilde{\beta}(g_{2})-\tilde{\beta}(g_{1}g_{2})$ Then the internal map $\psi^{\sim}:{}^{*}\Gamma\to\prod_{\mathcal{U}}U(k_{n})$ $\psi^{\sim}(g)\coloneq\psi(g)\beta(g)^{-1}$ is also an internal lift of $\tilde{\phi}$ such that for every $g_{1},g_{2}\in{}^{*}\Gamma$, $\tilde{\alpha}_{\psi^{\sim}}(g_{1},g_{2})=0.$ In particular, this means that $\psi^{\sim}$ is the internal lift that we want. The above discussion hints at a cohomological theory that captures the obstruction to such lifts.
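Before developing that theory, we record the telescoping computation behind Lemma 3.1.3, included for completeness (all products taken in $\prod_{\mathcal{U}}U(k_{n})$): $\alpha_{\psi}(g_{1},g_{2})\cdot\alpha_{\psi}(g_{1}g_{2},g_{3})=\psi(g_{1})\psi(g_{2})\psi(g_{1}g_{2})^{-1}\cdot\psi(g_{1}g_{2})\psi(g_{3})\psi(g_{1}g_{2}g_{3})^{-1}=\psi(g_{1})\psi(g_{2})\psi(g_{3})\psi(g_{1}g_{2}g_{3})^{-1}$ and multiplying on the right by $\alpha_{\psi}(g_{1},g_{2}g_{3})^{-1}=\psi(g_{1}g_{2}g_{3})\psi(g_{2}g_{3})^{-1}\psi(g_{1})^{-1}$ yields $\psi(g_{1})\psi(g_{2})\psi(g_{3})\psi(g_{2}g_{3})^{-1}\psi(g_{1})^{-1}=\psi(g_{1})\cdot\alpha_{\psi}(g_{2},g_{3})\cdot\psi(g_{1})^{-1}$, which is exactly the identity of Lemma 3.1.3. Passing to $B(\epsilon)/I(\epsilon)$, written additively, turns this into the $2$-cocycle identity of Corollary 3.1.4.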
The idea is as follows: any candidate internal lift $\psi$ of $\tilde{\phi}$, with defect $O_{\mathcal{U}}(\epsilon)$, gives us a type of $2$-cocycle of ${}^{*}\Gamma$ with coefficients in $B(\epsilon)/I(\epsilon)$, and if that cocycle happens to be the coboundary of a $1$-cochain, then the lift $\psi$ can be corrected to obtain another lift that has defect $o_{\mathcal{U}}(\epsilon)$ and is still $O_{\mathcal{U}}(\epsilon)$-close to $\psi$ (and $\phi$), thus implying the defect diminishing property that we want.

###### Remark 3.1.5.

In [DCGLT20], it is shown (using the idea mentioned in Remark 2.3.5 and defect diminishing) that $\Gamma$ is (pointwise) stable (with respect to unitary matrices equipped with the Frobenius norm) if $H^{2}\left(\Gamma,B(\epsilon)/I(\epsilon)\right)$ vanishes. From Lemma 3.1.3, it might be tempting to simply consider $\tilde{\alpha}_{\psi}$ as a bounded $2$-cocycle for the group ${}^{*}\Gamma$ with coefficients in the abelian group $B(\epsilon)/I(\epsilon)$, and interpret Eq. 3.2 (and the ensuing discussion) as insisting that $\tilde{\alpha}_{\psi}$ is the coboundary of a bounded $1$-cochain of ${}^{*}\Gamma$ in $B(\epsilon)/I(\epsilon)$. But we cannot simply work with $\operatorname{H}_{b}^{2}\left({}^{*}\Gamma,B(\epsilon)/I(\epsilon)\right)$, since we need to ensure that the bounded $2$-cocycle $\tilde{\alpha}_{\psi}$ (which was induced from an _internal_ map $\alpha_{\psi}$) is the coboundary of a bounded $1$-cochain $\tilde{\beta}:{}^{*}\Gamma\to B(\epsilon)/I(\epsilon)$ that is itself also induced from an _internal_ map $\beta:{}^{*}\Gamma\to\prod_{\mathcal{U}}U(k_{n})$. This insistence on our maps being induced from internal maps is essential in our setting of uniform stability, and leads to the definition of an internal and asymptotic bounded cohomology machinery that we construct in §4.

### 3.2 Linearization and the Lie Algebra

In the previous section we observed that the defect diminishing property could be interpreted as a cohomological problem based on an action of ${}^{*}\Gamma$ on the abelian group $B(\epsilon)/I(\epsilon)$. However, as pointed out in Remark 3.1.5, the subtlety here involves the requirement that we deal only with maps that are induced from some _internal_ mapping to $\prod_{\mathcal{U}}U(k_{n})$. At that level we do not have the abelianness that would allow us to properly formulate a cohomology theory. In this section, we transfer to the Lie algebra, allowing us to work with spaces of maps from the group to Banach spaces.

For a matrix $A\in M_{n}(\mathbf{C})$, consider the matrix logarithm given by

$\log A\coloneq\sum\limits_{j=1}^{\infty}(-1)^{j-1}\frac{(A-I)^{j}}{j}$

The series above converges if $\|A-I\|<1$ (for some submultiplicative matrix norm $\|\cdot\|$). By subadditivity and submultiplicativity of the norm $\|\cdot\|$, if $\epsilon\leq 1/2$ and $\|u-I\|\leq\epsilon$, then

$\|\log{u}\|\leq\sum\limits_{j=1}^{\infty}\|u-I\|^{j}\leq 2\epsilon$

###### Lemma 3.2.1.

For every $\epsilon<1/2$, $n\in\mathbf{N}$, $u\in U(n)$ and every submultiplicative norm $\|\cdot\|$ on $M_{n}(\mathbf{C})$, if $\|u-I\|\leq\epsilon$, then $\|\log{u}\|\leq 2\epsilon$.

It is a classical result that for a unitary matrix $u\in U(n)$, its logarithm $\log{u}$ (whenever it is defined) is an _anti-Hermitian_ matrix. Denote by $\mathfrak{u}(n)$ the (real) vector space of anti-Hermitian matrices in $M_{n}(\mathbf{C})$.
In the other direction, we have the matrix exponential map defined as

$\exp(A)=\sum\limits_{j=0}^{\infty}\frac{A^{j}}{j!}$

which is well-defined for every $A\in M_{n}(\mathbf{C})$. For an anti-Hermitian matrix $W\in\mathfrak{u}(n)$ of the form $W=U\,\mathrm{diag}(i\theta_{j})_{j=1}^{n}\,U^{*}$ for $U\in U(n)$, its exponential is $\exp(W)=U\,\mathrm{diag}(e^{i\theta_{j}})_{j=1}^{n}\,U^{*}$. The following trivial bound is sufficient for our purposes:

###### Lemma 3.2.2.

For every $\epsilon>0$, $n\in\mathbf{N}$ and a submultiplicative norm $\|\cdot\|$ on $M_{n}(\mathbf{C})$, for a matrix $W\in\mathfrak{u}(n)$ with $\|W\|<\epsilon$, $\|\exp(W)-I\|\leq e^{\epsilon}-1$. (Indeed, $\|\exp(W)-I\|\leq\sum_{j=1}^{\infty}\|W\|^{j}/j!=e^{\|W\|}-1$.)

Note that since $\mathfrak{u}(n)$ is finite-dimensional, it is complete for any norm, making it a finite-dimensional real Banach space. It comes with an isometric (adjoint) action of $U(n)$ given as follows: for $v\in\mathfrak{u}(n)$ and $U\in U(n)$, $Ad(U)(v)\coloneq UvU^{*}\in\mathfrak{u}(n)$. Consider the family of $\mathbf{R}$-vector spaces of anti-Hermitian matrices $\mathfrak{u}(n)$, each equipped with a submultiplicative norm:

$\Big{\\{}\left(\mathfrak{u}(n),\|\cdot\|\right):n\in\mathbf{N}\text{ and }\|\cdot\|\text{ is submultiplicative}\Big{\\}}$

For an infinitesimal $\epsilon\in{}^{*}\mathbf{R}_{inf}$, define the internal map ${}_{\epsilon}\log:\prod_{\mathcal{U}}U(k_{n})\to\prod_{\mathcal{U}}\mathfrak{u}(k_{n})$ by

${}_{\epsilon}\log{u}\coloneq\frac{1}{\epsilon}\\{\log{u_{k_{n}}}\\}_{\mathcal{U}}$

and similarly the internal map ${}_{\epsilon}\exp:\prod_{\mathcal{U}}\mathfrak{u}(k_{n})\to\prod_{\mathcal{U}}U(k_{n})$ given by

${}_{\epsilon}\exp{u}\coloneq\\{\exp{(\epsilon_{k_{n}}u_{k_{n}})}\\}_{\mathcal{U}}$

Let us denote the ultraproduct $\prod_{\mathcal{U}}\mathfrak{u}(k_{n})$ by $\mathcal{W}$ from now on. Note that $\mathcal{W}$ comes with a ${}^{*}\mathbf{R}$-valued norm, which we shall denote simply as $\|\cdot\|$, obtained as the ultraproduct of the respective norms of each $\mathfrak{u}(k_{n})$. The bounded elements of $\mathcal{W}$ shall be denoted $\mathcal{W}_{b}$ while the infinitesimal elements of $\mathcal{W}$ are denoted by $\mathcal{W}_{inf}$. That is,

$\mathcal{W}_{b}\coloneq\Big{\\{}w\in\mathcal{W}:\|w\|\in{}^{*}\mathbf{R}_{b}\Big{\\}}$ (3.3)

$\mathcal{W}_{inf}\coloneq\Big{\\{}w\in\mathcal{W}:\|w\|\in{}^{*}\mathbf{R}_{inf}\Big{\\}}$ (3.4)

The motivation behind scaling the definitions of $\log$ and $\exp$ by $1/\epsilon$ and $\epsilon$ respectively is as follows:

###### Proposition 3.2.3.

The internal map ${}_{\epsilon}\log:\prod_{\mathcal{U}}U(k_{n})\to\mathcal{W}$, when restricted to $B(\epsilon)$, takes values in $\mathcal{W}_{b}$, and elements in $I(\epsilon)$ are taken to $\mathcal{W}_{inf}$. This induces an isomorphism of the abelian groups $\mathcal{W}_{b}/\mathcal{W}_{inf}$ and $B(\epsilon)/I(\epsilon)$.

###### Proof.

It follows from Lemma 3.2.1 that for $u\in B(\epsilon)$, ${}_{\epsilon}\log{u}\in\mathcal{W}_{b}$, and for $u\in I(\epsilon)$, ${}_{\epsilon}\log{u}\in\mathcal{W}_{inf}$. The map is surjective onto $\mathcal{W}_{b}$ since ${}_{\epsilon}\log{(_{\epsilon}\exp{v})}=v$ for $v\in\mathcal{W}_{b}$ (and similarly for $\mathcal{W}_{inf}$ as well). From the power series expansion of the logarithm (equivalently, the Baker–Campbell–Hausdorff formula), for $u_{1},u_{2}\in B(\epsilon)$ one has $\|\log{(u_{1}u_{2})}-\log{u_{1}}-\log{u_{2}}\|=O(\epsilon^{2})$, so that after scaling by $1/\epsilon$, ${}_{\epsilon}\log{u_{1}u_{2}}-({}_{\epsilon}\log{u_{1}}+{}_{\epsilon}\log{u_{2}})\in\mathcal{W}_{inf}$. Hence ${}_{\epsilon}\log$ induces a surjective group homomorphism from $B(\epsilon)$ to $\mathcal{W}_{b}/\mathcal{W}_{inf}$ with kernel $I(\epsilon)$.
∎

The ultralimit $\mathcal{W}_{b}/\mathcal{W}_{inf}$ shall be denoted $\tilde{\mathcal{W}}$. The above proposition tells us that $\tilde{\mathcal{W}}\cong B(\epsilon)/I(\epsilon)$. In fact, $\tilde{\mathcal{W}}\cong B(\epsilon)/I(\epsilon)$ is not just an abelian group but has the structure of a real Banach space. It is an example of a construction known as a _Banach space ultralimit_. We shall not prove this result here, but refer to [Hei80] and [ACH12] for more details:

###### Proposition 3.2.4 ([Hei80]).

The space $\tilde{\mathcal{W}}\cong B(\epsilon)/I(\epsilon)$ is a real Banach space.

Recall that in Eq. 3.1 we defined an internal map $\rho_{\psi}:{}^{*}\Gamma\times\prod_{\mathcal{U}}U(k_{n})\to\prod_{\mathcal{U}}U(k_{n})$, $\rho_{\psi}(g)v=\psi(g)v\psi(g)^{-1}$, which induced an action $\tilde{\rho}_{\phi}$ of ${}^{*}\Gamma$ on $B(\epsilon)/I(\epsilon)$. We can similarly define an internal map $\pi_{\psi}:{}^{*}\Gamma\times\mathcal{W}\to\mathcal{W}$ through the internal adjoint action of $\prod_{\mathcal{U}}U(k_{n})$ on $\mathcal{W}$ (that is, conjugation),

$\pi_{\psi}(g)v\coloneqq\psi(g)v\psi(g)^{-1}$ (3.5)

Again, by the submultiplicativity of the norms, for $v\in\mathcal{W}_{b}$ and $g_{1},g_{2}\in{}^{*}\Gamma$,

$\pi_{\psi}(g_{1}g_{2})v-\pi_{\psi}(g_{1})\pi_{\psi}(g_{2})v\in\mathcal{W}_{inf}$

Thus, the internal map $\pi_{\psi}$ as defined above induces an action of ${}^{*}\Gamma$ on $\mathcal{W}_{b}/\mathcal{W}_{inf}$. Unless there is ambiguity, we shall denote the induced action of $g\in{}^{*}\Gamma$ on $\tilde{v}\in\mathcal{W}_{b}/\mathcal{W}_{inf}$ through $\pi_{\psi}$ by $g\cdot\tilde{v}$.

###### Lemma 3.2.5.

The internal map ${}_{\epsilon}\log:\prod_{\mathcal{U}}U(k_{n})\to\mathcal{W}$ induces a ${}^{*}\Gamma$-equivariant (additive) group isomorphism between $B(\epsilon)/I(\epsilon)$ (with the action induced from $\rho_{\psi}$) and $\tilde{\mathcal{W}}$ (with the action induced from $\pi_{\psi}$).

###### Proof.

We already saw that ${}_{\epsilon}\log$ induces a group isomorphism between $B(\epsilon)/I(\epsilon)$ and $\mathcal{W}_{b}/\mathcal{W}_{inf}$. Let $g\in{}^{*}\Gamma$ and $u\in B(\epsilon)$. Then $\rho_{\psi}(g)u=\psi(g)u\psi(g)^{-1}$, which means that ${}_{\epsilon}\log(\rho_{\psi}(g)u)=\psi(g)\left({}_{\epsilon}\log{u}\right)\psi(g)^{-1}$, since the matrix logarithm commutes with conjugation by a unitary matrix. ∎

In particular, $\tilde{\mathcal{W}}$ is a real Banach space with an isometric action of ${}^{*}\Gamma$ (it is a real Banach ${}^{*}\Gamma$-module).

###### Remark 3.2.6.

The fact that $\tilde{\mathcal{W}}\cong B(\epsilon)/I(\epsilon)$ is a real Banach ${}^{*}\Gamma$-module is useful in reducing (pointwise) stability of $\Gamma$ with respect to the family $\mathfrak{U}$ to showing that $H^{2}(\Gamma,\tilde{\mathcal{W}})=0$, and this line of study is pursued in [DCGLT20] and [LO20]. However, in our setting of uniform stability, the Banach structure of $\tilde{\mathcal{W}}$ is not as directly relevant.

Corresponding to the internal map $\alpha_{\psi}:{}^{*}\Gamma\times{}^{*}\Gamma\to\prod_{\mathcal{U}}U(k_{n})$, define the internal map $\alpha:{}^{*}\Gamma\times{}^{*}\Gamma\to\mathcal{W}$

$\alpha(g_{1},g_{2})\coloneqq{}_{\epsilon}\log{\alpha_{\psi}(g_{1},g_{2})}$ (3.6)

Since $\alpha_{\psi}:{}^{*}\Gamma\times{}^{*}\Gamma\to\prod_{\mathcal{U}}U(k_{n})$ takes values only in $B(\epsilon)$, it is clear that $\alpha:{}^{*}\Gamma\times{}^{*}\Gamma\to\mathcal{W}$ takes values only in $\mathcal{W}_{b}$.
We shall denote by $\tilde{\alpha}:{}^{*}\Gamma\times{}^{*}\Gamma\to\tilde{\mathcal{W}}$ the map obtained by composing $\alpha$ with the canonical quotient map $\mathcal{W}_{b}\to\tilde{\mathcal{W}}$.

###### Lemma 3.2.7.

The map $\alpha:{}^{*}\Gamma\times{}^{*}\Gamma\to\mathcal{W}$ satisfies the following condition: for any $g_{1},g_{2},g_{3}\in{}^{*}\Gamma$,

$\pi_{\psi}(g_{1})\alpha(g_{2},g_{3})-\alpha(g_{1}g_{2},g_{3})+\alpha(g_{1},g_{2}g_{3})-\alpha(g_{1},g_{2})\in\mathcal{W}_{inf}$

###### Proof.

Recall that $\alpha_{\psi}$ satisfies the following property: for $g_{1},g_{2},g_{3}\in{}^{*}\Gamma$,

$\alpha_{\psi}(g_{1},g_{2})\alpha_{\psi}(g_{1}g_{2},g_{3})\alpha_{\psi}(g_{1},g_{2}g_{3})^{-1}=\rho_{\psi}(g_{1})\alpha_{\psi}(g_{2},g_{3})$

The conclusion then follows from the fact that the map ${}_{\epsilon}\log:\prod_{\mathcal{U}}U(k_{n})\to\mathcal{W}$ induces a ${}^{*}\Gamma$-equivariant group homomorphism between $B(\epsilon)/I(\epsilon)$ and $\mathcal{W}_{b}/\mathcal{W}_{inf}$. ∎

Thus, the induced map $\tilde{\alpha}:{}^{*}\Gamma\times{}^{*}\Gamma\to\tilde{\mathcal{W}}$ satisfies the $2$-cocycle condition given by: for $g_{1},g_{2},g_{3}\in{}^{*}\Gamma$,

$g_{1}\cdot\tilde{\alpha}(g_{2},g_{3})-\tilde{\alpha}(g_{1}g_{2},g_{3})+\tilde{\alpha}(g_{1},g_{2}g_{3})-\tilde{\alpha}(g_{1},g_{2})=0$ (3.7)

So, transferring to the internal Lie algebra through the ${}_{\epsilon}\log$ map, we thus have a map $\alpha:{}^{*}\Gamma\times{}^{*}\Gamma\to\mathcal{W}$ which takes values in $\mathcal{W}_{b}$ and satisfies the 2-cocycle condition modulo $\mathcal{W}_{inf}$. Recall that the defect diminishing condition was implied by the following statement: suppose $\alpha_{\psi}:{}^{*}\Gamma\times{}^{*}\Gamma\to\prod_{\mathcal{U}}U(k_{n})$ is an internal function taking values in $B(\epsilon)$ and such that $\tilde{\alpha}_{\psi}$ satisfies the $2$-cocycle condition. Then there exists an internal $\beta:{}^{*}\Gamma\to\prod_{\mathcal{U}}U(k_{n})$ taking values in $B(\epsilon)$ such that

$\tilde{\alpha}_{\psi}(g_{1},g_{2})=\tilde{\beta}(g_{1})+g_{1}\cdot\tilde{\beta}(g_{2})-\tilde{\beta}(g_{1}g_{2})$

Using the ${}_{\epsilon}\log$ map to transfer to $\mathcal{W}$, we conclude with the following proposition summarizing our work so far:

###### Proposition 3.2.8.

Suppose for every internal $\alpha:{}^{*}\Gamma\times{}^{*}\Gamma\to\mathcal{W}$ with $Im(\alpha)\subseteq\mathcal{W}_{b}$ that satisfies the $2$-cocycle condition (Eq. 3.7), there exists an internal $\beta:{}^{*}\Gamma\to\mathcal{W}$ taking values in $\mathcal{W}_{b}$ such that

$\tilde{\alpha}(g_{1},g_{2})=\tilde{\beta}(g_{1})+g_{1}\cdot\tilde{\beta}(g_{2})-\tilde{\beta}(g_{1}g_{2})$

then $\Gamma$ exhibits the defect diminishing property, and is therefore uniformly $\mathfrak{U}$-stable with a linear estimate.

Since we shall work with internal maps from $({}^{*}\Gamma)^{2}$ (or ${}^{*}\Gamma$) to $\mathcal{W}$ that take values in $\mathcal{W}_{b}$, it is helpful to describe such a map as an ultraproduct of bounded maps. Let $\\{\alpha_{n}\in\ell^{\infty}(\Gamma^{2},\mathfrak{u}(k_{n}))\\}_{n=1}^{\infty}$ be a family of maps such that there exists a constant $C>0$ such that $\|\alpha_{n}\|_{\infty}\leq C$ for every $n\in\mathbf{N}$. Then it is clear that the ultraproduct $\alpha=\\{\alpha_{n}\\}_{\mathcal{U}}$ has image $Im(\alpha)\subseteq\mathcal{W}_{b}$. Conversely,

###### Lemma 3.2.9.

Let $\alpha:{}^{*}\Gamma\times{}^{*}\Gamma\to\mathcal{W}$ be an internal map with $Im(\alpha)\subseteq\mathcal{W}_{b}$.
Then there exists a family $\\{\alpha_{n}\in\ell^{\infty}(\Gamma^{2},\mathfrak{u}(k_{n}))\\}_{n=1}^{\infty}$ such that $\alpha=\\{\alpha_{n}\\}_{\mathcal{U}}$. Conversely, if $\\{\alpha_{n}\in\ell^{\infty}(\Gamma^{2},\mathfrak{u}(k_{n}))\\}_{n=1}^{\infty}$ is a family of maps such that $\\{\|\alpha_{n}\|_{\infty}\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}_{b}$, then $\alpha=\\{\alpha_{n}\\}_{\mathcal{U}}:{}^{*}\Gamma\times{}^{*}\Gamma\to\mathcal{W}$ is an internal map with $Im(\alpha)\subseteq\mathcal{W}_{b}$.

###### Proof.

Since $\alpha$ is internal, it is of the form $\alpha=\\{f_{n}\\}_{\mathcal{U}}$ for a family of maps $f_{n}:\Gamma\times\Gamma\to\mathfrak{u}(k_{n})$. For a subset $S\in\mathcal{U}$, suppose $f_{n}$ is unbounded for every $n\in S$. Then for each $n\in S$, there exists $x_{n},y_{n}\in\Gamma$ such that $f_{n}(x_{n},y_{n})$ has norm at least $n$. In particular, for $x=\\{x_{n}\\}_{\mathcal{U}}$ and $y=\\{y_{n}\\}_{\mathcal{U}}$, we have $\alpha(x,y)\notin\mathcal{W}_{b}$. This is a contradiction to the hypothesis that $Im(\alpha)\subseteq\mathcal{W}_{b}$. The converse is immediate from the definition of $\mathcal{W}_{b}$. ∎

Thus, an internal map $\alpha:{}^{*}\Gamma\times{}^{*}\Gamma\to\mathcal{W}$ with $Im(\alpha)\subseteq\mathcal{W}_{b}$ can be described as $\\{\alpha_{n}\\}_{\mathcal{U}}$ where for every $n\in\mathbf{N}$, $\alpha_{n}:\Gamma\times\Gamma\to\mathfrak{u}(k_{n})$ is a bounded map, and such that $\\{\|\alpha_{n}\|_{\infty}\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}_{b}$. In other words, the internal map $\alpha$ is the ultraproduct of bounded maps, and is also bounded as an ultraproduct. From now on, we shall regard $\alpha$ as an element of $\prod_{\mathcal{U}}\ell^{\infty}(\Gamma^{2},\mathfrak{u}(k_{n}))$, which we shall henceforth denote $\mathcal{L}^{\infty}(({}^{*}\Gamma)^{2},\mathcal{W})$ (understood as the space of internal maps from $({}^{*}\Gamma)^{2}$ to $\mathcal{W}$). In fact, $\alpha$ is actually an element of $\mathcal{L}^{\infty}_{b}(({}^{*}\Gamma)^{2},\mathcal{W})$ since not only is it internally bounded, but also $\\{\|\alpha_{n}\|_{\infty}\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}_{b}$. In general, we shall use the following notation:

* • For $m\in\mathbf{N}$, the internal space $\mathcal{L}^{\infty}(({}^{*}\Gamma)^{m},\mathcal{W})$ is defined as

$\mathcal{L}^{\infty}(({}^{*}\Gamma)^{m},\mathcal{W})\coloneq\Big{\\{}\ell^{\infty}(\Gamma^{m},\mathfrak{u}(k_{n}))\Big{\\}}_{\mathcal{U}}$

* • The (external) subspace of $\mathcal{L}^{\infty}(({}^{*}\Gamma)^{m},\mathcal{W})$ comprising internal functions with bounded (supremum) norm will be denoted $\mathcal{L}^{\infty}_{b}(({}^{*}\Gamma)^{m},\mathcal{W})$, while the (external) subspace of $\mathcal{L}^{\infty}_{b}(({}^{*}\Gamma)^{m},\mathcal{W})$ comprising internal functions with infinitesimal (supremum) norm will be denoted $\mathcal{L}^{\infty}_{inf}(({}^{*}\Gamma)^{m},\mathcal{W})$.

* • The quotient $\mathcal{L}^{\infty}_{b}(({}^{*}\Gamma)^{m},\mathcal{W})/\mathcal{L}^{\infty}_{inf}(({}^{*}\Gamma)^{m},\mathcal{W})$ shall be denoted $\tilde{\mathcal{L}}^{\infty}(({}^{*}\Gamma)^{m},\mathcal{W})$. This space comprises bounded maps from $({}^{*}\Gamma)^{m}$ to $\tilde{\mathcal{W}}$ that are induced from internal maps in $\mathcal{L}^{\infty}(({}^{*}\Gamma)^{m},\mathcal{W})$. As in Proposition 3.2.4, $\tilde{\mathcal{L}}^{\infty}(({}^{*}\Gamma)^{m},\mathcal{W})$ is a real Banach space.
* • For a map $f\in\mathcal{L}^{\infty}_{b}(({}^{*}\Gamma)^{m},\mathcal{W})$, we shall denote by $\tilde{f}\in\tilde{\mathcal{L}}^{\infty}(({}^{*}\Gamma)^{m},\mathcal{W})$ the composition of $f$ with the canonical quotient map $\mathcal{L}^{\infty}_{b}(({}^{*}\Gamma)^{m},\mathcal{W})\to\tilde{\mathcal{L}}^{\infty}(({}^{*}\Gamma)^{m},\mathcal{W})$.

For convenience, we restate Proposition 3.2.8 in this notation:

###### Proposition 3.2.10.

Suppose for every $\alpha\in\mathcal{L}^{\infty}_{b}(({}^{*}\Gamma)^{2},\mathcal{W})$ that satisfies the $2$-cocycle condition (Eq. 3.7), there exists a $\beta\in\mathcal{L}^{\infty}_{b}({}^{*}\Gamma,\mathcal{W})$ such that

$\tilde{\alpha}(g_{1},g_{2})=\tilde{\beta}(g_{1})+g_{1}\cdot\tilde{\beta}(g_{2})-\tilde{\beta}(g_{1}g_{2})$

then $\Gamma$ exhibits the defect diminishing property, and is therefore uniformly $\mathfrak{U}$-stable with a linear estimate.

### 3.3 Uniform Stability of Amenable Groups

In this section, we shall demonstrate an application of Proposition 3.2.10 to amenable groups. The first step is to recognize the duality of $\mathcal{W}$ in an internal way. Consider the space $\mathcal{W}=\prod_{\mathcal{U}}\mathfrak{u}(k_{n})$. For each Banach space $\mathfrak{u}(k_{n})$, consider its dual space $(\mathfrak{u}(k_{n}))^{*}$ and let $\langle\cdot|\cdot\rangle$ denote the canonical duality (for instance, if $\mathfrak{u}(k_{n})$ is equipped with the Schatten $p$-norm for $p>1$, then $(\mathfrak{u}(k_{n}))^{*}$ comes equipped with the Schatten $q$-norm, where $1/p+1/q=1$). Denote by $\mathcal{W}^{\sharp}$ the ultraproduct $\prod_{\mathcal{U}}(\mathfrak{u}(k_{n}))^{*}$, and its external subsets $\mathcal{W}^{\sharp}_{b}$ and $\mathcal{W}^{\sharp}_{inf}$ as in (Eq. 3.3) and (Eq. 3.4). Note that the internal pairing

$\langle\cdot|\cdot\rangle_{\mathcal{U}}:\mathcal{W}^{\sharp}\times\mathcal{W}\to{}^{*}\mathbf{R}$

induces a pairing

$\langle\cdot|\cdot\rangle:\tilde{\mathcal{W}}^{\sharp}\times\tilde{\mathcal{W}}\to\mathbf{R}$

Equivalently, $\mathcal{W}_{b}^{\sharp}$ comprises $\lambda\in\mathcal{W}^{\sharp}$ such that $\lambda(v)\in{}^{*}\mathbf{R}_{b}$ for every $v\in\mathcal{W}_{b}$, while $\mathcal{W}_{inf}^{\sharp}$ comprises $\lambda\in\mathcal{W}^{\sharp}$ such that $\lambda(v)\in{}^{*}\mathbf{R}_{inf}$ for every $v\in\mathcal{W}_{b}$. We can use the internal map $\pi_{\psi}:{}^{*}\Gamma\times\mathcal{W}\to\mathcal{W}$ to define the internal map

$\pi_{\psi}^{\sharp}:{}^{*}\Gamma\times\mathcal{W}^{\sharp}\to\mathcal{W}^{\sharp}$ (3.8)

which on $\lambda\in\mathcal{W}^{\sharp}$ and $v\in\mathcal{W}$ is defined to be

$\pi_{\psi}^{\sharp}(g)(\lambda)(v)\coloneqq\lambda(\pi_{\psi}(g^{-1})v)$

###### Lemma 3.3.1.

The internal map $\pi_{\psi}^{\sharp}$ restricts to a map on $\mathcal{W}^{\sharp}_{b}$ that induces an action of ${}^{*}\Gamma$ on $\mathcal{W}^{\sharp}_{b}/\mathcal{W}^{\sharp}_{inf}$.

###### Proof.

Let $\lambda\in\mathcal{W}_{b}^{\sharp}$ and $g\in{}^{*}\Gamma$. For $v\in\mathcal{W}_{b}$, we have $\pi_{\psi}(g^{-1})v\in\mathcal{W}_{b}$, so $\pi_{\psi}^{\sharp}(g)(\lambda)(v)=\lambda(\pi_{\psi}(g^{-1})v)\in{}^{*}\mathbf{R}_{b}$. Hence $\pi_{\psi}^{\sharp}(g)(\lambda)\in\mathcal{W}_{b}^{\sharp}$ for every $g\in{}^{*}\Gamma$. Similarly, for $\lambda\in\mathcal{W}_{inf}^{\sharp}$, $\pi_{\psi}^{\sharp}(g)(\lambda)\in\mathcal{W}_{inf}^{\sharp}$.
That this induces an action of ${}^{*}\Gamma$ on $\mathcal{W}_{b}^{\sharp}/\mathcal{W}_{inf}^{\sharp}$ follows easily from the fact that $\pi_{\psi}$ induces an action of ${}^{*}\Gamma$ on $\mathcal{W}_{b}/\mathcal{W}_{inf}$. ∎

Going one step further, we can obtain a canonical identification of $\mathcal{W}$ with $(\mathcal{W}^{\sharp})^{\sharp}$ (which has the same norm as $\mathcal{W}$), so that we can regard $\mathcal{W}$ as $(\mathcal{W}^{\sharp})^{\sharp}$ with $(\pi_{\psi}^{\sharp})^{\sharp}=\pi_{\psi}$. This is true for $\mathcal{W}$ since $\mathcal{W}=\prod_{\mathcal{U}}\mathfrak{u}(k_{n})$, and $\mathfrak{u}(k_{n})$ is finite-dimensional for each $n\in\mathbf{N}$.

###### Remark 3.3.2.

We shall often use this reflexivity property of $\mathcal{W}$ to regard its dual $\mathcal{W}^{\sharp}$ as its predual, so that $v\in\mathcal{W}$ acts on $\lambda\in\mathcal{W}^{\sharp}$ by $v\cdot\lambda=\lambda(v)$. However, note that for the following discussion of amenability, what we actually need is not reflexivity but merely the property that $\mathcal{W}$ is dual.

Let us now recall the definition of amenability for discrete groups. While there are innumerable equivalent definitions of amenability, here we shall see the definition that is most relevant to us (later on, in §4.3, we shall study amenability and amenable actions in the locally compact case). Consider the Banach space $\ell^{\infty}(\Gamma)$ with the following action of $\Gamma$: for $g,x\in\Gamma$ and $f\in\ell^{\infty}(\Gamma)$, $(g\cdot f)(x)=f(g^{-1}x)$. A _mean_ on $\ell^{\infty}(\Gamma)$ is a bounded linear functional $m:\ell^{\infty}(\Gamma)\to\mathbf{R}$ such that $\|m\|\leq 1$, $m(1)=1$ and $m(f)\geq 0$ whenever $f\geq 0$. The mean $m$ is said to be $\Gamma$-_invariant_ if for every $g\in\Gamma$ and $f\in\ell^{\infty}(\Gamma)$, $m(g\cdot f)=m(f)$.

###### Definition 3.3.3.

The discrete group $\Gamma$ is said to be amenable if there exists a $\Gamma$-invariant mean on $\ell^{\infty}(\Gamma)$.

While the definition of amenability asks for a $\Gamma$-invariant mean on $\ell^{\infty}(\Gamma)$, this can be easily extended to obtain a $\Gamma$-_equivariant_ mean on $\ell^{\infty}(\Gamma,W)$ for a dual normed $\Gamma$-module $W$ (where the action of $\Gamma$ on $\ell^{\infty}(\Gamma,W)$ is given by $(g\cdot f)(x)=g\cdot f(g^{-1}x)$). The following lemma builds on this idea to construct an internal mean on $\mathcal{L}^{\infty}({}^{*}\Gamma,\mathcal{W})$.

###### Lemma 3.3.4.

Suppose $\Gamma$ is amenable. Then there exists an internal map $m_{in}:\mathcal{L}^{\infty}({}^{*}\Gamma,\mathcal{W})\to\mathcal{W}$ such that $m_{in}$ induces a linear map $\tilde{m}:\tilde{\mathcal{L}}^{\infty}({}^{*}\Gamma,\mathcal{W})\to\tilde{\mathcal{W}}$ satisfying the following three conditions:

* • Suppose $\tilde{f}\in\tilde{\mathcal{L}}^{\infty}({}^{*}\Gamma,\mathcal{W})$ is the constant function $\tilde{f}(g)=\tilde{v}$ for every $g\in{}^{*}\Gamma$, then $\tilde{m}(\tilde{f})=\tilde{v}$.

* • For $\tilde{f}\in\tilde{\mathcal{L}}^{\infty}({}^{*}\Gamma,\mathcal{W})$, $\|\tilde{m}(\tilde{f})\|\leq\|\tilde{f}\|$.

* • For $g\in{}^{*}\Gamma$ and $\tilde{f}\in\tilde{\mathcal{L}}^{\infty}({}^{*}\Gamma,\mathcal{W})$, $\tilde{m}\left(g\cdot\tilde{f}\right)=g\cdot\tilde{m}(\tilde{f})$.

###### Proof.

Consider $f=\\{f_{n}\\}_{\mathcal{U}}\in\mathcal{L}^{\infty}({}^{*}\Gamma,\mathcal{W})$.
Since $\mathcal{W}=(\mathcal{W}^{\sharp})^{\sharp}$, for each $\lambda\in\mathcal{W}^{\sharp}$, we get an internal map $f_{\lambda}:{}^{*}\Gamma\to{}^{*}\mathbf{R}$

$f_{\lambda}(x)\coloneqq f(x)(\lambda)$

Note that $f_{\lambda}$, being internal, is of the form $\\{(f_{\lambda})_{n}\\}_{\mathcal{U}}$ where $(f_{\lambda})_{n}\in\ell^{\infty}(\Gamma)$. This allows us to construct the internal map $m_{in}^{\lambda}:\mathcal{L}^{\infty}({}^{*}\Gamma,\mathcal{W})\to{}^{*}\mathbf{R}$ as

$m_{in}^{\lambda}(f)=\\{m\left((f_{\lambda})_{n}\right)\\}_{\mathcal{U}}$

and finally $m_{in}:\mathcal{L}^{\infty}({}^{*}\Gamma,\mathcal{W})\to(\mathcal{W}^{\sharp})^{\sharp}$ as

$m_{in}(f)(\lambda)\coloneqq m_{in}^{\lambda}(f)$

It is straightforward to check that $m_{in}$ as defined induces a linear map $\tilde{m}:\tilde{\mathcal{L}}^{\infty}({}^{*}\Gamma,\mathcal{W})\to\tilde{\mathcal{W}}$. As for ${}^{*}\Gamma$-equivariance, this follows from the observation that $(g\cdot f)_{\lambda}(x)=\left(\pi_{\psi}(g)f(g^{-1}x)\right)(\lambda)$ while $(g\cdot f_{\lambda})(x)=f(g^{-1}x)(\lambda)$. The conditions on $\tilde{m}$ follow from the definition and properties of the $\Gamma$-invariant mean $m$ on $\ell^{\infty}(\Gamma)$. ∎

We shall denote the internal mean above by $m_{in}^{x}$ (or $\tilde{m}^{x}$) when the mean is understood to be taken over $x\in\Gamma$. This is particularly useful when working with multivariate maps where we fix certain coordinates to obtain a univariate map which we can take a mean over (as in the following Proposition 3.3.5). Note that in this notation, it is easy to see that the ${}^{*}\Gamma$-equivariance of the mean constructed above translates to the simpler (invariant) form: for $\tilde{f}\in\tilde{\mathcal{L}}^{\infty}({}^{*}\Gamma,\mathcal{W})$ and $g\in{}^{*}\Gamma$,

$\tilde{m}^{x}\left(\tilde{f}(gx)\right)=\tilde{m}^{x}\left(\tilde{f}(x)\right)$

We shall now use this internal map $m_{in}$ and the ${}^{*}\Gamma$-equivariant map $\tilde{m}$ to show the following:

###### Proposition 3.3.5.

Suppose $\Gamma$ is amenable. Then for every $\alpha\in\mathcal{L}^{\infty}_{b}(({}^{*}\Gamma)^{2},\mathcal{W})$ that satisfies the $2$-cocycle condition (Eq. 3.7), there exists a $\beta\in\mathcal{L}^{\infty}_{b}({}^{*}\Gamma,\mathcal{W})$ such that

$\tilde{\alpha}(g_{1},g_{2})=\tilde{\beta}(g_{1})+g_{1}\cdot\tilde{\beta}(g_{2})-\tilde{\beta}(g_{1}g_{2})$

###### Proof.

Suppose $\alpha\in\mathcal{L}^{\infty}_{b}(({}^{*}\Gamma)^{2},\mathcal{W})$ satisfies the $2$-cocycle condition: for every $g_{1},g_{2},x\in{}^{*}\Gamma$,

$\tilde{\alpha}(g_{1},g_{2})=g_{1}\cdot\tilde{\alpha}(g_{2},x)-\tilde{\alpha}(g_{1}g_{2},x)+\tilde{\alpha}(g_{1},g_{2}x)$

For a fixed $g$, the map $\alpha_{g}:{}^{*}\Gamma\to\mathcal{W}$ defined as $\alpha_{g}(x)\coloneq\alpha(g,x)$ is clearly contained in $\mathcal{L}^{\infty}({}^{*}\Gamma,\mathcal{W})$. Define $\beta\in\mathcal{L}^{\infty}({}^{*}\Gamma,\mathcal{W})$ as

$\beta(g)\coloneq m_{in}\left(\alpha_{g}\right)$

In other words, $\beta(g)=m_{in}^{x}\left(\alpha(g,x)\right)$. Then the $2$-cocycle condition satisfied by $\alpha$ immediately implies that

$\tilde{\alpha}(g_{1},g_{2})=g_{1}\cdot\tilde{\beta}(g_{2})-\tilde{\beta}(g_{1}g_{2})+\tilde{\beta}(g_{1})$

∎

Observe that the proof of Proposition 3.3.5 is almost exactly along the lines of the proof that $\operatorname{H}_{b}^{2}(\Gamma,W)=0$ for amenable $\Gamma$ and a dual normed $\Gamma$-module $W$ (refer to Theorem $3.6$ in [Fri17] for more details).
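To see how the mean produces the coboundary identity in the final step of the proof above, one can apply $\tilde{m}^{x}$ to both sides of the cocycle condition and use the properties from Lemma 3.3.4 (a routine verification which we spell out for convenience): since $\tilde{\alpha}(g_{1},g_{2})$ is constant in $x$, $\tilde{m}^{x}\left(\tilde{\alpha}(g_{1},g_{2})\right)=\tilde{\alpha}(g_{1},g_{2})$; by equivariance combined with the invariant form noted after Lemma 3.3.4, $\tilde{m}^{x}\left(g_{1}\cdot\tilde{\alpha}(g_{2},x)\right)=g_{1}\cdot\tilde{m}^{x}\left(\tilde{\alpha}(g_{2},x)\right)=g_{1}\cdot\tilde{\beta}(g_{2})$; directly from the definition, $\tilde{m}^{x}\left(\tilde{\alpha}(g_{1}g_{2},x)\right)=\tilde{\beta}(g_{1}g_{2})$; and by invariance under the substitution $x\mapsto g_{2}x$, $\tilde{m}^{x}\left(\tilde{\alpha}(g_{1},g_{2}x)\right)=\tilde{m}^{x}\left(\tilde{\alpha}(g_{1},x)\right)=\tilde{\beta}(g_{1})$. Putting these together yields $\tilde{\alpha}(g_{1},g_{2})=g_{1}\cdot\tilde{\beta}(g_{2})-\tilde{\beta}(g_{1}g_{2})+\tilde{\beta}(g_{1})$.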
In light of Proposition 3.3.5 and Proposition 3.2.10, we conclude that:

###### Corollary 3.3.6.

If $\Gamma$ is a discrete amenable group, then $\Gamma$ is uniformly $\mathfrak{U}$-stable with a linear estimate.

Note that while on the one hand Corollary 3.3.6 generalizes Kazhdan’s result [Kaz82] to a larger family (where we allow any submultiplicative matrix norm as opposed to just the operator norm as in [Kaz82] and [BOT13]), we _do not_ prove here the analogous result of strong Ulam stability, where the family comprises groups of unitary operators on (possibly infinite-dimensional) Hilbert spaces.

## 4 Asymptotic Cohomology of Groups

In this section, we shall formally define the asymptotic cohomology theory of (topological) groups, and study some basic properties along the lines of the theory of bounded cohomology. Recall that our goal is to prove uniform $\mathfrak{U}$-stability for lattices in higher rank Lie groups, so this forces us to develop the cohomology theory for locally compact groups (as opposed to just discrete groups, as we briefly saw in §3.2 and §3.3). The basic objects we shall deal with are defined in §4.1, where we describe the category of asymptotic cohomology abstractly using tools from cohomological algebra. In §4.2, we define the asymptotic cohomology of groups and relate it to the way it was motivated in §3.3. Finally, in §4.3, we use Zimmer amenability and the functorial relations of §4.1 to obtain other complexes that compute the same cohomology, which we shall use in §5 and §6.

### 4.1 Basic Definitions and Some Cohomological Algebra

Recall, from §2.2, that we fix a non-principal ultrafilter $\mathcal{U}$ on $\mathbf{N}$ to define ultraproducts and internal objects. For convenience, we set some notation and conventions now. Let $C$ be a category, with $C$-objects and $C$-morphisms. We shall define a category ${}^{*}C$ in ${}^{*}Univ$ as follows:

* • The objects of ${}^{*}C$, referred to as _internal $C$-objects_, shall be ultraproducts $\prod_{\mathcal{U}}X_{n}$ where $\\{X_{n}\\}_{n\in\mathbf{N}}$ is an indexed collection of $C$-objects.

* • The morphisms of ${}^{*}C$, referred to as _internal morphisms_, are of the form $\phi=\\{\phi_{n}\\}_{\mathcal{U}}$ (also denoted $\prod_{\mathcal{U}}\phi_{n}$), where $\\{\phi_{n}\\}_{n\in\mathbf{N}}$ is an indexed collection of $C$-morphisms.

Given a category $C$, the internal $C$-objects and internal $C$-morphisms form a category ${}^{*}C$ in ${}^{*}Univ$, and these two categories have the same first-order theories, allowing us to use the transfer principle, as remarked in §2.2.

###### Definition 4.1.1.

Let $A$ be a property of $C$-objects (resp., $C$-morphisms). Then $\prod_{\mathcal{U}}X_{n}$ (resp., $\phi:\prod_{\mathcal{U}}X_{n}\to\prod_{\mathcal{U}}Y_{n}$) has _internal $A$_ if $X_{n}$ (resp., $\phi_{n}$) has $A$ for every $n\in S$ with $S\in\mathcal{U}$.

For example, let $G$ be a locally compact, second countable topological group, consider the ultrapower group ${}^{*}G$, and let $\mathcal{E}=\\{E_{n}\\}_{\mathcal{U}}$ be an internal Banach space.
An internal map $\pi:{}^{*}G\times\mathcal{E}\to\mathcal{E}$, where $\pi=\\{\pi_{n}:G\times E_{n}\to E_{n}\\}_{\mathcal{U}}$, is an _internal action_ of ${}^{*}G$ on $\mathcal{E}$ (or $\mathcal{E}$ is an _internal ${}^{*}G$-representation_) if the map $\pi_{n}:G\times E_{n}\to E_{n}$ is an isometric $G$-representation for every $n\in\mathbf{N}$, and the internal map $\pi$ is _internally continuous_ if $\pi_{n}:G\times E_{n}\to E_{n}$ is continuous for every $n\in\mathbf{N}$ (here $G\times E_{n}$ is endowed with the product topology). All of these notions simply involve passing from standard categories to their internal counterparts in ${}^{*}Univ$. We shall now work with the category $Ban$ whose objects are (real) Banach spaces and whose morphisms are bounded linear maps, and study ${}^{*}Ban$.

Consider the ultraproduct $\mathcal{E}=\prod_{\mathcal{U}}E_{n}$ of the real Banach spaces $\\{E_{n}\\}_{n\in\mathbf{N}}$, which is an internal Banach space. For an element $v\in\mathcal{E}$ where $v=\\{v_{n}\\}_{\mathcal{U}}$, we set $\|v\|\coloneq\\{\|v_{n}\|\\}_{\mathcal{U}}\in{}^{*}\mathbf{R}$. Given two internal Banach spaces $\mathcal{E}=\\{E_{n}\\}_{\mathcal{U}}$ and $\mathcal{F}=\\{F_{n}\\}_{\mathcal{U}}$, we shall denote by $\mathcal{Hom}(\mathcal{E},\mathcal{F})$ the set of internal morphisms between $\mathcal{E}$ and $\mathcal{F}$. These are exactly of the form $\phi\coloneq\\{\phi_{n}:E_{n}\to F_{n}\\}_{\mathcal{U}}$ where $\phi_{n}:E_{n}\to F_{n}$ is a bounded linear map for every $n\in\mathbf{N}$ (such maps are internal morphisms). Note that $\mathcal{Hom}(\mathcal{E},\mathcal{F})$ itself is an internal Banach space when endowed with the internal operator norm (that is, $\|\phi\|\coloneq\\{\|\phi_{n}\|_{op}\\}_{\mathcal{U}}$).

Consider the set $\mathcal{Hom}(\mathcal{E},{}^{*}\mathbf{R})$, which is the _internal dual_ of $\mathcal{E}$, denoted $\mathcal{E}^{\sharp}$. Explicitly, for each Banach space $E_{n}$ as above, let $E_{n}^{\sharp}$ denote its (continuous) dual Banach space, let $\langle\cdot|\cdot\rangle$ denote the canonical duality, and let $\mathcal{E}^{\sharp}$ denote the ultraproduct $\prod_{\mathcal{U}}E_{n}^{\sharp}$. Note that we have an internal pairing

$\langle\cdot|\cdot\rangle_{\mathcal{U}}:\mathcal{E}^{\sharp}\times\mathcal{E}\to{}^{*}\mathbf{R}$

We shall call $\mathcal{E}$ an internal dual Banach space if $\mathcal{E}$ is the internal dual of some internal Banach space (which we shall denote $\mathcal{E}^{\flat}=\prod_{\mathcal{U}}E_{n}^{\flat}$). For $\phi\in\mathcal{Hom}(\mathcal{E},\mathcal{F})$, we shall denote by $\phi^{\sharp}\in\mathcal{Hom}(\mathcal{F}^{\sharp},\mathcal{E}^{\sharp})$ the internal adjoint map with respect to the internal pairing above: that is, $\phi^{\sharp}$ is such that for every $v\in\mathcal{E}$ and $w\in\mathcal{F}^{\sharp}$, $\langle\phi^{\sharp}w|v\rangle_{\mathcal{U}}=\langle w|\phi v\rangle_{\mathcal{U}}$.

We shall now see some general functorial aspects of (standard) real Banach spaces and extend them naturally to internal Banach spaces. Let $X$, $Y$, $E$ and $F$ be (standard) real Banach spaces. Let $B(X\times E)$ denote the Banach space of bounded bilinear forms $X\times E\to\mathbf{R}$, and $L(E,F)$ denote the Banach space of bounded linear functions from $E$ to $F$. Through the canonical pairing, note that $B(X\times E)$ is naturally isometrically isomorphic to the Banach space of bounded linear maps $E\to X^{\sharp}$ (and also to the Banach space of bounded linear maps $X\to E^{\sharp}$).
That is, $B(X\times E)\cong L(E,X^{\sharp})\cong L(X,E^{\sharp})$ (4.1)
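To make the identification in Eq. 4.1 explicit (a standard verification): a bounded bilinear form $b\in B(X\times E)$ corresponds to the bounded linear map $T_{b}\in L(E,X^{\sharp})$ defined by $T_{b}(e)(x)\coloneq b(x,e)$, and one checks that $\|T_{b}\|_{op}=\sup_{\|e\|\leq 1}\sup_{\|x\|\leq 1}|b(x,e)|=\|b\|$, so the isomorphism is isometric; the identification $B(X\times E)\cong L(X,E^{\sharp})$ is obtained analogously by fixing the first argument instead.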
# Different Games in Dialogue: Combining character and conversational types in strategic choice

Alafate Abulmiti

Université de Paris, CNRS

Laboratoire de Linguistique Formelle

<EMAIL_ADDRESS>

###### Abstract

In this paper, we show that investigating the interaction of _conversational type_ (often known as language game or speech genre) with the character types of the interlocutors is worthwhile. We present a method of calculating the decision making process for selecting dialogue moves that combines character type and conversational type. We also present a mathematical model that illustrates these factors’ interactions in a quantitative way.

## 1 Introduction

Wittgenstein (1953); Bakhtin (2010) introduced language games/speech genres as notions tying the diversity of linguistic behaviors to activity. Building on this, on insights of pragmaticists such as Hymes (1974) and Allwood (2000), and on earlier work in AI (Allen and Perrault, 1980; Cohen and Perrault, 1979), Larsson (2002); Ginzburg (2012); Wong and Ginzburg (2018) showed how to associate global structure with conversations in a way that captures the range of possible topics and idiosyncratic moves. (Larsson, 2002) is also the basis for an approach to building spoken dialogue systems (SDS) which is essentially domain general, offers a fine-grained treatment of grounding interaction, and which was extended to clarification interaction in (Purver, 2004). This body of work does not address, however, the issue of strategic choice in conversation, which is the core issue underlying work in Game Theory. Asher et al. (2017) used game theoretic tools to develop a theory of strategic choice for dialogue. Although there are a variety of interesting insights captured in this approach, it is based on two assumptions that apply only to a restricted class of language games: games continue indefinitely, and there exists a jury that assigns winning conditions to participants. We seek to develop an approach to strategic choice applicable to the general case of dialogical interaction, where termination is an important consideration and where assessment is internal to the participants. Strategic choice is modelled by combining structure from conversational types with psychological and cognitive notions associated with the players. Character type, as a relatively independent factor abstracted out of the conversational type, is important for dialogue. Although there is some analysis concerning both character effects and conversational types in dialogue, combining them and analyzing their interactions in a quantitative way has not, to our knowledge, been carried out before. The purpose of this paper is, hence, to combine character type effects and conversational type analysis to yield a method that could help to analyse strategic choice in dialogue.

## 2 Background

The starting point of (Asher et al., 2017) is the framework of Banach-Mazur games. They modify this framework to make it more amenable to analyzing certain kinds of NL dialogue, the emergent framework being _BM messaging games_. Asher et al. (2017) argued that each dialogue potentially continues indefinitely and has a winner adjudged by a third party jury. This is useful for modelling political discourse between rival groups or individual contenders in the public domain. But clearly this sort of conception is not appropriate for a variety of, arguably most, types of dialogue.111Strictly speaking, Asher et al.
(2017) allowed also for finite games, by allowing a game to consist of a special move from a certain point onwards, but this would seem either to defeat the purpose of assuming potential extendibility or to be purely formal. These have beginnings ($InitState$) and a variety of distinct terminations ($FinState$) (Wong, 2018), and there is no ‘external jury’ in most cases.

Burnett (2019) developed a formal model called social meaning games, which explains how sociolinguistic variants affect the linguistic style of dialogue participants and, conversely, how the speaker’s intrinsic linguistic style affects dialogue moves. Pennebaker and King (1999) show that linguistic style is an independent and meaningful way of exploring personality. There is evidence that people’s personality traits influence their use of language. For instance, extroverted individuals are able to deceive others more easily, while neurotic individuals are less likely to be deceived (Riggio et al., 1988). The charisma of a speaker has been shown to be closely related to an extroverted character (Bono and Judge, 2004). There is also a strong relation between extroversion and conscientiousness and positive influences, as well as between neuroticism and dissent and various negative influences (Watson and Clark, 1992). Thus, an individual’s personality does affect the decision-making process in dialogue.

Cooperation in dialogue is a widespread phenomenon, and Allwood et al. (2000) identified four features of cooperation: cognitive consideration, joint purpose, ethical consideration and trust. When engaging in a collaborative dialogue, where the interlocutor decides his next move based on the intentions of the other and a variety of information deriving from the context of the dialogue, it seems that character has a broad influence on the course of the dialogue. Thus, it seems natural that a dialogue participant (DP) should also take into account the other’s character traits in order to choose the appropriate move. In the next section, we explain the method we propose for combining character type effects and conversational type.

## 3 Methodology

In this section, we wish to explore the interaction between character type and conversational type, by considering how, given a possible range of moves, a quantitative analysis can be provided for move selection.

### 3.1 Character Type

Researchers have developed a relatively unanimous consensus on personality description patterns, proposing the Big Five model of personality (Goldberg, 1992). Within this model there are five traits that can be used to capture many aspects of human personality. The Big Five personality traits (OCEAN) can be assessed by the NEO-PI-R (Costa Jr and McCrae, 2008). The traits can be described as follows:

* • Openness: the ability to be imaginative, aesthetic, emotional, creative, intelligent, etc.

* • Conscientiousness: displays characteristics such as competence, fairness, organization, due diligence, achievement, self-discipline, caution, restraint, etc.

* • Extroversion: exhibits qualities such as enthusiasm, sociability, decisiveness, activity, adventure, optimism, etc.

* • Agreeableness: the qualities of trust, altruism, straightforwardness, compliance, humility, empathy, etc.

* • Neuroticism: difficulty balancing emotional traits such as anxiety, hostility, repression, self-awareness, impulsivity, vulnerability, etc.

Goldberg (1992) gave a pertinent method to quantify character types in terms of a 5-dimensional vector $[o,c,e,a,n]$.
We write $\chi_{s}$ for the self character type scale vector and $\chi_{o}$ for the other’s character type scale vector. In addition, with the development of machine learning and deep learning methods within NLP, a variety of approaches have been implemented for automatic recognition of personality in conversation and text (Mairesse et al., 2007). Jiang et al. (2019) used attentive networks and contextual embeddings to detect personality traits from monologues and multiparty dialogues. Given a text (or utterance), one can calculate its character type scale vector with a robust prediction model. We define $c_{i}$ as the predicted character type vector of the $i$th dialogue move. We note that by calculating the similarity between $\chi$ and $c_{i}$, we obtain the extent to which a given dialogue move fits either the self character type or the other character type. Note that $\chi_{s}$ is a dialogue interlocutor’s intrinsic property which does not show great change during conversation; but given one’s imperfect information, $\chi_{o}$ will change once new evidence arises, and can be modified by applying Bayes’ rule.

### 3.2 Conversational Type

Pennebaker and King (1999) also indicated that linguistic style is influenced by the situation in which interlocutors find themselves. Wong and Ginzburg (2018) provided a topological perspective on the space of conversational types based on the distribution of Non-Sentential Utterances (NSUs) within each type. Wong (2018) developed a model of a conversational type couched in TTR (Cooper, 2005). On this view, a conversational type is a 4-tuple {$ConvRules$, $InitState$, $FinState$, $G$}; here $ConvRules$ represents a set of conversational rules (transition rules between different dialogue states, _dialogue gameboards_, of type $DGBType$ (Ginzburg, 2012)), and $InitState$ and $FinState$ are the initial and final DGB states.

$DGBType$ $\mapsto$

    [ spkr     : Ind
      addr     : Ind
      utt-time : Time
      c-utt    : addressing(spkr, addr, utt-time)
      FACTS    : set(Proposition)
      Pending  : list(Locutionary Proposition)
      Moves    : list(Locutionary Proposition)
      QUD      : poset(Question) ]

$G$ is the grammar, which serves the distinctive language use of each conversational type.

##### An Example

Consider a commercial transaction conversational type involving shopping in a bakery, so that we have:

    Bakery =
    [ participant : InteractionGroup ∧ [ c1 : customer(A)
                                         c2 : baker(B) ]
      qnud-set = QS : poset(question)
      c1 : { λx.InShopBuy(A,x), λP.P(A), λP.P(B), λx.Pay(A,x) } ∈ QS
      moves : list(IllocProp) ]

this involves:

* • participants: Baker and customer.
* • qnud-set: questions to be discussed during the interaction

* • moves: dialogue moves made.

Wong (2018) argued that clarification interactions provide data showing that interlocutors can be uncertain about the conversational type that classifies the interaction they are in, as in the following example:

    (Context: A is being interviewed by B)
    A: Hi
    B: Hi . . .
    B (1): have you seen Blackklansman yet?
    A (2): Wait— is this an informal chat or a formal interview?
    B: A bit of both.
    (Example (54) from Wong (2018))

Thus, the DP has an opaque or uncertain ”guess” as to the conversational type, and this also influences the decision-making process. Following probabilistic TTR (Cooper et al., 2014), we assume that the DP has a probabilistic conversational type in the private part of the information state, which can be updated during dialogue by Bayesian inference. We model the information state as follows:

    InformationState =
    [ private : [ CharacterType : [ Self  : Vector
                                    Other : Vector ]
                  Goals     : Set(Prop)
                  Tmp       : private
                  ConvType  : ConvType
                  Conv-prob : [0, 1] ]
      dgb : DGBType ]

In the private part, CharacterType stores the self and other character type vectors; Goals tracks a set of (futurate) propositions that the DP wants realized; Tmp is a backup of the previous private state, used for reasoning about the current CharacterType and for updating Conv-prob; ConvType is the conversational type; and Conv-prob, which we introduce here, indicates the probabilistic conjecture concerning the current conversational type.

### 3.3 Move Space

Individuals choose moves from various kinds of possible options. Ginzburg et al. (2019) provided a taxonomy of the response space of queries: 14 possible categories are given by a corpus study of English and Polish. The field of Natural Language Generation has some useful probabilistic methods to generate the range of potential moves, such as the Sequence-to-Sequence model (Sutskever et al., 2014). See et al. (2019) proposed a new conditional weight control technique to make the response space more human-like. Popular pre-trained models like BERT (Devlin et al., 2018) exemplify NLG capability with fine-tuning. We define $A$ as a dialogue move space vector composed of entries $a_{i}$ representing the $i$th move. In the current work we will not discuss how to specify the possible moves, but will leave this for future work.

### 3.4 Decision Making

#### 3.4.1 Global View

Levelt (1993) proposed that speakers monitor themselves while speaking. In other words, individuals have a self-criticism mechanism enabling them to reflect on their behaviors, emotions and speaking. We dub this the SelfMonitor. Given our analysis in the first two subsections, a DP’s move choice in the dialogue is influenced by her character type, the other interlocutor’s character type, and the conversational type.
When a DP tries to respond to the other party’s dialogue move, she first constructs a dialogue move space, which yields a set of possible utterances that the DP can use. The DP typically makes a conjecture about the other’s character type in terms of individual personality traits, based on her a priori knowledge of that individual and the current state of the dialogue. In addition, the DP has a probabilistic assumption about the present conversational type, given her cognitive state. Subsequently, the DP’s SelfMonitor determines which factor is more valuable in this context. Eventually a move is selected by comparing each possible move’s affinity for each factor with the weighting of each factor as determined by the SelfMonitor.

#### 3.4.2 Mathematical Modeling

After the above analysis, we offer a mathematical model to explicate this process. In evaluating possible moves, we have three important factors: self character type, other character type, and the conversational type. We want to provide a real valued function $\rho$ to evaluate each move in the dialogue move space.

* • $a_{i}$: the $i$th move.

* • $\overrightarrow{\chi_{s}}$: self character type vector.

* • $\overrightarrow{c_{i}}$: character type vector for the $i$th move.

* • $\alpha$: weight for the self character type effect.

* • $\overrightarrow{\chi_{o}}$: current estimate of the other’s character type vector.

* • $\beta$: weight for the other character type effect.

* • $p$: probabilistic conjecture of the conversational type.

* • $d_{i}$: the $i$th move’s conformity with the conversational type, ranging from -1 to 1.

* • $\gamma$: weight for the probabilistic conversational type.

* • $W=[\alpha,\beta,\gamma]$.

Conformity represents the degree to which a dialogue move conforms to the current conversational type. In other words, it can be modeled as the evaluation score this dialogue move receives in the dialogue context. In order to calculate the “affinity” between the character type vectors {$\chi_{s}$, $\chi_{o}$} and the move vector $c_{i}$, we use cosine similarity, defined as follows:

$simi(A,B)=\cos(\theta)=\frac{A\cdot B}{\|A\|\|B\|}$ (1)

Then we define the function $\rho(a_{i})$:

$\rho(a_{i})=\alpha\cdot simi(\overrightarrow{c_{i}},\overrightarrow{\chi_{s}})+\beta\cdot simi(\overrightarrow{c_{i}},\overrightarrow{\chi_{o}})+\gamma\cdot d_{i}\cdot p$ (2)

and let $X=[simi(\overrightarrow{c_{i}},\overrightarrow{\chi_{s}}),\ simi(\overrightarrow{c_{i}},\overrightarrow{\chi_{o}}),\ d_{i}\cdot p]^{T}$ be the decision factor matrix, obtaining:

$\rho(a_{i})=W\cdot X$ (3)

where $\alpha+\beta+\gamma=1$. The weights $\alpha$, $\beta$, $\gamma$ are in fact estimated by the SelfMonitor based on information deriving from the information state. We believe those weights are mainly fixed at the beginning of the conversation, because great changes in strategic choice lead to the suspicion of deception in some cases (Riggio et al., 1988). This estimation process can be reverse-engineered by observing a DP’s selection of moves in the dialogue.

After the calculation of $\rho$, we get the score for each move. This score alone cannot determine the final decision: we also need to take into account features that we have not yet discussed or observed, so we probabilize the scores, i.e., convert them into a probability distribution using the $softmax$ function:

$softmax(a_{i})=\frac{\exp(\rho(a_{i}))}{\sum_{j}\exp(\rho(a_{j}))}$ (4)

We then obtain a probability estimate for each move: the greater the probability, the more inclined the DP is to choose this move.
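To make the pipeline of Eqs. (1)–(4) concrete, here is a minimal Python sketch of the scoring step. It is an illustration rather than the paper’s implementation: the helper names (`cosine_similarity`, `rho`, `softmax`) are ours, and the numerical inputs are the assumed vectors from the worked example in §3.5.3 below; small discrepancies with the figures reported there may arise from rounding of intermediate values.

```python
import math

def cosine_similarity(a, b):
    """Eq. (1): simi(A, B) = A . B / (|A| |B|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rho(c_i, d_i, chi_s, chi_o, p, weights):
    """Eqs. (2)/(3): weighted sum of the three decision factors."""
    alpha, beta, gamma = weights  # must sum to 1
    return (alpha * cosine_similarity(c_i, chi_s)
            + beta * cosine_similarity(c_i, chi_o)
            + gamma * d_i * p)

def softmax(scores):
    """Eq. (4): turn preference scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Assumed inputs, taken from the first scenario of the worked example in Sec. 3.5.3:
chi_s = [0.0, 0.3, 0.0, 0.0, 0.5]    # baker's own character vector
chi_o = [0.0, 0.0, -0.1, -0.4, 0.2]  # estimate of the customer's character
p = 0.98                             # confidence in the bakery conversational type
weights = (0.1, 0.1, 0.8)            # (alpha, beta, gamma)

moves = {  # move label: (character vector c_i, conformity d_i)
    "(i) 1.90":          ([0.0, 0.0, -0.1, -0.4, 0.2], 0.8),
    "(ii) Get out ...":  ([0.3, -0.5, 0.0, -0.7, 0.8], -1.0),
    "(iii) Please ...":  ([0.2, 0.0, 0.3, 0.7, -0.2], 0.3),
    "(iv) 1.90 and ...": ([0.5, 0.6, 0.4, 0.7, -0.4], 0.7),
}

scores = [rho(c, d, chi_s, chi_o, p, weights) for c, d in moves.values()]
for label, s, prob in zip(moves, scores, softmax(scores)):
    print(f"{label}: rho = {s:+.4f}, P = {prob:.4f}")
# e.g. move (i) gets rho ~ 0.7646, matching the value reported in Sec. 3.5.3
```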
### 3.5 Example

Here we illustrate our proposed approach with an example.

#### 3.5.1 Scenario

In a bakery, we observe a customer and baker’s buying and selling process during the COVID-19 pandemic.

##### Goals

For simplicity we fix the final goals as follows:

* a. Customer goal: buy two croissants.

* b. Baker goal: sell the two croissants and obtain the desired price.

It is worth noting that both players may have more complex goals or a series of goals. For example, the customer might want to have appropriate desserts for dinner. In order to achieve such goals, DPs often divide them into several specific, simpler sub-goals. In this case, the sub-goals might be:

1. 1. What are the best desserts in this bakery?

2. 2. From among those best desserts, which one will fit the dishes I prepare tonight?

For our current purposes, however, we will avoid such complications, despite their importance in a variety of cases.

#### 3.5.2 Move Space

We assess a baker’s possible responses to the customer’s initial utterance: ”2 croissants”. We assume the following possible responses of the baker:

    Client: 2 croissants.
    Baker: (i) 1.90.
           (ii) Get out of the bakery, you’re not wearing a mask.
           (iii) Please would be nice.
           (iv) 1.90 and please would be nice.

(i) would lead to a quick end that meets both parties’ needs. This ”style” is used in most everyday cases, disregarding other conversational factors and attending only to the baker’s final goals. If the baker considers particularly the conversational type’s impact, he would choose (i) to advance the conversation, thereby looking forward to finalizing the conversation early and achieving his goal.

(ii) would lead to an ”unpleasant” conversation. The baker thinks that the customer’s behavior is disrespectful (a short demand lacking politeness). He therefore uses the lack of a mask as a pretext, or has in mind a competing goal of fighting against this disrespect. In the end, neither participant’s goals are achieved.

(iii) This choice shows that the baker wants to have a pleasant and respectful conversation above all. It is clear that the baker does not assign much weight to his final goals or the conversational type; instead he prioritizes his psychological needs. The question under discussion would shift to another topic, and the conversation might evolve into a dispute, though with lower probability than in (ii).

(iv) indicates that the baker wants to persist with the trade without compromising either his final goals or his psychological needs. This seems very much a compromise move, but the final state would still depend on the customer’s interpretation of the dialogue pragmatics.

#### 3.5.3 A worked example

For this example, character type vectors have the form [o, c, e, a, n]. We assume that the baker’s character vector is $\chi_{s}$ = [0.0, 0.3, 0.0, 0.0, 0.5], showing some conscientiousness and relatively high neuroticism, and that Conv-prob is $p$ = 0.98, reflecting the baker’s high confidence that the interaction conforms to the bakery conversational type. Following the initial state $S_{0}$, the baker hears ”2 croissants” from the client, after which he updates the information state, and the SelfMonitor chooses values for $\alpha,\beta,\gamma$.
For example, we assume $\alpha$ = 0.1, $\beta$ = 0.1 and $\gamma$ = 0.8; in this circumstance, the baker attends more to the conversational type than to self character type and other character type. Then:

$CharacterType.Other=\chi_{o}$

$\chi_{o}=\mu(\Uparrow Tmp.CharacterType.Other,\Uparrow dgb)$

$\mu$ is the updating function for the other’s character type, with parameters the previous state’s character type vector and the current DGB. We assume that after the update of the other’s character type we have $\chi_{o}$ = [0.0, 0.0, -0.1, -0.4, 0.2], i.e., the customer is estimated to be rather disagreeable (-0.4) and a little neurotic (0.2).

(i): We assume that ”1.90” shows disagreeableness (-0.4) and slight neuroticism (0.2), so $c_{1}$ = [0.0, 0.0, -0.1, -0.4, 0.2], and $d_{1}$ = 0.8, since ”1.90” conforms well with the bakery conversational type. Then according to function (2) we have: $\rho(a_{1})$ = 0.7646

(ii): We assume that ”Get out of the bakery, you’re not wearing a mask” shows high disagreeableness (-0.7), low conscientiousness (-0.5) and high neuroticism (0.8), so $c_{2}$ = [0.3, -0.5, 0.0, -0.7, 0.8], and $d_{2}$ = -1.0, showing incongruity with the bakery type. As a result, we obtain $\rho(a_{2})$ = -0.7080

(iii): We assume that ”Please would be nice” shows high agreeableness (0.7) and slight extroversion (0.3), so $c_{3}$ = [0.2, 0.0, 0.3, 0.7, -0.2], and $d_{3}$ = 0.3, showing low conformity with the conversational type. Consequently, we obtain $\rho(a_{3})$ = 0.1201

(iv): We assume that ”1.90 and please would be nice” shows relatively high openness (0.5), conscientiousness (0.6) and extroversion (0.4), high agreeableness (0.7) and low neuroticism (-0.4), so $c_{4}$ = [0.5, 0.6, 0.4, 0.7, -0.4], and $d_{4}$ = 0.7, showing high conformity with the bakery type. So we obtain $\rho(a_{4})$ = 0.4727

We then apply the $softmax$ function to those preference scores:

$softmax(\overrightarrow{\rho(a_{i})})$ = [0.3998, 0.0917, 0.2099, 0.2986]

This indicates that the probability that the baker selects (i) is 39.98%; (iv) has the next highest score. All this assumes that the baker’s character type features relatively high neuroticism and that his commitment to the conversational type is high. Given his character type, it is reasonable to conclude that he would not be extremely polite (option iv), but not rude either, because of the constraint of conformity with the conversational type.

Now we want to illustrate that if we change the distribution of certain parameters, things can flip significantly. We change the weights ($\alpha,\beta,\gamma$) to [0.3, 0.1, 0.6], so that the baker is concerned a little more with his own character type (0.3 rather than 0.1) but slightly less than before with the conversational type (0.6). And we assume the baker’s character type vector is [0.5, 0.7, 0.3, 0.8, -0.5], which shows high conscientiousness (0.7), low neuroticism (-0.5) and high agreeableness. Then we obtain: $\rho(\overrightarrow{A})$ = [0.3457, -0.7277, 0.3217, 0.6946], and the resulting probability distribution is

$softmax(\overrightarrow{\rho(a_{i})})$ = [0.2677, 0.0915, 0.2613, 0.3795]

This indicates that the baker would choose option (iv) for the next move with 38% probability, an option which balances the baker’s final goal and personal psychological needs (high conscientiousness).

Finally, we modify ($\alpha,\beta,\gamma$) to [0.8, 0.1, 0.1] and assume that the baker’s character type vector is [0.2, -0.3, 0, -0.5, 0.8], which shows high neuroticism (0.8) and low agreeableness (-0.5).
Finally, we modify ($\alpha,\beta,\gamma$) to [0.8, 0.1, 0.1] and assume that the baker’s character type vector is [0.2, -0.3, 0, -0.5, 0.8], which shows high neuroticism (0.8) and low agreeableness (-0.5). What we obtain is $\rho(\overrightarrow{A})$ = [0.8007, 0.7652, -0.5229, -0.5032], from which the resulting distribution is $softmax(\overrightarrow{\rho(a_{i})})$ = [0.3996, 0.3856, 0.1064, 0.1085] This indicates that the probability of choosing (i) and (ii) is now about the same. In this scenario the baker focuses more on his own character type. (ii) shows complete incongruity with the bakery type, which indicates a complete violation of the baker’s original final goal. Hence, at this point, the baker is in a dilemma. It is worth noting that this state will lead to a ”breakthrough point”: if the baker chooses (ii), it means that the baker chooses to change his final goal or conversational type. The consequence of this is that $Goals$, $ConvType$ and $Conv-prob$ should be changed in the private part of the information state. How to effect this in a formal way, we leave to future work. ## 4 Conclusion Character is a person’s stable attitude towards reality, and it also affects one’s performance in dialogue. Conversational type has been, in one way or another, one of the principal notions of dialogue research since the early days of AI work on dialogue. It reflects domain-specific interaction, in particular the move space given to the conversational participants. We have tried to show in this paper that investigating the interaction of these two factors is worthwhile. In particular, we present a method for the decision making process for moves that combines character type and conversational type. We present a mathematical model that combines these factors after assigning numerical values to the various parameters involved, and we demonstrate by means of an example how this works. ## 5 Future Work In this paper, we have made a preliminary proposal concerning the modelling of character types and their combination with conversational types. We aim to refine this in future work in ways that include the following: * • Wong (2018) gave a formal method to classify conversations into types. We believe that under realistic conditions, there are often multiple conversational types involved in a single conversation, which may involve sequential transformations or overlapping phenomena. * • In the approach sketched here, we use a probabilistic TTR for classifying the conversational type. However, in practice this assessment can change as the dialogue unfolds. We hope to develop methods that incorporate such dynamism. * • Personality analysis based on the Big Five theory is robust, but interlocutors’ inferences about each other’s character types are also in flux during conversations; examining the impact of this change process on our approach is worth pursuing in future work. * • Ginzburg et al. (2019) provided an approach to categorizing responses to questions. We hypothesize that the conversational type has the role of delimiting the range of possible moves. We aim to characterize this variability. * • This article introduces the concept of move conformity, whose automatic detection across different types of dialogue is left to future work. This could be achieved by modifying an existing NLG evaluation model (e.g. BLEU Papineni et al. (2002), ADEM Lowe et al. (2017)). * • This paper discussed several cases in detail, but they are all constructed examples, so fitting this model to actual conversations (e.g. the British National Corpus) or to scripted dialogues annotated for character types (e.g. FriendsQA Yang and Choi (2019)) and testing its experimental predictions is desirable.
### Acknowledgments This work was supported by an internship funded by the Institut Universitaire de France, within Jonathan Ginzburg’s project Unifying Verbal and Non-verbal Interaction. We also acknowledge the support of the French Investissements d’Avenir-Labex EFL program (ANR-10-LABX-0083). Thanks to Prof. Ginzburg for his patient guidance during the internship. Thanks also to three anonymous reviewers for SemDial for their detailed and perceptive comments. ## References * Allen and Perrault (1980) James Allen and Ray Perrault. 1980. Analyzing intention in utterances. _Artificial Intelligence_, 15:143–178. * Allwood (2000) Jens Allwood. 2000. An activity based approach to pragmatics. In Harry Bunt, editor, _Abduction, Belief and Context in Dialogue; Studies in Computational Pragmatics_. John Benjamins, Amsterdam. * Allwood et al. (2000) Jens Allwood, David Traum, and Kristiina Jokinen. 2000. Cooperation, dialogue and ethics. _International Journal of Human-Computer Studies_, 53(6):871–914. * Asher et al. (2017) Nicholas Asher, Soumya Paul, and Antoine Venant. 2017. Message exchange games in strategic contexts. _Journal of Philosophical Logic_, 46(4):355–404. * Bakhtin (2010) Mikhail Mikhalovich Bakhtin. 2010. _Speech genres and other late essays_. University of Texas Press. * Bono and Judge (2004) Joyce E Bono and Timothy A Judge. 2004. Personality and transformational and transactional leadership: a meta-analysis. _Journal of Applied Psychology_, 89(5):901. * Burnett (2019) Heather Burnett. 2019. Signalling games, sociolinguistic variation and the construction of style. _Linguistics and Philosophy_, 42(5):419–450. * Cohen and Perrault (1979) Philip Cohen and Ray Perrault. 1979. Elements of a plan-based theory of speech acts. _Cognitive Science_, 3:177–212. * Cooper (2005) Robin Cooper. 2005. Records and record types in semantic theory. _Journal of Logic and Computation_, 15(2):99–112. * Cooper et al. (2014) Robin Cooper, Simon Dobnik, Shalom Lappin, and Staffan Larsson. 2014. A probabilistic rich type theory for semantic interpretation. In _Proceedings of the EACL 2014 Workshop on Type Theory and Natural Language Semantics (TTNLS)_, pages 72–79. * Costa Jr and McCrae (2008) Paul T Costa Jr and Robert R McCrae. 2008. _The Revised NEO Personality Inventory (NEO-PI-R)_. Sage Publications, Inc. * Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_. * Ginzburg (2012) Jonathan Ginzburg. 2012. _The Interactive Stance_. Oxford University Press. * Ginzburg et al. (2019) Jonathan Ginzburg, Zulipiye Yusupujiang, Chuyuan Li, Kexin Ren, and Paweł Łupkowski. 2019. Characterizing the response space of questions: a corpus study for English and Polish. In _Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue_, pages 320–330. * Goldberg (1992) Lewis R Goldberg. 1992. The development of markers for the big-five factor structure. _Psychological Assessment_, 4(1):26. * Hymes (1974) Dell Hymes. 1974. Ways of speaking. _Explorations in the Ethnography of Speaking_, 1:433–451. * Jiang et al. (2019) Hang Jiang, Xianzhe Zhang, and Jinho D Choi. 2019. Automatic text-based personality recognition on monologues and multiparty dialogues using attentive networks and contextual embeddings. _arXiv preprint arXiv:1911.09304_. * Larsson (2002) Staffan Larsson. 2002. _Issue-based Dialogue Management_. Ph.D. thesis, Göteborg University.
* Levelt (1993) Willem JM Levelt. 1993. _Speaking: From intention to articulation_, volume 1. MIT Press. * Lowe et al. (2017) Ryan Lowe, Michael Noseworthy, Iulian V Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. _arXiv preprint arXiv:1708.07149_. * Mairesse et al. (2007) François Mairesse, Marilyn A Walker, Matthias R Mehl, and Roger K Moore. 2007. Using linguistic cues for the automatic recognition of personality in conversation and text. _Journal of Artificial Intelligence Research_, 30:457–500. * Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics_, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. * Pennebaker and King (1999) James W Pennebaker and Laura A King. 1999. Linguistic styles: Language use as an individual difference. _Journal of Personality and Social Psychology_, 77(6):1296. * Purver (2004) Matthew Richard John Purver. 2004. _The theory and use of clarification requests in dialogue_. Ph.D. thesis, University of London. * Riggio et al. (1988) Ronald E Riggio, Charles Salinas, and Joan Tucker. 1988. Personality and deception ability. _Personality and Individual Differences_, 9(1):189–191. * See et al. (2019) Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments. _arXiv preprint arXiv:1902.08654_. * Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In _Advances in Neural Information Processing Systems_, pages 3104–3112. * Watson and Clark (1992) David Watson and Lee Anna Clark. 1992. On traits and temperament: General and specific factors of emotional experience and their relation to the five-factor model. _Journal of Personality_, 60(2):441–476. * Wittgenstein (1953) Ludwig Wittgenstein. 1953. _Philosophical Investigations_. Basil Blackwell, Oxford. * Wong (2018) Kwong-Cheong Wong. 2018. _Classifying Conversations_. Ph.D. thesis, Paris Diderot University. * Wong and Ginzburg (2018) Kwong-Cheong Wong and Jonathan Ginzburg. 2018. Conversational types: a topological perspective. * Yang and Choi (2019) Zhengzhe Yang and Jinho D Choi. 2019. FriendsQA: Open-domain question answering on TV show transcripts. In _Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue_, pages 188–197.
# Privacy-Protecting COVID-19 Exposure Notification Based on Cluster Events _This paper was presented at the NIST Workshop on Challenges for Digital Proximity Detection in Pandemics: Privacy, Accuracy, and Impact, January 28 02021. Dates follow conventions of the Long Now Foundation._ Paul Syverson, U.S. Naval Research Laboratory, <EMAIL_ADDRESS> ###### Abstract We provide a rough sketch of a simple system design for exposure notification of COVID-19 infections based on copresence at cluster events—locations and times where a threshold number of tested-positive (TP) individuals were present. Unlike other designs, such as DP3T or the Apple-Google exposure-notification system, this design does not track or notify based on detecting direct proximity to TP individuals. The design makes use of existing or in-development tests for COVID-19 that are relatively cheap and return results in less than an hour, and that have high specificity but may have lower sensitivity. It also uses readily available location tracking for mobile phones and similar devices. It reports events at which TP individuals were present but does not link events with individuals or with other events in an individual’s history. Participating individuals are notified of detected cluster events. They can then compare these locally to their own location history. Detected cluster events can be publicized through public channels. Thus, individuals not participating in the reporting system can still be notified of exposure. A proper security analysis is beyond the scope of this design sketch. We do, however, discuss resistance to various adversaries and attacks on privacy, as well as false-reporting attacks. The goal of this brief paper is to introduce the idea of, and contextual motivation for, using mobile phone location data from COVID-19 tested-positive (TP) individuals to identify cluster events, and then to notify people of potential exposure simply by notifying them of cluster events. A sketch of a basic design to do this in a privacy-protecting manner is presented. We do not discuss even high-level particulars of necessary associated system features. Existing privacy-protecting exposure-notification systems for COVID-19 infection generally do their detection and notification based on detected proximity to TP individuals [1, 21, 19]. Location information is typically not recorded by such systems. DP3T intentionally “avoids collecting location data, which is highly sensitive and very difficult to publish in a privacy-preserving way” [21]. While SafePaths explores adding GPS location information to proximity tracing [15], this is considered auxiliary to the primary task of identifying individuals who have been in close proximity to TP infectious individuals. Deployed COVID-19 exposure-notification systems do employ location information. For example, the National Health Service of the United Kingdom (NHS) has set up a system whereby venues (pubs, hairdressers, village libraries, etc.) in England and Wales are required to obtain and post QR codes for patrons to scan when visiting. If the venue is later determined to be a COVID-19 hotspot, it is uploaded to a list that patrons can download and check against visited places scanned into their phones [14]. This is unlike our approach in multiple respects. It only works for fixed locations, namely the premises of establishments of specific types. It requires participating locations to create and post QR codes.
It only works for locations that individuals have scanned into their phones. And it identifies locations rather than events (locations at times). Safe2 [18] is a mobile phone app that combines self-assessment information about symptoms with proximity contact detection to provide a “Safe Score” indicating risk of infection, ranging from “Healthy” to “Confirmed”. It also keeps track of location history, shared in a privacy-protecting way somewhat similar to what we suggest below. Based on visited locations and contacts, an individual may be classified as a likely asymptomatic carrier. Notifications will then be sent to others who have been detected in close proximity to that individual, thus decreasing their Safe Scores. The approaches and apps mentioned above by no means constitute a complete list, even ignoring the dynamic state of introduction and development of new apps. Nor do we provide more than a brief note of the features of the apps that we do mention as most relevant to or contrasting with our approach. For example, we have not otherwise mentioned PACT [17] or its incorporation in SafePaths. Buchanan et al. have produced a survey and analysis of privacy-preserving COVID-19 contact tracking that was relatively comprehensive at the time of writing [3]. Landau’s recent book provides an introduction to the technology of contact tracing and its usefulness for public health, in which she discusses efficacy, equity, and privacy [11]. For Safe2 and the other privacy-protecting systems cited above, close proximity is determined by detected Bluetooth communication. Dehaye and Reardon have identified three attacks on Bluetooth-based proximity detection in the context of COVID-19 contact tracing apps [7]. The attacks require an adversary capable of placing a software development kit (SDK) in a moderately successful app, but Dehaye and Reardon argue that this is much easier than is usually assumed. See the paper for specifics. The good news is that we need not accurately determine how hard such attacks are, or how hard they are to counter, if our exposure notification system does not depend on phones broadcasting proximity information. Moreover, specifics of COVID-19 infection patterns and COVID-19 testing technology may permit simplifications of privacy-protecting notifications based on presence at cluster events rather than on directly detecting proximity to TP individuals. First, unlike common influenzas and other familiar diseases, COVID-19 appears to have a high degree of clustering in its dispersion and to be spread more in events where infected individuals spend time in close proximity to groups of others. There are other dispersion clustering factors, such as whether gatherings are indoors and the adequacy of ventilation, and clustering also occurs around some people who are individually linked to a high number of infections. But generally, gatherings play a significant role in infection rates, whether or not they are exacerbated by other factors [10]. This includes both super-spreader large gatherings for social or cultural events as well as more moderate-sized gatherings [4]. Second, most tests for COVID-19 initially rolled out required specialized equipment to evaluate collected samples, required days or more to return results, were expensive, or all of the above. But analyses indicate that faster, cheaper point-of-care tests can be more effective at identification of infected individuals than more sensitive tests that are slower [13, 9]. And the U.S.
Department of Health and Human Services, in partnership with the U.S. Department of Defense, is now providing rapid point-of-care tests to communities across the United States [8]. Similarly, under the auspices of the World Health Organization, a global partnership has planned to make 120 million rapid tests available in low- and middle-income countries [23]. Analyses of effectiveness focus on identification and notification of infectious individuals. But coupled with dispersion patterns, another advantage of cheap, point-of-care tests emerges. “[G]iven the huge numbers associated with these clusters, targeting them would be very effective in getting our transmission numbers down” [22]. Also, identifying a cluster does not require that everyone who was already infected at a particular event (location and time) has tested positive or even has been tested at all. As long as a sufficient _number_ of TP individuals are associated with a given event, it is not important to identify which individuals were present, even pseudonymously, in order for the event to constitute a cluster. Since cluster events are indicators of risk of infection for all copresent at the event, informing individuals of those cluster events at which they were present is sufficient to notify them of potential exposure. And individuals can determine whether they were present at a cluster event entirely locally, by having their phones compare cluster events about which they were notified with their own location history. Thus, we need only count the number of distinct TP individuals at an event to identify it as a cluster event. Like other privacy-protecting COVID-19 notification systems, we can make use of anonyms (ephemeral pseudo-random identifiers) for each individual (where an individual is identified with the individual’s phone) at a given time. These can be combined with location data available to the phone, e.g., by GPS and other phone localization inputs, into an ordered pair, $(\textit{random\_ID}_{(i,t)},\textit{location}(\textit{random\_ID}_{(i,t)},t)\,)$. Phones do typically have automatic access to location data but do not typically store location histories. Location histories are of course frequently tracked by third parties, and apps to track location history are readily available for the major mobile platforms. Also, privacy-protecting tracking and storing of mobile device location data for personal use has been studied since at least the mid-nineties [16]. As noted above, the Safe2 app specifically stores and uses location history for COVID-19 exposure notification. And though Safe2 relies on Bluetooth and GPS, GPS alone can be effective at detecting useful COVID clustering information [20]. The specific means of localization is not central to our approach, whether GPS, WiFi, Bluetooth, or other background signals (such as a scanned QR code, as in the NHS app), as long as it is decoupled from the reporting of location history. As with other privacy-protecting aspects not novel in this design, we simply assume a privacy-protecting location history system and consider particulars out of scope for this paper. (A schematic sketch of how such per-time-slot pairs might be produced on a phone is given below.)
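To make the pair-recording step concrete, here is a minimal illustrative sketch in Python. Everything in it (the function names, the 15-minute slot size, and the rounding of coordinates to a coarse grid cell) is our own assumption for illustration, not part of the design proper; a deployed system would pick localization granularity matched to epidemiologically relevant proximity.

```python
import secrets
import time

SLOT_SECONDS = 15 * 60  # assumed slot length; the design leaves this open

def coarse_cell(lat, lon, precision=3):
    # Round coordinates to a grid cell (roughly 100 m at precision=3);
    # granularity is an illustrative choice, not specified by the design.
    return (round(lat, precision), round(lon, precision))

def record_pair(lat, lon, history):
    """Append one (random_ID, location, time-slot) entry to the local history."""
    slot = int(time.time()) // SLOT_SECONDS
    anonym = secrets.token_hex(16)  # fresh ephemeral identifier for this slot
    history.append((anonym, coarse_cell(lat, lon), slot))

history = []
record_pair(48.8566, 2.3522, history)  # sample coordinates
# On a positive test, all entries within the critical period are submitted
# through an association-protecting channel (e.g., via mixes), so that
# pairs cannot be linked to each other or to the submitting phone.
```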
If an individual tests positive, she submits such pairs for all times within a critical period, typically covering the maximum past interval during which she might have become infected. (Note that backward tracing of events appears to be more effective than forward tracing [10]. Thus it is important to trace back to the time she might have become infected, not merely the shorter period back to the time she might have become infectious.) Submission can be to a decentralized repository or to a centralized repository, as long as the act of submission does not reveal the association of submitted pairs. Given the limited goals of this paper, we simply assume such an association-protecting submission system. As an over-simple, somewhat concrete example of such a system, assume a centralized repository with submission protected by one or more mixes [6], such that the output of the mixes is an unordered collection of all such pairs submitted during a given mix-system firing interval. If notification of the results of a point-of-care test as well as notification of exposure are automated in a COVID-19 exposure notification app, and if adequate privacy protections are incorporated, then individuals have a self-interested incentive to use the app even if its primary goal is to control spread. Nonetheless, as we shall see, the system can provide useful notification of exposure even to nonparticipants without an installed app. #### Malicious reporting of positive tests Another potential advantage of a cluster-event based approach is that, even without limiting TP reporting to authorized individuals, it provides automatic counters to the possibility that “people may falsely report they have been infected to cause mischief or to keep people home in order to shut down school or even to disrupt an election” [5]. An individual falsely reporting a positive test cannot easily create such a result because they are unlikely to be the one to transition a location and time across the threshold of counting as a cluster event. And they cannot report at all for a location and time unless they were present at that event. More significant coordinated copresence or system hacking of location reporting would be required. And if coordinated copresence is the mechanism, then it may be that innocents who are notified because they were present at that cluster event _should_ seek further testing and curtail social interactions. Our cluster-event design thus automatically prevents, e.g., a student who is worried about his exam next week and anonymously reports a positive test from thereby causing his school to shut down or his whole chemistry class to be forced into quarantine. A fan of one sports team cannot force a rival team into quarantine, etc. A more substantial adversary, such as a nation-state, might be able to hack location histories for a phone, create sybils of phones at a location, etc. This could support an attack at a larger scale to disrupt a critical operation or degrade the availability of essential infrastructure or emergency personnel. Since the proposed design is meant to leverage rapid, point-of-care tests, it is also conducive to requiring input from authorized testing personnel at the point of care to permit reporting a positive test. For example, an authorized TP code, tied to a unique anonym bound to the identity of the mobile phone present when an individual is given the test, could be sent at the same time test results are communicated to that phone, possibly even after the individual has left the point of care. Reporting of recent location history for that phone would then require this authorization. Specifics of what capabilities such more significant adversaries might have, and how all this would work, are beyond the scope of this paper.
We simply note here that the cluster-event approach remains compatible with requiring authorized parties to confirm TP status in order for reporting to occur. Obviously there is a tension between authorization mitigations against more powerful false-reporting attacks and the ease and effectiveness of participation in the reporting and notification system. For example, without an authorization requirement, simple tests requiring no expertise to administer or evaluate can be made available and incorporated in the exposure notification system without an authorized-testing-personnel bottleneck. And a powerful adversary might employ many other elements beyond our scope, such as compromising trusted authorization individuals, the systems used for authorization, or the physical tests themselves. The lesser adversaries described above are still countered by the proposed system even without the authorization component. This is another advantage over any system that inherently has a trusted-authorization bottleneck. #### Cluster event criteria Submitted anonym pairs can be clustered into events according to the latest understanding of what constitutes sufficient proximity in space and time to indicate a risk of exposure likely enough to merit notification. Such clustering can be based simply on the number of copresent TP individuals. Further privacy-preserving measures exist that would only reveal an uploaded location and time if a cluster event occurs there. Indeed, there are deployed systems for large-scale gathering of data and release of associated statistics with guarantees of differential privacy [2]. With small thresholds for cluster events, however, even if the system only reveals locations when there is a cluster event, there may be a tension between revealing cluster events in a differentially private way and not significantly affecting the rate of false negatives about clusters. This may not be a significant limitation, however. Without differential privacy, in principle an adversary could submit a false positive result to enhance tracking of a TP individual. But this is not trivial. If, for example, the adversary had a device that could report from the individual’s location, easier tracking is already available. So we would need to assume knowledge of suspected trajectories followed by the targeted TP individual, and a hack of the location reporting system that does not also make more straightforward tracking possible. In that case, the adversary could report TP locations and times along the suspected route to look for instances where the adversary’s report caused a cluster event. If so, and ignoring the entrance of other TP individuals causing the transition, the target’s presence could have brought those locations and times to one less than the cluster-reporting threshold. These issues are also beyond the scope of this paper. Exogenous information, if available, could affect the clustering classification. For example, a location may have been separately identified as that of a salon, restaurant, house of worship, etc. This can affect what constitutes a notification-worthy cluster: two TP individuals copresent on a street corner in a city may be too low a threshold to classify this as a cluster event meriting notification; two TP individuals copresent in a poorly ventilated small salon might be a reasonable threshold, however. (A schematic version of this thresholded counting is sketched below.)
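The following minimal sketch (ours; the threshold values and the venue lookup are invented for illustration) shows the thresholded counting just described: submitted anonym pairs are grouped by grid cell and time slot, and an event is flagged when the number of distinct anonyms reaches the applicable threshold. Since each TP individual uses a fresh random identifier in every time slot, distinct identifiers within one slot approximate distinct copresent TP individuals.

```python
from collections import defaultdict

DEFAULT_THRESHOLD = 3            # invented default; tuned per epidemiology
VENUE_THRESHOLDS = {"salon": 2}  # exogenous info can lower (or raise) thresholds

def find_cluster_events(pairs, venue_of=lambda cell: None):
    """pairs: iterable of (anonym, cell, slot) tuples from all TP submissions.
    Returns a list of (cell, slot, tp_count) cluster events to publish."""
    anonyms_at = defaultdict(set)
    for anonym, cell, slot in pairs:
        anonyms_at[(cell, slot)].add(anonym)
    events = []
    for (cell, slot), ids in anonyms_at.items():
        threshold = VENUE_THRESHOLDS.get(venue_of(cell), DEFAULT_THRESHOLD)
        if len(ids) >= threshold:
            events.append((cell, slot, len(ids)))
    return events

# Participants' phones then compare published (cell, slot) events against
# their locally stored location histories; no individual data leaves the phone.
```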
Conversely, cluster locations that are identified simply by the number of copresent TP individuals but are otherwise unknown, and that are also responsible for large or repeated cluster events, might be flagged for public health officials to investigate what is at that location and what sort of people are going there for what purposes. On the other hand, unlike direct contact-detecting approaches, since notification is only of cluster events, this approach cannot detect, and therefore cannot notify, people exposed to a single copresent individual. As noted above, however, the spread of COVID-19 is primarily through such cluster events. And this approach will also provide notification in a circumstance where an individual might never have come in close contact with anyone who tested positive (and who is also participating in the detection/notification system) but was present at an event where multiple TP individuals were detected. So it is not clear which approach is more likely to result in notification. Further, individuals from communities which have historically experienced disproportionate negative impacts of public health crises might be disinclined to participate in a contact tracing system [12]. Even as non-participants, they might nonetheless self-interestedly check a public website for cluster events and act if they notice a significant cluster at an event where they (or loved ones) were known to be present. Recall that in any case, the primary goal is to produce notifications that lead to significant reduction in spread rather than to guarantee an individual of notification upon any exposure. Relatedly, the approach allows different criteria for distance in making a clustering determination. As an oversimplified illustration, suppose a 2000 $m^{2}$ area contains dozens of TP individuals, where all individuals on the perimeter are within 2 meters of another TP individual for several minutes. The relevant cluster event for notification should probably cover the entire area at that time, even if portions of the center are 10 meters from the nearest TP individual—unless there is specific additional information that would exclude particular subareas. It may likewise be reasonable to include a larger area than just 2 meters beyond the perimeter for such a large cluster. Another potential limitation of this approach is that it identifies a cluster as being at one (possibly extended) location in a relatively brief time interval. It does not directly have a means to distinguish extended presence at a location of multiple TP individuals with few comings and goings over a period of hours (for example, at a social gathering) from a roughly persistent total of TP individuals resulting from regular turnover (for example, at a transit hub or commercial service establishment). This could matter for the time of exposure to a particular individual as an indicator of infection risk likelihood. On the other hand, whether a notified individual learns of exposure at multiple cluster events at the same location based on either of these scenarios may not matter for the recommended course of action. Notified individuals are also more likely to know the circumstances of a particular succession of notified cluster events. And if they know the circumstances, they may be more prepared to notify others personally known to be associated with those circumstances, whether or not those others participate in or monitor the notification system.
Relatedly, there is no distinction made by the basic system between multiple exposures at a succession of locations and prolonged exposure, e.g., during a shared ride in a car or mass transit vehicle. Again, it might not make a practical difference for purposes of the system whether this is identified as a single prolonged cluster event or a succession of many shorter ones. ### High-level Design Summary 1. 1. Individual tests positive for COVID-19 and enters this in the notification app on their phone (alternatively, notification of a positive result might be automated in a phone app and/or require input from a trusted testing official). 2. 2. Individual’s phone app prepares a historical list of locations and times since the beginning of the critical period based on the time of testing, each paired with a different random identifier. 3. 3. Phone submits the list of pairs through a privacy-preserving system. 4. 4. System receiving these pairs from all reporting phones clusters them into events at which multiple TP individuals were present. 5. 5. System pushes notifications of all cluster events to participants’ phones and/or posts these to a publicly accessible location, e.g., a website. 6. 6. Notified individuals learn of exposures at cluster events. The purpose of this brief paper is to introduce a novel concept of COVID-19 exposure notification based on the detection of cluster events, of which individuals are then notified. We have simply assumed adequately privacy-preserving systems for data gathering, processing, and publishing. And there are existing systems that make this assumption plausible, some of which we have cited. Details matter, however, to the security, scalability, practicality, and usability of the overall system. Some of the details we have ignored include how submission of pairs unlinks submitting individuals from locations and histories of locations, the number of submitting individuals (in a region, during an interval), resistance to de-anonymization of location histories from plausible travel paths given a collection of location-time pairs, how long various data and information are held, etc. Clustering algorithms and clustering criteria are central to the cost, basic viability, and properties provided by this approach, but we have simply assumed these will be selected to function as needed. Cognizant of these assumptions, we have nonetheless identified a number of high-level properties of this approach, which we now summarize. ### Design Features * • Does not depend on a TP individual’s phone having Bluetooth turned on in order to provide inputs. * • Does not depend on potentially exposed individuals having Bluetooth on in order for the system to find indications of exposure. * • Does not depend on direct physical interaction (Bluetooth or otherwise) between phones of TP individuals and exposed individuals. * • Exposure contact distance parameters do not depend on limitations of Bluetooth communication parameters. * • Does depend on availability of phone location history of reporting TP individuals. * • Does depend on phone location history (or human memory or…) for individuals to effectively make use of cluster event notifications or postings. * • Does reveal locations and times of cluster events, even small ones, which may identify particular households where there are multiple infections or that some sort of meeting took place at an otherwise nondescript location. * • Leverages fast test results, even if tests have only modest sensitivity.
* • Depends on approximate numbers of TP individuals in cluster events rather than specific contacts with TP individuals. * • Cannot detect exposure to TP individuals outside of cluster events. * • Can notify of exposure even without close contact to a TP individual. * • Can notify of exposure even without participation as reporting individual. * • Counters false TP reporting _and_ tolerates false TP reporting. * • Compatible with using trusted authorizations to counter false TP reporting by stronger adversaries. ## Acknowledgements Thanks to Aaron Johnson, Rob Jansen, Matt Traudt, and Ryan Wails for helpful discussions, and to Paul-Olivier Dehaye and Susan Landau for multiple helpful comments and discussions. ## References * [1] Privacy-preserving contact tracing. https://covid19.apple.com/contacttracing, 02020. * [2] Andrea Bittau, Úlfar Erlingsson, Petros Maniatis, Ilya Mironov, Ananth Raghunathan, David Lie, Mitch Rudominer, Ushasree Kode, Julien Tinnes, and Bernhard Seefeld. Prochlo: Strong privacy for analytics in the crowd. In Proceedings of the 26th Symposium on Operating Systems Principles (SOSP ’17), pages 441–459, 02017. * [3] William J Buchanan, Muhammad Ali Imran, Masood Ur-Rehman, Lei Zhang, Qammer H. Abbasi, Christos Chrysoulas, David Haynes, Nikolaos Pitropakis, and Pavlos Papadopoulos. Review and critical analysis of privacy-preserving infection tracking and contact tracing. Frontiers in Communications and Networks, December 8 02020. * [4] Muge Cevik, Julia Marcus, Caroline Buckee, and Tara Smith. SARS-CoV-2 transmission dynamics should inform policy. https://ssrn.com/abstract=3692807, September 14 02020. * [5] Lorrie Faith Cranor. Digital contact tracing may protect privacy, but it is unlikely to stop the pandemic. Communications of the ACM, 63(11):22–24, November 02020. * [6] George Danezis, Claudia Diaz, and Paul Syverson. Anonymous communication. In Burton Rosenberg, editor, Handbook of Financial Cryptography. CRC Press, 02010. * [7] Paul-Olivier Dehaye and Joel Reardon. Proximity tracing in an ecosystem of surveillance capitalism. In Proc. 19th Workshop on Privacy in the Electronic Society, WPES’20, pages 191–203. ACM, November 9 02020. * [8] COVID-19 rapid point-of-care test distribution. https://www.hhs.gov/coronavirus/testing/rapid-test-distribution/index.html, October 7 02020. * [9] Lee Kennedy-Shaffer, Michael Baym, and William Hanage. Perfect as the enemy of the good: Using low-sensitivity tests to mitigate SARS-CoV-2 outbreaks. https://dash.harvard.edu/handle/1/37363184, 02020. * [10] Sadamori Kojaku, Laurent Hébert-Dufresne, Enys Mones, Sune Lehmann, and Yong-Yeol Ahn. The effectiveness of backward contact tracing in networks. https://arxiv.org/abs/2005.02362, May 5 02020. * [11] Susan Landau. People Count: Contact-Tracing Apps and Public Health. MIT Press, April 02021. * [12] Susan Landau, Christy E. Lopez, and Laura Moy. The importance of equity in contact tracing. https://www.lawfareblog.com/importance-equity-contact-tracing, May 1 02020. * [13] Daniel B. Larremore, Bryan Wilder, Evan Lester, Soraya Shehata, James M. Burke, James A. Hay, Milind Tambe, Michael J. Mina, and Roy Parker. Test sensitivity is secondary to frequency and turnaround time for COVID-19 surveillance. medRxiv, 02020. https://www.medrxiv.org/content/early/2020/09/08/2020.06.22.20136309.full.pdf. * [14] Create a coronavirus NHS QR code for your venue. https://www.gov.uk/create-coronavirus-qr-poster, 02020. * [15] Ramesh Raskar, Abhishek Singh, and Sam Zimmerman. 
Adding Location Context to Apple/Google Exposure Notification Bluetooth API: MIT SafePaths Encryption Proposals for GPS + Bluetooth, version 0.1. https://docs.google.com/document/d/1uTjdUetEEtnwN-6_iw3HTZOdAd0kKsK7GR1YbdS10Ss/edit, April 26 02020. * [16] Michael G. Reed, Paul F. Syverson, and David M. Goldschlag. Protocols using anonymous connections: Mobile applications. In Security Protocols: 5th International Workshop, Paris, France, April 7–9, 1997, Proceedings, pages 13–23. Springer-Verlag, LNCS 1361, 01998. * [17] Ronald L. Rivest, Daniel J. Weitzner, Louise C. Ivers, Israel Soibelman, and Marc A. Zissman. PACT: Private automated contact tracing, mission and approach. https://pact.mit.edu/wp-content/uploads/2020/05/PACT-Mission-and-Approach-2020-05-19-.pdf, May 19 02020. * [18] Safe2: Frequently asked questions. https://safe2.org/faq/, 02020. * [19] SafePaths Alliance. Private kit: Safe paths; privacy-by-design. https://safepaths.mit.edu, 02020. * [20] Matteo Serafino, Higor S. Monteiro, Shaojun Luo, Saulo D. S. Reis, Carles Igual, Antonio S. Lima Neto, Matías Travizano, José S. Andrade, and Hernán A. Makse. Superspreading k-cores at the center of COVID-19 pandemic persistence. medRxiv, 02020. * [21] Carmela Troncoso, Mathias Payer, Jean-Pierre Hubaux, Marcel Salathé, James Larus, Edouard Bugnion, Wouter Lueks, Theresa Stadler, Apostolos Pyrgelis, Daniele Antonioli, Ludovic Barman, Sylvain Chatel, Kenneth Paterson, Srdjan Čapkun, David Basin, Jan Beutel, Dennis Jackson, Marc Roeschlin, Patrick Leu, Bart Preneel, Nigel Smart, Aysajan Abidin, Seda Gürses, Michael Veale, Cas Cremers, Michael Backes, Nils Ole Tippenhauer, Reuben Binns, Ciro Cattuto, Alain Barrat, Dario Fiore, Manuel Barbosa, Rui Oliveira, and José Pereira. Decentralized privacy-preserving proximity tracing. https://arxiv.org/abs/2005.12273, May 25 02020. * [22] Zeynep Tufekci. This overlooked variable is the key to the pandemic: It’s not R. The Atlantic, September 30 02020. https://www.theatlantic.com/health/archive/2020/09/k-overlooked-variable-driving-pandemic/616548/. * [23] Global partnership to make available 120 million affordable, quality COVID-19 rapid tests for low- and middle-income countries. https://www.who.int/news/item/28-09-2020-global-partnership-to-make-available-120-million-affordable-quality-covid-19-rapid-tests-for-low--and-middle-income-countries, September 28 02020.
# Gravity coupled to a scalar field from a Chern-Simons action: describing rotating hairy black holes and solitons with gauge fields Marcela Cárdenas (a), Oscar Fuentealba (b), Cristián Martínez (c,d) and Ricardo Troncoso (c,d) (a) Departamento de Física, Universidad de Santiago de Chile, Avenida Víctor Jara 3493, Santiago, Chile; (b) Université Libre de Bruxelles and International Solvay Institutes, ULB-Campus Plaine CP231, B-1050 Brussels, Belgium; (c) Centro de Estudios Científicos (CECs), Av. Arturo Prat 514, Valdivia, Chile; (d) Facultad de Ingeniería, Arquitectura y Diseño, Universidad San Sebastián, sede Valdivia, General Lagos 1163, Valdivia 5110693, Chile. <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS> ###### Abstract Einstein gravity minimally coupled to a scalar field with a two-parameter Higgs-like self-interaction in three spacetime dimensions is recast in terms of a Chern-Simons form for the algebra $g^{+}\oplus g^{-}$ where, depending on the sign of the self-interaction couplings, $g^{\pm}$ can be $so(2,2)$, $so(3,1)$ or $iso(2,1)$. The field equations can then be expressed through the field strengths of non-flat composite gauge fields, and conserved charges are readily obtained from boundary terms in the action that agree with those of standard Chern-Simons theory for pure gravity, but with non-flat connections. Regularity of the fields then amounts to requiring the holonomy of the connections along contractible cycles to be trivial. These conditions are automatically fulfilled for the scalar soliton, and they allow one to recover the Hawking temperature and chemical potential in the case of the rotating hairy black holes presented here, whose entropy can also be obtained from the same formula that holds in the case of a pure Chern-Simons theory. In the conformal (Jordan) frame the theory is described by General Relativity with cosmological constant together with a conformally coupled self-interacting scalar field, and its formulation in terms of a Chern-Simons form for suitable composite gauge fields is also briefly addressed. ††preprint: CECS-PHY-15/04 ## 1 Introduction The formulation of three-dimensional General Relativity as a Chern-Simons theory Achucarro:1986uwr ; Witten:1988hc has allowed time-honored tools available for gauge fields to be exploited, yielding a wealth of very interesting achievements on classical and quantum aspects of gravitation (for a non-exhaustive list of references, see e.g. Carlip:1995zj ; Coussaert:1995zp ; Maloney:2007ud ; Kraus:2006wn ; Cotler:2018zff ; Perez:2016vqo ; Gonzalez:2018jgp ; Melnikov:2018fhb ; Fuentealba:2017omf ; Afshar:2016wfy ; Cardenas:2021vwo ; Barnich:2013yka ; Grumiller:2019tyl ; Ojeda:2019xih ). Many of these results also extend to supergravity Achucarro:1989gm ; Howe:1995zm ; Banados:1996hi ; Henneaux:1999ib ; Giacomini:2006dr ; Barnich:2015sca ; Fuentealba:2017fck ; Caroca:2018obf ; Caroca:2019dds ; Banerjee:2019lrv ; Banerjee:2021uxl ; Barnich:2014cwa as well as to gravitation coupled to higher-spin fields Blencowe:1988gj ; Bergshoeff:1989ns ; Henneaux:2010xg ; Campoleoni:2010zq ; Gutperle:2011kf ; Ammon:2011nk ; Castro:2011fm ; Henneaux:2012ny ; Perez:2012cf ; Campoleoni:2012hp ; Perez:2013xi ; Henneaux:2013dra ; Bunster:2014mua ; Zinoviev:2014sza ; Fuentealba:2015jma ; Fuentealba:2015wza ; Henneaux:2015tar ; Grumiller:2016kcp , since both can be described in terms of a Chern-Simons theory for suitable gauge groups.
Ultra- and non-relativistic versions thereof have also been developed in Aviles:2019xed ; Ravera:2019ize ; Ali:2019jjp ; Concha:2020eam ; Concha:2021llq ; Caroca:2022byi ; Grumiller:2017sjh . The case of General Relativity minimally coupled to a real scalar field appears to be far from that kind of description. Nevertheless, here we show that in the case of a precise two-parameter Higgs-like self-interaction potential, given by $V\left(\phi\right)=\frac{\Lambda}{8}\left(\cosh^{6}\phi+\nu\sinh^{6}\phi\right)\,,$ (1) the theory can be equivalently formulated in terms of a Chern-Simons form that depends on composite gauge fields, so that the field equations no longer imply the vanishing of the field strengths. Thus, the theory and its configuration space can be equivalently described in terms of non-flat connections, so that many gauge-theory tools become available for analyzing their properties. The self-interaction potential (1) enjoys some remarkable properties. Indeed, the first analytic example of a black hole with a minimally coupled scalar field that circumvents the no-hair conjecture was found precisely for the potential (1) in the range $\Lambda<0$ and $\nu\geq-1$ Henneaux:2002wm . Furthermore, the potential in (1) falls within the class analyzed in Henneaux:2002wm , for which the scalar field acquires a slow fall-off at infinity, so that the canonical generators of the asymptotic symmetries acquire an explicit contribution from the scalar field. It was thus shown that the Brown-Henneaux boundary conditions Brown:1986nw for gravity with a localized distribution of matter can be consistently relaxed, so that the asymptotic symmetries are still given by the conformal group in two dimensions with the same central extension. As was also pointed out in Henneaux:2002wm , once the theory with the self-interaction potential (1) is expressed in the conformal (Jordan) frame, the matter piece of the action becomes conformally invariant, and the action reduces to General Relativity with cosmological constant $\Lambda$ with a conformally coupled scalar field, whose self-interaction coupling is determined by $\nu$ (see section 6). The plan of the paper is as follows. In the next section, we show that the action $I\left[\phi,g_{\mu\nu}\right]=\frac{8}{\kappa}\int d^{3}x\sqrt{-g}\left(\frac{R}{16}-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V\left(\phi\right)\right)\,,$ (2) with the self-interaction potential given by (1), whose field equations read $R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8T_{\mu\nu}\,,$ (3) $\Box\phi-\frac{dV\left(\phi\right)}{d\phi}=0\,,$ (4) with $T_{\mu\nu}=\partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{2}g_{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\phi\partial_{\beta}\phi-g_{\mu\nu}V\left(\phi\right)\,,$ (5) can be expressed in terms of a Chern-Simons form with certain suitable composite gauge fields. In section 3, the Hamiltonian boundary terms required to obtain the conserved charges in terms of the connections are worked out, including the hairy black hole entropy formula in terms of gauge fields. The rotating extension of the hairy black hole in Henneaux:2002wm and its global charges are discussed in section 4.
Regularity of hairy black holes and the scalar soliton in terms of trivial holonomies is addressed in section 5, including the hairy black hole thermodynamics and the corresponding Cardy formula for its entropy that depends on the global charges of the scalar soliton. Finally, section 6 is devoted to some ending remarks as well as a brief discussion of the formulation of the theory in the conformal frame in terms of a Chern-Simons form for composite gauge fields. ## 2 Action from a Chern-Simons form with composite gauge fields Here we show that the action (2), up to a boundary term, can be recast using a Chern-Simons form. The gauge fields are defined in terms of the direct sum of the algebras $g^{+}$ and $g^{-}$, where $g^{\pm}$ can be the three-dimensional (anti-)de Sitter or Poincaré algebras, depending on the signs of $\Lambda^{+}=\Lambda$ and $\Lambda^{-}=-\nu\Lambda$. The theory can be formulated through the following composite gauge fields $A^{+}=\cosh^{2}\left(\phi\right)e^{a}P_{a}^{+}+\omega_{+}^{a}J_{a}^{+}\,,$ (6) $A^{-}=\sinh^{2}\left(\phi\right)e^{a}P_{a}^{-}+\omega_{-}^{a}J_{a}^{-}\,,$ (7) which depend on the scalar field $\phi$, the dreibein $e^{a}$, and additional 1-forms $\omega_{\pm}^{a}$. The generators $J_{a}^{\pm}$ and $P_{a}^{\pm}$ fulfill the following algebra $\left[J_{a}^{\pm},J_{b}^{\pm}\right]=\epsilon_{abc}J_{\pm}^{c}\quad,\quad\left[J_{a}^{\pm},P_{b}^{\pm}\right]=\epsilon_{abc}P_{\pm}^{c}\quad,\quad\left[P_{a}^{\pm},P_{b}^{\pm}\right]=-\Lambda^{\pm}\epsilon_{abc}J_{\pm}^{c}\,,$ (8) so that each copy corresponds to $so(3,1)$ or $so(2,2)$ for positive or negative signs of $\Lambda^{\pm}$, respectively, or to $iso(2,1)$ when $\Lambda^{+}$ or $\Lambda^{-}$ vanishes. The field strengths associated to the gauge fields (6) and (7) are given by $F^{\pm}=dA^{\pm}+(A^{\pm})^{2}=\mathcal{T}_{\pm}^{a}P_{a}^{\pm}+\mathcal{R}_{\pm}^{a}J_{a}^{\pm}\,,$ (9) where the components along the $so(2,1)$ generators $J_{a}^{\pm}$ read $\mathcal{R}_{+}^{a}=R_{+}^{a}-\frac{1}{2}\Lambda^{+}\cosh^{4}\left(\phi\right)\epsilon^{abc}e_{b}e_{c}\,,$ (10) $\mathcal{R}_{-}^{a}=R_{-}^{a}-\frac{1}{2}\Lambda^{-}\sinh^{4}\left(\phi\right)\epsilon^{abc}e_{b}e_{c}\,,$ (11) with $R_{\pm}^{a}=d\omega_{\pm}^{a}+\frac{1}{2}\epsilon^{abc}\omega_{b}^{\pm}\omega_{c}^{\pm}$, while those along the remaining generators $P_{a}^{\pm}$ are $\mathcal{T}_{+}^{a}=\cosh^{2}\left(\phi\right)\left(T_{+}^{a}+2\tanh\left(\phi\right)d\phi\,e^{a}\right)\,,$ (12) $\mathcal{T}_{-}^{a}=\sinh^{2}\left(\phi\right)\left(T_{-}^{a}+2[\tanh\left(\phi\right)]^{-1}d\phi\,e^{a}\right)\,,$ (13) with $T_{\pm}^{a}=de^{a}+\epsilon^{abc}\omega_{b}^{\pm}e_{c}$.
We then consider an action principle defined as a combination of two Chern-Simons forms for the composite gauge fields $A^{\pm}$, which reads $I\left[\phi,e,\omega^{+},\omega^{-}\right]=\frac{k^{+}}{4\pi}\int\left\langle A^{+}dA^{+}+\frac{2}{3}(A^{+})^{3}\right\rangle+\frac{k^{-}}{4\pi}\int\left\langle A^{-}dA^{-}+\frac{2}{3}(A^{-})^{3}\right\rangle\,,$ (14) where the nonvanishing components of the invariant bilinear form are given by $\left\langle J_{a}^{\pm},P_{b}^{\pm}\right\rangle=\eta_{ab}$, with $\eta_{ab}$ standing for the Minkowski metric, and the levels are defined as $k^{\pm}=\pm 2\pi/\kappa$. The field equations are found by varying the action (14) with respect to the dynamical fields $e^{a}$, $\phi$ and $\omega_{\pm}^{a}$, $\delta I=\frac{1}{2\pi}\intop\left\langle\left(k^{+}F^{+}\frac{\delta A^{+}}{\delta e^{a}}+k^{-}F^{-}\frac{\delta A^{-}}{\delta e^{a}}\right)\delta e^{a}+\left(k^{+}F^{+}\frac{\delta A^{+}}{\delta\phi}+k^{-}F^{-}\frac{\delta A^{-}}{\delta\phi}\right)\delta\phi+\left(k^{+}F^{+}\frac{\delta A^{+}}{\delta\omega_{+}^{a}}+k^{-}F^{-}\frac{\delta A^{-}}{\delta\omega_{+}^{a}}\right)\delta\omega_{+}^{a}+\left(k^{+}F^{+}\frac{\delta A^{+}}{\delta\omega_{-}^{a}}+k^{-}F^{-}\frac{\delta A^{-}}{\delta\omega_{-}^{a}}\right)\delta\omega_{-}^{a}\right\rangle\,,$ (16) so that they read $\cosh^{2}\left(\phi\right)\mathcal{R}_{a}^{+}-\sinh^{2}\left(\phi\right)\mathcal{R}_{a}^{-}=0\,,$ (17) $\left(\mathcal{R}_{a}^{+}-\mathcal{R}_{a}^{-}\right)e^{a}=0\,,$ (18) $T_{\pm}^{a}+2[\tanh\left(\phi\right)]^{\pm 1}d\phi\,e^{a}=0\,,$ (19) respectively. Note that the last field equations (19) are algebraic for $\omega_{\pm}^{a}$, and can be solved as $\omega_{\pm}^{a}\left(e,\phi\right)=\omega^{a}\left(e\right)-2[\tanh\left(\phi\right)]^{\pm 1}*\left(e^{a}d\phi\right)\,,$ (20) where $\omega^{a}$ is the torsionless Levi-Civita spin connection associated to $e^{a}$, and $*$ stands for the Hodge dual, so that $*\left(e^{a}d\phi\right)=\epsilon^{abc}\partial_{\nu}\phi e_{b}^{\nu}e_{c}$. A second order action $I(e^{a},\phi)$ is obtained by replacing (20) in the Chern-Simons action (14), which reduces to (2) up to a boundary term, where the self-interaction potential is precisely given by (1). Analogously, substituting equation (20) in (17) and (18), they reduce respectively to the Einstein and scalar field equations (3) and (4). ## 3 Global charges and entropy in terms of gauge fields In order to deal with conserved charges as well as with the hairy black hole entropy, it is useful to express the action (14) in Hamiltonian form. Supplementing the action with a boundary term $\mathcal{B}$, which is required to ensure a well-defined variational principle Regge:1974zd , yields $I=\frac{k^{+}}{4\pi}\int dtd^{2}x\epsilon^{ij}\left\langle\dot{A}_{i}^{+}A_{j}^{+}+A_{t}^{+}F_{ij}^{+}\right\rangle+\frac{k^{-}}{4\pi}\int dtd^{2}x\epsilon^{ij}\left\langle\dot{A}_{i}^{-}A_{j}^{-}+A_{t}^{-}F_{ij}^{-}\right\rangle+\mathcal{B}\,,$ (21) where $F_{ij}^{\pm}$ are the spatial components of the field strengths $F^{\pm}$ in (9).
Considering that the variations of the gauge fields depend on the scalar field and the dreibein, and making use of (20), the variation of the action reads $\delta I=\frac{1}{2\kappa}\int dtd^{2}x\epsilon^{ij}\left\langle\left[\left(-2F_{tj}^{+}\frac{\delta A_{i}^{+}}{\delta e^{a}}+F_{ij}^{+}\frac{\delta A_{t}^{+}}{\delta e^{a}}\right)-\left(-2F_{tj}^{-}\frac{\delta A_{i}^{-}}{\delta e^{a}}+F_{ij}^{-}\frac{\delta A_{t}^{-}}{\delta e^{a}}\right)\right]\delta e^{a}+\left[\left(-2F_{tj}^{+}\frac{\delta A_{i}^{+}}{\delta\phi}+F_{ij}^{+}\frac{\delta A_{t}^{+}}{\delta\phi}\right)-\left(-2F_{tj}^{-}\frac{\delta A_{i}^{-}}{\delta\phi}+F_{ij}^{-}\frac{\delta A_{t}^{-}}{\delta\phi}\right)\right]\delta\phi\right\rangle+\frac{1}{\kappa}\int dtdS_{i}\epsilon^{ij}\left\langle\left(A_{t}^{+}\frac{\delta A_{j}^{+}}{\delta e^{a}}-A_{t}^{-}\frac{\delta A_{j}^{-}}{\delta e^{a}}\right)\delta e^{a}+\left(A_{t}^{+}\frac{\delta A_{j}^{+}}{\delta\phi}-A_{t}^{-}\frac{\delta A_{j}^{-}}{\delta\phi}\right)\delta\phi\right\rangle+\delta\mathcal{B}\,,$ where the bulk terms are proportional to the field equations (17) and (18). The variation of the boundary term is then given by $\delta\mathcal{B}=-\frac{1}{\kappa}\int dtdS_{i}\epsilon^{ij}\left\langle\left(A_{t}^{+}\frac{\delta A_{j}^{+}}{\delta e^{a}}-A_{t}^{-}\frac{\delta A_{j}^{-}}{\delta e^{a}}\right)\delta e^{a}+\left(A_{t}^{+}\frac{\delta A_{j}^{+}}{\delta\phi}-A_{t}^{-}\frac{\delta A_{j}^{-}}{\delta\phi}\right)\delta\phi\right\rangle=-\frac{1}{\kappa}\int dtd\theta\left\langle A_{t}^{+}\delta A_{\theta}^{+}-A_{t}^{-}\delta A_{\theta}^{-}\right\rangle\,,$ (22) which agrees with that of a pure Chern-Simons theory. Nonetheless, it should be emphasized that in our case the variations of the gauge fields are not fully independent. For the class of configurations that we are interested in, which fulfill the asymptotic conditions spelled out in Henneaux:2002wm , it can also be shown that the variation of the surface integral in (22) integrates as $\mathcal{B}=-\frac{1}{2\kappa}\int dtd\theta\left\langle A_{t}^{+}A_{\theta}^{+}-A_{t}^{-}A_{\theta}^{-}\right\rangle\,.$ (23) Intriguingly, if one replaces the explicit form of the composite gauge fields $A^{\pm}$ in (6) and (7), by virtue of (20), the boundary term (23) completely gets rid of the scalar field and precisely reduces to that of pure General Relativity in the standard Chern-Simons formulation, given by $\mathcal{B}=-\frac{1}{2\kappa}\int dtd\theta\left\langle A_{t}A_{\theta}\right\rangle\,,$ (24) where the connection $A$, defined by setting $\phi=0$ in the definition of $A^{+}$, i.e., $A=\left.A^{+}\right|_{\phi=0}=e^{a}P_{a}^{+}+\omega^{a}J_{a}^{+}\,,$ (25) stands for the gauge field of pure General Relativity Achucarro:1986uwr ; Witten:1988hc . Analogously, the black hole entropy formula that applies for a generic Chern-Simons theory Bunster:2014mua (see also deBoer:2013gz ) reduces to that of General Relativity, $S=\frac{1}{\kappa}\oint d\theta\left\langle A_{\tau}^{+}A_{\theta}^{+}-A_{\tau}^{-}A_{\theta}^{-}\right\rangle\Big{|}_{r_{+}}=\frac{1}{\kappa}\oint d\theta\left\langle A_{\tau}A_{\theta}\right\rangle\Big{|}_{r_{+}}\,,$ (26) where $\tau=-it$ is the Euclidean time and the event horizon is located at $r=r_{+}$.
In sum, the boundary term (24) and the entropy (26) are found to depend exclusively on the pure gravity gauge field $A$ in (25), which remarkably encodes all of the relevant information, without requiring an explicit contribution from the scalar field. Indeed, it has to be stressed that the pure gravity gauge field (25) in our case is generically no longer flat. ## 4 Rotating hairy black hole The first analytic example of a black hole solution endowed with minimally coupled scalar hair was found in Henneaux:2002wm , precisely for the self-interaction under discussion (1), in the case of $\Lambda<0$ and $\nu\geq-1$. Some of its properties have been further analyzed in Barnich:2002pi ; Clement:2003sr ; Gegenberg:2003jr ; Park:2004yk ; Banados:2005hm ; Myung:2008ze ; Correa:2010hf ; Lashkari:2010ak ; Correa:2011dt ; Hyun:2012bc ; Aparicio:2012yq ; Xu:2014qaa ; Xu:2014xqa ; Ahn:2015uza ; Aviles:2018vnf following different approaches, and the rotating extension in the case of $\nu=0$ was presented in Correa:2012rc . In this section we extend the rotating hairy black hole solution to the full allowed range of the self-interaction coupling ($\nu>-1$). It can be readily obtained from the static one by performing a Lorentz boost parameterized by $\omega$, with $\omega^{2}<1$, in the $t-\theta$ cylinder. (The case of $\nu=-1$ is excluded since, as pointed out in Henneaux:2002wm , the static metric is invariant under this kind of boost. That configuration shares the causal structure of the massless BTZ black hole, but the null curvature singularity coincides with the “horizon” (NUT) at $r=0$, possessing vanishing mass, angular momentum, temperature and entropy, regardless of the value of the integration constant.) The line element then reads $ds^{2}=-N_{\infty}^{2}N\left(r\right)^{2}dt^{2}+\frac{dr^{2}}{G\left(r\right)^{2}}+R\left(r\right)^{2}\left(d\theta+N^{\theta}\left(r\right)dt\right)^{2}\,,$ (27) with $N\left(r\right)^{2}=\frac{r^{2}g\left(r\right)^{2}}{R\left(r\right)^{2}}\,,$ (28) $G\left(r\right)^{2}=\left(\frac{H\left(r\right)+2B}{H\left(r\right)+B}\right)^{2}F\left(r\right)^{2}\,,$ (29) $R\left(r\right)^{2}=\frac{r^{2}-g\left(r\right)^{2}\omega^{2}\ell^{2}}{1-\omega^{2}}\,,$ (30) $N^{\theta}\left(r\right)=N_{\infty}^{\theta}-\frac{\omega}{\ell}\left(\frac{r^{2}-g\left(r\right)^{2}\ell^{2}}{r^{2}-g\left(r\right)^{2}\omega^{2}\ell^{2}}\right)N_{\infty}\,,$ (31) where $g\left(r\right)^{2}=\left(\frac{H\left(r\right)}{H\left(r\right)+B}\right)^{2}F\left(r\right)^{2}\quad,\quad F\left(r\right)^{2}=\frac{H\left(r\right)^{2}}{\ell^{2}}-\left(1+\nu\right)\left(\frac{3B^{2}}{\ell^{2}}+\frac{2B^{3}}{\ell^{2}H\left(r\right)}\right)\,,$ (32) and $H\left(r\right)=\frac{1}{2}\left(r+\sqrt{r^{2}+4Br}\right)\,.$ (33) Here the cosmological constant is written in terms of the AdS radius as $\Lambda=-\ell^{-2}$, while $N_{\infty}$ and $N_{\infty}^{\theta}$ stand for integration constants that turn out to be related to the Hawking temperature and the chemical potential, respectively (see section 5). The hairy black hole is dressed with a scalar field given by $\phi\left(r\right)=\mbox{arctanh}\sqrt{\frac{B}{H(r)+B}}\,,$ (34) which is real provided that $B>0$.
In this case, the scalar field is regular everywhere except at the origin, since it diverges as $\phi_{r\rightarrow 0}=-\frac{1}{4}\log(r)+\cdots$, sourcing a null curvature singularity, as reflected in the behavior of the Ricci scalar near $r=0$, which reads $R_{r\rightarrow 0}=-\frac{4(\nu+1)B^{5/2}}{\ell^{2}r^{5/2}}+O(r^{-3/2})\,.$ (35) Singularities in the spacetime metric and scalar field at the origin are cloaked by the event horizon located at $r=r_{+}=B\Theta_{\nu}$, with $\Theta_{\nu}$ given by $\Theta_{\nu}=2\left(z\bar{z}\right)^{2/3}\frac{z^{2/3}-\bar{z}^{2/3}}{z-\bar{z}}\,,$ (36) and $z=1+i\sqrt{\nu}$. It is worth highlighting that, unlike the case of the rotating solution in vacuum, the inner region of the rotating hairy black hole possesses neither a region with closed timelike curves to be excised nor an inner (Cauchy) horizon. Indeed, the analogue of the latter actually corresponds to the null singularity at the origin. Moreover, the asymptotic behavior of the rotating hairy black hole does not fulfill the Brown-Henneaux boundary conditions Brown:1986nw because the scalar field (34) has a slow fall-off at infinity, generating a strong backreaction on the metric in the asymptotic region. Instead, the solution fits within the relaxed asymptotic behavior described in Henneaux:2002wm . In order to describe the rotating hairy black hole in terms of gauge fields, we choose the local frame so that the dreibein reads $e^{0}=N_{\infty}N\left(r\right)dt\quad,\quad e^{1}=\frac{dr}{G\left(r\right)}\quad,\quad e^{2}=R\left(r\right)\left(d\theta+N^{\theta}\left(r\right)dt\right)\,,$ (37) and hence, the components of the spin connection $\omega^{a}$ are given by $\displaystyle\omega^{0}$ $\displaystyle=$ $\displaystyle G(r)R^{\prime}(r)d\theta+\frac{G(r)}{2}\left(2N^{\theta}(r)R^{\prime}(r)+R(r)N^{\theta\prime}(r)\right)dt\,,$ (38) $\displaystyle\omega^{1}$ $\displaystyle=$ $\displaystyle\frac{R(r)N^{\theta\prime}(r)}{2N_{\infty}N\left(r\right)}dr\,,$ (39) $\displaystyle\omega^{2}$ $\displaystyle=$ $\displaystyle-\frac{G(r)R(r)^{2}N^{\theta\prime}(r)}{2N_{\infty}N\left(r\right)}d\theta-\frac{G(r)}{2}\left(\frac{R(r)^{2}N^{\theta}(r)N^{\theta\prime}(r)}{N_{\infty}N\left(r\right)}-2N_{\infty}N^{\prime}(r)\right)dt\,.$ (40) The gauge fields $A^{+}$ and $A^{-}$ in (6), (7) then become fully specified by virtue of $\omega_{\pm}^{a}$ given by (20). Nevertheless, as pointed out in the previous section, the pure gravity connection in (25) is enough to evaluate the boundary term in (24), which for the rotating hairy black hole reduces to $\mathcal{B}=-(t_{2}-t_{1})\left[N_{\infty}\left(\frac{3\pi\left(1+\nu\right)B^{2}\left(1+\omega^{2}\right)}{\kappa\ell^{2}\left(1-\omega^{2}\right)}\right)-N_{\infty}^{\theta}\left(\frac{6\pi\left(1+\nu\right)B^{2}\omega}{\kappa\ell\left(1-\omega^{2}\right)}\right)\right].$ (41) Notably, the result is independent of the radial coordinate $r$; hence it can be computed even at a finite proper distance ($r=r_{0}$), without taking the limit $r\rightarrow\infty$.
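The location of the horizon can be cross-checked numerically. The sketch below (an illustration, under the assumption, suggested by the form of $G$ and $N$, that the horizon corresponds to the zero of $F(r)^{2}$) verifies that $r_{+}=B\Theta_{\nu}$, with $\Theta_{\nu}$ from (36), indeed makes $F$ vanish:

```python
import numpy as np

def theta_nu(nu):                    # eq. (36), with z = 1 + i*sqrt(nu)
    z = 1.0 + 1j * np.sqrt(nu)
    zb = np.conj(z)
    return np.real(2.0 * (z * zb)**(2/3) * (z**(2/3) - zb**(2/3)) / (z - zb))

def F2(r, B, nu, ell=1.0):           # F(r)^2 from eq. (32), H from eq. (33)
    H = 0.5 * (r + np.sqrt(r**2 + 4.0 * B * r))
    return H**2 / ell**2 - (1.0 + nu) * (3.0 * B**2 / ell**2
                                         + 2.0 * B**3 / (ell**2 * H))

B = 1.3                              # arbitrary sample value
for nu in [0.01, 0.5, 1.0, 3.0, 10.0]:
    rp = B * theta_nu(nu)            # claimed horizon location r_+
    assert abs(F2(rp, B, nu)) < 1e-8 * max(1.0, rp**2)
    print(f"nu = {nu}: Theta_nu = {theta_nu(nu):.6f}, F(r_+)^2 = {F2(rp, B, nu):.1e}")
```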
The mass $M$ and angular momentum $J$ can then be read off from the boundary term (41), following Regge:1974zd , as $\mathcal{B}=(t_{2}-t_{1})\left(-N_{\infty}M+N_{\infty}^{\theta}J\right)\,,$ (42) so that $M=\frac{3\pi\left(1+\nu\right)B^{2}\left(1+\omega^{2}\right)}{\kappa\ell^{2}\left(1-\omega^{2}\right)}\qquad\mbox{and}\qquad J=\frac{6\pi\left(1+\nu\right)B^{2}\omega}{\kappa\ell\left(1-\omega^{2}\right)}\,.$ (43) The mass and angular momentum in (43) reduce to the result in Henneaux:2002wm for the static case ($\omega=0$), as well as to that in Correa:2012rc for $\nu=0$. Note that the following bound is fulfilled $\frac{M}{|J|/\ell}=\frac{1+\omega^{2}}{2|\omega|}\geq 1\,,$ (44) for $\omega^{2}\leq 1$, being saturated in the extremal case ($\omega^{2}=1$). The extremal case can be attained from (27) and (34) by first rescaling the integration constant $B$ according to $b=\frac{B}{\sqrt{1-\omega^{2}}}$, and then taking the limit $\omega\rightarrow\pm 1$, so that the scalar field vanishes and the line element reduces to that of the extremal rotating BTZ black hole, whose (degenerate) horizon is located at $r_{+}^{2}=3b^{2}(1+\nu)$. It is worth noting that the static and stationary extremal cases, with $M=J=0$ and $M=|J|/\ell$ respectively, do not admit scalar hair.

## 5 Regularity and thermodynamics through non-flat connections

Here we carry out the thermodynamic analysis of the rotating hairy black hole relying on its description in terms of gauge fields in the Euclidean approach, where the Euclidean time $\tau=-it$ is normalized with period equal to 1. The Euclidean hairy black hole in the non-extremal case possesses the topology of a solid torus, $\mathbb{R}^{2}\times S^{1}$, where $S^{1}$ is the circle parametrized by the angular coordinate $\theta$, and $\mathbb{R}^{2}$ stands for the $r-\tau$ plane described in polar coordinates centered at $r=r_{+}$. Since thermal cycles around the horizon are contractible, regularity of the gauge fields at the horizon amounts to requiring trivial holonomies along them, which allows one to fix the Lagrange multipliers at the boundary, given by $N_{\infty}$ and $N_{\infty}^{\theta}$, corresponding to the Hawking temperature and the chemical potential. Concretely, the regularity condition for the Euclidean rotating hairy black hole is obtained by demanding that the corresponding holonomies $\mathcal{H^{\pm}}$ along the thermal cycle at the event horizon be trivial, i.e., $\mathcal{H^{\pm}}=\exp\left[\intop_{0}^{1}A_{\tau}^{\pm}d\tau\right]_{r_{+}}=\exp\left[A_{\tau}^{\pm}\right]_{r_{+}}=I_{c}\,,$ (45) where $I_{c}$ is a suitable element of the center of the gauge group. One possible way to implement the regularity condition is the direct diagonalization of the holonomies $\mathcal{H}^{\pm}$ for an appropriate matrix representation of the full algebra $g^{+}\oplus g^{-}$. A simpler option is the one spelled out in Matulich:2014hea , which is implemented in two steps: (i) Finding a group element that allows one to gauge away the temporal component of the dreibein.
In our case this condition is already implemented since the local Lorentz frame has already been chosen, and $e_{\tau}$ vanishes at the horizon provided that $N^{\theta}(r_{+})=0$, so that the chemical potential $N_{\infty}^{\theta}$ is fixed in terms of $N_{\infty}$ as $N_{\infty}^{\theta}=\frac{\omega}{\ell}N_{\infty}\,,$ (46) and the time component of the gauge fields reduces to $A_{\tau}^{\pm}(r_{+})=\omega_{\pm\tau}(r_{+})$, with $\omega_{\pm\tau}^{a}(r_{+})=\omega_{\tau}^{a}(r_{+})$, whose non-vanishing component is given by $\omega_{\tau}=N_{\infty}G(r_{+})N^{\prime}(r_{+})J_{2}^{+}\,.$ (47) Therefore, as naturally expected from the results in section 3, the problem actually reduces to requiring a trivial holonomy for the pure gravity gauge field (25), whose time component at the horizon reads $A_{\tau}=\omega_{\tau}^{a}(r_{+})J_{a}^{+}$. (ii) Diagonalizing the remaining components, which in this case reduces to that of the corresponding $so(2,1)\approx sl(2,\mathbb{R})$ subalgebra. The basis can be chosen as $J_{0}^{+}=\frac{1}{2}\left(L_{-1}+L_{1}\right),\quad J_{1}^{+}=\frac{1}{2}\left(L_{-1}-L_{1}\right),\quad J_{2}^{+}=L_{0}\,,$ (48) with $L_{-1}=\left(\begin{array}[]{cc}0&0\\\ 1&0\end{array}\right)\quad,\quad L_{0}=\left(\begin{array}[]{cc}-\frac{1}{2}&0\\\ 0&\frac{1}{2}\end{array}\right)\quad,\quad L_{1}=\left(\begin{array}[]{cc}0&-1\\\ 0&0\end{array}\right)\,,$ (49) fulfilling $\left[L_{n},L_{m}\right]=\left(n-m\right)L_{n+m}$. Since the $sl(2,\mathbb{R})$ generators are given in the fundamental (spinorial) representation, the suitable element of the center of the group turns out to be $I_{c}=-\mathbb{I}$. The diagonalization can then be readily performed, which implies that the eigenvalues of $\omega_{\tau}$ must be given by $\pm i\pi$, or equivalently, $tr\left[(\omega_{\tau})^{2}\right]=-2\pi^{2}\,.$ (50) The latter equation allows one to fix the form of $N_{\infty}$, and hence, by virtue of the former condition in (46), the Lagrange multipliers, related to the Hawking temperature and the chemical potential, become fixed as $N_{\infty}=\frac{2\pi\Theta_{\nu}\ell^{2}}{3\left(1+\nu\right)B\sqrt{1-\omega^{2}}}\qquad\mbox{and}\qquad N_{\infty}^{\theta}=\frac{2\pi\Theta_{\nu}\omega\ell}{3\left(1+\nu\right)B\sqrt{1-\omega^{2}}}\,.$ (51) Once the Lagrange multipliers are fixed as in (51), the pure gravity gauge field takes the following form at the horizon $\displaystyle A=$ $\displaystyle R(r_{+})d\theta P_{2}^{+}+\left(2\pi dt-\frac{G(r_{+})^{2}R(r_{+})^{2}N^{\prime}(r_{+})N^{\theta\prime}(r_{+})}{4\pi N(r_{+})}d\theta\right)J_{2}^{+}\,,$ (52) so that the rotating hairy black hole entropy can be directly obtained from (26), being given by $S=\frac{4\pi^{2}B\Theta_{\nu}}{\kappa\sqrt{1-\omega^{2}}}=\frac{4\pi^{2}R(r_{+})}{\kappa}=\frac{A_{\textrm{hor}}}{4G},$ (53) with $\kappa=8\pi G$, which is in full agreement with the Bekenstein-Hawking area law for the entropy. It is then simple to verify that the first law is fulfilled in the grand canonical ensemble, $dS=\beta dM-\beta\Omega dJ,$ with $M$ and $J$ determined by (43), so that the relationship of the Lagrange multipliers to the Hawking temperature and the angular velocity at the horizon reads $\beta=N_{\infty}\qquad\textrm{and}\qquad\beta\Omega=N_{\infty}^{\theta}\,.$ (54)

### 5.1 Soliton mass, regularity and Cardy formula

An analytic soliton solution endowed with a nontrivial scalar field for the theory under discussion was found in Correa:2010hf .
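Before turning to the soliton solution itself, the closed-form thermodynamic data obtained above admit a quick numerical cross-check. The sketch below (an illustration in assumed units $\kappa=8\pi G=1$ and $\ell=1$, with arbitrary sample values of $\nu$, $B$ and $\omega$) verifies the first law $dS=\beta dM-\beta\Omega dJ$ by finite differences, along with the bound (44):

```python
import numpy as np

nu = 0.5                                           # arbitrary sample value
z = 1.0 + 1j * np.sqrt(nu)
theta = np.real(2.0 * (z * np.conj(z))**(2/3)
                * (z**(2/3) - np.conj(z)**(2/3)) / (z - np.conj(z)))  # eq. (36)

def M(B, w):     return 3*np.pi*(1+nu)*B**2*(1+w**2)/(1-w**2)       # eq. (43)
def J(B, w):     return 6*np.pi*(1+nu)*B**2*w/(1-w**2)              # eq. (43)
def S(B, w):     return 4*np.pi**2*B*theta/np.sqrt(1-w**2)          # eq. (53)
def beta(B, w):  return 2*np.pi*theta/(3*(1+nu)*B*np.sqrt(1-w**2))  # eq. (51)

B, w, h = 1.2, 0.4, 1e-6
bOm = w * beta(B, w)                 # beta*Omega = N_inf^theta, eqs. (46), (51), (54)
for dB, dw in [(h, 0.0), (0.0, h)]:  # first law along B, then along omega
    dS = (S(B+dB, w+dw) - S(B-dB, w-dw)) / (2*h)
    dM = (M(B+dB, w+dw) - M(B-dB, w-dw)) / (2*h)
    dJ = (J(B+dB, w+dw) - J(B-dB, w-dw)) / (2*h)
    assert abs(dS - (beta(B, w)*dM - bOm*dJ)) < 1e-4 * abs(dS)
print("first law OK;  M - |J|/ell =", M(B, w) - abs(J(B, w)))  # >= 0, eq. (44)
```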
The soliton line element is given by $\displaystyle ds^{2}$ $\displaystyle=$ $\displaystyle\ell^{2}\left(1+\frac{1}{\alpha_{\nu}\left(1+\rho^{2}\right)}\right)^{-2}\times$ $\displaystyle\left[-N_{\infty}^{2}\left[\frac{2\left(1+\rho^{2}\right)}{3c_{\nu}\ell}\right]^{2}dt^{2}+\frac{4d\rho^{2}}{2+\rho^{2}+\frac{c_{\nu}}{1+\rho^{2}}}+\left(\frac{2\rho}{2+c_{\nu}}\right)^{2}\left(2+\rho^{2}+\frac{c_{\nu}}{1+\rho^{2}}\right)d\theta^{2}\right]\,,$ with $c_{\nu}=2\alpha_{\nu}^{-3}\left(1+\nu\right)$ and $\alpha_{\nu}=\frac{1}{2}\left(\Theta_{\nu}+\sqrt{\Theta_{\nu}^{2}+4\Theta_{\nu}}\right)$, while the scalar field reads $\phi\left(\rho\right)=\mbox{arctanh}\left(\sqrt{\frac{1}{1+\alpha_{\nu}\left(1+\rho^{2}\right)}}\right)\,.$ (56) The radial coordinate has the range $0\leq\rho<\infty$, and the remaining coordinates range precisely as for the hairy black hole metric, so that the solitonic configuration is regular everywhere. As in the case of the rotating hairy black hole, the fall-off of the soliton at infinity also fits that of the relaxed boundary conditions in Henneaux:2002wm . Regularity of the soliton at the origin can also be explicitly verified in terms of its corresponding gauge fields in (6), (7). In order to do that, we follow the same lines as in the case of the Euclidean rotating hairy black hole discussed above, adapted to the present configuration. Thus, for the solitonic configuration, the suitable condition turns out to be that the holonomy of the gauge fields $A^{\pm}$ around a spatial (angular) cycle that encloses the origin ($\rho=0$) be trivial, i.e., $\mathcal{H^{\pm}}=e^{\oint d\theta A_{\theta}^{\pm}\Big{|}_{\rho=0}}=\mathbb{I}_{c}\,.$ (57) The local frame can be suitably chosen as in the case of the hairy black hole, so that the angular components of the dreibein $e_{\theta}$ automatically vanish at the origin, without the need of imposing any condition. Thus, the angular components of the gauge fields now fulfill $A_{\theta}^{\pm}\Big{|}_{\rho=0}=\omega_{\pm\theta}\Big{|}_{\rho=0}$, with $\omega_{\pm\theta}^{a}\Big{|}_{\rho=0}=\omega_{\theta}^{a}\Big{|}_{\rho=0}$, so that the problem again reduces to just requiring a trivial holonomy for the pure gravity connection in (25), whose angular component at the origin is given by $A_{\theta}\Big{|}_{\rho=0}=\omega_{\theta}^{a}J_{a}^{+}\Big{|}_{\rho=0}$. The remaining components can then be diagonalized by choosing the basis and the representation as in (48) and (49), respectively, so that the holonomy turns out to be trivial provided that the condition in (50), with $\omega_{\tau}\rightarrow\omega_{\theta}$, holds, which it automatically does, without the need of any additional condition. The result is reassuring because the soliton is devoid of integration constants, and hence it has no freedom to be adjusted in order to fulfill the regularity conditions. Besides, as pointed out in Correa:2010hf , this last feature of the soliton suggests that it can naturally be regarded as a ground state of the theory for the sector with non-vanishing scalar fields.
Indeed, it is simple to extend the result in Correa:2010hf to the rotating case, so that the Euclidean hairy black hole turns out to be diffeomorphic to the Euclidean soliton provided that the corresponding modular parameters of the torus are related through S-duality, i.e., $\tau_{sol}=-\frac{1}{\tau_{hbh}}$ with $\tau_{hbh}=\frac{i\beta(1-i\Omega)}{2\pi}$, precisely as occurs for the Euclidean BTZ black hole Banados:1992wn ; Banados:1992gq and Euclidean AdS3 Carlip:1994gc ; Maldacena:1998bw . (Analogous S-duality relationships between Euclidean three-dimensional black holes and their corresponding diffeomorphic Euclidean solitons are also known to hold for General Relativity on AdS3 with boundary conditions of KdV type Perez:2016vqo , as well as for asymptotically AdS Oliva:2009ip or asymptotically Lifshitz black holes Ayon-Beato:2009rgu and their corresponding solitons, in Perez:2011qp and Gonzalez:2011nz respectively, in the context of BHT massive gravity Bergshoeff:2009hq .) Thus, assuming that the soliton is the ground state of the hairy sector of the theory, the entropy of the rotating hairy black hole can be seen to be successfully reproduced by the Cardy formula once it is expressed in terms of the left and right ground state energies $\tilde{\Delta}_{0}^{\pm}$ instead of the central charges Correa:2010hf (see also Correa:2011dt ; Correa:2012rc ). The entropy then reads $S=4\pi\sqrt{-\tilde{\Delta}_{0}^{+}\tilde{\Delta}^{+}}+4\pi\sqrt{-\tilde{\Delta}_{0}^{-}\tilde{\Delta}^{-}}\,,$ (58) where $\tilde{\Delta}^{\pm}=\frac{1}{2}\left(M\ell\pm J\right)\,,$ (59) stand for the eigenvalues of the shifted Virasoro operators, $\tilde{L}_{0}^{\pm}=L_{0}^{\pm}-\frac{c^{\pm}}{24}$, being related to the mass $M$ and angular momentum $J$ of the hairy black hole in (43), while the left and right energies of the ground state are determined by the soliton mass according to $\tilde{\Delta}_{0}^{\pm}=\frac{1}{2}\ell M_{sol}$. The mass of the soliton can then be readily obtained by plugging its associated pure gravity gauge field (25) into the boundary term in (24), which yields $\mathcal{B}=-\frac{1}{2\kappa}\left(t_{2}-t_{1}\right)\oint d\theta\left\langle A_{t}A_{\theta}\right\rangle=\left(t_{2}-t_{1}\right)N_{\infty}\left(\frac{\pi\alpha_{\nu}^{4}}{3\kappa\left(1+\nu\right)\left(1+\alpha_{\nu}\right)^{2}}\right)\,,$ (60) so that the soliton mass is found to be given by $M_{sol}=-\frac{\pi\alpha_{\nu}^{4}}{3\kappa\left(1+\nu\right)\left(1+\alpha_{\nu}\right)^{2}}\,,$ (61) in agreement with the result obtained in Correa:2010hf through the canonical approach. Therefore, making use of the precise value of the soliton mass in (61), it is straightforward to verify that the Cardy formula (58) precisely reproduces the rotating hairy black hole entropy in (53).

## 6 Conformal frame and ending remarks

As pointed out in the introduction, the action of a minimally coupled scalar field (2) with the self-interaction potential (1) relates to that of a conformally coupled self-interacting scalar field.
Indeed, changing to the conformal (Jordan) frame according to $\hat{g}_{\mu\nu}=\Omega^{-2}g_{\mu\nu}$, with $\Omega=(1-\varphi^{2})$, and redefining the scalar field as $\varphi=\tanh\left(\phi\right)$, the action becomes $I\left[\hat{g}_{\mu\nu},\varphi\right]=\frac{8}{\kappa}\int d^{3}x\sqrt{-\hat{g}}\left(\frac{\hat{R}-2\Lambda}{16}-\frac{1}{2}\hat{g}^{\mu\nu}\partial_{\mu}\varphi\partial_{\nu}\varphi-\frac{1}{16}\hat{R}\varphi^{2}-\lambda\varphi^{6}\right)\,,$ (62) with $\lambda=\frac{\Lambda\nu}{8}$, so that the matter piece turns out to be conformally invariant. The field equations then read $\displaystyle\hat{G}_{\mu\nu}+\Lambda\hat{g}_{\mu\nu}$ $\displaystyle=$ $\displaystyle 8\hat{T}_{\mu\nu}\,,$ (63) $\displaystyle\hat{\Box}\varphi-\frac{1}{8}\hat{R}\varphi-6\lambda\varphi^{5}$ $\displaystyle=$ $\displaystyle 0\,,$ (64) where the stress-energy tensor $\hat{T}_{\mu\nu}=\partial_{\mu}\varphi\partial_{\nu}\varphi-\frac{1}{2}\hat{g}_{\mu\nu}\partial_{\alpha}\varphi\partial^{\alpha}\varphi-\hat{g}_{\mu\nu}\lambda\varphi^{6}+\frac{1}{8}\left(\hat{g}_{\mu\nu}\hat{\Box}-\hat{\nabla}_{\mu}\hat{\nabla}_{\nu}+\hat{G}_{\mu\nu}\right)\varphi^{2},$ (65) is traceless by virtue of (64), implying that the Ricci scalar is constant, $\hat{R}=-6\ell^{-2}$. The case of a conformally coupled self-interacting scalar field on a fixed background metric can also be described in terms of a Chern-Simons action Ricardo-Minas , and here we extend this result to the case of a back-reacting scalar field with a dynamical metric, described by (62). It is then useful to define $\Lambda^{+}=\Lambda$ and $\Lambda^{-}=-8\lambda$, as well as the appropriate composite gauge fields as $\displaystyle A^{+}$ $\displaystyle=$ $\displaystyle\hat{e}^{a}P_{a}^{+}+\omega_{+}^{a}J_{a}^{+}\,,$ (66) $\displaystyle A^{-}$ $\displaystyle=$ $\displaystyle\varphi^{2}\hat{e}^{a}P_{a}^{-}+\omega_{-}^{a}J_{a}^{-}\,,$ (67) where, according to the signs of $\Lambda^{\pm}$, the gauge fields take values in the algebras in (8), which may correspond to $so(3,1)$, $so(2,2)$ or $iso(2,1)$, precisely as in the case of the minimally coupled scalar field. The field strengths then read $F^{\pm}=dA^{\pm}+(A^{\pm})^{2}=\mathcal{T}_{\pm}^{a}P_{a}^{\pm}+\mathcal{R}_{\pm}^{a}J_{a}^{\pm}\,,$ (68) whose components are given by $\displaystyle\mathcal{R}_{+}^{a}$ $\displaystyle=$ $\displaystyle R_{+}^{a}-\frac{1}{2}\Lambda^{+}\epsilon^{abc}\hat{e}_{b}\hat{e}_{c}\,,$ (69) $\displaystyle\mathcal{R}_{-}^{a}$ $\displaystyle=$ $\displaystyle R_{-}^{a}-\frac{1}{2}\Lambda^{-}\varphi^{4}\epsilon^{abc}\hat{e}_{b}\hat{e}_{c}\,,$ (70) $\displaystyle\mathcal{T}_{+}^{a}$ $\displaystyle=$ $\displaystyle T_{+}^{a}\,,$ (71) $\displaystyle\mathcal{T}_{-}^{a}$ $\displaystyle=$ $\displaystyle\varphi^{2}\left(T_{-}^{a}+2\varphi^{-1}d\varphi\hat{e}^{a}\right)\,,$ (72) where $R_{\pm}^{a}=d\omega_{\pm}^{a}+\frac{1}{2}\epsilon^{abc}\omega_{b}^{\pm}\omega_{c}^{\pm}$, and $T_{\pm}^{a}=d\hat{e}^{a}+\epsilon^{abc}\omega_{b}^{\pm}\hat{e}_{c}$. Up to a boundary term, the action (62) can then be analogously reformulated in terms of a Chern-Simons form as in (14), for the composite gauge fields $A^{\pm}$ in (66), (67), so that $I=I\left[\varphi,\hat{e},\omega^{+},\omega^{-}\right]$.
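As a quick check of the tracelessness statement above: contracting (65) with $\hat{g}^{\mu\nu}$, using $\hat{g}^{\mu\nu}\hat{g}_{\mu\nu}=3$, $\hat{G}^{\mu}{}_{\mu}=-\frac{1}{2}\hat{R}$ in three dimensions, and $\hat{\Box}\varphi^{2}=2\varphi\hat{\Box}\varphi+2\partial_{\alpha}\varphi\partial^{\alpha}\varphi$, the kinetic terms cancel and one is left with $\hat{g}^{\mu\nu}\hat{T}_{\mu\nu}=\frac{\varphi}{2}\left(\hat{\Box}\varphi-\frac{1}{8}\hat{R}\varphi-6\lambda\varphi^{5}\right)$, which vanishes by (64). The trace of (63) then gives $-\frac{1}{2}\hat{R}+3\Lambda=0$, i.e., $\hat{R}=6\Lambda=-6\ell^{-2}$, as stated.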
Varying with respect to the dynamical fields one obtains $\displaystyle\frac{\delta I}{\delta\hat{e}^{a}}:$ $\displaystyle\mathcal{R}_{a}^{+}-\varphi^{2}\mathcal{R}_{a}^{-}=0\,,$ (73) $\displaystyle\frac{\delta I}{\delta\varphi}:$ $\displaystyle\left(\mathcal{R}_{a}^{+}-\mathcal{R}_{a}^{-}\right)\hat{e}^{a}=0\,,$ (74) $\displaystyle\frac{\delta I}{\delta\omega_{+}^{a}}:$ $\displaystyle T_{+}^{a}=0\,,$ (75) $\displaystyle\frac{\delta I}{\delta\omega_{-}^{a}}:$ $\displaystyle T_{-}^{a}+2\varphi^{-1}d\varphi\hat{e}^{a}=0\,.$ (76) Since the last field equations (75) and (76) are algebraic for $\omega_{\pm}^{a}$, they are solved as $\omega_{-}^{a}\left(\hat{e},\varphi\right)=\omega^{a}\left(\hat{e}\right)-2\varphi^{-1}*\left(\hat{e}^{a}d\varphi\right)\,,$ (77) and $\omega_{+}^{a}=\omega^{a}\left(\hat{e}\right)$, where $\omega^{a}\left(\hat{e}\right)$ stands for the (torsionless) spin connection associated to $\hat{e}^{a}$. Making use of them, the remaining field equations (73) and (74) reduce to (63) and (64), respectively. A rotating black hole dressed with a conformally coupled scalar field can then be readily obtained from the exact solution discussed in section 4, just by transforming from the Einstein to the conformal frame. In the static case, it reduces to the black holes discussed in Henneaux:2002wm and, previously, in Martinez:1996gn in the case of $\nu=0$. As in section 5, the analysis of its global charges, regularity and thermodynamics can then also be performed in terms of the gauge fields $A^{\pm}$ in (66), (67) by virtue of the boundary terms in (23), (24) and the holonomy condition (45). It is also reassuring to verify that the black hole entropy formula in the Chern-Simons approach Bunster:2014mua , given by (26), reduces in the conformal frame to $S=\left[1-\varphi^{2}\big{(}r_{+}\big{)}\right]\frac{A_{\textrm{hor}}}{4G}\,,$ with $\kappa=8\pi G$ (see, e.g., Visser:1993nu ; Ashtekar:2003jh ), which can also be reproduced from the Cardy formula (58) when the scalar soliton in the conformal frame is assumed to be the ground state configuration. As a final remark, it would be interesting to explore whether the hairy black holes and similar configurations endowed with a non-trivial scalar field that have been found for a variety of gravity theories, with different scalar couplings and self-interactions Hotta:2008xt ; Hortacsu:2003we ; Kwon:2012zh ; Xu:2013nia ; Zhao:2013isa ; Naji:2014ira ; Mazharimousavi:2014vza ; Xu:2014uha ; Cardenas:2014kaa ; Xu:2014uka ; Gonzalez:2014pwa ; Naji:2014qya ; Ayon-Beato:2015jga ; Wen:2015xea ; Ayon-Beato:2015ada ; Fan:2015ykb ; Harms:2016pow ; Ozcelik:2016scf ; Erices:2017izj ; Tang:2019jkn ; Harms:2017yko ; Karakasis:2021lnq ; Bueno:2021krl ; Karakasis:2021ttn ; Arias:2022jax ; Desa:2022gtw ; Karakasis:2022fep ; Bueno:2022ewf , could also be understood in terms of suitable composite gauge fields. Nevertheless, the Chern-Simons formulation for the theory discussed here naively appears to be somewhat rigid. In fact, the form of the composite gauge fields in (6) and (7) can be extended as $A^{\pm}=f_{\pm}^{2}\left(\phi\right)e^{a}P_{a}^{\pm}+\omega_{\pm}^{a}J_{a}^{\pm}$, so that consistency of the Chern-Simons formulation with a Riemannian (torsionless) geometry implies $f_{+}^{2}-f_{-}^{2}=1$, which allows one to obtain General Relativity minimally coupled to a scalar field with a non-canonical kinetic term and a self-interaction that depend on $f_{+}(\phi)$.
However, the scalar field can be appropriately redefined so that in terms of the new scalar field one recovers the original action in (2) with canonical kinetic term and precisely the same self-interaction given in (1). ###### Acknowledgements. We would like to thank Hernán González, Marc Henneaux, Alfredo Pérez, David Tempo and Jorge Zanelli for useful comments and discussions. This research has been partially supported by ANID FONDECYT grants N° 11190730, 1201208, 1220862 , 1211226, 1220910, 1221624. The work of O.F. was partially supported by FNRS- Belgium (conventions FRFC PDRT.1025.14 and IISN 4.4503.15), as well as by funds from the Solvay Family. ## References * (1) A. Achúcarro and P. K. Townsend, “A Chern-Simons Action for Three-Dimensional anti-de Sitter Supergravity Theories,” Phys. Lett. B 180, 89 (1986) doi:10.1016/0370-2693(86)90140-1 * (2) E. Witten, “(2+1)-Dimensional Gravity as an Exactly Soluble System,” Nucl. Phys. B 311, 46 (1988) doi:10.1016/0550-3213(88)90143-5 * (3) S. Carlip, “Lectures on (2+1) dimensional gravity,” J. Korean Phys. Soc. 28, S447-S467 (1995) [arXiv:gr-qc/9503024 [gr-qc]]. * (4) O. Coussaert, M. Henneaux and P. van Driel, “The Asymptotic dynamics of three-dimensional Einstein gravity with a negative cosmological constant,” Class. Quant. Grav. 12, 2961-2966 (1995) doi:10.1088/0264-9381/12/12/012 [arXiv:gr-qc/9506019 [gr-qc]]. * (5) P. Kraus, “Lectures on black holes and the AdS(3) / CFT(2) correspondence,” Lect. Notes Phys. 755, 193-247 (2008) [arXiv:hep-th/0609074 [hep-th]]. * (6) A. Maloney and E. Witten, “Quantum Gravity Partition Functions in Three Dimensions,” JHEP 02, 029 (2010) doi:10.1007/JHEP02(2010)029 [arXiv:0712.0155 [hep-th]]. * (7) G. Barnich and H. A. González, “Dual dynamics of three dimensional asymptotically flat Einstein gravity at null infinity,” JHEP 05, 016 (2013) doi:10.1007/JHEP05(2013)016 [arXiv:1303.1075 [hep-th]]. * (8) H. Afshar, S. Detournay, D. Grumiller, W. Merbis, A. Pérez, D. Tempo and R. Troncoso, “Soft Heisenberg hair on black holes in three dimensions,” Phys. Rev. D 93, no.10, 101503 (2016) doi:10.1103/PhysRevD.93.101503 [arXiv:1603.04824 [hep-th]]. * (9) A. Pérez, D. Tempo and R. Troncoso, “Boundary conditions for General Relativity on AdS3 and the KdV hierarchy,” JHEP 06, 103 (2016) doi:10.1007/JHEP06(2016)103 [arXiv:1605.04490 [hep-th]]. * (10) O. Fuentealba, J. Matulich, A. Pérez, M. Pino, P. Rodríguez, D. Tempo and R. Troncoso, “Integrable systems with BMS3 Poisson structure and the dynamics of locally flat spacetimes,” JHEP 01, 148 (2018) doi:10.1007/JHEP01(2018)148 [arXiv:1711.02646 [hep-th]]. * (11) J. Cotler and K. Jensen, “A theory of reparameterizations for AdS3 gravity,” JHEP 02, 079 (2019) doi:10.1007/JHEP02(2019)079 [arXiv:1808.03263 [hep-th]]. * (12) D. Melnikov, F. Novaes, A. Pérez and R. Troncoso, “Lifshitz Scaling, Microstate Counting from Number Theory and Black Hole Entropy,” JHEP 06, 054 (2019) doi:10.1007/JHEP06(2019)054 [arXiv:1808.04034 [hep-th]]. * (13) H. A. González, J. Matulich, M. Pino and R. Troncoso, “Revisiting the asymptotic dynamics of General Relativity on AdS3,” JHEP 12, 115 (2018) doi:10.1007/JHEP12(2018)115 [arXiv:1809.02749 [hep-th]]. * (14) D. Grumiller and W. Merbis, “Near horizon dynamics of three dimensional black holes,” SciPost Phys. 8, no.1, 010 (2020) doi:10.21468/SciPostPhys.8.1.010 [arXiv:1906.10694 [hep-th]]. * (15) E. Ojeda and A. 
Pérez, “Boundary conditions for General Relativity in three-dimensional spacetimes, integrable systems and the KdV/mKdV hierarchies,” JHEP 08, 079 (2019) doi:10.1007/JHEP08(2019)079 [arXiv:1906.11226 [hep-th]]. * (16) M. Cárdenas, F. Correa, K. Lara and M. Pino, “Integrable Systems and Spacetime Dynamics,” Phys. Rev. Lett. 127, no.16, 161601 (2021) doi:10.1103/PhysRevLett.127.161601 [arXiv:2104.09676 [hep-th]]. * (17) A. Achúcarro and P. K. Townsend, “Extended Supergravities in $d$ = (2+1) as Chern-Simons Theories,” Phys. Lett. B 229, 383-387 (1989) doi:10.1016/0370-2693(89)90423-1 * (18) P. S. Howe, J. M. Izquierdo, G. Papadopoulos and P. K. Townsend, “New supergravities with central charges and Killing spinors in (2+1)-dimensions,” Nucl. Phys. B 467, 183-214 (1996) doi:10.1016/0550-3213(96)00091-0 [arXiv:hep-th/9505032 [hep-th]]. * (19) M. Bañados, R. Troncoso and J. Zanelli, “Higher dimensional Chern-Simons supergravity,” Phys. Rev. D 54, 2605-2611 (1996) doi:10.1103/PhysRevD.54.2605 [arXiv:gr-qc/9601003 [gr-qc]]. * (20) M. Henneaux, L. Maoz and A. Schwimmer, “Asymptotic dynamics and asymptotic symmetries of three-dimensional extended AdS supergravity,” Annals Phys. 282, 31-66 (2000) doi:10.1006/aphy.2000.5994 [arXiv:hep-th/9910013 [hep-th]]. * (21) A. Giacomini, R. Troncoso and S. Willison, “Three-dimensional supergravity reloaded,” Class. Quant. Grav. 24, 2845-2860 (2007) doi:10.1088/0264-9381/24/11/005 [arXiv:hep-th/0610077 [hep-th]]. * (22) G. Barnich, L. Donnay, J. Matulich and R. Troncoso, “Asymptotic symmetries and dynamics of three-dimensional flat supergravity,” JHEP 08, 071 (2014) doi:10.1007/JHEP08(2014)071 [arXiv:1407.4275 [hep-th]]. * (23) G. Barnich, L. Donnay, J. Matulich and R. Troncoso, “Super-BMS3 invariant boundary theory from three-dimensional flat supergravity,” JHEP 01, 029 (2017) doi:10.1007/JHEP01(2017)029 [arXiv:1510.08824 [hep-th]]. * (24) O. Fuentealba, J. Matulich and R. Troncoso, “Asymptotic structure of $\mathcal{N}=2$ supergravity in 3D: extended super-BMS3 and nonlinear energy bounds,” JHEP 09, 030 (2017) doi:10.1007/JHEP09(2017)030 [arXiv:1706.07542 [hep-th]]. * (25) R. Caroca, P. Concha, O. Fierro and E. Rodríguez, “Three-dimensional Poincaré supergravity and $N$-extended supersymmetric $BMS_{3}$ algebra,” Phys. Lett. B 792, 93-100 (2019) doi:10.1016/j.physletb.2019.02.049 [arXiv:1812.05065 [hep-th]]. * (26) N. Banerjee, A. Bhattacharjee, Neetu and T. Neogi, “New $\mathcal{N}$ = 2 SuperBMS3 algebra and invariant dual theory for 3D supergravity,” JHEP 11, 122 (2019) doi:10.1007/JHEP11(2019)122 [arXiv:1905.10239 [hep-th]]. * (27) R. Caroca, P. Concha, O. Fierro and E. Rodríguez, “On the supersymmetric extension of asymptotic symmetries in three spacetime dimensions,” Eur. Phys. J. C 80, no.1, 29 (2020) doi:10.1140/epjc/s10052-019-7595-5 [arXiv:1908.09150 [hep-th]]. * (28) N. Banerjee, A. Bhattacharjee, S. Biswas and T. Neogi, “Dual theory for maximally $\mathcal{N}$ extended flat supergravity,” JHEP 05, 179 (2022) doi:10.1007/JHEP05(2022)179 [arXiv:2110.05919 [hep-th]]. * (29) M. P. Blencowe, “A Consistent Interacting Massless Higher Spin Field Theory in $D$ = (2+1),” Class. Quant. Grav. 6, 443 (1989) doi:10.1088/0264-9381/6/4/005 * (30) E. Bergshoeff, M. P. Blencowe and K. S. Stelle, “Area Preserving Diffeomorphisms and Higher Spin Algebra,” Commun. Math. Phys. 128, 213 (1990) doi:10.1007/BF02108779 * (31) M. Henneaux and S. J. 
Rey, “Nonlinear $W_{\infty}$ as Asymptotic Symmetry of Three-Dimensional Higher Spin Anti-de Sitter Gravity,” JHEP 12, 007 (2010) doi:10.1007/JHEP12(2010)007 [arXiv:1008.4579 [hep-th]]. * (32) A. Campoleoni, S. Fredenhagen, S. Pfenninger and S. Theisen, “Asymptotic symmetries of three-dimensional gravity coupled to higher-spin fields,” JHEP 11, 007 (2010) doi:10.1007/JHEP11(2010)007 [arXiv:1008.4744 [hep-th]]. * (33) M. Gutperle and P. Kraus, “Higher Spin Black Holes,” JHEP 05, 022 (2011) doi:10.1007/JHEP05(2011)022 [arXiv:1103.4304 [hep-th]]. * (34) M. Ammon, M. Gutperle, P. Kraus and E. Perlmutter, “Spacetime Geometry in Higher Spin Gravity,” JHEP 10, 053 (2011) doi:10.1007/JHEP10(2011)053 [arXiv:1106.4788 [hep-th]]. * (35) A. Castro, E. Hijano, A. Lepage-Jutier and A. Maloney, “Black Holes and Singularity Resolution in Higher Spin Gravity,” JHEP 01, 031 (2012) doi:10.1007/JHEP01(2012)031 [arXiv:1110.4117 [hep-th]]. * (36) M. Henneaux, G. Lucena Gómez, J. Park and S. J. Rey, “Super-$W_{\infty}$ Asymptotic Symmetry of Higher-Spin $AdS_{3}$ Supergravity,” JHEP 06, 037 (2012) doi:10.1007/JHEP06(2012)037 [arXiv:1203.5152 [hep-th]]. * (37) A. Perez, D. Tempo and R. Troncoso, “Higher spin gravity in 3D: Black holes, global charges and thermodynamics,” Phys. Lett. B 726, 444-449 (2013) doi:10.1016/j.physletb.2013.08.038 [arXiv:1207.2844 [hep-th]]. * (38) A. Campoleoni, S. Fredenhagen, S. Pfenninger and S. Theisen, “Towards metric-like higher-spin gauge theories in three dimensions,” J. Phys. A 46, 214017 (2013) doi:10.1088/1751-8113/46/21/214017 [arXiv:1208.1851 [hep-th]]. * (39) A. Perez, D. Tempo and R. Troncoso, “Higher spin black hole entropy in three dimensions,” JHEP 04, 143 (2013) doi:10.1007/JHEP04(2013)143 [arXiv:1301.0847 [hep-th]]. * (40) M. Henneaux, A. Pérez, D. Tempo and R. Troncoso, “Chemical potentials in three-dimensional higher spin anti-de Sitter gravity,” JHEP 12, 048 (2013) doi:10.1007/JHEP12(2013)048 [arXiv:1309.4362 [hep-th]]. * (41) C. Bunster, M. Henneaux, A. Pérez, D. Tempo and R. Troncoso, “Generalized Black Holes in Three-dimensional Spacetime,” JHEP 05, 031 (2014) doi:10.1007/JHEP05(2014)031 [arXiv:1404.3305 [hep-th]]. * (42) Y. M. Zinoviev, “Hypergravity in AdS3,” Phys. Lett. B 739, 106-109 (2014) doi:10.1016/j.physletb.2014.10.041 [arXiv:1408.2912 [hep-th]]. * (43) O. Fuentealba, J. Matulich and R. Troncoso, “Extension of the Poincaré group with half-integer spin generators: hypergravity and beyond,” JHEP 09, 003 (2015) doi:10.1007/JHEP09(2015)003 [arXiv:1505.06173 [hep-th]]. * (44) O. Fuentealba, J. Matulich and R. Troncoso, “Asymptotically flat structure of hypergravity in three spacetime dimensions,” JHEP 10, 009 (2015) doi:10.1007/JHEP10(2015)009 [arXiv:1508.04663 [hep-th]]. * (45) M. Henneaux, A. Pérez, D. Tempo and R. Troncoso, “Extended anti-de Sitter Hypergravity in $2+1$ Dimensions and Hypersymmetry Bounds,” doi:10.1142/9789813144101_0009 [arXiv:1512.08603 [hep-th]]. * (46) D. Grumiller, A. Pérez, S. Prohazka, D. Tempo and R. Troncoso, “Higher Spin Black Holes with Soft Hair,” JHEP 10, 119 (2016) doi:10.1007/JHEP10(2016)119 [arXiv:1607.05360 [hep-th]]. * (47) D. Grumiller, W. Merbis and M. Riegler, “Most general flat space boundary conditions in three-dimensional Einstein gravity,” Class. Quant. Grav. 34, no.18, 184001 (2017) doi:10.1088/1361-6382/aa8004 [arXiv:1704.07419 [hep-th]]. * (48) L. Ravera, “AdS Carroll Chern-Simons supergravity in 2 + 1 dimensions and its flat limit,” Phys. Lett.
B 795, 331-338 (2019) doi:10.1016/j.physletb.2019.06.026 [arXiv:1905.00766 [hep-th]]. * (49) L. Avilés, J. Gomis and D. Hidalgo, “Stringy (Galilei) Newton-Hooke Chern-Simons Gravities,” JHEP 09, 015 (2019) doi:10.1007/JHEP09(2019)015 [arXiv:1905.13091 [hep-th]]. * (50) F. Ali and L. Ravera, “$\mathcal{N}$-extended Chern-Simons Carrollian supergravities in $2+1$ spacetime dimensions,” JHEP 02, 128 (2020) doi:10.1007/JHEP02(2020)128 [arXiv:1912.04172 [hep-th]]. * (51) P. Concha, M. Ipinza, L. Ravera and E. Rodríguez, “Non-relativistic three-dimensional supergravity theories and semigroup expansion method,” JHEP 02, 094 (2021) doi:10.1007/JHEP02(2021)094 [arXiv:2010.01216 [hep-th]]. * (52) P. Concha, L. Ravera and E. Rodríguez, “Three-dimensional non-relativistic supergravity and torsion,” Eur. Phys. J. C 82, no.3, 220 (2022) doi:10.1140/epjc/s10052-022-10183-6 [arXiv:2112.05902 [hep-th]]. * (53) R. Caroca, D. M. Peñafiel and P. Salgado-Rebolledo, “Non-relativistic spin-$3$ symmetries in $2+1$ dimensions from expanded/extended Nappi-Witten algebras,” [arXiv:2208.00602 [hep-th]]. * (54) M. Henneaux, C. Martínez, R. Troncoso and J. Zanelli, “Black holes and asymptotics of 2+1 gravity coupled to a scalar field,” Phys. Rev. D 65, 104007 (2002) doi:10.1103/PhysRevD.65.104007 [arXiv:hep-th/0201170 [hep-th]]. * (55) J. D. Brown and M. Henneaux, “Central Charges in the Canonical Realization of Asymptotic Symmetries: An Example from Three-Dimensional Gravity,” Commun. Math. Phys. 104, 207-226 (1986) doi:10.1007/BF01211590 * (56) J. de Boer and J. I. Jottar, “Thermodynamics of higher spin black holes in $AdS_{3}$,” JHEP 01, 023 (2014) doi:10.1007/JHEP01(2014)023 [arXiv:1302.0816 [hep-th]]. * (57) G. Barnich, “Conserved charges in gravitational theories: Contribution from scalar fields,” Ann. U. Craiova Phys. 12, no.III, 14-18 (2002) [arXiv:gr-qc/0211031 [gr-qc]]. * (58) G. Clément, “Black hole mass and angular momentum in 2+1 gravity,” Phys. Rev. D 68, 024032 (2003) doi:10.1103/PhysRevD.68.024032 [arXiv:gr-qc/0301129 [gr-qc]]. * (59) J. Gegenberg, C. Martínez and R. Troncoso, “A Finite action for three-dimensional gravity with a minimally coupled scalar field,” Phys. Rev. D 67, 084007 (2003) doi:10.1103/PhysRevD.67.084007 [arXiv:hep-th/0301190 [hep-th]]. * (60) M. I. Park, “Fate of three-dimensional black holes coupled to a scalar field and the Bekenstein-Hawking entropy,” Phys. Lett. B 597, 237-242 (2004) doi:10.1016/j.physletb.2004.07.023 [arXiv:hep-th/0403089 [hep-th]]. * (61) M. Bañados and S. Theisen, “Scale invariant hairy black holes,” Phys. Rev. D 72, 064019 (2005) doi:10.1103/PhysRevD.72.064019 [arXiv:hep-th/0506025 [hep-th]]. * (62) Y. S. Myung, “Phase transition for black holes with scalar hair and topological black holes,” Phys. Lett. B 663, 111-117 (2008) doi:10.1016/j.physletb.2008.03.046 [arXiv:0801.2434 [hep-th]]. * (63) F. Correa, C. Martínez and R. Troncoso, “Scalar solitons and the microscopic entropy of hairy black holes in three dimensions,” JHEP 01, 034 (2011) doi:10.1007/JHEP01(2011)034 [arXiv:1010.1259 [hep-th]]. * (64) N. Lashkari, “Holographic Symmetry-Breaking Phases in AdS3/CFT2,” JHEP 11, 104 (2011) doi:10.1007/JHEP11(2011)104 [arXiv:1011.3520 [hep-th]]. * (65) F. Correa, C. Martínez and R. Troncoso, “Hairy Black Hole Entropy and the Role of Solitons in Three Dimensions,” JHEP 02, 136 (2012) doi:10.1007/JHEP02(2012)136 [arXiv:1112.6198 [hep-th]]. * (66) S. Hyun, J. Jeong and S. H. 
Yi, “Fake Supersymmetry and Extremal Black Holes,” JHEP 03, 042 (2013) doi:10.1007/JHEP03(2013)042 [arXiv:1210.6273 [hep-th]]. * (67) J. Aparício, D. Grumiller, E. Lopez, I. Papadimitriou and S. Stricker, “Bootstrapping gravity solutions,” JHEP 05, 128 (2013) doi:10.1007/JHEP05(2013)128 [arXiv:1212.3609 [hep-th]]. * (68) W. Xu, J. Wang and X. h. Meng, “A Note on Entropy Relations of Black Hole Horizons,” Int. J. Mod. Phys. A 29, no.18, 1450088 (2014) doi:10.1142/S0217751X14500882 [arXiv:1401.5180 [gr-qc]]. * (69) W. Xu, “Exact black hole formation in three dimensions,” Phys. Lett. B 738, 472-476 (2014) doi:10.1016/j.physletb.2014.10.026 [arXiv:1409.3368 [hep-th]]. * (70) B. Ahn, S. Hyun, S. A. Park and S. H. Yi, “Scaling symmetry and scalar hairy rotating AdS3 black holes,” Phys. Rev. D 93, no.2, 024041 (2016) doi:10.1103/PhysRevD.93.024041 [arXiv:1508.06484 [hep-th]]. * (71) L. Avilés, H. Maeda and C. Martínez, “Exact black-hole formation with a conformally coupled scalar field in three dimensions,” Class. Quant. Grav. 35, no.24, 245001 (2018) doi:10.1088/1361-6382/aaea9f [arXiv:1808.10040 [gr-qc]]. * (72) F. Correa, A. Faúndez and C. Martínez, “Rotating hairy black hole and its microscopic entropy in three spacetime dimensions,” Phys. Rev. D 87, no.2, 027502 (2013) doi:10.1103/PhysRevD.87.027502 [arXiv:1211.4878 [hep-th]]. * (73) T. Regge and C. Teitelboim, “Role of Surface Integrals in the Hamiltonian Formulation of General Relativity,” Annals Phys. 88, 286 (1974) doi:10.1016/0003-4916(74)90404-7 * (74) J. Matulich, A. Pérez, D. Tempo and R. Troncoso, “Higher spin extension of cosmological spacetimes in 3D: asymptotically flat behaviour with chemical potentials and thermodynamics,” JHEP 05, 025 (2015) doi:10.1007/JHEP05(2015)025 [arXiv:1412.1464 [hep-th]]. * (75) M. Bañados, C. Teitelboim and J. Zanelli, “The Black hole in three-dimensional space-time,” Phys. Rev. Lett. 69, 1849-1851 (1992) doi:10.1103/PhysRevLett.69.1849 [arXiv:hep-th/9204099 [hep-th]]. * (76) M. Bañados, M. Henneaux, C. Teitelboim and J. Zanelli, “Geometry of the (2+1) black hole,” Phys. Rev. D 48, 1506-1525 (1993) [erratum: Phys. Rev. D 88, 069902 (2013)] doi:10.1103/PhysRevD.48.1506 [arXiv:gr-qc/9302012 [gr-qc]]. * (77) S. Carlip and C. Teitelboim, “Aspects of black hole quantum mechanics and thermodynamics in (2+1)-dimensions,” Phys. Rev. D 51, 622-631 (1995) doi:10.1103/PhysRevD.51.622 [arXiv:gr-qc/9405070 [gr-qc]]. * (78) J. M. Maldacena and A. Strominger, “AdS(3) black holes and a stringy exclusion principle,” JHEP 12, 005 (1998) doi:10.1088/1126-6708/1998/12/005 [arXiv:hep-th/9804085 [hep-th]]. * (79) J. Oliva, D. Tempo and R. Troncoso, “Three-dimensional black holes, gravitational solitons, kinks and wormholes for BHT massive gravity,” JHEP 07, 011 (2009) doi:10.1088/1126-6708/2009/07/011 [arXiv:0905.1545 [hep-th]]. * (80) E. Ayón-Beato, A. Garbarz, G. Giribet and M. Hassaïne, “Lifshitz Black Hole in Three Dimensions,” Phys. Rev. D 80, 104029 (2009) doi:10.1103/PhysRevD.80.104029 [arXiv:0909.1347 [hep-th]]. * (81) A. Pérez, D. Tempo and R. Troncoso, “Gravitational solitons, hairy black holes and phase transitions in BHT massive gravity,” JHEP 07, 093 (2011) doi:10.1007/JHEP07(2011)093 [arXiv:1106.4849 [hep-th]]. * (82) H. A. González, D. Tempo and R. Troncoso, “Field theories with anisotropic scaling in 2D, solitons and the microscopic entropy of asymptotically Lifshitz black holes,” JHEP 11, 066 (2011) doi:10.1007/JHEP11(2011)066 [arXiv:1107.3647 [hep-th]]. * (83) E. A. Bergshoeff, O. Hohm and P. K. 
Townsend, “Massive Gravity in Three Dimensions,” Phys. Rev. Lett. 102, 201301 (2009) doi:10.1103/PhysRevLett.102.201301 [arXiv:0901.1766 [hep-th]]. * (84) Ricardo Troncoso and Minas Tsoukalas, “Conformally coupled scalar fields in higher dimensions and a generalization of the Yamabe problem," Preprint CECS-PHY-11/11. * (85) C. Martínez and J. Zanelli, “Conformally dressed black hole in (2+1)-dimensions,” Phys. Rev. D 54, 3830 (1996) [gr-qc/9604021]. * (86) M. Visser, “Dirty black holes: Entropy as a surface term,” Phys. Rev. D 48, 5697-5705 (1993) doi:10.1103/PhysRevD.48.5697 [arXiv:hep-th/9307194 [hep-th]]. * (87) A. Ashtekar, A. Corichi and D. Sudarsky, “Nonminimally coupled scalar fields and isolated horizons,” Class. Quant. Grav. 20, 3413 (2003) [gr-qc/0305044]. * (88) M. Hortacsu, H. T. Ozcelik and B. Yapiskan, “Properties of solutions in (2+1)-dimensions,” Gen. Rel. Grav. 35, 1209-1221 (2003) doi:10.1023/A:1024445724029 [arXiv:gr-qc/0302005 [gr-qc]]. * (89) K. Hotta, Y. Hyakutake, T. Kubota, T. Nishinaka and H. Tanida, “The CFT-interpolating Black Hole in Three Dimensions,” JHEP 01, 010 (2009) doi:10.1088/1126-6708/2009/01/010 [arXiv:0811.0910 [hep-th]]. * (90) Y. Kwon, S. Nam, J. D. Park and S. H. Yi, “Extremal Black Holes and Holographic C-Theorem,” Nucl. Phys. B 869, 189-215 (2013) doi:10.1016/j.nuclphysb.2012.12.016 [arXiv:1208.4509 [hep-th]]. * (91) W. Xu and L. Zhao, “Charged black hole with a scalar hair in (2+1) dimensions,” Phys. Rev. D 87, no.12, 124008 (2013) doi:10.1103/PhysRevD.87.124008 [arXiv:1305.5446 [gr-qc]]. * (92) L. Zhao, W. Xu and B. Zhu, “Novel rotating hairy black hole in (2+1)-dimensions,” Commun. Theor. Phys. 61, no.4, 475-481 (2014) doi:10.1088/0253-6102/61/4/12 [arXiv:1305.6001 [gr-qc]]. * (93) J. Naji, “Energy Loss of a Heavy Particle near 3D Charged Rotating Hairy Black Hole,” Eur. Phys. J. C 74, 2697 (2014) doi:10.1140/epjc/s10052-013-2697-y [arXiv:1401.4422 [hep-th]]. * (94) S. H. Mazharimousavi and M. Halilsoy, “Einstein–Born–Infeld black holes with a scalar hair in three dimensions,” Mod. Phys. Lett. A 30, no.33, 1550177 (2015) doi:10.1142/S0217732315501771 [arXiv:1405.2956 [gr-qc]]. * (95) W. Xu, L. Zhao and D. C. Zou, “Three dimensional rotating hairy black holes, asymptotics and thermodynamics,” [arXiv:1406.7153 [gr-qc]]. * (96) M. Cárdenas, O. Fuentealba and C. Martínez, “Three-dimensional black holes with conformally coupled scalar and gauge fields,” Phys. Rev. D 90, no.12, 124072 (2014) doi:10.1103/PhysRevD.90.124072 [arXiv:1408.1401 [hep-th]]. * (97) W. Xu and D. C. Zou, “$(2+1)$ -Dimensional charged black holes with scalar hair in Einstein–Power–Maxwell Theory,” Gen. Rel. Grav. 49, no.6, 73 (2017) doi:10.1007/s10714-017-2237-4 [arXiv:1408.1998 [hep-th]]. * (98) P. A. González, J. Saavedra and Y. Vásquez, “Three-Dimensional Hairy Black Holes in Teleparallel Gravity,” Astrophys. Space Sci. 357, no.2, 143 (2015) doi:10.1007/s10509-015-2374-8 [arXiv:1411.2193 [gr-qc]]. * (99) J. Naji and S. Heshmatian, “Novel Rotating Hairy Black Hole in (2+1)-Dimensions and Shear Viscosity to Entropy Ratio,” Int. J. Theor. Phys. 53, no.8, 2579-2586 (2014) doi:10.1007/s10773-014-2056-2 * (100) E. Ayón-Beato, M. Bravo-Gaete, F. Correa, M. Hassaïne, M. M. Juárez-Aubry and J. Oliva, “First law and anisotropic Cardy formula for three-dimensional Lifshitz black holes,” Phys. Rev. D 91, no.6, 064006 (2015) doi:10.1103/PhysRevD.91.064006 [arXiv:1501.01244 [gr-qc]]. * (101) Q. Wen, “Strategy to Construct Exact Solutions in Einstein-Scalar Gravities,” Phys. Rev. 
D 92, no.10, 104002 (2015) doi:10.1103/PhysRevD.92.104002 [arXiv:1501.02829 [hep-th]]. * (102) E. Ayón-Beato, M. Hassaïne and J. A. Méndez-Zavaleta, “(Super-)renormalizably dressed black holes,” Phys. Rev. D 92, no.2, 024048 (2015) doi:10.1103/PhysRevD.92.024048 [arXiv:1506.02277 [hep-th]]. * (103) Z. Y. Fan and B. Chen, “Exact formation of hairy planar black holes,” Phys. Rev. D 93, no.8, 084013 (2016) doi:10.1103/PhysRevD.93.084013 [arXiv:1512.09145 [hep-th]]. * (104) B. Harms and A. Stern, “Spinning $\sigma$-model solitons in $2+1$ anti-de Sitter space,” Phys. Lett. B 763, 401-408 (2016) doi:10.1016/j.physletb.2016.10.075 [arXiv:1608.05116 [hep-th]]. * (105) H. T. Özçelik, R. Kaya and M. Hortaçsu, “Einstein gravity with torsion induced by the scalar field,” Annals Phys. 393, 132-144 (2018) doi:10.1016/j.aop.2018.04.012 [arXiv:1611.07496 [gr-qc]]. * (106) B. Harms and A. Stern, “Growing Hair on the extremal $BTZ$ black hole,” Phys. Lett. B 769, 465-469 (2017) doi:10.1016/j.physletb.2017.04.021 [arXiv:1703.10234 [gr-qc]]. * (107) C. Erices and C. Martínez, “Rotating hairy black holes in arbitrary dimensions,” Phys. Rev. D 97, no.2, 024034 (2018) doi:10.1103/PhysRevD.97.024034 [arXiv:1707.03483 [hep-th]]. * (108) Z. Y. Tang, Y. C. Ong, B. Wang and E. Papantonopoulos, “General black hole solutions in ( 2+1 )-dimensions with a scalar field nonminimally coupled to gravity,” Phys. Rev. D 100, no.2, 024003 (2019) doi:10.1103/PhysRevD.100.024003 [arXiv:1901.07310 [gr-qc]]. * (109) T. Karakasis, E. Papantonopoulos, Z. Y. Tang and B. Wang, “Black holes of (2+1)-dimensional $f(R)$ gravity coupled to a scalar field,” Phys. Rev. D 103, no.6, 064063 (2021) doi:10.1103/PhysRevD.103.064063 [arXiv:2101.06410 [gr-qc]]. * (110) P. Bueno, P. A. Cano, J. Moreno and G. van der Velde, “Regular black holes in three dimensions,” Phys. Rev. D 104, no.2, L021501 (2021) doi:10.1103/PhysRevD.104.L021501 [arXiv:2104.10172 [gr-qc]]. * (111) T. Karakasis, E. Papantonopoulos, Z. Y. Tang and B. Wang, “(2+1)-dimensional black holes in f(R,$\phi$) gravity,” Phys. Rev. D 105, no.4, 044038 (2022) doi:10.1103/PhysRevD.105.044038 [arXiv:2201.00035 [gr-qc]]. * (112) P. J. Arias, P. Bargueño, E. Contreras and E. Fuenmayor, “$2+1$ Einstein-Klein-Gordon black holes by gravitational decoupling,” Astronomy 1, no.1, 2-14 (2022) doi:10.3390/astronomy1010002 [arXiv:2203.00661 [gr-qc]]. * (113) C. Desa, W. Ccuiro and D. Choque, “Exact hairy black holes asymptotically $AdS_{2+1}$,” [arXiv:2210.06421 [hep-th]]. * (114) T. Karakasis, E. Papantonopoulos, Z. Y. Tang and B. Wang, “Rotating $(2+1)$-dimensional Black Holes in Einstein-Maxwell-Dilaton Theory,” [arXiv:2210.15704 [gr-qc]]. * (115) P. Bueno, P. A. Cano, J. Moreno and G. van der Velde, “Electromagnetic Generalized Quasi-topological gravities in $(2+1)$ dimensions,” [arXiv:2212.00637 [gr-qc]].
# Semi-Automated Virtual Unfolded View Generation Method of Stomach from CT Volumes

Masahiro Oda (1), Tomoaki Suito (1), Yuichiro Hayashi (2), Takayuki Kitasaka (3), Kazuhiro Furukawa (4), Ryoji Miyahara (4), Yoshiki Hirooka (5), Hidemi Goto (4), Gen Iinuma (6), Kazunari Misawa (7), Shigeru Nawano (8), Kensaku Mori (2,1)

(1) Graduate School of Information Science, Nagoya University; (2) Information and Communications Headquarters, Nagoya University; (3) School of Information Science, Aichi Institute of Technology; (4) Graduate School of Medicine, Nagoya University; (5) Department of Endoscopy, Nagoya University Hospital; (6) National Cancer Center; (7) Aichi Cancer Center; (8) International University of Health and Welfare Mita Hospital

###### Abstract

CT image-based diagnosis of the stomach has been developed as a new diagnostic method. A virtual unfolded (VU) view is suitable for displaying the stomach wall. In this paper, we propose a semi-automated method for generating VU views of the stomach. Our method requires minimal manual operation: the determination of the unfolding forces and the termination of the unfolding process are automated. The unfolded shape of the stomach is estimated based on its radius. The unfolding forces are determined so that the stomach wall is deformed to the expected shape. The iterative deformation process is terminated when the difference between the deformed shape and the expected shape is small. Our experiments using 67 CT volumes showed that our proposed method can generate good VU views in 76.1% of cases.

###### Keywords: Stomach, virtual unfolding, CT image

## 1 Introduction

In Japan, stomach cancer has the second-highest mortality rate among cancers [1]. Treatment of stomach cancer in its early stages is crucial. Gastric roentgenography and gastrofiberscopy are currently performed as diagnostic methods for stomach cancer, but these methods are physically and mentally burdensome for patients. In recent years, a CT image-based diagnostic method of the stomach has been developed as an alternative [2] that utilizes virtual gastroscopic views generated from CT images. Although CT image-based stomach diagnosis systems greatly reduce the inspection time for patients, physicians need to manually change the viewpoints and viewing directions of the virtual gastroscopic views many times during diagnosis. To reduce this burden on physicians, a virtual unfolded (VU) view of the stomach is suitable for displaying the stomach wall. This view enables physicians to observe the stomach wall at a glance. Much research has been reported on VU view generation for hollow organs; most of it generates VU views of the colon [3, 4]. Because these methods generate views in real time, their unfolding processes do not follow the physical properties of organ deformation. VU view generation methods for the stomach have been proposed based on a surface model [5] and a volumetric model [6]. Although these methods simulate realistic deformations, they require the following complicated manual operations: (1) incision line determination, (2) unfolding force determination, and (3) termination of the unfolding process. Truong et al. [7] automated (2), the unfolding force determination process. However, the results of their method depend heavily on the result of the incision line determination.
Their method requires many trial-and-error corrections in the incision line determination. Suito et al. [8] presented an automated method for (1), the incision line determination process. Each of these studies automates only one of the manual processes; the existing VU view generation methods of the stomach therefore still require complicated manual operations, are time-consuming, and produce VU views whose quality depends heavily on the skill of the user. In this paper, we propose a semi-automated method for generating VU views of the stomach from CT volumes. The contribution of this paper is the first semi-automated method that drastically reduces the manual operations in generating VU views of the stomach. The only manual operation required in our method is specifying two positions on CT images: the cardia and the pylorus. All other processes are automated. We determine the incision line using a previous method [8]. Then a stomach wall model is generated from a stomach wall region extracted from a CT volume. Unfolding forces are added to the model; we introduce a new method that automatically calculates their directions. The expected unfolded shape is estimated based on the stomach diameter, and the unfolding forces are determined so that the stomach wall is deformed to this expected shape. The Newmark-$\beta$ method [9] is introduced to simulate the elastic deformation of the model. Because the unfolding process is performed iteratively, it must be terminated after an appropriate number of iterations. For the termination of the unfolding process, we define a criterion that evaluates the progress of the unfolding based on the difference between the deformed stomach shape and the expected unfolded shape.

## 2 Method

### 2.1 Preprocessing

In the preprocessing step, we extract an air region in the stomach, a stomach wall region, and the centerline of the stomach region from abdominal CT volumes [8]. Then the incision lines are determined using the method shown in [8]. Since the incision line determination process [8] requires the positions of the cardia and the pylorus, these points are manually specified on the CT volumes by mouse click.

### 2.2 Definition of Stomach Wall Model

We utilize a previously proposed stomach wall model [7] to simulate the deformation. The model consists of a set of hexahedra covering the stomach wall region. The center of each hexahedron is a voxel in the stomach wall region. The lengths of the hexahedron edges in the $x,y$, and $z$ directions are $d,d$, and $\hat{d}$ voxels, respectively. Here, $\hat{d}=d\cdot(\rm{pixel\ spacing})/(\rm{slice\ spacing})$. The hexahedra on the incision line are removed from the stomach wall model. Each hexahedron is converted into an elastic model by placing mass points, springs, and dampers on its vertices, edges, and diagonal lines [7]. We define three sets of hexahedron vertices in the stomach wall model: $S_{\rm vo}$, $S_{\rm vi}$, and $S_{\rm vb}$. The sets of vertices on the outer and inner surfaces of the stomach wall model are denoted $S_{\rm vo}$ and $S_{\rm vi}$, respectively. The inner surface of the stomach wall model is the set of faces of hexahedra in contact with the air region in the stomach and the incision line. The set of vertices near the incision line, which is included in both $S_{\rm vo}$ and $S_{\rm vi}$, is defined as $S_{\rm vb}$ (Fig. 1).

Figure 1: Sets of vertices of hexahedra $S_{\rm vo}$, $S_{\rm vi}$, and $S_{\rm vb}$ on the stomach wall model.
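To make the data structure concrete, the following minimal sketch (an illustration, not the authors' implementation; the array layout, spacing values, and vertex-merging scheme are our assumptions) builds the hexahedral model from a binary stomach wall mask. The mass-point/spring/damper assembly of [7] is left schematic:

```python
import numpy as np

def build_wall_model(wall_mask, d=8, pixel_spacing=0.7, slice_spacing=0.7):
    """Place hexahedra of size d x d x d_hat on the stomach wall region."""
    d_hat = max(1, round(d * pixel_spacing / slice_spacing))
    centers = [(x, y, z)
               for z in range(0, wall_mask.shape[2], d_hat)
               for y in range(0, wall_mask.shape[1], d)
               for x in range(0, wall_mask.shape[0], d)
               if wall_mask[x, y, z]]
    # Eight vertices per hexahedron; vertices shared between neighboring
    # hexahedra are merged by their grid coordinates, so springs placed on
    # common edges would couple the elements.
    vertices, hexahedra = {}, []
    for cx, cy, cz in centers:
        ids = [vertices.setdefault((cx + ox, cy + oy, cz + oz), len(vertices))
               for ox in (-d // 2, d // 2)
               for oy in (-d // 2, d // 2)
               for oz in (-d_hat // 2, d_hat // 2)]
        hexahedra.append(ids)
    return np.array(list(vertices), dtype=float), hexahedra

# Toy usage on a synthetic binary mask standing in for the wall region.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:40, 20:40, 20:28] = True
verts, hexes = build_wall_model(mask)
print(len(verts), "vertices,", len(hexes), "hexahedra")
```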
### 2.3 Determination of Unfolding Force

The unfolding forces are added to the vertices of the hexahedra on the incision line, which is unfolded to a planar shape by the forces. We first define an unfolded plane on which the stomach wall model is stretched. Then we obtain the destination points on the plane, which the vertices on the incision line must reach after unfolding.

#### 2.3.1 Determination of Unfolded Plane

Unfolded plane $\Omega$ is uniquely defined by its normal vector ${\bf n}_{\Omega}$ and a point ${\bf b}_{\Omega}$ on it as $\displaystyle{\bf n}_{\Omega}$ $\displaystyle=$ $\displaystyle({\bf u}_{J/2}-{\bf x})/\parallel{\bf u}_{J/2}-{\bf x}\parallel,$ (1) $\displaystyle{\bf b}_{\Omega}$ $\displaystyle=$ $\displaystyle{\bf r}^{(0)}_{m},\ m={\rm argmax}_{V_{i}\in S_{\rm vi}}\left({\bf r}^{(0)}_{i}\cdot{\bf n}_{\Omega}\right),$ (2) where ${\bf u}_{j}\ (j=1,\ldots,J)$ is the point sequence forming the incision line, and ${\bf x}$ is the point on the line segment connecting ${\bf u}_{1}$ and ${\bf u}_{J}$ that satisfies $({\bf x}-{\bf u}_{1})\cdot({\bf u}_{J/2}-{\bf x})=0$. $V_{i}\ (i=1,\ldots,I)$ is the $i$-th vertex in the stomach wall model, and ${\bf r}^{(\alpha)}_{i}$ is the position of $V_{i}$ after $\alpha$ iterations of the unfolding process. The positions of ${\bf n}_{\Omega}$ and ${\bf b}_{\Omega}$ are shown in Fig. 2(a) and (b).

Figure 2: (a) Normal vector ${\bf n}_{\Omega}$, (b) point ${\bf b}_{\Omega}$ on $\Omega$, and (c) radius of stomach $\epsilon_{i^{\prime}}$ at $V_{i^{\prime}}$.

#### 2.3.2 Determination of Destination Point

This process consists of three steps: (Step 1) calculation of the stomach radius, (Step 2) calculation of the base line, and (Step 3) determination of the destination points. The thick and thin parts of the stomach wall must be unfolded widely and narrowly, respectively. We therefore compute the width of the unfolded view based on the radius of the stomach at each point on the incision line. Step 1 calculates the radius of the stomach. Step 2 computes the base line around which the stomach is unfolded. Step 3 determines the points to which each vertex on the incision line is forced to move.

(Step 1) Calculation of Stomach Radius: We obtain the radius of the stomach for each vertex on the incision line. The point sequence on the centerline of the stomach is described as ${\bf c}_{k}\ (k=1,\ldots,K)$. The ${\bf c}_{k}$ closest to ${\bf u}_{j}$ is obtained as ${\bf c}_{k^{\prime}}$, whose index is calculated by $k^{\prime}={\rm argmin}_{1\leq k\leq K}\parallel{\bf c}_{k}-{\bf u}_{j}\parallel.$ (3) $V_{i^{\prime}}\in S_{\rm vb}\ (i^{\prime}=1,\ldots,I^{\prime})$ is the $i^{\prime}$-th vertex on the incision line of the stomach wall model. The index of the ${\bf u}_{j}$ closest to ${\bf r}^{(0)}_{i^{\prime}}$ is obtained by $j^{\prime}={\rm argmin}_{1\leq j\leq J}\parallel{\bf r}^{(0)}_{i^{\prime}}-{\bf u}_{j}\parallel.$ (4) The radius of the stomach (Fig. 2(c)) at $V_{i^{\prime}}$ is calculated by $\epsilon_{i^{\prime}}=\parallel{\bf c}_{k^{\prime}}-{\bf u}_{j^{\prime}}\parallel.$ (5)

(Step 2) Calculation of Base Line: The base line is the long axis of the point set of the incision line projected onto the unfolded plane. Each point ${\bf u}_{j}$ is projected onto the closest point ${\bf u}^{\prime}_{j}$ on the unfolded plane. We apply principal component analysis to ${\bf u}^{\prime}_{j}$ to obtain its first and second eigenvectors, denoted ${\bf v}^{\prime}_{1}$ and ${\bf v}^{\prime}_{2}$.
The direction of the base line is given by $\displaystyle{\bf v}_{1}=\left\\{\begin{array}[]{ll}{\bf v}^{\prime}_{1}/\parallel{\bf v}^{\prime}_{1}\parallel,&\ if\ {\bf v}^{\prime}_{1}\cdot({\bf u}^{\prime}_{J}-{\bf u}^{\prime}_{1})<0,\\\ -{\bf v}^{\prime}_{1}/\parallel{\bf v}^{\prime}_{1}\parallel,&\ otherwise.\end{array}\right.$ (8) The base line is represented as a set of points, defined by $\displaystyle{\bf p}_{j}=\left\\{\begin{array}[]{ll}{\bf u}^{\prime}_{J/2}-{\bf v}_{1}\sum_{z=j}^{J/2-1}\parallel{\bf u}^{\prime}_{z+1}-{\bf u}^{\prime}_{z}\parallel,&\ if\ j=1,\ldots,J/2-1,\\\ {\bf u}^{\prime}_{J/2},&\ if\ j=J/2,\\\ {\bf u}^{\prime}_{J/2}+{\bf v}_{1}\sum_{z=J/2}^{j-1}\parallel{\bf u}^{\prime}_{z+1}-{\bf u}^{\prime}_{z}\parallel,&\ if\ j=J/2+1,\ldots,J.\end{array}\right.$ (12) The position of ${\bf p}_{j}$ is shown in Fig. 3(a).

(Step 3) Determination of Destination Point: Destination points ${\bf g}_{i^{\prime}}$ are determined based on the base line and the radius of the stomach. ${\bf g}_{i^{\prime}}$ is positioned $\pi\epsilon_{i^{\prime}}$ [mm] away from the base line on the unfolded plane (Fig. 3(b)): $\displaystyle{\bf g}_{i^{\prime}}=\left\\{\begin{array}[]{ll}{\bf p}_{j^{\prime}}+\pi\epsilon_{i^{\prime}}{\bf v}_{2},&\ if\ \\{({\bf u}_{j^{\prime}}-{\bf c}_{k^{\prime}})\times({\bf r}^{(0)}_{i^{\prime}}-{\bf c}_{k^{\prime}})\\}\cdot({\bf u}_{j^{\prime}}-{\bf u}_{j^{\prime}-1})\geq 0,\\\ {\bf p}_{j^{\prime}}-\pi\epsilon_{i^{\prime}}{\bf v}_{2},&\ otherwise,\end{array}\right.$ (15) where ${\bf v}_{2}$ is a unit vector perpendicular to the base line, obtained by $\displaystyle{\bf v}_{2}=\left\\{\begin{array}[]{ll}{\bf v}^{\prime}_{2}/\parallel{\bf v}^{\prime}_{2}\parallel,&\ if\ ({\bf v}^{\prime}_{2}\times{\bf v}_{1})\cdot{\bf n}_{\Omega}<0,\\\ -{\bf v}^{\prime}_{2}/\parallel{\bf v}^{\prime}_{2}\parallel,&\ otherwise.\end{array}\right.$ (18)

Figure 3: (a) Base line ${\bf p}_{j}$ calculated from projected incision line ${\bf u}^{\prime}_{j}$. (b) Destination point ${\bf g}_{i^{\prime}}$ corresponding to $V_{i^{\prime}}$.

#### 2.3.3 Unfolding Force Determination

The direction of the unfolding force for $V_{i^{\prime}}$ is calculated as $\displaystyle{\bf e}^{(\alpha)}_{i^{\prime}}={\bf g}_{i^{\prime}}-{\bf r}^{(\alpha)}_{i^{\prime}},$ (19) where $\alpha$ is the number of iterations of the iterative unfolding process. ${\bf e}^{(\alpha)}_{i^{\prime}}$ is updated at each iteration step of the unfolding process.

### 2.4 Unfolding Process Based on Elastic Deformation

We deform the stomach wall model by the Newmark-$\beta$ method [9] with the forces computed above. The shell-like volumetric model between the inner and outer surfaces shown in Fig. 1 is deformed here. We employ the elastic deformation procedure shown in [7]. In [7], the iterative unfolding process was terminated manually. We propose a criterion that evaluates the progress of the unfolding process in order to determine when to terminate the deformation. We introduce a new metric that measures the average distance between ${\bf r}^{(\alpha)}_{i^{\prime}}$ and its destination point, described as $\displaystyle D^{(\alpha)}=\frac{1}{N}\sum_{V_{i^{\prime}}\in S_{\rm vb}}\parallel{\bf r}^{(\alpha)}_{i^{\prime}}-{\bf g}_{i^{\prime}}\parallel,$ (20) where $N$ is the number of vertices included in $S_{\rm vb}$. If the average distance changes little between iterations, the stomach is adequately stretched.
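The following schematic sketch (an illustration, not the authors' code) pulls together Steps 1-3 of Sec. 2.3 and the termination test of Sec. 2.4, whose stopping rule is stated in (21) just below. It runs on synthetic stand-in data; the Newmark-$\beta$ elastic solver is replaced by a simple relaxation step, destination points are placed on a single side of the base line, and the sign conventions of (8), (15), and (18) are simplified:

```python
import numpy as np

# Synthetic stand-ins for the incision line u_j, the stomach centerline c_k,
# and the incision vertices r_i' of the wall model (here initialized on u_j).
t = np.linspace(0.0, np.pi, 41)
u = np.stack([30*np.cos(t), 30*np.sin(t), 4*t], axis=1)
c = np.stack([np.linspace(-25, 25, 60), np.zeros(60), np.full(60, 6.0)], axis=1)
r = u.copy()

# Step 1 (eqs. 3-5): stomach radius at each incision point via the nearest
# centerline point.
kp = np.argmin(np.linalg.norm(c[:, None] - u[None], axis=2), axis=0)
eps = np.linalg.norm(c[kp] - u, axis=1)

# Unfolded plane (eq. 1): x is the foot of u_{J/2} on the segment u_1-u_J.
mid = len(u) // 2
d = u[-1] - u[0]
x = u[0] + (np.dot(u[mid] - u[0], d) / np.dot(d, d)) * d
n = u[mid] - x
n /= np.linalg.norm(n)

# Step 2 (eqs. 8-12, sign convention simplified): project onto the plane,
# take the first principal direction, and lay out arc-length points.
up = u - np.outer(u @ n, n)
v1 = np.linalg.eigh(np.cov(up.T))[1][:, -1]
if v1 @ (up[-1] - up[0]) < 0:
    v1 = -v1
s = np.concatenate([[0.0], np.cumsum(np.linalg.norm(np.diff(up, axis=0), axis=1))])
p = up[mid] + np.outer(s - s[mid], v1)

# Step 3 (eq. 15, one side only): destinations pi*eps away from the base line.
v2 = np.cross(n, v1)
g = p + np.pi * eps[:, None] * v2

# Unfolding loop: force direction (eq. 19), toy relaxation step in place of
# the Newmark-beta solver, and the termination test of eqs. (20)-(21).
kappa, eta, D_prev = 0.5, 0.05, np.inf
for it in range(1000):
    e = g - r                                   # eq. (19)
    r = r + eta * e                             # placeholder deformation
    D = np.mean(np.linalg.norm(r - g, axis=1))  # eq. (20)
    if abs(D_prev - D) <= kappa:                # eq. (21)
        break
    D_prev = D
print(f"stopped after {it + 1} iterations, D = {D:.2f}")
```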
The iterative unfolding process is terminated when the condition

$|D^{(\alpha-1)}-D^{(\alpha)}|\leq\kappa$ (21)

is satisfied.

### 2.5 Unfolded View Generation

After the deformation of the stomach wall model, we generate an unfolded volume using the correspondence between the stomach wall models before and after deformation. The unfolded view is generated by volume rendering of the unfolded volume.

## 3 Experiments and Results

We evaluated the proposed method using 67 CT volumes acquired at two hospitals with the stomach distended by a foaming agent. The acquisition parameters of the CT volumes were: image size, 512$\times$512 pixels; number of slices, 371-644; pixel spacing, 0.625-0.774 mm; slice spacing, 0.4-1.0 mm; slice thickness, 0.5-1.0 mm. Parameter values $d$ = 8 voxels and $\kappa=0.5$ were chosen experimentally. The quality of the VU views was visually evaluated by surgeons and engineering researchers using four grades: excellent (the entire region of the VU view is visible from a single viewpoint), good (one small failure part (overlapping, bending, or broken) exists on the VU view), fair, and bad (the area of the VU view that is not visible from a viewpoint because of failure parts exceeds 30% and 50%, respectively). The generated VU views were compared with the flattened pathological specimens of the resected stomachs from the patients. We prepared ground-truth VU views with manual input of the incision lines and the unfolding forces.

The generated VU views are shown in Fig. 4. The numbers of cases rated in the top one (excellent) and top two (excellent or good) classifications were 26 (38.8%) and 51 (76.1%), respectively. Since the quality of the VU views depends on the value of $\kappa$, we generated VU views with varying $\kappa$ values for 19 cases. Table 1 shows the numbers of good results and of cases that produced overlapping, bending, and broken parts of the stomach wall in the VU views. We compared the computation time of VU view generation between the proposed method and a previous method [6] (computer: two Intel Xeon 3.33 GHz processors, 4 GB RAM). The proposed method took about 19 seconds for the manual input and 458 seconds for the automated process. In the previous method [6], the manual processes, including the incision line determination, the unfolding force determination, and the termination of the unfolding process, took about 900 seconds.

Figure 4: Ground truth VU views, VU views generated by proposed method, and flattened pathological specimens of two cases. Circles indicate positions of a cancer.

Table 1: Numbers of good results and numbers of cases that caused overlapping, bending, and broken parts of stomach wall on generated VU views for various $\kappa$.

$\kappa$ | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0
---|---|---|---|---|---|---|---|---|---|---
Good | 7 | 10 | 10 | 10 | 10 | 9 | 8 | 7 | 7 | 7
Overlapping | 5 | 4 | 2 | 2 | 2 | 2 | 1 | 0 | 0 | 0
Bending | 10 | 7 | 8 | 7 | 6 | 7 | 7 | 7 | 7 | 7
Broken | 1 | 1 | 2 | 2 | 2 | 2 | 3 | 4 | 4 | 4

## 4 Discussion

As Fig. 4 shows, the VU views generated by our proposed method clearly represent the shapes of a cancer and the stomach wall. Although the shapes of the views generated by the proposed method differ slightly from the ground-truth VU views, surface features such as the cancer on the stomach wall were well observed. The surface shape is important for finding lesions. These observations also apply to the other cases judged as "Good".
Therefore, the VU views generated by our proposed method are applicable for diagnosis. From Table 1, the number of broken parts in the VU views becomes larger when large values of $\kappa$ are used. When $\kappa$ is large, the iteration of the unfolding process is terminated even while the stomach wall model is still deforming substantially. In such cases, several parts of the stomach wall model are greatly stretched, and these largely stretched parts may cause broken parts in the VU views. The number of overlapping parts in the VU views becomes larger for smaller $\kappa$. In this case, the number of unfolding iterations becomes large because the iteration continues until the incision lines of the model reach the destination points. In the unfolding process [7], forces are added to the vertices on the outer and inner surfaces of the stomach wall model, and the magnitude of these forces increases as the iteration progresses. This causes overlapping in the VU views. To minimize broken or overlapping parts, $\kappa=0.5$ was selected as the optimal value for testing the 67 cases.

This paper presented a semi-automated VU view generation method for the stomach. The automation of VU view generation also yields qualitatively stable VU views. The quality of VU views produced with the previous method depends heavily on the experience level of the user; our method generates VU views with minimal influence of the user's experience level. It also has the potential to enable new diagnostic procedures for the stomach, such as screening examinations for early gastric cancers or planning of gastric cancer surgery.

## 5 Conclusion

This paper presented a semi-automated VU view generation method. The determination of the unfolding forces and the termination of the unfolding were automated in our method. Experiments using 67 CT volumes showed that our proposed method can generate VU views of the same quality as manually generated ones. Future work includes the automation of cardia and pylorus detection, the unfolding of twisted stomachs, and evaluation with more samples.

#### Acknowledgments.

Parts of this research were supported by the MEXT, the JSPS KAKENHI Grant Numbers 21103006, 25242047, and the Kayamori Foundation of Informational Science Advancement.

## References

* [1] Isobe, Y., Nashimoto, A., Akazawa, K., Oda, I., Hayashi, K., Miyashiro, I., Katai, H., Tsujitani, S., Kodera, Y., Seto, Y., Kaminishi, M.: Gastric cancer treatment in Japan: 2008 annual report of the JGCA nationwide registry. Gastric Cancer, vol.14, no.4, pp.301-316 (2011)
* [2] Furukawa, K., Miyahara, R., Itoh, A., Ohmiya, N., Hirooka, Y., Mori, K., Goto, H.: Diagnosis of the invasion depth of gastric cancer using MDCT with virtual gastroscopy: comparison with staging with endoscopic ultrasound. Am. J. Roentgenology, 197(4), 867-875 (2011)
* [3] Wang, G., McFarland, E.G., Brown, B.P., Vannier, M.W.: GI tract unraveling with curved cross sections. IEEE Trans. Med. Imag., 17(2), 318–322 (1998)
* [4] Bartroli, A.V., Wegenkittl, R., Konig, A., Groller, E.: Virtual colon unfolding. In: IEEE Visualization (2001)
* [5] Mori, K., Hoshino, Y., Suenaga, Y., Toriwaki, J., Hasegawa, J., Katada, K.: An improved method for generating virtual stretched view of stomach based on shape deformation. In: Proc. CARS2001, pp.425–430 (2001)
* [6] Mori, K., Oka, H., Kitasaka, T., Suenaga, Y.: A method for generating unfolded views of the stomach based on volumetric image deformation. In: Proc.
SPIE Medical Imaging, vol.5746, pp.340–351 (2005)
* [7] Truong, T.D., Kitasaka, T., Mori, K., Suenaga, Y.: A physically-based method for unfolding the stomach from 3D CT images. In: Proc. IASTED Int. Conf. Computer Graphics and Imaging, pp.231–236 (2008)
* [8] Suito, T., Oda, M., Kitasaka, T., Iinuma, G., Misawa, K., Nawano, S., Mori, K.: Automated incision line determination for virtual unfolded view generation of the stomach from 3D abdominal CT images. In: Proc. SPIE Medical Imaging, vol.8315, pp.83151M-1–7 (2012)
* [9] Newmark, N.M.: A method of computation for structural dynamics. Journal of Engineering Mechanics, Proc. ASCE, vol.85, no.EM3, pp.67-94 (1959)
# Satellite galaxies' drag on field stars in the Milky Way

Xilong Liang, School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China; National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China

Jifeng Liu, School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China; National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China; WHU-NAOC Joint Center for Astronomy, Wuhan University, Wuhan, China

Jingkun Zhao, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China; School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China

Kun Xu, School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China; National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China

###### Abstract

With Gaia EDR3 data, the velocity dispersion of Milky Way field stars around satellite galaxies has been investigated. We have fitted the velocity dispersion against the distance to each satellite galaxy and found that the gradient of the velocity dispersion is related to the mass of the satellite galaxy. With order-of-magnitude approximations, a linear correlation has been fitted between the mass of a satellite galaxy and the gradient of the velocity dispersion caused by its gravitational drag. Though our result is qualitative and observational, it shows that a better relation could be obtained with more observations in the future.

Galaxies: Dwarf; Galaxies: Kinematics and Dynamics; Stellar kinematics

## 1 Introduction

A popular way to provide rough mass constraints for spheroidal galaxies is the virial theorem (Poveda, 1958). With the spherical Jeans equation, accurate masses can be estimated for dispersion-supported stellar systems (Wolf et al., 2010). Formalisms have been built relating the velocity dispersion of stars in a dwarf galaxy to the mass within its half-light radius, which yield mass estimates consistent with the results of a full Jeans analysis (Walker et al., 2009). As observational capabilities grow, more and more individual stars can be discerned within dwarf galaxies, but it is still easier to observe individual stars around the outskirts than those in the center of dwarf galaxies. Stars near the boundary are more sensitive to traces of dynamical interaction between a satellite galaxy and its host galaxy. Tidal tails are not the target of this research; we focus on those satellites that show little evidence of Galactic cannibalism, namely no clear stellar streams stripped from the satellites by the tidal force of the host galaxy.

Previous researchers have investigated the interaction of galaxies by applying the dynamical friction formula (Chandrasekhar, 1943) or by direct numerical simulation (Lin & Tremaine, 1983; White, 1983), but they mainly focused on the orbital decay of the satellite. Weinberg (1989) studied the response of a spherical stellar system to the periodic perturbation of a satellite. Carlberg & Hartwick (1989) found that sinking satellites can make the velocity dispersion parallel to the disk larger than the vertical velocity dispersion, as is observed. Lacey (1984) studied the influence of massive gas clouds on stellar velocity dispersions in galactic discs, which is close to our work. When a satellite galaxy passes through the halo of our Galaxy, the impulse approximation is adopted to treat its gravitational effect on field stars.
The impulse approximation (Öpik, 1932) was introduced to calculate stellar perturbations on comets (Oort, 1950), but the situation is similar for satellite galaxies. Section 2 introduces the data and sample used in this research. Section 3 introduces the model adopted to fit the velocity distributions. The final section presents the results and discussion.

## 2 Data analysis

### 2.1 Data

In the Gaia era (Gaia Collaboration et al., 2016), many stars around satellite galaxies have available information on spatial positions and proper motions. For every satellite listed in Table 1, we selected all sources in Gaia EDR3 (Gaia Collaboration et al., 2021) that are within $5^{\circ}$ of the center of each galaxy. When these data were downloaded from the Gaia gaiaedr3.gaia_source catalogue, several quality criteria were adopted, namely $\textrm{RUWE}\leq 1.4$, $bp\_rp\geq 0.2$, and $parallax-parallax\_error<0.03$, where RUWE is the renormalized unit weight error (Riello et al., 2021). Parallaxes have been corrected using the prescription suggested in Lindegren et al. (2021) and used to select the sample around each satellite galaxy. We did not use the parallax to calculate the distance of each star, since most satellite galaxies are too far away. For each satellite galaxy, its surrounding stars were selected by applying a parallax cut according to its known distance from Earth. The upper limits of parallax are all relatively tight, taken to correspond to about 10 to 30 kpc away from each satellite galaxy. The lower limits of parallax are relatively loose; most of them are negative so as to include enough stars. All stars within the parallax cut are then assumed to lie at the same distance from Earth as the satellite galaxy. Since we focus only on the distribution of tangential velocities, the error caused by this distance approximation will not be larger than a factor of order unity.

Velocities along the right ascension and declination directions have been calculated by projecting all stars to the same distance as the satellite galaxy, which no doubt induces large uncertainties. Without radial velocities and distances, the fitting of the velocity distribution can only be done in two dimensions. Unfortunately, the sum of the velocity dispersions along two directions is not necessarily two thirds of the total velocity dispersion in all three dimensions, because the velocity dispersions may differ between directions. We think that using the sum of the velocity dispersions in two dimensions to represent that in three dimensions only induces an uncertainty of order unity.

To show variations of the velocity distribution along the radius of each satellite galaxy, stars have been divided into concentric spatial annuli in the projected plane. The left panel of figure 1 shows the concentric annuli taken for the Aquarius2 dwarf galaxy as an example. These concentric annuli have the same area for the same satellite galaxy, but different areas for different galaxies, because the masses of the satellite galaxies and the densities of their surrounding field stars differ. We assumed that stars are approximately homogeneously distributed in space, so equal areas ensure that each annulus sample contains a similar number of stars for a given satellite galaxy; the parallax approximation then has the same effect on all samples around one satellite galaxy. The velocity distributions of the stars in each annulus sample have been fitted separately.
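For reference, a selection of this kind can be expressed as a single ADQL query through astroquery's Gaia module. The sketch below follows the quality cuts stated above, while the field center and the parallax window are illustrative values for one satellite.

```python
from astroquery.gaia import Gaia

ra0, dec0 = 338.48, -9.33      # illustrative field center (roughly Aquarius2)
plx_lo, plx_hi = -5.0, 0.05    # loose lower / tight upper parallax limits (mas)

query = f"""
SELECT ra, dec, pmra, pmdec, parallax, parallax_error, ruwe, bp_rp
FROM gaiaedr3.gaia_source
WHERE CONTAINS(POINT('ICRS', ra, dec),
               CIRCLE('ICRS', {ra0}, {dec0}, 5.0)) = 1
  AND ruwe <= 1.4
  AND bp_rp >= 0.2
  AND parallax - parallax_error < 0.03
  AND parallax BETWEEN {plx_lo} AND {plx_hi}
"""
job = Gaia.launch_job_async(query)
stars = job.get_results()      # an astropy Table of candidate stars
```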
The inner radius of the smallest annulus of each satellite galaxy is taken to be larger than $0.5^{\circ}$, so that the velocity distribution is dominated by field stars belonging to the Milky Way. For each single star, whether it belongs to the satellite galaxy or to the field stars of the Milky Way is not important for us; what we want is the velocity distribution of the field stars. Each spatial annulus sample has been divided in velocity space into square bins of size 0.2 mas yr-1 (about the median uncertainty of the proper motions). We then took the star counts in each bin to represent the density distribution, and took the mean of the proper-motion uncertainties of the stars in each bin as the uncertainty of that bin. We have ignored the uncertainty of the distance, which should be the dominant part of the total uncertainty, but we think this treatment can still represent the relative size of the uncertainties between bins. The velocity distribution model described by equation 1 has then been adopted to fit the density distribution in velocity space. The EMCEE code (Foreman-Mackey et al., 2013) has been used to obtain the best-fit parameters and their uncertainties. We used uniform priors on the free parameters and the logarithm of the probability function. As for the fitting steps, we first used the optimize module of the Python Scipy package to obtain a least-squares estimate of the parameters, and then used it to initialize 100 random walkers. After throwing away the initial 100 steps as "burn-in" steps, we ran 1000 steps of MCMC. The parameters and their uncertainties are taken from the 16th, 50th, and 84th percentiles of the samples in the marginalized distributions of the MCMC results.

### 2.2 Model

The equilibrium state of an infinite, homogeneous stellar system can be described by a Maxwellian function in phase space ($\overrightarrow{x},\overrightarrow{v}$), and the velocity distribution is described by the Schwarzschild distribution function. We used three components to fit the velocity distributions of the stars surrounding the satellite galaxies of the Milky Way: one for field stars of the Milky Way, one for stars from the satellite galaxy itself, and one for possible observational artifacts. The right panel of figure 1 shows the three fitted components of the velocity distribution of each annulus taken for the Aquarius2 dwarf galaxy as an example. The black star in the right panel marks the known velocity of the Aquarius2 dwarf galaxy from the literature (McConnachie & Venn, 2020). Grey dots in the background show the velocity distribution of the whole sample. The three ellipses of each color represent the three components fitted for the stars in each annulus sample. Though we do not know which stars actually come from the satellite galaxy, they are assumed to have a velocity distribution distinct from that of the field stars. With the main part of the stars from the satellite galaxy already fitted by one component, the small number of satellite stars mixed into the main part of the field stars has a negligible effect on the velocity distribution of the field stars. The Schwarzschild distribution function used to fit the velocity distribution is

$f(v_{1},v_{2})=C\exp\left(-\frac{((v_{1}-v_{1c})\cos t+(v_{2}-v_{2c})\sin t)^{2}}{a^{2}}-\frac{(-(v_{1}-v_{1c})\sin t+(v_{2}-v_{2c})\cos t)^{2}}{b^{2}}\right)$ (1)

in which $a$ and $b$ are the two axes of the velocity ellipse, $t$ is the angular separation between the axes of the velocity ellipse and the coordinate axes, the coordinate $v=(v_{1},v_{2})$ is the observed velocity of each star, and $(v_{1c},v_{2c})$ is the center of the velocity ellipse.
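A minimal sketch of this fitting procedure for a single Schwarzschild component is given below (the actual model sums three such components); the walker and step counts follow the text, while the prior bounds, initial guess, and mock data are illustrative.

```python
import numpy as np
import emcee

def schwarzschild(v1, v2, C, v1c, v2c, a, b, t):
    # Eq. (1): a rotated two-dimensional Gaussian in velocity space
    x = (v1 - v1c) * np.cos(t) + (v2 - v2c) * np.sin(t)
    y = -(v1 - v1c) * np.sin(t) + (v2 - v2c) * np.cos(t)
    return C * np.exp(-(x / a) ** 2 - (y / b) ** 2)

def log_prob(theta, v1, v2, counts, sigma):
    C, v1c, v2c, a, b, t = theta
    if C <= 0 or a <= 0 or b <= 0 or not 0.0 <= t < np.pi:  # uniform priors
        return -np.inf
    model = schwarzschild(v1, v2, *theta)
    return -0.5 * np.sum(((counts - model) / sigma) ** 2)

# bin centers of the 0.2 mas/yr grid and mock star counts per bin
v1, v2 = np.meshgrid(np.arange(-10, 10, 0.2), np.arange(-10, 10, 0.2))
theta0 = np.array([50.0, 0.0, 0.0, 2.0, 1.0, 0.5])   # scipy least-squares guess
counts = schwarzschild(v1, v2, *theta0) + np.random.randn(*v1.shape)

nwalkers, ndim = 100, 6
p0 = theta0 + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(v1, v2, counts, 1.0))
sampler.run_mcmc(p0, 1100)                            # 100 burn-in + 1000 kept
flat = sampler.get_chain(discard=100, flat=True)
lo, med, hi = np.percentile(flat, [16, 50, 84], axis=0)
```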
The same functional form has been applied to all three components, and all parameters are free in the fitting. We take the fitted component with the largest velocity dispersion as the velocity distribution of the field stars. The square of the velocity dispersion is taken as $\sigma_{v}^{2}=a^{2}+b^{2}$. We obtained the velocity dispersion of the Milky Way field-star component for every concentric spatial annulus, and then fitted the parameters of equation 7 with the velocity dispersions against the distance $r$ to the center of the satellite galaxy. The EMCEE code has been used to obtain the best-fit parameters and their uncertainties. Though many approximations enter our treatment of the velocities and their uncertainties, we think our result can still reveal the main tendencies.

Figure 1: The concentric annuli taken for the Aquarius2 dwarf galaxy and the corresponding velocity ellipses of the velocity distributions fitted for the stars in each annulus. Each dot represents a star, and colors correspond to each other in the two panels. The grey dots in the right panel are all stars in all annuli, while the black star in the right panel marks the known velocity of the Aquarius2 dwarf galaxy from the literature.

The dynamical friction between a satellite galaxy and the field stars of the Milky Way systematically transfers energy from their relative orbital motion into random motions of their constituent stars. The process considered here is a satellite galaxy with mass $M$ passing through field stars with mass $m\ll M$. Though satellite galaxies are large, complicated systems, we treat them as point masses for simplicity. Moreover, the spatial distribution of field stars has been roughly approximated as infinite and homogeneous, because the Galaxy is much larger than its satellite galaxies. Since the satellite galaxy is much more massive than the field stars, the dominant effect of the encounters is to exert dynamical friction on the satellite galaxy. Stars with impact parameters smaller than the radius of the satellite galaxy are assumed to have been accreted by the satellite galaxy or to switch from one circular orbit to another, so they do not increase the stars' random velocities. Stars with impact parameters similar to the radius of the satellite galaxy experience strong encounters; we assume they gain so much energy that they end up far from the main part of the stars in velocity space, and therefore do not affect the velocity distribution of the main part of the field stars.

The impulse approximation is valid only if the encounter time is short compared to the crossing time. According to page 655, chapter 8.2 of Binney & Tremaine (2008), the crossing time (sometimes also called the dynamical time) can be estimated as $r/\sigma$, where $r$ is the Galactic radius and $\sigma$ is the velocity dispersion at that radius. The encounter time can be estimated as $\sim l/v$, where $l$ is the size of the region passed through by the satellite galaxy. Table 1 lists our estimates of the crossing time and encounter time of each satellite galaxy in our sample. $\sigma$ is calculated directly as the square root of the $\sigma_{v_{0}}^{2}$ listed in table 1. The size of the region we study around each satellite galaxy is around 10 kpc, which is used to calculate the encounter time for the whole sample.
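Both timescales reduce to simple ratios once units are fixed; a small sketch with hypothetical numbers follows.

```python
KPC_PER_KMS_IN_MYR = 977.8   # 1 kpc / (1 km/s) expressed in Myr

def crossing_time(r_kpc, sigma_kms):
    # crossing (dynamical) time ~ r / sigma, cf. Binney & Tremaine (2008)
    return r_kpc / sigma_kms * KPC_PER_KMS_IN_MYR

def encounter_time(l_kpc, v_rel_kms):
    # encounter time ~ l / v for a region of size l crossed at speed v
    return l_kpc / v_rel_kms * KPC_PER_KMS_IN_MYR

# hypothetical numbers: a field at r = 100 kpc with sigma = 150 km/s,
# crossed over l = 10 kpc at a relative speed of 300 km/s
print(crossing_time(100.0, 150.0), encounter_time(10.0, 300.0))
```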
To obtain the relative velocity, the fitted center of the velocity distribution of the Galactic field-star component is taken as the tangential velocity of the Milky Way, and its radial velocity is taken as zero. For the satellite galaxies, proper motions and radial velocities have been taken from Li et al. (2021) and Simon (2019). The coordinates module of the Astropy package has been used to transform observables into velocities. The impulse approximation is not a good approximation for those satellite galaxies whose encounter time is not significantly shorter than their crossing time, and they have been removed from our sample. With the impulse approximation, the velocity perturbation in the encounter for each field star is $\Delta v_{\perp}=\frac{GM}{bv}$, where $G$ is the gravitational constant; $M$ is the mass of the satellite galaxy, which has been assumed to be a point mass; $b$ is the distance of closest approach; and $v$ is the velocity of the field star relative to the satellite galaxy. Supposing the direction of the velocity of each field star does not change much during the impact, the line of $b$ is approximately perpendicular to $\vec{v}$, thus

$b\thickapprox\frac{|\vec{r}\times\vec{v}|}{|\vec{v}|}=\frac{|(r\cos\theta,r\sin\theta)\times(v_{1},v_{2})|}{|\vec{v}|},$ (2)

where $\vec{r}=(r\cos\theta,r\sin\theta)$ is the star's spatial coordinate with origin at the satellite galaxy, while $\vec{v}=(v_{1},v_{2})$ is the star's velocity coordinate. We made the simplifying approximation that the encounter only adds energy along the direction perpendicular to the original velocity direction. Since the stars are assumed to be homogeneously distributed in space, the mean of the velocity perturbation is zero, but the velocity dispersion becomes larger. The variance of the perturbed velocity component $v_{\perp}$ can be calculated by integrating over the entire phase space:

$\overline{(\Delta v_{\perp})^{2}}=\int\left(\frac{GM}{bv}\right)^{2}f(v)\,dv\,dr\,r\,d\theta$ (3)

$=\int\frac{G^{2}M^{2}}{(v_{2}r\cos\theta-v_{1}r\sin\theta)^{2}}f(v_{1},v_{2})\,dr\,dv_{1}\,dv_{2}\,r\,d\theta$ (4)

$=\int\frac{G^{2}M^{2}}{r}\,dr\,\frac{f(v_{1},v_{2})\,dv_{1}\,dv_{2}\,d\theta}{(v_{1}\sin\theta-v_{2}\cos\theta)^{2}}$ (5)

$M$ is constant, while $\int G^{2}\frac{f(v_{1},v_{2})\,dv_{1}\,dv_{2}\,d\theta}{(v_{1}\sin\theta-v_{2}\cos\theta)^{2}}$, integrated over the whole velocity space, is only a function of the velocity dispersion and has little dependence on $r$. Denoting this integral by $C$, equation 5 can be expressed as

$\overline{(\Delta v_{\perp})^{2}}=M^{2}C\int\frac{dr}{r}$ (6)

For an annulus with small width, the change of the velocity square is approximately

$\overline{(\Delta v_{\perp})^{2}}=\left(\ln\frac{r+\Delta r}{r}\right)M^{2}C=\sigma_{v}^{2}-\sigma_{v0}^{2}$ (7)

where $C$ is constant with respect to $r$, provided possible correlations between the velocity and spatial distributions of the stars in an annulus are ignored. Here $\sigma_{v}^{2}$ is the present total velocity dispersion, while $\sigma_{v_{0}}^{2}$ is the total velocity dispersion before the velocity perturbation. Fitting $\sigma_{v}^{2}$ against $(\ln\frac{r+\Delta r}{r})$, we obtain $M^{2}C$ and $\sigma_{v_{0}}^{2}$ at the same time. The first eight panels of figure 2 show the corner distributions in the parameter space of the two axes of the velocity ellipse for each annulus sample of AquaII, while the remaining panels show the corner distributions of $M^{2}C$ versus $\sigma_{v_{0}}^{2}$ for each satellite galaxy in our sample.
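Since equation 7 is linear in $\ln\frac{r+\Delta r}{r}$, the essence of the final fit can be sketched with plain least squares (the text uses EMCEE to also obtain uncertainties); the radii and dispersions below are placeholders.

```python
import numpy as np

# r, dr: inner radius and width of each annulus; sigma2: fitted sigma_v^2 values
r = np.array([1.0, 2.0, 3.0, 4.0, 5.0])                      # placeholder radii
dr = np.full_like(r, 1.0)
sigma2 = np.array([1450.0, 1390.0, 1365.0, 1350.0, 1340.0])  # placeholder values

x = np.log((r + dr) / r)
M2C, sigma0_sq = np.polyfit(x, sigma2, 1)  # slope = M^2 C, intercept = sigma_v0^2
print(M2C, sigma0_sq)
```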
Though only a few points have been used to fit equation 7 for each satellite galaxy, we think this is enough to reveal the main relation.

Figure 2: The corner plots of the two fitted axes of the velocity ellipses of the annulus samples of AquaII, and of $M^{2}C$ versus $\sigma_{v_{0}}^{2}$ for each satellite galaxy.

## 3 Results and Discussion

Figure 3: Fitted $M^{2}C$ versus known mass $M_{0}^{2}$.

Figure 3 shows the fitted $M^{2}C$ versus the known mass $M_{0}^{2}$ in logarithmic coordinates, chosen for readability since the mass squared spans several orders of magnitude. $M^{2}C$ is the coefficient fitted from equation 7, while $M_{0}^{2}$ is the known mass of each satellite galaxy from the literature listed in table 1. The blue straight line is simply a line fitted between $M^{2}C$ and $M_{0}^{2}$ and does not indicate that $C$ has to be a constant. The error bars symbolically show three times the uncertainties obtained in the previous steps, since the uncertainty introduced by the distance should be much larger than that introduced by the proper motions. The Spearman correlation coefficient between $M^{2}C$ and $M_{0}^{2}$ is 0.81, which means their correlation is close to a linear relation. The Spearman correlation coefficient between $\sigma_{v0}^{2}$ and the distance from Earth is 0.96, which means that correlation is also very close to linear. Our result at least qualitatively shows that the gradient of the velocity dispersion of field stars around a satellite galaxy is strongly related to the total mass of the satellite galaxy. Without full three-dimensional velocity information, it is meaningless to describe the correlation with functions more complicated than a linear relation. Since gravity is always an attractive force, an equilibrium gravitational system must be inhomogeneous. Our relation is obtained under the assumption of an infinite, homogeneous system of field stars, which is only an approximation of the real observed spatial distribution, but it can illuminate much of the response behavior of a real stellar system. We think the total uncertainty should not be more than an order of magnitude larger than the parameter itself. The gravitational mass of a satellite galaxy is no doubt related to the enlargement of the velocity dispersion of field stars caused by its gravitational drag. Our result is only an order-of-magnitude argument, but in the future, large surveys like CSST and LSST will cover more stars in the local universe and will provide radial velocities and even chemical abundance information. With more proper motions and parallaxes from future Gaia data releases, it is possible that many more stars will have full six-dimensional information. Even though distinguishing single stars in the dense centers of dwarf galaxies will remain very hard, more precise analyses of the field stars around satellite galaxies are promising.

We thank the anonymous referee for his/her constructive suggestions. This work is supported by the Strategic Priority Program of the Chinese Academy of Sciences under grant number XDB41000000, the Fundamental Research Funds for the Central Universities, and a project funded by the China Postdoctoral Science Foundation, No. 2021M703168. This study is also supported by the National Natural Science Foundation of China under grants No. 11973048, 11927804, 11890694 and the National Key R&D Program of China No. 2019YFA0405502. We acknowledge the support from the 2m Chinese Space Station Telescope project: CMS-CSST-2021-A10, CMS-CSST-2021-B03, CMS-CSST-2021-B05.
The authors also acknowledge all the open-source software involved in this study, especially TOPCAT, Python, and R. This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular, the institutions participating in the Gaia Multilateral Agreement.

Table 1 lists the fitted parameters of equation 7, the distances, the known masses of the satellite galaxies, and the referenced literature. Columns $ct$ and $et$ are the crossing time and encounter time, respectively, in units of Myr.

Table 1: Parameters.

Name | $M^{2}C$ | $\sigma_{v0}^{2}$ | $dis$ | $M_{0}$ | $ct$ | $et$ | reference
---|---|---|---|---|---|---|---
unit | $10^{4}$ | $10^{4}$ km$^{2}$ s$^{-2}$ | kpc | $10^{6}\,M_{\odot}$ | Myr | Myr |
Aquarius2 | $267^{+26}_{-37}$ | $1332^{+28}_{-23}$ | 105 | 2.7 | 28.8 | 4.0 | Torrealba et al. (2016)
Carina1 | $1390^{+134}_{-117}$ | $363^{+123}_{-126}$ | 107 | 6.3 | 55.8 | 5.1 | McConnachie (2012)
Canes Venatici1 | $3561^{+337}_{-305}$ | $5995^{+412}_{-321}$ | 218 | 27.0 | 28.1 | 2.7 | Simon & Geha (2007)
Canes Venatici2 | $480^{+58}_{-44}$ | $2811^{+49}_{-41}$ | 161 | 0.91 | 30.2 | 4.0 | McConnachie (2012)
Coma Berenices | $378^{+48}_{-37}$ | $197^{+39}_{-33}$ | 45 | 0.94 | 32.1 | 8.6 | McConnachie (2012)
Crater2 | $397^{+38}_{-37}$ | $1871^{+43}_{-39}$ | 116 | 5.5 | 26.7 | 4.5 | Ji et al. (2021)
Draco1 | $539^{+53}_{-51}$ | $409^{+54}_{-47}$ | 76 | 11.0 | 37.5 | 8.8 | McConnachie (2012)
Draco2 | $75^{+10}_{-7}$ | $71^{+8}_{-7}$ | 24 | 0.32 | 28.7 | 12.4 | Martin et al. (2016)
Eridanus2 | $1551^{+146}_{-148}$ | $9261^{+146}_{-162}$ | 382 | 12.0 | 39.6 | 2.6 | Li et al. (2017)
Grus1 | $755^{+65}_{-80}$ | $2546^{+75}_{-71}$ | 116 | 2.5 | 23.0 | 4.9 | Walker et al. (2016)
Hercules | $103^{+82}_{-140}$ | $1081^{+99}_{-121}$ | 126 | 2.6 | 38.4 | 3.3 | McConnachie (2012)
Hydra2 | $54^{+9}_{-6}$ | $2575^{+8}_{-5}$ | 148 | 0.8 | 29.1 | 3.1 | Kirby et al. (2015)
Leo1 | $2550^{+229}_{-256}$ | $7270^{+202}_{-194}$ | 258 | 12.0 | 30.2 | 1.4 | McConnachie (2012)
Leo2 | $734^{+76}_{-71}$ | $4770^{+67}_{-79}$ | 236 | 4.6 | 34.0 | 1.6 | McConnachie (2012)
LeoT | $393^{+34}_{-43}$ | $7647^{+37}_{-40}$ | 422 | 3.9 | 48.2 | 1.5 | McConnachie (2012)
Phoenix1 | $205^{+22}_{-18}$ | $25783^{+22}_{-21}$ | 409 | 0.89 | 25.4 | 0.9 | McConnachie (2012)
Pisces2 | $841^{+48}_{-94}$ | $3889^{+95}_{-90}$ | 181 | 1.6 | 29.0 | 3.2 | Kirby et al. (2015)
Segue1 | $18^{+30}_{-2}$ | $84^{+2}_{-5}$ | 28 | 0.26 | 30.3 | 5.9 | McConnachie (2012)
Triangulum2 | $192^{+19}_{-20}$ | $53^{+18}_{-21}$ | 36 | 0.89 | 49.4 | 20.6 | Kirby et al. (2015)
Tucana2 | $183^{+15}_{-17}$ | $619^{+52}_{-54}$ | 54 | 2.7 | 21.8 | 10.2 | Walker et al. (2016)
Tucana5 | $184^{+16}_{-18}$ | $385^{+16}_{-18}$ | 52 | 1.7 | 26.4 | 9.7 | Simon et al. (2020)
Ursa Major1 | $1276^{+166}_{-110}$ | $519^{+134}_{-126}$ | 97 | 15 | 42.5 | 6.3 | Simon & Geha (2007)
Ursa Major2 | $574^{+53}_{-64}$ | $10^{+67}_{-47}$ | 32 | 4.9 | 97.8 | 28.5 | Simon & Geha (2007)

## References

* Binney & Tremaine (2008) Binney, J. & Tremaine, S. 2008, Galactic Dynamics: Second Edition, by James Binney and Scott Tremaine. ISBN 978-0-691-13026-2 (HB). Published by Princeton University Press, Princeton, NJ USA, 2008.
* Carlberg & Hartwick (1989) Carlberg, R. G. & Hartwick, F. D. A. 1989, ApJ, 345, 196. doi:10.1086/167895
* Chandrasekhar (1943) Chandrasekhar, S.
1943, ApJ, 97, 255. doi:10.1086/144517
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., et al. 2013, PASP, 125, 306. doi:10.1086/670067
* Gaia Collaboration et al. (2016) Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, A&A, 595, A1. doi:10.1051/0004-6361/201629272
* Gaia Collaboration et al. (2021) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2021, A&A, 649, A1. doi:10.1051/0004-6361/202039657
* Ji et al. (2021) Ji, A. P., Koposov, S. E., Li, T. S., et al. 2021, ApJ, 921, 32. doi:10.3847/1538-4357/ac1869
* Kirby et al. (2015) Kirby, E. N., Cohen, J. G., Simon, J. D., et al. 2015, ApJ, 814, L7. doi:10.1088/2041-8205/814/1/L7
* Kirby et al. (2015) Kirby, E. N., Simon, J. D., & Cohen, J. G. 2015, ApJ, 810, 56. doi:10.1088/0004-637X/810/1/56
* Lacey (1984) Lacey, C. G. 1984, MNRAS, 208, 687. doi:10.1093/mnras/208.4.687
* Li et al. (2017) Li, T. S., Simon, J. D., Drlica-Wagner, A., et al. 2017, ApJ, 838, 8. doi:10.3847/1538-4357/aa6113
* Li et al. (2021) Li, H., Hammer, F., Babusiaux, C., et al. 2021, ApJ, 916, 8. doi:10.3847/1538-4357/ac0436
* Lin & Tremaine (1983) Lin, D. N. C. & Tremaine, S. 1983, ApJ, 264, 364. doi:10.1086/160604
* Lindegren et al. (2021) Lindegren, L., Klioner, S. A., Hernández, J., et al. 2021, A&A, 649, A2. doi:10.1051/0004-6361/202039709
* Martin et al. (2016) Martin, N. F., Geha, M., Ibata, R. A., et al. 2016, MNRAS, 458, L59. doi:10.1093/mnrasl/slw013
* McConnachie (2012) McConnachie, A. W. 2012, AJ, 144, 4. doi:10.1088/0004-6256/144/1/4
* McConnachie & Venn (2020) McConnachie, A. W. & Venn, K. A. 2020, AJ, 160, 124. doi:10.3847/1538-3881/aba4ab
* Oort (1950) Oort, J. H. 1950, Bull. Astron. Inst. Netherlands, 11, 91
* Öpik (1932) Öpik, E. 1932, Proceedings of the American Academy of Arts and Sciences, 67, 169. doi:10.2307/20022899
* Palmer & Papaloizou (1985) Palmer, P. L. & Papaloizou, J. 1985, MNRAS, 215, 691. doi:10.1093/mnras/215.4.691
* Poveda (1958) Poveda, A. 1958, Boletin de los Observatorios Tonantzintla y Tacubaya, 2, 3
* Riello et al. (2021) Riello, M., De Angeli, F., Evans, D. W., et al. 2021, A&A, 649, A3. doi:10.1051/0004-6361/202039587
* Simon & Geha (2007) Simon, J. D. & Geha, M. 2007, ApJ, 670, 313. doi:10.1086/521816
* Simon (2019) Simon, J. D. 2019, ARA&A, 57, 375. doi:10.1146/annurev-astro-091918-104453
* Simon et al. (2020) Simon, J. D., Li, T. S., Erkal, D., et al. 2020, ApJ, 892, 137. doi:10.3847/1538-4357/ab7ccb
* Taylor (2005) Taylor, M. B. 2005, Astronomical Data Analysis Software and Systems XIV, 347, 29
* Torrealba et al. (2016) Torrealba, G., Koposov, S. E., Belokurov, V., et al. 2016, MNRAS, 463, 712. doi:10.1093/mnras/stw2051
* Walker et al. (2016) Walker, M. G., Mateo, M., Olszewski, E. W., et al. 2016, ApJ, 819, 53. doi:10.3847/0004-637X/819/1/53
* Walker et al. (2009) Walker, M. G., Mateo, M., Olszewski, E. W., et al. 2009, ApJ, 704, 1274. doi:10.1088/0004-637X/704/2/1274
* Weinberg (1989) Weinberg, M. D. 1989, MNRAS, 239, 549. doi:10.1093/mnras/239.2.549
* White (1983) White, S. D. M. 1983, ApJ, 274, 53. doi:10.1086/161425
* Wolf et al. (2010) Wolf, J., Martinez, G. D., Bullock, J. S., et al. 2010, MNRAS, 406, 1220. doi:10.1111/j.1365-2966.2010.16753.x
# Graph Neural Network Approach to Semantic Type Detection in Tables

††thanks: Published at the Pacific-Asia Conference on Knowledge Discovery and Data Mining 2024

Ehsan Hoseinzade (corresponding author), Ke Wang

Simon Fraser University, Burnaby, Canada
<EMAIL_ADDRESS>

###### Abstract

This study addresses the challenge of detecting semantic column types in relational tables, a key task in many real-world applications. While language models like BERT have improved prediction accuracy, their token input constraints limit the simultaneous processing of intra-table and inter-table information. We propose a novel approach using Graph Neural Networks (GNNs) to model intra-table dependencies, allowing language models to focus on inter-table information. Our proposed method not only outperforms existing state-of-the-art algorithms but also offers novel insights into the utility and functionality of various GNN types for semantic type detection. The code is available at https://github.com/hoseinzadeehsan/GAIT

###### Keywords: graph neural networks, language model, semantic types.

## 1 Introduction

Accurately identifying (or tagging) the semantic types of columns inside a table is crucial for different information retrieval tasks like data cleaning [16], schema matching [17], and data discovery [6]. One emerging application is automatically tagging sensitive columns in a table, such as personal information, before deciding what information can be released. Previous works showed that machine learning approaches outperform traditional methods in predicting semantic types [9, 26, 1, 2]. Sherlock [9], a single-column prediction framework, feeds various features of a column to a deep feed-forward neural network to get the prediction. This method ignores the global context and the dependencies between columns, making it difficult to distinguish the semantic types in cases like the one in Figure 1. SATO [26] improves upon Sherlock by adding a topic modeling module and a structured prediction module on top of Sherlock, to jointly predict the semantic types of all the columns in a table by leveraging the topic of the table and the dependencies between its columns.

Building on the trend of applying machine learning to tabular data, researchers have started using language models like BERT [5]. By feeding tables to BERT, which was originally designed for textual data, they exploit its extensive pre-training. This adaptation has created new frameworks that fine-tune BERT for column type annotation [4]. In addition to the values of the target column, two other sources of information can be used to improve the accuracy of semantic type annotation: intra-table information refers to other columns in the same table, and inter-table information refers to other tables in the data. Given that language models have a small limit on the number of input tokens (BERT takes a maximum of 512 tokens), column type annotation models are developed to handle only one of the two mentioned sources of information.

Incorporating intra-table information has led to multi-column prediction approaches [25] that are designed to address the limitation of single-column prediction models in situations like Fig 1 by accounting for broader table context, specifically column relationships and information. TABBIE [10] encodes the rows and columns of a table separately to get a better understanding of tables.
The most prominent work in this category is Doduo [18], where BERT is modified to receive all the columns of a table and predict their semantic types together. Having inter-table information [21] can be a huge help in cases where the target column does not have enough high-quality data to make a good semantic prediction. For example, if a table has a column with entries like 'Orange' and 'Peach', the semantic type is ambiguous. However, by identifying and augmenting this column with columns of similar tables that have entries like 'Red' and 'Blue', the semantic type becomes clearer, indicating that this column is likely about colors rather than fruits. The most recent work, RECA [19], in addition to the values of the target column, identifies values of the most useful similar tables and feeds them to BERT to get the semantic type of the target column. However, due to the small limit on the number of input tokens of language models like BERT, Doduo and RECA have the following drawbacks:

Figure 1: The two tables on the right both have a column containing the values "Paris", "Ottawa" and "London". Without considering information coming from other columns, it is difficult for a single-column prediction model to detect the actual semantic types of these columns. A multi-column prediction will label these columns correctly by jointly predicting all columns in a table.

1. Doduo feeds the whole table to BERT, and because of that it poses difficulties in handling wide tables. For instance, the average number of columns of tables in Open Data is 16, but there is a large variance, with some tables having hundreds of columns. Furthermore, Doduo is not designed in a way that can incorporate inter-table context information and ignores this useful source of information [19].

2. RECA incorporates inter-table but not intra-table information, predicting the semantic type of each column individually. This approach enables RECA to handle wide tables within the language model's small token limit, as it does not need to model the entire table at once. However, this means it overlooks the valuable information in column relationships, which is crucial in complex scenarios like Fig 1, where semantic types are difficult to distinguish.

Thus, a question is whether it is possible to incorporate both inter-table and intra-table information without suffering from the difficulty of handling wide tables as in Doduo, while benefiting from the generalization power of the language model based approach. We draw inspiration from recent developments in computer vision, such as visual reasoning, object detection, and scene graph generation [3, 14], where a main task is tagging the objects inside an image by leveraging the relations among them. We propose to augment any single-column prediction framework, especially those incorporating inter-table information like RECA (addressing the drawback of Doduo), with a graph neural network (GNN) module that models the whole set of dependencies between columns. In particular, we model each table by a graph, with the nodes representing the columns and the edges representing the dependencies between columns through the Message Passing of a GNN. By considering the dependencies between all pairs of columns, this approach, called GAIT (Graph bAsed semantIc Type detection), addresses the above drawback of RECA, making it a multi-column prediction framework.
The challenge is how to represent the features of a column by a node so that Message Passing can leverage the dependencies among columns. GAIT stands out in efficiently handling wide tables while benefiting from a language model based approach, by building on top of models like RECA that are single-column, language model based predictors. While Doduo's effectiveness decreases in scenarios with minimal column dependency and RECA faces challenges when similar tables are scarce, GAIT's integration of both inter-table and intra-table information makes it a competitive model in these diverse scenarios. This dual-data approach enables GAIT to maintain its performance and effectively address the limitations encountered by models focusing on either inter-table or intra-table information alone.

## 2 Related works

Column type prediction methods typically fall into two categories, i.e., deep learning based frameworks and language model based frameworks.

Deep learning based models. ColNet [1] uses DBpedia cell value lookups to create examples and trains a CNN with Word2Vec embeddings. HNN [2] models intra-column semantics, enhancing ColNet. Sherlock [9] employs column statistics and paragraph, word, and character embeddings to predict column types through a neural network. SATO [26] builds on Sherlock, adding topic modeling features and adjusting predictions for column dependencies using a CRF.

Language model based models. Language models, like BERT [5], have been adopted for table tasks [23], including column type prediction. TaBERT [25] utilizes BERT as a base model to capture table content features. TURL [4] is pre-trained in an unsupervised manner, using a visibility matrix for row and column context, and then fine-tuned for table-related tasks. TABBIE [10] separately processes the rows and columns of tables to give a better understanding of them. Doduo [18] predicts all the columns of a table together by feeding the whole table to BERT. RECA [19] incorporates inter-table context information by finding and aligning relevant tables. TCN [21] suggests using both intra-table and inter-table information for column type prediction. However, it needs the table schema and page topic, which many datasets, like Webtables and Semtab, do not have.

Summary: Most of the previous works [25, 18, 4, 10], except for TCN and RECA, do not incorporate inter-table information for prediction. TCN [21] requires a table schema and page topic, which do not exist in many datasets. RECA does not incorporate intra-table dependencies. Our GAIT predicts the semantic types of columns exclusively from the content of the tables, by integrating both intra-table and inter-table information.

## 3 Problem Definition

We aim to predict the semantic types of the columns of a given table with missing column headings. This problem is called table annotation. To learn to predict semantic types, a collection of labeled tables is given as the training data $D$, where each table $t(c_{1},c_{2},...,c_{n})$ consists of $n$ columns and each column is labeled with one of $k$ pre-defined semantic types, also called classes, e.g., Age, Name, Country (note that semantic types are different from atomic types like integer and string). The number of columns $n$ and the number of rows can differ between tables. Typically, the first step is to extract a feature vector (embedding) to represent a column $c_{i}$.
After applying a feature extractor function $\phi$ to the values of a column $c_{i}$ and to potential inter-table information related to $c_{i}$, an $m$-dimensional feature (embedding) vector $\psi_{i}$ is generated for column $c_{i}$. The rest of the task is to learn a mapping $f$ that, given $\psi=<\psi_{1},...,\psi_{n}>$ for a table of $n$ unlabeled columns, predicts the classes of the $n$ columns in the table.

Figure 2: The framework of GAIT: GAIT adds a GNN on top of the single-column prediction module, which is RECA in this work. The output of RECA is a class distribution for each column in a table, which provides the initial hidden state of the node representing that column in the GNN. For a table with $n$ columns, RECA is performed $n$ times. Then, the GNN learns the best representations of the hidden states of all nodes to minimize a loss function, through Message Passing that models the dependencies between columns.

## 4 GAIT

Figure 2 shows the framework of GAIT. It is built on the single-column predictions provided by RECA. We opted for RECA due to its high performance in column type annotation, a result of incorporating useful inter-table information from other relevant tables. That said, GAIT's design is versatile: while we utilize RECA, other single-column prediction modules can be integrated. GAIT employs a GNN in which each graph represents a table, with the nodes representing the columns in the table. The initial representation of each node is the logits outputted by RECA for the represented column. Once fed with such class distributions as input, the training of the GNN is responsible for capturing the dependencies of classes among columns and does not further involve the lower-level single-column prediction module. This approach treats the preliminary prediction of RECA as the node features for training the GNN, which is more efficient than concatenating the networks of the low-level modules into one giant neural network. Our method stacks a GNN as a meta-learner on top of RECA (i.e., two classifiers) instead of concatenating RECA and the GNN into one classifier, which improves the overall performance according to the stacked generalization technique [24]. We now present more details.

### 4.1 Single-column Prediction

The single-column prediction is responsible for generating the preliminary prediction for each column. We use RECA [19] for this task. In the RECA process, the primary goal is to improve the understanding of a target column in the main table by integrating relevant data from other tables. The process begins by identifying named entities across all tables. Each entity is assigned a type from a predefined set, with the most common type within a column being selected as its representative named entity type. Following this, RECA constructs the named entity schema of each table, which includes the named entity types of all its columns. The next step is to find the topical relevance of other tables to the main table. This is done by calculating the Jaccard similarity between the words in the main table and in the other tables. Tables that are similar enough are chosen as candidate tables for further analysis. Among these candidates, tables with the same named entity schema as the main table are labeled as relevant tables. Additionally, tables with similar, but not identical, named entity schemas are called sub-related tables.
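The topical-relevance filter amounts to a word-level Jaccard similarity with a cutoff. A minimal sketch follows; the tokenization and the threshold value are illustrative, as RECA's implementation details may differ.

```python
def jaccard(words_a: set, words_b: set) -> float:
    # |A intersect B| / |A union B|; defined as 0.0 for two empty tables
    union = words_a | words_b
    return len(words_a & words_b) / len(union) if union else 0.0

def candidate_tables(main_words, other_tables, threshold=0.1):
    # Keep the tables whose word-level similarity to the main table
    # exceeds a cutoff (the threshold value here is illustrative).
    return [name for name, words in other_tables.items()
            if jaccard(main_words, words) >= threshold]
```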
The final step combines the data of the target column in the main table with data from columns having the same named entity type in both related and sub-related tables, and feeds it to a language model, BERT, to find the semantic type of the target column.

### 4.2 Graph-based Prediction

Graph Modeling of a Table. The training data for the GNN is a collection of graphs organized into several mini-batches, where each graph corresponds to a table in the original training data. For a table with $n$ columns, we create a graph of $n$ nodes, where each node represents a column in the table, and create an edge between each pair of columns. Initially, each node $u$ of a graph has the representation $h^{0}_{u}$ initialized to the logits $<o_{1},...,o_{k}>$ outputted by RECA for the corresponding column, with one value for each class. This initial state represents the class bias of the single-column prediction. In addition, each node is associated with the true class of the represented column.

Message Passing. Subsequently, the representation of all the nodes in a mini-batch of graphs is updated through the Message Passing mechanism of the GNN along edges. For this purpose, we consider three different types of GNNs, the graph convolutional network (GCN) [11], the gated graph neural network (GGNN) [15], and the graph attention network (GAT) [20], with the following $UPDATE$ functions, where $\sigma$ is the activation function, $N(u)$ is the set of nodes connected to node $u$, $h_{u}^{(s)}$ is the representation (also called embedding) of node $u$ at step $s\geq 0$, and $W^{(s)}$ is a model parameter:

* • GCN: assigns equal weights to all the neighbor nodes while updating the embedding of each node (Eq 1). $h_{u}^{(s+1)}=\sigma\left(\sum_{v\in N(u)\cup\{u\}}\frac{W^{(s)}h_{v}^{(s)}}{\sqrt{|N(u)||N(v)|}}\right)$ (1)
* • GGNN: uses a gated recurrent unit (GRU) to evaluate the messages coming from adjacent nodes while updating the embedding of each node (Eq 2). $h_{u}^{(s+1)}=GRU\left(h_{u}^{(s)},\sum_{v\in N(u)}W^{(s)}h_{v}^{(s)}\right)$ (2)
* • GAT: updates the node embedding (Eq 3) according to multi-head attention weights (Eq 4), where $K$ is the number of attention heads, $a^{(s,k)}$ and $W^{(s,k)}$ are model parameters for attention head $k$, and $\oplus$ is concatenation. $h_{u}^{(s+1)}=\oplus_{k=1}^{K}\left(\sigma\sum_{v\in N(u)\cup\{u\}}\alpha^{(s,k)}_{u,v}W^{(s,k)}h_{v}^{(s)}\right)$ (3) $\alpha^{(s,k)}_{u,v}=\frac{\exp(ReLU(a^{(s,k)^{T}}(W^{(s,k)}h_{u}^{(s)}\oplus W^{(s,k)}h_{v}^{(s)})))}{\sum_{v^{\prime}\in N(u)\cup\{u\}}\exp(ReLU(a^{(s,k)^{T}}(W^{(s,k)}h_{u}^{(s)}\oplus W^{(s,k)}h_{v^{\prime}}^{(s)})))}$ (4)

The $UPDATE$ function is applied to each node $u$ in the mini-batch of graphs for $S$ steps, where $S$ is the number of hidden layers plus the output layer. $h^{S}_{u}$ has one unit for each of the $k$ classes and serves as the final output for the $k$ classes. The class prediction for node $u$ is given by applying softmax to $h^{S}_{u}$. The main difference among GCN, GGNN, and GAT lies in their treatment of adjacent nodes (columns). GAT uses an attention mechanism to assign varying weights to these nodes based on their importance. In contrast, GCN averages the features of neighbor nodes, while GGNN processes these features through a GRU to determine their relevance before updating the node embeddings.
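A minimal sketch of this graph module with the deep graph library used in Sec. 5 is given below, for the GAT variant with $S=2$; the layer sizes, activation, and head count are illustrative, and RECA's logits are mocked with random tensors.

```python
import dgl
import torch
import torch.nn as nn
from dgl.nn import GATConv

class ColumnGAT(nn.Module):
    # Two GAT update steps (S = 2) over the fully connected column graph.
    def __init__(self, k, heads=4):
        super().__init__()
        self.gat1 = GATConv(k, k, num_heads=heads)      # hidden layer
        self.gat2 = GATConv(k * heads, k, num_heads=1)  # output layer: k classes

    def forward(self, g, h):
        h = torch.relu(self.gat1(g, h).flatten(1))  # concat heads: (n, k*heads)
        return self.gat2(g, h).mean(1)              # (n, k) refined logits

def table_graph(n):
    # Fully connected graph over a table's n columns; self-loops realize
    # the N(u) union {u} terms of Eqs. (1) and (3).
    src = [i for i in range(n) for j in range(n) if i != j]
    dst = [j for i in range(n) for j in range(n) if i != j]
    g = dgl.graph((torch.tensor(src, dtype=torch.long),
                   torch.tensor(dst, dtype=torch.long)), num_nodes=n)
    return dgl.add_self_loop(g)

n, k = 3, 78                          # a 3-column table, 78 classes
g = table_graph(n)
reca_logits = torch.randn(n, k)       # mock of the RECA outputs h^0_u
model = ColumnGAT(k)
pred = model(g, reca_logits)          # (n, k) class logits per column
loss = nn.functional.cross_entropy(pred, torch.tensor([0, 5, 9]))
```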
Loss Function. Given the logit vector $h^{S}_{u}$ for a node $u$ with the true class $class_{u}$, the loss for this node is computed by the negative log-likelihood. The loss for a mini-batch of graphs is the sum of the losses of all the nodes inside the mini-batch (Eq 5). We update the model parameters to minimize the loss of a mini-batch by performing stochastic gradient descent.

$loss=\sum_{u=1}^{\#\mathrm{nodes}}-\log\left(\frac{\exp(h_{u}^{S}[class_{u}])}{\sum_{m=1}^{k}\exp(h_{u}^{S}[m])}\right)$ (5)

### 4.3 Overall Prediction

After training the GNN, to classify the columns in a new table $t$, we first get the output of RECA (before softmax) for each column in $t$. These outputs are the initial representations $h^{0}_{u}$ of the nodes $u$ in the graph representing table $t$. Then, message passing is done using the parameters learned in the training phase to get the predicted class of each node, which is the predicted class of the corresponding column in $t$. Fig 2 shows how GAIT predicts the labels of a table.

## 5 Evaluation

### 5.1 Evaluation Method

Performance metrics. Like previous works [9, 26, 18, 19], we collect the weighted f-score and the macro f-score on the test data. The former is the average of the f-scores of all classes, weighted by class frequencies, and the latter is the unweighted average, treating all classes equally regardless of their frequencies. The macro f-score better reflects the model performance on infrequent classes. We evaluate model performance using 5-fold cross-validation, reporting the mean and standard deviation of the above f-scores from the test split of each fold.

Datasets. We use two datasets, summarized in Table 1. Webtables [26, 18, 19]: This dataset contains 32262 tables and 78 unique classes extracted from the Webtables directory of VizNet [8]. We use exactly the same 5-fold cross-validation split as in [19], which splits the tables (not columns) into a train set and test set in 5 folds. So, we directly copy the f-scores of the baseline algorithms (more details on the baselines below), except for RECA, from [19]. Semtab2019 [19]: It contains 3045 tables and 275 unique classes. While this dataset covers wider tables (an average of 4.5 columns per table), only 7603 columns are annotated. The split proposed in RECA [19] randomly divided columns (not tables) into train, validation, and test sets. Although column-wise splitting of the data makes sense for RECA due to its column-wise prediction, GAIT requires a full table to model the dependencies between the columns in the table. Therefore, our 5-fold validation splits tables (instead of columns) into the train set and test set for this dataset. At each fold, we further split the train set into 80% for training and 20% for validation.

Table 1: Summary of the two datasets.

Dataset | #types | #tables | #columns | avg #col/table
---|---|---|---|---
Semtab | 275 | 3045 | 7603 | 4.5
Webtable | 78 | 32262 | 74141 | 2.3

Algorithms for comparison. All experiments were conducted with Tesla V100s. We used the publicly available source code of RECA (https://github.com/ysunbp/RECA-paper) for the single-column prediction module of GAIT. The GNN module of GAIT was implemented using the deep graph library [22] and trained with Adam optimization, a learning rate of $1e-3$, and a weight decay of $5e-4$. We trained GCN, GGNN, and GAT for 100, 200, and 100 epochs, respectively. To optimize the GAT structure, we tested various numbers of attention heads ([1, 2, 4, 8, 12]) and update steps $S$ ([1, 2, 3, 4]), selecting the best model from the validation set as the default.
Similarly, for GGNN and GCN, we determined the default model by experimenting with different update steps $S$ ([1, 2, 3, 4]). Three GAIT variants were finally chosen: $\textnormal{GAIT}_{\textnormal{GAT}}$ (GAT with $S=2$), $\textnormal{GAIT}_{\textnormal{GGNN}}$ (GGNN with $S=3$), and $\textnormal{GAIT}_{\textnormal{GCN}}$ (GCN with $S=2$). Since GAIT incorporates RECA as its single-column prediction module, we naturally evaluate GAIT against RECA itself and the baseline methods outlined in RECA's paper. These baselines are described below; their source code is publicly available and is used for our evaluation:

* • Sherlock [9]: Sherlock is a deep learning model that extracts character-level, word-level, paragraph-level, and global-level statistical features from tables to form vector representations of table columns.
* • TaBERT [25]: TaBERT simultaneously analyzes queries and a table, selecting three crucial rows to create table content snapshots. It then uses BERT to develop representations for each table column, aiding in classification.
* • TABBIE [10]: improves TaBERT by separately processing the rows and columns of tables. The embedding of the target column is used for prediction.
* • Doduo [18]: modifies BERT so that all the columns of a table are fed to it together, predicting the semantic types of all of the columns in a table jointly.
* • RECA [19]: RECA finds relevant tables for the target table and uses the information coming from these tables, together with the values of the target column, to predict the semantic type of the target column.

We do not compare with SATO [26] and TURL [4], as Doduo outperformed them. Since TCN [21] requires a table schema and page topic [19], it cannot be applied to our datasets.

### 5.2 Results

Table 2 shows the performance of GAIT and the baseline algorithms. GAIT outperforms Sherlock by a large margin. The main reason behind the poor performance of Sherlock compared with the other models is its simplicity: while the other models, including GAIT, utilize language models for semantic type prediction, Sherlock relies on simple semantic features. Furthermore, Sherlock uses neither intra-table nor inter-table information for prediction.

Table 2: Macro f-score and weighted f-score.

| Semtab | Webtables
---|---|---
Model | Weighted f-score | Macro f-score | Weighted f-score | Macro f-score
Sherlock [9] | 0.638$\pm$0.009 | 0.417$\pm$0.017 | 0.844$\pm$0.001 | 0.670$\pm$0.010
TaBERT [25] | 0.756$\pm$0.011 | 0.401$\pm$0.025 | 0.896$\pm$0.005 | 0.650$\pm$0.011
TABBIE [10] | 0.798$\pm$0.012 | 0.542$\pm$0.022 | 0.929$\pm$0.003 | 0.734$\pm$0.019
Doduo [18] | 0.819$\pm$0.010 | 0.565$\pm$0.021 | 0.928$\pm$0.001 | 0.742$\pm$0.012
RECA [19] | 0.825$\pm$0.015 | 0.583$\pm$0.019 | 0.935$\pm$0.032 | 0.783$\pm$0.017
$\textnormal{GAIT}_{\textnormal{GGNN}}$ | 0.844$\pm$0.003 | 0.606$\pm$0.018 | 0.936$\pm$0.003 | 0.797$\pm$0.022
$\textnormal{GAIT}_{\textnormal{GCN}}$ | 0.845$\pm$0.006 | 0.622$\pm$0.020 | 0.939$\pm$0.004 | 0.794$\pm$0.017
$\textnormal{GAIT}_{\textnormal{GAT}}$ | 0.852$\pm$0.004 | 0.643$\pm$0.017 | 0.940$\pm$0.003 | 0.799$\pm$0.019

Among the language model based models, TaBERT shows the worst performance because it was initially developed for table semantic parsing, and the column embeddings generated by TaBERT are not suitable for column type annotation [19]. RECA, the single-column prediction module of GAIT, outperforms both TABBIE and Doduo.
TABBIE and Doduo spend the limited input tokens of language models on intra-table context and ignore inter-table context information when generating the embeddings of the target columns. RECA, in contrast, focuses on extracting useful inter-table context information to enhance the embeddings of the target columns [19]. GAIT with different GNNs outperforms RECA, and by extension TABBIE and Doduo, on both datasets. In particular, GAIT shows about a 6% and 2.7% improvement in macro and weighted f-scores over RECA on the Semtab dataset. These results demonstrate that modeling the dependencies between the columns of a table, which is the main advantage of GAIT over RECA, is useful; GAIT successfully applies a GNN on top of RECA to do so. Among the different variants of GAIT, GAT shows the best performance. Assigning different weights to adjacent nodes (columns) according to their importance when updating the representation of a node is the key to its superior performance compared to GCN and GGNN. On both datasets, GAIT’s improvement is larger on the macro f-score than on the weighted f-score. This means that infrequent classes, which label fewer columns, benefit more from the column-dependency modeling of the GNN. Such classes are under-represented in the data, so learning them tends to rely on dependencies on other columns in a table; GAIT provides a mechanism to leverage such dependencies. This also explains why GAIT shows a larger improvement on the Semtab dataset than on the Webtables dataset. The 3045 tables of Semtab have 275 semantic types for columns, while the 32262 tables of Webtables are limited to 78 semantic types. Consequently, many more infrequent classes in Semtab can benefit from GAIT’s dependency modeling. To provide better insight into this improvement, we divide the 275 classes of the Semtab dataset into three equally sized bins of High, Medium, and Low frequency (about 92 classes each), according to the number of columns each class labels, and show the macro f-score of the classes in each bin for $\textnormal{GAIT}_{\textnormal{GAT}}$ (best GAIT) and RECA (best baseline) in Fig 3. While $\textnormal{GAIT}_{\textnormal{GAT}}$ improves on RECA in all three bins, the bigger improvements happen in the lower-frequency bins: for example, an absolute improvement of 11%, or a relative improvement of 96.5%, in the Low bin. Fig 3 also demonstrates that the real challenge in developing column type annotation models is obtaining reliable predictions for medium- and low-frequency classes, as the performance on high-frequency classes is already good. The large improvement of $\textnormal{GAIT}_{\textnormal{GAT}}$ over RECA on low-frequency classes is a clear sign of its superiority in handling such classes. Figure 3: The macro f-score of $\textnormal{GAIT}_{\textnormal{GAT}}$ and RECA on the Semtab dataset, for High, Medium, and Low-frequency classes. We also study the impact of the number of columns in a table on both $\textnormal{GAIT}_{\textnormal{GAT}}$ and RECA. Table 3 shows the improvement of $\textnormal{GAIT}_{\textnormal{GAT}}$ over RECA in macro and weighted f-scores, separately for tables with different numbers of columns. As the number of columns in a table increases, the performance of RECA, which is also the single-column prediction module of GAIT, increases. Having more columns in a table better reveals the context of that table, so RECA can find more relevant inter-table information, which benefits both RECA and GAIT.
Thus, the benefit of modeling dependencies between columns in $\textnormal{GAIT}_{\textnormal{GAT}}$ decreases. The column-dependency approach of GAIT improves on RECA mainly for low-frequency classes and for tables of 2 to 4 columns, as is the case in Semtab. Table 3: The f-score improvement of $\textnormal{GAIT}_{\textnormal{GAT}}$ over RECA for tables with different numbers of columns. | Semtab | Webtables ---|---|--- | macro f-score | weighted f-score | macro f-score | weighted f-score #col | RECA | $\textnormal{GAIT}_{\textnormal{GAT}}$ | RECA | $\textnormal{GAIT}_{\textnormal{GAT}}$ | RECA | $\textnormal{GAIT}_{\textnormal{GAT}}$ | RECA | $\textnormal{GAIT}_{\textnormal{GAT}}$ 2 | 0.563 | 0.603 (+4.0%) | 0.798 | 0.828 (+3.0%) | 0.738 | 0.758 (+2.0%) | 0.932 | 0.936 (+0.4%) 3 | 0.545 | 0.616 (+7.1%) | 0.797 | 0.827 (+3.0%) | 0.743 | 0.762 (+1.9%) | 0.927 | 0.930 (+0.3%) 4 | 0.566 | 0.618 (+5.2%) | 0.865 | 0.880 (+1.5%) | 0.727 | 0.746 (+1.9%) | 0.960 | 0.961 (+0.1%) 5 | 0.664 | 0.682 (+1.8%) | 0.862 | 0.862 (+0.0%) | 0.540 | 0.548 (+0.8%) | 0.978 | 0.978 (+0.0%) ## 6 Conclusion Language-model-based approaches have recently shown promising results in column type annotation thanks to the semantic knowledge preserved in them. This paper addresses a drawback of previous language-model-based approaches, namely failing to incorporate inter-table and intra-table information simultaneously due to the input token limit of language models. Our solution, GAIT, employs graph neural networks to model the intra-table dependencies, letting the language model focus on handling inter-table information. Experiments on different datasets provide evidence of its effectiveness. Looking ahead, considering the recent advances in large language models (LLMs) for column type annotation [7, 12, 27, 13], exploring LLMs beyond BERT for handling inter-table information is a promising direction for future research. Acknowledgement. The work of Ke Wang is supported in part by a discovery grant from the Natural Sciences and Engineering Research Council of Canada. ## References * [1] Chen, J., Jiménez-Ruiz, E., Horrocks, I., Sutton, C.: Colnet: Embedding the semantics of web tables for column type prediction. In: AAAI (2019) * [2] Chen, J., Jiménez-Ruiz, E., Horrocks, I., Sutton, C.: Learning semantic annotations for tabular data. In: IJCAI. vol. 33, pp. 2088–2094 (2019) * [3] Chen, X., Li, L.J., Fei-Fei, L., Gupta, A.: Iterative visual reasoning beyond convolutions. In: CVPR. pp. 7239–7248 (2018) * [4] Deng, X., Sun, H., Lees, A., Wu, Y., Yu, C.: Turl: Table understanding through representation learning. ACM SIGMOD Record 51(1), 33–40 (2022) * [5] Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805 (2018) * [6] Fernandez, R.C., Abedjan, Z., Koko, F., Yuan, G., Madden, S., Stonebraker, M.: Aurum: A data discovery system. In: ICDE. pp. 1001–1012. IEEE (2018) * [7] Feuer, B., Liu, Y., Hegde, C., Freire, J.: Archetype: A novel framework for open-source column type annotation using large language models. arXiv (2023) * [8] Hu, K., Gaikwad, S., Hulsebos, M., Bakker, M.A., Zgraggen, E., Hidalgo, C., Kraska, T., Li, G., Satyanarayan, A., Demiralp, Ç.: Viznet: Towards a large-scale visualization learning and benchmarking repository. In: CHI. pp. 1–12 (2019) * [9] Hulsebos, M., Hu, K., Bakker, M., Zgraggen, E., Satyanarayan, A., Kraska, T., Demiralp, Ç., Hidalgo, C.: Sherlock: A deep learning approach to semantic data type detection. In: SIGKDD.
pp. 1500–1508 (2019) * [10] Iida, H., Thai, D., Manjunatha, V., Iyyer, M.: Tabbie: Pretrained representations of tabular data. arXiv preprint arXiv:2105.02584 (2021) * [11] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016) * [12] Korini, K., Bizer, C.: Column type annotation using chatgpt. arXiv (2023) * [13] Li, P., He, Y., Yashar, D., Cui, W., Ge, S., Zhang, H., Fainman, D.R., Zhang, D., Chaudhuri, S.: Table-gpt: Table-tuned gpt for diverse table tasks. arXiv (2023) * [14] Li, Y., Ouyang, W., Zhou, B., Wang, K., Wang, X.: Scene graph generation from objects, phrases and region captions. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1261–1270 (2017) * [15] Li, Y., Tarlow, D., Brockschmidt, M., Zemel, R.: Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493 (2015) * [16] Limaye, G., Sarawagi, S., Chakrabarti, S.: Annotating and searching web tables using entities, types and relationships. VLDB 3(1-2), 1338–1347 (2010) * [17] Rahm, E., Bernstein, P.A.: A survey of approaches to automatic schema matching. the VLDB Journal 10(4), 334–350 (2001) * [18] Suhara, Y., Li, J., Li, Y., Zhang, D., Demiralp, Ç., Chen, C., Tan, W.C.: Annotating columns with pre-trained language models. In: SIGMOD (2022) * [19] Sun, Y., Xin, H., Chen, L.: Reca: Related tables enhanced column semantic type annotation framework. VLDB 16(6), 1319–1331 (2023) * [20] Veličković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., Bengio, Y.: Graph attention networks. arXiv preprint arXiv:1710.10903 (2017) * [21] Wang, D., Shiralkar, P., Lockard, C., Huang, B., Dong, X.L., Jiang, M.: Tcn: Table convolutional network for web table interpretation. In: WWW (2021) * [22] Wang, M., Zheng, D., Ye, Z., Gan, Q., Li, M., Song, X., Zhou, J., Ma, C., Yu, L., Gai, Y., et al.: Deep graph library: A graph-centric, highly-performant package for graph neural networks. arXiv preprint arXiv:1909.01315 (2019) * [23] Wang, Z., Dong, H., Jia, R., Li, J., Fu, Z., Han, S., Zhang, D.: Tuta: Tree-based transformers for generally structured table pre-training. In: SIGKDD (2021) * [24] Wolpert, D.H.: Stacked generalization. Neural networks 5(2), 241–259 (1992) * [25] Yin, P., Neubig, G., Yih, W.t., Riedel, S.: Tabert: Pretraining for joint understanding of textual and tabular data. arXiv preprint arXiv:2005.08314 (2020) * [26] Zhang, D., Suhara, Y., Li, J., Hulsebos, M., Demiralp, Ç., Tan, W.C.: Sato: Contextual semantic type detection in tables. arXiv preprint arXiv:1911.06311 (2019) * [27] Zhang, H., Dong, Y., Xiao, C., Oyamada, M.: Jellyfish: A large language model for data preprocessing. arXiv (2023)
# The Huygens Principle of Angle-Resolved Photoemission Simon Moser Physikalisches Institut and Würzburg-Dresden Cluster of Excellence ct.qmat, Universität Würzburg, 97074 Würzburg, Germany <EMAIL_ADDRESS> ###### Abstract Angle-resolved photoemission spectroscopy (ARPES) measures the interference of dipole-allowed Coulomb wavelets from the individual orbital emitters that contribute to an electronic band. If Coulomb scattering of the outgoing electron is neglected, this Huygens view of ARPES simplifies to a Fraunhofer diffraction experiment, and the relevant cross-sections to orbital Fourier transforms. This plane wave approximation (PWA) is surprisingly descriptive of photoelectron distributions, but fails to reproduce kinetic-energy-dependent final state effects like dichroism. Yet, the Huygens principle of ARPES can easily be adapted to allow for distortion and phase shift of the outgoing Coulomb wave. This retains the strong physical intuition and low computational cost of the PWA, but naturally captures momentum-dependent interference effects in systems that so far required treatment at the ab initio level, such as linear dichroism in the Rashba systems BiAg2 and AgTe. Introduction.—In 1678, Dutch physicist Christiaan Huygens proposed his famous principle of wave mechanics, stating that every point on a wavefront is itself the source of a spherical wavelet. Combined with Augustin-Jean Fresnel’s 1818 insight that these secondary wavelets all mutually interfere to form the actual wavefront, this intuitive picture provides an appropriate explanation of (near and far field) wave propagation, reflection and refraction, and most importantly: diffraction Miller (1991). For angle-resolved photoemission spectroscopy (ARPES), a well-established technique to map the electronic structure of adsorbed molecules and ordered solids, such an intuitive interpretation in terms of simple wave mechanics remains elusive. This is remarkable since, fundamentally, the electronic structure contrast produced by ARPES relies on the coherent interference of photoelectron wavelets emitted from individual orbital emitters that are phase-locked through their atomic arrangement, i.e., the structure of a particular molecule, or the lattice properties of an ordered solid. In 2009, this very insight, along with the availability of efficient photoelectron detectors, gave rise to a novel real-space imaging technique for molecular orbitals based on the ARPES response of adsorbed organic molecules Puschnig _et al._ (2009); Lueftner _et al._ (2014). This orbital tomography technique assumes that the photoelectrons transition into plane waves that freely propagate to the detector, and that the ARPES intensity distribution is determined by the real-space orbital’s Fourier transform (see Ref. Moser (2017) and references therein). ARPES hence intuitively maps onto a Fraunhofer diffraction experiment, and the quest for a Huygens principle of ARPES thus seems to be complete. The tempting use of plane wave final states is flawed, however, as it neglects scattering of the outgoing photoelectron in the Coulomb potential of the ion it leaves behind Bradshaw and Woodruff (2015), and thus inherently fails to describe photon-energy-dependent final state interference such as dichroism.
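Before examining where it fails, the PWA picture itself is easy to make concrete: up to the polarization prefactor discussed next, the PWA intensity is simply the squared modulus of the orbital's Fourier transform. The following sketch uses an illustrative 2D toy orbital, not data from any of the cited experiments.

```python
import numpy as np

# Plane-wave approximation as a Fraunhofer experiment: the momentum-space
# intensity of a model p_x-like orbital is |FT(phi)|^2 (toy, arbitrary units).
x = np.linspace(-20.0, 20.0, 256)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = X * np.exp(-np.sqrt(X**2 + Y**2))      # real-space model orbital
phi_k = np.fft.fftshift(np.fft.fft2(phi))    # momentum-space amplitude
I_pwa = np.abs(phi_k) ** 2                   # PWA "ARPES" intensity map
print(I_pwa.shape, float(I_pwa.max()))
```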
In fact, it produces a ubiquitous $\boldsymbol{\epsilon}\cdot\boldsymbol{k_{f}}$ polarization term that genuinely suppresses outgoing photoelectron momenta $\boldsymbol{k_{f}}$ that move perpendicular to the polarization vector $\boldsymbol{\epsilon}$; a model artifact that is rarely observed with this stringency in experiments. Model.—Yet, such discrepancies can be overcome without loss of intuition or computational ease by taking into account the appropriate scattering state of the outgoing photoelectron, i.e., a partial wave expansion $\displaystyle\chi_{\eta}(\boldsymbol{r})$ $\displaystyle=$ $\displaystyle 4\pi\sum_{l=0}^{\infty}\sum_{m=-l}^{l}i^{l}~{}e^{i\sigma_{l}}R_{\eta l}(r)Y_{l}^{m}(\boldsymbol{\Omega}_{k_{f}})Y_{l}^{m*}(\boldsymbol{\Omega}_{r})$ in terms of Coulomb wavelets built from spherical harmonics $Y_{l}^{m}$, radial wave functions $R_{\eta l}$ and Coulomb phase shifts $\sigma_{l}$ Messiah (1961); Cooper (1962). The Coulomb distortion of $\chi_{\eta}$ with respect to the free electron is described by the Sommerfeld parameter $\eta=Z/a_{0}k_{f}$, which in the limit of small ion charge $Z$ and large photoelectron momenta $k_{f}\equiv|\boldsymbol{k}_{f}|$ yields $\eta\rightarrow 0$, $R_{\eta l}(r)\rightarrow j_{l}(k_{f}r)$ and $\sigma_{l}\rightarrow 0$, and thus naturally recovers the plane wave expansion $\chi_{\eta}(\boldsymbol{r})\rightarrow e^{i\boldsymbol{k}_{f}\cdot\boldsymbol{r}}$ Messiah (1961); Abramowitz _et al._ (1988). Computing dipole transitions from a hydrogen-like atomic orbital $\Phi_{nlm}$ into scattering states $\chi_{\eta}$, we find $\displaystyle\boldsymbol{M}^{\eta}_{nlm}$ $\displaystyle\propto$ $\displaystyle\langle\chi_{\eta}|\boldsymbol{\nabla}|\Phi_{nlm}\rangle$ $\displaystyle=$ $\displaystyle\underbrace{\widetilde{f}(k_{f})~{}\boldsymbol{Y}_{l,l+1,m}(\boldsymbol{\Omega}_{k_{f}})}_{\text{dipole transition }l\rightarrow l+1}+\underbrace{\widetilde{g}(k_{f})~{}\boldsymbol{Y}_{l,l-1,m}(\boldsymbol{\Omega}_{k_{f}})}_{\text{dipole transition }l\rightarrow l-1}~{},$ where we introduced the complex-valued radial cross-sections $\widetilde{f}(k_{f})$ and $\widetilde{g}(k_{f})$, whose atomic limit is given more explicitly in the Suppl. Info. Based on this expression, we can now formulate the Huygens principle of ARPES: Every atomic orbital participating in the photoemission process is the source of two dipole-allowed Coulomb wavelets, and the Coulomb wavelets emanating from all these orbital emitters mutually interfere. More explicitly, the vector spherical harmonics $\boldsymbol{Y}_{l,l\pm 1,m}$ describe the orbital symmetry of the two dipole-allowed emission channels $l\rightarrow l\pm 1$ that are reached by a given polarization vector $\boldsymbol{\epsilon}$ Arfken _et al._ (2013), whose interference is determined by their $k_{f}$-dependent radial cross-section ratio $|\widetilde{f}|/|\widetilde{g}|$ and relative phase $\Delta\sigma=\arg(\widetilde{f}/\widetilde{g})$. While in the PWA both $|\widetilde{f}|/|\widetilde{g}|=\text{const}$ and $\Delta\sigma=0$ are independent of $k_{f}$ (see Suppl. Info), it is precisely their Coulomb-induced $k_{f}$-dependence that produces kinetic-energy-dependent final state interference – and thus bears the potential to describe photon-energy-dependent dichroism, as we will see in more detail later.
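The Coulomb part of this $k_{f}$-dependence can be quantified directly from the standard phase shifts $\sigma_{l}=\arg\Gamma(l+1+i\eta)$ (cf. Messiah, 1961); the following short numerical sketch works in atomic units ($a_{0}=1$), with the sign convention for $\eta$ defined above:

```python
import numpy as np
from scipy.special import loggamma

def sigma(l, eta):
    """Coulomb phase shift sigma_l = arg Gamma(l + 1 + i*eta)."""
    return np.imag(loggamma(l + 1 + 1j * eta))

Z = 1.0                                  # ion charge (atomic units, a0 = 1)
for kf in (0.5, 1.0, 2.0, 5.0, 20.0):    # photoelectron momentum k_f
    eta = Z / kf                         # Sommerfeld parameter of the text
    # Coulomb contribution to the relative phase of the l+1 and l-1 channels
    print(kf, sigma(2, eta) - sigma(0, eta))
# the phase difference tends to 0 as k_f grows, recovering the PWA limit
```

Note that the full $\Delta\sigma=\arg(\widetilde{f}/\widetilde{g})$ also contains the phases of the radial integrals, so this captures only its Coulomb part.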
We note in passing, that this $k_{f}$-dependence of $|\widetilde{f}|/|\widetilde{g}|$, i.e., the fact that individual photoemission channels can be suppressed or enhanced by an appropriate choice of photon energy, is a direct consequence of the dipole operator’s velocity form $\mathcal{H}_{\text{int}}\propto\boldsymbol{\epsilon}\cdot\boldsymbol{\nabla}$ that we use here, but is not reproduced by its length form $\mathcal{H}_{\text{int}}\propto\boldsymbol{\epsilon}\cdot\boldsymbol{r}$ Day _et al._ (2019) (see Suppl. Info). In particular, while the length form is unbounded and not well defined for extended (infinite) systems Pendry (1976); Drake and Cassar (2006); Lebech _et al._ (2012), the velocity form is translation invariant and thus directly applicable to Bloch states, whose Wannier representation in turn can be expanded in terms of an atomic orbital basis $\Psi_{\boldsymbol{k}}(\boldsymbol{r})=\sum_{\boldsymbol{R}}e^{i\boldsymbol{k}\cdot\boldsymbol{R}}\sum_{nlm}c^{\boldsymbol{k}}_{nlm}\Phi_{nlm}(\boldsymbol{r})~{},$ and the first sum runs over all lattice sites $\boldsymbol{R}$ participating in the photoemission process. In the independent center approximation, i.e., ignoring scattering of outgoing electrons in the Coulomb potential of adjacent atoms, the ARPES intensity now compactly reads $I\propto|\langle\Psi_{\boldsymbol{k}_{f}}|\boldsymbol{\epsilon}\cdot\boldsymbol{\nabla}|\Psi_{\boldsymbol{k}}\rangle|^{2}=\delta(\boldsymbol{k}-\boldsymbol{k}_{f})~{}|\boldsymbol{\epsilon}\cdot\boldsymbol{\mathcal{M}_{\boldsymbol{k}}}\cdot\boldsymbol{c}_{\boldsymbol{k}}|^{2}~{},$ (1) with the $N\times 3$ dimensional dipole transition matrix $\boldsymbol{\mathcal{M}_{\boldsymbol{k}}}$ coupling the $N$-dimensional initial state vector $\boldsymbol{c}_{\boldsymbol{k}}=(c^{\boldsymbol{k}}_{n_{1}l_{1}m_{1}},...,c^{\boldsymbol{k}}_{n_{N}l_{N}m_{N}})^{\top}$ to the three dimensional polarization vector $\boldsymbol{\epsilon}$, and $\delta(\boldsymbol{k}-\boldsymbol{k}_{f})$ representing momentum conservation (we thus set $\boldsymbol{k}_{f}\equiv\boldsymbol{k}$ from now). Dichroism.—Analyzing matrix equation 1, we immediately note that exploiting full polarization control in an experiment can yield a maximum of six linear independent equations to retrieve at most three complex eigenvector components of $\boldsymbol{c}_{\boldsymbol{k}}$ from ARPES intensity measurements. Taking, e.g., a linear combination $|\Psi\rangle=c_{x}|p_{x}\rangle+c_{y}|p_{y}\rangle+c_{z}|p_{z}\rangle$ of $p$-orbitals, we find $\boldsymbol{\mathcal{M}}_{\boldsymbol{k}}\propto\widetilde{f}(k)~{}\boldsymbol{\mathcal{M}}_{p\rightarrow d}(\boldsymbol{\Omega}_{k})+\widetilde{g}(k)~{}\boldsymbol{\mathcal{M}}_{p\rightarrow s}(\boldsymbol{\Omega}_{k})$, where the $3\times 3$ matrices $\boldsymbol{\mathcal{M}}_{p\rightarrow d}$ and $\boldsymbol{\mathcal{M}}_{p\rightarrow s}$ describe dipole transitions from the initial state $\boldsymbol{c}=(c_{x},c_{y},c_{z})^{\top}$ into $d$ and $s$ channels at a given polarization $\boldsymbol{\epsilon}$. 
Developing these matrices in terms of small photoelectron emission angles $\theta_{k}$ Arfken _et al._ (2013), we find (in Cartesian coordinates) $\displaystyle\underset{p\rightarrow d}{\boldsymbol{\mathcal{M}}(\boldsymbol{\Omega}_{k})}$ $\displaystyle=$ $\displaystyle\frac{1}{2\sqrt{2\pi}}\left(\begin{array}[]{ccc}1&0&0\\\ 0&1&0\\\ 0&0&-2\\\ \end{array}\right)+\frac{3}{2\sqrt{2\pi}}\left(\begin{array}[]{ccc}0&0&-\cos\phi_{k}\\\ 0&0&\sin\phi_{k}\\\ \cos\phi_{k}&\sin\phi_{k}&0\\\ \end{array}\right)\theta_{k}+\mathcal{O}(\theta_{k}^{2})~{};$ (8) $\displaystyle\underset{p\rightarrow s}{\boldsymbol{\mathcal{M}}(\boldsymbol{\Omega}_{k})}$ $\displaystyle=$ $\displaystyle\frac{1}{2\sqrt{\pi}}\left(\begin{array}[]{ccc}1&0&0\\\ 0&1&0\\\ 0&0&1\\\ \end{array}\right)~{}.$ (12) Figure 1: (a-d) ARPES data measured along the $k_{x}k_{z}$ mirror plane of BiAg2/Ag(111), reproduced from Ref. Bentmann _et al._, 2017. (a) Measurement with $s$-polarized light and $h\nu=26$ eV. Both spin-orbit split surface states $\Psi^{\pm}$ appear with equal intensities $I_{s}^{+}\sim I_{s}^{-}$ irrespective of photon energy $h\nu$ (see Ref. Bentmann _et al._, 2017 for more data). In contrast, ARPES taken with $p$-polarized light exhibits an $h\nu$-dependent intensity swap between $\Psi^{\pm}$: (b) $I_{p}^{+}\sim I_{p}^{-}$ at $h\nu=30$ eV; (c) $I_{p}^{+}/I_{p}^{-}\sim 0$ at $h\nu=22$ eV; (d) $I_{p}^{-}/I_{p}^{+}\sim 0$ at $h\nu=26$ eV. (e) Illustration of phase-dependent photoemission $s$- ($s_{x/z}$, black dashed) and $d$-channel ($d_{x/z}$, magenta) interference, individually shown for $\Psi^{-}$ (blue, top) and $\Psi^{+}$ (red, bottom) in the complex plane. Channels are represented by unit vectors for clarity. For $\Delta\sigma=\pm\pi$ and $0$, the absolute phase between channels $s=s_{x}+s_{z}$ and $d=d_{x}+d_{z}$ is $\pi/2$ for bands $\Psi^{\pm}$, and their intensities $I^{+}=|M^{+}|^{2}=|M^{-}|^{2}=I^{-}$ are equal. For $\Delta\sigma=\pm\pi/2$, however, $s$- and $d$-channels of $\Psi^{\pm}$ are in phase and interfere constructively, while they are in antiphase and interfere destructively for $\Psi^{\mp}$. Figure 2: Phase-dependent ARPES intensity of bands $\Psi^{\pm}$ in BiAg2 calculated for the explicit experimental geometry $\tan\alpha=\epsilon_{z}/\epsilon_{x}=3$ used in Fig. 1 (b-d) Bentmann _et al._ (2017). As detailed in the Suppl. Info, we find a unique value set $|\widetilde{f}|/|\widetilde{g}|=0.72$ and $\Delta\sigma\sim\mp\pi/9$ that reproduces the complete intensity suppression of $\Psi^{\pm}$ in Figs. 1 (c) and (d). Clearly, the $d$-channel mixes in- and out-of-plane orbital and polarization components and becomes diagonal only right at normal emission $\theta_{k}=0$, where the contributions from $p_{z}$ are twice as large as, and in antiphase to, the contributions from $p_{x}/p_{y}$. (As we will see later, this has important consequences for bands carrying orbital angular momentum, OAM.) In contrast, the $s$-channel is isotropic and diagonal for any emission angle $\boldsymbol{\Omega}_{k}=(\theta_{k},\phi_{k})$, and thus provides a one-to-one mapping of eigenvector onto light-polarization components. This implies that ARPES at photoelectron momenta where the $d$-channel is suppressed, i.e., $\widetilde{f}(k)=0$, is a direct probe of the band’s orbital character.
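As a quick numerical check of this statement, the sketch below implements the small-angle matrices of Eqs. (8) and (12) together with Eq. (1); the example eigenvector is arbitrary. With $\widetilde{f}=0$, the intensities for $\boldsymbol{\epsilon}_{x}$ and $\boldsymbol{\epsilon}_{y}$ indeed reduce, up to a constant, to $|c_{x}|^{2}$ and $|c_{y}|^{2}$.

```python
import numpy as np

def M_k(f, g, theta, phi):
    """Dipole matrices of Eqs. (8) and (12), to first order in theta."""
    Md = np.diag([1.0, 1.0, -2.0]) / (2.0 * np.sqrt(2.0 * np.pi)) \
        + (3.0 / (2.0 * np.sqrt(2.0 * np.pi))) * theta * np.array(
            [[0.0, 0.0, -np.cos(phi)],
             [0.0, 0.0, np.sin(phi)],
             [np.cos(phi), np.sin(phi), 0.0]])
    Ms = np.eye(3) / (2.0 * np.sqrt(np.pi))
    return f * Md + g * Ms

c = np.array([0.6, 0.48j, 0.64])                     # example eigenvector, |c| = 1
for eps, ref in ((np.array([1.0, 0, 0]), abs(c[0]) ** 2),
                 (np.array([0, 1.0, 0]), abs(c[1]) ** 2)):
    I = abs(eps @ M_k(0.0, 1.0, 0.1, 0.7) @ c) ** 2  # Eq. (1), d-channel off
    print(4.0 * np.pi * I, ref)                      # the pairs agree exactly
```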
In particular, linear polarizations $\boldsymbol{\epsilon}_{x}$ and $\boldsymbol{\epsilon}_{y}$ then directly probe the eigenvector amplitudes $|c_{x}|\propto\sqrt{I_{x}}$ and $|c_{y}|\propto\sqrt{I_{y}}$ (the $c_{z}$ component is fixed by the normalization condition $c_{x}^{2}+c_{y}^{2}+c_{z}^{2}=1$), while linear (LD) and circular dichroism (CD) in the $xy$-plane probe their mutual interference $\displaystyle I_{\text{LD}}$ $\displaystyle\propto$ $\displaystyle\frac{I_{x-y}-I_{x+y}}{I_{x-y}+I_{x+y}}=\frac{2\Re(c_{x}^{*}c_{y})}{|c_{x}|^{2}+|c_{y}|^{2}}~{};$ $\displaystyle I_{\text{CD}}$ $\displaystyle\propto$ $\displaystyle\frac{I_{x-iy}-I_{x+iy}}{I_{x-iy}+I_{x+iy}}=\frac{2\Im(c_{x}^{*}c_{y})}{|c_{x}|^{2}+|c_{y}|^{2}}=\frac{1}{\hbar}\frac{\langle L_{z}\rangle}{|c_{x}|^{2}+|c_{y}|^{2}}~{},$ and retrieve OAM component $\langle L_{z}\rangle=\langle\Psi|L_{z}|\Psi\rangle$ and phase relation $\arg(c_{x}/c_{y})=\arctan(I_{\text{CD}}/I_{\text{LD}})$. In analogy, additional LD and CD experiments within the orthogonal $xz$ and $yz$ planes will further provide access to the angular momenta $\langle L_{y}\rangle$ and $\langle L_{x}\rangle$ as well as phase relations $\arg(c_{x}/c_{z})$ and $\arg(c_{y}/c_{z})$, respectively, in principle allowing for a full reconstruction of the eigenstate vector $\boldsymbol{c}$ from ARPES intensity measurements Schüler _et al._ (2020, 2021). Note, however, that this generally requires the photoemission $d$-channel and consequent final state interferences to be reliably suppressed, i.e., a photon-energy where $\widetilde{f}(k)\sim 0$. Application.—Let us illustrate this corollary in a well studied model system whose large $Z$ constituents give rise to elevated final state scattering and strong spin orbit coupling (SOC): the surface alloy BiAg2/Ag(111) Bentmann _et al._ (2017). Density functional theory (DFT) finds its low energy surface electronic structure to be of primarily Bi $6p$ and Ag $5s$ orbital character, with two Rashba bands $\Psi^{\pm}$ whose SOC shaped wave-functions along the system’s $k_{x}k_{z}$ mirror plane are well described by $|\Psi^{\pm}\rangle=\frac{1}{\sqrt{2}}|p_{z},\uparrow\downarrow\rangle\mp\frac{i}{2}|p_{x},\uparrow\downarrow\rangle\pm\frac{1}{2}|p_{y},\downarrow\uparrow\rangle$, with spinors $|\uparrow\downarrow\rangle$ quantized along the $y$-axis and orbital angular momenta $\langle L_{y}\rangle^{\pm}=\mp\hbar/\sqrt{2}$ (Fig. 1 a) Mirhosseini _et al._ (2009); Zhang _et al._ (2013). ARPES experiments with $s$-polarized light and the sample mirror- and ARPES scattering planes coinciding (Fig. 1 a), display both the $\Psi^{+}$ and $\Psi^{-}$ Rashba bands with equal intensity, irrespective of photon energy (See detailed data in Ref. Bentmann _et al._ (2017)). In contrast, ARPES experiments with $p$-polarized light find a photon energy dependent swap of intensity between $\Psi^{+}$ and $\Psi^{-}$ (Fig. 1 b-d) Meier _et al._ (2009); Bentmann _et al._ (2017). Based on our model, these observations can be easily understood in terms of photon energy-dependent $s$ and $d$ channel interference: From Eq. 8, ARPES close to normal emission with $s$-polarized light $\boldsymbol{\epsilon}_{y}$ mostly projects out the $p_{y}$-orbital contributions, leading to an equal intensity distribution $I_{s}^{\pm}\propto|\pm\frac{1}{2}(\sqrt{2}\widetilde{g}+\widetilde{f})|^{2}$ among both bands $\psi^{\pm}$, modulating synchronously with the $k$-, i.e. kinetic- or photon energy dependent interference of the $s$\- and $d$\- channels. 
In contrast, the $p$-polarized geometry is receptive to both the $p_{x}$ and $p_{z}$ orbital contributions. The intensity distribution is given by $I_{p}^{\pm}\propto|\mp\frac{i}{2}(\sqrt{2}\widetilde{g}+\widetilde{f})\cos\alpha+\frac{1}{\sqrt{2}}(\sqrt{2}\widetilde{g}-2\widetilde{f})\sin\alpha|^{2}$, where $\alpha$ is the angle of light incidence that quantifies the ratio of in- and out-of-plane polarization, $\tan\alpha=\epsilon_{z}/\epsilon_{x}$. Interestingly, we now find a disparity $I_{p}^{+}-I_{p}^{-}\propto|\widetilde{f}||\widetilde{g}|\sin 2\alpha\sin\Delta\sigma$ between bands $\Psi^{\pm}$ that scales with the Coulomb phase shift $\Delta\sigma=\arg(\widetilde{f}/\widetilde{g})$ between $s$- and $d$-channels, but vanishes for $|\widetilde{f}||\widetilde{g}|=0$ (either channel suppressed) and for $\alpha=0$ or $\pi/2$ ($\boldsymbol{\epsilon}_{x}$ and $\boldsymbol{\epsilon}_{z}$ not mixed). This is a direct consequence of the interference of the $s$- and $d$-channels resulting from both the $p_{x}$ and $p_{z}$ orbitals. In particular, the pertinent $\pi$-phase shift between the $p_{x}$- and $p_{z}$-derived $d$-channels (the minus sign in the $d$-channel entry ‘$-2$’ of expression 8) reverses their chirality with respect to the $s$-channels. This, along with the opposite OAM $\langle L_{y}\rangle^{\pm}=\mp\hbar/\sqrt{2}$ of bands $\Psi^{\pm}$ (the $\pm\pi/2$ phase between $p_{x}$ and $p_{z}$ orbitals), results in a band-dependent phase difference between $s$- and $d$-waves that is controlled by the Coulomb phase shift $\Delta\sigma$. We visualize this effect in Fig. 1 (e), where the $d_{x/z}$-channels emitted from $p_{x}$ and $p_{z}$ orbitals (for clarity represented by unit vectors in the complex plane) are rotated by $\Delta\sigma$ around their corresponding $s_{x/z}$-channels. For $\Delta\sigma=\pm\pi$ and $0$, the absolute phase between $s=s_{x}+s_{z}$ and $d=d_{x}+d_{z}$ is $\pi/2$ for both bands $\Psi^{\pm}$, and their ARPES intensities $I^{\pm}\propto|M^{\pm}|^{2}=|s_{x}+d_{x}+s_{z}+d_{z}|^{2}$ are consequently identical. For $\Delta\sigma=\pm\pi/2$, however, $s$- and $d$-channels are in phase and interfere constructively for $\Psi^{\pm}$, while they are out of phase and interfere destructively for $\Psi^{\mp}$. This mutual exchange of intensity thus results from the interplay between the opposite chiralities of bands $\Psi^{\pm}$ (their OAMs; marked by the sign change in $s_{x}$) and the opposite chiralities of their photoemission $s$- and $d$-channels (marked by the sign change in $d_{z}$). According to Eq. 8, these arguments also hold for systems carrying OAM along $x$ and light polarized in the $yz$-plane, while the effect vanishes for OAM along $z$ and $xy$-polarized light, where $s$- and $d$-channel chiralities are equal (Suppl. Info). Returning to BiAg2 and examining $I_{p}^{\pm}$ within the experimental geometry $\epsilon_{z}/\epsilon_{x}\sim 3$ used in Ref. Bentmann _et al._ (2017) in more detail (Suppl. Info), we identify a unique cross-section ratio $|\widetilde{f}|/|\widetilde{g}|\sim 0.72$ that produces the total band suppression $I_{p}^{\pm}/I_{p}^{\mp}\sim 0$ observed in Fig. 1 (c,d) for a phase shift of $\Delta\sigma\sim\pm\pi/9$ (Fig. 2). For these particular photon energies, the model thus allows us to extract detailed information on the photoemission final state.
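These numbers are easy to reproduce. The sketch below scans the normal-emission intensities $I_{p}^{\pm}$ given above over $\Delta\sigma$ for the stated geometry $\tan\alpha=3$ and ratio $|\widetilde{f}|/|\widetilde{g}|=0.72$ (with $\widetilde{g}$ set to 1), and recovers the complete suppression of one band near $\Delta\sigma=\pm\pi/9$; flipping the sign of $\Delta\sigma$ suppresses the other band, mirroring the intensity swap between Figs. 1 (c) and (d).

```python
import numpy as np

alpha = np.arctan(3.0)                    # tan(alpha) = eps_z / eps_x = 3
dsig = np.linspace(-np.pi, np.pi, 1441)   # scanned Coulomb phase shift
f = 0.72 * np.exp(1j * dsig)              # d-channel, |f| / |g| = 0.72
g = 1.0                                   # s-channel (reference)
px = 0.5 * (np.sqrt(2.0) * g + f) * np.cos(alpha)                 # p_x part
pz = (np.sqrt(2.0) * g - 2.0 * f) * np.sin(alpha) / np.sqrt(2.0)  # p_z part
I_plus = np.abs(-1j * px + pz) ** 2       # band Psi^+
I_minus = np.abs(+1j * px + pz) ** 2      # band Psi^-
i = np.argmin(I_minus)
print(dsig[i] / np.pi, I_minus[i], I_plus[i])   # ~ 1/9, ~ 0, O(1)
```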
Let us further study the angular dependence of linear dichroism in a system with similar out- and in-plane orbital mixing: the 2D honeycomb monolayer AgTe/Ag(111) Ünzelmann _et al._ (2020). Its occupied low energy electronic structure is described by two distinct bands: band $|\alpha\rangle=\sin\phi_{k}|p_{x}\rangle-\cos\phi_{k}|p_{y}\rangle$, of tangential orbital character with zero angular momentum; and the Rashba split band $|\beta\rangle=\cos\phi_{k}|p_{x}\rangle+\sin\phi_{k}|p_{y}\rangle+ik_{\|}\delta_{sp}|sp_{z}\rangle$ (the splitting is not resolved in the data shown), of primarily radial orbital character, but with in- and out-of-plane orbital mixing $\delta_{sp}\sim 4.15$ Å producing a rotating in-plane orbital angular momentum $\langle\boldsymbol{L}\rangle^{\beta}=2k_{\|}\delta_{sp}\hbar~{}(\sin\phi,\cos\phi,0)^{\top}$ that is governed – in contrast to BiAg2 where SOC is decisive Bentmann _et al._ (2017) – by inversion symmetry breaking at the surface Ünzelmann _et al._ (2020); Ünzelmann (2021). As in our previous discussion, $s$-polarized light close to normal emission projects out the $p_{y}$ character and delivers $I_{s}^{\alpha}\propto\cos^{2}\phi_{k}$ and $I_{s}^{\beta}\propto\sin^{2}\phi_{k}$ (Fig. 3 a,b). Analogously, $p$-polarized light projects out the $p_{x}$ character of band $|\alpha\rangle$ and delivers $I_{p}^{\alpha}\propto\sin^{2}\phi_{k}$. In band $|\beta\rangle$, however, $s$- and $d$-channel interference again yields a $k$-dependent final state intensity $I_{p}^{\beta}\propto|\cos\phi_{k}(\sqrt{2}\widetilde{g}+\widetilde{f})\cos\alpha+ik_{\|}\delta_{sp}(\sqrt{2}\widetilde{g}-2\widetilde{f})\sin\alpha|^{2}$, which breaks the twofold rotational symmetry of the $\cos^{2}\phi_{k}$ pattern if the photoemission $s$- and $d$-channels are out of phase. This produces the oppositely oriented half-moon structures in Fig. 3 (c,e) Ünzelmann (2021); Ünzelmann _et al._, whose angular intensity distributions are fitted to the model in panels (d,f) and provide the relevant cross-section ratios and Coulomb phase shifts annotated in the figure. Figure 3: (a) AgTe/Ag(111) ARPES constant energy maps at 1.3 eV binding energy, reproduced from Ref. Ünzelmann _et al._, 2020. As described in the text, band $\beta$ exhibits a two-fold rotational symmetry when measured with $s$-polarized light at $h\nu=65$ eV (a,b), but shows a distinct half-moon signature with $p$-polarized light at $25$ eV (c,d), which is flipped at $58$ eV (e,f). Black curves in (b,d,f) show a best fit to angular intensity distributions extracted from (a,c,e) at $k_{\|}=0.12$ Å$^{-1}$, employing the experimental geometry $\tan\alpha=1/\sqrt{2}$ (magic angle light incidence) and the orbital mixing parameter $\delta_{sp}\sim 4.15$ Å of Ref. Ünzelmann _et al._, 2020. Discussion.—Finally, let us discuss why – despite the preceding arguments – the PWA has served so well in describing ARPES intensity distributions, in particular also in orbital tomography Bradshaw and Woodruff (2015): the latter is typically applied to small organic molecules whose main element, carbon, is light; Coulomb final state effects therefore fade out quickly with increasing photon energy ($\eta\propto Z/k\rightarrow 0$, $\widetilde{f}/\widetilde{g}\sim\text{const}$), and at least the angular part of the PWA holds.
The investigated orbital character is almost entirely C $2p$; spin-orbit effects and OAM can thus be neglected, and proper light polarization further limits orbital interference to a minimum – ideally even suppressing one of the two photoemission channels altogether. However, it is exactly these reasons that render the PWA problematic in more complex systems. Intermixing of any additional orbital character will introduce two additional Coulomb wavelets that participate in final state interference, and if hybridization to heavy elements, e.g., a metallic substrate, is involved, Coulomb scattering becomes significant. Orbital details obtained from a Fourier reconstruction in orbital tomography might then be meaningless. Although it is not obvious whether and how a similar reconstruction based on Coulomb waves could be implemented without bias (the outgoing photoelectron exit wave is deformed by an a priori unknown potential), a detailed quantitative comparison of model and experiment beyond the conceptual discussion in this work might still be feasible. This, however, crucially relies on well-constructed initial states: the angular part can be routinely obtained by downfolding a Kohn-Sham eigenbasis from density functional theory Mostofi _et al._ (2008), but the radial part in particular depends on a (less obvious) realistic description of the Coulomb potentials close to the nuclei. Whether such an approach turns out to be predictive for complex single- or even many-electron systems remains to be seen. What we have shown so far, however, is that this simple Huygens principle of ARPES has the potential to deliver ballpark figures for final state interference effects that so far required one-step photoemission calculations at the ab initio level Scholz _et al._ (2013); Dauth _et al._ (2016); Bentmann _et al._ (2021), while maintaining the computational ease and the priceless intuition of the PWA. Acknowledgements.—I thank Henriette Maaß, Maximilian Ünzelmann, Hendrik Bentmann and Friedel Reinert of EP7, Würzburg, for raising the problem of final state interference in BiAg2 and AgTe, and for sharing their experimental data. Further, I thank Philipp Eck, Jonas Erhardt, Hans Kirschner, Peter Puschnig, Ralph Claessen and Phil Woodruff for helpful discussions and valuable feedback on this work. Funding support came from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy through the Würzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter ct.qmat (EXC 2147, Project ID 390858490) as well as through the Collaborative Research Center SFB 1170 ToCoTronics (Project ID 258499086). ## References * Miller (1991) D. A. B. Miller, Optics Letters 16, 1370 (1991). * Puschnig _et al._ (2009) P. Puschnig, S. Berkebile, A. J. Fleming, G. Koller, K. Emtsev, T. Seyller, J. D. Riley, C. Ambrosch-Draxl, F. P. Netzer, and M. G. Ramsey, Science 326, 702 (2009). * Lueftner _et al._ (2014) D. Lueftner, T. Ules, E. M. Reinisch, G. Koller, S. Soubatch, F. S. Tautz, M. G. Ramsey, and P. Puschnig, Proceedings of the National Academy of Sciences 111, 605 (2014). * Moser (2017) S. Moser, Journal of Electron Spectroscopy and Related Phenomena 214, 29 (2017). * Bradshaw and Woodruff (2015) A. M. Bradshaw and D. P. Woodruff, New Journal of Physics 17, 013033 (2015). * Messiah (1961) A. Messiah, _Quantum Mechanics_ (North-Holland Publishing, Amsterdam, 1961). * Cooper (1962) J. W.
Cooper, Physical Review 128, 681 (1962). * Abramowitz _et al._ (1988) M. Abramowitz, I. A. Stegun, and R. H. Romer, American Journal of Physics 56, 958 (1988). * Arfken _et al._ (2013) G. B. Arfken, H. J. Weber, and F. E. Harris, _Mathematical Methods for Physicists_, 7th ed. (Elsevier Academic Press, 2013). * Day _et al._ (2019) R. P. Day, B. Zwartsenberg, I. S. Elfimov, and A. Damascelli, npj Quantum Materials 4, 54 (2019), arXiv:1909.12081 . * Pendry (1976) J. Pendry, Surface Science 57, 679 (1976). * Drake and Cassar (2006) G. W. F. Drake and M. M. Cassar, _SciencesNew York_, edited by G. Drake (Springer New York, New York, NY, 2006) p. 1504. * Lebech _et al._ (2012) M. Lebech, J. C. Houver, G. Raseev, A. S. dos Santos, D. Dowek, and R. R. Lucchese, The Journal of Chemical Physics 136, 094303 (2012). * Bentmann _et al._ (2017) H. Bentmann, H. Maaß, E. E. Krasovskii, T. R. F. Peixoto, C. Seibel, M. Leandersson, T. Balasubramanian, and F. Reinert, Physical Review Letters 119, 106401 (2017), arXiv:1507.04664 . * Schüler _et al._ (2020) M. Schüler, U. De Giovannini, H. Hübener, A. Rubio, M. A. Sentef, and P. Werner, Science Advances 6, 1 (2020), arXiv:1905.09404 . * Schüler _et al._ (2021) M. Schüler, T. Pincelli, S. Dong, T. P. Devereaux, M. Wolf, L. Rettig, R. Ernstorfer, and S. Beaulieu, , 1 (2021), arXiv:2103.17168 . * Mirhosseini _et al._ (2009) H. Mirhosseini, J. Henk, A. Ernst, S. Ostanin, C.-T. Chiang, P. Yu, A. Winkelmann, and J. Kirschner, Physical Review B 79, 245428 (2009). * Zhang _et al._ (2013) H. Zhang, C.-X. Liu, and S.-C. Zhang, Physical Review Letters 111, 066801 (5 pp.) (2013). * Meier _et al._ (2009) F. Meier, J. H. Dil, and J. Osterwalder, New Journal of Physics 11, 125008 (2009). * Ünzelmann _et al._ (2020) M. Ünzelmann, H. Bentmann, P. Eck, T. Kißlinger, B. Geldiyev, J. Rieger, S. Moser, R. C. Vidal, K. Kißner, L. Hammer, M. A. Schneider, T. Fauster, G. Sangiovanni, D. Di Sante, and F. Reinert, Physical Review Letters 124, 176401 (2020), arXiv:1912.05210 . * Ünzelmann (2021) M. Ünzelmann, _Interplay of Inversion Symmetry Breaking and Spin-Orbit Coupling - From the Rashba Effect to Weyl Semimetals_ , Ph.D. thesis (2021). * (22) M. Ünzelmann, H. Bentmann, and F. Reinert, unpublished . * Mostofi _et al._ (2008) A. A. Mostofi, J. R. Yates, Y. S. Lee, I. Souza, D. Vanderbilt, and N. Marzari, Computer Physics Communications 178, 685 (2008), arXiv:0708.0650 . * Scholz _et al._ (2013) M. R. Scholz, J. Braun, D. Marchenko, A. Varykhalov, M. Lindroos, Y. J. Wang, H. Lin, A. Bansil, H. Ebert, A. Volykhov, L. V. Yashina, O. Rader, J. Sánchez-Barriga, J. Braun, D. Marchenko, A. Varykhalov, M. Lindroos, Y. J. Wang, H. Lin, A. Bansil, J. Minár, H. Ebert, A. Volykhov, L. V. Yashina, and O. Rader, Physical Review Letters 110, 216801 (2013). * Dauth _et al._ (2016) M. Dauth, M. Graus, I. Schelter, M. Wießner, A. Schöll, F. Reinert, and S. Kümmel, Physical Review Letters 117, 1 (2016). * Bentmann _et al._ (2021) H. Bentmann, H. Maaß, J. Braun, C. Seibel, K. A. Kokh, O. E. Tereshchenko, S. Schreyeck, K. Brunner, L. W. Molenkamp, K. Miyamoto, M. Arita, K. Shimada, T. Okuda, J. Kirschner, C. Tusche, H. Ebert, J. Minár, and F. Reinert, Physical Review B 103, L161107 (2021).
Michael Hanke, KTH Royal Institute of Technology, Department of Mathematics, S-10044 Stockholm, Sweden, <EMAIL_ADDRESS>· Roswitha März, Humboldt University of Berlin, Institute of Mathematics, D-10099 Berlin, Germany, <EMAIL_ADDRESS> # Towards a reliable implementation of least-squares collocation for higher-index differential-algebraic equations Michael Hanke Roswitha März (August 27, 2024) ###### Abstract In this note we discuss several questions concerning the implementation of overdetermined least-squares collocation methods for higher-index differential-algebraic equations (DAEs). Since higher-index DAEs lead to ill-posed problems in natural settings, their discrete counterparts are expected to be very sensitive, which attaches particular importance to their implementation. We provide a robust selection of basis functions and collocation points for designing the discrete problem and substantiate a procedure for its numerical solution. Additionally, a number of new error estimates are proven that support some of the design decisions. ###### Keywords: Least-squares collocation, higher-index differential-algebraic equations, ill-posed problem ###### MSC: 65L80, 65L08, 65F20, 34A99 ## 1 Introduction An overdetermined least-squares collocation method for the solution of boundary-value problems for higher-index differential-algebraic equations (DAEs) has been introduced in HMTWW and further investigated in HMT; HM; HM1. A couple of sufficient convergence conditions have been established. Numerical experiments indicate excellent behavior. Moreover, it is particularly noteworthy that the computational effort is not much more expensive than that of standard collocation methods applied to boundary-value problems for ordinary differential equations. However, the particular procedures are much more sensitive, which reflects the ill-posedness of higher-index DAEs. The question of a reliable implementation is almost completely open. The method offers a number of parameters and options whose selection has not yet been backed up by theoretical justifications. The present paper is devoted to a first investigation of this topic. We focus on the choice of collocation nodes, the representation of the ansatz functions, as well as the shape and structure of the resulting discrete problem. We apply various theoretical arguments, among them new sufficient convergence conditions in Theorems 2.1, 2.2, and 2.3, and report on corresponding systematic, comprehensive numerical experiments. The paper is organized as follows: Section 2 contains the information concerning the problem to be solved as well as the basics of the overdetermined least-squares approach and, additionally, the new error estimates. Section 3 deals with the selection and calculation of collocation points and integration weights for the different functionals of interest, and Section 4 provides a robust selection of a basis of the ansatz space. The resulting discrete least-squares problem is treated in Section 5, where a number of experiments are reported. The more detailed structure of the discrete problem is described in the Appendix. We conclude with Section 6, which contains a summary and further comments. The algorithms have been implemented in C++11.
All computations have been performed on a laptop running OpenSuSE Linux (release Leap 15.1), using the GNU g++ compiler (version 7.5.0) GCC, the Eigen matrix library (version 3.3.7) EigenLib, SuiteSparse (version 5.6.0) DavisSS, in particular its sparse QR factorization SPQR, and Intel® MKL (version 2019.5-281), all in double precision with a rounding unit of $\epsilon_{\textrm{mach}}\approx 2.22\times 10^{-16}$ (Intel is a registered trademark of Intel Corporation). The code is compiled with optimization level -O3. ## 2 Fundamentals of the problem and method Consider a linear boundary-value problem for a DAE with properly involved derivative, $\displaystyle A(t)(Dx)^{\prime}(t)+B(t)x(t)$ $\displaystyle=q(t),\quad t\in[a,b],$ (1) $\displaystyle G_{a}x(a)+G_{b}x(b)$ $\displaystyle=d,$ (2) with $[a,b]\subset\mathbb{R}$ being a compact interval, $D=[I\;0]\in\mathbb{R}^{k\times m}$, $k<m$, with the identity matrix $I\in\mathbb{R}^{k\times k}$. Furthermore, $A(t)\in\mathbb{R}^{m\times k}$, $B(t)\in\mathbb{R}^{m\times m}$, and $q(t)\in\mathbb{R}^{m}$ are assumed to be sufficiently smooth with respect to $t\in[a,b]$. Moreover, $G_{a},G_{b}\in\mathbb{R}^{l\times m}$. Thereby, $l$ is the dynamical degree of freedom of the DAE, that is, the number of free parameters which can be fixed by initial and boundary conditions. Unlike regular ordinary differential equations (ODEs), where $l=k=m$, for DAEs it holds that $0\leq l\leq k<m$; in particular, $l=k$ for index-one DAEs, $l<k$ for higher-index DAEs, and $l=0$ can certainly happen. Supposing accurately stated initial and boundary conditions, index-one DAEs yield well-posed problems in natural settings and can be treated numerically quite well, similarly to ODEs LMW. In contrast, in the present paper, we are mainly interested in higher-index DAEs, which lead to essentially ill-posed problems even if the boundary conditions are stated accurately CRR; LMW; HMT. The tractability index and projector-based analysis serve as the basis for our investigations. We refer to CRR for a detailed presentation and to LMW; Mae2014; HMT for corresponding short sketches. We assume that the DAE is regular with arbitrarily high index $\mu\in\mathbb{N}$ and that the boundary conditions are stated accurately so that solutions of the problem (1)-(2) are unique. We also assume that a solution $x_{\ast}:[a,b]\rightarrow\mathbb{R}^{m}$ actually exists and is sufficiently smooth. For the construction of a regularization method to treat an essentially ill-posed problem, a Hilbert space setting of the problem is most convenient. For this reason, as in HMTWW; HMT; HM, we apply the spaces $\displaystyle H_{D}^{1}$ $\displaystyle:=H_{D}^{1}((a,b),\mathbb{R}^{m})=\\{x\in L^{2}((a,b),\mathbb{R}^{m}):Dx\in H^{1}((a,b),\mathbb{R}^{k})\\},$ $\displaystyle L^{2}$ $\displaystyle:=L^{2}((a,b),\mathbb{R}^{m}),$ which are suitable for describing the underlying operators. In particular, let $\mathcal{T}:H_{D}^{1}\rightarrow L^{2}\times\mathbb{R}^{l}$ be given by $\displaystyle(\mathcal{T}x)(t)=$ $\displaystyle\left[\begin{array}[]{c}A(t)(Dx)^{\prime}(t)+B(t)x(t)\\\ G_{a}x(a)+G_{b}x(b)\end{array}\right].$ (5) Then the boundary-value problem can be described by $\mathcal{T}x=(q,d)^{T}$. For $K>0$, let $\mathfrak{P}_{K}$ denote the set of all polynomials of degree less than or equal to $K$.
Next, we define a finite dimensional subspace $X_{\pi}\subset H_{D}^{1}$ of piecewise polynomial functions which should serve as ansatz space for the least-squares approximation: Let the partition $\pi$ be given by $\pi:\quad a=t_{0}<t_{1}<\cdots<t_{n}=b,$ (6) with the stepsizes $h_{j}=t_{j}-t_{j-1}$, $h=\max_{1\leq j\leq n}h_{j}$, and $h_{min}=\min_{1\leq j\leq n}h_{j}$. Let $C_{\pi}([a,b],\mathbb{R}^{m})$ denote the space of piecewise continuous functions having breakpoints merely at the meshpoints of the partition $\pi$. Let $N\geq 1$ be a fixed integer. Then, we define $\displaystyle X_{\pi}$ $\displaystyle=\\{x\in C_{\pi}([a,b],\mathbb{R}^{m}):Dx\in C([a,b],\mathbb{R}^{k}),$ $\displaystyle x_{\kappa}\lvert_{[t_{j-1},t_{j})}\in\mathfrak{P}_{N},\,\kappa=1,\ldots,k,\quad x_{\kappa}\lvert_{[t_{j-1},t_{j})}\in\mathfrak{P}_{N-1},\,\kappa=k+1,\ldots,m,\;j=1,\ldots,n\\}.$ (7) The continuous version of the least-squares method reads: Find an $x_{\pi}\in X_{\pi}$ which minimizes the functional $\Phi(x)=\|\mathcal{T}x\|^{2}=\int_{a}^{b}|A(t)(Dx)^{\prime}(t)+B(t)x(t)-q(t)|^{2}{\rm d}t+|G_{a}x(a)+G_{b}x(b)-d|^{2}.$ (8) It is ensured by (HMT, , Theorem 4.1) that, for all sufficiently fine partitions $\pi$ with bounded ratios $1\leq h/h_{min}\leq\rho$, $\rho$ being a global constant, there exists a unique solution $x_{\pi}\in X_{\pi}$ and the inequality $\|x_{\pi}-x_{\ast}\|_{H_{D}^{1}}\leq Ch^{N-\mu+1}$ (9) is valid. The constant $C\in\mathbb{R}$ depends on the solution $x_{*}$, the degree $N$, and the index $\mu$, but it is independent of $h$. If $N>\mu-1$ then (9) apparently indicates convergence $x_{\pi}\xrightarrow{h\rightarrow 0}x_{*}$ in $H_{D}^{1}$. At this place it is important to mention that, so far, we are aware of only sufficient conditions of convergence and the error estimates may not be sharp. Not only more practical questions of implementation are open, but also several questions about the theoretical background. We are optimistic that much better estimates are possible since the results of the numerical experiments have performed impressively better than theoretically expected till now. The following theorem can be understood as a specification of (HMT, , Theorem 4.1) by a more detailed description of the ingredients of the constant C, in particular, now the role of $N$ is better clarified, which could well be important for the practical realization. In particular, it suggests that smooth problems could perhaps be solved better with large N and coarser partitions. ###### Theorem 2.1 Let the DAE (1) be regular with index $\mu\in\mathbb{N}$ and let the boundary condition (2) be accurately stated. Let $x_{*}$ be a solution of the boundary value problem (1)–(2), and let $A,B,q$ and also $x_{*}$ be sufficiently smooth. Let $N\geq 1$ and all partitions $\pi$ be such that $h/h_{min}\leq\rho$, with a global constant $\rho$. Then, for all such partitions with sufficiently small $h$, the estimate (9) is valid with $C=\frac{N!}{(2N)!\sqrt{2N+1}}C_{N}C_{*}\rho^{\mu-1}C_{data},$ where $\displaystyle C_{\ast}$ $\displaystyle=\max\\{\|x_{\ast}^{(N)}\|_{\infty},\|x_{\ast}^{(N+1)}\|_{\infty}\\}(m+4k(b-a)^{3})^{1/2},$ $\displaystyle C_{data}$ $\displaystyle\quad\text{is independent of $N$ and $h$, it depends only on the data }\;A,D,B,G_{a},G_{b},$ and $C_{N}$ is a rather involved function of $N$. In particular, there is an integer $K$ with $N\leq K\leq 2(\mu-1)+N$ such that, for $N\rightarrow\infty$, $C_{N}$ does not grow faster than $K^{2(\mu-1)}$. If $A$ and $B$ are constant, it holds $K=N$. 
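The rapid decay of the factor $\frac{N!}{(2N)!\sqrt{2N+1}}$ in this constant is easily tabulated and supports the suggestion that smooth problems may be solved better with large $N$ and coarser partitions; a minimal check:

```python
import math

# Prefactor N! / ((2N)! * sqrt(2N + 1)) of the constant C in Theorem 2.1
for N in range(1, 11):
    c = math.factorial(N) / (math.factorial(2 * N) * math.sqrt(2 * N + 1))
    print(f"N = {N:2d}   prefactor = {c:.3e}")
```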
At this place it should be mentioned that estimate Robbins55 $\sqrt{2\pi N}\left(\frac{N}{e}\right)^{N}e^{1/(12N+1)}\leq N!\leq\sqrt{2\pi N}\left(\frac{N}{e}\right)^{N}e^{1/(12N)},$ or its slightly less sharp version, $\sqrt{2\pi N}\left(\frac{N}{e}\right)^{N}\leq N!\leq\sqrt{2\pi N}\left(\frac{N}{e}\right)^{N}e^{1/12}$ allow the growth estimate $\frac{N!}{(2N)!}\leq\sqrt{\pi N}e^{1/6}\frac{1}{N!}\frac{1}{4^{N}}$, thus $\displaystyle C$ $\displaystyle\leq\sqrt{\pi N}e^{1/6}\frac{1}{N!}\frac{1}{4^{N}\sqrt{2N+1}}C_{N}C_{*}\rho^{\mu-1}C_{data}\leq\sqrt{\frac{\pi}{2}}\,e^{1/6}\frac{1}{N!}\frac{1}{4^{N}}C_{N}C_{*}\rho^{\mu-1}C_{data}$ $\displaystyle\leq\frac{1}{N!}\frac{1}{4^{N}}C_{N}C_{*}\rho^{\mu-1}\sqrt{\frac{\pi}{2}}e^{1/6}C_{data}.$ (10) However, it should be considered that $C_{N}$ and $C_{*}$ also depend on $N$. ###### Proof We apply the estimate HMT $\|x_{\pi}-x_{\ast}\|_{H_{D}^{1}}\leq\frac{\|\mathcal{T\|\alpha_{\pi}}}{\gamma_{\pi}}+\alpha_{\pi},$ in which the approximation error $\alpha_{\pi}$ and the instability threshold $\gamma_{\pi}$ are given by $\alpha_{\pi}=\inf_{x\in X_{\pi}}\|x-x_{\ast}\|_{H_{D}^{1}},\quad\gamma_{\pi}=\inf_{x\in X_{\pi},x\neq 0}\frac{\|\mathcal{T}x\|}{\|x\|_{H_{D}^{1}}}.$ Owing to (HMT, , Theorem 4.1), there is a constant $c_{\gamma}>0$ independent of $\pi$ so that the instability threshold $\gamma_{\pi}$ satisfies the inequality $c_{\gamma}h_{min}^{\mu-1}\leq\gamma_{\pi}\leq\lVert\mathcal{T}\rVert,$ for all partitions with sufficiently small $h$. This leads to $\|x_{\pi}-x_{\ast}\|_{H_{D}^{1}}\leq\frac{\alpha_{\pi}}{\gamma_{\pi}}(\|\mathcal{T}\|+\gamma_{\pi})\leq 2\frac{\alpha_{\pi}}{\gamma_{\pi}}\|\mathcal{T}\|.$ Choosing $N$ interpolation points $\rho_{i}$ with $\displaystyle 0$ $\displaystyle<\rho_{1}<\cdots<\rho_{N}<1,$ (11) $\displaystyle\tilde{\omega}(\rho)$ $\displaystyle=(\rho-\rho_{1})\cdots(\rho-\rho_{N}),$ the approximation error can be estimated by straightforward but elaborate computations by constructing $p_{*}\in X_{\pi}$ such that $p_{*,s}^{\prime}(t_{j-1}+\rho_{i}h_{j})=x_{*,s}^{\prime}(t_{j-1}+\rho_{i}h_{j})$, $p_{*,s}(a)=x_{*,s}(a)$, $s=1,\ldots,k$, $p_{*,s}(t_{j-1}+\rho_{i}h_{j})=x_{*,s}(t_{j-1}+\rho_{i}h_{j})$, $s=k+1,\ldots,m$, $i=1,\ldots,N$, $j=1,\ldots,n$, and regarding $\alpha_{\pi}\leq\|p_{*}-x_{\ast}\|_{H_{D}^{1}}$. One obtains $\displaystyle\alpha_{\pi}$ $\displaystyle\leq\frac{h^{N}}{N!}\lVert\tilde{\omega}\rVert_{L^{2}(0,1)}C_{\ast},$ (12) $\displaystyle\quad C_{\ast}=\max\\{\|x_{\ast}^{(N)}\|_{\infty},\|x_{\ast}^{(N+1)}\|_{\infty}\\}(m+4k(b-a)^{3})^{1/2}.$ Turning to shifted Gauss-Legendre nodes that minimize $\lVert\tilde{\omega}\rVert_{L^{2}(0,1)}$ we obtain $\lVert\tilde{\omega}\rVert_{L^{2}(0,1)}=\frac{(N!)^{2}}{(2N)!\sqrt{2N+1}}.$ To verify this, we consider the polynomial $\displaystyle\omega(t)=2^{N}\tilde{\omega}\bigg{(}\frac{t+1}{2}\bigg{)}=(t-t_{1})\cdots(t-t_{N})$ with zeros $t_{j}=2\rho_{j}-1$, $j=1,\ldots,N$, which is nothing else but the standard Legendre polynomial with leading coefficient one. Using the Rodrigues formula and other arguments from (HaemHoff91, , Section 5.4), one obtains $\lVert\omega\rVert_{L^{2}(-1,1)}=2^{N+\frac{1}{2}}\frac{(N!)^{2}}{(2N)!\sqrt{2N+1}}.$ Finally, shifting back to the interval $(0,1)$ leads to $\lVert\tilde{\omega}\rVert_{L^{2}(0,1)}=2^{-(N+\frac{1}{2})}\lVert\omega\rVert_{L^{2}(-1,1)}$. 
Thus we have $\alpha_{\pi}\leq\frac{h^{N}}{N!}\frac{(N!)^{2}}{(2N)!\sqrt{2N+1}}C_{\ast}=h^{N}\frac{N!}{(2N)!\sqrt{2N+1}}C_{\ast}.$ (13) Next, a careful review of the proof of (HMT, , Theorem 4.1 (a)) results in the representation (in terms of HMT ) $\displaystyle\frac{1}{c_{\gamma}}$ $\displaystyle=12c_{Y}\sqrt{g_{\mu-1}}=12c_{Y}\sqrt{d_{1,\mu-1}c^{*}_{\mu-1}\lVert D\mathcal{L}_{\mu-1}\rVert^{2}_{\infty}}$ $\displaystyle=12c_{Y}\sqrt{2}\lVert D\Pi_{0}Q_{1}\cdots Q_{\mu-1}D^{+}\rVert_{\infty}\lVert D\mathcal{L}_{\mu-1}\rVert_{\infty}\sqrt{c^{*}_{\mu-1}}.$ The factors $\lVert D\Pi_{0}Q_{1}\cdots Q_{\mu-1}D^{+}\rVert_{\infty}$ and $\lVert D\mathcal{L}_{\mu-1}\rVert_{\infty}$ depend only on the data $A,D,B$, likewise the bound $c_{Y}$ introduced in (HMT, , Proposition 4.3). In contrast, the term $c^{*}_{\mu-1}$ depends additionally on $N$ besides the problem data. Let $K$ denote the degree of the auxiliary polynomial $q_{\mu-1}=\mathfrak{A}_{\mu-1}(Dp)^{\prime}+\mathfrak{B}_{\mu-1}p,\;p\in X_{\pi}$ in the proof of (HMT, , Theorem 4.1). Then we have $N\leq K\leq N+2(\mu-1)$ and, by (HMT, , Lemma 4.2), $c_{\mu-1}^{*}=4^{\mu-1}\lambda_{K}\cdots\lambda_{K-\mu+2}$, where each $\lambda_{S}>0$ is the maximal eigenvalue of a certain symmetric, positive semidefinite matrix of size $(S+1)\times(S+1)$ (HMTWW, , Lemma 3.3). Owing to (HMTWW, , Corollary A.3) it holds that $\lambda_{S}\leq\frac{4}{\pi^{2}}S^{4}+O(S^{2})$ for large $S$, and therefore $\displaystyle c_{\mu-1}^{*}$ $\displaystyle=4^{\mu-1}\lambda_{K}\cdots\lambda_{K-\mu+2}$ $\displaystyle\leq 4^{\mu-1}(\frac{4}{\pi^{2}})^{\mu-1}K^{4}(K-1)^{4}\cdots(K-\mu+2)^{4}+O(K^{4(\mu-1)-1})$ $\displaystyle=4^{\mu-1}(\frac{4}{\pi^{2}})^{\mu-1}K^{4(\mu-1)}+O(K^{4(\mu-1)-1})$ $\displaystyle\leq 4^{\mu-1}(\frac{4}{\pi^{2}})^{\mu-1}(N+2(\mu-1))^{4(\mu-1)}+O((N+2(\mu-1))^{4(\mu-1)-1}).$ Finally, letting $\displaystyle C_{data}=2\|\mathcal{T}\|12c_{Y}\sqrt{2}\lVert D\Pi_{0}Q_{1}\cdots Q_{\mu-1}D^{+}\rVert_{\infty}\lVert D\mathcal{L}_{\mu-1}\rVert_{\infty},\quad C_{N}=\sqrt{c^{*}_{\mu-1}},$ we are done. ∎ Observe that, for smooth problems, any fixed sufficiently fine partition $\pi$, and $N\rightarrow\infty$, the growth rate of the error $\lVert x_{\pi}-x_{*}\rVert_{H^{1}_{D}}$ is not greater than that of $\displaystyle C_{*}h^{N}\frac{(N+2(\mu-1))^{2(\mu-1)}}{4^{N}N!}=C_{*}\left(\frac{h}{4}\right)^{N}\frac{(N+2(\mu-1))^{2(\mu-1)}}{N!}$ (14) and, for constant matrix function $A$ and $B$, $\displaystyle C_{*}h^{N}\frac{N^{2(\mu-1)}}{4^{N}N!}=C_{*}\left(\frac{h}{4}\right)^{N}\frac{N^{2(\mu-1)}}{N!}.$ (15) Remember that $C_{*}$ is a function of $N$. ###### Remark 1 The specific error estimation provided in HMTWW for the case of DAEs in Jordan chain form on equidistant grids may provide some further insight in the behavior of the instability threshold $\gamma_{\pi}$. It is shown that $\gamma_{\pi}\geq\bar{C}_{\mu}\left(\frac{h}{\sqrt{\lambda_{N}}}\right)^{\mu-1}$ holds true for sufficiently small $h$ where $\bar{C}_{\mu}$ is a moderate constant depending only on $\mu$ (HMTWW, , Theorem 3.6). This leads to the dominant error term $\displaystyle\frac{\alpha_{\pi}}{\gamma_{\pi}}\leq\frac{C_{\ast}}{\bar{C}_{\mu}}\sqrt{\frac{\pi}{2}}e^{1/6}\frac{1}{2^{2N}}\frac{\lambda_{N}^{\frac{\mu-1}{2}}}{N!}h^{N-\mu+1}=\frac{1}{\bar{C}_{\mu}}\sqrt{\frac{\pi}{2}}e^{1/6}\frac{1}{h^{\mu-1}}C_{\ast}\left(\frac{h}{4}\right)^{N}\frac{\lambda_{N}^{\frac{\mu-1}{2}}}{N!},$ indicating again that, for smooth problems it seems reasonable to calculate with larger N and coarse partitions. 
Moreover, for sufficiently small $\frac{h}{\sqrt{\lambda_{N}}}$, the estimation $\lambda_{N}\leq\frac{4}{\pi^{2}}N^{4}+O(N^{2})$ becomes valid (HMTWW, , Remark 3.4), and hence the growth characteristic (15) for large $N$ is confirmed once more. ∎ The functional values $\Phi(x)$, which are needed when minimizing for $x\in X_{\pi}$, cannot be evaluated exactly and the integral must be discretized accordingly. Taking into account that the boundary-value problem is ill-posed in the higher index case $\mu>1$, perturbations of the functional may have a serious influence on the error of the approximate least-squares solution or even prevent convergence towards the solution $x_{\ast}$. Therefore, careful approximations of the integral in $\Phi$ are required. We discuss the following three options: $\Phi_{\pi,M}^{C}(x)=\sum_{j=1}^{n}\frac{h_{j}}{M}\sum_{i=1}^{M}|A(t_{ji})(Dx)^{\prime}(t_{ji})+B(t_{ji})x(t_{ji})-q(t_{ji})|^{2}+|G_{a}x(a)+G_{b}x(b)-d|^{2},$ (16) $\Phi_{\pi,M}^{I}(x)=\sum_{j=1}^{n}h_{j}\sum_{i=1}^{M}\gamma_{i}|A(t_{ji})(Dx)^{\prime}(t_{ji})+B(t_{ji})x(t_{ji})-q(t_{ji})|^{2}+|G_{a}x(a)+G_{b}x(b)-d|^{2},$ (17) and $\Phi_{\pi,M}^{R}(x)=\sum_{j=1}^{n}\int_{t_{j-1}}^{t_{j}}\left|\sum_{i=1}^{M}l_{ji}(t)(A(t_{ji})(Dx)^{\prime}(t_{ji})+B(t_{ji})x(t_{ji})-q(t_{ji}))\right|^{2}{\rm d}t\\\ +|G_{a}x(a)+G_{b}x(b)-d|^{2},$ (18) in which from the DAE (1) and $x\in X_{\pi}$ only data at the points $t_{ji}=t_{j-1}+\tau_{i}h_{j},\quad i=1,\ldots,M,\;j=1,\ldots,n,$ are included, with $0\leq\tau_{1}<\cdots<\tau_{M}\leq 1.$ (19) In the last functional $\Phi_{\pi,M}^{R}$ Lagrange basis polynomials appear, i.e., $l_{ji}(t)=\frac{\prod_{\begin{subarray}{c}\kappa=1\\\ \kappa\neq i\end{subarray}}^{M}(t-t_{j\kappa})}{\prod_{\begin{subarray}{c}\kappa=1\\\ \kappa\neq i\end{subarray}}^{M}(t_{ji}-t_{j\kappa})}=\frac{\prod_{\begin{subarray}{c}\kappa=1\\\ \kappa\neq i\end{subarray}}^{M}(\tau-\tau_{\kappa})}{\prod_{\begin{subarray}{c}\kappa=1\\\ \kappa\neq i\end{subarray}}^{M}(\tau_{i}-\tau_{\kappa})}=:l_{i}(\tau),\quad\tau=(t-t_{j-1})/h_{j}.$ (20) ###### Remark 2 The direct numerical implementation of $\Phi^{R}_{\pi,M}(x)$ with the Lagrangian basis functions includes the use of the mass matrix belonging to such functions. It is well known that this matrix may be very bad conditioned thus leading to an amplification of rounding errors. In connection with the ill-posedness of higher-index DAEs, this may render the numerical solutions useless. The solution of the least-squares problem with $\Phi_{\pi,M}^{I}$ is much less expensive than that with $\Phi^{R}_{\pi,M}$, and in turn, solving system (23)-(24) below for $x\in X_{\pi}$ in a least-squares sense using the (diagonally weighted) Euclidean norm in $\mathbb{R}^{nMm+l}$ according to $\Phi_{\pi,M}^{C}$ is even less computationally expensive than using $\Phi^{I}_{\pi,M}(x)$. 
∎

Introducing, for each $x\in X_{\pi}$ and $w(t)=A(t)(Dx)^{\prime}(t)+B(t)x(t)-q(t)$, the corresponding vector $W\in\mathbb{R}^{mMn}$ by

$W=\left[\begin{array}[]{c}W_{1}\\ \vdots\\ W_{n}\end{array}\right]\in\mathbb{R}^{mMn},\quad W_{j}=h_{j}^{1/2}\left[\begin{array}[]{c}w(t_{j1})\\ \vdots\\ w(t_{jM})\end{array}\right]\in\mathbb{R}^{mM},$ (21)

we obtain new representations of these functionals, namely

$\Phi_{\pi,M}^{C}(x)=W^{T}\mathcal{L}^{C}W+|G_{a}x(a)+G_{b}x(b)-d|^{2},$

$\Phi_{\pi,M}^{I}(x)=W^{T}\mathcal{L}^{I}W+|G_{a}x(a)+G_{b}x(b)-d|^{2},$

and

$\Phi_{\pi,M}^{R}(x)=W^{T}\mathcal{L}^{R}W+|G_{a}x(a)+G_{b}x(b)-d|^{2},$

whereby the first two formulae are evident, with $\mathcal{L}^{C}=\operatorname{diag}(L^{C}\otimes I_{m},\ldots,L^{C}\otimes I_{m})$, $\otimes$ denoting the Kronecker product, and $L^{C}=M^{-1}I_{M}$, such that finally $\mathcal{L}^{C}=M^{-1}I_{nMm}$, and further, $\mathcal{L}^{I}=\operatorname{diag}(L^{I}\otimes I_{m},\ldots,L^{I}\otimes I_{m})$ and $L^{I}=\operatorname{diag}(\gamma_{1},\ldots,\gamma_{M})$. $L^{C}$ and thus $\mathcal{L}^{C}$ are positive definite. The matrices $L^{I}$ and $\mathcal{L}^{I}$ are positive definite if and only if all quadrature weights are positive. The formula for $\Phi_{\pi,M}^{R}(x)$ can be established by straightforward evaluations following the lines (Footnote 2: (HMTWW, Section 2.3) is restricted to equidistant partitions $\pi$ and collocation points $0<\tau_{1}<\cdots<\tau_{M}<1$. The generalization works without further ado.) of (HMTWW, Section 2.3), in which $\mathcal{L}^{R}=\operatorname{diag}(L^{R}\otimes I_{m},\ldots,L^{R}\otimes I_{m})$, where $L^{R}$ is the mass matrix associated with the Lagrange basis functions $l_{i}$, $i=1,\ldots,M$, (20) for the node sequence (19); more precisely,

$L^{R}=(L_{i\kappa}^{R})_{i,\kappa=1,\ldots,M},\quad L_{i\kappa}^{R}=\int_{0}^{1}l_{i}(\tau)l_{\kappa}(\tau)d\tau.$ (22)

$L^{R}$ is symmetric and positive definite and, consequently, so is $\mathcal{L}^{R}$. We emphasize that the matrices $L^{C},L^{I},L^{R}$ depend only on $M$, the node sequence (19), and the quadrature weights, but do not depend on the partition $\pi$ and $h$ at all. We always set $M\geq N+1$. Although the nodes (19) serve as interpolation points in the functional $\Phi^{R}_{\pi,M}$, we still call them collocation nodes after HMTWW.

It should be underlined here that minimizing each of the above functionals on $X_{\pi}$ can be viewed as a special least-squares method to solve the overdetermined collocation system $W=0$, $G_{a}x(a)+G_{b}x(b)=d$, with respect to $x\in X_{\pi}$, that is, in detail, the collocation system

$\displaystyle A(t_{ji})(Dx)^{\prime}(t_{ji})+B(t_{ji})x(t_{ji})$ $\displaystyle=q(t_{ji}),\quad i=1,\ldots,M,\quad j=1,\ldots,n,$ (23)

$\displaystyle G_{a}x(a)+G_{b}x(b)$ $\displaystyle=d.$ (24)

The system (23)-(24) for $x\in X_{\pi}$ becomes overdetermined since $X_{\pi}$ has dimension $mnN+k$, whereas the system consists of $mnM+l\geq mnN+mn+l\geq nmN+m+l>nmN+k+l\geq mnN+k$ scalar equations. (Footnote 3: If the DAE (1) has index $\mu=1$, then $l=k$, and hence also the choice $M=N$ makes sense. Then the system (23)-(24) for $x\in X_{\pi}$ is nothing else but the classical collocation approach, and the least-squares solution becomes the exact solution of the collocation system.) We refer to LMW for a detailed survey of classical collocation methods; here, however, we mainly focus on higher-index cases yielding overdetermined systems.
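To make the weighting matrices concrete, the following minimal sketch (Python/numpy, for illustration only; the actual computations in this paper are done in C++ with Eigen and SPQR) assembles $L^{C}$, $L^{I}$, and $L^{R}$ for Gauss-Legendre nodes on $[0,1]$. As a side observation, for Gauss-Legendre nodes the $M$-point rule integrates the degree-$(2M-2)$ products $l_{i}l_{\kappa}$ exactly, so $L^{R}$ then coincides with $L^{I}$; this exactness is also the mechanism behind Theorem 2.2(2) below.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def lagrange_mass_matrix(tau):
    # L^R from (22): L_{ik} = int_0^1 l_i(t) l_k(t) dt. The integrand has degree
    # 2M-2, so an M-point Gauss-Legendre rule evaluates the integral exactly.
    M = len(tau)
    x, w = leggauss(M)                      # rule on [-1,1], exact up to degree 2M-1
    x, w = 0.5 * (x + 1.0), 0.5 * w         # mapped to [0,1]
    def l(i, t):                            # Lagrange basis function l_i at points t
        num = np.prod([t - tau[k] for k in range(M) if k != i], axis=0)
        den = np.prod([tau[i] - tau[k] for k in range(M) if k != i])
        return num / den
    V = np.array([l(i, x) for i in range(M)])
    return V @ np.diag(w) @ V.T

M = 5
tau, gamma = leggauss(M)
tau, gamma = 0.5 * (tau + 1.0), 0.5 * gamma  # Gauss-Legendre nodes/weights on [0,1]

LC = np.eye(M) / M                           # L^C = M^{-1} I_M
LI = np.diag(gamma)                          # L^I = diag(gamma_1, ..., gamma_M)
LR = lagrange_mass_matrix(tau)               # L^R, symmetric positive definite

print(np.linalg.eigvalsh(LR).min() > 0.0)    # True: L^R is positive definite
print(np.allclose(LR, LI))                   # True for Gauss-Legendre nodes
```

For non-Gaussian node sequences the two matrices differ, and $L^{R}$ must be assembled as above.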
###### Remark 3

Based on collocation methods for index-1 DAEs, the first thought in HMTWW; HMT was to turn to the functional $\Phi^{C}_{\pi,M}$ with nodes $0<\tau_{1}<\cdots<\tau_{M}<1$. However, the use of the special discretized norm in these papers for providing convergence results is in essence already the use of the functional $\Phi^{R}_{\pi,M}$. For a general set of nodes (19), $\Phi^{C}_{\pi,M}$ represents a simple Riemann approximation of the corresponding integral, which has only first order of accuracy. If, however, the nodes are chosen as those of Chebyshev integration, the orders $1,\dots,7$ and $9$ can be obtained for the corresponding number $M$ of nodes (Hildebrand56, p. 349). The marking with the upper index $C$ now indicates that Chebyshev integration formulas are conceivable. As developed in (HaemHoff91, Section 7.5.2), integration formulas with uniform weights, i.e., Chebyshev formulas, are those where random errors in the function values have the least effect on the quadrature result. This makes these formulas very interesting in our context. However, although a lot of test calculations run well, we are not aware of convergence statements going along with $\Phi^{C}_{\pi,M}$ so far. ∎

###### Remark 4

The functional $\Phi^{R}_{\pi,M}$ gets its upper index $R$ from the restriction operator $R_{\pi,M}$ introduced in HM with nodes $0<\tau_{1}<\cdots<\tau_{M}<1$. Note that (HM, Theorem 2.3) generalizes convergence results from HMTWW; HMT to a large extent. Theorem 2.2 below even allows arbitrary nodes satisfying (19). ∎

###### Remark 5

The functional $\Phi^{I}_{\pi,M}$ has its upper index $I$ simply from the term integration formula. We will see first convergence results for $\Phi^{I}_{\pi,M}$ in Theorem 2.2 below. Intuitively, it seems reasonable to use a Gaussian quadrature rule for these purposes. However, it is not known whether such a rule is most robust against rounding errors and/or other choices within the overall process. ∎

###### Remark 6

Owing to the basic ansatz space $X_{\pi}$, our approximations are discontinuous, with possible jumps at the grid points in certain components. In this respect, it does not matter which of our functionals is selected. Since we always have overdetermined systems (23)-(24), it can no longer be expected that all components of the approximation are continuous, even in the case $\tau_{1}=0,\tau_{M}=1$. This is an important difference from the classical collocation methods for index-1 DAEs, which are based on classical uniquely solvable linear systems, e.g., LMW. ∎

###### Theorem 2.2

Let the DAE (1) be regular with index $\mu\in\mathbb{N}$ and let the boundary condition (2) be accurately stated. Let $x_{*}$ be a solution of the boundary value problem (1)–(2), and let $A,B,q$ and also $x_{*}$ be sufficiently smooth. Let all partitions $\pi$ be such that $h/h_{min}\leq\rho$, with a global constant $\rho$.
Then, with $M\geq N+\mu$, the following statements are true:

(1) For sufficiently fine partitions $\pi$ and each sequence of arbitrarily placed nodes (19), there exists exactly one $x_{\pi}^{R}\in X_{\pi}$ minimizing the functional $\Phi^{R}_{\pi,M}$ on $X_{\pi}$, and

$\displaystyle\|x_{\pi}^{R}-x_{\ast}\|_{H_{D}^{1}}\leq C_{R}h^{N-\mu+1}.$

(2) For each integration rule related to the interval $[0,1]$, with $M$ nodes (19) and positive weights $\gamma_{1},\ldots,\gamma_{M}$, which is exact for polynomials of degree less than or equal to $2M-2$, and sufficiently fine partitions $\pi$, there exists exactly one $x_{\pi}^{I}\in X_{\pi}$ minimizing the functional $\Phi^{I}_{\pi,M}$ on $X_{\pi}$, and $x_{\pi}^{I}=x_{\pi}^{R}$, thus

$\displaystyle\|x_{\pi}^{I}-x_{\ast}\|_{H_{D}^{1}}\leq C_{R}h^{N-\mu+1}.$

Since Gauss-Legendre and Gauss-Radau integration rules are exact for polynomials up to degree $2M-1$ and $2M-2$, respectively, with positive weights, they are well suited here, but Gauss-Lobatto rules do not meet the requirement of Theorem 2.2(2).

###### Proof

(1): In HM, additionally supposing $0<\tau_{1}<\cdots<\tau_{M}<1$, conditions are derived that ensure the existence and uniqueness of $x_{\pi}^{R}$ minimizing $\Phi^{R}_{\pi,M}$ on $X_{\pi}$. It is shown that $x_{\pi}^{R}$ has similar convergence properties as $x_{\pi}$ minimizing $\Phi$ on $X_{\pi}$; merely the constant $C_{R}$ is slightly larger than $C$ in (9). A further careful check of the proofs in HM shows that the assertion also holds true for $\tau_{1}=0$ and/or $\tau_{M}=1$, possibly with a larger constant $C_{R}$.

(2): For each arbitrary $x\in X_{\pi}$, the expression

$\displaystyle\theta_{j}(t):=\left|\sum_{i=1}^{M}l_{ji}(t)(A(t_{ji})(Dx)^{\prime}(t_{ji})+B(t_{ji})x(t_{ji})-q(t_{ji}))\right|^{2},\quad t\in(t_{j-1},t_{j}),$

shows that $\theta_{j}$ is a polynomial of degree less than or equal to $2M-2$, thus

$\displaystyle\int_{t_{j-1}}^{t_{j}}\theta_{j}(t)\,{\rm d}t=h_{j}\sum_{i=1}^{M}\gamma_{i}\theta_{j}(t_{ji})=h_{j}\sum_{i=1}^{M}\gamma_{i}\left|A(t_{ji})(Dx)^{\prime}(t_{ji})+B(t_{ji})x(t_{ji})-q(t_{ji})\right|^{2}.$

Therefore, it follows that $\Phi_{\pi,M}^{I}(x)=\Phi_{\pi,M}^{R}(x)$ for all $x\in X_{\pi}$, i.e., $\Phi_{\pi,M}^{I}$ coincides with the functional $\Phi_{\pi,M}^{R}$ having the same nodes. Eventually, (2) is a particular case of (1). ∎

As already emphasized above, until now we are aware of only sufficient convergence conditions; this applies, in particular, to the size of $M$. So far, applications often run well with $M=N+1$, and no significant difference from calculations with a larger $M$ was visible, e.g., (HMT, Section 6) and (HMTWW, Section 4). Also the experiments in Section 4 below are carried out with $M=N+1$. The following statement for $A$ and $B$ with polynomial entries allows choosing $M$ independently of the index $\mu$ and confirms the choice $M=N+1$ for constant $A$ and $B$.

###### Theorem 2.3

Let the DAE (1) be regular with index $\mu\in\mathbb{N}$ and let the boundary condition (2) be accurately stated. Let $x_{*}$ be a solution of the boundary value problem (1)–(2), and let $q$ and also $x_{*}$ be sufficiently smooth. Let the entries of $A$ and $B$ be polynomials with degree less than or equal to $N_{AB}$. Let all partitions $\pi$ be such that $h/h_{min}\leq\rho$, with a global constant $\rho$.
Then, with $M\geq N+1+N_{AB}$, the following statements are true:

(1) For sufficiently fine partitions $\pi$ and each sequence of arbitrarily placed nodes (19), there exists exactly one $x_{\pi}^{R}\in X_{\pi}$ minimizing the functional $\Phi^{R}_{\pi,M}$ on $X_{\pi}$, and

$\displaystyle\|x_{\pi}^{R}-x_{\ast}\|_{H_{D}^{1}}\leq C_{R}h^{N-\mu+1}.$

(2) For each integration rule of interpolation type related to the interval $[0,1]$, with $M$ nodes (19) and positive weights $\gamma_{1},\ldots,\gamma_{M}$, which is exact for polynomials of degree less than or equal to $2M-2$, and sufficiently fine partitions $\pi$, there exists exactly one $x_{\pi}^{I}\in X_{\pi}$ minimizing the functional $\Phi^{I}_{\pi,M}$ on $X_{\pi}$, and $x_{\pi}^{I}=x_{\pi}^{R}$, thus

$\displaystyle\|x_{\pi}^{I}-x_{\ast}\|_{H_{D}^{1}}\leq C_{R}h^{N-\mu+1}.$

(3) If $A$ and $B$ are even constant matrices, for sufficiently fine partitions $\pi$ and each sequence of arbitrarily placed nodes (19), there exists exactly one $x_{\pi}^{R}\in X_{\pi}$ minimizing the functional $\Phi^{R}_{\pi,M}$ on $X_{\pi}$, and

$\displaystyle\|x_{\pi}^{R}-x_{\ast}\|_{H_{D}^{1}}\leq C_{R}h^{\max\{0,N-\mu+1\}}.$

###### Proof

(1): This follows from (HM, Proposition 2.2(iv)) and (HMT, Theorem 4.1).

(2): As in the proof of the previous theorem, this is again a consequence of (1).

(3): The statement is a consequence of (HM, Proposition 2.2(iv)) and (HMT, Theorem 4.7). ∎

###### Remark 7

Observe a further interesting feature. Let $A$ and $B$ be constant matrices. Set $N=1$, $M=N+1$. Then, it holds that

$\displaystyle\Phi_{\pi,M}^{C}(x)=\Phi_{\pi,M}^{R}(x)=\Phi_{\pi,M}^{I}(x),\quad x\in X_{\pi},$

in which $\Phi_{\pi,M}^{I}$ is associated with the corresponding Gauss-Legendre or Gauss-Radau rules. This follows from the fact that the 2-point Chebyshev integration nodes are just the Gauss-Legendre nodes. We underline that, by Theorem 2.3(3), the approximate solutions stay bounded also for DAEs with larger index $\mu$; for instance, (HMTWW, Table 6) confirms this for an index-four Jordan DAE. ∎

Having in mind the implementation of such an overdetermined least-squares collocation, for a given partition $\pi$ and a given polynomial degree $N$, a number of parameters and options must be selected:

• basis functions for $X_{\pi}$;
• the number $M$ of collocation points and their location $0\leq\tau_{1}<\cdots<\tau_{M}\leq 1$;
• the setup and solution of the discrete least-squares problem.

Below we will discuss several issues in this context. The main aim is implementations that are as stable as possible, not necessarily best computational efficiency.

## 3 Collocation nodes, mass matrix and integration weights

### 3.1 Collocation nodes for $\Phi^{R}_{\pi,M}$

The functional $\Phi^{R}_{\pi,M}$ in (18) is based on polynomial interpolation using $M$ nodes (19). It seems reasonable to choose these nodes in such a way that, separately on each subinterval $[t_{j-1},t_{j}]$ of the partition, the interpolation error is as small as possible in a certain sense. Without loss of generality, we can restrict the matter to the interval $[0,1]$.
Consider functions $q\in C([0,1],\mathbb{R}^{m})$ and define the interpolation operator $R_{M}:C([0,1],\mathbb{R}^{m})\rightarrow C([0,1],\mathbb{R}^{m})$ by

$\displaystyle R_{M}q=\sum_{i=1}^{M}l_{i}q(\tau_{i}),$

with the Lagrange basis functions (20) such that $(R_{M}q)(\tau_{i})=q(\tau_{i})$, $i=1,\ldots,M$, and $R_{M}q\in Y_{M}$, where $Y_{M}\subset C([0,1],\mathbb{R}^{m})$ is the subspace of all functions whose components are polynomials up to degree $M-1$. Introducing $\omega(\tau)=(\tau-\tau_{1})(\tau-\tau_{2})\cdots(\tau-\tau_{M})$ and using componentwise the divided differences, we have the error representation, e.g., (HaemHoff91, Chapter 5),

$\displaystyle q(\tau)-(R_{M}q)(\tau)=\omega(\tau)\,q[\tau_{1},\ldots,\tau_{M},\tau].$

For smooth functions $q\in C^{M}([0,1],\mathbb{R}^{m})$ it follows that

$\displaystyle\lVert q-R_{M}q\rVert^{2}_{L^{2}}=\int_{0}^{1}\omega(\tau)^{2}\,\lvert q[\tau_{1},\ldots,\tau_{M},\tau]\rvert^{2}{\rm d}\tau\leq\int_{0}^{1}\omega(\tau)^{2}{\rm d}\tau\,\frac{m}{(M!)^{2}}\lVert q^{(M)}\rVert^{2}_{\infty}.$

For the evaluation of $\Phi_{\pi,M}^{R}$ (18), it seems reasonable to choose the collocation nodes in such a way that this expression is minimized for all functions $q\in C^{M}([0,1],\mathbb{R}^{m})$. The optimal set of nodes is determined by the condition

$\min_{0\leq\tau_{1}<\cdots<\tau_{M}\leq 1}\|\omega\|_{L^{2}(0,1)}.$

It is well known that this functional is minimized if the collocation nodes are chosen to be the Gauss-Legendre nodes (HaemHoff91, Chapters 7.5.1 and 4.5.4). On the other hand, the best polynomial approximation to a given function $q$ in the $L^{2}$-norm is obtained if the Fourier approximation with respect to the Legendre polynomials is computed. However, to the best of our knowledge, no estimates of the interpolation error in $L^{2}((0,1),\mathbb{R}^{m})$ are known. (Footnote 4: It holds that $\lVert R_{M}q\rVert_{\infty}\leq\max_{\tau\in[0,1]}\sum_{i=1}^{M}\lvert l_{i}(\tau)\rvert\lvert q(\tau_{i})\rvert\leq\Lambda_{M}\lVert q\rVert_{\infty}$, which means that the interpolation operator $R_{M}$ is bounded in $C([0,1],\mathbb{R}^{m})$, and the Lebesgue constant is a bound of the operator norm. In contrast, $R_{M}$ is unbounded in $L^{2}((0,1),\mathbb{R}^{m})$!) However, in the uniform norm and with arbitrary node sequences, for each $q\in C([0,1],\mathbb{R}^{m})$, the estimate $\|R_{M}q-q\|_{\infty}\leq(1+\Lambda_{M})\operatorname{dist}_{\infty}(q,Y_{M})$ holds true, where $\operatorname{dist}_{\infty}(q,Y_{M})=\min\{\|q-y\|_{\infty}\,|\,y\in Y_{M}\}$ and $\Lambda_{M}$ is the so-called Lebesgue constant defined by $\Lambda_{M}=\max_{\tau\in[0,1]}\sum_{i=1}^{M}|l_{i}(\tau)|$, in which $l_{i}$ are again the Lagrange basis functions (20). The Lebesgue constant $\Lambda_{M}^{L}$ for Gauss-Legendre nodes has the property $\Lambda_{M}^{L}=O(\sqrt{M})$. If instead Chebyshev nodes are used, the corresponding Lebesgue constant $\Lambda_{M}^{C}$ behaves like $\Lambda_{M}^{C}=O(\log M)$ ((FoSl94, p. 206) and the references therein). For uniform polynomial approximations, these nodes are known to be optimal (DeuflHohmann, Theorem 7.6). Table 1 shows some values for the Lebesgue constants. Note that the Lebesgue constants $\Lambda_{M}^{U}$ for equidistant nodes grow exponentially, see e.g. TrefWeid91. (Footnote 5: For each $M$, there is a set of interpolation nodes $\tau_{i}^{\ast}$ which minimizes the corresponding Lebesgue constant $\Lambda_{M}^{\ast}$. This constant is only slightly smaller than $\Lambda_{M}^{C}$ Ibrahimoglu16.)
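The Lebesgue constants collected in Table 1 below are easy to reproduce numerically. The following is a small sketch (Python/numpy; the maximum defining $\Lambda_{M}$ is approximated on a fine grid, and the standard formulas for the Chebyshev and uniform node sequences are used as stated assumptions):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def lebesgue_constant(tau, samples=10001):
    # Lambda_M = max over [0,1] of sum_i |l_i(t)|, approximated on a fine grid.
    t = np.linspace(0.0, 1.0, samples)
    M = len(tau)
    s = np.zeros_like(t)
    for i in range(M):
        num = np.prod([t - tau[k] for k in range(M) if k != i], axis=0)
        den = np.prod([tau[i] - tau[k] for k in range(M) if k != i])
        s += np.abs(num / den)
    return s.max()

M = 10
gl = 0.5 * (leggauss(M)[0] + 1.0)             # Gauss-Legendre nodes mapped to [0,1]
cheb = 0.5 * (np.cos((2.0 * np.arange(1, M + 1) - 1.0) * np.pi / (2.0 * M)) + 1.0)
uni = np.linspace(0.0, 1.0, M)                # uniform nodes including the boundaries
for name, tau in (("Chebyshev", cheb), ("Gauss-Legendre", gl), ("uniform", uni)):
    print(name, round(lebesgue_constant(np.sort(tau)), 3))
# expected, cf. the M=10 row of Table 1: about 2.429, 5.193, and 17.849
```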
Table 1: Lebesgue constants for Chebyshev nodes ($\Lambda_{M}^{C}$), Gauss-Legendre nodes ($\Lambda_{M}^{L}$), Gauss-Lobatto nodes ($\Lambda_{M}^{Lo}$), Gauss-Radau nodes ($\Lambda_{M}^{R}$), and uniform nodes including the boundaries ($\Lambda_{M}^{U}$) and without boundaries ($\Lambda_{M}^{O}$)

| $M$ | $\Lambda_{M}^{C}$ | $\Lambda_{M}^{L}$ | $\Lambda_{M}^{Lo}$ | $\Lambda_{M}^{R}$ | $\Lambda_{M}^{U}$ | $\Lambda_{M}^{O}$ |
|---|---|---|---|---|---|---|
| 5 | 1.989 | 3.322 | 1.636 | 4.035 | 2.708 | 10.375 |
| 10 | 2.429 | 5.193 | 2.121 | 6.348 | 17.849 | 204.734 |
| 15 | 2.687 | 6.649 | 2.386 | 8.126 | 283.211 | 5107.931 |
| 20 | 2.870 | 7.885 | 2.576 | 9.627 | 5889.584 | 138852.138 |

###### Remark 8 Computation of nodes and weights for Gauss-type integration formulae

In the following, we will make heavy use of Gauss-Legendre, Gauss-Radau, and Gauss-Lobatto integration nodes and their corresponding weights. Since we do not have them available in tabular form for large $M$ with sufficient accuracy, they will be computed on the fly. A severe concern is the accuracy of the nodes and weights. In the case of Gauss-Legendre integration rules, the computed nodes and weights have been provided by the GNU Scientific Library routine glfixed.c GSL09. It makes use of tabulated values for $M=1(1)20$, 32, 64, 96, 100, 128, 256, 512, 1024 with an accuracy of 27 digits. Other values are computed on the fly, with an accuracy being a small multiple of the machine rounding unit, using an adapted version of the Newton method. For computing the Gauss-Lobatto nodes and weights, the methods of Michels63 (using the Newton method) as well as Gautschi00b (a variant of the method in GolubWelsch69) have been implemented. Table 2 contains some comparisons to the tabulated values in Michels63, which have 20 digits. The method of Michels63 provides slightly more accurate values than that of Gautschi00b. Therefore, the former has been used further on. We did not find sufficiently accurate tabulated values for the Gauss-Radau nodes and weights. Therefore, the method of Gautschi00a has been implemented. We assume that the results obtained have an accuracy similar to the values for the Gauss-Lobatto nodes and weights using the method in Gautschi00b. ∎

Table 2: Accuracy of the computed nodes and weights of the Gauss-Lobatto integration rules. For each method, the absolute error of the nodes (A), the absolute error of the weights (B), and the maximum componentwise relative error of nodes (C) and weights (D) are shown. The machine accuracy (machine epsilon) is $2.22\times 10^{-16}$

| $M$ | Michels63: (A) | (B) | (C) | (D) | Gautschi00b: (A) | (B) | (C) | (D) |
|---|---|---|---|---|---|---|---|---|
| 6 | 1.11e-16 | 1.11e-16 | 4.73e-16 | 5.87e-16 | 1.11e-16 | 3.33e-16 | 4.73e-16 | 9.99e-15 |
| 12 | 1.11e-16 | 8.33e-17 | 2.01e-15 | 6.14e-16 | 3.33e-16 | 4.44e-16 | 6.04e-15 | 5.86e-14 |
| 24 | 0 | 3.47e-17 | 0 | 9.64e-16 | 2.22e-16 | 2.22e-16 | 8.37e-15 | 1.23e-13 |
| 48 | 1.11e-16 | 3.47e-17 | 3.41e-14 | 5.23e-15 | 3.33e-16 | 1.22e-15 | 1.71e-13 | 2.76e-12 |
| 96 | 5.55e-16 | 2.95e-17 | 4.73e-16 | 1.76e-14 | 4.44e-16 | 4.44e-16 | 2.76e-13 | 4.05e-12 |

### 3.2 The mass matrix

In the following, we will make extensive use of Legendre polynomials. For the reader's convenience, the necessary properties are collected in Appendix A.1. Let us turn to $\Phi_{\pi,M}^{R}$ (18) again. A critical ingredient for determining its properties is the mass matrix $L^{R}$ in (22). Denote as before by $l_{i}(\tau)$, $i=1,\ldots,M$, the Lagrange basis functions for the node sequence (19), that is, cf.
(20),

$l_{i}(\tau)=\frac{\prod_{\kappa\neq i}(\tau-\tau_{\kappa})}{\prod_{\kappa\neq i}(\tau_{i}-\tau_{\kappa})}.$

For evaluating $L^{R}$, we will use the normalized shifted Legendre polynomials $\hat{P}_{\nu}=(2\nu+1)^{1/2}\tilde{P}_{\nu}$ (cf. Appendix A.1). Assume the representation

$l_{i}(\tau)=\sum_{\nu=1}^{M}\alpha_{i\nu}\hat{P}_{\nu-1}(\tau).$ (25)

A short calculation shows

$L_{ij}^{R}=\sum_{\lambda=1}^{M}\alpha_{i\lambda}\alpha_{j\lambda}.$

Letting $a^{i}=(\alpha_{i1},\ldots,\alpha_{iM})^{T}$, we obtain $L_{ij}^{R}=(a^{i})^{T}a^{j}$. Collecting the vectors $a^{i}$ in a matrix $A=(a^{1},\ldots,a^{M})$, it holds that $L^{R}=A^{T}A$. The definition of the coefficients $\alpha_{i\nu}$ provides us with $\tilde{V}a^{i}=e^{i}$, where $e^{i}$ denotes the $i$-th unit vector and

$\tilde{V}=\left[\begin{array}[]{ccc}\hat{P}_{0}(\tau_{1})&\ldots&\hat{P}_{M-1}(\tau_{1})\\ \vdots&&\vdots\\ \hat{P}_{0}(\tau_{M})&\ldots&\hat{P}_{M-1}(\tau_{M})\end{array}\right].$ (26)

This gives $A=\tilde{V}^{-1}$. $V=\tilde{V}^{T}$ is a so-called Vandermonde-like matrix Gautschi83. It is nonsingular under the condition (19) (Stoer89, Theorem 3.6.11). In Gautschi83, representations and estimates of the condition number with respect to the Frobenius norm of such matrices are derived. In particular, (Gautschi83, Table 1) shows impressively small condition numbers if the collocation nodes are chosen to be the zeros of $\tilde{P}_{M}$, that is, the Gauss-Legendre nodes. Moreover, this condition number is optimal among all scalings of the Legendre polynomials Gautschi83. A consequence of the Christoffel-Darboux formula is that the rows of $\tilde{V}$ are orthogonal for Gauss-Legendre nodes. (Footnote 6: The Christoffel-Darboux formula for Legendre polynomials reads: If $i\neq\kappa$, then $\sum_{\nu=0}^{M-1}\hat{P}_{\nu}(\tau_{i})\hat{P}_{\nu}(\tau_{\kappa})=\frac{\mu_{M-1}}{\mu_{M}}\frac{\hat{P}_{M}(\tau_{i})\hat{P}_{M-1}(\tau_{\kappa})-\hat{P}_{M}(\tau_{\kappa})\hat{P}_{M-1}(\tau_{i})}{\tau_{i}-\tau_{\kappa}}$, where $\mu_{M}$ and $\mu_{M-1}$ are the leading coefficients of $\hat{P}_{M}$ and $\hat{P}_{M-1}$, respectively. For the Gauss-Legendre nodes, it holds that $\hat{P}_{M}(\tau_{i})=0$. Hence, the right-hand side vanishes.) Thus, we have the representation $\tilde{V}=\mathcal{D}U$ with an orthogonal matrix $U$ and a diagonal matrix $\mathcal{D}$ with positive diagonal entries. (Footnote 7: The diagonal elements $d_{i}$, $i=1,\ldots,M$, of $\mathcal{D}$ can be evaluated analytically using the Christoffel-Darboux formula again: $d_{i}=\sum_{\nu=0}^{M-1}\hat{P}_{\nu}^{2}(\tau_{i})=\frac{\mu_{M-1}}{\mu_{M}}\left(\hat{P}_{M}^{\prime}(\tau_{i})\hat{P}_{M-1}(\tau_{i})-\hat{P}_{M-1}^{\prime}(\tau_{i})\hat{P}_{M}(\tau_{i})\right)=\frac{\mu_{M-1}}{\mu_{M}}\hat{P}_{M}^{\prime}(\tau_{i})\hat{P}_{M-1}(\tau_{i}).$) It is known that the Gauss-Legendre nodes are not the very best set of nodes. However, a comparison of Tables 1 and 2 in Gautschi83 as well as (Gautschi11, Table 4) indicates that the gain of choosing optimal nodes for Legendre polynomials compared to the choice of Gauss-Legendre nodes is rather minor. In Table 3 we provide condition numbers of $\tilde{V}$ with respect to the Euclidean norm for different choices of nodes. Note that the condition number of $L^{R}$ is the square of that of $\tilde{V}$. The condition numbers for all Gauss-type and Chebyshev nodes are remarkably small.

Table 3: Spectral condition numbers of the Vandermonde-like matrices for different node choices.
The columns represent Gauss-Legendre nodes (GLe), Gauss-Radau nodes (GR), Gauss-Lobatto nodes (GLo), Chebyshev nodes (Ch), Newton-Cotes type nodes including the boundary (cNC) and without the boundary (oNC). An asterisk * indicates an overflow condition

| $M$ | GLe | GR | GLo | Ch | cNC | oNC |
|---|---|---|---|---|---|---|
| 5 | 1.55e+0 | 2.79e+0 | 3.23e+0 | 2.16e+0 | 3.76e+0 | 3.04e+0 |
| 10 | 2.11e+0 | 3.96e+0 | 4.28e+0 | 3.00e+0 | 2.39e+1 | 5.23e+1 |
| 15 | 2.57e+0 | 4.85e+0 | 5.11e+0 | 3.66e+0 | 3.98e+2 | 1.14e+3 |
| 20 | 2.94e+0 | 5.60e+0 | 5.83e+0 | 4.21e+0 | 8.62e+3 | 3.10e+4 |
| 50 | 4.62e+0 | 8.86e+0 | 9.00e+0 | 6.60e+0 | 1.13e+12 | 1.97e+13 |
| 100 | 6.52e+0 | 1.25e+1 | 1.26e+1 | 9.32e+0 | * | * |

### 3.3 Computation of quadrature weights for general $\Phi^{I}_{\pi,M}$

In order to apply $\Phi_{\pi,M}^{I}$ (17), a numerical quadrature formula is necessary. For the standard node sequences (Gauss-Legendre, Gauss-Lobatto, Gauss-Radau), the computation of the weights has been described above. For general node sequences, however, the weights must be evaluated. This can be done following the derivations in (Stoer89, p. 175): Let $\hat{P}_{\nu}(\tau)$ denote the normalized shifted Legendre polynomials as before (cf. Appendix A.1). In particular, it then holds that

$\int_{0}^{1}\hat{P}_{0}(\tau){\rm d}\tau=1,\quad\int_{0}^{1}\hat{P}_{\nu}(\tau){\rm d}\tau=0,\quad\nu=1,2,\ldots$

For a given function $q\in C[0,1]$, the integral is approximated by the integral of its polynomial interpolant. Using the representation (25) of the Lagrange basis functions, we obtain

$\displaystyle\int_{0}^{1}q(\tau)d\tau$ $\displaystyle\approx\int_{0}^{1}\sum_{i=1}^{M}q(\tau_{i})\sum_{\nu=1}^{M}\alpha_{i\nu}\hat{P}_{\nu-1}(\tau){\rm d}\tau$ $\displaystyle=\sum_{i=1}^{M}q(\tau_{i})\sum_{\nu=1}^{M}\alpha_{i\nu}\int_{0}^{1}\hat{P}_{\nu-1}(\tau){\rm d}\tau$ $\displaystyle=\sum_{i=1}^{M}q(\tau_{i})\alpha_{i1}.$

Consequently, the weights are given by $\gamma_{i}=\alpha_{i1}$, $i=1,\ldots,M$. The definition (25) shows that the vector $\gamma=(\gamma_{1},\ldots,\gamma_{M})^{T}$ of weights fulfills the linear system $V\gamma=e^{1}$, where $V=\tilde{V}^{T}$ with $\tilde{V}$ from (26) and $e^{1}=(1,0,\ldots,0)^{T}$ is the first unit vector (a small code sketch of this computation is given below). The discussion of the condition number of $V$ shows that we can expect reliable and accurate results, at least for reasonable node sequences. For general node sequences, the weights may become negative. This happens, for example, for uniformly distributed nodes and $M>7$ (Newton-Cotes formulae) (Stoer89, p. 148). Hence, for $\Phi_{\pi,M}^{I}$, only node sequences leading to positive quadrature weights $\gamma_{i}$ are admitted, in order to ensure that $L^{I}$ is positive definite.

## 4 Choice of basis functions for the ansatz space $X_{\pi}$

The ansatz space $X_{\pi}$ (7) consists of piecewise polynomials of degree $N-1$ for the algebraic components and of degree $N$ for the differentiated ones on each subinterval of the partition $\pi$ (6). For collocation methods for boundary value problems for ordinary differential equations, this question has led to the choice of a Runge-Kutta basis for stability reasons, see BaderAscher. This has later also been used successfully for boundary value problems for index-1 DAEs AscherSpiteri; KKPSW; LMW; KMPW10. However, this ansatz makes heavy use of the collocation nodes, which are at the same time used as the nodes for the Runge-Kutta basis.
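As announced above, the following is a minimal sketch of the weight computation of Section 3.3 via the system $V\gamma=e^{1}$ (Python/numpy; `legval` evaluates a Legendre series on $[-1,1]$, and the shift $\hat{P}_{\nu}(\tau)=\sqrt{2\nu+1}\,P_{\nu}(2\tau-1)$ is assumed to match the normalized shifted Legendre polynomials of Appendix A.1):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def shifted_legendre_vandermonde(tau):
    # tilde{V} from (26): tilde{V}[i, nu] = hatP_nu(tau_i), with the normalized
    # shifted Legendre polynomials hatP_nu(t) = sqrt(2 nu + 1) P_nu(2 t - 1).
    tau = np.asarray(tau, dtype=float)
    M = len(tau)
    x = 2.0 * tau - 1.0
    Vt = np.empty((M, M))
    for nu in range(M):
        coef = np.zeros(nu + 1)
        coef[nu] = 1.0
        Vt[:, nu] = np.sqrt(2.0 * nu + 1.0) * legval(x, coef)
    return Vt

def quadrature_weights(tau):
    # Solve V gamma = e^1 with V = tilde{V}^T; the weights are gamma_i = alpha_{i1}.
    Vt = shifted_legendre_vandermonde(tau)
    e1 = np.zeros(len(tau))
    e1[0] = 1.0
    return np.linalg.solve(Vt.T, e1)

print(quadrature_weights(np.linspace(0.0, 1.0, 3)))  # ~[1/6, 2/3, 1/6]: Simpson's rule
```

For three uniform nodes the computed weights are the Simpson weights, as expected; for more than seven uniform nodes, negative weights appear, which is exactly the situation excluded for $\Phi^{I}_{\pi,M}$ above.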
In our case, the number $M$ of collocation nodes and the degree $N$ of the polynomials for the differentiated components do not coincide since $M>N$, so that the reasoning applied in the case of ordinary differential equations does not transfer to the least-squares case. Taking into account the computational expense for solving the discretized system, bases with local support are preferable. Ideally, the support of each basis function consists of only one subinterval of (6). (Footnote 8: This excludes, for example, B-spline bases!) Note that the Runge-Kutta basis has this property. We consider the Runge-Kutta basis and further local bases with orthogonal polynomials. A drawback of this strategy is the fact that the continuity of the piecewise polynomials approximating the differentiated components must be ensured explicitly. This in turn will lead to a discrete least-squares problem with equality constraints. Details can be found in Appendix A.3.

Looking for a local basis, we turn to the reference interval $[0,1]$. Once a basis on this reference interval is available, it can be defined on any subinterval $(t_{j-1},t_{j})$ by a simple linear transformation. Assume that $\{p_{0},\ldots,p_{N-1}\}$ is a basis of the set of polynomials of degree less than $N$ defined on the reference interval $[0,1]$. Then, a basis $\{\bar{p}_{0},\ldots,\bar{p}_{N}\}$ for the ansatz functions for the differentiated components is given by

$\bar{p}_{i}(\rho)=\begin{cases}1,&i=0,\\ \int_{0}^{\rho}p_{i-1}(\sigma){\rm d}\sigma,&i=1,\ldots,N,\quad\rho\in[0,1],\end{cases}$ (27)

and the transformation to the interval $(t_{j-1},t_{j})$ of the partition $\pi$ (6) yields

$\displaystyle p_{ji}(t)$ $\displaystyle=p_{i}((t-t_{j-1})/h_{j}),$ $\displaystyle\bar{p}_{ji}(t)$ $\displaystyle=h_{j}\bar{p}_{i}((t-t_{j-1})/h_{j}).$ (28)

In addition to this transformation, the continuity of the piecewise polynomials must be ensured. This gives rise to the additional conditions

$\bar{p}_{ji}(t_{j})=\bar{p}_{j+1,i}(t_{j}),\quad i=1,\ldots,N,\quad j=1,\ldots,n-1,$ (29)

which must be imposed explicitly. (Footnote 9: This is in contrast to choices of basis functions that fulfill the continuity conditions by construction. An example of such basis functions is B-splines.)

### 4.1 The Runge-Kutta basis

In order to define the Runge-Kutta basis, let the $N$ interpolation points $\rho_{i}$ with (11) be given. Then, the Lagrange basis functions are chosen,

$p_{i}(\rho)=\frac{\prod_{\kappa\neq i+1}(\rho-\rho_{\kappa})}{\prod_{\kappa\neq i+1}(\rho_{i+1}-\rho_{\kappa})},\quad i=0,\ldots,N-1.$

###### Remark 9

Note that the interpolation nodes are only used to define the local basis functions. Thus, their selection is completely independent of the choice of collocation nodes. In view of the estimates (12) and (13) and the argumentation there, we prefer Gauss-Legendre interpolation nodes. This choice is also supported by Experiments 2 and 5 below. The numerical computation of $\bar{p}$ is more involved. If not precalculated, the integrals must be available in closed form. This can surely be done by expressing the Lagrange basis functions in the monomial representation such that the integration can be carried out analytically. Once these coefficients are known, the evaluation of the values of the basis functions at a given $\rho\in[0,1]$ is easily done using the Horner method. However, this approach amounts to the inversion of the Vandermonde matrix for the nodes (11). This matrix is known to be extremely ill-conditioned.
In particular, its condition number grows exponentially with $N$ GaIn88; Beckermann00. Therefore, an orthogonal basis might be better suited. This leads to a representation

$p_{i}(\rho)=\sum_{\kappa=1}^{N}\alpha_{i\kappa}Q_{\kappa}(\rho)$ (30)

for some polynomials $Q_{1},\ldots,Q_{N}$. If these polynomials fulfill a three-term recursion (Footnote 10: which they do if the polynomials are orthogonal with respect to some scalar product (DeuflHohmann, Theorem 6.2)), the evaluation of function values can be performed using the Clenshaw algorithm FoxParker68, which is only slightly more expensive than the Horner method. In order to use this approach, the integrals of $p_{0},\ldots,p_{N-1}$ must be easily representable in terms of the chosen basis. Here, the Legendre and Chebyshev polynomials are well suited (cf. Appendix A.1 and (34) as well as Appendix A.2 and (36) below).

### 4.2 Orthogonal polynomials

A reasonable choice for the basis is orthogonal polynomials. We will consider Legendre polynomials first. A motivation is provided in the following example.

###### Example 1

Consider the index-1 DAE

$x=q(t),\quad t\in[0,1].$

Let $\{\hat{P}_{0},\ldots,\hat{P}_{N-1}\}$ be the normalized shifted Legendre polynomials. Then, letting $x=\sum_{i=1}^{N}\alpha_{i}\hat{P}_{i-1}$ for some vector $\alpha=(\alpha_{1},\ldots,\alpha_{N})^{T}$, the least-squares functional

$\Phi(x)=\int_{0}^{1}(x(t)-q(t))^{2}{\rm d}t$

corresponding to this DAE is minimized for $\alpha=b$, where $b=(b_{1},\ldots,b_{N})^{T}$ with $b_{i}=\int_{0}^{1}q(t)\hat{P}_{i-1}(t){\rm d}t$, which is just the best approximation of the solution in $H_{D}^{1}((0,1),\mathbb{R})=L^{2}((0,1),\mathbb{R})$. Similar relations hold for the differential equation $x^{\prime}=f$ if the basis functions for the differentiated components are constructed according to (27). Hence, these basis functions seem to qualify well for index-1 DAEs. ∎

The necessary ingredients for the efficient implementation of the Legendre polynomials are collected in Appendix A.1. Another common choice are Chebyshev polynomials of the first kind. They have been used extensively in the context of spectral methods because of their excellent approximation properties, cf. Fornberg96; Trefethen00, see also DrHatr14. The relations used for their implementation can be found in Appendix A.2. (Footnote 11: Let us note in passing that the first routine for solving two-point boundary problems in the NAG library (NAG® is a registered trademark of The Numerical Algorithms Group) besides shooting methods was just a least-squares collocation method corresponding to $n=1$ and using a version of the functional $\Phi_{\pi,M}^{C}$. The NAG routine D02AFF and its predecessor D02TGF (and its driver D02JBF) use Chebyshev polynomials as basis functions and Gauss-Legendre nodes as collocation points Gladwell79; Albasiny78. This routine appeared as early as 1970 in Mark 8 of the library and survived to date (as of Mark 27 of 2019) NAG.)

### 4.3 Comparison of different basis representations

The choice of the basis function representations is dominated by the question of obtaining a most robust implementation. The computational complexity of the representations presented above does not differ much, so this aspect plays a minor role. The check for robustness can be subdivided into two questions:

1. Which representation is most robust locally?
2. Which representation is most robust globally?

In the following experiments, $N$ will be varied. The functional used is $\Phi_{\pi,M}^{R}$.
The number of collocation nodes is $M=N+1$. Table 3 motivates the choice of the Gauss-Legendre nodes as collocation nodes. In order to compute the norms of $L^{2}((0,1),\mathbb{R}^{m})$ and $H_{D}^{1}((0,1),\mathbb{R}^{m})$, Gaussian quadrature with $N+2$ integration nodes on each subinterval of $\pi$ is used.

#### Local behavior of the basis representations

In order to answer the first question, it is reasonable to experiment first with a higher-index example that does not have any dynamic components (that is, $l=0$) on a grid $\pi$ consisting of only one subinterval (that is, $n=1$). In that case, we check the ability to interpolate functions and to numerically differentiate them. For $n=1$, there are no continuity conditions (29) involved. Therefore, the discrete problem becomes a linear least-squares problem. We will solve it by a Householder QR-factorization with column pivoting as implemented in the Eigen library. The following example is used in HMTWW; HMT.

###### Example 2

$\displaystyle x^{\prime}_{2}(t)+x_{1}(t)$ $\displaystyle=q_{1}(t),$ $\displaystyle t\eta x^{\prime}_{2}(t)+x^{\prime}_{3}(t)+(\eta+1)x_{2}(t)$ $\displaystyle=q_{2}(t),$ $\displaystyle t\eta x_{2}(t)+x_{3}(t)$ $\displaystyle=q_{3}(t),\quad t\in[0,1].$

This is an index-3 example with dynamical degree of freedom $l=0$, so that no additional boundary or initial conditions are necessary for unique solvability. We choose the exact solution

$\displaystyle x_{\ast,1}(t)$ $\displaystyle=e^{-t}\sin t,$ $\displaystyle x_{\ast,2}(t)$ $\displaystyle=e^{-2t}\sin t,$ $\displaystyle x_{\ast,3}(t)$ $\displaystyle=e^{-t}\cos t$

and adapt the right-hand side $q$ accordingly. For the exact solution, it holds that $\|x_{\ast}\|_{L^{2}((0,1),\mathbb{R}^{3})}\approx 0.673$, $\|x_{\ast}\|_{L^{\infty}((0,1),\mathbb{R}^{3})}=1$, and $\|x_{\ast}\|_{H_{D}^{1}((0,1),\mathbb{R}^{3})}\approx 1.11$. ∎

###### Experiment 1 Robustness of the representation of the Runge-Kutta basis

In a first experiment, we intend to clarify the differences between different representations of the Runge-Kutta basis. The interpolation nodes (11) have been fixed to be the Gauss-Legendre nodes (cf. (12)). The Runge-Kutta basis has been represented with respect to the monomial, Legendre, and Chebyshev bases. The results are shown in Figure 1 (see appendix). This test indicates that the monomial basis is much less robust than the others for $N>10$, while the other representations behave very similarly. ∎

###### Experiment 2 Robustness of the Runge-Kutta basis with respect to the node sequence

In this experiment, we are interested in understanding the influence of the interpolation nodes. For that, we compared the uniform node sequence to the Gauss-Legendre and Chebyshev nodes. The uniform nodes are given by $\rho_{i}=(i-\frac{1}{2})/N$. In accordance with the results of the previous experiment, the representation of the Runge-Kutta basis in Legendre polynomials has been chosen. The results are shown in Figure 2. Not unexpectedly, uniform nodes are inferior to the other choices, at least for $N>13$. On the other hand, there is no significant difference between Gauss-Legendre and Chebyshev nodes. ∎

###### Experiment 3 Robustness of different polynomial representations

In this experiment, we intend to compare the robustness of different bases. Therefore, we have chosen the Runge-Kutta basis with Gauss-Legendre interpolation nodes, the Legendre polynomials, and the Chebyshev polynomials. The results are shown in Figure 3. All representations show similar behavior.
∎

A general note is in order. The exact solution has approximately the norm 1 in all used norms. The machine accuracy is $\varepsilon_{\textrm{mach}}\approx 2.22\times 10^{-16}$ in all computations. The best accuracy obtained is $10^{-12}$–$10^{-14}$. Considering that there is a twofold differentiation involved in the problem of the example, we would expect a much lower accuracy. This surprising behavior has also been observed in other experiments.

The next example is an index-3 one which has $l=4$ dynamical degrees of freedom. It is the linearized version of an example presented in CampbellMoore95 that has also been considered in HMT.

###### Example 3

Consider the DAE

$A(Dx(t))^{\prime}+B(t)x(t)=q(t),\quad t\in[0,5],$

where

$\displaystyle A=\begin{bmatrix}1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\\ 0&0&0&0&0&0\end{bmatrix},\quad D=\begin{bmatrix}1&0&0&0&0&0&0\\ 0&1&0&0&0&0&0\\ 0&0&1&0&0&0&0\\ 0&0&0&1&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&1&0\end{bmatrix},$

and the smooth coefficient matrix

$\displaystyle B(t)=\begin{bmatrix}0&0&0&-1&0&0&0\\ 0&0&0&0&-1&0&0\\ 0&0&0&0&0&-1&0\\ 0&0&\sin t&0&1&-\cos t&-2\rho\cos^{2}t\\ 0&0&-\cos t&-1&0&-\sin t&-2\rho\sin t\cos t\\ 0&0&1&0&0&0&2\rho\sin t\\ 2\rho\cos^{2}t&2\rho\sin t\cos t&-2\rho\sin t&0&0&0&0\end{bmatrix},\quad\rho=5,$

subject to the initial conditions

$x_{2}(0)=1,\quad x_{3}(0)=2,\quad x_{5}(0)=0,\quad x_{6}(0)=0.$

This problem has the tractability index 3 and dynamical degree of freedom $l=4$. The right-hand side $q$ has been chosen in such a way that the exact solution becomes

$\displaystyle x_{\ast,1}$ $\displaystyle=\sin t,$ $\displaystyle x_{\ast,4}$ $\displaystyle=\cos t,$ $\displaystyle x_{\ast,2}$ $\displaystyle=\cos t,$ $\displaystyle x_{\ast,5}$ $\displaystyle=-\sin t,$ $\displaystyle x_{\ast,3}$ $\displaystyle=2\cos^{2}t,$ $\displaystyle x_{\ast,6}$ $\displaystyle=-2\sin 2t,$ $\displaystyle x_{\ast,7}$ $\displaystyle=-\rho^{-1}\sin t.$

For the exact solution, it holds that $\|x_{\ast}\|_{L^{2}((0,5),\mathbb{R}^{7})}\approx 5.2$, $\|x_{\ast}\|_{L^{\infty}((0,5),\mathbb{R}^{7})}=2$, and $\|x_{\ast}\|_{H_{D}^{1}((0,5),\mathbb{R}^{7})}\approx 9.4$. ∎

The following experiments with Example 3 are carried out under the same conditions as before when using Example 2.

###### Experiment 4 Robustness of the representation of the Runge-Kutta basis

In this experiment, we intend to clarify the differences between different representations of the Runge-Kutta basis. The interpolation points have been fixed to be the Gauss-Legendre nodes. The Runge-Kutta basis has been represented with respect to the monomial, Legendre, and Chebyshev bases. The results are shown in Figure 4. This test indicates that the monomial basis is much less robust than the others for $N>15$, while the other representations behave very similarly. ∎

###### Experiment 5 Robustness of the Runge-Kutta basis with respect to the node sequence

In this experiment, we are interested in understanding the influence of the interpolation nodes. For that, we compared the uniform node sequence to the Gauss-Legendre and Chebyshev nodes. The uniform nodes are given by $\rho_{i}=(i-\frac{1}{2})/N$. In accordance with the results of the previous experiment, the representation of the Runge-Kutta basis in Legendre polynomials has been chosen. The results are shown in Figure 5. Not unexpectedly, uniform nodes are inferior to the other choices, at least for $N>20$. However, there is no real difference between Gauss-Legendre and Chebyshev nodes.
∎

###### Experiment 6 Robustness of different polynomial representations

In this experiment, we intend to compare the robustness of different bases. Therefore, we have chosen the Runge-Kutta basis with Gauss-Legendre interpolation nodes, the Legendre polynomials, and the Chebyshev polynomials. The results are shown in Figure 6. All representations show similar behavior. In conclusion, we can see that the results of Experiments 1-3 and 4-6 are largely consistent.

#### Global behavior of the basis representations

We are interested in understanding the global error, which corresponds to error propagation in the case of initial value problems. In order to understand the error propagation properties, we will investigate the accuracy of the computed solution with respect to an increasing number of subintervals $n$. This motivates using a rather low order $N$ of polynomials. In the previous section, we observed that there is no difference in the local properties between different basis representations for low degrees $N$ of the ansatz polynomials. In the following experiments, the functionals used are $\Phi_{\pi,M}^{R}$ and $\Phi_{\pi,M}^{C}$. The number of collocation nodes is again $M=N+1$. The basis functions are the shifted Legendre polynomials. The discrete problem for $n>1$ is an equality-constrained linear least-squares problem. The equality constraints consist just of the continuity requirements for the differentiated components of the elements in $X_{\pi}$. The problem is solved by a direct solution method as described in Section 5. In short, the equality constraints are eliminated by a sparse QR-decomposition with column pivoting as implemented in the code SPQR SPQR. The resulting least-squares problem has then been solved by the same code.

###### Experiment 7 Influence of selection of collocation nodes, approximation degree $N$, and number $n$ of subintervals

In this experiment, we use Example 3 and vary the choice of collocation nodes as well as the degree $N$ of the polynomial basis and the number $n$ of subintervals. We compare Gauss-Legendre, Radau IIA, and Lobatto collocation nodes. Since this example is a pure initial value problem, the use of the Radau IIA collocation nodes is especially justified. (Footnote 12: Such methods are well established in time-stepping procedures for ordinary initial value problems because of their stability properties. Radau IIA methods are also used for many DAEs with index $\mu\leq 2$ since the generated approximations on the grid points satisfy the obvious constraint.) The results using $\Phi_{\pi,M}^{R}$ and those using $\Phi_{\pi,M}^{C}$ are collected in the two corresponding tables. We observe no real difference between the different sets of collocation points. The results seem to confirm the conjecture that, in the case of smooth problems, a higher degree $N$ is preferable to a larger $n$ or, equivalently, a smaller stepsize $h$. In addition, for the highest-degree polynomials ($N=20$), the use of $\Phi_{\pi,M}^{C}$ seems to produce more accurate results than that of $\Phi_{\pi,M}^{R}$.
∎

## 5 The discrete least-squares problem

Once the basis has been chosen and the collocation conditions are selected, the discrete problems (18), (17), and (16) for a linear boundary value problem (1)-(2) lead to a constrained linear least-squares problem

$\varphi(c)=\|\mathcal{A}c-r\|_{\mathbb{R}^{nmM+l}}^{2}\rightarrow\min!$ (31)

under the constraint

$\mathcal{C}c=0.$ (32)

The equality constraints consist of the $k(n-1)$ continuity conditions for the approximations of the differentiated components, while the functional $\varphi(c)$ represents a reformulation of the functionals (18), (17), and (16), respectively. Here, $c\in\mathbb{R}^{n(mN+k)}$ is the vector of coefficients of the basis functions for $X_{\pi}$ disregarding the continuity conditions. Furthermore, it holds that $r\in\mathbb{R}^{nmM+l}$, $\mathcal{A}\in\mathbb{R}^{(nmM+l)\times n(mN+k)}$, and $\mathcal{C}\in\mathbb{R}^{(n-1)k\times n(mN+k)}$. The matrices $\mathcal{A}$ and $\mathcal{C}$ are very sparse. For details about their structure we refer to Appendix A.3.

### 5.1 Approaches to solve the constrained optimization problem (31)-(32)

A number of approaches to solve the constrained optimization problem (31)-(32) have been tested.

1. Direct method. The solution manifold of (32) forms a subspace which can be characterized by (Footnote 13: $\mathcal{C}$ has full row rank.)

$\mathcal{C}c=0\text{ if and only if }c=\tilde{\mathcal{C}}z\text{ for some }z\in\mathbb{R}^{nmN+k-l}.$

Here, $\tilde{\mathcal{C}}\in\mathbb{R}^{n(mN+k)\times(nmN+k-l)}$ has orthonormal columns. With this representation, the constrained minimization problem can be reduced to the unconstrained one

$\tilde{\varphi}(z)=\|\mathcal{A}\tilde{\mathcal{C}}z-r\|_{\mathbb{R}^{nmM+l}}^{2}\rightarrow\min!$

The implemented algorithm is that of BjorckGolub67, see also (Bjorck96, Section 5.1.2), which is sometimes called the direct elimination method.

2. Weighting of the constraints. In this approach, a sufficiently large parameter $\omega>0$ is chosen and the problem (31)-(32) is replaced by the free minimization problem

$\varphi_{\omega}(c)=\|\mathcal{A}c-r\|_{\mathbb{R}^{nmM+l}}^{2}+\omega^{2}\|\mathcal{C}c\|^{2}\rightarrow\min!$

It is known that the minimizer $c_{\omega}$ of $\varphi_{\omega}$ (Footnote 14: assuming a full-rank condition on $\mathcal{A}$!) converges towards the solution of (31)-(32) for $\omega\rightarrow\infty$ (cf. (GovLo89, Section 12.1.5)). Two different orderings of the equations have been implemented. One is

$\mathcal{G}=\left[\begin{array}[]{c}\omega\mathcal{C}\\ \mathcal{A}\end{array}\right],\quad\bar{r}=\left[\begin{array}[]{c}0\\ r\end{array}\right]$

while the other uses a block-bidiagonal structure as it is common for collocation methods for ODEs, cf. BaderAscher. It is known that the order of the equations in the weighting method may have a large impact on the accuracy of the solutions vLoan85. In our test examples, however, we did not observe a difference in the behavior of both orderings.

3. The direct solution method by eliminating the constraints often has the deficiency of generating a lot of fill-in in the intermediate matrices. An approach to overcome this situation has been proposed in vLoan85: the solutions of the weighting approach are iteratively enhanced by a defect correction process. This method is implemented in the form presented in Barlow92; BarlowVemu92. This form is called the deferred correction procedure for constrained least-squares problems by the authors. As a stopping criterion, the estimate (i) in (BarlowVemu92, p.
254) has been implemented. Additionally, a bound for the maximal number of iterations can be provided. Under reasonable conditions, at most 2 iterations should be sufficient for obtaining maximal (with respect to the sensitivity of the problem) accuracy for the discrete solution.

The results of the weighting method depend substantially on the choice of the parameter $\omega$. In order to have an accurate approximation of the exact solution $c_{\ast}$ of the problem (31)-(32), a large value of $\omega$ should be used (in the absence of rounding errors). However, if $\omega$ becomes too large, the algorithm may lack numerical stability. A discussion of this topic has been given in vLoan85. In particular, it turns out that the algorithm used for the QR decomposition and the pivoting strategies have a strong influence on the success of this method. In our implementation, we use the sparse QR implementation of SPQR. On the other hand, solving the discrete problem to an accuracy far beyond the approximation error of $x_{\pi}$ is not necessary. (Footnote 15: The Eigen library has its own implementation of a sparse QR factorization. The latter turned out to be very slow compared to SPQR.) Therefore, a number of experiments have been done in order to obtain some insight into what reasonable choices might be.

###### Experiment 8 Influence of the choice of the weighting parameter $\omega$

We use Example 3. Two sets of parameters are selected: (i) $N=5$, $n=160$ and (ii) $N=20$, $n=20$. The choice (i) corresponds to low-degree polynomials with a correspondingly large number of subintervals, while (ii) uses higher-degree polynomials with a correspondingly small number of subintervals. Both cases have been selected, guided by the corresponding table of Experiment 7, in such a way that a high accuracy can be obtained while at the same time having only a small influence of the problem conditioning. The other parameters chosen in this experiment are: $M=N+1$, Gauss-Legendre collocation nodes, and Legendre polynomials as basis functions. The error in dependence on $\omega$ is measured both with respect to the exact solution and with respect to a reference solution obtained by the direct solution method. The results are provided in Tables 4-5. The results for Example 4 below are quite similar. The results indicate that an optimal $\omega$ may vary considerably depending on the problem parameters. However, the accuracy against the exact solution is rather insensitive to $\omega$. ∎

Table 4: Influence of the parameter $\omega$ for the constraints in Example 3 using $N=5$ and $n=160$.
The error of the solution with respect to the exact solution (A) and with respect to a discrete reference solution obtained by a direct method (B) is given in the norms of $L^{2}((0,5),\mathbb{R}^{7})$, $L^{\infty}((0,5),\mathbb{R}^{7})$ and $H_{D}^{1}((0,5),\mathbb{R}^{7})$

| $\omega$ | (A) $L^{\infty}(0,5)$ | (A) $L^{2}(0,5)$ | (A) $H_{D}^{1}(0,5)$ | (B) $L^{\infty}(0,5)$ | (B) $L^{2}(0,5)$ | (B) $H_{D}^{1}(0,5)$ |
|---|---|---|---|---|---|---|
| 1e-09 | 2.25e+00 | 4.54e+00 | 9.37e+00 | 2.25e+00 | 4.54e+00 | 8.04e+00 |
| 1e-08 | 2.00e+00 | 4.59e+00 | 9.04e+00 | 2.00e+00 | 4.59e+00 | 9.04e+00 |
| 1e-07 | 3.55e-01 | 5.83e-01 | 1.05e+00 | 3.55e-01 | 5.83e-01 | 1.05e+00 |
| 1e-06 | 1.06e-05 | 1.66e-05 | 2.84e-05 | 1.06e-05 | 1.66e-05 | 2.84e-05 |
| 1e-05 | 1.02e-07 | 1.60e-07 | 3.07e-07 | 1.02e-07 | 1.60e-07 | 2.75e-07 |
| 1e-04 | 2.26e-08 | 1.49e-08 | 1.41e-07 | 5.51e-09 | 6.27e-09 | 3.51e-08 |
| 1e-03 | 2.26e-08 | 1.53e-08 | 1.41e-07 | 5.51e-09 | 7.09e-09 | 3.54e-08 |
| 1e-02 | 2.15e-08 | 1.39e-08 | 1.40e-07 | 4.44e-09 | 3.36e-09 | 3.31e-08 |
| 1e-01 | 2.13e-08 | 1.39e-08 | 1.40e-07 | 4.28e-09 | 3.28e-09 | 3.29e-08 |
| 1e+00 | 2.00e-08 | 1.31e-08 | 1.31e-07 | 2.99e-09 | 2.99e-09 | 2.29e-08 |
| 1e+01 | 1.73e-08 | 1.12e-08 | 1.13e-07 | 1.27e-10 | 7.51e-11 | 7.53e-10 |
| 1e+02 | 1.73e-08 | 1.12e-08 | 1.12e-07 | 3.64e-10 | 3.84e-11 | 3.85e-10 |
| 1e+03 | 1.73e-08 | 1.12e-08 | 1.12e-07 | 2.36e-09 | 3.05e-10 | 3.06e-09 |
| 1e+04 | 2.15e-08 | 1.15e-08 | 1.16e-07 | 1.82e-08 | 2.91e-09 | 2.92e-08 |
| 1e+05 | 1.18e-07 | 3.27e-08 | 3.28e-07 | 1.26e-07 | 3.18e-08 | 3.20e-07 |
| 1e+06 | 6.69e-06 | 5.08e-07 | 5.08e-06 | 6.68e-06 | 5.08e-07 | 5.09e-06 |
| 1e+07 | 6.28e-05 | 5.09e-06 | 5.09e-05 | 6.28e-05 | 5.09e-06 | 5.09e-05 |
| 1e+08 | 9.94e-05 | 2.82e-05 | 2.83e-04 | 9.94e-05 | 2.82e-05 | 2.83e-04 |
| 1e+09 | 3.33e+01 | 7.87e+00 | 7.91e+01 | 3.33e+01 | 7.87e+00 | 7.91e+01 |
| 1e+10 | 8.61e+01 | 5.91e+01 | 5.93e+02 | 8.61e+01 | 5.91e+01 | 5.93e+02 |

Table 5: Influence of the parameter $\omega$ for the constraints in Example 3 using $N=20$ and $n=20$.
The error of the solution with respect to the exact solution (A) and with respect to a discrete reference solution obtained by a direct method (B) is given in the norms of $L^{2}((0,5),\mathbb{R}^{7})$, $L^{\infty}((0,5),\mathbb{R}^{7})$ and $H_{D}^{1}((0,5),\mathbb{R}^{7})$

| $\omega$ | (A) $L^{\infty}(0,5)$ | (A) $L^{2}(0,5)$ | (A) $H_{D}^{1}(0,5)$ | (B) $L^{\infty}(0,5)$ | (B) $L^{2}(0,5)$ | (B) $H_{D}^{1}(0,5)$ |
|---|---|---|---|---|---|---|
| 1e-09 | 2.44e+00 | 4.91e+00 | 7.59e+00 | 2.44e+00 | 4.91e+00 | 7.59e+00 |
| 1e-08 | 4.40e-02 | 7.51e-02 | 1.31e-01 | 4.40e-02 | 7.51e-02 | 1.31e-01 |
| 1e-07 | 6.38e-08 | 9.91e-08 | 1.85e-07 | 6.38e-08 | 9.91e-08 | 1.85e-07 |
| 1e-06 | 1.35e-08 | 2.38e-08 | 3.80e-08 | 1.35e-08 | 2.38e-08 | 3.80e-08 |
| 1e-05 | 2.76e-09 | 3.68e-09 | 6.77e-09 | 2.76e-09 | 3.68e-09 | 6.77e-09 |
| 1e-04 | 1.86e-10 | 2.77e-10 | 5.11e-10 | 1.86e-10 | 2.77e-10 | 5.13e-10 |
| 1e-03 | 5.12e-11 | 1.59e-11 | 5.68e-11 | 4.59e-11 | 1.60e-11 | 6.23e-11 |
| 1e-02 | 2.49e-11 | 4.53e-12 | 4.29e-11 | 4.25e-11 | 5.62e-12 | 5.43e-11 |
| 1e-01 | 3.63e-11 | 4.57e-12 | 4.59e-11 | 5.97e-11 | 6.32e-12 | 6.35e-11 |
| 1e+00 | 6.01e-11 | 5.37e-12 | 5.40e-11 | 8.58e-11 | 7.61e-12 | 7.64e-11 |
| 1e+01 | 1.51e-10 | 1.64e-11 | 1.64e-10 | 1.53e-10 | 1.60e-11 | 1.61e-10 |
| 1e+02 | 4.67e-10 | 4.35e-11 | 4.37e-10 | 4.39e-10 | 4.14e-11 | 4.16e-10 |
| 1e+03 | 1.29e-08 | 8.11e-10 | 8.15e-09 | 1.29e-08 | 8.13e-10 | 8.17e-09 |
| 1e+04 | 1.50e-07 | 8.22e-09 | 8.26e-08 | 1.50e-07 | 8.22e-09 | 8.26e-08 |
| 1e+05 | 6.26e-07 | 4.26e-08 | 4.28e-07 | 6.26e-07 | 4.26e-08 | 4.28e-07 |
| 1e+06 | 1.10e-05 | 7.53e-07 | 7.57e-06 | 1.10e-05 | 7.53e-07 | 7.57e-06 |
| 1e+07 | 3.43e-05 | 3.17e-06 | 3.19e-05 | 3.43e-05 | 3.17e-06 | 3.19e-05 |
| 1e+08 | 1.85e-04 | 1.22e-05 | 1.23e-04 | 1.85e-04 | 1.22e-05 | 1.23e-04 |
| 1e+09 | 1.77e-05 | 3.69e-06 | 3.22e-05 | 1.77e-05 | 3.69e-06 | 3.22e-05 |
| 1e+10 | 6.74e+00 | 2.38e+00 | 1.47e+01 | 6.74e+00 | 2.38e+00 | 1.47e+01 |

The following example is a boundary value problem, in contrast to Example 3, which is an initial value problem.

###### Example 4

On the interval $[0,1]$, consider the DAE

$\left[\begin{array}[]{cccccc}1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0\end{array}\right]\frac{d}{dt}\left[\begin{array}[]{c}x_{1}\\ x_{2}\\ y_{1}\\ y_{2}\\ y_{3}\\ y_{4}\end{array}\right]+\left[\begin{array}[]{cccccc}0&-\lambda&0&0&0&0\\ -\lambda&0&0&0&0&0\\ -1&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\end{array}\right]\left[\begin{array}[]{c}x_{1}\\ x_{2}\\ y_{1}\\ y_{2}\\ y_{3}\\ y_{4}\end{array}\right]=\left[\begin{array}[]{c}0\\ 0\\ 0\\ 0\\ 0\\ 0\end{array}\right],\quad\lambda>0,$

subject to the boundary conditions

$x_{1}(0)=x_{1}(1)=1.$

This DAE can be brought into the proper form (1) by setting

$A=\left[\begin{array}[]{ccccc}1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1&0\\ 0&0&0&0&1\end{array}\right],\quad D=\left[\begin{array}[]{cccccc}1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0\end{array}\right],\quad B=\left[\begin{array}[]{cccccc}0&-\lambda&0&0&0&0\\ -\lambda&0&0&0&0&0\\ -1&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\end{array}\right].$

This DAE has the tractability index $\mu=4$ and dynamical degree of freedom $l=2$.
The solution reads

$\displaystyle x_{\ast,1}(t)$ $\displaystyle=\frac{e^{-\lambda t}(e^{\lambda}+e^{2\lambda t})}{1+e^{\lambda}}$ $\displaystyle x_{\ast,2}(t)$ $\displaystyle=\frac{e^{-\lambda t}(-e^{\lambda}+e^{2\lambda t})}{1+e^{\lambda}}$ $\displaystyle y_{\ast,1}(t)$ $\displaystyle=\frac{e^{-\lambda t}(e^{\lambda}+e^{2\lambda t})}{1+e^{\lambda}}$ $\displaystyle y_{\ast,2}(t)$ $\displaystyle=\lambda\frac{e^{-\lambda t}(-e^{\lambda}+e^{2\lambda t})}{1+e^{\lambda}}$ $\displaystyle y_{\ast,3}(t)$ $\displaystyle=\lambda^{2}\frac{e^{-\lambda t}(e^{\lambda}+e^{2\lambda t})}{1+e^{\lambda}}$ $\displaystyle y_{\ast,4}(t)$ $\displaystyle=\lambda^{3}\frac{e^{-\lambda t}(-e^{\lambda}+e^{2\lambda t})}{1+e^{\lambda}}$

∎

The iterative solver using defect corrections may overcome the difficulties connected with a suitable choice of the parameter $\omega$ in the weighting method. According to Experiment 8, we would expect the optimal $\omega$ to be in the order of magnitude $10^{-3}\ldots 10^{+2}$, with an optimum around $10^{-2}$. This is in contrast to the recommendations given in BarlowVemu92, where a choice of $\omega\approx\varepsilon_{\text{mach}}^{-1/3}$ is recommended for the deferred correction algorithm. We test the performance of the deferred correction solver in the next experiment. Here, the tolerance in the convergence check is set to $10^{-15}$. The iterations are considered not to converge if the convergence check has failed after two iterations.

###### Experiment 9

We check the performance of the deferred correction solver in dependence on the weight parameter $\omega$. Both Examples 3 and 4 are used. The results are presented in Tables 6-9. The results indicate that a larger value for $\omega$ seems to be preferable. ∎

Table 6: Influence of the parameter $\omega$ on the accuracy of the discrete solution for Example 3 using $N=5$ and $n=160$. The error of the solution with respect to the exact solution (A) and with respect to a discrete reference solution obtained by a direct method (B) is given in the norms of $L^{2}((0,5),\mathbb{R}^{7})$, $L^{\infty}((0,5),\mathbb{R}^{7})$ and $H_{D}^{1}((0,5),\mathbb{R}^{7})$. 2 iterations are applied

| $\omega$ | (A) $L^{\infty}(0,5)$ | (A) $L^{2}(0,5)$ | (A) $H_{D}^{1}(0,5)$ | (B) $L^{\infty}(0,5)$ | (B) $L^{2}(0,5)$ | (B) $H_{D}^{1}(0,5)$ |
|---|---|---|---|---|---|---|
| 0.01 (*) | 2.13e-08 | 1.39e-08 | 1.40e-07 | 4.30e-09 | 3.30e-09 | 3.31e-08 |
| 10 | 1.73e-08 | 1.12e-08 | 1.12e-07 | 5.43e-11 | 1.62e-11 | 1.63e-10 |
| $\varepsilon_{\text{mach}}^{-1/3}$ | 1.73e-08 | 1.12e-08 | 1.12e-07 | 5.42e-11 | 1.62e-11 | 1.63e-10 |

(*) Iteration did not converge.

Table 7: Influence of the parameter $\omega$ on the accuracy of the discrete solution for Example 3 using $N=20$ and $n=20$. The error of the solution with respect to the exact solution (A) and with respect to a discrete reference solution obtained by a direct method (B) is given in the norms of $L^{2}((0,5),\mathbb{R}^{7})$, $L^{\infty}((0,5),\mathbb{R}^{7})$ and $H_{D}^{1}((0,5),\mathbb{R}^{7})$.
2 iterations are applied

| (A) | (B)
---|---|---
$\omega$ | $L^{\infty}(0,5)$ | $L^{2}(0,5)$ | $H_{D}^{1}(0,5)$ | $L^{\infty}(0,5)$ | $L^{2}(0,5)$ | $H_{D}^{1}(0,5)$
0.01 (iteration did not converge) | 2.20e-11 | 3.35e-12 | 3.37e-11 | 6.25e-11 | 6.67e-12 | 6.70e-11
10 | 1.79e-11 | 1.98e-12 | 1.99e-11 | 6.26e-11 | 6.91e-12 | 6.95e-11
$\varepsilon_{\text{mach}}^{-1/3}$ | 1.11e-11 | 1.71e-12 | 1.72e-11 | 6.26e-11 | 6.94e-12 | 6.97e-11

Table 8: Influence of the parameter $\omega$ on the accuracy of the discrete solution for Example 4 using $N=20$ and $n=5$. The error of the solution with respect to the exact solution (A) and with respect to a discrete reference solution obtained by a direct method (B) is given in the norms of $L^{2}((0,5),\mathbb{R}^{6})$, $L^{\infty}((0,5),\mathbb{R}^{6})$ and $H_{D}^{1}((0,5),\mathbb{R}^{6})$. 2 iterations are applied

| (A) | (B)
---|---|---
$\omega$ | $L^{\infty}(0,5)$ | $L^{2}(0,5)$ | $H_{D}^{1}(0,5)$ | $L^{\infty}(0,5)$ | $L^{2}(0,5)$ | $H_{D}^{1}(0,5)$
0.01 | 8.25e-08 | 6.17e-09 | 8.72e-09 | 2.52e-06 | 1.55e-07 | 2.20e-07
10 | 2.73e-07 | 1.41e-08 | 2.00e-08 | 2.63e-06 | 1.61e-07 | 2.27e-07
$\varepsilon_{\text{mach}}^{-1/3}$ | 3.84e-09 | 3.61e-10 | 5.11e-10 | 2.45e-06 | 1.56e-07 | 2.20e-07

Table 9: Influence of the parameter $\omega$ on the accuracy of the discrete solution for Example 4 using $N=5$ and $n=20$. The error of the solution with respect to the exact solution (A) and with respect to a discrete reference solution obtained by a direct method (B) is given in the norms of $L^{2}((0,5),\mathbb{R}^{6})$, $L^{\infty}((0,5),\mathbb{R}^{6})$ and $H_{D}^{1}((0,5),\mathbb{R}^{6})$. 2 iterations are applied

| (A) | (B)
---|---|---
$\omega$ | $L^{\infty}(0,5)$ | $L^{2}(0,5)$ | $H_{D}^{1}(0,5)$ | $L^{\infty}(0,5)$ | $L^{2}(0,5)$ | $H_{D}^{1}(0,5)$
0.01 (iteration did not converge) | 1.41e-06 | 4.59e-07 | 6.49e-07 | 3.75e-08 | 4.23e-09 | 5.98e-09
10 | 1.39e-06 | 4.59e-07 | 6.49e-07 | 1.42e-08 | 2.63e-09 | 3.71e-09
$\varepsilon_{\text{mach}}^{-1/3}$ | 1.39e-06 | 4.59e-07 | 6.49e-07 | 1.71e-08 | 2.83e-09 | 4.00e-09

### 5.2 Performance of the linear solvers
In this section, we intend to provide some insight into the behavior of the linear solvers. This concerns the accuracy as well as the computational resources (computation time, memory consumption). All these data are highly implementation dependent. The hardware architecture also plays an important role. The linear solvers have been implemented using the standard strategy of subdividing them into a factorization step and a solve step. The price to pay is a larger memory consumption. However, their use in the context of, e.g., a modified Newton method may decrease the computation time considerably. The tests have been run on a Linux laptop Dell Latitude E5550. While the program is purely sequential, the MKL library may use shared-memory parallel versions of its BLAS and LAPACK routines. The CPU of the machine is an Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz providing two cores, each of them capable of hyperthreading. For the test runs, CPU throttling has been disabled such that all cores ran at roughly 3.2 GHz. The parameter for the weighting solver is $\omega=1$ while the corresponding parameter for the deferred correction solver is $\omega=\varepsilon_{\text{mach}}^{-1/3}\approx 1.65\times 10^{5}$. These parameters have been chosen since they seem to be best suited for the examples.
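To make the two solver variants concrete, the following NumPy sketch solves the constrained least-squares problem $\min\|\mathcal{A}c-r\|^{2}$ subject to $\mathcal{C}c=0$ by the weighting method and by a deferred-correction iteration, written here in augmented-Lagrangian form. This is a dense, illustrative sketch: the precise update of BarlowVemu92 differs in detail, and the actual implementation works with sparse QR factorizations (SPQR) instead of dense arrays.

```python
import numpy as np

def solve_weighted(A, r, C, omega):
    """Weighting method: one ordinary least-squares solve of the stacked
    system [A; omega*C] c ~ [r; 0]."""
    M = np.vstack([A, omega * C])
    rhs = np.concatenate([r, np.zeros(C.shape[0])])
    return np.linalg.lstsq(M, rhs, rcond=None)[0]

def solve_deferred(A, r, C, omega, tol=1e-15, max_iter=2):
    """Deferred correction: repeated weighted solves with a running
    multiplier estimate lam; at a fixed point, C c = 0 holds exactly.
    The stacked matrix is factorized once and reused in every sweep
    (assuming full column rank)."""
    M = np.vstack([A, omega * C])
    Q, R = np.linalg.qr(M)                       # factorize once
    lam = np.zeros(C.shape[0])
    c_old = None
    for _ in range(max_iter):
        rhs = np.concatenate([r, -lam / omega])  # shift constraint rows by lam
        c = np.linalg.solve(R, Q.T @ rhs)        # reuse the factors
        lam = lam + omega**2 * (C @ c)           # multiplier update
        if c_old is not None and np.linalg.norm(c - c_old) <= tol * np.linalg.norm(c):
            break
        c_old = c
    return c
```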
The test cases (combinations of $N$ and $n$) have been selected by choosing the best combinations in the tables for the functionals $\Phi_{\pi,M}^{R}$ and $\Phi_{\pi,M}^{C}$ at the end of this paper, respectively.

###### Experiment 10
First, we consider Example 3. For all values of $N$, $M=N+1$ Gauss-Legendre nodes have been used. The characteristics of the test cases using Legendre basis functions are provided in Table 10. Due to the special properties of the Legendre polynomials, the matrix $\mathcal{C}$ representing the constraints is extremely sparse, featuring only three nonzero elements per row. The computational results are shown in Table 11. In the next computations, the Chebyshev basis has been used, which leads to a slightly more occupied matrix $\mathcal{C}$. The results are provided in Tables 12 and 13.

Table 10: Case characteristics for Experiment 10 using the Legendre basis. The numbers of nonzero elements in the matrices $\mathcal{A}$ and $\mathcal{C}$ are provided as reported by the functions of the Eigen library. The columns denote: the number of rows of $\mathcal{A}$ (dimA), the number of rows of $\mathcal{C}$ (dimC), the number of unknowns (nun), the number of nonzero elements of $\mathcal{C}$ (nnzC), the number of nonzero elements of $\mathcal{A}$ (nnzA) for the functional $\Phi^{R}_{\pi,M}$ and $\Phi^{C}_{\pi,M}$, respectively

| | | | | | | $\Phi_{\pi,M}^{R}$ | $\Phi_{\pi,M}^{C}$
---|---|---|---|---|---|---|---|---
case | $N$ | $n$ | dimA | dimC | nun | nnzC | nnzA | nnzA
1 | 3 | 320 | 8964 | 1914 | 8640 | 5742 | 101124 | 101124
2 | 5 | 80 | 3364 | 474 | 3280 | 1422 | 58964 | 59044
3 | 10 | 5 | 389 | 24 | 380 | 72 | 12749 | 12334
4 | 20 | 5 | 739 | 24 | 730 | 72 | 47509 | 46534

Table 11: Computing times, permanent workspace needed, and error for the cases described in Table 10. The computing times are provided in milliseconds. They are the average of 100 runs of each case. The error is measured in the norm of $H^{1}_{D}((0,5),\mathbb{R}^{7})$.
The column headings denote: the upper bound on the number of nonzero elements of the $QR$-factors as reported by SPQR (nWork), the time for the matrix assembly (tass), the time for the factorization (tfact), and the time for the solution (tslv) for both functionals $\Phi^{R}_{\pi,M}$ and $\Phi^{C}_{\pi,M}$

| | $\Phi_{\pi,M}^{R}$ | $\Phi_{\pi,M}^{C}$
---|---|---
case | solver | nWork | tass | tfact | tslv | error | nWork | tass | tfact | tslv | error
1 | direct | 221829 | 12 | 156 | 4 | 6.74e-04 | 221829 | 12 | 158 | 4 | 6.44e-04
| weighted | 309438 | 13 | 17 | 6 | 1.15e-03 | 309438 | 13 | 19 | 6 | 6.93e-04
| deferred | 309438 | 14 | 18 | 16 | 6.74e-04 | 309438 | 13 | 18 | 17 | 6.44e-04
2 | direct | 115932 | 12 | 50 | 4 | 9.02e-07 | 116168 | 5 | 25 | 2 | 8.50e-07
| weighted | 155334 | 14 | 17 | 6 | 1.05e-06 | 155370 | 6 | 8 | 3 | 8.95e-07
| deferred | 155334 | 14 | 16 | 14 | 9.02e-07 | 155370 | 6 | 8 | 7 | 8.50e-07
3 | direct | 24233 | 2 | 4 | 1 | 8.80e-08 | 24967 | 1 | 2 | 0 | 6.59e-08
| weighted | 26810 | 2 | 3 | 1 | 9.62e-08 | 27028 | 1 | 1 | 0 | 8.00e-08
| deferred | 26810 | 2 | 3 | 2 | 8.80e-08 | 27028 | 1 | 1 | 1 | 6.59e-08
4 | direct | 90277 | 9 | 2 | 2 | 4.47e-12 | 90052 | 1 | 1 | 2 | 5.17e-12
| weighted | 96544 | 11 | 10 | 3 | 7.44e-12 | 97857 | 9 | 11 | 3 | 5.28e-12
| deferred | 96544 | 11 | 10 | 6 | 2.17e-12 | 97857 | 9 | 10 | 5 | 2.08e-12

Table 12: Case characteristics for Experiment 10 using the Chebyshev basis. The numbers of nonzero elements in the matrices $\mathcal{A}$ and $\mathcal{C}$ are provided as reported by the functions of the Eigen library. The columns denote: the number of rows of $\mathcal{A}$ (dimA), the number of rows of $\mathcal{C}$ (dimC), the number of unknowns (nun), the number of nonzero elements of $\mathcal{C}$ (nnzC), the number of nonzero elements of $\mathcal{A}$ (nnzA) for the functional $\Phi^{R}_{\pi,M}$ and $\Phi^{C}_{\pi,M}$, respectively

| | | | | | | $\Phi_{\pi,M}^{R}$ | $\Phi_{\pi,M}^{C}$
---|---|---|---|---|---|---|---|---
case | $N$ | $n$ | dimA | dimC | nun | nnzC | nnzA | nnzA
1 | 3 | 320 | 8964 | 1914 | 8640 | 7656 | 101128 | 101124
2 | 5 | 80 | 3364 | 474 | 3280 | 3318 | 58851 | 59056
3 | 10 | 5 | 389 | 24 | 380 | 360 | 12846 | 12626
4 | 20 | 5 | 739 | 24 | 730 | 720 | 47581 | 47191

Table 13: Computing times, permanent workspace needed, and error for the cases described in Table 12. The computing times are provided in milliseconds. They are the average of 100 runs of each case. The error is measured in the norm of $H^{1}_{D}((0,5),\mathbb{R}^{7})$.
The column headings denote: the upper bound on the number of nonzero elements of the $QR$-factors as reported by SPQR (nWork), the time for the matrix assembly (tass), the time for the factorization (tfact), and the time for the solution (tslv) for both functionals $\Phi^{R}_{\pi,M}$ and $\Phi^{C}_{\pi,M}$

| | $\Phi_{\pi,M}^{R}$ | $\Phi_{\pi,M}^{C}$
---|---|---
case | solver | nWork | tass | tfact | tslv | error | nWork | tass | tfact | tslv | error
1 | direct | 334564 | 12 | 161 | 6 | 6.74e-04 | 329266 | 15 | 163 | 6 | 6.44e-04
| weighted | 367514 | 14 | 21 | 8 | 1.15e-03 | 358591 | 15 | 23 | 8 | 6.93e-04
| deferred | 367514 | 13 | 21 | 21 | 6.74e-04 | 358591 | 15 | 22 | 22 | 6.44e-04
2 | direct | 231988 | 12 | 61 | 7 | 9.02e-07 | 231962 | 5 | 30 | 4 | 8.50e-07
| weighted | 204243 | 14 | 23 | 8 | 1.05e-06 | 201128 | 6 | 11 | 4 | 8.95e-07
| deferred | 204243 | 14 | 23 | 21 | 9.02e-07 | 201128 | 6 | 11 | 10 | 8.50e-07
3 | direct | 51343 | 2 | 7 | 1 | 8.80e-08 | 51565 | 2 | 7 | 1 | 6.59e-08
| weighted | 60861 | 2 | 5 | 1 | 9.62e-08 | 61376 | 2 | 5 | 1 | 8.00e-08
| deferred | 60861 | 3 | 5 | 3 | 8.80e-08 | 61376 | 2 | 5 | 3 | 6.59e-08
4 | direct | 208910 | 9 | 3 | 4 | 5.78e-12 | 230195 | 7 | 28 | 5 | 5.17e-12
| weighted | 164558 | 11 | 15 | 4 | 5.37e-12 | 167836 | 9 | 15 | 3 | 4.75e-12
| deferred | 164558 | 11 | 15 | 8 | 2.71e-12 | 167836 | 10 | 15 | 8 | 2.23e-12

The previous example is an initial value problem. This structure may have consequences on the performance of the linear solvers. Therefore, in the next experiment, we consider a boundary value problem.

###### Experiment 11
We repeat Experiment 10 with Example 4. The problem characteristics and computational results are provided in Tables 14 – 17. It should be noted that the deferred correction solver returned normally (tolerance as before $10^{-15}$) after at most two iterations in all cases. However, in some cases, the results are completely off. This happens, for example, in Tables 15 and 17, cases 1 and 2, for $\Phi_{\pi,M}^{C}$.

Table 14: Case characteristics for Experiment 11 using the Legendre basis. The numbers of nonzero elements in the matrices $\mathcal{A}$ and $\mathcal{C}$ are provided as reported by the functions of the Eigen library. The columns denote: the number of rows of $\mathcal{A}$ (dimA), the number of rows of $\mathcal{C}$ (dimC), the number of unknowns (nun), the number of nonzero elements of $\mathcal{C}$ (nnzC), the number of nonzero elements of $\mathcal{A}$ (nnzA) for the functional $\Phi^{R}_{\pi,M}$ and $\Phi^{C}_{\pi,M}$, respectively

| | | | | | | $\Phi_{\pi,M}^{R}$ | $\Phi_{\pi,M}^{C}$
---|---|---|---|---|---|---|---|---
case | $N$ | $n$ | dimA | dimC | nun | nnzC | nnzA | nnzA
1 | 4 | 320 | 9602 | 1595 | 9280 | 4785 | 86403 | 80643
2 | 5 | 160 | 5762 | 795 | 5600 | 1422 | 63363 | 63363
3 | 10 | 5 | 332 | 20 | 325 | 60 | 6933 | 6663
4 | 20 | 5 | 632 | 20 | 625 | 60 | 25793 | 25263

Table 15: Computing times, permanent workspace needed, and error for the cases described in Table 14. The computing times are provided in milliseconds. They are the average of 100 runs of each case. The error is measured in the norm of $H^{1}_{D}((0,1),\mathbb{R}^{6})$.
The column headings denote: the upper bound on the number of nonzero elements of the $QR$-factors as reported by SPQR (nWork), the time for the matrix assembly (tass), the time for the factorization (tfact), and the time for the solution (tslv) for both functionals $\Phi^{R}_{\pi,M}$ and $\Phi^{C}_{\pi,M}$

| | $\Phi_{\pi,M}^{R}$ | $\Phi_{\pi,M}^{C}$
---|---|---
case | solver | nWork | tass | tfact | tslv | error | nWork | tass | tfact | tslv | error
1 | direct | 437085 | 14 | 164 | 8 | 1.58e-04 | 397127 | 13 | 158 | 7 | 1.24e-04
| weighted | 235746 | 14 | 16 | 5 | 8.22e-05 | 341713 | 7 | 22 | 7 | 2.07e-05
| deferred | 235746 | 14 | 16 | 13 | 5.53e-02 | 341713 | 14 | 21 | 25 | 9.09e+02
2 | direct | 348742 | 17 | 124 | 12 | 2.58e-05 | 348742 | 15 | 123 | 12 | 1.57e-05
| weighted | 153062 | 9 | 9 | 3 | 9.29e-07 | 153062 | 9 | 9 | 3 | 7.75e-06
| deferred | 153062 | 10 | 9 | 8 | 1.38e-01 | 153062 | 9 | 10 | 10 | 1.47e-01
3 | direct | 11617 | 1 | 3 | 0 | 8.04e-10 | 12155 | 1 | 2 | 0 | 1.06e-09
| weighted | 12400 | 2 | 2 | 1 | 1.26e-09 | 12141 | 1 | 1 | 0 | 5.52e-09
| deferred | 12400 | 2 | 2 | 1 | 4.18e-11 | 12141 | 1 | 1 | 1 | 5.08e-09
4 | direct | 46847 | 6 | 9 | 1 | 7.24e-08 | 46883 | 2 | 4 | 7 | 3.54e-07
| weighted | 42947 | 7 | 6 | 2 | 1.42e-07 | 42859 | 3 | 3 | 1 | 1.71e-07
| deferred | 42947 | 6 | 6 | 4 | 5.27e-09 | 42859 | 3 | 3 | 2 | 1.51e-07

Table 16: Case characteristics for Experiment 11 using the Chebyshev basis. The numbers of nonzero elements in the matrices $\mathcal{A}$ and $\mathcal{C}$ are provided as reported by the functions of the Eigen library. The columns denote: the number of rows of $\mathcal{A}$ (dimA), the number of rows of $\mathcal{C}$ (dimC), the number of unknowns (nun), the number of nonzero elements of $\mathcal{C}$ (nnzC), the number of nonzero elements of $\mathcal{A}$ (nnzA) for the functional $\Phi^{R}_{\pi,M}$ and $\Phi^{C}_{\pi,M}$, respectively

| | | | | | | $\Phi_{\pi,M}^{R}$ | $\Phi_{\pi,M}^{C}$
---|---|---|---|---|---|---|---|---
case | $N$ | $n$ | dimA | dimC | nun | nnzC | nnzA | nnzA
1 | 4 | 320 | 9602 | 1595 | 9280 | 7656 | 86406 | 82566
2 | 5 | 160 | 5762 | 795 | 5600 | 5565 | 63367 | 63367
3 | 10 | 5 | 332 | 20 | 325 | 300 | 6945 | 6795
4 | 20 | 5 | 632 | 20 | 625 | 600 | 25830 | 25560

Table 17: Computing times, permanent workspace needed, and error for the cases described in Table 16. The computing times are provided in milliseconds. They are the average of 100 runs of each case. The error is measured in the norm of $H^{1}_{D}((0,1),\mathbb{R}^{6})$.
The column headings denote: the upper bound on the number of nonzero elements of the $QR$-factors as reported by SPQR (nWork), the time for the matrix assembly (tass), the time for the factorization (tfact), and the time for the solution (tslv) for both functionals $\Phi^{R}_{\pi,M}$ and $\Phi^{C}_{\pi,M}$

| | $\Phi_{\pi,M}^{R}$ | $\Phi_{\pi,M}^{C}$
---|---|---
case | solver | nWork | tass | tfact | tslv | error | nWork | tass | tfact | tslv | error
1 | direct | 796757 | 27 | 360 | 28 | 6.77e-05 | 807507 | 26 | 363 | 29 | 1.77e-04
| weighted | 502962 | 16 | 28 | 11 | 1.90e-06 | 471966 | 16 | 29 | 11 | 1.11e-05
| deferred | 502962 | 15 | 28 | 27 | 2.33e-07 | 471966 | 15 | 29 | 35 | 1.19e+02
2 | direct | 513054 | 17 | 143 | 16 | 3.59e-05 | 513054 | 15 | 143 | 17 | 2.37e-05
| weighted | 347439 | 10 | 19 | 7 | 8.73e-07 | 347439 | 9 | 19 | 7 | 4.71e-06
| deferred | Solver failed | 347439 | 9 | 20 | 25 | 5.07e+02
3 | direct | 29347 | 2 | 4 | 1 | 2.69e-09 | 30843 | 1 | 3 | 1 | 1.40e-09
| weighted | 25392 | 2 | 3 | 1 | 5.10e-10 | 26984 | 1 | 1 | 0 | 8.52e-10
| deferred | 25392 | 2 | 2 | 2 | 4.41e-11 | 26984 | 1 | 1 | 1 | 1.22e-09
4 | direct | 122665 | 6 | 16 | 3 | 6.70e-08 | 148882 | 5 | 18 | 4 | 6.68e-07
| weighted | 109429 | 7 | 10 | 3 | 5.22e-08 | 109345 | 6 | 10 | 3 | 5.43e-08
| deferred | 109429 | 7 | 11 | 7 | 6.09e-11 | 109345 | 6 | 11 | 7 | 2.62e-09

It should be noted that a considerable amount of memory for the QR-factorizations is consumed by the internal representation of the Q-factor in SPQR. This can be avoided if the factorization and solution steps are interwoven.

### 5.3 Sensitivity of boundary condition weighting
As is already known for boundary value problems for ODEs and index-1 DAEs, a special problem is the scaling of the boundary conditions, and hence, here, the inclusion of the boundary conditions (2). Their scaling is independent of the scaling of the DAE (1). Therefore, it seems reasonable to provide an additional possibility for the scaling of the boundary conditions. We decided to enable this by introducing an additional parameter $\alpha$ to be chosen by the user. So, $\Phi$ from (8) is replaced by the functional
$\tilde{\Phi}(x)=\int_{a}^{b}|A(t)(Dx)^{\prime}(t)+B(t)x(t)-q(t)|^{2}{\rm dt}+\alpha|G_{a}x(a)+G_{b}x(b)-d|^{2}.$
Analogously, the discretized versions $\Phi_{\pi,M}^{R}$, $\Phi_{\pi,M}^{I}$ and $\Phi_{\pi,M}^{C}$ are replaced by their counterparts $\tilde{\Phi}_{\pi,M}^{R}$, $\tilde{\Phi}_{\pi,M}^{I}$ and $\tilde{\Phi}_{\pi,M}^{C}$ with weighted boundary conditions. The convergence theorems will hold true for these modifications of the functional, too.

###### Experiment 12
Influence of $\alpha$ on the accuracy. We use the example and settings of Experiment 8. The results are provided in Table 18. ∎

Table 18: Influence of the weight parameter $\alpha$ for the boundary conditions in Example 3.
The error of the solution is given in the norms of $L^{2}((0,5),\mathbb{R}^{7})$, $L^{\infty}((0,5),\mathbb{R}^{7})$ and $H_{D}^{1}((0,5),\mathbb{R}^{7})$

| $N=5$, $n=160$ | $N=20$, $n=20$
---|---|---
$\alpha$ | $L^{\infty}(0,5)$ | $L^{2}(0,5)$ | $H_{D}^{1}(0,5)$ | $L^{\infty}(0,5)$ | $L^{2}(0,5)$ | $H_{D}^{1}(0,5)$
1e-10 | 3.18e+00 | 7.03e+00 | 1.21e+01 | 1.60e+00 | 3.10e+00 | 5.09e+00
1e-09 | 9.33e-07 | 2.33e-06 | 3.84e-06 | 1.60e+00 | 3.10e+00 | 5.09e+00
1e-08 | 1.58e-07 | 3.52e-07 | 6.16e-07 | 1.05e-07 | 1.94e-07 | 3.54e-07
1e-07 | 1.27e-07 | 1.39e-08 | 3.26e-08 | 5.06e-09 | 1.10e-08 | 2.00e-08
1e-06 | 7.17e-08 | 2.20e-09 | 1.68e-08 | 9.60e-10 | 2.29e-09 | 4.10e-09
1e-05 | 9.60e-08 | 1.59e-09 | 1.58e-08 | 7.64e-11 | 2.07e-10 | 3.80e-10
1e-04 | 6.99e-08 | 1.59e-09 | 1.60e-08 | 5.00e-11 | 4.07e-11 | 9.26e-11
1e-03 | 9.83e-08 | 1.82e-09 | 1.83e-08 | 3.91e-11 | 6.41e-12 | 5.46e-11
1e-02 | 1.15e-07 | 2.28e-09 | 2.29e-08 | 6.37e-11 | 6.26e-12 | 6.25e-11
1e-01 | 6.43e-08 | 1.27e-09 | 1.27e-08 | 5.11e-11 | 6.61e-12 | 6.64e-11
1e+00 | 6.04e-08 | 1.13e-09 | 1.13e-08 | 6.66e-11 | 7.50e-12 | 7.54e-11
1e+01 | 2.15e-07 | 3.40e-09 | 3.42e-08 | 7.97e-11 | 9.85e-12 | 9.89e-11
1e+02 | 4.12e-07 | 5.66e-09 | 5.68e-08 | 6.78e-11 | 8.10e-12 | 8.14e-11
1e+03 | 4.51e-06 | 5.74e-08 | 5.76e-07 | 9.60e-11 | 9.81e-12 | 9.85e-11
1e+04 | 2.31e-05 | 2.93e-07 | 2.95e-06 | 2.24e-09 | 1.52e-10 | 1.52e-09
1e+05 | 4.68e-04 | 5.94e-06 | 5.97e-05 | 2.91e-08 | 1.35e-09 | 1.36e-08
1e+06 | 2.12e+03 | 5.16e+01 | 5.19e+02 | 2.34e-07 | 1.68e-08 | 1.68e-07
1e+07 | 6.53e+03 | 1.03e+02 | 1.04e+03 | 2.97e-06 | 1.77e-07 | 1.77e-06
1e+08 | 4.60e+02 | 1.78e+01 | 1.79e+02 | 4.76e-06 | 3.72e-07 | 3.73e-06
1e+09 | 2.05e+01 | 3.27e+00 | 3.24e+01 | 4.56e+01 | 4.90e+00 | 4.91e+01

###### Experiment 13
Influence of $\alpha$ on the accuracy. We repeat the previous experiment with Example 4. The discretization parameters are (i) $N=5$, $n=20$ and (ii) $N=20$, $n=5$. All other settings correspond to those of Experiment 12. The results are presented in Table 19. ∎

Table 19: Influence of the weight parameter $\alpha$ for the boundary conditions in Example 4.
The error of the solution is given in the norms of $L^{2}((0,1),\mathbb{R}^{6})$, $L^{\infty}((0,1),\mathbb{R}^{6})$ and $H_{D}^{1}((0,1),\mathbb{R}^{6})$

| $N=5$, $n=20$ | $N=20$, $n=5$
---|---|---
$\alpha$ | $L^{\infty}(0,1)$ | $L^{2}(0,1)$ | $H_{D}^{1}(0,1)$ | $L^{\infty}(0,1)$ | $L^{2}(0,1)$ | $H_{D}^{1}(0,1)$
1e-10 | 4.21e-02 | 7.02e-02 | 9.13e-02 | 1.03e-06 | 8.55e-08 | 1.21e-07
1e-09 | 4.46e-04 | 7.38e-04 | 9.60e-04 | 1.00e-06 | 6.11e-08 | 8.64e-08
1e-08 | 4.40e-06 | 6.71e-06 | 8.80e-06 | 1.14e-06 | 6.48e-08 | 9.16e-08
1e-07 | 1.47e-06 | 4.87e-07 | 6.88e-07 | 9.84e-07 | 6.02e-08 | 8.51e-08
1e-06 | 1.39e-06 | 4.59e-07 | 6.49e-07 | 1.67e-06 | 1.10e-07 | 1.56e-07
1e-05 | 1.40e-06 | 4.59e-07 | 6.49e-07 | 1.19e-06 | 8.21e-08 | 1.16e-07
1e-04 | 1.40e-06 | 4.59e-07 | 6.49e-07 | 8.55e-07 | 6.48e-08 | 9.17e-08
1e-03 | 1.40e-06 | 4.59e-07 | 6.49e-07 | 1.44e-06 | 1.04e-07 | 1.47e-07
1e-02 | 1.40e-06 | 4.59e-07 | 6.49e-07 | 5.14e-07 | 4.77e-08 | 6.75e-08
1e-01 | 1.40e-06 | 4.59e-07 | 6.49e-07 | 1.69e-06 | 8.49e-08 | 1.20e-07
1e+00 | 1.40e-06 | 4.59e-07 | 6.49e-07 | 2.45e-06 | 1.56e-07 | 2.20e-07
1e+01 | 1.40e-06 | 4.59e-07 | 6.49e-07 | 1.83e-06 | 1.09e-07 | 1.54e-07
1e+02 | 1.40e-06 | 4.59e-07 | 6.49e-07 | 1.91e-05 | 8.14e-07 | 1.15e-06
1e+03 | 1.40e-06 | 4.59e-07 | 6.49e-07 | 1.40e-04 | 1.10e-06 | 1.55e-06
1e+04 | 1.41e-06 | 4.59e-07 | 6.49e-07 | 1.27e-03 | 5.34e-05 | 7.56e-05
1e+05 | 1.39e-06 | 4.59e-07 | 6.49e-07 | 3.69e-04 | 1.94e-05 | 2.75e-05
1e+06 | 1.63e-06 | 4.66e-07 | 6.59e-07 | 3.98e-04 | 3.42e-05 | 4.83e-05
1e+07 | 1.99e+02 | 5.07e+01 | 7.18e+01 | 2.11e-03 | 3.53e-04 | 4.99e-04
1e+08 | 1.99e+02 | 5.07e+01 | 7.18e+01 | 1.22e-01 | 2.83e-02 | 4.01e-02
1e+09 | 1.99e+02 | 5.07e+01 | 7.18e+01 | 4.86e-01 | 2.05e-01 | 2.90e-01

The results of Experiments 12 and 13 indicate that the final accuracy is rather insensitive to the choice of $\alpha$. It should be noted that the coefficient matrices in Examples 3 and 4 are well-scaled.

## 6 Final remarks and conclusions
In summary, in the present paper, we investigated questions related to an efficient and reliable realization of a least-squares collocation method. These questions are particularly important since a higher-index DAE is an essentially ill-posed problem in naturally given spaces, which is why we must be prepared for highly sensitive discrete problems. In order to obtain an overall procedure that is as robust as possible, we provided criteria which led to a robust selection of the collocation points and of the basis functions, whereby the latter is also useful for the shape of the resulting discrete problem. Additionally, a number of new, more detailed error estimates have been given that support some of the design decisions. The following particular items are worth highlighting in this context:

* The basis for the approximation space should be appropriately shifted and scaled orthogonal polynomials. We could not observe any major differences between the behavior of Legendre and Chebyshev polynomials.
* The collocation points should be chosen to be the Gauss-Legendre, Lobatto, or Radau nodes. This leads to discrete problems whose conditioning using the discretization by interpolation ($\Phi_{\pi,M}^{R}$) is not much worse than that resembling collocation methods for ordinary differential equations ($\Phi_{\pi,M}^{C}$). A particularly efficient and stable implementation is obtained if Gauss-Legendre or Radau nodes are used since, in this case, diagonal weighting ($\Phi_{\pi,M}^{I}$) coincides with the interpolation approach.
* A critical ingredient for the implementation of the method is the algorithm used for the solution of the constrained linear least-squares problems. Given the expected bad conditioning of the least-squares problem, a QR-factorization with column pivoting must lie at the heart of the algorithm. At the same time, the sparsity structure must be exploited as well as possible. In our tests, the direct solver seems to be the most robust one. With respect to efficiency and accuracy, the deferred correction solver is preferable. However, it failed in certain tests.
* It seems as if, for problems with a smooth solution, a higher degree $N$ of the ansatz polynomials with a low number of subintervals $n$ in the mesh is preferable over a smaller degree with a larger number of subintervals with respect to accuracy. Some first theoretical justification has been provided for this claim.
* So far, in all experiments of this and previously published papers, we did not observe any serious differences in the accuracy obtained in dependence on the choice of $M>N$ for fixed $n$. The results for $M=N+1$ are not much different from those obtained for a larger $M$.
* While superconvergence in classical collocation for ODEs and index-1 DAEs is a very favorable phenomenon, we could not find anything analogous in all our experiments.
* The simple collocation procedure using $\Phi_{\pi,M}^{C}$ performs surprisingly well. In fact, the results are, in our experiments, on par with those using $\Phi_{\pi,M}^{R}=\Phi_{\pi,M}^{I}$. However, we have no theoretical justification for this as yet.
* Our method is designed for variable grids. However, so far we have only worked with constant step sizes. In order to be able to adapt the grid and the polynomial degree, or even select appropriate grids, it is important to understand the structure of the error, that is, how the global error depends on local errors. This is a very important open problem, for which we have no solution yet.

In conclusion, we note that earlier implementations, among others the one from the very first paper in this matter HMTWW, which started from proven ingredients for ODE codes, are, from today's point of view and experience, a rather poor realization of the least-squares collocation. Nevertheless, the test results calculated with them were already very impressive. This strengthens our belief that a careful implementation of the method gives rise to a very efficient solver for higher-index DAEs.

## Appendix A Some facts about classical orthogonal polynomials
In the derivations, classical orthogonal polynomials have been heavily used. For the reader's convenience, important properties are collected below.

### A.1 Legendre Polynomials
The Legendre polynomials $P_{\nu}$, $\nu=0,1,\ldots$, are defined by the recurrence relation
$\displaystyle P_{0}(\tau)$ $\displaystyle=1,$
$\displaystyle P_{1}(\tau)$ $\displaystyle=\tau,$ (33)
$\displaystyle(\nu+1)P_{\nu+1}(\tau)$ $\displaystyle=(2\nu+1)\tau P_{\nu}(\tau)-\nu P_{\nu-1}(\tau),\quad\nu=1,2,\ldots.$
Some properties of the Legendre polynomials are
1. $P_{\nu}(-1)=(-1)^{\nu}$, $P_{\nu}(1)=1,\quad\nu=0,1,\ldots$,
2. $\int_{-1}^{1}P_{0}(\tau)\,d\tau=2$, $\int_{-1}^{1}P_{\nu}(\tau)\,d\tau=0,\quad\nu=1,2,\ldots$,
3. $\int_{-1}^{1}P_{\nu}(\tau)P_{\mu}(\tau)d\tau=\frac{2}{2\nu+1}\delta_{\nu\mu},\quad\nu,\mu=0,1,\ldots$, where $\delta_{\nu\mu}$ denotes the Kronecker $\delta$-symbol,
4.
$P_{\nu+1}^{\prime}(\tau)-P_{\nu-1}^{\prime}(\tau)=(2\nu+1)P_{\nu}(\tau),\quad\nu=1,2,\ldots$.
The latter property is useful for representing integrals,
$\displaystyle\int_{-1}^{\tau}P_{\nu}(\sigma)d\sigma$ $\displaystyle=\frac{1}{2\nu+1}\left(P_{\nu+1}(\tau)-P_{\nu-1}(\tau)-(-1)^{\nu+1}+(-1)^{\nu-1}\right)$ $\displaystyle=\frac{1}{2\nu+1}\left(P_{\nu+1}(\tau)-P_{\nu-1}(\tau)\right).$ (34)
Moreover, $\int_{-1}^{\tau}P_{0}(\sigma)d\sigma=\tau+1$. For a stable evaluation of the Legendre polynomials, we use a representation proposed in LebedevBarburin65,
$P_{\nu+1}(\tau)=\frac{\nu}{\nu+1}(\tau P_{\nu}(\tau)-P_{\nu-1}(\tau))+\tau P_{\nu}(\tau).$
In the implementation, all polynomials must be evaluated simultaneously for each given $\tau$. The evaluation of the recursions is cheap. Linear combinations of the basis functions can be conveniently and stably evaluated using the Clenshaw algorithm (FoxParker68, p. 56; Barrio02; Smok02). The shifted Legendre polynomials $\tilde{P}_{\nu}$ are given by $\tilde{P}_{\nu}(\rho)=P_{\nu}(2\rho-1)$, $\nu=0,1,\ldots$ ($\tilde{P}_{\nu}$ is a standard notation). They fulfill the orthogonality relations
$\int_{0}^{1}\tilde{P}_{\nu}(\rho)\tilde{P}_{\mu}(\rho)d\rho=\frac{1}{2\nu+1}\delta_{\nu\mu}.$
Moreover, we introduce the normalized shifted Legendre polynomials $\hat{P}_{\nu}$ by $\hat{P}_{\nu}(\rho)=(2\nu+1)^{1/2}\tilde{P}_{\nu}(\rho).$

### A.2 Chebyshev polynomials
The Chebyshev polynomials of the first kind $T_{\nu}$, $\nu=0,1,\ldots$, are defined by the recurrence relation
$\displaystyle T_{0}(\tau)$ $\displaystyle=1,$
$\displaystyle T_{1}(\tau)$ $\displaystyle=\tau,$ (35)
$\displaystyle T_{\nu+1}(\tau)$ $\displaystyle=2\tau T_{\nu}(\tau)-T_{\nu-1}(\tau),\quad\nu=1,2,\ldots.$
Some properties of the Chebyshev polynomials are
1. $T_{\nu}(-1)=(-1)^{\nu},\quad T_{\nu}(1)=1,\quad\nu=0,1,\ldots$,
2. $T_{\nu}(\tau)=\frac{1}{2}\left(\frac{1}{\nu+1}T_{\nu+1}^{\prime}(\tau)-\frac{1}{\nu-1}T_{\nu-1}^{\prime}(\tau)\right),\quad\nu=2,3,\ldots$.
As before, we obtain the simple representation
$\int_{-1}^{\tau}T_{\nu}(\sigma)d\sigma=\frac{1}{2(\nu^{2}-1)}\left((\nu-1)T_{\nu+1}(\tau)-(\nu+1)T_{\nu-1}(\tau)+(-1)^{\nu-1}2\right).$ (36)
The orthogonality property of the Chebyshev polynomials reads
$\int_{-1}^{1}T_{\nu}(\tau)T_{\mu}(\tau)\frac{d\tau}{\sqrt{1-\tau^{2}}}=\begin{cases}0,&\nu\neq\mu,\\\ \pi,&\nu=\mu=0,\\\ \frac{\pi}{2},&\nu=\mu\neq 0.\end{cases}$
The normalized Chebyshev polynomials $\bar{T}_{\nu}$ are given by
$\bar{T}_{\nu}(\tau)=\begin{cases}\sqrt{\frac{1}{\pi}}T_{0}(\tau),&\nu=0,\\\ \sqrt{\frac{2}{\pi}}T_{\nu}(\tau),&\nu=1,2,\ldots.\end{cases}$
Linear combinations of Chebyshev polynomials can be stably computed by the Clenshaw algorithm (FoxParker68, p. 57ff; Barrio02; Smok02).
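To illustrate both ingredients, the following NumPy sketch evaluates all $P_{0},\ldots,P_{N}$ at a point $\tau$ with the rearranged recurrence of LebedevBarburin65, and evaluates a series $\sum_{k}a_{k}P_{k}(\tau)$ with Clenshaw's algorithm; the Chebyshev case is analogous with the recurrence (35). The function names are illustrative.

```python
import numpy as np

def legendre_all(tau, N):
    """Evaluate P_0..P_N at tau with the rearranged (stable) three-term
    recurrence P_{nu+1} = nu/(nu+1) * (tau*P_nu - P_{nu-1}) + tau*P_nu."""
    P = np.empty(N + 1)
    P[0] = 1.0
    if N >= 1:
        P[1] = tau
    for nu in range(1, N):
        P[nu + 1] = nu / (nu + 1) * (tau * P[nu] - P[nu - 1]) + tau * P[nu]
    return P

def clenshaw_legendre(a, tau):
    """Stably evaluate sum_k a[k] * P_k(tau) by Clenshaw's algorithm,
    using the Legendre recurrence coefficients."""
    N = len(a) - 1
    b1 = b2 = 0.0
    for k in range(N, 0, -1):
        b1, b2 = a[k] + (2 * k + 1) / (k + 1) * tau * b1 - (k + 1) / (k + 2) * b2, b1
    return a[0] - 0.5 * b2 + tau * b1
    # sanity check: clenshaw_legendre(a, t) == a @ legendre_all(t, N)
```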
### A.3 The structure of the discrete problems
Consider the linear DAE (1). In order to simplify the notation slightly, define $E=AD$ such that, for sufficiently smooth functions $x\in X_{\pi}$, (1) is equivalent to
$E(t)x^{\prime}(t)+B(t)x(t)=q(t),\quad t\in(t_{j-1},t_{j}),\quad j=1,\ldots,n.$
Let, on $(t_{j-1},t_{j})$, $x(t)=(x_{j1}(t),\ldots,x_{jm}(t))^{T}$. Then, we have the representations
$\displaystyle x_{j\kappa}(t)$ $\displaystyle=\sum_{l=0}^{N}c_{j\kappa l}\bar{p}_{jl}(t),\quad\kappa=1,\ldots,k,$
$\displaystyle x_{j\kappa}(t)$ $\displaystyle=\sum_{l=0}^{N-1}c_{j\kappa l}p_{jl}(t),\quad\kappa=k+1,\ldots,m,$ (37)
with $p_{jl}$, $\bar{p}_{jl}$ from (28). Introduce
$\bar{Q}_{j}(t)=(\bar{p}_{j0}(t),\ldots,\bar{p}_{jN}(t)),\quad Q_{j}(t)=(p_{j0}(t),\ldots,p_{j,N-1}(t))$
as well as
$a_{j}(t)=\left[\begin{array}[]{cc}I_{k}\otimes\bar{Q}_{j}(t)&0\\\ 0&I_{m-k}\otimes Q_{j}(t)\end{array}\right]\in\mathbb{R}^{m\times(mN+k)}.$
Collect the coefficients in (37) in the vector
$c_{j}=(c_{j10},\ldots,c_{j1N},c_{j20},\ldots,c_{jm,N-1})^{T}\in\mathbb{R}^{mN+k}.$
Then it holds that $x_{j}(t)=a_{j}(t)c_{j}.$ Then, for $W_{j}$ of (21), we have the representation
$w_{j}(t_{ji})=\left[E(t_{ji})a_{j}^{\prime}(t_{ji})+B(t_{ji})a_{j}(t_{ji})\right]c_{j}-q(t_{ji})=:A_{ji}c_{j}-r_{ji}$
and
$W_{j}=h^{1/2}\left[\begin{array}[]{c}A_{j1}\\\ \vdots\\\ A_{jM}\end{array}\right]c_{j}-h^{1/2}\left[\begin{array}[]{c}r_{j1}\\\ \vdots\\\ r_{jM}\end{array}\right].$
The functionals $\Phi_{\pi,M}^{r}$ have, for $r=R,I,C$, a representation of the kind
$\Phi_{\pi,M}^{r}=W^{T}L^{r}W+|G_{a}x(a)+G_{b}x(b)-d|^{2}.$
Assume that there exists a matrix $\hat{L}^{r}$ such that $L^{r}=\left(\hat{L}^{r}\right)^{T}\hat{L}^{r}$. For $r=I,C$, simple possibilities are $\hat{L}^{I}=\operatorname{diag}(\sqrt{\gamma_{1}},\ldots,\sqrt{\gamma_{M}})$ and $\hat{L}^{C}=M^{-1/2}I_{M}$. For $L^{R}$, the choice $\hat{L}^{R}=\tilde{V}^{-1}$ (cf. (26)) is suitable. Define
$A_{j}=h_{j}^{1/2}\operatorname{diag}(\hat{L}^{r}\otimes I_{m},\ldots,\hat{L}^{r}\otimes I_{m})\left[\begin{array}[]{c}A_{j1}\\\ \vdots\\\ A_{jM}\end{array}\right],\quad r_{j}=h_{j}^{1/2}\operatorname{diag}(\hat{L}^{r}\otimes I_{m},\ldots,\hat{L}^{r}\otimes I_{m})\left[\begin{array}[]{c}r_{j1}\\\ \vdots\\\ r_{jM}\end{array}\right].$
Then we set
$\mathcal{A}=\left[\begin{array}[]{ccccc}A_{1}&0&\cdots&&0\\\ 0&\ddots&&&\vdots\\\ \vdots&&\ddots\\\ &&&\ddots&0\\\ 0&&&&A_{n}\\\ G_{a}a_{1}(a)&0&\cdots&0&G_{b}a_{n}(b)\end{array}\right],\quad r=\left[\begin{array}[]{c}r_{1}\\\ \vdots\\\ r_{n}\end{array}\right].$
Moreover, the continuity conditions (29) can be represented by the matrix
$\mathcal{C}=\left[\begin{array}[]{cccccc}I_{k}\otimes Q_{1}(t_{1})&I_{k}\otimes Q_{2}(t_{1})\\\ &I_{k}\otimes Q_{2}(t_{2})&I_{k}\otimes Q_{3}(t_{2})\\\ &&\ddots&\ddots\\\ &&&\ddots&\ddots\\\ &&&&I_{k}\otimes Q_{n-1}(t_{n-1})&I_{k}\otimes Q_{n}(t_{n-1})\end{array}\right].$
The discrete minimization problem becomes, therefore,
$\varphi^{r}(c)=\|\mathcal{A}c-r\|_{\mathbb{R}^{nmM+l}}^{2}\rightarrow\min!$
under the constraint
$\mathcal{C}c=0.$
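The block structure is straightforward to assemble with a sparse-matrix library. The following SciPy sketch (illustrative only; the paper's implementation uses the C++ Eigen library with SPQR) builds $\mathcal{A}$ from per-subinterval blocks $A_{j}$ plus the boundary-condition row block, and $\mathcal{C}$ from the continuity couplings of neighbouring subintervals; all function and argument names are placeholders.

```python
import scipy.sparse as sp

def assemble_cal_A(A_blocks, Ga_a1, Gb_an):
    """Block-diagonal collocation blocks A_1..A_n, plus a final row block
    [G_a a_1(a), 0, ..., 0, G_b a_n(b)] for the boundary conditions."""
    calA = sp.block_diag(A_blocks, format="lil")
    ncols = calA.shape[1]
    bc = sp.lil_matrix((Ga_a1.shape[0], ncols))
    bc[:, :Ga_a1.shape[1]] = Ga_a1                 # first column block
    bc[:, ncols - Gb_an.shape[1]:] = Gb_an         # last column block
    return sp.vstack([calA, bc]).tocsr()

def assemble_cal_C(left_blocks, right_blocks):
    """One row block per interior mesh point t_j, coupling the coefficient
    blocks of subintervals j and j+1; all other entries vanish."""
    n_rows = len(left_blocks)
    blocks = [[None] * (n_rows + 1) for _ in range(n_rows)]
    for j, (L, R) in enumerate(zip(left_blocks, right_blocks)):
        blocks[j][j], blocks[j][j + 1] = L, R
    return sp.bmat(blocks, format="csr")
```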
## References
* [1] E.L. Albasiny. A subroutine for solving a system of differential equations in Chebyshev series. In B. Childs, M. Scott, J.W. Daniel, E. Denman, and P. Nelson, editors, Codes for Boundary-Value Problems in Ordinary Differential Equations, volume 76 of Lecture Notes in Computer Science, pages 280–286, Berlin, Heidelberg, New York, 1979. Springer-Verlag.
* [2] U. Ascher and G. Bader. A new basis implementation for a mixed order boundary-value ODE solver. SIAM J. Sci. Statist. Comput., 8:483–500, 1987.
* [3] U. Ascher and R. Spiteri. Collocation software for boundary-value differential-algebraic equations. SIAM J. Sci. Comput., 15:938–952, 1994.
* [4] J.L. Barlow. Solution of sparse weighted and equality constrained least squares problems. In C. Page and R. LePage, editors, Computing Science and Statistics, pages 53–62, New York, 1992. Springer.
* [5] J.L. Barlow and U.B. Vemulapati. A note on deferred correction for equality constrained least squares problems. SIAM J. Numer. Anal., 29(1):249–256, 1992.
* [6] R. Barrio. Rounding error bounds for the Clenshaw and Forsythe algorithms for the evaluation of orthogonal polynomial series. J. Comput. Appl. Math., 138:185–204, 2002.
* [7] B. Beckermann. The condition number of real Vandermonde, Krylov and positive definite Hankel matrices. Numer. Math., 85:553–577, 2000.
* [8] Å. Björck. Numerical Methods for Least Squares Problems. SIAM, Philadelphia, 1996.
* [9] Å. Björck and G.H. Golub. Iterative refinement of linear least squares solutions by Householder transformations. BIT, 7:322–337, 1967.
* [10] S.L. Campbell and E. Moore. Constraint preserving integrators for general nonlinear higher index DAEs. Numer. Math., 69:383–399, 1995.
* [11] T.A. Davis. Direct Methods for Sparse Linear Systems. Fundamentals of Algorithms. SIAM, Philadelphia, 2006.
* [12] T.A. Davis. Algorithm 915, SuiteSparseQR: Multifrontal multithreaded rank-revealing sparse QR factorization. ACM Trans. Math. Software, 38(1):8:1–8:22, 2011.
* [13] P. Deuflhard and A. Hohmann. Numerical Analysis in Modern Scientific Computing: An Introduction. Texts in Applied Mathematics. Springer-Verlag, New York, 2nd edition, 2003.
* [14] T.A. Driscoll, N. Hale, and L.N. Trefethen. Chebfun guide. Pafnuty Publications, Oxford, 2014.
* [15] B. Fornberg. A practical guide to pseudospectral methods. Cambridge University Press, 1996.
* [16] B. Fornberg and D.M. Sloan. A review of pseudospectral methods for solving partial differential equations. Acta Numerica, pages 203–267, 1994.
* [17] L. Fox and I.B. Parker. Chebyshev polynomials in numerical analysis. Oxford Mathematical Handbooks. Oxford University Press, London, 1968.
* [18] M. Galassi et al. GNU Scientific Library Reference Manual. Network Theory Ltd., 3rd edition, January 2009. Version 2.6.
* [19] W. Gautschi. The condition of Vandermonde-like matrices involving orthogonal polynomials. Linear Algebra Appl., 52/53:293–300, 1983.
* [20] W. Gautschi. Gauss-Radau formulae for Jacobi and Laguerre weight functions. Math. Comput. Simulation, 54:403–412, 2000.
* [21] W. Gautschi. High-order Gauss-Lobatto formulae. Numer. Algorithms, 25:213–222, 2000.
* [22] W. Gautschi. Optimally scaled and optimally conditioned Vandermonde and Vandermonde-like matrices. BIT Numer. Math., 51:103–125, 2011.
* [23] W. Gautschi and G. Inglese. Lower bounds for the condition number of Vandermonde matrices. Numer. Math., 52:241–250, 1988.
* [24] I. Gladwell. The development of the boundary-value codes in the ordinary differential equations chapter of the NAG library. In B. Childs, M. Scott, J.W. Daniel, E. Denman, and P. Nelson, editors, Codes for Boundary-Value Problems in Ordinary Differential Equations, volume 76 of Lecture Notes in Computer Science, pages 122–143, Berlin, Heidelberg, New York, 1979. Springer-Verlag.
* [25] G.H. Golub and Ch. van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore and London, 2nd edition, 1989.
* [26] G.H. Golub and J.H. Welsch. Calculation of Gauss quadrature rules. Math. Comp., 23:221–230, 1969.
* [27] G. Guennebaud, B. Jacob, et al. Eigen v3. http://eigen.tuxfamily.org, 2010.
* [28] G. Hämmerlin and K.-H. Hoffmann. Numerical Mathematics. Springer-Verlag, New York, 1991.
* [29] M. Hanke and R. März. Convergence analysis of least-squares collocation methods for nonlinear higher-index differential-algebraic equations. J. Comput. Appl. Math., in press, 2019.
* [30] M. Hanke and R. März. A reliable direct numerical treatment of differential-algebraic equations by overdetermined collocation: An operator approach.
J. Comput. Appl. Math., in press, 2019.
* [31] M. Hanke, R. März, and C. Tischendorf. Least-squares collocation for higher-index linear differential-algebraic equations: Estimating the stability threshold. Math. Comp., 88(318):1647–1683, 2019. https://doi.org/10.1090/mcom/3393.
* [32] M. Hanke, R. März, C. Tischendorf, E. Weinmüller, and S. Wurm. Least-squares collocation for linear higher-index differential-algebraic equations. J. Comput. Appl. Math., 317:403–431, 2017. http://dx.doi.org/10.1016/j.cam.2016.12.017.
* [33] F.B. Hildebrand. Introduction to Numerical Analysis. McGraw-Hill, New York, 1956.
* [34] B.A. Ibrahimoglu. Lebesgue functions and Lebesgue constants in polynomial interpolation. J. Inequal. Appl., 93:1–15, 2016.
* [35] G. Kitzhofer, O. Koch, G. Pulverer, Ch. Simon, and E. Weinmüller. BVPSUITE, a new MATLAB solver for singular implicit boundary value problems. ASC Report 35, Vienna University of Technology, 2009.
* [36] O. Koch, R. März, D. Praetorius, E. Weinmüller, et al. Collocation methods for index-1 DAEs with singularities of the first kind. Math. Comp., 79:281–304, 2010.
* [37] R. Lamour, R. März, and C. Tischendorf. Differential-Algebraic Equations: A Projector Based Analysis. Differential-Algebraic Equations Forum. Springer-Verlag, Berlin Heidelberg New York Dordrecht London, 2013. Series editors: A. Ilchmann, T. Reis.
* [38] R. Lamour, R. März, and E. Weinmüller. Surveys in Differential-Algebraic Equations III, chapter Boundary-Value Problems for Differential-Algebraic Equations: A Survey, pages 177–309. Differential-Algebraic Equations Forum. Springer, Heidelberg, 2015. Ed. by A. Ilchmann and T. Reis.
* [39] V.I. Lebedev and O.V. Barburin. Calculation of principal values, weights and nodes of the Gauss quadrature formulae of integrals. U.S.S.R. Computational Math. and Math. Physics, 5(3):81–92, 1965.
* [40] R. März. Surveys in Differential-Algebraic Equations II, chapter Differential-Algebraic Equations from a Functional-Analytic Viewpoint: A Survey, pages 163–285. Differential-Algebraic Equations Forum. Springer, Heidelberg, 2015. Ed. by A. Ilchmann and T. Reis.
* [41] H.H. Michaels. Abscissas and weight coefficients for Lobatto quadrature. Math. Comp., 17:237–244, 1963.
* [42] H. Robbins. A remark on Stirling's formula. Amer. Math. Monthly, 62(1):26–29, 1955.
* [43] A. Smoktunowicz. Backward stability of Clenshaw's algorithm. BIT Numer. Math., 42(3):600–610, 2002.
* [44] R.M. Stallman, GCC Developers Community, et al. Using the GNU Compiler Collection. CreateSpace, Scotts Valley, 2009.
* [45] J. Stoer and R. Bulirsch. Introduction to Numerical Analysis. Texts in Applied Mathematics. Springer-Verlag, New York, 3rd edition, 2002.
* [46] The Numerical Algorithms Group (NAG). The NAG library for Fortran, 2019.
* [47] L.N. Trefethen. Spectral methods in MATLAB. SIAM, Philadelphia, 2000.
* [48] L.N. Trefethen and J.A.C. Weideman. Two results on polynomial interpolation in equally spaced points. J. Approx. Theory, 65:247–260, 1991.
* [49] Ch. van Loan. On the method of weighting for equality-constrained least-squares problems. SIAM J. Numer. Anal., 22(5):851–864, 1985.

Figure 1: Error of the approximate solution in Experiment 1. The abbreviations (M) for the monomial basis, (L) for the Legendre basis, and (C) for the Chebyshev basis are used. The error is measured in the norms of (1) $L^{2}((0,1),\mathbb{R}^{m})$, (2) $L^{\infty}((0,1),\mathbb{R}^{m})$, and (3) $H_{D}^{1}((0,1),\mathbb{R}^{m})$.
Figure 2: Error of the approximate solution in Experiment 2. The abbreviations (U) for uniform nodes, (L) for the Gauss-Legendre nodes, and (C) for the Chebyshev nodes are used. The error is measured in the norms of (1) $L^{2}((0,1),\mathbb{R}^{m})$, (2) $L^{\infty}((0,1),\mathbb{R}^{m})$, and (3) $H_{D}^{1}((0,1),\mathbb{R}^{m})$.

Figure 3: Error of the approximate solution in Experiment 3. The abbreviations (R) for the Runge-Kutta basis in Legendre representation, (L) for the Legendre basis, and (C) for the Chebyshev basis are used. The error is measured in the norms of (1) $L^{2}((0,1),\mathbb{R}^{m})$, (2) $L^{\infty}((0,1),\mathbb{R}^{m})$, and (3) $H_{D}^{1}((0,1),\mathbb{R}^{m})$.

Figure 4: Error of the approximate solution in Experiment 4. The abbreviations (M) for the monomial basis, (L) for the Legendre basis, and (C) for the Chebyshev basis are used. The error is measured in the norms of (1) $L^{2}((0,1),\mathbb{R}^{m})$, (2) $L^{\infty}((0,1),\mathbb{R}^{m})$, and (3) $H_{D}^{1}((0,1),\mathbb{R}^{m})$.

Figure 5: Error of the approximate solution in Experiment 5. The abbreviations (U) for uniform nodes, (L) for the Gauss-Legendre nodes, and (C) for the Chebyshev nodes are used. The error is measured in the norms of (1) $L^{2}((0,1),\mathbb{R}^{m})$, (2) $L^{\infty}((0,1),\mathbb{R}^{m})$, and (3) $H_{D}^{1}((0,1),\mathbb{R}^{m})$.

Figure 6: Error of the approximate solution in Experiment 6. The abbreviations (R) for the Runge-Kutta basis in Legendre representation, (L) for the Legendre basis, and (C) for the Chebyshev basis are used. The error is measured in the norms of (1) $L^{2}((0,1),\mathbb{R}^{m})$, (2) $L^{\infty}((0,1),\mathbb{R}^{m})$, and (3) $H_{D}^{1}((0,1),\mathbb{R}^{m})$.

Error of the approximate solution using Legendre basis functions and Gauss-Legendre (G), Radau IIA (R), and Lobatto (L) collocation nodes when using the functional $\Phi_{\pi,M}^{R}$ in Example 3. The norm is that of $H_{D}^{1}((0,5),\mathbb{R}^{7})$

| $N=3$ | $N=5$ | $N=10$ | $N=20$
---|---|---|---|---
$n$ | G | R | L | G | R | L | G | R | L | G | R | L
5 | 5.37e-03 | 5.86e-03 | 5.55e-03 | 1.37e-05 | 1.52e-05 | 1.38e-05 | 3.41e-12 | 4.08e-12 | 3.61e-12 | 8.97e-11 | 5.31e-11 | 1.04e-10
10 | 2.15e-03 | 2.33e-03 | 2.20e-03 | 1.68e-06 | 1.77e-06 | 1.69e-06 | 3.98e-11 | 2.53e-11 | 2.51e-11 | 4.78e-10 | 7.58e-10 | 1.00e-09
20 | 9.95e-04 | 1.04e-03 | 1.00e-03 | 2.08e-07 | 2.14e-07 | 2.08e-07 | 2.04e-10 | 2.53e-10 | 1.80e-10 | 3.18e-09 | 2.97e-09 | 3.30e-09
40 | 4.80e-04 | 4.91e-04 | 4.81e-04 | 2.58e-08 | 2.62e-08 | 2.58e-08 | 1.34e-09 | 1.67e-09 | 1.64e-09 | 2.56e-08 | 2.31e-08 | 3.29e-08
80 | 2.36e-04 | 2.39e-04 | 2.36e-04 | 3.34e-09 | 3.32e-09 | 3.63e-09 | 1.19e-08 | 1.33e-08 | 1.60e-08 | 1.99e-07 | 2.03e-07 | 2.14e-07
160 | 1.17e-04 | 1.17e-04 | 1.17e-04 | 9.16e-09 | 9.16e-09 | 1.18e-08 | 8.66e-08 | 1.01e-07 | 1.13e-07 | 1.74e-06 | 1.45e-06 | 1.94e-06
320 | 5.81e-05 | 5.83e-05 | 5.81e-05 | 8.06e-08 | 6.79e-08 | 8.74e-08 | 7.90e-07 | 8.25e-07 | 9.82e-07 | 1.39e-05 | 1.27e-05 | 1.38e-05

Error of the approximate solution using Legendre basis functions and Gauss-Legendre (G), Radau IIA (R), and Lobatto (L) collocation nodes when using the functional $\Phi_{\pi,M}^{C}$ in Example 3.
The norm is that of $H_{D}^{1}((0,5),\mathbb{R}^{7})$ | $N=3$ | $N=5$ | $N=10$ | $N=20$ ---|---|---|---|--- $n$ | G | R | L | G | R | L | G | R | L | G | R | L 5 | 5.22e-03 | 7.20e-03 | 7.81e-03 | 1.30e-05 | 1.50e-05 | 1.44e-05 | 2.89e-12 | 4.32e-12 | 1.82e-12 | 5.15e-11 | 3.67e-11 | 4.24e-11 10 | 2.06e-03 | 2.85e-03 | 3.46e-03 | 1.59e-06 | 1.75e-06 | 1.76e-06 | 3.24e-11 | 1.95e-11 | 1.79e-11 | 3.23e-10 | 1.57e-10 | 1.91e-10 20 | 9.49e-04 | 1.27e-03 | 1.67e-03 | 1.96e-07 | 2.11e-07 | 2.19e-07 | 2.19e-10 | 1.66e-10 | 1.06e-10 | 2.39e-09 | 1.55e-09 | 7.72e-10 40 | 4.58e-04 | 6.04e-04 | 8.27e-04 | 2.42e-08 | 2.60e-08 | 2.73e-08 | 1.59e-09 | 1.01e-09 | 7.38e-10 | 1.80e-08 | 1.65e-08 | 4.39e-09 80 | 2.25e-04 | 2.95e-04 | 4.12e-04 | 3.12e-09 | 3.51e-09 | 3.58e-09 | 1.16e-08 | 8.67e-09 | 5.25e-09 | 1.45e-07 | 1.41e-07 | 2.56e-08 160 | 1.11e-04 | 1.46e-04 | 2.06e-04 | 9.57e-09 | 1.01e-08 | 8.23e-09 | 9.33e-08 | 7.27e-08 | 3.96e-08 | 1.10e-06 | 1.20e-06 | 1.52e-07 320 | 5.54e-05 | 7.24e-05 | 1.03e-04 | 7.95e-08 | 8.73e-08 | 7.13e-08 | 7.47e-07 | 5.86e-07 | 3.33e-07 | 8.82e-06 | 9.78e-06 | 1.11e-06
# Informed Spectral Normalized Gaussian Processes for Trajectory Prediction

Christian Schlauch Humboldt-Universität zu Berlin, and Continental AG Berlin, Germany & Christian Wirth Continental AG Frankfurt am Main, Germany & Nadja Klein Technische Universität Dortmund Chair of Uncertainty Quantification and Statistical Learning Berlin, Germany

###### Abstract
Prior parameter distributions provide an elegant way to represent prior expert and world knowledge for informed learning. Previous work has shown that using such informative priors to regularize probabilistic deep learning (DL) models increases their performance and data-efficiency. However, commonly used sampling-based approximations for probabilistic DL models can be computationally expensive, requiring multiple inference passes and longer training times. Promising alternatives are compute-efficient last layer kernel approximations like spectral normalized Gaussian processes (SNGPs). We propose a novel regularization-based continual learning method for SNGPs, which enables the use of informative priors that represent prior knowledge learned from previous tasks. Our proposal builds upon well-established methods and requires no rehearsal memory or parameter expansion. We apply our informed SNGP model to the trajectory prediction problem in autonomous driving by integrating prior drivability knowledge. On two public datasets, we investigate its performance under diminishing training data and across locations, and thereby demonstrate an increase in data-efficiency and robustness to location-transfers over non-informed and informed baselines.

## 1 Introduction
Deep learning (DL) has become a powerful artificial intelligence (AI) tool for handling complex tasks. However, DL typically requires extensive training data to provide robust results [?]. High acquisition costs can render the collection of sufficient data unfeasible. This is especially problematic in safety-critical domains like autonomous driving, where we encounter a wide range of edge cases associated with high risks [?]. Informed learning (IL) aims to improve the data-efficiency and robustness of DL models by integrating prior knowledge [?]. Most IL approaches consider prior scientific knowledge by constraining or verifying the problem space or learning process directly. However, hard constraints are not suitable for qualitative prior expert and world knowledge, for which ubiquitous exceptions exist. In autonomous driving, for example, we expect traffic participants to comply with speed regulations but must not rule out violations. Still, prior knowledge about norms and regulations, as in this example, is highly informative for most cases and readily available at low cost. A recent idea is the integration of such prior expert and world knowledge into probabilistic DL models [?; ?]. These models maintain a distribution over possible model parameters instead of single maximum likelihood estimates. The prior knowledge can be represented as a prior parameter distribution, learned from arbitrarily defined knowledge tasks, to regularize training on real-world observations. The probabilistic informed learning (PIL) approach of Schlauch [?] applies this idea to trajectory prediction in autonomous driving using regularization-based continual learning methods, achieving substantially improved data-efficiency.
However, typical sampling-based probabilistic DL model approximations, such as the variational inference (VI) used by Schlauch [?], are computationally expensive, since they require multiple inference passes and substantially more training epochs. Promising alternatives are compute-efficient last layer approximations [?]. The spectral normalized Gaussian process (SNGP) [?] is a particularly efficient approximation that applies a Gaussian process (GP) as last layer to a deterministic deep neural network (DNN). The DNN acts as a scalable feature extractor, while the last layer GP allows the deterministic estimation of the uncertainty in a single inference pass. The last layer GP kernel itself is approximated via Fourier features, which is asymptotically exact and can be easily scaled. We propose a novel regularization-based continual learning method to enable the use of SNGPs in a PIL approach. Our proposal is conceptually simple, builds upon well-established methods [?; ?], imposes little computational overhead and requires no additional architecture changes, making implementation straightforward. We apply our method in a PIL approach for trajectory prediction in autonomous driving, which is an especially challenging application since well-calibrated, multi-modal predictions are required to enable safe planning.

Figure 1: The informed CoverNet-SNGP model consists of a spectral normalized feature extractor and a last layer Gaussian Process with a Fourier feature approximated radial basis function kernel. Given a Birds-Eye-View RGB rendering and the target's current state, the model classifies a set of candidate trajectories according to their drivability in task $i$ and their likely realization in task $i+1$. Our method regularizes the training on task $i+1$, given the MAP estimates and Laplace approximated covariance from task $i$ as informative priors, thereby integrating the drivability knowledge following the PIL approach.

Following Schlauch [?], we employ CoverNet as our base model and integrate the prior drivability knowledge that trajectories are likely to stay on-road. We benchmark our proposed informed CoverNet-SNGP on two public datasets, NuScenes and Argoverse2, against the non-informed baselines Base-CoverNet and CoverNet-SNGP, and the informed baselines Transfer-CoverNet and GVCL-Det-CoverNet. To this end, we evaluate data-efficiency under diminishing training data availability and robustness to location-transfers, both being key aspects for safe autonomous driving [?; ?]. We observe benefits in favor of our informed CoverNet-SNGP across various performance metrics, especially in low data regimes, which demonstrates our method's viability to increase data-efficiency and robustness in a PIL approach. Our code is available on GitHub: https://github.com/continental/kiwissen-bayesian-trajectory-prediction.

## 2 Related Work
Van Rueden [?] provides an overview of IL as an emerging field of research, which is also known as knowledge-guided or -augmented learning [?]. In trajectory prediction, like in other domains, most work concentrates on integrating prior scientific knowledge. Dynamical models are used, for instance, to encode physical limitations of motion in the architecture [?], in the output representation [?] or in a post-hoc verification [?]. Approaches similar to the PIL approach [?], which focus on integrating expert and world prior knowledge, typically leverage transfer- or multi-task learning settings [?].
However, transfer learning does not prevent catastrophic forgetting, while multi-task learning requires a single dataset with simultaneously available labels. PIL can be applied without these limitations. SNGPs and related models, known as deterministic uncertainty models (DUMs), have been analyzed by Postels [?] and Charpentier [?]. Most closely related to SNGPs is the deterministic uncertainty estimator (DUE) proposed by van Amersfoort [?], which approximates the last layer kernel with sparse variational inducing points instead of Fourier features. DUE preserves the non-parametric nature of the kernel, but is sensitive to its initialization and generally not asymptotically exact. Parisi [?] and De Lange [?] give a detailed survey of continual learning methods and their classification. Our proposed continual learning method for SNGPs is purely regularization-based, in contrast to the functional regularization introduced by Titsias [?], which could be directly applied to the DUE model, and the work of Derakhshani [?], which also considers a kernel approximation based on Fourier features. Both these methods require rehearsal, the latter also a parameter expansion. Rehearsal is likely to be sensitive to the data imbalances [?] in our application, while parameter expansions require architecture changes which introduce additional complexity. Our proposed method is conceptually simple and builds upon the well-established online elastic weight consolidation (online EWC) introduced by Schwartz [?]. Online EWC can also be understood as a special case of generalized variational continual learning (GVCL) described by Loo [?].

## 3 Informed SNGPs
### 3.1 Probabilistic Informed Learning
The PIL approach of Schlauch [?] integrates prior expert and world knowledge in a supervised learning setup. The basic idea is to define a sequence of knowledge tasks $i=1,\ldots,M-1$ on datasets $D_{i}=\{(x^{(i)}_{j},y^{(i)}_{j})\}^{n_{i}}_{j=1}$ with $n_{i}$ samples each. These datasets can be synthetically generated, for example, by leveraging semantic annotations to map the prior knowledge to the prediction target. Semantic annotations are readily available in domains like autonomous driving, but are often underutilized in state-of-the-art models that learn from observations in the conventional task $i=M$ alone [?]. Given a probabilistic DL model parameterized by $\theta$ and an initial uninformative prior $\pi_{0}(\theta)$, the goal is to recursively learn from the sequence of tasks by applying Bayes' rule
$\begin{split}p(\theta|D_{1:i})\propto\pi_{0}(\theta)\prod^{i}_{j=1}p_{\theta}(y_{j}|x_{j}),\end{split}$ (1)
where $p_{\theta}(y_{j}|x_{j})$ are the likelihood functions at task $j$, which are assumed to be conditionally independent given $\theta$. This computationally intractable recursion is approximated by repurposing regularization-based continual learning methods. The PIL approach can generally be applied as long as, first, the prior knowledge is strongly related to the observational task, second, the prior knowledge can be mapped to the prediction target and, third, the posterior parameter distribution can be estimated. The informative priors make information explicit and shape the loss surface in the downstream task, improving the training outcome, even without using probabilistic inference in the end [?].

### 3.2 SNGP Composition
SNGPs [?] employ a composition $f_{\theta}=g_{\theta_{\text{GP}}}\circ h_{\theta_{\text{NN}}}:\mathcal{X}\to\mathcal{Y}$, $\theta=\{\theta_{\text{NN}},\theta_{\text{GP}}\}$.
Its first component is a deterministic, spectral normalized feature extractor $h_{\theta_{\text{NN}}}:\mathcal{X}\to\mathcal{H}$ with trainable parameters $\theta_{\text{NN}}$ mapping the high dimensional input space $\mathcal{X}$ into a low dimensional hidden space $\mathcal{H}$. The second component is a GP output layer $g_{\theta_{\text{GP}}}:\mathcal{H}\to\mathcal{Y}$ with a radial basis function (RBF) kernel mapping into the output space $\mathcal{Y}$. The RBF kernel can be approximated by (random) Fourier features using Bochner's Theorem [?]. This effectively reduces the GP to a Bayesian linear model that can be written as a neural network layer with fixed hidden weights and trainable output weight parameters $\theta_{\text{GP}}$, and enables end-to-end training with the feature extractor. The distance-sensitivity of the composition prevents a "feature collapse" [?], improving the calibration against adversarial and outlier samples. In total, SNGP introduces five additional hyperparameters, namely an upper bound $s$ and a number of power iterations $N_{p}$ for the spectral normalization of the feature extractor, as well as the number of Fourier features $N_{f}$, the kernel's length scale $l_{s}$ and the Gaussian prior choice for the last layer.

### 3.3 Regularizing SNGPs
There are two problems prohibiting the direct application of the PIL approach to composite last layer kernel approximations like the SNGP. First, there is no existing continual learning method for kernels that does not require rehearsal memories or parameter expansions (see Sec. 2). Second, estimating the posterior parameter distribution of the feature extractor (e.g., via a Laplace approximation or variational inference) contradicts the motivation for the last layer kernel approximation regarding compute-efficiency. We tackle the first problem by leveraging the Fourier feature approximation of the RBF kernel of the GP. The posterior distributions of the parameters of the last layer at task $i$ can be made tractable through a Laplace approximation, that is, we assume
$\displaystyle p(\theta_{\text{GP}}|D_{1:i})\approx\mathcal{N}(\theta_{\text{GP}};\theta_{\text{GP},i}^{*},\Sigma_{\text{GP},i}),$
given a maximum a posteriori (MAP) estimate $\theta^{*}_{\text{GP},i}$ at task $i$. Similar to online EWC [?], $\theta^{*}_{\text{GP},i}$ can be obtained by minimizing
$\displaystyle-\log{p_{\theta_{\text{GP}}}(y_{i}|x_{i})}+\frac{\lambda_{\text{GP}}}{2}(\theta_{\text{GP}}-\theta^{*}_{\text{GP},i-1})^{\top}\Sigma_{\text{GP},i-1}^{-1}(\theta_{\text{GP}}-\theta^{*}_{\text{GP},i-1})$ (2)
with respect to $\theta_{\text{GP}}$, where the precision $\Sigma_{\text{GP},i}^{-1}$ is approximated by the sum of the Hessian at the MAP estimate and a scaled precision at task $i-1$, that is,
$\Sigma_{\text{GP},i}^{-1}\approx H_{\text{GP},i}(\theta_{\text{GP},i}^{*})+\gamma_{\text{GP}}\Sigma_{\text{GP},i-1}^{-1}.$
Above, $\lambda_{\text{GP}}>0$ is a temperature parameter that scales the importance of the previous task [?], and $0<\gamma_{\text{GP}}\leq 1$ is a decay parameter that allows for more plasticity over very long task sequences [?]. In contrast to online EWC, we can cheaply compute the Hessian using moving averages [?] instead of using a Fisher matrix approximation. In the first task $i=1$, we use an uninformative zero-mean, unit-variance prior $\pi_{0}$, which amounts to a simple $\mathcal{L}$2-regularization.
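For concreteness, a schematic PyTorch sketch of the penalty in Eq. (2) and of the precision recursion is given below. This is an illustrative sketch only; the variable names and the flattened-weight treatment are placeholders rather than the actual implementation.

```python
import torch

def gp_penalty(theta_gp, theta_star_prev, precision_prev, lam_gp):
    """Quadratic penalty of Eq. (2): anchors the flattened last-layer weights
    at the previous task's MAP estimate, weighted by its Laplace precision."""
    delta = (theta_gp - theta_star_prev).reshape(-1)
    return 0.5 * lam_gp * delta @ precision_prev @ delta

def precision_update(hessian_at_map, precision_prev, gamma_gp):
    """Recursion for the next task: Hessian at the new MAP estimate plus the
    decayed precision of the previous task."""
    return hessian_at_map + gamma_gp * precision_prev

# schematic use in task i+1: loss = nll + gp_penalty(...); after convergence,
# store the new MAP estimate and precision_update(...) for the next task.
```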
To tackle the second problem and regularize the feature extractor, we approximate the precision $\Sigma_{\text{NN},i-1}^{-1}$ with the identity matrix $\mathbb{I}$. This implies a simple $\ell_2$-regularization for the MAP estimates $\theta^{*}_{\text{NN},i}$, obtained by minimizing

$-\log{p_{\theta_{\text{NN}}}(y_{i}|x_{i})}+\frac{\lambda_{\text{NN}}}{2}\lVert\theta_{\text{NN}}-\theta^{*}_{\text{NN},i-1}\rVert_{2}^{2}$

with respect to $\theta_{\text{NN}}$, where $\lambda_{\text{NN}}$ is the extractor-specific temperature parameter. This idea is conceptually simple, but should be sufficient, since the representation learned in the knowledge tasks should be suitable downstream due to the close relation between tasks. As a result, the complete model $f_{\theta}:\mathcal{X}\to\mathcal{Y}$, parameterized by $\theta=\{\theta_{\text{NN}},\theta_{\text{GP}}\}$, can be effectively regularized and used in the PIL approach, as visualized in Fig. 1. Our method introduces three hyperparameters $\{\lambda_{\text{GP}},\gamma_{\text{GP}},\lambda_{\text{NN}}\}$. It only requires the parameters of the previous task in memory and, like online EWC [Schwarz et al., 2018], has little computational overhead.

## 4 Application to Trajectory Prediction

### 4.1 Problem Definition

We limit ourselves to the single-agent trajectory prediction problem [?]. An autonomous driving system is assumed to observe the states in the state space $\mathcal{Y}$ of all agents $\mathcal{A}$ present in a scene on the road. Let $y^{(t)}\in\mathcal{Y}$ denote the state of target agent $a\in\mathcal{A}$ at time $t$ and let $y^{(t-T_{o}\,:\,t)}=\big(y^{(t-T_{o})},y^{(t-T_{o}+\delta t)},\ldots,y^{(t)}\big)$ be its observed trajectory over an observation history $T_{o}$ with sampling period $\delta t$. Additionally, we assume access to agent-centered maps $\mathcal{M}$, which include semantic annotations such as the drivable area. Map and states make up the scene context of agent $a$, denoted as $x=(\{y_{j}^{(t-T_{o}\,:\,t)}\}^{|\mathcal{A}|}_{j=1},\mathcal{M})$. Given $x$, the goal is to predict the distribution of $a$'s future trajectories $p(y^{(t+\delta t\,:\,t+T_{h})}|x)$ over the prediction horizon $T_{h}$, where $y^{(t+\delta t\,:\,t+T_{h})}=\big(y^{(t+\delta t)},y^{(t+2\delta t)},\ldots,y^{(t+T_{h})}\big)$.

### 4.2 CoverNet-SNGP

CoverNet [Phan-Minh et al., 2020] approaches the single-agent trajectory prediction problem by considering a birds-eye-view RGB rendering of the scene context $x$ and the current state $y^{(t)}$ of the target agent $a$ as inputs. The RGB rendering is processed by a computer-vision backbone before being concatenated with the target's current state and processed by another dense layer. The output is represented as a set $\mathcal{K}$ of $K$ candidate trajectories $y_{k}^{(t+\delta t\,:\,t+T_{h})}$. Doing so reduces the prediction problem to a classification problem, where each trajectory in the set $\mathcal{K}$ is treated as a sample of the predictive distribution $p(y^{(t+\delta t\,:\,t+T_{h})}|x)$ and only the conditional probability of each sample is required. In principle, any space-filling heuristic may be used to define $\mathcal{K}$, for example, by using a dynamical model that integrates physical limitations [Cui et al., 2020], which could be applied in combination with the PIL approach. Here, we follow Phan-Minh et al.'s [2020] definition of a fixed set $\mathcal{K}$, obtained by solving a set-covering problem over a subsample of observed trajectories in the training split with a greedy algorithm (further details are given in our supplemental material; see also Chapter 35.3 of Cormen et al. [2009] on set-covering problems in general), given a coverage bound $\epsilon$, which determines the number of total candidates $K$.
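For illustration, a minimal NumPy sketch of such a greedy set-cover construction follows; the function name and the use of the maximum pointwise distance as the coverage criterion are our assumptions, not necessarily the exact procedure of Phan-Minh et al. [2020]:

```python
import numpy as np

def greedy_trajectory_set(trajs, eps):
    """Greedily pick candidates until every trajectory is within eps
    (in max pointwise distance) of some chosen candidate (Sec. 4.2).

    trajs: array of shape (n, T, 2) -- subsampled training trajectories.
    Returns the indices of the chosen candidate trajectories.
    """
    n = trajs.shape[0]
    # pairwise max pointwise distance between all trajectories
    diff = trajs[:, None] - trajs[None, :]               # (n, n, T, 2)
    dist = np.linalg.norm(diff, axis=-1).max(axis=-1)    # (n, n)
    covers = dist <= eps            # covers[i, j]: candidate i covers traj j
    uncovered = np.ones(n, dtype=bool)
    chosen = []
    while uncovered.any():
        # pick the candidate that covers the most still-uncovered trajectories
        gains = (covers & uncovered).sum(axis=1)
        best = int(gains.argmax())
        chosen.append(best)
        uncovered &= ~covers[best]
    return chosen
```

A smaller $\epsilon$ forces a finer cover and hence a larger $K$; every trajectory covers itself, so the loop always terminates.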
The modification of CoverNet with SNGP is straightforward if a convolutional neural network (CNN) is used as backbone. In that case, spectral normalization can be directly applied to the architecture while the last layer is replaced with a Gaussian process, approximated by Fourier features as described in Sec. 3.2.

### 4.3 Integrating Prior Drivability Knowledge

The PIL approach is applied sequentially on two consecutive tasks as follows. In task $i$, we integrate the prior drivability knowledge that trajectories are likely to stay on-road. To this end, we derive new training labels (see Sec. 3.1), where all candidate trajectories in $\mathcal{K}$ with way-points inside the drivable area for a given training scene $x$ are labeled as positive [?]. We then train in a multi-label classification setting with a binary cross-entropy loss on these labels. In task $i+1$, the candidate trajectory in $\mathcal{K}$ closest to the observed ground truth is labeled as positive. We train in a multi-class classification setting with a sparse categorical cross-entropy loss (using softmax-normalized logit transformations) on these labels [Phan-Minh et al., 2020]. In effect, the consecutive tasks differ only in the labels and loss functions used. Applying our method described in Sec. 3.3, we first train our CoverNet-SNGP model on task $i$ and then regularize its training on task $i+1$, as exemplified in Fig. 1. We denote the resulting informed CoverNet-SNGP as CoverNet-SNGP${}_{\textbf{I}}$, as opposed to the non-informed version CoverNet-SNGP${}_{\textbf{U}}$ trained on task $i+1$ only, without integration of prior knowledge from task $i$.
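The label derivation for both tasks can be sketched as follows. This is a minimal NumPy sketch under our own naming; the rasterized drivable-area mask, the `to_pixel` helper, and the maximum pointwise distance used for "closest" are illustrative assumptions:

```python
import numpy as np

def drivability_labels(candidates, drivable_mask, to_pixel):
    """Task i labels (Sec. 4.3): candidate k is positive iff every one of
    its way-points falls inside the drivable area of the rendered scene.

    candidates: (K, T, 2) candidate trajectories in agent coordinates.
    drivable_mask: (H, W) boolean raster of the drivable area.
    to_pixel: assumed callable mapping a (T, 2) trajectory to integer
              pixel index arrays (rows, cols).
    """
    K = candidates.shape[0]
    labels = np.zeros(K, dtype=np.float32)
    for k in range(K):
        rows, cols = to_pixel(candidates[k])
        inside = (rows >= 0) & (rows < drivable_mask.shape[0]) & \
                 (cols >= 0) & (cols < drivable_mask.shape[1])
        labels[k] = float(inside.all() and drivable_mask[rows, cols].all())
    return labels  # multi-label targets for binary cross-entropy

def ground_truth_label(candidates, gt_future):
    """Task i+1 label: index of the candidate closest to the observed
    ground truth (single positive for categorical cross-entropy)."""
    dist = np.linalg.norm(candidates - gt_future[None], axis=-1).max(axis=-1)
    return int(dist.argmin())
```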
## 5 Experimental Design

### 5.1 Datasets

We use the public NuScenes [Caesar et al., 2020] and Argoverse2 [Wilson et al., 2023] datasets. We replicate the NuScenes data split of Phan-Minh et al. [2020] on Argoverse2, only considering vehicle targets (excluding pedestrians and cyclists not driving on-road), as summarized in Tab. 1. For the RGB rendering, we consider each scene with a one-second history ($T_{o}=1\text{s}$). For the candidate trajectories in $\mathcal{K}$, we consider a six-second prediction horizon ($T_{h}=6\text{s}$), sampled at $2\text{Hz}$ in NuScenes and $10\text{Hz}$ in Argoverse2. Both datasets include drivable areas in the semantic map data, allowing us to define the first task as described in Sec. 4.3.

Table 1: Numbers and percentages of samples across location subsets of both NuScenes and Argoverse2.

| data subset | train split # (%) | train-val split # (%) | val split # (%) |
|---|---|---|---|
| NuScenes Total | 32186 (100.0) | 8560 (100.0) | 9041 (100.0) |
| Boston | 19629 (60.99) | 5855 (68.40) | 5138 (56.84) |
| Singapore | 12557 (39.01) | 2705 (31.60) | 3903 (43.16) |
| Argoverse2 Total | 161379 (100.0) | 22992 (100.0) | 23113 (100.0) |
| Miami | 42214 (26.16) | 5983 (26.02) | 5984 (25.89) |
| Austin | 34681 (21.49) | 4968 (21.57) | 4985 (21.57) |
| Pittsburgh | 33391 (20.69) | 4823 (20.98) | 4803 (20.78) |
| Dearborn | 20579 (12.75) | 2933 (12.79) | 3001 (12.98) |
| Washington-DC | 20546 (12.73) | 2883 (12.54) | 2976 (12.88) |
| Palo-Alto | 9968 (6.18) | 1402 (6.10) | 1364 (5.90) |

### 5.2 Baselines

We consider the unmodified CoverNet as a baseline, once as the non-informed Base-CoverNet [Phan-Minh et al., 2020] and once as Transfer-CoverNet. The Transfer-CoverNet baseline, pretrained on task $i$ and then trained on the current task $i+1$, has previously been proposed by Boulton et al. [2021]. It can also be understood as an ablation-type baseline for the PIL approach without regularization. In addition, we compare to GVCL-Det-CoverNet, proposed by Schlauch et al. [2023], since it also needs only a single inference pass. However, GVCL-Det-CoverNet requires the computationally extremely expensive training of a GVCL-CoverNet model. For example, in our setting, training until convergence on a single Nvidia RTX A5000 GPU with $10\%$ of the NuScenes data takes around 120 hours for GVCL-CoverNet, in contrast to 8 hours for CoverNet-SNGP${}_{\textbf{I}}$ and 6 hours for Base-CoverNet.

### 5.3 Metrics

We measure the average displacement error minADE1 and the final displacement error minFDE1, evaluating the quality of the most likely trajectory, and the minADE5, which considers the five most likely trajectories [?]. The minADE5 depends on the probability-based ordering and, thus, indirectly on the calibration. We also consider the drivable area compliance (DAC) to evaluate the extent to which predictions align with our prior drivability knowledge. Since observed ground truth trajectories may not be part of the trajectory set, $y_{\text{true}}^{(t+\delta t\,:\,t+T_{h})}\notin\mathcal{K}$, the CoverNet model exhibits an irreducible approximation error. To assess the impact of our method more clearly, we also consider the classification-based negative log-likelihood (NLL) and the rank of the positively labeled trajectory (RNK), both directly depending on the calibration, and the Top-1 accuracy (ACC).
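For reference, the displacement metrics of Sec. 5.3 can be computed as in the following small NumPy sketch (function and argument names are ours):

```python
import numpy as np

def min_ade_fde(pred_trajs, probs, gt, k):
    """minADE_k / minFDE_k over the k most likely predictions (Sec. 5.3).

    pred_trajs: (K, T, 2) candidate trajectories, probs: (K,) their
    predicted probabilities, gt: (T, 2) observed future trajectory.
    """
    top_k = np.argsort(probs)[::-1][:k]                           # k most likely
    errs = np.linalg.norm(pred_trajs[top_k] - gt[None], axis=-1)  # (k, T)
    min_ade = errs.mean(axis=-1).min()  # best average displacement
    min_fde = errs[:, -1].min()         # best final displacement
    return min_ade, min_fde
```

With $k=1$ this reduces to the displacement errors of the most likely trajectory (minADE1, minFDE1); with $k=5$ the metric rewards a well-calibrated probability ordering.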
### 5.4 Implementation Details

We use the output representation described in Sec. 4 with a coverage bound $\epsilon=4\text{m}$, yielding $K_{\text{Nusc}}=415$ candidates for NuScenes and $K_{\text{Argo}}=518$ candidates for Argoverse2. We employ a ResNet-50 as backbone and SGD as optimizer. For the CoverNet-SNGPs, we fix the number of power iterations $N_{p}$ to one and the number of Fourier features $N_{f}$ to 1024, following Liu et al. [2020]. The spectral normalization's upper bound $s$ and the kernel length scale $l_{s}$ are treated as additional hyperparameters. We tune the hyperparameters of each model on the respective tasks with 100% of the data using the validation NLL (configurations are available in our supplemental material and on GitHub). The exception is CoverNet-SNGP${}_{\textbf{I}}$, which uses the same settings as CoverNet-SNGP${}_{\textbf{U}}$ on task $i+1$. We also fix both temperature parameters $\lambda_{\text{NN}}$ and $\lambda_{\text{GP}}$ ad hoc to the inverse of the effective dataset size to keep tuning costs low. The decay parameter $\gamma_{\text{GP}}$ is mostly relevant for very long task sequences (see Sec. 3), so we set $\gamma_{\text{GP}}=1$.

## 6 Results

We study the performance of our CoverNet-SNGP${}_{\text{I}}$ against the baselines in two sets of experiments. First, we investigate the performance under increasingly smaller subsets of the observational training data, allowing us to shed light on data efficiency. These subsets are randomly subsampled once and then kept fixed across models and repetitions. In this set, we also consider GVCL-Det-CoverNet, with results on NuScenes for $100\%$ reported from Schlauch et al. [2023], and for $10\%$ and $3\%$ replicated with only three independent repetitions due to the long training times. Second, we test the performance by training and testing on location-specific subsets, gaining insights into the robustness to location transfers, which is often implicitly assumed in the state of the art [?]. The reported results are the average performance and standard deviation of five independent runs for each experiment.

### 6.1 Effect of Available Training Data

Table 2: Average performance and standard deviation of 5 independent repetitions over decreasing subsamples of NuScenes (bold as best).

| Data (in %) | Model | minADE1 | minADE5 | minFDE1 | NLL | RNK | ACC (in %) | DAC (in %) |
|---|---|---|---|---|---|---|---|---|
| 100 | Base | 4.92 $\pm$0.15 | 2.34 $\pm$0.05 | 10.94 $\pm$0.27 | 3.47 $\pm$0.06 | 15.55 $\pm$0.73 | 13.94 $\pm$1.10 | 89.26 $\pm$1.13 |
| | Transfer | 4.60 $\pm$0.04 | 2.18 $\pm$0.02 | 9.94 $\pm$0.08 | 3.21 $\pm$0.01 | 11.79 $\pm$0.18 | 15.19 $\pm$0.43 | 95.73 $\pm$0.29 |
| | GVCL-Det* | 4.55 $\pm$0.11 | 2.26 $\pm$0.05 | 9.93 $\pm$0.39 | 3.60 $\pm$0.08 | 11.85 $\pm$0.48 | 14.88 $\pm$0.94 | 90.94 $\pm$2.25 |
| | SNGP${}_{\text{U}}$ | 4.53 $\pm$0.09 | 2.25 $\pm$0.04 | 10.31 $\pm$0.27 | 3.23 $\pm$0.01 | 13.25 $\pm$0.19 | 17.04 $\pm$0.68 | 91.19 $\pm$0.61 |
| | SNGP${}_{\text{I}}$ | 4.45 $\pm$0.04 | 2.21 $\pm$0.01 | 10.09 $\pm$0.12 | 3.19 $\pm$0.01 | 12.44 $\pm$0.14 | 17.36 $\pm$0.59 | 91.65 $\pm$0.59 |
| 50 | Base | 5.15 $\pm$0.23 | 2.37 $\pm$0.11 | 11.46 $\pm$0.60 | 3.52 $\pm$0.06 | 17.21 $\pm$1.33 | 13.55 $\pm$0.62 | 86.68 $\pm$4.72 |
| | Transfer | 4.86 $\pm$0.04 | 2.26 $\pm$0.01 | 10.38 $\pm$0.06 | 3.35 $\pm$0.01 | 13.46 $\pm$0.21 | 14.37 $\pm$0.09 | 95.66 $\pm$0.28 |
| | SNGP${}_{\text{U}}$ | 4.57 $\pm$0.05 | 2.26 $\pm$0.04 | 10.40 $\pm$0.15 | 3.30 $\pm$0.02 | 14.62 $\pm$0.17 | 16.83 $\pm$0.59 | 90.09 $\pm$0.56 |
| | SNGP${}_{\text{I}}$ | 4.48 $\pm$0.07 | 2.22 $\pm$0.04 | 10.13 $\pm$0.13 | 3.25 $\pm$0.02 | 13.39 $\pm$0.31 | 16.72 $\pm$0.76 | 91.10 $\pm$0.72 |
| 30 | Base | 5.40 $\pm$0.03 | 2.44 $\pm$0.07 | 12.01 $\pm$0.20 | 3.68 $\pm$0.04 | 19.80 $\pm$1.03 | 12.70 $\pm$0.81 | 86.58 $\pm$2.54 |
| | Transfer | 5.08 $\pm$0.03 | 2.34 $\pm$0.02 | 10.80 $\pm$0.07 | 3.47 $\pm$0.01 | 15.00 $\pm$0.05 | 13.38 $\pm$0.31 | 96.07 $\pm$0.32 |
| | SNGP${}_{\text{U}}$ | 4.68 $\pm$0.09 | 2.29 $\pm$0.04 | 10.61 $\pm$0.26 | 3.37 $\pm$0.01 | 16.02 $\pm$0.22 | 16.69 $\pm$0.62 | 89.22 $\pm$0.30 |
| | SNGP${}_{\text{I}}$ | 4.58 $\pm$0.03 | 2.30 $\pm$0.02 | 10.35 $\pm$0.08 | 3.31 $\pm$0.02 | 14.67 $\pm$0.20 | 17.10 $\pm$0.34 | 90.41 $\pm$0.49 |
| 10 | Base | 5.89 $\pm$0.28 | 2.72 $\pm$0.11 | 12.88 $\pm$0.63 | 3.99 $\pm$0.06 | 32.74 $\pm$1.48 | 12.38 $\pm$0.96 | 86.38 $\pm$2.64 |
| | Transfer | 6.09 $\pm$0.03 | 2.65 $\pm$0.02 | 12.60 $\pm$0.06 | 3.89 $\pm$0.01 | 24.82 $\pm$0.13 | 10.35 $\pm$0.15 | 95.54 $\pm$0.23 |
| | GVCL-Det* | 5.27 $\pm$0.27 | 2.53 $\pm$0.09 | 12.03 $\pm$0.58 | 4.05 $\pm$0.07 | 24.78 $\pm$0.45 | 12.95 $\pm$0.80 | 91.52 $\pm$1.54 |
| | SNGP${}_{\text{U}}$ | 5.00 $\pm$0.04 | 2.52 $\pm$0.03 | 11.36 $\pm$0.22 | 3.60 $\pm$0.02 | 25.19 $\pm$0.30 | 15.73 $\pm$0.22 | 88.62 $\pm$0.56 |
| | SNGP${}_{\text{I}}$ | 4.96 $\pm$0.05 | 2.47 $\pm$0.04 | 11.25 $\pm$0.16 | 3.52 $\pm$0.03 | 20.94 $\pm$0.59 | 15.39 $\pm$0.29 | 89.53 $\pm$1.11 |
| 5 | Base | 5.90 $\pm$0.17 | 2.82 $\pm$0.06 | 12.81 $\pm$0.38 | 4.26 $\pm$0.03 | 42.55 $\pm$1.92 | 10.17 $\pm$1.26 | 86.89 $\pm$1.96 |
| | Transfer | 6.62 $\pm$0.04 | 2.89 $\pm$0.01 | 13.41 $\pm$0.09 | 4.30 $\pm$0.01 | 29.74 $\pm$0.44 | 8.70 $\pm$0.14 | 97.46 $\pm$0.07 |
| | SNGP${}_{\text{U}}$ | 5.07 $\pm$0.05 | 2.58 $\pm$0.02 | 11.63 $\pm$0.13 | 3.90 $\pm$0.03 | 31.77 $\pm$0.85 | 14.29 $\pm$0.30 | 86.31 $\pm$0.82 |
| | SNGP${}_{\text{I}}$ | 5.01 $\pm$0.04 | 2.53 $\pm$0.04 | 11.43 $\pm$0.10 | 3.72 $\pm$0.03 | 25.99 $\pm$0.65 | 14.32 $\pm$0.59 | 86.85 $\pm$0.94 |
| 3 | Base | 6.23 $\pm$0.16 | 3.11 $\pm$0.11 | 13.32 $\pm$0.28 | 4.53 $\pm$0.03 | 59.34 $\pm$3.76 | 10.42 $\pm$0.71 | 84.83 $\pm$2.00 |
| | Transfer | 7.52 $\pm$0.09 | 3.35 $\pm$0.07 | 14.71 $\pm$0.14 | 4.61 $\pm$0.01 | 36.62 $\pm$0.60 | 7.33 $\pm$0.10 | 97.80 $\pm$0.08 |
| | GVCL-Det* | 6.12 $\pm$0.11 | 2.86 $\pm$0.09 | 13.25 $\pm$0.31 | 4.26 $\pm$0.05 | 31.96 $\pm$3.01 | 10.87 $\pm$0.49 | 93.05 $\pm$1.21 |
| | SNGP${}_{\text{U}}$ | 5.56 $\pm$0.09 | 2.85 $\pm$0.07 | 12.64 $\pm$0.15 | 4.61 $\pm$0.02 | 46.82 $\pm$1.37 | 13.37 $\pm$0.09 | 86.00 $\pm$0.95 |
| | SNGP${}_{\text{I}}$ | 5.44 $\pm$0.13 | 2.74 $\pm$0.06 | 12.38 $\pm$0.29 | 3.90 $\pm$0.01 | 27.88 $\pm$1.54 | 12.62 $\pm$0.64 | 86.18 $\pm$0.77 |
| 1 | Base | 8.39 $\pm$1.16 | 3.44 $\pm$0.30 | 16.25 $\pm$2.27 | 5.23 $\pm$0.10 | 83.20 $\pm$3.84 | 5.39 $\pm$1.92 | 81.48 $\pm$4.34 |
| | Transfer | 8.44 $\pm$0.07 | 4.18 $\pm$0.08 | 15.71 $\pm$0.27 | 5.52 $\pm$0.01 | 52.92 $\pm$0.11 | 4.53 $\pm$0.15 | 98.29 $\pm$0.27 |
| | SNGP${}_{\text{U}}$ | 6.33 $\pm$0.64 | 2.88 $\pm$0.07 | 12.76 $\pm$1.07 | 5.48 $\pm$0.01 | 77.64 $\pm$3.38 | 8.48 $\pm$0.92 | 70.42 $\pm$3.97 |
| | SNGP${}_{\text{I}}$ | 5.39 $\pm$0.28 | 2.68 $\pm$0.07 | 12.27 $\pm$0.53 | 4.40 $\pm$0.01 | 50.19 $\pm$1.15 | 9.94 $\pm$0.56 | 79.34 $\pm$2.54 |

Table 3: Average performance and standard deviation of 5 independent repetitions over decreasing subsamples of Argoverse2 (bold as best).

| Data (in %) | Model | minADE1 | minADE5 | minFDE1 | NLL | RNK | ACC (in %) | DAC (in %) |
|---|---|---|---|---|---|---|---|---|
| 100 | Base | 3.57 $\pm$0.07 | 1.84 $\pm$0.04 | 8.96 $\pm$0.15 | 2.73 $\pm$0.01 | 7.87 $\pm$0.58 | 24.46 $\pm$0.59 | 94.77 $\pm$0.32 |
| | Transfer | 3.60 $\pm$0.04 | 1.76 $\pm$0.02 | 8.78 $\pm$0.08 | 2.68 $\pm$0.01 | 7.31 $\pm$0.18 | 24.61 $\pm$0.33 | 96.91 $\pm$0.09 |
| | SNGP${}_{\text{U}}$ | 3.60 $\pm$0.04 | 1.86 $\pm$0.03 | 9.00 $\pm$0.15 | 2.74 $\pm$0.03 | 8.19 $\pm$0.28 | 25.24 $\pm$0.29 | 95.00 $\pm$0.47 |
| | SNGP${}_{\text{I}}$ | 3.51 $\pm$0.06 | 1.82 $\pm$0.03 | 8.73 $\pm$0.07 | 2.69 $\pm$0.01 | 7.69 $\pm$0.17 | 25.56 $\pm$0.42 | 95.01 $\pm$0.12 |
| 50 | Base | 3.93 $\pm$0.10 | 1.97 $\pm$0.07 | 9.78 $\pm$0.31 | 2.98 $\pm$0.07 | 9.89 $\pm$0.53 | 21.02 $\pm$1.17 | 93.99 $\pm$1.58 |
| | Transfer | 3.80 $\pm$0.01 | 1.83 $\pm$0.02 | 9.36 $\pm$0.05 | 2.80 $\pm$0.01 | 8.16 $\pm$0.02 | 23.41 $\pm$0.29 | 97.04 $\pm$0.22 |
| | SNGP${}_{\text{U}}$ | 3.84 $\pm$0.04 | 2.01 $\pm$0.02 | 9.67 $\pm$0.15 | 2.89 $\pm$0.02 | 9.95 $\pm$0.28 | 23.53 $\pm$0.40 | 94.93 $\pm$0.19 |
| | SNGP${}_{\text{I}}$ | 3.76 $\pm$0.02 | 1.95 $\pm$0.02 | 9.38 $\pm$0.06 | 2.84 $\pm$0.02 | 9.27 $\pm$0.15 | 23.81 $\pm$0.76 | 94.92 $\pm$0.64 |
| 30 | Base | 4.22 $\pm$0.10 | 2.04 $\pm$0.01 | 10.41 $\pm$0.28 | 3.07 $\pm$0.06 | 11.18 $\pm$1.33 | 19.76 $\pm$0.49 | 94.37 $\pm$0.71 |
| | Transfer | 3.99 $\pm$0.02 | 1.89 $\pm$0.02 | 9.76 $\pm$0.05 | 2.91 $\pm$0.01 | 9.07 $\pm$0.04 | 22.02 $\pm$0.23 | 97.15 $\pm$0.23 |
| | SNGP${}_{\text{U}}$ | 3.98 $\pm$0.03 | 2.09 $\pm$0.04 | 9.95 $\pm$0.09 | 2.99 $\pm$0.01 | 11.40 $\pm$0.30 | 22.79 $\pm$0.22 | 94.73 $\pm$0.56 |
| | SNGP${}_{\text{I}}$ | 3.96 $\pm$0.04 | 2.04 $\pm$0.02 | 9.88 $\pm$0.12 | 2.95 $\pm$0.02 | 10.54 $\pm$0.18 | 22.60 $\pm$0.30 | 94.94 $\pm$0.58 |
| 10 | Base | 4.70 $\pm$0.10 | 2.25 $\pm$0.02 | 11.43 $\pm$0.16 | 3.42 $\pm$0.06 | 17.17 $\pm$0.28 | 16.91 $\pm$0.38 | 93.63 $\pm$0.52 |
| | Transfer | 4.49 $\pm$0.02 | 2.07 $\pm$0.02 | 10.76 $\pm$0.05 | 3.21 $\pm$0.01 | 12.40 $\pm$0.07 | 18.46 $\pm$0.27 | 97.48 $\pm$0.55 |
| | SNGP${}_{\text{U}}$ | 4.26 $\pm$0.03 | 2.23 $\pm$0.03 | 10.49 $\pm$0.22 | 3.19 $\pm$0.02 | 14.99 $\pm$0.13 | 20.19 $\pm$0.29 | 94.22 $\pm$0.89 |
| | SNGP${}_{\text{I}}$ | 4.23 $\pm$0.08 | 2.22 $\pm$0.05 | 10.47 $\pm$0.20 | 3.15 $\pm$0.02 | 13.85 $\pm$0.38 | 20.90 $\pm$0.16 | 94.35 $\pm$0.65 |
| 5 | Base | 5.04 $\pm$0.09 | 2.41 $\pm$0.06 | 12.33 $\pm$0.23 | 3.67 $\pm$0.02 | 23.73 $\pm$0.87 | 15.05 $\pm$0.80 | 90.79 $\pm$1.79 |
| | Transfer | 4.94 $\pm$0.01 | 2.25 $\pm$0.01 | 11.49 $\pm$0.02 | 3.50 $\pm$0.01 | 16.80 $\pm$0.03 | 15.86 $\pm$0.16 | 97.12 $\pm$0.38 |
| | SNGP${}_{\text{U}}$ | 4.43 $\pm$0.04 | 2.31 $\pm$0.02 | 11.06 $\pm$0.08 | 3.36 $\pm$0.02 | 18.92 $\pm$0.29 | 19.32 $\pm$0.29 | 91.60 $\pm$0.82 |
| | SNGP${}_{\text{I}}$ | 4.41 $\pm$0.01 | 2.24 $\pm$0.02 | 10.92 $\pm$0.11 | 3.28 $\pm$0.01 | 16.30 $\pm$0.30 | 19.36 $\pm$0.48 | 93.17 $\pm$1.20 |
| 3 | Base | 5.41 $\pm$0.09 | 2.48 $\pm$0.11 | 12.99 $\pm$0.47 | 3.88 $\pm$0.03 | 28.85 $\pm$1.35 | 13.55 $\pm$0.47 | 90.96 $\pm$0.50 |
| | Transfer | 5.44 $\pm$0.01 | 2.44 $\pm$0.07 | 12.35 $\pm$0.04 | 3.73 $\pm$0.01 | 20.89 $\pm$0.04 | 13.81 $\pm$0.07 | 97.16 $\pm$0.33 |
| | SNGP${}_{\text{U}}$ | 4.54 $\pm$0.04 | 2.34 $\pm$0.02 | 11.31 $\pm$0.13 | 3.50 $\pm$0.01 | 21.96 $\pm$0.34 | 17.78 $\pm$0.22 | 91.28 $\pm$0.82 |
| | SNGP${}_{\text{I}}$ | 4.51 $\pm$0.05 | 2.33 $\pm$0.04 | 11.06 $\pm$0.14 | 3.41 $\pm$0.01 | 18.04 $\pm$0.33 | 17.94 $\pm$0.20 | 92.49 $\pm$0.79 |
| 1 | Base | 5.96 $\pm$0.26 | 2.75 $\pm$0.04 | 14.15 $\pm$0.62 | 4.46 $\pm$0.01 | 50.60 $\pm$0.99 | 11.33 $\pm$0.96 | 87.43 $\pm$3.49 |
| | Transfer | 6.52 $\pm$0.03 | 2.95 $\pm$0.01 | 14.30 $\pm$0.06 | 4.28 $\pm$0.01 | 33.31 $\pm$0.12 | 10.02 $\pm$0.02 | 98.70 $\pm$0.03 |
| | SNGP${}_{\text{U}}$ | 5.02 $\pm$0.05 | 2.53 $\pm$0.02 | 12.26 $\pm$0.13 | 3.96 $\pm$0.01 | 40.58 $\pm$0.50 | 15.14 $\pm$0.50 | 89.11 $\pm$0.83 |
| | SNGP${}_{\text{I}}$ | 5.00 $\pm$0.09 | 2.50 $\pm$0.02 | 12.14 $\pm$0.19 | 3.75 $\pm$0.02 | 26.63 $\pm$0.90 | 15.12 $\pm$0.39 | 90.34 $\pm$0.64 |

Figure 2: Average performance and standard deviation in NLL, minFDE1 and DAC of five repetitions for the informed and non-informed CoverNet-SNGP over decreasing subsamples of NuScenes.

Tab. 2 and Tab. 3 show the performance of our CoverNet-SNGP${}_{\text{I}}$ in comparison to the baselines on NuScenes and Argoverse2, respectively. We observe that the prior drivability knowledge leads to notable performance benefits for our CoverNet-SNGP${}_{\text{I}}$ and the informed baselines (Transfer-CoverNet, GVCL-Det-CoverNet) across most metrics. The benefits from the prior drivability knowledge are most substantial in the calibration-sensitive metrics (RNK and notably NLL, e.g., as seen in Fig. 2), which directly benefit from the optimization in the knowledge tasks. The drivability knowledge is less helpful in discerning the best candidate among the remaining drivable candidate trajectories, leading to smaller benefits in the respective metrics (minADE1, minFDE1, ACC). We also observe that Transfer-CoverNet's benefits are limited to higher data regimes. In low data regimes, Transfer-CoverNet can even perform substantially worse than Base-CoverNet across all metrics (except DAC). In these low data regimes, Transfer-CoverNet may converge to less adequate minima, as its weight initialization is overly biased towards drivability (illustrated by the rising DAC). In contrast, GVCL-Det-CoverNet and our CoverNet-SNGP${}_{\text{I}}$ never decrease performance, with consistent benefits especially in low data regimes. This highlights a principal advantage of the PIL approach, where the informative prior helps to shape the complete loss landscape during training.
In comparison to GVCL-Det-CoverNet, our CoverNet-SNGP${}_{\text{I}}$ shows benefits across most metrics, especially in low data regimes, even though both are trained using the PIL approach. The advantage is most visible in the metrics concerning the most likely trajectory (minADE1, ACC). CoverNet-SNGP${}_{\text{I}}$ also shows more stable results with lower standard deviations. Here, our CoverNet-SNGP${}_{\text{I}}$ profits from using the full information of the posterior distribution at inference.

### 6.2 Effect of Location-Specific Training

Table 4: Average performance and standard deviation of 5 independent repetitions trained on Singapore and Boston locations from NuScenes.

| Train Location | Model | Test Location | minADE1 | minADE5 | minFDE1 | NLL | RNK | ACC | DAC |
|---|---|---|---|---|---|---|---|---|---|
| Singapore | Base | Singapore | 5.33 $\pm$0.40 | 2.37 $\pm$0.06 | 11.54 $\pm$0.80 | 3.69 $\pm$0.06 | 19.79 $\pm$0.71 | 12.97 $\pm$1.89 | 84.94 $\pm$1.35 |
| | | Boston | 5.83 $\pm$0.22 | 2.64 $\pm$0.04 | 12.76 $\pm$0.41 | 3.93 $\pm$0.07 | 24.98 $\pm$1.08 | 10.18 $\pm$0.80 | 89.79 $\pm$2.13 |
| | Transfer | Singapore | 5.47 $\pm$0.07 | 2.35 $\pm$0.03 | 11.41 $\pm$0.14 | 3.49 $\pm$0.02 | 13.79 $\pm$0.18 | 11.49 $\pm$0.50 | 94.29 $\pm$0.59 |
| | | Boston | 6.65 $\pm$0.10 | 2.94 $\pm$0.03 | 14.26 $\pm$0.20 | 4.09 $\pm$0.01 | 24.09 $\pm$0.31 | 8.55 $\pm$0.40 | 96.09 $\pm$0.20 |
| | SNGP${}_{\text{U}}$ | Singapore | 4.48 $\pm$0.06 | 2.26 $\pm$0.02 | 10.05 $\pm$0.16 | 3.38 $\pm$0.03 | 15.06 $\pm$0.31 | 15.85 $\pm$0.46 | 85.30 $\pm$1.01 |
| | | Boston | 5.38 $\pm$0.15 | 2.71 $\pm$0.05 | 12.20 $\pm$0.35 | 3.65 $\pm$0.02 | 20.81 $\pm$0.56 | 13.15 $\pm$0.73 | 90.37 $\pm$0.95 |
| | SNGP${}_{\text{I}}$ | Singapore | 4.43 $\pm$0.07 | 2.20 $\pm$0.06 | 9.83 $\pm$0.15 | 3.31 $\pm$0.06 | 13.64 $\pm$1.31 | 15.84 $\pm$1.05 | 86.56 $\pm$0.50 |
| | | Boston | 5.36 $\pm$0.09 | 2.68 $\pm$0.08 | 12.18 $\pm$0.26 | 3.65 $\pm$0.01 | 21.56 $\pm$1.40 | 12.95 $\pm$0.74 | 90.68 $\pm$0.79 |
| Boston | Base | Boston | 5.02 $\pm$0.20 | 2.32 $\pm$0.09 | 11.18 $\pm$0.49 | 3.57 $\pm$0.08 | 18.10 $\pm$1.31 | 13.18 $\pm$1.23 | 90.18 $\pm$2.13 |
| | | Singapore | 5.69 $\pm$0.28 | 2.73 $\pm$0.15 | 12.77 $\pm$0.78 | 3.88 $\pm$0.07 | 23.42 $\pm$0.47 | 11.37 $\pm$0.98 | 82.03 $\pm$2.92 |
| | Transfer | Boston | 4.78 $\pm$0.06 | 2.19 $\pm$0.01 | 10.21 $\pm$0.12 | 3.41 $\pm$0.01 | 14.39 $\pm$0.19 | 14.02 $\pm$0.56 | 96.50 $\pm$0.74 |
| | | Singapore | 5.63 $\pm$0.06 | 2.64 $\pm$0.04 | 12.17 $\pm$0.20 | 3.70 $\pm$0.01 | 18.77 $\pm$0.16 | 11.40 $\pm$0.61 | 93.10 $\pm$1.15 |
| | SNGP${}_{\text{U}}$ | Boston | 4.62 $\pm$0.10 | 2.23 $\pm$0.02 | 10.46 $\pm$0.25 | 3.32 $\pm$0.01 | 14.83 $\pm$0.40 | 16.57 $\pm$0.64 | 93.31 $\pm$0.40 |
| | | Singapore | 4.94 $\pm$0.07 | 2.61 $\pm$0.11 | 11.27 $\pm$0.17 | 3.58 $\pm$0.03 | 19.48 $\pm$0.16 | 14.93 $\pm$1.06 | 83.28 $\pm$0.60 |
| | SNGP${}_{\text{I}}$ | Boston | 4.50 $\pm$0.04 | 2.19 $\pm$0.02 | 10.13 $\pm$0.11 | 3.26 $\pm$0.02 | 12.97 $\pm$0.33 | 16.94 $\pm$0.60 | 94.01 $\pm$0.27 |
| | | Singapore | 4.82 $\pm$0.07 | 2.60 $\pm$0.07 | 10.95 $\pm$0.18 | 3.52 $\pm$0.04 | 18.39 $\pm$0.66 | 15.60 $\pm$0.70 | 85.36 $\pm$1.18 |

Table 5: Average performance and standard deviation of 5 independent repetitions trained on Palo-Alto and Miami locations from Argoverse2.
| Train Location | Model | Test Location | minADE1 | minADE5 | minFDE1 | NLL | RNK | ACC | DAC |
|---|---|---|---|---|---|---|---|---|---|
| Palo-Alto | Base | Palo-Alto | 4.94 $\pm$0.12 | 2.35 $\pm$0.05 | 12.13 $\pm$0.20 | 3.45 $\pm$0.07 | 17.41 $\pm$0.21 | 14.72 $\pm$1.01 | 92.94 $\pm$1.41 |
| | | Ex-Palo-Alto | 5.02 $\pm$0.42 | 2.51 $\pm$0.23 | 12.24 $\pm$0.51 | 3.65 $\pm$0.12 | 22.18 $\pm$1.20 | 14.18 $\pm$0.79 | 91.90 $\pm$1.53 |
| | Transfer | Palo-Alto | 4.91 $\pm$0.05 | 2.19 $\pm$0.01 | 11.32 $\pm$0.13 | 3.27 $\pm$0.01 | 13.75 $\pm$0.13 | 18.66 $\pm$0.43 | 95.92 $\pm$0.38 |
| | | Ex-Palo-Alto | 5.33 $\pm$0.03 | 2.44 $\pm$0.01 | 12.39 $\pm$0.90 | 3.63 $\pm$0.01 | 18.30 $\pm$0.13 | 13.68 $\pm$0.34 | 95.92 $\pm$0.46 |
| | SNGP${}_{\text{U}}$ | Palo-Alto | 4.23 $\pm$0.06 | 2.20 $\pm$0.01 | 10.63 $\pm$0.19 | 3.11 $\pm$0.03 | 15.03 $\pm$0.56 | 23.42 $\pm$0.35 | 92.02 $\pm$1.74 |
| | | Ex-Palo-Alto | 4.55 $\pm$0.05 | 2.38 $\pm$0.02 | 11.35 $\pm$0.13 | 3.37 $\pm$0.02 | 18.66 $\pm$0.55 | 18.58 $\pm$0.68 | 92.06 $\pm$1.55 |
| | SNGP${}_{\text{I}}$ | Palo-Alto | 4.23 $\pm$0.05 | 2.19 $\pm$0.04 | 10.40 $\pm$0.22 | 3.06 $\pm$0.03 | 13.72 $\pm$0.54 | 22.38 $\pm$0.47 | 91.74 $\pm$3.01 |
| | | Ex-Palo-Alto | 4.57 $\pm$0.11 | 2.37 $\pm$0.04 | 11.30 $\pm$0.32 | 3.35 $\pm$0.01 | 17.43 $\pm$0.54 | 18.09 $\pm$0.74 | 91.88 $\pm$2.25 |
| Miami | Base | Miami | 4.02 $\pm$0.21 | 2.22 $\pm$0.11 | 10.28 $\pm$0.35 | 3.45 $\pm$0.06 | 14.12 $\pm$0.78 | 18.97 $\pm$0.31 | 95.20 $\pm$0.98 |
| | | Ex-Miami | 4.29 $\pm$0.22 | 2.31 $\pm$0.13 | 11.01 $\pm$0.39 | 3.47 $\pm$0.09 | 16.18 $\pm$0.92 | 17.92 $\pm$0.79 | 94.99 $\pm$1.12 |
| | Transfer | Miami | 3.91 $\pm$0.01 | 1.85 $\pm$0.01 | 9.52 $\pm$0.02 | 2.94 $\pm$0.01 | 9.17 $\pm$0.04 | 21.33 $\pm$0.29 | 97.42 $\pm$0.72 |
| | | Ex-Miami | 4.31 $\pm$0.02 | 2.07 $\pm$0.01 | 10.47 $\pm$0.05 | 3.10 $\pm$0.01 | 10.62 $\pm$0.04 | 19.65 $\pm$0.35 | 97.41 $\pm$0.98 |
| | SNGP${}_{\text{U}}$ | Miami | 3.88 $\pm$0.04 | 2.03 $\pm$0.02 | 9.74 $\pm$0.11 | 3.00 $\pm$0.01 | 11.48 $\pm$0.16 | 22.07 $\pm$0.41 | 95.58 $\pm$0.40 |
| | | Ex-Miami | 4.15 $\pm$0.04 | 2.21 $\pm$0.02 | 10.44 $\pm$0.13 | 3.11 $\pm$0.01 | 13.56 $\pm$0.21 | 21.50 $\pm$0.51 | 94.81 $\pm$0.35 |
| | SNGP${}_{\text{I}}$ | Miami | 3.88 $\pm$0.05 | 1.99 $\pm$0.02 | 9.65 $\pm$0.15 | 2.99 $\pm$0.01 | 10.75 $\pm$0.21 | 21.71 $\pm$0.53 | 95.21 $\pm$0.46 |
| | | Ex-Miami | 4.17 $\pm$0.05 | 2.20 $\pm$0.03 | 10.42 $\pm$0.15 | 3.09 $\pm$0.02 | 12.68 $\pm$0.31 | 21.25 $\pm$0.59 | 94.26 $\pm$0.58 |

Figure 3: Average performance and standard deviation of the informed and non-informed CoverNet-SNGP on Boston and Singapore test data, with (a) models trained on Singapore training data and (b) models trained on Boston training data (five repetitions).

Tab. 4 and Tab. 5 show the location-specific performance of our CoverNet-SNGP${}_{\text{I}}$ in comparison to the baselines on NuScenes and Argoverse2, respectively. We observe that the performance generally and substantially deteriorates in locations that are not included in the training data. This sensitivity of trajectory prediction models to location transfers can be a major limitation to their practical use. We also observe that our CoverNet-SNGP${}_{\text{I}}$ can help to alleviate this issue by consistently improving the generalization over location transfers. This is most visible in the comparison of the Boston-trained models on NuScenes (see Fig. 3) and the Palo-Alto-trained models on Argoverse2, where we see better performance across most metrics in both same-location and location-transfer tests.
The Transfer-CoverNet baseline performs even worse than Base-CoverNet in these cases, pointing to the same limitation we saw in Sec. 6.1 regarding its bias. In the other two comparisons, CoverNet-SNGP${}_{\text{I}}$ still shows advantages (notably in NLL). However, in the case of Miami in Argoverse2, more training data is available (compare Sec. 6.1), and in the case of Singapore in NuScenes, the drivability knowledge might be less useful (see Fig. 3), since all models achieve a lower DAC.

## 7 Conclusion

Our work introduces a novel regularization-based continual learning method for the SNGP model. We apply this method in a PIL approach for trajectory prediction in autonomous driving, deriving a compute-efficient informed CoverNet-SNGP model integrating prior drivability knowledge. We demonstrate on two public datasets that our informed CoverNet-SNGP increases data efficiency and robustness to location transfers, outperforming informed and non-informed baselines in low data regimes. Thus, we show that our proposed continual learning method is a feasible way to regularize SNGPs using informative priors. In future work, we plan to apply informed SNGPs to more recent transformer-based prediction models using self-supervised learning and to investigate robustness against adversarial attacks and outliers.

## Acknowledgments

The research leading to these results is funded by the German Federal Ministry for Economic Affairs and Climate Action within the project "KI Wissen – Entwicklung von Methoden für die Einbindung von Wissen in maschinelles Lernen". The authors would like to thank the consortium for the successful cooperation.

## References

* [Bagus and Gepperth, 2021] Benedikt Bagus and Alexander Gepperth. An investigation of replay-based approaches for continual learning. In International Joint Conference on Neural Networks, IJCNN 2021. IEEE, 2021.
* [Bahari et al., 2021] Mohammadhossein Bahari, Ismail Nejjar, and Alexandre Alahi. Injecting Knowledge in Data-driven Vehicle Trajectory Predictors. Transportation Research Part C: Emerging Technologies, 2021.
* [Boulton et al., 2021] Freddy A. Boulton, Elena Corina Grigore, and Eric M. Wolff. Motion Prediction using Trajectory Sets and Self-Driving Domain Knowledge. arXiv preprint, https://arxiv.org/abs/2006.04767, 2021.
* [Caesar et al., 2020] Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, 2020.
* [Charpentier et al., 2023] Bertrand Charpentier, Chenxiang Zhang, and Stephan Günnemann. Training, architecture, and prior for deterministic uncertainty methods. arXiv preprint, https://arxiv.org/abs/2303.05796, 2023.
* [Cormen et al., 2009] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, 3rd Edition. MIT Press, 2009.
* [Cui et al., 2020] Henggang Cui, Thi Nguyen, Fang-Chieh Chou, Tsung-Han Lin, Jeff Schneider, David Bradley, and Nemanja Djuric. Deep kinematic models for kinematically feasible vehicle trajectory predictions. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation, ICRA 2020, Paris, France, 2020.
* [De Lange et al., 2022] Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Gregory G. Slabaugh, and Tinne Tuytelaars.
A continual learning survey: Defying forgetting in classification tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
* [Derakhshani et al., 2021] Mohammad Mahdi Derakhshani, Xiantong Zhen, Ling Shao, and Cees Snoek. Kernel continual learning. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, Proceedings of Machine Learning Research. PMLR, 2021.
* [Freiesleben and Grote, 2023] Timo Freiesleben and Thomas Grote. Beyond generalization: a theory of robustness in machine learning. Synthese, 2023.
* [Huang et al., 2022] Yanjun Huang, Jiatong Du, Ziru Yang, Zewei Zhou, Lin Zhang, and Hong Chen. A Survey on Trajectory-Prediction Methods for Autonomous Driving. IEEE Transactions on Intelligent Vehicles, 2022.
* [Kirkpatrick et al., 2017] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences of the United States of America, 2017.
* [Kristiadi et al., 2020] Agustinus Kristiadi, Matthias Hein, and Philipp Hennig. Being Bayesian, even just a bit, fixes overconfidence in ReLU networks. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, Proceedings of Machine Learning Research. PMLR, 2020.
* [Liu et al., 2020] Jeremiah Z. Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax-Weiss, and Balaji Lakshminarayanan. Simple and principled uncertainty estimation with deterministic deep learning via distance awareness. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, 2020.
* [Loo et al., 2021] Noel Loo, Siddharth Swaroop, and Richard E. Turner. Generalized variational continual learning. In 9th International Conference on Learning Representations, ICLR 2021. OpenReview.net, 2021.
* [Makansi et al., 2022] Osama Makansi, Julius von Kügelgen, Francesco Locatello, Peter Vincent Gehler, Dominik Janzing, Thomas Brox, and Bernhard Schölkopf. You mostly walk alone: Analyzing feature attribution in trajectory prediction. In The Tenth International Conference on Learning Representations, ICLR 2022. OpenReview.net, 2022.
* [Malinin et al., 2021] Andrey Malinin, Neil Band, Yarin Gal, Mark J. F. Gales, Alexander Ganshin, German Chesnokov, Alexey Noskov, Andrey Ploskonosov, Liudmila Prokhorenkova, Ivan Provilkov, Vatsal Raina, Vyas Raina, Denis Roginskiy, Mariya Shmatova, Panagiotis Tigas, and Boris Yangel. Shifts: A dataset of real distributional shift across multiple large-scale tasks. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, 2021.
* [Parisi et al., 2019] German Ignacio Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. Neural Networks, 2019.
* [Phan-Minh et al., 2020] Tung Phan-Minh, Elena Corina Grigore, Freddy A. Boulton, Oscar Beijbom, and Eric M. Wolff. Covernet: Multimodal behavior prediction using trajectory sets. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, 2020.
* [Postels et al., 2022] Janis Postels, Mattia Segù, Tao Sun, Luca Daniel Sieber, Luc Van Gool, Fisher Yu, and Federico Tombari.
On the practicality of deterministic epistemic uncertainty. In Proceedings of the 39th International Conference on Machine Learning, ICML 2022, Proceedings of Machine Learning Research. PMLR, 2022.
* [Rahimi and Recht, 2007] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems 20: Annual Conference on Neural Information Processing Systems 2007, NeurIPS 2007. Curran Associates, Inc., 2007.
* [Schlauch et al., 2023] Christian Schlauch, Christian Wirth, and Nadja Klein. Informed priors for knowledge integration in trajectory prediction. In Machine Learning and Knowledge Discovery in Databases: Research Track - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2023, Turin, Italy. Springer, 2023.
* [Schwarz et al., 2018] Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual learning. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Proceedings of Machine Learning Research. PMLR, 2018.
* [Shwartz-Ziv et al., 2022] Ravid Shwartz-Ziv, Micah Goldblum, Hossein Souri, Sanyam Kapoor, Chen Zhu, Yann LeCun, and Andrew Gordon Wilson. Pre-train your loss: Easy bayesian transfer learning with informative priors. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, 2022.
* [Titsias et al., 2020] Michalis K. Titsias, Jonathan Schwarz, Alexander G. de G. Matthews, Razvan Pascanu, and Yee Whye Teh. Functional regularisation for continual learning with gaussian processes. In 8th International Conference on Learning Representations, ICLR 2020. OpenReview.net, 2020.
* [van Amersfoort et al., 2021] Joost R. van Amersfoort, Lewis Smith, Andrew Jesson, Oscar Key, and Yarin Gal. On feature collapse and deep kernel learning for single forward pass uncertainty. arXiv preprint, 2021.
* [von Rueden et al., 2021] Laura von Rueden, Sebastian Mayer, Katharina Beckh, Bogdan Georgiev, Sven Giesselbach, Raoul Heese, Birgit Kirsch, Julius Pfrommer, Annika Pick, Rajkumar Ramamurthy, Michal Walczak, Jochen Garcke, Christian Bauckhage, and Jannis Schuecker. Informed Machine Learning – A Taxonomy and Survey of Integrating Knowledge into Learning Systems. IEEE Transactions on Knowledge and Data Engineering, 2021.
* [Wilson et al., 2023] Benjamin Wilson, William Qi, Tanmay Agarwal, John Lambert, Jagjeet Singh, Siddhesh Khandelwal, Bowen Pan, Ratnesh Kumar, Andrew Hartnett, Jhony Kaesemodel Pontes, Deva Ramanan, Peter Carr, and James Hays. Argoverse 2: Next generation datasets for self-driving perception and forecasting. arXiv preprint, https://arxiv.org/abs/2301.00493, 2023.
* [Wörmann et al., 2022] Julian Wörmann, Daniel Bogdoll, Etienne Bührle, Han Chen, Evaristus Fuh Chuo, Kostadin Cvejoski, Ludger van Elst, Tobias Gleißner, Philip Gottschall, Stefan Griesche, Christian Hellert, Christian Hesels, Sebastian Houben, Tim Joseph, Niklas Keil, Johann Kelsch, Hendrik Königshof, Erwin Kraft, Leonie Kreuser, Kevin Krone, Tobias Latka, Denny Mattern, Stefan Matthes, Mohsin Munir, Moritz Nekolla, Adrian Paschke, Maximilian Alexander Pintz, Tianming Qiu, Faraz Qureishi, Syed Tahseen Raza Rizvi, Jörg Reichardt, Laura von Rueden, Stefan Rudolph, Alexander Sagel, Gerhard Schunk, Hao Shen, Hendrik Stapelbroek, Vera Stehr, Gurucharan Srinivas, Anh Tuan Tran, Abhishek Vivekanandan, Ya Wang, Florian Wasserrab, Tino Werner, Christian Wirth, and Stefan Zwicklbauer. Knowledge Augmented Machine Learning with Applications in Autonomous Driving: A Survey. arXiv preprint, https://arxiv.org/abs/2205.04712, 2022.
# The All-Seeing Project V2: Towards General Relation Comprehension of the Open World

Weiyun Wang$^{2,1}$, Yiming Ren$^{3,1}$, Haowen Luo$^{3}$, Tiantong Li$^{3,1}$, Chenxiang Yan$^{3}$, Zhe Chen$^{5,1}$, Wenhai Wang$^{4,1}$, Qingyun Li$^{6,1}$, Lewei Lu$^{7}$, Xizhou Zhu$^{3,1,7}$, Yu Qiao$^{1}$, Jifeng Dai$^{\dagger 3,1}$

$^{1}$OpenGVLab, Shanghai AI Laboratory  $^{2}$Fudan University  $^{3}$Tsinghua University  $^{4}$The Chinese University of Hong Kong  $^{5}$Nanjing University  $^{6}$Harbin Institute of Technology  $^{7}$SenseTime Research

†Corresponding author: Jifeng Dai<EMAIL_ADDRESS>

###### Abstract

We present the All-Seeing Project V2: a new model and dataset designed for understanding object relations in images. Specifically, we propose the All-Seeing Model V2 (ASMv2) that integrates the formulation of text generation, object localization, and relation comprehension into a relation conversation (ReC) task. Leveraging this unified task, our model excels not only in perceiving and recognizing all objects within the image but also in grasping the intricate relation graph between them, diminishing the relation hallucination often encountered by Multi-modal Large Language Models (MLLMs). To facilitate training and evaluation of MLLMs in relation understanding, we created the first high-quality ReC dataset (AS-V2) which is aligned with the format of standard instruction tuning data. In addition, we design a new benchmark, termed Circular-based Relation Probing Evaluation (CRPE) for comprehensively evaluating the relation comprehension capabilities of MLLMs. Notably, our ASMv2 achieves an overall accuracy of 52.04 on this relation-aware benchmark, surpassing the 43.14 of LLaVA-1.5 by a large margin. We hope that our work can inspire more future research and contribute to the evolution towards artificial general intelligence. Our project is released at https://github.com/OpenGVLab/all-seeing.

###### Keywords: Multimodal Large Language Model · Pointer instructions

## 1 Introduction

Figure 1: Overview and comparison of our All-Seeing Model v2 with other MLLMs. (a) Multi-modal Large Language Models (MLLMs) can process both text and images, but they can only capture the holistic visual information of the whole image. (b) Grounded MLLMs can link the objects mentioned in the sentence to the regions in the image while struggling to efficiently understand the relations between objects. (c) Our ASMv2 can comprehend and ground the relations between the objects in the image while maintaining the capabilities of MLLMs and Grounded MLLMs. To enhance relation comprehension ability while maintaining grounding, referring, and other general capabilities, we propose (1) a novel task, termed Relation Conversation (ReC), which unifies the formulation of text generation, object localization, and relation comprehension; (2) a high-quality dataset AS-V2, which consists of over 127K samples for ReC; (3) the All-Seeing Model v2 (ASMv2), which is capable of comprehending and grounding the relations between the objects in the image.

The study of artificial general intelligence (AGI) systems that can match human intelligence and excel in any task across domains represents the ultimate goal in the field of artificial intelligence. Benefiting from the advancements of Large Language Models (LLMs), Multi-modal Large Language Models (MLLMs) have demonstrated impressive capabilities in a variety of Vision-Language tasks, suggesting new avenues for achieving AGI.
However, as shown in Fig. 1(a), most popular MLLMs [50, 49, 11] are limited to understanding images as a whole. As an effective method to improve interaction efficiency, the capabilities of grounding and referring (i.e., adopting bounding boxes in responses) have attracted increasing attention and have been widely integrated into current Grounded MLLMs [81, 62, 8, 2, 80]. Such capabilities empower models to provide visual responses (e.g., bounding boxes), supporting more vision-language tasks such as region captioning [57, 31], referring expression comprehension [30, 57], and referring question answering [91]. However, as shown in Fig. 1(b), existing models primarily focus on recognizing certain objects within images, overlooking the perception of relations between these objects. Due to the lack of appropriate modeling methods and suitable training data for relation knowledge, these models struggle to comprehend the inter-object relations within images accurately. Consequently, these models are prone to hallucinations when dealing with relation questions, or rely overly on language priors for judgment.

To enhance relation comprehension ability while maintaining grounding, referring, and other general capabilities, we introduce a novel task, termed Relation Conversation (ReC). The formulation of ReC unifies the modeling of text generation, object localization, and relation comprehension. Specifically, as depicted in Fig. 1(c), ReC requires the model to generate the text response while simultaneously linking all mentioned objects, as well as the subjects and objects of each predicate in the response, to the corresponding regions in the image. Such an explicit requirement for predicate grounding challenges the model to comprehend relations between objects within the image. Notably, models trained on ReC can be naturally adapted to the Scene Graph Generation task. The grounded objects serve as the nodes in the scene graph while the grounded predicates serve as the edges. Compared with traditional scene graph generation, ReC enables the model to generate the scene graph in an open-ended manner, demonstrating the potential to generalize to previously unseen predicate labels, while also maintaining the general ability of MLLMs.

From the data aspect, we construct the All-Seeing Dataset V2 (AS-V2) comprising 127K high-quality relation conversation samples, built upon existing caption [10], location [45], and relation [84] annotations. Combining AS-V2 with other image-level and region-level multimodal corpora for training, we propose the All-Seeing Model v2 (ASMv2). Benefiting from the tailored task format and data, our model can deal with three types of relation tasks: (1) Relation Conversation, which requires the model to link all mentioned objects and predicates to the corresponding regions in the image; (2) Open-ended Scene Graph Generation, which requires the model to generate a scene graph based on the given image in an open-ended manner; and (3) Predicate Classification, which requires the model to generate a scene graph given the ground-truth object labels and localization. An example of ASMv2 is shown in Fig. 2.

Figure 2: Examples of relation conversation responses from ASMv2.
To evaluate the relation comprehension ability of existing MLLMs, we construct a benchmark called Circular-based Relation Probing Evaluation (CRPE), which is the first benchmark that covers all elements of the relation triplets (subject, predicate, object), providing a systematic platform for the evaluation of relation comprehension ability. CRPE is formulated as single-choice questions and consists of four splits: Existence, Subject, Predicate, and Object. The Existence split evaluates the object recognition ability while the remaining splits are designed to evaluate the relation comprehension capability. Additionally, to evaluate the dependency on language priors, we include abnormal data in CRPE, which depict relations that are rare but reasonable in the real world.

Our main contributions are as follows: (1) We introduce the All-Seeing Project V2, which endows MLLMs with the ability not only to perceive all objects within the image but also to recognize the relations between these objects, leading to superior relation comprehension capability and the potential to generate scene graphs in an open-ended manner. (2) We propose a novel task, termed Relation Conversation, and the corresponding formulation method, unifying the modeling of captioning, grounding, and relation tasks flexibly. Based on this task and formulation, we constructed the AS-V2 dataset. Combining AS-V2 with other general multimodal corpora for training, we propose the All-Seeing Model v2 (ASMv2), which demonstrates powerful performance across various tasks, including Open-ended Scene Graph Generation and other general image-level and region-level vision-language tasks. (3) To evaluate the relation comprehension ability of existing MLLMs, we construct the CRPE benchmark. Notably, our ASMv2 achieves an overall accuracy of 52.04 on CRPE, surpassing the 43.14 of LLaVA-1.5 by a large margin. We also evaluate ASMv2 on various image-level and region-level vision-language tasks. Specifically, our model achieves an overall score of 74.4 on MMBench [51] and 1621.0 on MME [18], surpassing LLaVA-1.5 [49] by 5.5 points and 90.0 points, respectively. Besides, the average accuracy of ASMv2 on grounding benchmarks [30, 57] is 87.42, outperforming Qwen-VL [2] by 1.69 points.

## 2 Related Work

### 2.1 Vision-Language Models

Significant advancements have been made in the field of visual recognition and understanding in recent years. Models based on the image-text matching framework [65, 28, 17, 11] achieve powerful zero-shot performance on various downstream tasks, thereby initiating the era of open-world semantic recognition and understanding. Subsequent works [37, 87] further integrate this framework with language modeling tasks to support more generative tasks. The recent progress of Large Language Models [5, 60, 77] has led to the emergence of many LLM-based multimodal models [35, 105, 50, 81, 38, 94, 11, 76, 89], which aim to integrate the powerful understanding and reasoning ability of LLMs with multimodal perception and comprehension. Despite their powerful performance, these works are only capable of capturing the holistic visual information of the whole image. Some recent methods [8, 62, 98, 80, 32, 95, 66, 52, 61] begin to focus on location-aware understanding. However, due to the lack of appropriate modeling methods and training data for relation comprehension, these methods struggle to comprehend the inter-object relations within images accurately.
To enhance relation comprehension ability while maintaining the other general capabilities of MLLMs, we introduce a novel task, termed Relation Conversation, which unifies the formulation of text generation, object localization, and relation comprehension.

### 2.2 Scene Graph Generation

Scene Graph Generation (SGG) [54] is a crucial task in scene understanding and has attracted substantial interest across the research community. This area has witnessed the proposal of diverse model architectures, including message-passing-based frameworks [41, 14, 42, 92, 21, 23], attention-based networks [102, 64], tree-structured networks [96, 25], and DETR-based networks [39, 70, 13]. While most existing methods only utilize images as input, recent works begin to incorporate language information or knowledge graphs to facilitate SGG [54, 43, 97, 26, 16, 104], although the scope of language utilization remains limited to basic object or relation concepts. Compared to prior specialized models, our model is a powerful general model with strong vision-language understanding and reasoning ability, and can generate the scene graph in an open-ended manner, exhibiting the potential to generalize to previously unseen predicate labels.

### 2.3 Benchmarks for Relation Comprehension

Evaluating the comprehension of relations between objects is a crucial aspect of advancing MLLMs. Benchmarks like Visual Genome [31] and COCO [45, 10] provide foundational datasets for object detection and image captioning. These datasets primarily focus on individual object recognition and general descriptive capabilities. They include annotations for object relations but are not explicitly designed to probe the depth of relation comprehension in a structured and focused manner. Some synthetic datasets [29, 74, 1] are introduced to probe the spatial reasoning capabilities of vision-language models. These datasets offer controlled environments for model evaluation but inherently limit the problem's scope due to their bounded expressivity. The Visual Spatial Reasoning (VSR) dataset [47] asks the model to classify whether the caption correctly describes the relation of two objects presented in the image. This approach primarily focuses on binary classification instead of the understanding of relations within the scene. In this work, we introduce the CRPE benchmark, which consists of different splits, each designed to probe one of the elements in the relation triplet (subject, predicate, object). Therefore, we can evaluate the relation comprehension ability of existing MLLMs more systematically.

## 3 Data Construction

### 3.1 The All-Seeing Dataset v2

Figure 3: Data example in the AS-V2 dataset. In the relation conversation, all mentioned objects are linked to their corresponding regions within the image while the predicates are linked to the regions corresponding to their subjects and objects.

Our objective is to establish a dataset to unlock the Relation Conversation capability for Multi-modal Large Language Models (MLLMs), which requires the model to predict not only the bounding boxes of each object but also those of the subjects and objects for each predicate mentioned in the sentence. In this section, we elaborate on the method for constructing the training dataset for ReC, termed All-Seeing Dataset v2 (AS-V2). Specifically, we utilize GPT-4V [61] to construct AS-V2 based on COCO images [6] and their annotations [84, 45, 10].
The key idea is to query GPT-4V to generate responses while linking the objects and predicates mentioned in the generated response to specific regions within the image, referring to the given location annotations and relation annotations. The formulation of Relation Conversation is presented in Sec. 4.1.

The prompt for GPT-4V comprises six components: (1) Task description, which explains the formulation of relation conversation. (2) The image to be annotated. (3) Caption annotations of this image, intended to enhance GPT-4V's understanding of the scene. (4) Location annotations, which are the bounding boxes of the objects in the scene and guide GPT-4V in annotating the objects in the caption. (5) Relation annotations, which are presented as a list of (subject, predicate, object) triplets and help GPT-4V to annotate the predicates in the caption. (6) Seed examples, which are manually annotated to assist GPT-4V in comprehending the task description and formatting the output. Although the caption annotations are not strictly necessary for GPT-4V to produce the desired relation conversation data, incorporating these details into the prompt significantly reduces the hallucinations in the generated data. An example of the prompt is presented in Appendix 0.B.

The generated data comprise three types: (1) Detailed description, which requires the model to generate a comprehensive description of an image. (2) Region captioning, which requires the model to generate a comprehensive description of a certain region within the image. (3) Conversation, which requires the model to respond to the user query in a multi-turn conversation. The question types include the relations between objects, the object types, counting the objects, object actions, object locations, and relative positions between objects. Each type of data is generated using different task descriptions and human-annotated seed examples. These tasks require the model to understand pointer instructions (e.g., utilizing bounding boxes as prompts) and link the objects and predicates mentioned in the generated response to the image regions. An example is shown in Fig. 3, with additional examples in Appendix 0.B. In this way, we collected 127K relation conversation samples in total: 42K detailed descriptions, 63K region captioning samples, and 22K conversations (90K turns in total). The conversation samples also include negative instructions to enhance model robustness. These instances contain incorrect relations, and the model should be able to recognize their incorrectness.

### 3.2 Circular-based Relation Probing Evaluation

Figure 4: Data examples in the CRPE. The benchmark is formulated as single-choice questions and consists of four splits: Existence, Subject, Predicate, and Object. The Existence split is designed to evaluate the object recognition ability while the remaining splits focus on the evaluation of relation comprehension. A qualitative comparison in abnormal data between LLaVA-1.5 and ASMv2 is shown at the bottom.

In this section, we introduce CRPE, a benchmark designed to quantitatively evaluate the object recognition and relation comprehension capabilities of models. The evaluation is formulated as single-choice questions. For a robust evaluation, we adopt CircularEval [51] as our evaluation strategy. Under this setting, a question is considered correctly answered only when the model consistently predicts the correct answer in each of the $N$ iterations, with $N$ corresponding to the number of choices. In each iteration, a circular shift is applied to both the choices and the answer to form a new query for the model.
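A minimal sketch of this evaluation loop follows, assuming a hypothetical `model.predict` that returns the index of the chosen option (the interface is illustrative, not the benchmark's actual API):

```python
def circular_eval(model, question, choices, answer_idx):
    """CircularEval: the question counts as correct only if the model
    answers correctly under every circular shift of the choices."""
    n = len(choices)
    for shift in range(n):
        shifted = choices[shift:] + choices[:shift]
        # the option originally at answer_idx moves to (answer_idx - shift) mod n
        shifted_answer = (answer_idx - shift) % n
        if model.predict(question, shifted) != shifted_answer:
            return False
    return True
```

Because a random guesser passes all $N$ shifts with probability $1/N^{N}$ rather than $1/N$, this protocol sharply penalizes position bias and lucky guesses.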
As shown in Fig. 4, each sample in our benchmark consists of an image and a single-choice question with one correct answer and three wrong answers. The location annotations [45] and the triplets (subject, predicate, object) in the relation annotations [84] are utilized to generate the evaluation data. We construct four evaluation splits, including (1) the Existence split: the question of this split is "Which of the following objects exists in the image?". The correct answer is sampled from the semantic tags that exist in the image while the incorrect answers are sampled from those that do not exist in the image. (2) the Subject split: we generate the question based on the template "What is <predicate> the <object>?" and consider the subject in the triplet as the correct answer. The negative subjects are sampled from other semantic tags that exist in the image. (3) the Predicate split: we generate the question based on the template "What is the relation between <subject> and <object>?" and consider the predicate in the triplet as the correct answer. The negative predicates are randomly sampled. Only predicates satisfying $P(p|s)>0$ and $P(p|o)>0$ can be sampled, where $p,s,o$ refer to predicates, subjects, and objects, respectively. (4) the Object split: we generate the question based on the template "What is the <subject> <predicate>?" and consider the object in the triplet as the correct answer. The negative objects are sampled from other semantic tags that exist in the image.

To avoid reference ambiguity, we ensure that the semantic tags of the subject and object in each triplet are distinct in the image. We also manually verify the generated samples and filter those with ambiguous questions or choices. Additionally, to evaluate the dependency on language priors, we further include abnormal data in the Predicate split, which depict relations that are rare but reasonable in the real world. Specifically, we first select relation triplets with minimal $P(p|s,o)$ and then employ DALLE-3 [3] to generate corresponding images for these triplets. Considering that the generated images might not match the specified triplets exactly, we perform a manual filtering process for these triplet-image pairs to ensure data quality. After that, we generate the evaluation data using the method mentioned above. Fig. 4 shows abnormal examples at the bottom, and more examples of these abnormal data are presented in Appendix 0.D.

## 4 The All-Seeing Model v2

ASMv2 is a powerful Multi-modal Large Language Model (MLLM), which integrates the Relation Conversation (ReC) ability while maintaining powerful general capabilities. Specifically, it follows the model architecture of LLaVA-1.5 [49], comprising a vision encoder, a vision-language connector, and a language model. This model can deal with three types of relation tasks, including (1) Relation Conversation, which requires the model to link all objects and predicates mentioned in the response to the corresponding regions in the image;
(2) Open-ended Scene Graph Generation, which requires the model to generate a scene graph based on the given image in an open-ended manner. (3) Predicate Classification, which requires the model to generate a scene graph given the ground-truth object labels and localization. In addition, our model is also capable of multi-modal dialogue tasks such as Image Captioning, Visual Question Answering, and multi-turn conversation. Since the ReC task requires the model to link the objects and predicates to the corresponding regions in the image, our ASMv2 is also endowed with grounding and referring capabilities and exhibits state-of-the-art performance on region-level tasks, including Referring Expression Comprehension, Region Captioning, and Referring Question Answering.

### 4.1 Relation Conversation

Figure 5: Data formulation for Relation Conversation. Each object is marked with <ref></ref> and followed by a bounding box indicating its location while each predicate is marked with <pred></pred> and followed by two bounding boxes referring to its subjects and objects.

In this section, we elaborate on the formulation of ReC. Our objective is to propose a task that can enhance the relation comprehension ability while maintaining grounding, referring, and other general capabilities. As depicted in Fig. 5, we represent the sentence in the relation conversation as a text sequence. Specifically, our relation conversation marks the objects and predicates in the sentence using <ref></ref> and <pred></pred>, respectively. Each marked object is followed by a bounding box, indicating its localization. Similarly, each predicate is followed by two bounding boxes, which refer to the subjects and objects of the predicate. All bounding boxes are normalized to integer values within the range [0, 1000) and formatted as: $\texttt{<box>[[}x_{1},y_{1},x_{2},y_{2}\texttt{]]</box>}$. Please refer to Appendix 0.A for more details.

Notably, the response in the relation conversation can be easily parsed into a scene graph. In a typical scene graph, each node denotes an object in the scene grounded by a bounding box with a semantic label, and each directed edge denotes the relation between a pair of objects with a predicate label. By utilizing the prediction of bounding boxes for each object (serving as semantic tags for nodes) and those for subjects and objects related to each predicate (serving as nodes, edges, and predicate labels), the generated ReC sentence can be naturally converted into a scene graph. Nodes without semantic tags are labeled as Unknown.

To convert the response shown in Fig. 5 into a scene graph, we first parse the objects marked by “<ref></ref>” and assign the marked text as the semantic tag of the following bounding box. Here, we assign “bus” and “road” as the semantic tags of the bounding boxes highlighted in red and green, respectively. Then we extract the predicate label marked by “<pred></pred>” (i.e., “driving on”) and the box coordinates of its subjects and objects (i.e., the bounding boxes highlighted with bold underline). After that, we utilize these box coordinates as keys to match their semantic tags. In this example, the bounding box for the subject of “driving on” is matched with the box highlighted in red, so the subject of “driving on” is “bus”. Similarly, its object is “road”. Hence, we obtain the parsed triplet (bus, driving on, road). Note that the bounding boxes of the subject and object are also part of the triplet; we omit them for simplicity.
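To make this parsing procedure concrete, the sketch below extracts triplets from a ReC response with two regular expressions. It is a minimal illustration of the steps described above, not the released implementation; the helper name `parse_rec` and the exact whitespace handling are our own assumptions, and it covers only the single-box format (one box per object, two per predicate; see Appendix 0.A for the multi-box case).

```python
import re

# Hypothetical helper illustrating Sec. 4.1; boxes are normalized to [0, 1000).
REF = re.compile(r"<ref>(.*?)</ref><box>\[\[(\d+), (\d+), (\d+), (\d+)\]\]</box>")
PRED = re.compile(
    r"<pred>(.*?)</pred>"
    r"<box>\[\[(\d+), (\d+), (\d+), (\d+)\]\]</box>"
    r"<box>\[\[(\d+), (\d+), (\d+), (\d+)\]\]</box>"
)

def parse_rec(response: str):
    # 1) Each <ref> marks the semantic tag of the bounding box that follows it.
    tag_of_box = {}
    for m in REF.finditer(response):
        tag_of_box[tuple(map(int, m.groups()[1:]))] = m.group(1)
    # 2) For each <pred>, the first box is the subject and the second the object;
    #    the box coordinates serve as keys to look up their semantic tags.
    triplets = []
    for m in PRED.finditer(response):
        pred = m.group(1)
        subj_box = tuple(map(int, m.groups()[1:5]))
        obj_box = tuple(map(int, m.groups()[5:9]))
        subj = tag_of_box.get(subj_box, "Unknown")  # untagged nodes -> Unknown
        obj = tag_of_box.get(obj_box, "Unknown")
        triplets.append((subj, pred, obj, subj_box, obj_box))
    return triplets

resp = ("<ref>bus</ref><box>[[100, 50, 900, 700]]</box> <pred>driving on</pred>"
        "<box>[[100, 50, 900, 700]]</box><box>[[0, 600, 999, 999]]</box> the "
        "<ref>road</ref><box>[[0, 600, 999, 999]]</box>.")
print(parse_rec(resp))
# [('bus', 'driving on', 'road', (100, 50, 900, 700), (0, 600, 999, 999))]
```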
Compared with the traditional Scene Graph Generation task, our ReC task exhibits three advantages: (1) More flexible. Models trained on our proposed Relation Conversation task can be naturally adapted to the Scene Graph Generation task in an open-ended manner. (2) Open-world. Benefiting from the open-ended generation manner, models trained on ReC have the potential to generalize to previously unseen predicate labels in the Scene Graph Generation task. (3) More general. ReC requires models to generate a text response and link all mentioned objects to their corresponding regions within the image, thereby maintaining the grounding, referring, and general capabilities of MLLMs and broadening the applicability of these models in real-world scenarios.

### 4.2 Model Training

The training process of ASMv2 is divided into two stages, with each stage comprising a pre-training phase and an instruction-tuning phase.

The first stage is designed to enable the model to effectively understand visual information at the image level. The pre-training phase utilizes 595K samples from CC3M [69] filtered by LLaVA [50] while the instruction-tuning phase utilizes a blend of 665K samples from LLaVA-Mix [49]. We update the data format of the region-level data [30, 57, 31] in LLaVA-Mix to the format introduced in Sec. 4.1.

The second stage trains the model with a mixture of image-level data and region-level data, which enables the model to comprehend the visual information at the region level, facilitating effective grounding of objects and predicates within sentences. The pre-training phase employs 5M samples from CC12M [69] filtered by BLIP [36], 10M filtered samples from AS-1B [80], and 15M filtered samples from GRiT [62]. The instruction-tuning phase employs 4M samples collected from a variety of sources, including image-level datasets [50, 49, 99, 9, 48, 47, 29, 63, 56, 12, 4], region-level datasets [30, 57, 31, 80, 91, 100] and our proposed AS-V2 dataset. The summary of these datasets is presented in Tab. 10.

## 5 Experiments

In this section, we first compare our ASMv2 with leading Multi-modal Large Language Models (MLLMs) on representative vision-language benchmarks in Sec. 5.1. In addition to these image-level benchmarks, we also evaluate ASMv2 on three representative region-level tasks in Sec. 5.2. After that, ASMv2 is evaluated on the Open-ended Scene Graph Generation task [84] in Sec. 5.3. The results and analyses of our proposed CRPE are presented in Sec. 5.4. Note that we utilize a consistent checkpoint for all evaluations.

### 5.1 Results on General Benchmarks

Table 1: Results on 12 general visual-language benchmarks. Benchmark names are abbreviated due to space limits. VQA-v2 [20]; GQA [24]; VisWiz [22]; SQA${}^{\text{I}}$: ScienceQA-IMG [56]; VQA${}^{\text{T}}$: TextVQA [72]; POPE [40]; MME [18]; MMB: MMBench [51]; MMB${}^{\text{CN}}$: MMBench-Chinese [51]; SEED: SEED-Bench [33]; LLaVA${}^{\text{W}}$: LLaVA-Bench (In-the-Wild) [50]; MM-Vet [90]. ∗The training images of the datasets are observed during training. The best performances are marked bold.
Model | VQA${}^{\text{v2}}$ | GQA | VisWiz | SQA${}^{\text{I}}$ | VQA${}^{\text{T}}$ | POPE | MME | MMB | MMB${}^{\text{CN}}$ | SEED | LLaVA${}^{\text{W}}$ | MM-Vet
---|---|---|---|---|---|---|---|---|---|---|---|---
BLIP-2 [35] | 41.0 | 41.0 | 19.6 | 61.0 | 42.5 | 85.3 | 1293.8 | - | - | 46.4 | 38.1 | 22.4
InstructBLIP-7B [15] | - | 49.2 | 34.5 | 60.5 | 50.1 | - | - | 36.0 | 23.7 | 53.4 | 60.9 | 26.2
InstructBLIP-13B [15] | - | 49.5 | 33.4 | 63.1 | 50.7 | 78.9 | 1212.8 | - | - | - | 58.2 | 25.6
Shikra [8] | 77.4* | - | - | - | - | - | - | 58.8 | - | - | - | -
IDEFICS-9B [27] | 50.9 | 38.4 | 35.5 | - | 25.9 | - | - | 48.2 | 25.2 | - | - | -
IDEFICS-80B [27] | 60.0 | 45.2 | 36.0 | - | 30.9 | - | - | 54.5 | 38.1 | - | - | -
Qwen-VL [2] | 78.8* | 59.3* | 35.2 | 67.1 | 63.8 | - | - | 38.2 | 7.4 | 56.3 | - | -
Qwen-VL-Chat [2] | 78.2* | 57.5* | 38.9 | 68.2 | 61.5 | - | 1487.5 | 60.6 | 56.7 | 58.2 | - | -
LLaVA-1.5-7B [49] | 78.5* | 62.0* | 50.0 | 66.8 | 58.2 | 85.9 | 1510.7 | 64.3 | 58.3 | 58.6 | 63.4 | 30.5
LLaVA-1.5-13B [49] | 80.0* | 63.3* | 53.6 | 71.6 | 61.3 | 85.9 | 1531.3 | 67.7 | 63.6 | 61.6 | 70.7 | 35.4
VILA-7B [44] | 79.9* | 62.3* | 57.8 | 68.2 | 64.4 | 85.5 | 1533.0 | 68.9 | 61.7 | 61.1 | 69.7 | 34.9
VILA-13B [44] | 80.8* | 63.3* | 60.6 | 73.7 | 66.6 | 84.2 | 1570.1 | 70.3 | 64.3 | 62.8 | 73.0 | 38.8
ASMv2 (ours) | 81.0* | 63.9* | 58.1 | 87.1* | 60.2 | 86.3 | 1621.0 | 74.4 | 64.3 | 66.3 | 78.9 | 41.3

To evaluate the general ability of ASMv2, we perform a comprehensive comparison with leading MLLMs in Tab. 1. Benefiting from its stronger relation comprehension ability, ASMv2 exhibits SoTA performance on these benchmarks.

#### 5.1.1 Results of Visual Question Answering.

On general VQA benchmarks, such as VQAv2 [20] and GQA [24], our ASMv2 demonstrates superior overall performance compared to LLaVA-1.5 [49] and VILA [44]. On the VQAv2 dataset, our ASMv2 outperforms LLaVA-1.5-13B by 1.0 points. Besides, our model also achieves competitive performance with baselines on text-oriented VQA benchmarks, including VizWiz-VQA [22] and TextVQA [72].

#### 5.1.2 Results of Multi-modal Benchmarks.

In recent comprehensive benchmarks, which consist of a wide range of sub-tasks covering various fine-grained capabilities, our model significantly outperforms the current SoTA MLLMs, such as LLaVA-1.5 [49] and VILA [44]. Specifically, our model achieves an overall score of 74.4 on MMBench and 1621.0 on MME, surpassing VILA by 4.1 points and 50.9 points, respectively. Besides, ASMv2 also exhibits state-of-the-art performance on SEED [33], LLaVA-Bench [50], and MM-Vet [90], outperforming baselines by a large margin. These results demonstrate the general ability of our model.

### 5.2 Results on Region-level Benchmarks

To evaluate the region comprehension and grounding capability, we evaluate ASMv2 on three representative region-level tasks, including (1) Referring Expression Comprehension [30, 57], which requires the model to localize the target object conditioned on the given description. (2) Region Captioning [31, 57], which requires the model to generate a caption for a certain object in the image conditioned on the given region. (3) Referring Question Answering [91], which contains region referring in both questions and answers.

Table 2: Accuracy scores on the Referring Expression Comprehension task.
Model | RefCOCO Val | RefCOCO Test-A | RefCOCO Test-B | RefCOCO+ Val | RefCOCO+ Test-A | RefCOCO+ Test-B | RefCOCOg Val | RefCOCOg Test | Avg.
---|---|---|---|---|---|---|---|---|---
OFA-L [79] | 79.96 | 83.67 | 76.39 | 68.29 | 76.00 | 61.75 | 67.57 | 67.50 | 72.64
VisionLLM-H [81] | - | 86.70 | - | - | - | - | - | - | -
Shikra-7B [8] | 87.01 | 90.61 | 80.24 | 81.60 | 87.36 | 72.12 | 82.27 | 82.19 | 82.93
Shikra-13B [8] | 87.83 | 91.11 | 81.81 | 82.89 | 87.79 | 74.41 | 84.64 | 83.16 | 84.21
Qwen-VL-7B [2] | 88.55 | 92.27 | 84.51 | 82.82 | 88.59 | 76.79 | 85.96 | 86.32 | 85.73
MiniGPT-V2-7B [7] | 88.06 | 91.29 | 84.30 | 79.58 | 85.52 | 73.32 | 84.19 | 84.31 | 83.82
Ferret-7B [85] | 87.49 | 91.35 | 82.45 | 80.78 | 87.38 | 73.14 | 83.93 | 84.76 | 83.91
Ferret-13B [85] | 89.48 | 92.41 | 84.36 | 82.81 | 88.14 | 75.17 | 85.83 | 86.34 | 85.57
ASMv2 (ours) | 90.56 | 94.24 | 86.24 | 84.81 | 90.83 | 76.89 | 87.52 | 88.26 | 87.42

#### 5.2.1 Results of Referring Expression Comprehension.

Our ASMv2 achieves state-of-the-art performance on the representative REC benchmarks [30, 57]. As shown in Tab. 2, our ASMv2 significantly outperforms current state-of-the-art MLLMs, including Qwen-VL [2] and Ferret [85]. Specifically, our model surpasses Ferret by an average of 1.85 points across all benchmarks.

#### 5.2.2 Results of Region Captioning.

Table 3: Results on the Region Captioning task. We mark the best performance bold and the second-best underlined.

Model | VG [31] METEOR | VG [31] CIDEr | RefCOCOg [57] METEOR | RefCOCOg [57] CIDEr
---|---|---|---|---
GRiT [82] | 17.1 | 142.0 | 15.2 | 71.6
SLR [88] | - | - | 15.4 | 59.2
SLR+Rerank [88] | - | - | 15.9 | 66.2
Kosmos-2 [62] | - | - | 14.1 | 62.3
GPT4RoI-7B [98] | 17.4 | 145.2 | - | -
GPT4RoI-13B [98] | 17.6 | 146.8 | - | -
ASM-FT [80] | 18.3 | 148.7 | 21.8 | 107.8
ASMv2 (ours) | 17.9 | 153.5 | 21.7 | 114.7

Our model demonstrates state-of-the-art performance on the representative region captioning benchmarks, including VG [31] and RefCOCOg [57]. As shown in Tab. 3, our model achieves a CIDEr score of 114.7 on RefCOCOg, which surpasses the current state-of-the-art model (i.e., ASM-FT) by 6.9 points. On the VG dataset, our model also exhibits competitive results compared to the current state-of-the-art model.

#### 5.2.3 Results of Referring Question Answering.

Table 4: Validation accuracy (%) on Visual Commonsense Reasoning. Q, A, and R denote the Question, Answer, and Rationale. X$\rightarrow$Y means that the model needs to select the correct option for Y conditioned on X. ∗The single-task fine-tuning setting.

Method | Q$\rightarrow$A | QA$\rightarrow$R | Q$\rightarrow$AR
---|---|---|---
ViLBERT [55] | 72.4 | 74.5 | 54.0
Unicoder-VL [34] | 72.6 | 74.5 | 54.5
VLBERT [73] | 75.5 | 77.9 | 58.9
ERNIE-ViL-L [86] | 78.5 | 83.4 | 65.8
VILLA [19] | 78.5 | 82.6 | 65.2
GPT4RoI-7B [98] | 87.4 | 89.6 | 78.6
ASMv2 (ours) | 87.8 | 88.8 | 78.4
∗ASMv2 (ours) | 88.4 | 89.9 | 79.4

We evaluate the Referring Question Answering (RQA) ability of ASMv2 on the Visual Commonsense Reasoning (VCR) dataset [91], which evaluates commonsense reasoning abilities in the form of single-choice questions. The questions and candidate choices in VCR contain region referring. The results are presented in Tab. 4. Although trained in a multi-task setting, ASMv2 exhibits competitive performance compared to the current state-of-the-art model (i.e., GPT4RoI [98]), which is finetuned on the VCR dataset in a single-task setting. In addition, after single-task finetuning, our model outperforms GPT4RoI by 0.8 points.
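The region-level accuracies above and the scene-graph evaluation in the next subsection both reduce to IoU-based box matching. The sketch below is our own illustration, assuming the common criterion that a predicted box matches the ground truth when their IoU exceeds 0.5 (the threshold Sec. 5.3.2 uses for triplet locations).

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def box_matches(pred_box, gt_box, thr=0.5):
    """A predicted box counts as correct when its IoU with the ground truth exceeds thr."""
    return iou(pred_box, gt_box) > thr
```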
### 5.3 Results on Open-ended Scene Graph Generation

In this section, we evaluate the Relation Conversation capability of our model through the Open-ended Scene Graph Generation task on the Panoptic Scene Graph (PSG) dataset [84], which is a widely-used benchmark for scene graph generation. See Sec. 0.C.1 for the results on the Predicate Classification task.

#### 5.3.1 Baselines.

Despite their strong performance, most previous methods [84, 103, 78, 104] are constrained by pre-defined label sets and struggle to capture a diverse range of visual concepts from natural language in an open-ended manner. On the other hand, TextPSG [101] explores a methodology for generating scene graphs in an open-ended manner, which first generates the region proposals and then asks BLIP [36] to predict the semantic tags and predicate labels for these regions auto-regressively. Here, we consider traditional closed-set scene graph generation models and TextPSG as our baselines for OpenSGG.

#### 5.3.2 Metrics.

Following the common practice [84, 101], we report the triplet Recall and the mean Recall over predicate categories (mRecall) for the OpenSGG task. Concretely, a scene graph consists of a set of triplets (subject, predicate, object). A triplet is considered correct if all phrase labels are correct and the locations of the subject and object each match the ground truth with an IoU greater than 0.5. We also report #Tuples to denote the average number of predicted tuples for each generated scene graph.

#### 5.3.3 Results.

Table 5: Recall scores on PSG. Gray denotes that the model generates the scene graphs in a close-ended manner.

Model | #Tuples | Recall | mRecall
---|---|---|---
IMP [83] | 20.0 | 16.5 | 6.5
MOTIFS [93] | 20.0 | 20.0 | 9.1
VCTree [75] | 20.0 | 20.6 | 9.7
GPSNet [46] | 20.0 | 17.8 | 7.0
PSGFormer [84] | 20.0 | 18.6 | 16.7
TextPSG [101] | 50.0 | 4.8 | -
TextPSG [84] | 100.0 | 5.5 | -
ASMv2 (ours) | 9.2 | 14.2 | 10.3

As shown in Tab. 5, our ASMv2 demonstrates state-of-the-art performance on the OpenSGG task. Specifically, our ASMv2 significantly outperforms TextPSG by 8.7 points in Recall while generating far fewer tuples on average (9.2 vs. 100.0). Note that having more tuples generally implies an advantage in computing recall. When compared to traditional scene graph generation models, which generate scene graphs in a close-ended manner, ASMv2 also exhibits competitive performance. Despite generating fewer tuples, our model maintains a competitive Recall of 14.2 and a mean Recall of 10.3. Another factor negatively impacting the performance is that our ASMv2 generates scene graphs in an open-ended manner while Recall is calculated in an exact-match manner. Therefore, the triplets (people, standing on, grass) and (person, standing on, grass) are considered mismatched even though they represent the same semantics. A more appropriate metric for the OpenSGG task is left for future work.

### 5.4 Results on CRPE

Table 6: Accuracy scores on CRPE.

Model | Exist. | Subj. | Pred. | Obj. | Overall
---|---|---|---|---|---
Qwen-VL [2] | 76.34 | 19.47 | 26.99 | 35.36 | 27.27
LLaVA-1.5 [49] | 84.90 | 37.86 | 43.68 | 47.89 | 43.14
ASMv2 (ours) | 87.24 | 50.55 | 48.46 | 57.10 | 52.04

In this section, we evaluate the relation comprehension ability of our ASMv2 and current leading MLLMs [49, 2] using our proposed CRPE benchmark. This benchmark consists of four splits: Existence, Subject, Predicate, and Object.
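Accuracy on each split is computed with the CircularEval strategy described in Sec. 3.2: a question counts as correct only if the model picks the right option under every circular shift of the choices. A minimal sketch, where `model` is a hypothetical callable returning the index of the chosen option:

```python
def circular_eval(model, question, choices, answer_idx):
    """Correct only if the model answers correctly under all N circular
    shifts of the choices (N = number of choices)."""
    n = len(choices)
    for shift in range(n):
        rotated = choices[shift:] + choices[:shift]   # shift the options
        rotated_answer = (answer_idx - shift) % n     # the answer moves with them
        if model(question, rotated) != rotated_answer:
            return False
    return True

def accuracy(model, samples):
    """samples: iterable of (question, choices, answer_idx) tuples."""
    samples = list(samples)
    return sum(circular_eval(model, q, c, a) for q, c, a in samples) / len(samples)
```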
The Existence split evaluates the models’ object recognition ability while the remaining splits are designed to evaluate the models’ relation comprehension ability. In addition to reporting the performance on each split of the benchmark individually, we also report the average score of the latter three splits as the overall score for relation comprehension ability.

As shown in Tab. 6, the performance of existing MLLMs on the Existence questions is significantly higher than on the Subject, Predicate, and Object questions. This suggests that these models have a more robust capability to recognize objects within an image than to comprehend the relations between them. Specifically, our ASMv2 shows a remarkable improvement in understanding object relations compared to the other models. For example, ASMv2 achieves an overall accuracy of 52.04, which is significantly higher than the 43.14 of LLaVA-1.5 and the 27.27 of Qwen-VL. These results demonstrate that our model can better comprehend the relations between the objects within the image, benefiting from the training on relation conversation data.

## 6 Conclusion

In this paper, we propose a novel task, termed Relation Conversation (ReC), to challenge the model to understand the relations between the objects within the image. We construct the All-Seeing Dataset V2 (AS-V2), a high-quality ReC dataset for unlocking the ReC ability of Multi-modal Large Language Models (MLLMs), and the CRPE benchmark for quantitatively evaluating the relation comprehension ability. Leveraging AS-V2 and other general multimodal corpora for training, we introduce the All-Seeing Model v2 (ASMv2), which exhibits stronger relation comprehension ability compared to existing leading MLLMs and achieves state-of-the-art performance on the Open-ended Scene Graph Generation task and various general image-level and region-level tasks. We hope that our work can inspire more future research and contribute to the evolution towards artificial general intelligence, equipping artificial intelligence systems with an “all-seeing eye” to achieve a deeper understanding of the world.

## References

* [1] Andreas, J., Rohrbach, M., Darrell, T., Klein, D.: Neural module networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 39–48 (2016)
* [2] Bai, J., Bai, S., Yang, S., Wang, S., Tan, S., Wang, P., Lin, J., Zhou, C., Zhou, J.: Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv preprint arXiv:2308.12966 (2023)
* [3] Betker, J., Goh, G., Jing, L., Brooks, T., Wang, J., Li, L., Ouyang, L., Zhuang, J., Lee, J., Guo, Y., et al.: Improving image generation with better captions. Computer Science. https://cdn.openai.com/papers/dall-e-3.pdf (2023)
* [4] Biten, A.F., Tito, R., Mafla, A., Gomez, L., Rusinol, M., Valveny, E., Jawahar, C., Karatzas, D.: Scene text visual question answering. In: Proceedings of the IEEE/CVF international conference on computer vision. pp. 4291–4301 (2019)
* [5] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. NeurIPS (2020)
* [6] Caesar, H., Uijlings, J., Ferrari, V.: Coco-stuff: Thing and stuff classes in context. In: CVPR (2018)
* [7] Chen, J., Zhu, D., Shen, X., Li, X., Liu, Z., Zhang, P., Krishnamoorthi, R., Chandra, V., Xiong, Y., Elhoseiny, M.: Minigpt-v2: large language model as a unified interface for vision-language multi-task learning.
arXiv preprint arXiv:2310.09478 (2023) * [8] Chen, K., Zhang, Z., Zeng, W., Zhang, R., Zhu, F., Zhao, R.: Shikra: Unleashing multimodal llm’s referential dialogue magic. arXiv preprint arXiv:2306.15195 (2023) * [9] Chen, L., Li, J., Dong, X., Zhang, P., He, C., Wang, J., Zhao, F., Lin, D.: Sharegpt4v: Improving large multi-modal models with better captions. arXiv preprint arXiv:2311.12793 (2023) * [10] Chen, X., Fang, H., Lin, T.Y., Vedantam, R., Gupta, S., Dollár, P., Zitnick, C.L.: Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325 (2015) * [11] Chen, Z., Wu, J., Wang, W., Su, W., Chen, G., Xing, S., Zhong, M., Zhang, Q., Zhu, X., Lu, L., Li, B., Luo, P., Lu, T., Qiao, Y., Dai, J.: Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. arXiv preprint arXiv:2312.14238 (2023) * [12] Clark, C., Gardner, M.: Simple and effective multi-paragraph reading comprehension. arXiv preprint arXiv:1710.10723 (2017) * [13] Cong, Y., Yang, M.Y., Rosenhahn, B.: Reltr: Relation transformer for scene graph generation. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023) * [14] Dai, B., Zhang, Y., Lin, D.: Detecting visual relationships with deep relational networks. In: CVPR (2017) * [15] Dai, W., Li, J., Li, D., Huat, A., Zhao, J., Wang, W., Li, B., Fung, P., Hoi, S.: Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500 (2023) * [16] Dupty, M.H., Zhang, Z., Lee, W.S.: Visual relationship detection with low rank non-negative tensor decomposition. In: AAAI (2020) * [17] Fang, Y., Wang, W., Xie, B., Sun, Q., Wu, L., Wang, X., Huang, T., Wang, X., Cao, Y.: Eva: Exploring the limits of masked visual representation learning at scale. arXiv preprint arXiv:2211.07636 (2022) * [18] Fu, C., Chen, P., Shen, Y., Qin, Y., Zhang, M., Lin, X., Qiu, Z., Lin, W., Yang, J., Zheng, X., et al.: Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394 (2023) * [19] Gan, Z., Chen, Y.C., Li, L., Zhu, C., Cheng, Y., Liu, J.: Large-scale adversarial training for vision-and-language representation learning. NeurIPS (2020) * [20] Goyal, Y., Khot, T., Summers-Stay, D., Batra, D., Parikh, D.: Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In: CVPR (2017) * [21] Gu, J., Zhao, H., Lin, Z., Li, S., Cai, J., Ling, M.: Scene graph generation with external knowledge and image reconstruction. In: CVPR (2019) * [22] Gurari, D., Li, Q., Stangl, A.J., Guo, A., Lin, C., Grauman, K., Luo, J., Bigham, J.P.: Vizwiz grand challenge: Answering visual questions from blind people. In: CVPR (2018) * [23] Hu, Y., Chen, S., Chen, X., Zhang, Y., Gu, X.: Neural message passing for visual relationship detection. In: ICMLW (2019) * [24] Hudson, D.A., Manning, C.D.: Gqa: A new dataset for real-world visual reasoning and compositional question answering. In: CVPR (2019) * [25] Hung, Z.S., Mallya, A., Lazebnik, S.: Contextual translation embedding for visual relationship detection and scene graph generation. IEEE transactions on pattern analysis and machine intelligence pp. 3820–3832 (2020) * [26] Hwang, S.J., Ravi, S.N., Tao, Z., Kim, H.J., Collins, M.D., Singh, V.: Tensorize, factorize and regularize: Robust visual relationship learning. In: CVPR (2018) * [27] IDEFICS: Introducing idefics: An open reproduction of state-of-the-art visual language model. 
https://huggingface.co/blog/idefics (2023) * [28] Jia, C., Yang, Y., Xia, Y., Chen, Y.T., Parekh, Z., Pham, H., Le, Q., Sung, Y.H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: Int. Conf. Mach. Learn. (2021) * [29] Johnson, J., Hariharan, B., Van Der Maaten, L., Fei-Fei, L., Lawrence Zitnick, C., Girshick, R.: Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In: CVPR (2017) * [30] Kazemzadeh, S., Ordonez, V., Matten, M., Berg, T.: Referitgame: Referring to objects in photographs of natural scenes. In: EMNLP (2014) * [31] Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.J., Shamma, D.A., et al.: Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision (2017) * [32] Lai, X., Tian, Z., Chen, Y., Li, Y., Yuan, Y., Liu, S., Jia, J.: Lisa: Reasoning segmentation via large language model. arXiv preprint arXiv:2308.00692 (2023) * [33] Li, B., Wang, R., Wang, G., Ge, Y., Ge, Y., Shan, Y.: Seed-bench: Benchmarking multimodal llms with generative comprehension. arXiv preprint arXiv:2307.16125 (2023) * [34] Li, G., Duan, N., Fang, Y., Gong, M., Jiang, D.: Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. In: AAAI (2020) * [35] Li, J., Li, D., Savarese, S., Hoi, S.: Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597 (2023) * [36] Li, J., Li, D., Xiong, C., Hoi, S.: Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In: ICML (2022) * [37] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. NeurIPS (2021) * [38] Li, K., He, Y., Wang, Y., Li, Y., Wang, W., Luo, P., Wang, Y., Wang, L., Qiao, Y.: Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355 (2023) * [39] Li, R., Zhang, S., He, X.: Sgtr: End-to-end scene graph generation with transformer. In: CVPR (2022) * [40] Li, Y., Du, Y., Zhou, K., Wang, J., Zhao, W.X., Wen, J.R.: Evaluating object hallucination in large vision-language models. EMNLP (2023) * [41] Li, Y., Ouyang, W., Wang, X., Tang, X.: Vip-cnn: Visual phrase guided convolutional neural network. In: CVPR (2017) * [42] Li, Y., Ouyang, W., Zhou, B., Wang, K., Wang, X.: Scene graph generation from objects, phrases and region captions. In: ICCV (2017) * [43] Liao, W., Rosenhahn, B., Shuai, L., Ying Yang, M.: Natural language guided visual relationship detection. In: CVPRW (2019) * [44] Lin, J., Yin, H., Ping, W., Lu, Y., Molchanov, P., Tao, A., Mao, H., Kautz, J., Shoeybi, M., Han, S.: Vila: On pre-training for visual language models. arXiv preprint arXiv:2312.07533 (2023) * [45] Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: ECCV (2014) * [46] Lin, X., Ding, C., Zeng, J., Tao, D.: Gps-net: Graph property sensing network for scene graph generation. In: CVPR. pp. 3746–3753 (2020) * [47] Liu, F., Emerson, G., Collier, N.: Visual spatial reasoning. Transactions of the Association for Computational Linguistics (2023) * [48] Liu, F., Lin, K., Li, L., Wang, J., Yacoob, Y., Wang, L.: Aligning large multi-modal model with robust instruction tuning. 
arXiv preprint arXiv:2306.14565 (2023) * [49] Liu, H., Li, C., Li, Y., Lee, Y.J.: Improved baselines with visual instruction tuning. arXiv preprint arXiv:2310.03744 (2023) * [50] Liu, H., Li, C., Wu, Q., Lee, Y.J.: Visual instruction tuning. arXiv preprint arXiv:2304.08485 (2023) * [51] Liu, Y., Duan, H., Zhang, Y., Li, B., Zhang, S., Zhao, W., Yuan, Y., Wang, J., He, C., Liu, Z., et al.: Mmbench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281 (2023) * [52] Liu, Z., He, Y., Wang, W., Wang, W., Wang, Y., Chen, S., Zhang, Q., Lai, Z., Yang, Y., Li, Q., Yu, J., et al.: Interngpt: Solving vision-centric tasks by interacting with chatgpt beyond language. arXiv preprint arXiv:2305.05662 (2023) * [53] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: ICLR (2019) * [54] Lu, C., Krishna, R., Bernstein, M., Fei-Fei, L.: Visual relationship detection with language priors. In: ECCV (2016) * [55] Lu, J., Batra, D., Parikh, D., Lee, S.: Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. NeurIPS (2019) * [56] Lu, P., Mishra, S., Xia, T., Qiu, L., Chang, K.W., Zhu, S.C., Tafjord, O., Clark, P., Kalyan, A.: Learn to explain: Multimodal reasoning via thought chains for science question answering. NeurIPS (2022) * [57] Mao, J., Huang, J., Toshev, A., Camburu, O., Yuille, A.L., Murphy, K.: Generation and comprehension of unambiguous object descriptions. In: CVPR (2016) * [58] Marino, K., Rastegari, M., Farhadi, A., Mottaghi, R.: Ok-vqa: A visual question answering benchmark requiring external knowledge. In: CVPR (2019) * [59] Mishra, A., Shekhar, S., Singh, A.K., Chakraborty, A.: Ocr-vqa: Visual question answering by reading text in images. In: ICDAR. pp. 947–952. IEEE (2019) * [60] OpenAI: Gpt-4 technical report. arXiv preprint arXiv:2303.08774 (2023) * [61] OpenAI: Gpt-4v(ision) system card (2023) * [62] Peng, Z., Wang, W., Dong, L., Hao, Y., Huang, S., Ma, S., Wei, F.: Kosmos-2: Grounding multimodal large language models to the world. arXiv preprint arXiv:2306.14824 (2023) * [63] Plummer, B.A., Wang, L., Cervantes, C.M., Caicedo, J.C., Hockenmaier, J., Lazebnik, S.: Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In: ICCV (2015) * [64] Qi, M., Li, W., Yang, Z., Wang, Y., Luo, J.: Attentive relational networks for mapping images to scene graphs. In: CVPR (2019) * [65] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: ICML (2021) * [66] Rasheed, H., Maaz, M., Shaji, S., Shaker, A., Khan, S., Cholakkal, H., Anwer, R.M., Xing, E., Yang, M.H., Khan, F.S.: Glamm: Pixel grounding large multimodal model. arXiv preprint arXiv:2311.03356 (2023) * [67] Schwenk, D., Khandelwal, A., Clark, C., Marino, K., Mottaghi, R.: A-okvqa: A benchmark for visual question answering using world knowledge. In: ECCV (2022) * [68] ShareGPT: https://sharegpt.com/ (2023) * [69] Sharma, P., Ding, N., Goodman, S., Soricut, R.: Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In: ACL (2018) * [70] Shit, S., Koner, R., Wittmann, B., Paetzold, J., Ezhov, I., Li, H., Pan, J., Sharifzadeh, S., Kaissis, G., Tresp, V., et al.: Relationformer: A unified framework for image-to-graph generation. 
In: ECCV (2022) * [71] Sidorov, O., Hu, R., Rohrbach, M., Singh, A.: Textcaps: a dataset for image captioning with reading comprehension. In: ECCV. pp. 742–758. Springer (2020) * [72] Singh, A., Natarajan, V., Shah, M., Jiang, Y., Chen, X., Batra, D., Parikh, D., Rohrbach, M.: Towards vqa models that can read. In: CVPR (2019) * [73] Su, W., Zhu, X., Cao, Y., Li, B., Lu, L., Wei, F., Dai, J.: Vl-bert: Pre-training of generic visual-linguistic representations. arXiv preprint arXiv:1908.08530 (2019) * [74] Suhr, A., Lewis, M., Yeh, J., Artzi, Y.: A corpus of natural language for visual reasoning. In: Ann. Meeting of the Assoc. for Comput. Linguistics (2017) * [75] Tang, K., Zhang, H., Wu, B., Luo, W., Liu, W.: Learning to compose dynamic tree structures for visual contexts. In: CVPR. pp. 6619–6628 (2019) * [76] Tian, C., Zhu, X., Xiong, Y., Wang, W., Chen, Z., Wang, W., Chen, Y., Lu, L., Lu, T., Zhou, J., et al.: Mm-interleaved: Interleaved image-text generative modeling via multi-modal feature synchronizer. arXiv preprint arXiv:2401.10208 (2024) * [77] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) * [78] Wang, J., Wen, Z., Li, X., Guo, Z., Yang, J., Liu, Z.: Pair then relation: Pair-net for panoptic scene graph generation. arXiv preprint arXiv:2307.08699 (2023) * [79] Wang, P., Yang, A., Men, R., Lin, J., Bai, S., Li, Z., Ma, J., Zhou, C., Zhou, J., Yang, H.: Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In: ICML (2022) * [80] Wang, W., Shi, M., Li, Q., Wang, W., Huang, Z., Xing, L., Chen, Z., Li, H., Zhu, X., Cao, Z., et al.: The all-seeing project: Towards panoptic visual recognition and understanding of the open world. arXiv preprint arXiv:2308.01907 (2023) * [81] Wang, W., Chen, Z., Chen, X., Wu, J., Zhu, X., Zeng, G., Luo, P., Lu, T., Zhou, J., Qiao, Y., et al.: Visionllm: Large language model is also an open-ended decoder for vision-centric tasks. arXiv preprint arXiv:2305.11175 (2023) * [82] Wu, J., Wang, J., Yang, Z., Gan, Z., Liu, Z., Yuan, J., Wang, L.: Grit: A generative region-to-text transformer for object understanding. arXiv preprint arXiv:2212.00280 (2022) * [83] Xu, D., Zhu, Y., Choy, C.B., Fei-Fei, L.: Scene graph generation by iterative message passing. In: CVPR (2017) * [84] Yang, J., Ang, Y.Z., Guo, Z., Zhou, K., Zhang, W., Liu, Z.: Panoptic scene graph generation. In: ECCV (2022) * [85] You, H., Zhang, H., Gan, Z., Du, X., Zhang, B., Wang, Z., Cao, L., Chang, S.F., Yang, Y.: Ferret: Refer and ground anything anywhere at any granularity. arXiv preprint arXiv:2310.07704 (2023) * [86] Yu, F., Tang, J., Yin, W., Sun, Y., Tian, H., Wu, H., Wang, H.: Ernie-vil: Knowledge enhanced vision-language representations through scene graphs. In: AAAI (2021) * [87] Yu, J., Wang, Z., Vasudevan, V., Yeung, L., Seyedhosseini, M., Wu, Y.: Coca: Contrastive captioners are image-text foundation models. arXiv preprint arXiv:2205.01917 (2022) * [88] Yu, L., Tan, H., Bansal, M., Berg, T.L.: A joint speaker-listener-reinforcer model for referring expressions. In: CVPR (2017) * [89] Yu, Q., Sun, Q., Zhang, X., Cui, Y., Zhang, F., Wang, X., Liu, J.: Capsfusion: Rethinking image-text data at scale. 
arXiv preprint arXiv:2310.20550 (2023) * [90] Yu, W., Yang, Z., Li, L., Wang, J., Lin, K., Liu, Z., Wang, X., Wang, L.: Mm-vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490 (2023) * [91] Zellers, R., Bisk, Y., Farhadi, A., Choi, Y.: From recognition to cognition: Visual commonsense reasoning. In: CVPR (2019) * [92] Zellers, R., Yatskar, M., Thomson, S., Choi, Y.: Neural motifs: Scene graph parsing with global context. In: CVPR (2018) * [93] Zellers, R., Yatskar, M., Thomson, S., Choi, Y.: Neural motifs: Scene graph parsing with global context. In: CVPR (2018) * [94] Zhai, X., Wang, X., Mustafa, B., Steiner, A., Keysers, D., Kolesnikov, A., Beyer, L.: Lit: Zero-shot transfer with locked-image text tuning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18123–18133 (2022) * [95] Zhang, A., Zhao, L., Xie, C.W., Zheng, Y., Ji, W., Chua, T.S.: Next-chat: An lmm for chat, detection and segmentation. arXiv preprint arXiv:2311.04498 (2023) * [96] Zhang, H., Kyaw, Z., Chang, S.F., Chua, T.S.: Visual translation embedding network for visual relation detection. In: CVPR (2017) * [97] Zhang, J., Kalantidis, Y., Rohrbach, M., Paluri, M., Elgammal, A., Elhoseiny, M.: Large-scale visual relationship understanding. In: AAAI (2019) * [98] Zhang, S., Sun, P., Chen, S., Xiao, M., Shao, W., Zhang, W., Chen, K., Luo, P.: Gpt4roi: Instruction tuning large language model on region-of-interest. arXiv preprint arXiv:2307.03601 (2023) * [99] Zhang, Y., Zhang, R., Gu, J., Zhou, Y., Lipka, N., Yang, D., Sun, T.: Llavar: Enhanced visual instruction tuning for text-rich image understanding. arXiv preprint arXiv:2306.17107 (2023) * [100] Zhao, B., Wu, B., Huang, T.: Svit: Scaling up visual instruction tuning. arXiv preprint arXiv:2307.04087 (2023) * [101] Zhao, C., Shen, Y., Chen, Z., Ding, M., Gan, C.: Textpsg: Panoptic scene graph generation from textual descriptions. In: ICCV (2023) * [102] Zheng, S., Chen, S., Jin, Q.: Visual relation detection with multi-level attention. In: ACM MM (2019) * [103] Zhong, Y., Shi, J., Yang, J., Xu, C., Li, Y.: Learning to generate scene graph from natural language supervision. In: ICCV (2021) * [104] Zhou, Z., Shi, M., Caesar, H.: Vlprompt: Vision-language prompting for panoptic scene graph generation. arXiv preprint arXiv:2311.16492 (2023) * [105] Zhu, D., Chen, J., Shen, X., Li, X., Elhoseiny, M.: Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592 (2023) * [106] Zhu, Y., Groth, O., Bernstein, M., Fei-Fei, L.: Visual7w: Grounded question answering in images. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 4995–5004 (2016) ## Appendix 0.A Relation Conversation Figure 6: A complex example for the formulation of Relation Conversation. In this section, we introduce more details about the formulation of Relation Conversation. As depicted in Fig. 6, when a certain text span is associated with multiple regions, these bounding boxes are formatted as: $\texttt{<box>[[}x_{1}^{1},y_{1}^{1},x_{2}^{1},y_{2}^{1}\texttt{], ..., [}x_{1}^{n},y_{1}^{n},x_{2}^{n},y_{2}^{n}\texttt{]]</box>},$ where $\texttt{[}x_{1}^{i},y_{1}^{i},x_{2}^{i},y_{2}^{i}\texttt{]}$ denotes the $i$-th bounding box linked to the object or predicate. For a specific predicate, the subject and object must be linked to an equal number of bounding boxes. 
Otherwise, one of them must be linked to just one bounding box, which can then be broadcast to match the count of the other. An example is shown in Fig. 6. As discussed in Sec. 4.1, to parse this example into a scene graph, we first assign the semantic tag “people” to the bounding boxes highlighted in red and blue. Similarly, we assign the semantic tag “grass” to the bounding box highlighted in green. We then extract the predicate label enclosed in “<pred></pred>” (i.e., “standing on”) and the box coordinates of its subjects and objects (i.e., the bounding boxes highlighted with bold underline). After that, we utilize these box coordinates as keys to match their respective semantic tags. Considering that two subjects are linked to the predicate while only one object is linked, we broadcast the object to match the number of subjects. Given $N$ subjects and $N$ objects, we pack them into $N$ tuples in order, where each tuple consists of one subject and one object. In this example, we obtain two tuples, resulting in two parsed triplets (people, standing on, grass), each associated with a different “people” bounding box.

## Appendix 0.B The All-Seeing Dataset v2

More data examples of AS-V2 are shown in Figs. 7, 8 and 9. Besides, the prompts used to generate detailed description data are shown in Tabs. 7 and 8.

Figure 7: Data examples of the Detailed Description task in AS-V2.

Figure 8: Data examples of the Region Captioning task in AS-V2.

Figure 9: Data examples of the Conversation task in AS-V2. Due to space limitations, we exhibit only one turn for each conversation.

Table 7: For each query, the system info explains the task description and the in-context-learning examples are presented in the form of a multi-turn conversation. For each turn, the input query[‘context’] consists of (1) the image to be annotated, (2) the caption annotations of this image, (3) the location annotations, as well as (4) the relation annotations. The output query[‘response’] comprises the manually annotated scene graph conversation data. In this example, we provide the task description for the Detailed Description data in the relation conversation.

messages = [{"role": "system", "content": f"""You are an AI visual assistant that can analyze a single image. You receive one image and five sentences, each describing this image you are observing. In addition, specific object locations within the image are given, along with detailed coordinates. These coordinates are in the form of bounding boxes, represented as [x1, y1, x2, y2] with int numbers ranging from 0 to 999. These values correspond to the top left x, top left y, bottom right x, and bottom right y. Note that these coordinates are normalized. Besides, the scene graph of this image is also provided as a list of tuples. Each tuple is represented as (subject, bounding box of the subject, object, bounding box of the object, predicate). Using the provided caption, bounding box, and scene graph information, describe the scene in a detailed manner. If there are errors in the caption, please ignore them and do not point them out in your description. Instead of directly mentioning the bounding box coordinates, utilize this data to explain the scene using natural language with its bounding box in the format like "<ref>object</ref><box>[[x1, y1, x2, y2]]</box>".
When mentioning the predicate between two objects, you should mention it in the format like "<pred>predicate</pred><box>[[x1, y1, x2, y2]]</box><box>[[x3, y3, x4, y4]]</box>", where "<box>[[x1, y1, x2, y2]]</box>" denotes the bounding box coordinates of the subject and "<box>[[x3, y3, x4, y4]]</box>" denotes the bounding box coordinates of the object. Include details like object counts, position of the objects, relative position between the objects. When using the information from the caption, coordinates, or scene graph, directly explain the scene, and do not mention that the information source is the caption or the bounding box or the scene graph. You should mention all tuples and predicates included in the scene graph in the generated caption. Make sure that the box following the <pred>predicate</pred> has already been mentioned after a <ref>object</ref>."""}]

for sample in fewshot_samples:
    messages.append({"role": "user", "content": sample["context"]})
    messages.append({"role": "assistant", "content": sample["response"]})
messages.append({"role": "user", "content": "\n".join(query)})

Table 8: One example to illustrate the instruction-following data. The top block shows the contexts such as captions, locations, relations and images used to prompt GPT, and the bottom block shows the three types of responses. Note that the visual image is also used to prompt GPT.

Context type 1: Captions
A pretty young lady holding a dark colored umbrella.
A girl in a Pikachu suit standing beside a girl holding an umbrella.
A woman is wearing a raincoat and another woman is holding an umbrella over her head.
two girls stand in a yard, one holds an umbrella and the other is dressed as pikachu.
Two women who are standing in the grass.

Context type 2: Locations
person: [101, 252, 430, 963]
person: [539, 246, 826, 984]
grass: [0, 444, 999, 999]
...

Context type 3: Relations
(person, [101, 252, 430, 963], grass, [0, 444, 999, 999], standing on)
(person, [101, 252, 430, 963], person, [539, 246, 826, 984], beside)
(person, [539, 246, 826, 984], grass, [0, 444, 999, 999], standing on)
...

Response type 1: Detailed Description
In the image, two <ref>people</ref><box>[[101, 252, 430, 963], [539, 246, 826, 984]]</box> are <pred>standing on</pred><box>[[101, 252, 430, 963], [539, 246, 826, 984]]</box><box>[[0, 444, 999, 999]]</box> a <ref>grass</ref><box>[[0, 444, 999, 999]]</box>. ...

Response type 2: Region Captioning
person: [539, 246, 826, 984]
A <ref>girl</ref><box>[[101, 252, 430, 963]]</box> in a Pikachu suit standing <pred>beside</pred><box>[[101, 252, 430, 963]]</box><box>[[539, 246, 826, 984]]</box> a <ref>girl</ref><box>[[539, 246, 826, 984]]</box>.
...

Response type 3: Conversation
Question: What are the two <ref>people</ref><box>[[101, 252, 430, 963], [539, 246, 826, 984]]</box> doing in the image?
Answer: In the image, one <ref>person</ref><box>[[539, 246, 826, 984]]</box> is <pred>holding</pred><box>[[539, 246, 826, 984]]</box><box>[[326, 117, 976, 435]]</box> an <ref>umbrella</ref><box>[[326, 117, 976, 435]]</box>, ...
...

## Appendix 0.C The All-Seeing Model v2

### 0.C.1 Predicate Classification

Table 9: Recall scores on the Predicate Classification task.
Method | R@20 | mR@20 | R@50 | mR@50 | R@100 | mR@100
---|---|---|---|---|---|---
IMP [83] | 30.5 | 9.0 | 35.9 | 10.5 | 38.3 | 11.3
MOTIFS [93] | 45.1 | 19.9 | 50.5 | 21.5 | 52.5 | 22.2
VCTree [75] | 45.9 | 21.4 | 51.2 | 23.1 | 53.1 | 23.8
GPSNet [46] | 38.8 | 17.1 | 46.6 | 20.2 | 50.0 | 21.3
ASMv2 (ours) | 17.6 | 21.4 | 25.9 | 34.4 | 32.6 | 44.5

Figure 10: Word Clouds for evaluation data in PSG. Fig. 10(a) visualizes the distribution of ground-truth predicates while Fig. 10(b) visualizes those predicted by ASMv2.

In this section, we evaluate the relation comprehension capability of our model through the Predicate Classification task (PredCls) on the Panoptic Scene Graph (PSG) dataset [84]. Compared to the Open-ended Scene Graph Generation task, PredCls aims to generate a scene graph given the ground-truth object labels and localization, focusing on the relation prediction performance without the interference of the detection performance.

#### 0.C.1.1 Evaluation Setup.

Assuming that the number of ground-truth objects is $N$, we query the model $N\times(N-1)$ times, once for each ordered pair of ground-truth objects serving as the subject and object. For each query, we ask the model “What is the relation between the <subject> and the <object>? Answer the question using a single word or phrase.” and employ a vocabulary ranking method [5] to generate the scores for each predicate label. Following prior works [84, 75], we report the Recall and mean Recall (mRecall) here.

#### 0.C.1.2 Results.

As shown in Tab. 9, our ASMv2 demonstrates competitive performance on the Predicate Classification task within the PSG dataset. Specifically, our ASMv2 achieves superior performance in mRecall but is inferior in Recall. For instance, our ASMv2 significantly outperforms VCTree by 11.3 points in mR@50 and 20.7 points in mR@100, while it falls behind in terms of Recall. These results stem from the PSG dataset’s inherently imbalanced distribution of predicate labels, where broad predicates such as “on” and “over” are more frequent. As depicted in Fig. 10, our ASMv2 is less likely to predict these common but general predicates. Instead, it tends to predict more specific and less frequent predicates, like “standing on” and “parked on”, resulting in superior mRecall but inferior Recall. These results underline our model’s deeper and more detailed comprehension of visual relations.

### 0.C.2 Implementation Details

Table 10: Details of the instruction-tuning data for ASMv2 in stage 2. We collect a wide range of high-quality data, totaling approximately 4 million samples.

Task | #Samples | Dataset
---|---|---
Captioning | 124K | TextCaps [71], ShareGPT4V [9]
VQA | 314K | VQAv2 [20], GQA [24], OKVQA [58], A-OKVQA [67], ScienceQA [56], CLEVR [29], Visual7W [106]
OCR | 157K | ST-VQA [4], LLaVAR [99], OCR-VQA [59], DocVQA [12]
Grounding | 643K | RefCOCO/+/g [30, 57]
RegionVQA | 2.3M | RefCOCOg [57], VG [31], VCR [91], AS-Core [80]
Conversation | 500K | LLaVA-Instruct [50], SVIT [100], LRV [48], AS-V2 (ours)
Text | 40K | ShareGPT [68]

#### 0.C.2.1 Training Stage 1.

The global batch size is set to 256 in the pre-training phase and 128 in the instruction-tuning phase. We employ the AdamW optimizer [53] with $\beta_{1}=0.9$, $\beta_{2}=0.999$, and a weight decay of 0. The learning rate is initialized as $1\times 10^{-3}$ for the pre-training phase and $2\times 10^{-5}$ for the instruction-tuning phase. Both phases include a linear warmup over the first 3% of training steps.
The warmup is followed by a cosine decay strategy with a minimum learning rate of 0. We only train the vision-language connector in the pre-training phase while both the vision-language connector and the language model are trainable in the instruction-tuning phase. We train the model for 1 epoch in both phases. The image resolution of ASMv2 is set to 336 $\times$ 336. #### 0.C.2.2 Training Stage 2. The global batch size is set to 512 and the learning rate is initialized as $2\times 10^{-3}$ in both the pre-training phase and the instruction-tuning phase. The language model and vision-language connector are trainable in both phases while the vision encoder is always frozen. We train the model for 5000 steps in the pre-training phase and 1 epoch in the instruction-tuning phase. The other settings remain the same as the instruction-tuning phase of Stage 1. ## Appendix 0.D The Circular-based Relationship Probing Evaluation In this section, we present more examples of abnormal data in CRPE in Fig. 11. Figure 11: Data examples of abnormal data in the CRPE.
# Modeling Caption Diversity in Contrastive Vision-Language Pretraining

Samuel Lavoie, Polina Kirichenko, Mark Ibrahim, Mahmoud Assran, Andrew Gordon Wilson, Aaron Courville, Nicolas Ballas

1 FAIR at Meta; 2 Mila, Université de Montréal; 3 New York University; 4 CIFAR Fellow. *Equal contribution. <EMAIL_ADDRESS>

###### Abstract

There are a thousand ways to caption an image. Contrastive Language Pretraining (CLIP), on the other hand, works by mapping an image and its caption to a single vector, limiting how well CLIP-like models can represent the diverse ways to describe an image. In this work, we introduce Llip, Latent Language Image Pretraining, which models the diversity of captions that could match an image. Llip’s vision encoder outputs a set of visual features that are mixed into a final representation by conditioning on information derived from the text. We show that Llip outperforms non-contextualized baselines like CLIP and SigLIP on a variety of tasks even with large-scale encoders. Llip improves zero-shot classification by an average of $2.9\%$ on zero-shot classification benchmarks with a ViT-G/14 encoder. Specifically, Llip attains a zero-shot top-1 accuracy of $83.5\%$ on ImageNet, outperforming a similarly sized CLIP by $1.4\%$. We also demonstrate improvement on zero-shot retrieval on MS-COCO by $6.0\%$. We provide a comprehensive analysis of the components introduced by the method and demonstrate that Llip leads to richer visual representations.

Figure 1: We propose Llip, Latent Language Image Pretraining, to model the diversity of matching captions for a given image. (a) Conceptual visualization of CLIP (left) and Llip (right) architectures. CLIP independently encodes visual features (shown in circles) and text features (shown in squares) which are pulled closer together by maximizing the cosine similarity objective $\mathcal{L}$. The single image feature vector of CLIP has to compromise between all matching text features (illustrated in the feature manifold at the bottom of the figure). Llip outputs a set of visual mixture tokens which are combined into a final visual feature vector conditioned on the context derived from the caption. Llip’s visual representations can more accurately represent each caption. (b) Zero-shot top-1 transfer accuracy averaged over 22 established classification benchmarks (see section 6.1) against Giga FLOPs for inference (estimated on the ImageNet zero-shot classification task) for encoders of various sizes. Llip outperforms the Visual Language Pretraining baselines. Llip was trained on the same data as MetaCLIP (Xu et al., 2023).

## 1 Introduction

Contrastive Language-Image Pre-training (CLIP; Radford et al. (2021)) combined with a large-scale weakly supervised dataset has become the standard Visual Language Pre-training (VLP) approach to learn visual representations (Li et al., 2021, 2023e; Sun et al., 2023; Zhai et al., 2023; Xu et al., 2023). Due to its generality, CLIP representations are now used for many downstream tasks such as zero-shot classification (Radford et al., 2021), image generation (Ramesh et al., 2021) and visual question answering (Li et al., 2023b; Moon et al., 2023). At its core, CLIP aims to learn an image representation that is invariant to caption diversity (see Figure 1(a)). CLIP uses a visual encoder and a text encoder to independently map visual and text inputs into a common representation space.
The joint encoders are trained with a contrastive objective that maximizes the similarity of representations extracted from the same image-text pair while pushing away the representations from other examples (Radford et al., 2021). This training criterion encourages the representation of an image to exactly match the representation of its corresponding text description. Further, if different text descriptions are associated with an image, CLIP’s contrastive objective will push both text representations toward the same visual representation. Yet, there is an information imbalance between the visual and text modalities, as visual content is often richer than its text description (Foucault, 1990).

Multiple diverse text captions can be equally valid descriptions of a given image, each one focusing on a different visual aspect. For example, depending on context, someone could describe the animal in the image shown in Figure 1(a) while another person could instead highlight the location where the picture was taken. Both are valid descriptions of the image and, arguably, different descriptions may capture different visual properties of the image. A training objective of a vision-language model should therefore aim at capturing the diversity of possible text descriptions to model the richness of the visual input.

In this work, we propose to explicitly model the fact that many different captions, and therefore representations, are plausible for a given image. To enable the prediction of different representations from a fixed image, we implement the image-to-text representation function as a one-to-many mapping. Conceptually, we augment our visual encoder with a latent variable that captures contextual information. Given this extra conditioning, our visual encoder can output different representations for different contexts. In our approach, the contextual latent is inferred directly from the target caption and is then used to modulate the visual representation.

Specifically, our visual encoder is implemented by a vision transformer that outputs $K$ learnable mixture tokens in addition to the visual tokens. The goal of the mixture tokens is to capture the different visual aspects of an input. We then make use of a cross-attention mechanism that infers the mixture token weights as a function of the text caption. The weighted mixture defines our contextual representation, which is contrasted with text representations. We show that this simple modification of CLIP leads to a significant improvement in visual representation quality, as illustrated in Figure 1(b), as well as a richer visual representation (see Figure 5). We refer to our approach as Latent Language Image Pre-training (Llip).

To demonstrate the value of our approach, we pretrain a family of vision transformer (ViT) encoders (Dosovitskiy et al., 2020) on the recent MetaCLIP (Xu et al., 2023) dataset and compare our approach on various zero-shot classification and text retrieval tasks. Through an empirical evaluation and control experiments we found that:

* On zero-shot transfer classification, Llip consistently outperforms CLIP pretraining for architectures of similar size on a large set of benchmarks. In particular, a ViT-G/14 encoder trained with Llip achieves a top-1 accuracy of 83.5% on the ImageNet zero-shot task, outperforming a ViT-G/14 trained with CLIP by 1.4%.
* On zero-shot image-text and text-image retrieval, Llip consistently outperforms CLIP pretraining on COCO, improving image-to-text retrieval by 6.0%.
Figure 2: Summary of the method Llip. (a) Schema of Llip’s computation of the loss: Llip encodes an image contextualized on the text features to compute the objective. An image encoder outputs $K$ _mixture tokens_ ($K=2$ in the schema). The mixture tokens are given to a cross-attention module as keys and values along with the text encoding that is given as the query. The visual representation to be contrasted with the text target is conditioned on the text itself, allowing the model to produce a different visual representation depending on the caption. (b) Llip uses a contrastive objective and requires encoding an image with the target text caption to compute the loss.

## 2 Related work

Invariant representation. Invariance-based representation learning such as contrastive approaches aims at learning encoders that map two related inputs to the same point in representation space. This paradigm is commonly used in self-supervised learning (SSL) using a joint-embedding architecture (Bromley et al., 1993) where the two related inputs are two transformations of the same image (Purushwalkam & Gupta, 2020; Misra & van der Maaten, 2020; Chen et al., 2020a). In this case, the goal is to learn a representation that is invariant to a set of predefined image transformations that preserve the semantic content of the images (Chen et al., 2020a; Assran et al., 2022; Purushwalkam & Gupta, 2020; Misra & van der Maaten, 2020; Oquab et al., 2023). While SSL methods can choose which invariance to promote through the choice of the transformations, this is not the case in vision-language pretraining, as the two inputs of the encoders come from different modalities, i.e., an image and its text description. We hypothesize that enforcing invariance between image and text is not a desirable training objective, as many text descriptions, capturing different visual aspects, could correspond to a given image.

Predictive representation. Another line of work in SSL learns representations without relying on an invariance loss through the use of joint-embedding predictive architectures (JEPA) (LeCun, 2022; Baevski et al., 2022; Assran et al., 2023; Bardes et al., 2024). Given a pair of related inputs $x$ and $t$, JEPA approaches learn by predicting the representation of $t$ from $x$ conditioned on a context variable that indicates the transformation between $x$ and $t$. In practice, this idea has been explored in mask-modeling formulations where the conditioning indicates the position of $t$ (Baevski et al., 2022; Assran et al., 2023). Our approach Llip uses a similar learning principle in the context of vision-language pretraining. Our goal is to predict a text representation from the image input (see Figure 2(a)). One key difference from previous works is that we do not have direct access to the conditioning variable that specifies the relative transformation from an image to its caption; Llip has to infer it using the text description.

Vision-Language Pretraining. A wide variety of prior works explored vision-language pretraining. Jia et al. (2021); Ilharco et al. (2021); Li et al. (2023d); Sun et al. (2023); Zhai et al. (2023); Fini et al. (2023); Mu et al. (2021) propose alternative contrastive-based Vision-Language Pretraining methods. Some VLP methods incorporate frozen feature extractors for image or text encoders (Zhai et al., 2022; Li et al., 2023c; Moayeri et al., 2023).
Other approaches use instruction tuning (Liu et al., 2023), context (Zhou et al., 2022), and grounding objectives (Zhang et al., 2021; Li et al., 2022b; Dou et al., 2022) that require additional training data for supervision. Gao et al. (2022); Desai et al. (2024) tackle the lack of a one-to-one correspondence between web-crawled images and captions by incorporating a hierarchical loss. All these prior works encourage invariance between image and text. Beyond contrastive pretraining, Wang et al. (2022b, a); Yu et al. (2022); Li et al. (2022a, 2023a); Dou et al. (2022) incorporate a decoder with a captioning loss into vision-language models in addition to the contrastive objective. Chen et al. (2020b); Li et al. (2021, 2020, 2022a), among others, use an early or hybrid fusion of visual and text features via a vision-grounded text encoder, i.e., cross-attention layers in the text encoder that attend to the output image patch tokens, which improves performance on downstream tasks but comes at a significantly increased computation cost. In our work, we instead only apply a cross-attention operation to the outputs of the vision and text encoders, and use it to mix the final visual representation vector from the mixture tokens and the context inferred from the caption. In general, our approach is different from previous works in that it learns to model the diverse captions for an image solely with a contrastive objective.

## 3 Latent Language Image Pre-training

This section describes our proposed method: Latent Language Image Pre-training. Llip learns to output a visual representation that is conditioned on a text caption. Thus, an image can have a different representation depending on the caption considered during inference. Our approach relies on two architectural components (see Figure 2): a visual encoder that outputs $K$ visual mixture components, and a cross-attention module that selects how to weight the different mixture components based on the text representation.

Visual mixture tokens. The image encoder is parameterized as a Vision Transformer (ViT) (Dosovitskiy et al., 2020) which processes $K$ learnable tokens along with each patch of the image (Darcet et al., 2023). These learnable tokens are referred to as the visual mixture tokens. The parameterization of our text encoder follows CLIP's text encoder (Radford et al., 2021) and outputs a single vector representation.

Contextualization. Llip conditions the visual representation on the text representation through a multi-head cross-attention mechanism. Let $(x_{i},t_{i})$ be an image and a text caption from a dataset. We assume that $x_{i}$ and $t_{j}$ are a positive pair if $i=j$; otherwise, they are a negative pair. An image encoder $x_{i}\mapsto{\bm{h}}_{i}$ maps an image to $K$ visual mixture tokens ${\bm{h}}_{i}$, with $h_{i}^{k}$ for $k\in[K]$ being the $k^{th}$ mixture token. A text encoder $t_{j}\mapsto g_{j}$ maps a caption to a text feature vector. We denote the index of each head of a multi-head cross-attention module as $m\in[M]$. The cross-attention queries are a projection of the text representation $g_{j}$: $\mathcal{Q}^{m}_{j}:=g_{j}\cdot W_{\mathcal{Q}}^{m}$. The cross-attention keys and values are projections of the visual mixture tokens: $\mathcal{K}^{mk}_{i}:=h_{i}^{k}\cdot W_{\mathcal{K}}^{mk}$ and $\mathcal{V}^{mk}_{i}:=h_{i}^{k}\cdot W_{\mathcal{V}}^{mk}$. The keys, queries, and values of the attention are all vectors in $\mathbb{R}^{d/M}$, as defined in Vaswani et al. (2023).
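To make the contextualization concrete, the following is a minimal PyTorch sketch of this cross-attention pooling, anticipating the mixing weights and output projection of Eqs. (1) and (2) below. The module name, the sharing of projection weights across mixture tokens, and the exact tensor layout are our illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextualPooling(nn.Module):
    """Mixes the K visual mixture tokens conditioned on a text feature vector."""

    def __init__(self, dim: int, num_heads: int = 8, tau: float = 5.0):
        super().__init__()
        assert dim % num_heads == 0
        self.h, self.dh, self.tau = num_heads, dim // num_heads, tau
        self.w_q = nn.Linear(dim, dim, bias=False)  # queries from the text feature
        self.w_k = nn.Linear(dim, dim, bias=False)  # keys from the mixture tokens
        self.w_v = nn.Linear(dim, dim, bias=False)  # values from the mixture tokens
        self.w_o = nn.Linear(dim, dim, bias=False)  # output projection W_O

    def forward(self, mix_tokens: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        # mix_tokens: (B, K, d) mixture tokens h_i; text_feat: (B, d) text encoding g_j
        B, K, d = mix_tokens.shape
        q = self.w_q(text_feat).view(B, self.h, 1, self.dh)  # one query per head
        k = self.w_k(mix_tokens).view(B, K, self.h, self.dh).transpose(1, 2)
        v = self.w_v(mix_tokens).view(B, K, self.h, self.dh).transpose(1, 2)
        # Mixing weights Phi: softmax with temperature tau over the K tokens.
        attn = F.softmax(q @ k.transpose(-2, -1) / self.tau, dim=-1)  # (B, h, 1, K)
        z = (attn @ v).transpose(1, 2).reshape(B, d)  # concatenate the heads
        return self.w_o(z)  # contextualized visual representation z_ij
```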
The mixing weights for head $m$ are defined as:

$\Phi^{mk}_{ij}:=\sigma_{\tau}\left((\mathcal{Q}^{m}_{j}\cdot\mathcal{K}^{mk}_{i})_{k=1}^{K}\right),$ (1)

with $\sigma_{\tau}$ being a softmax with temperature $\tau$ computed over the $K$ mixture tokens: $\sigma_{\tau}(z)_{k}:=\dfrac{e^{z_{k}/\tau}}{\sum_{l=1}^{K}e^{z_{l}/\tau}}\ \forall k\in[K]$. From the mixing weights and $\mathcal{V}$, we compute the contextualized visual representation:

$z_{ij}:=\text{Concat}\left(\left(\sum_{k=1}^{K}{\Phi^{mk}_{ij}\cdot\mathcal{V}^{mk}_{i}}\right)_{m=1}^{M}\right)\cdot W_{\mathcal{O}},$ (2)

where $W_{\mathcal{O}}$ is a learnable projection matrix in $\mathbb{R}^{d\times d}$. Similarly, we project the text representation ${z_{j}^{\prime}:=g_{j}\cdot W_{T}}$, where $W_{T}$ is a learnable projection matrix of the text features. Both representations are normalized, as previously done in CLIP, when computing the objective function: $\hat{z}_{ij}=\dfrac{z_{ij}}{||z_{ij}||_{2}}$ and $\hat{z}^{\prime}_{j}=\dfrac{z^{\prime}_{j}}{||z^{\prime}_{j}||_{2}}$.

Pretraining. For pretraining, we consider the SigLIP (Zhai et al., 2023) objective due to its memory efficiency. We modify SigLIP's objective using our contextualized visual representation and propose the following loss:

$\mathcal{L}_{\text{Llip}}:=\sum_{i=1}^{N}\left[\log\dfrac{1}{1+e^{-a\hat{z}_{ii}\cdot\hat{z}_{i}^{\prime}+b}}+\dfrac{1}{N}\sum_{j=1}^{N}\log\dfrac{1}{1+e^{a\hat{z}_{ij}\cdot\hat{z}_{j}^{\prime}-b}}\right],$ (3)

where $a$ and $b$ are learnable parameters, $\hat{z}_{j}^{\prime}$ is the text representation obtained from caption $j$, and $\hat{z}_{ij}$ is the visual representation obtained from mixing the visual mixture tokens of image $i$ with the text features of caption $j$.

Avoiding a shortcut solution. Contextualizing the visual features with the target caption can introduce a shortcut solution: the network ignores $x_{i}$ and relies solely on $t_{i}$ to minimize its objective. The negative samples of the contrastive objective in Equation 3 prevent that shortcut solution. While the caption $t_{i}$ is a positive caption for $x_{i}$, the same caption is also a negative caption for a different sample $x_{j}$. Therefore, relying only on $t_{i}$ is not a valid solution, because the objective also minimizes the similarity for pairs of negative samples, i.e., it pushes $\hat{z}_{ji}$ away from $\hat{z}_{i}^{\prime}$.

Inference. The final visual representation depends on a caption. Consequently, each image has to be encoded with all target captions, as illustrated in Figure 2(b), both for pre-training and zero-shot evaluation. Fortunately, the fusion of the image and text is lightweight, as it occurs in the output layer. The additional compute and memory cost is constant for a fixed number of mixture tokens $K$ as we scale up the size of the encoder (see Figure 8(a)). Inference for zero-shot classification in Llip is analogous to CLIP's implementation. For a given image $x_{i}$, we have $C$ possible caption labels $t_{j},j\in[C]$. We encode each image $x_{i}$ with each caption label $t_{j}$, obtaining contextualized visual features $z_{ij}$. Then we compute the cosine similarity between the normalized visual features $\hat{z}_{ij}$ and text features $\hat{z}_{j}^{\prime}$, and define the predicted label as the one with the highest cosine similarity between the contextualized image features and the text features.
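To summarize the objective and the all-pairs encoding requirement discussed above in code form, here is a minimal PyTorch sketch of Eq. (3), written as a quantity to minimize (hence the leading minus). The `pool` and `text_proj` helpers, the explicit all-pairs loop, and the folding of the $1/N$ weighting into a mean are our assumptions, not the reference implementation.

```python
import torch
import torch.nn.functional as F

def llip_sigmoid_loss(mix_tokens, text_feats, pool, text_proj, a, b):
    """mix_tokens: (N, K, d) mixture tokens; text_feats: (N, d) text encodings g_j."""
    N = mix_tokens.shape[0]
    z_text = F.normalize(text_proj(text_feats), dim=-1)           # \hat z'_j, (N, d)
    # Contextualize every image i with every caption j: z_img[i, j] = \hat z_{ij}.
    z_img = torch.stack(
        [F.normalize(pool(mix_tokens, g.expand(N, -1)), dim=-1) for g in text_feats],
        dim=1)                                                    # (N, N, d)
    logits = a * torch.einsum('ijd,jd->ij', z_img, z_text) - b    # (N, N)
    labels = 2.0 * torch.eye(N, device=logits.device) - 1.0       # +1 diag, -1 off-diag
    # Positive pairs maximize log-sigmoid(+logits); negatives, log-sigmoid(-logits).
    return -F.logsigmoid(labels * logits).mean()
```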
## 4 Experimental Setup

Our empirical analysis over the next sections has three main objectives. First, we aim to demonstrate the contribution of each modification added by Llip via controlled experiments. Second, we illustrate the value of Llip in comparison to other contrastive VLP methods on a set of standard zero-shot benchmarks commonly used in the literature. Finally, we provide a comprehensive analysis of Llip's representations and hyper-parameters. Before discussing our results, we describe our experimental setup. We perform our experiments on 5 models: ViT-B/32, ViT-B/16, ViT-L/14, ViT-H/14 and ViT-G/14. ViT-B/32 stands for a base Vision Transformer with an image patch size of 32, and ViT-L/14 is a large Vision Transformer with a patch size of 14 (see Dosovitskiy et al. (2020) for implementation details). To capture the visual variability in images, our method appends $K$ additional learnable tokens to the input sequence of the transformer, similarly to Darcet et al. (2023). We refer to those extra tokens as mixture tokens, and we denote the model with $K$ mixture tokens by $\text{Llip}_{K}$. For all of our experiments, we crop and resize images to $224\times 224$. We pre-train our models with the AdamW optimizer (Kingma & Ba, 2017; Loshchilov & Hutter, 2017) with $\beta_{2}=0.95$, as done by Zhai et al. (2023), to stabilize the pre-training. We use a learnable scale parameter $a$ along with a learnable bias $b$ for our objective, following the initialization of Zhai et al. (2023). Otherwise, all other training decisions closely follow the ones used by Radford et al. (2021); Xu et al. (2023). For all of the Llip experiments, we fix the number of heads in the cross-attention to $M=8$. Unless mentioned otherwise, the cross-attention temperature is $\tau=5$. Our models were trained on Common Crawl data curated using the methodology presented in Xu et al. (2023). We use a dataset of 2.5B image-text pairs collected using the same parameters that were used in Xu et al. (2023). As done in Radford et al. (2021); Xu et al. (2023), we pre-train our models for a total of 12.8B image-text pairs seen, with a batch size of 32,768. To increase training efficiency, we leverage compilation and mixed-precision in PyTorch (Paszke et al., 2019). We use gradient checkpointing for computing the activations of the visual representations to reduce memory during pre-training. The ViT-B and ViT-L models were trained on 128 V100 and A100 GPUs, respectively. The larger models were trained on 256 A100 80GB GPUs.

## 5 From SigLIP to Llip

To assess the impact of the contextualization of Llip, we explore how the performance evolves when gradually modifying an existing SigLIP baseline toward Llip. Our starting baseline is SigLIP pre-training with a ViT-B/32 on the MetaCLIP dataset. We introduce three intermediate baselines – each corresponding to an intervention on the previous baseline – that gradually interpolate between SigLIP and Llip in the way the visual representation is computed. We present their respective performances on ImageNet zero-shot top-1 accuracy in Figure 3.

Figure 3: Decomposing the effects of Llip's ingredients. Ablation of the added components of Llip compared to SigLIP and their effect on zero-shot ImageNet transfer accuracy. Every model is trained with a ViT-B/32.
From left to right, we evaluate: 1) the re-implemented SigLIP baseline, 2) adding $63$ additional mixture tokens (+Registers (Darcet et al., 2023)) which are not used in the final representation, 3) uniform mixing of the learnable tokens (+Average), 4) non-uniform mixing of the tokens (+Learned average), and 5) context-conditional mixing of the tokens (Llip64). Conditioning the mixing weights of the tokens on the text features achieves the best performance.

SigLIP. We reproduce SigLIP pre-training with our setup. The zero-shot accuracy on ImageNet is similar to the accuracy of 67.6 reported by MetaCLIP (Xu et al., 2023).

+ Registers. We increase the number of learned tokens from $1$ to $64$ in SigLIP, but only use the first learned token to compute the SigLIP objective, as done in Darcet et al. (2023) (they refer to the additional tokens as registers). This procedure does not improve the ImageNet top-1 accuracy.

+ Average. Next, we explore the effect of token mixing. We compute an equal-weighted average of all of the $64$ learned tokens and use the resulting vector to compute the objective. We find that averaging the learned tokens leads to a significant improvement over the baseline. Adding extra learned tokens with uniform mixing is an effective method to improve VLP.

+ Learned Average. We introduce non-uniform mixing to aggregate the mixture tokens. We apply a cross-attention operation as described in Equation 2, except the query is a learned vector shared across all samples instead of the text caption. We do not find a significant difference between uniform and non-uniform mixing of the learned tokens.

Llip. Finally, we contrast the aforementioned baselines with Llip, where the mixing weights now depend on the text features, i.e., the query token for the cross-attention is a function of the text representation. Llip shows a significant improvement over the average baseline in zero-shot top-1 ImageNet accuracy. We find that the strong performance of Llip comes from mixing the visual features conditioned on the text features.

Table 1: Zero-shot classification benchmarks when pretraining on the MetaCLIP dataset with ViT-B/32, ViT-B/16, ViT-L/14, ViT-H/14 and ViT-G/14. We compare Llip to CLIP and SigLIP for several backbones at different scales. We pre-train all the models with the MetaCLIP dataset and use the same pre-training recipe. Llip outperforms MetaCLIP across most benchmarks. ∗: Denotes that we reproduced the baseline with our setup. MetaCLIP numbers are reported from 1: (Xu et al., 2023).
| Model | Average | ImageNet | Food-101 | CIFAR10 | CIFAR100 | CUB | SUN397 | Cars | Aircraft | Pets | Caltech-101 | Flowers | MNIST | STL-10 | GTSRB | DTD | EuroSAT | RESISC45 | PCAM | Country211 | KITTI | UCF101 | MIT-States |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| _ViT-B/32:_ | | | | | | | | | | | | | | | | | | | | | | | |
| MetaCLIP1 | 62.8 | 67.6 | 82.7 | 95.2 | 77.7 | 67.8 | 66.8 | 77.4 | 27.0 | 90.9 | 92.8 | 69.9 | 42.7 | 96.3 | 39.2 | 58.9 | 51.1 | 66.3 | 50.0 | 17.7 | 29.3 | 67.5 | 47.6 |
| SigLIP∗ | 63.5 | 67.3 | 81.8 | 94.8 | 77.1 | 68.9 | 66.5 | 78.7 | 29.0 | 88.9 | 93.0 | 70.3 | 41.9 | 96.8 | 52.3 | 58.8 | 47.4 | 64.7 | 54.8 | 17.0 | 30.9 | 69.5 | 46.9 |
| Llip$_{64}$ | 67.5 | 70.4 | 84.1 | 95.5 | 80.8 | 71.5 | 68.6 | 82.2 | 34.9 | 92.3 | 92.9 | 74.8 | 66.3 | 97.5 | 53.6 | 58.8 | 49.9 | 67.5 | 64.5 | 20.7 | 37.8 | 71.6 | 48.5 |
| _ViT-B/16:_ | | | | | | | | | | | | | | | | | | | | | | | |
| MetaCLIP1 | 66.2 | 72.1 | 88.3 | 95.7 | 79.0 | 71.4 | 68.5 | 82.9 | 30.3 | 91.7 | 93.3 | 73.9 | 66.1 | 98.4 | 46.6 | 62.1 | 51.1 | 71.1 | 50.5 | 22.7 | 16.6 | 73.0 | 50.4 |
| SigLIP∗ | 67.1 | 72.3 | 88.5 | 96.0 | 79.0 | 74.1 | 68.5 | 83.5 | 33.8 | 92.2 | 94.2 | 72.5 | 63.3 | 98.5 | 40.8 | 60.3 | 50.1 | 68.6 | 55.5 | 22.0 | 38.2 | 74.3 | 50.4 |
| Llip$_{64}$ | 69.7 | 75.3 | 89.0 | 95.7 | 81.4 | 75.0 | 70.9 | 88.2 | 41.5 | 93.5 | 94.7 | 74.9 | 79.6 | 98.5 | 54.0 | 63.7 | 56.7 | 67.6 | 53.1 | 25.7 | 24.9 | 77.6 | 51.7 |
| _ViT-L/14:_ | | | | | | | | | | | | | | | | | | | | | | | |
| MetaCLIP1 | 72.8 | 79.2 | 93.5 | 97.6 | 84.2 | 80.1 | 73.7 | 88.7 | 44.4 | 94.7 | 95.5 | 81.8 | 64.4 | 99.3 | 56.3 | 68.3 | 58.7 | 74.6 | 66.5 | 34.0 | 29.7 | 81.7 | 55.6 |
| SigLIP∗ | 73.9 | 79.4 | 93.2 | 97.6 | 84.0 | 82.3 | 72.0 | 90.7 | 51.9 | 95.5 | 95.7 | 83.1 | 67.4 | 99.2 | 67.3 | 69.2 | 58.0 | 74.4 | 55.6 | 33.3 | 37.4 | 82.4 | 55.5 |
| Llip$_{32}$ | 74.7 | 80.9 | 93.6 | 98.0 | 86.8 | 81.2 | 74.4 | 91.7 | 55.1 | 96.0 | 95.2 | 81.4 | 68.0 | 99.3 | 68.8 | 69.8 | 59.8 | 77.3 | 54.7 | 36.4 | 34.8 | 84.5 | 56.1 |
| _ViT-H/14:_ | | | | | | | | | | | | | | | | | | | | | | | |
| MetaCLIP1 | 75.5 | 80.5 | 94.2 | 98.0 | 86.4 | 83.4 | 74.1 | 90.0 | 50.2 | 95.4 | 95.6 | 85.1 | 72.7 | 99.4 | 62.5 | 72.4 | 66.3 | 74.6 | 65.8 | 37.2 | 38.2 | 82.2 | 56.2 |
| Llip$_{64}$ | 77.7 | 82.7 | 95.1 | 97.9 | 87.2 | 86.2 | 75.0 | 92.4 | 61.3 | 96.0 | 95.8 | 86.4 | 86.6 | 99.4 | 70.8 | 72.8 | 62.4 | 74.2 | 68.6 | 41.3 | 33.6 | 86.2 | 57.2 |
| _ViT-G/14:_ | | | | | | | | | | | | | | | | | | | | | | | |
| MetaCLIP1 | 76.8 | 82.1 | 94.9 | 98.5 | 88.6 | 84.0 | 74.7 | 90.9 | 52.7 | 96.1 | 95.7 | 89.5 | 78.1 | 99.5 | 61.6 | 72.6 | 73.7 | 75.5 | 65.6 | 41.5 | 31.0 | 85.6 | 56.6 |
| Llip$_{64}$ | 79.7 | 83.5 | 95.6 | 98.5 | 89.5 | 86.8 | 76.5 | 93.6 | 67.4 | 96.7 | 95.8 | 89.5 | 89.9 | 99.5 | 72.5 | 75.7 | 70.7 | 77.7 | 71.9 | 45.6 | 31.1 | 88.0 | 57.9 |

Table 2: Zero-shot retrieval on Flickr30k (Young et al., 2014) and MSCOCO (Lin et al., 2014). Comparison of the zero-shot retrieval performance of Llip with the SigLIP and MetaCLIP baselines. All methods are pre-trained with the same dataset and use the same pre-training recipe. We compare both image-to-text and text-to-image retrieval. Llip demonstrates consistent gains on both MSCOCO and Flickr30k. ∗: Reproduced with our setup. MetaCLIP results are reported from 1: (Xu et al., 2023).
| Model | Image→Text Flickr30K R@1 | R@5 | R@10 | Image→Text MSCOCO R@1 | R@5 | R@10 | Text→Image Flickr30K R@1 | R@5 | R@10 | Text→Image MSCOCO R@1 | R@5 | R@10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| _ViT-B/16:_ | | | | | | | | | | | | |
| MetaCLIP1 | 85.9 | 97.3 | 98.9 | 59.4 | 80.6 | 87.9 | 70.5 | 90.7 | 94.6 | 41.4 | 67.2 | 77.0 |
| SigLIP∗ | 85.4 | 97.1 | 98.6 | 59.7 | 82.1 | 89.1 | 69.6 | 90.0 | 94.1 | 42.0 | 67.3 | 77.0 |
| Llip$_{64}$ | 90.1 | 98.5 | 99.6 | 63.4 | 84.3 | 90.3 | 75.1 | 92.8 | 96.2 | 45.6 | 70.8 | 79.7 |
| _ViT-L/14:_ | | | | | | | | | | | | |
| MetaCLIP1 | 90.4 | 98.5 | 99.1 | 64.5 | 85.0 | 91.3 | 76.2 | 93.5 | 96.4 | 47.1 | 71.4 | 80.3 |
| SigLIP∗ | 91.5 | 98.1 | 99.4 | 65.4 | 85.1 | 91.1 | 76.5 | 94.3 | 96.6 | 48.1 | 72.3 | 80.6 |
| Llip$_{32}$ | 93.2 | 99.0 | 99.4 | 68.1 | 87.6 | 92.5 | 79.9 | 95.0 | 97.4 | 50.6 | 74.7 | 82.8 |
| _ViT-H/14:_ | | | | | | | | | | | | |
| MetaCLIP1 | 91.6 | 98.6 | 99.7 | 66.2 | 86.2 | 91.9 | 78.0 | 94.6 | 96.9 | 48.8 | 73.2 | 81.4 |
| Llip$_{64}$ | 94.0 | 99.4 | 99.9 | 71.6 | 89.3 | 94.0 | 82.8 | 96.0 | 98.0 | 53.9 | 77.0 | 84.2 |
| _ViT-G/14:_ | | | | | | | | | | | | |
| MetaCLIP1 | 91.2 | 98.7 | 99.7 | 66.7 | 86.6 | 92.3 | 80.0 | 94.5 | 97.0 | 49.6 | 73.8 | 81.9 |
| Llip$_{64}$ | 94.8 | 99.7 | 100 | 72.7 | 90.1 | 94.4 | 82.5 | 96.0 | 97.9 | 54.2 | 77.1 | 84.5 |

Table 3: Comparison of zero-shot classification. We compare Llip (_ViT-G/14_) to the best reported numbers of the EVA-CLIP (_ViT-E/14_), OpenCLIP (_ViT-G/14_) and MetaCLIP (_ViT-G/14_) baselines on 22 classification tasks involving object classification (e.g., ImageNet, CIFAR), fine-grained classification (e.g., Cars, Aircraft, Flowers), and non-natural images (e.g., DTD, EuroSAT, PCAM). Llip obtains the best average performance across baselines and improves the best performance on $19$ out of the $22$ classification tasks. We only consider baselines that report performance on the same tasks or that provide model weights. 1: (Sun et al., 2023); 2: (Cherti et al., 2023); 3: (Xu et al., 2023).

| Model | Average | ImageNet | Food-101 | CIFAR10 | CIFAR100 | CUB | SUN397 | Cars | Aircraft | Pets | Caltech-101 | Flowers | MNIST | STL-10 | GTSRB | DTD | EuroSAT | RESISC45 | PCAM | Country211 | KITTI | UCF101 | MIT-States |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| _ViT-E/14:_ | | | | | | | | | | | | | | | | | | | | | | | |
| EVA-CLIP1 | 75.6 | 82.0 | 94.9 | 99.3 | 93.1 | 85.8 | 75.1 | 94.6 | 54.1 | 95.8 | 90.5 | 84.5 | 74.7 | 99.0 | 67.7 | 68.2 | 75.8 | 75.6 | 63.7 | 35.7 | 12.4 | 83.1 | 56.7 |
| _ViT-G/14:_ | | | | | | | | | | | | | | | | | | | | | | | |
| OpenCLIP2 | 73.5 | 80.1 | 93.1 | 98.2 | 87.5 | 84.4 | 74.5 | 94.5 | 49.7 | 95.2 | 86.4 | 81.5 | 71.6 | 98.5 | 62.5 | 69.0 | 70.0 | 72.6 | 63.6 | 33.8 | 15.6 | 80.5 | 54.5 |
| MetaCLIP3 | 76.8 | 82.1 | 94.9 | 98.5 | 88.6 | 84.0 | 74.7 | 90.9 | 52.7 | 96.1 | 95.7 | 89.5 | 78.1 | 99.5 | 61.6 | 72.6 | 73.7 | 75.5 | 65.6 | 41.5 | 31.0 | 85.6 | 56.6 |
| Llip$_{64}$ | 79.7 | 83.5 | 95.6 | 98.5 | 89.5 | 86.8 | 76.5 | 93.6 | 67.4 | 96.7 | 95.8 | 89.5 | 89.9 | 99.5 | 72.5 | 75.7 | 70.7 | 77.7 | 71.9 | 45.6 | 31.1 | 88.0 | 57.9 |

## 6 Zero-shot Evaluations

In this section, we evaluate the performance of Llip on zero-shot classification and retrieval benchmarks. We first present an apples-to-apples comparison between CLIP, SigLIP, and Llip for various backbone sizes. We train all of the models with the MetaCLIP dataset and fix the hyper-parameters to the ones found in prior works (Radford et al., 2021; Zhai et al., 2023; Xu et al., 2023).
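Throughout these evaluations, classification follows the contextualized protocol of Section 3. A minimal sketch, reusing the hypothetical `pool` and `text_proj` helpers from the earlier sketches (encoder handles and names are our illustrative assumptions):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_predict(image_encoder, text_encoder, pool, text_proj,
                      images, class_prompts):
    """images: (B, 3, H, W); class_prompts: C tokenized label captions."""
    mix = image_encoder(images)                        # (B, K, d) mixture tokens
    g = text_encoder(class_prompts)                    # (C, d) text features
    z_text = F.normalize(text_proj(g), dim=-1)         # \hat z'_j
    scores = []
    for j in range(g.shape[0]):                        # contextualize with label j
        g_j = g[j].expand(mix.shape[0], -1)            # broadcast caption over batch
        z_ij = F.normalize(pool(mix, g_j), dim=-1)     # \hat z_{ij}
        scores.append((z_ij * z_text[j]).sum(dim=-1))  # cosine similarity
    return torch.stack(scores, dim=1).argmax(dim=1)    # (B,) predicted labels
```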
We observe that Llip consistently outperforms the baselines for every model size on both zero-shot classification transfer and zero-shot retrieval. Next, we compare our approach with various baselines such as CLIP (Radford et al., 2021), OpenCLIP (Cherti et al., 2023), SigLIP (Zhai et al., 2023), MetaCLIP (Xu et al., 2023), CLIPA (Li et al., 2023d), and Data Filtering Networks (Fang et al., 2024), which all implement a variant of contrastive learning, and EVA-CLIP (Sun et al., 2023), which combines a contrastive objective with input masking.

### 6.1 Llip improves zero-shot performance for a fixed pre-training setup

In this subsection, we evaluate Llip and compare it to the CLIP and SigLIP contrastive approaches. All methods use the same training dataset. We evaluate Llip on a wide variety of classification benchmarks. The classification benchmarks contain tasks on object classification (ImageNet (Recht et al., 2019), CIFAR (Krizhevsky, 2010), CUB (Li et al., 2003), Food-101 (Bossard et al., 2014), STL-10 (Coates et al., 2010), Caltech-101 (Li et al., 2003), MNIST (LeCun & Cortes, 2010)), fine-grained classification (SUN397 (Xiao et al., 2010), Cars (Krause et al., 2013), Aircraft (Maji et al., 2013), Pets (Parkhi et al., 2012), Flowers (Nilsback & Zisserman, 2008), GTSRB (Stallkamp et al., 2011), Country211 (Radford et al., 2021)), non-natural images (DTD (Cimpoi et al., 2013), EuroSAT (Helber et al., 2019), RESISC45 (Cheng et al., 2017), PCAM (Ye et al., 2020)), video classification (KITTI (Geiger et al., 2012), UCF101 (Soomro et al., 2012)), and attribute recognition (MIT-States (Isola et al., 2015)).

Table 1 demonstrates that Llip outperforms CLIP and SigLIP when controlling for the training data distribution. On a ViT-B/32, Llip outperforms SigLIP by 4.7% on average. On a ViT-G/14, Llip outperforms MetaCLIP by 2.9% on average. Table 2 also shows that Llip outperforms CLIP and SigLIP on the Flickr30k and MSCOCO zero-shot retrieval tasks. Llip outperforms a CLIP-based model on MSCOCO text retrieval by 4% with a ViT-B/16 and 6% with a ViT-G/14. Llip shows a similar improvement on MSCOCO image retrieval, with a gain of 4.2% with a ViT-B/16 and 4.6% with a ViT-G/14.

### 6.2 Llip comparison with previous contrastive pre-training baselines

We now compare Llip with previously reported numbers in the literature on contrastive vision-language pre-training. While these numbers are obtained with different model architectures, training recipes, and datasets, we observe that Llip is a competitive method.

Figure 4: ImageNet zero-shot transfer classification. We compare a ViT-G/14 trained with $\text{Llip}_{64}$ with various vision-language baselines. We select the best reported number for every method. Llip outperforms most of the vision-language pretraining baselines on ImageNet. DFN, the only method outperforming Llip, is trained on a larger dataset of 5B curated samples and uses $378$ instead of $224$ as the input image resolution. We report the ImageNet performance of the baselines from: 1: (Cherti et al., 2023); 2: (Radford et al., 2021); 3: (Li et al., 2023d); 4: (Sun et al., 2023); 5: (Zhai et al., 2023); 6: (Xu et al., 2023); 7: (Fang et al., 2024).

ImageNet. We investigate Llip's zero-shot transfer performance on the ImageNet classification task (Russakovsky et al., 2015). We report the top-1 accuracy of Llip with a ViT-G/14 and the best reported numbers from OpenCLIP, CLIP, CLIPA-v2, SigLIP, MetaCLIP and DFN in Figure 4.
Llip outperforms most previous approaches. In particular, our method shows a gain of +0.3% over SigLIP while processing $4\times$ fewer samples during pre-training, and a gain of 2.5% over EVA-CLIP, which is pre-trained with a ViT-E/14 backbone that has $2.5\times$ more parameters than the ViT-G/14. While DFN obtains a higher zero-shot top-1 accuracy than Llip, it is trained on a larger dataset of 5B curated samples and uses $378$ instead of $224$ as the input image resolution. We conjecture that Llip may also benefit from higher quality data, but we leave such analysis to future work. Closest to the setting of our work is MetaCLIP, which trains a joint-embedding architecture using a contrastive loss on a similar pre-training dataset. Llip outperforms MetaCLIP ViT-G/14 by $+1.4\%$, highlighting the benefit of modelling the caption diversity.

Other image classification tasks. To demonstrate the generality of the representation learned with Llip, we measure performance across 22 standard zero-shot classification benchmarks that are usually reported in the literature in Table 3. We compare our approach with OpenCLIP, MetaCLIP and EVA-CLIP, which all report results on the same set of tasks or release their model weights, allowing us to evaluate and compare with these models. Results show that Llip obtains the best average performance across baselines. It reaches the best performance on $19$ out of the $22$ classification tasks.

## 7 Analysis of Llip

Figure 5: Llip's representation is more expressive than the non-contextualized SigLIP baselines. Singular value spectrum of the covariance matrix of the visual features of a ViT-B/32 using different pre-training objectives. The embedding vectors are taken at the output of the visual encoder. The SigLIP with a learned query baseline adds $64$ mixture tokens and learns how to average them using a cross-attention with a learnable query vector. We concatenate the $64$ mixture tokens along the batch dimension for the learned query baseline and Llip. Llip shows slower decay in the singular value spectrum than the two baselines, which indicates a larger variability of the features.

Representation expressivity. We evaluate the expressivity of the learned visual features by computing the singular values of the covariance matrix of the visual features, as done in Jing et al. (2022). This method was proposed to probe dimensionality collapse in self-supervised pre-training methods and also measures the expressiveness of learned representations (Hua et al., 2021). In particular, we compare SigLIP, SigLIP with a learned query (see Section 5) and Llip64. We collect the embedding vectors of $5000$ randomly chosen samples from ImageNet's validation set. For SigLIP with a learned query and Llip, we concatenate the $64$ mixture tokens along the batch dimension. Then we compute the singular value spectrum of the feature covariance matrix (Jing et al., 2022), which we plot in log scale in Figure 5. Llip shows slower decay in the singular value spectrum than the two baselines, which indicates a larger variability of the features.
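A minimal sketch of this probe, assuming the feature matrix has already been extracted (the variable names and shapes are illustrative):

```python
import torch

def singular_value_spectrum(feats: torch.Tensor) -> torch.Tensor:
    """feats: (n, d) embedding vectors collected from the validation set
    (mixture tokens concatenated along the batch dimension where applicable).
    Returns the sorted singular values of the feature covariance matrix."""
    centered = feats - feats.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / (feats.shape[0] - 1)  # (d, d) covariance
    return torch.linalg.svdvals(cov)

# A slow decay of the (log) singular values indicates features spread over
# more directions, i.e., a more expressive representation.
```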
Llip hyperparameters. Llip introduces two hyper-parameters: the number of mixture tokens and the temperature of the softmax in the cross-attention module. In Figure 6, we show the results of our study of both parameters, conducted with a ViT-B/32.

Number of mixture tokens. In Figure 6(a), we find that increasing the number of mixture tokens consistently improves ImageNet's top-1 accuracy without changing the model size. Moreover, as illustrated in Figure 1(b), Llip's performance also scales with the model size. Llip enables three axes along which to scale the model: increasing the encoder's size, decreasing the image patch size, or increasing the number of mixture tokens.

(a) Number of mixture tokens. (b) Attention's temperature. Figure 6: Analysis of Llip's hyperparameters on downstream zero-shot top-1 ImageNet accuracy for a ViT-B/32 visual encoder. We explore the effect of the number of mixture tokens and the temperature of the softmax in the cross-attention. For (a), we set the attention temperature to $8$. For (b), we fix the number of mixture tokens $K=64$. Increasing the number of mixture tokens improves downstream performance. Llip's performance is robust to temperature values, but a large temperature leads to a degradation in accuracy.

Effect of softmax temperature. In Figure 6(b), we also explore the effect of the softmax temperature. The temperature controls the sharpness of the softmax's output distribution. In each case, we use the same temperature during training and inference. Higher temperatures lead to logits with higher magnitudes, leading to sharper activations. Llip tends to be robust to a range of temperature values, but its performance degrades for large temperatures.

## 8 Conclusion

In this work, we propose Llip – a contrastive vision-language pre-training method that contextualizes visual features to model the diversity of possible captions that could match a given image. We show that a simple approach for deriving context from the text caption and conditioning the visual features leads to richer representations and better downstream zero-shot performance on a wide variety of classification and retrieval benchmarks. Our detailed ablation studies show the benefits of each component of Llip and its robustness to hyperparameters. We hope the strength of the model on downstream tasks and its simplicity will inspire the adoption of this approach in broader scenarios.

## Acknowledgements

The authors want to thank Diane Bouchacourt, Florian Bordes, Hu Xu, Pascal Vincent, Pietro Astolfi, Oscar Mañas and Micah Goldblum for insightful discussions. Aaron Courville acknowledges the Canadian Research Chair and the Cifar Canadian AI Chair for their support. Polina Kirichenko and Andrew Gordon Wilson acknowledge support from NSF HDR-2118310, CDS&E-MSS 2134216, CAREER IIS-2145492, I-DISRE 19347.

## References

* Assran et al. (2022) Assran, M., Caron, M., Misra, I., Bojanowski, P., Bordes, F., Vincent, P., Joulin, A., Rabbat, M., and Ballas, N. Masked siamese networks for label-efficient learning. In _European Conference on Computer Vision_ , pp. 456–473. Springer, 2022. * Assran et al. (2023) Assran, M., Duval, Q., Misra, I., Bojanowski, P., Vincent, P., Rabbat, M., LeCun, Y., and Ballas, N. Self-supervised learning from images with a joint-embedding predictive architecture. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 15619–15629, 2023. * Baevski et al. (2022) Baevski, A., Hsu, W.-N., Xu, Q., Babu, A., Gu, J., and Auli, M. Data2vec: A general framework for self-supervised learning in speech, vision and language. _arXiv preprint arXiv:2202.03555_ , 2022. * Bardes et al. (2024) Bardes, A., Garrido, Q., Ponce, J., Chen, X., Rabbat, M., LeCun, Y., Assran, M., and Ballas, N. Revisiting feature prediction for learning visual representations from video. _arXiv preprint arXiv:2404.08471_ , 2024. * Bossard et al. (2014) Bossard, L., Guillaumin, M., and Van Gool, L.
Food-101 – Mining Discriminative Components with Random Forests. In Fleet, D., Pajdla, T., Schiele, B., and Tuytelaars, T. (eds.), _Computer Vision – ECCV 2014_ , Lecture Notes in Computer Science, pp. 446–461, Cham, 2014. Springer International Publishing. ISBN 978-3-319-10599-4. 10.1007/978-3-319-10599-4_29. * Bromley et al. (1993) Bromley, J., Guyon, I., LeCun, Y., Säckinger, E., and Shah, R. Signature verification using a "siamese" time delay neural network. _Advances in neural information processing systems_ , 6, 1993. * Chen et al. (2020a) Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. A simple framework for contrastive learning of visual representations. In _International conference on machine learning_ , pp. 1597–1607. PMLR, 2020a. * Chen et al. (2020b) Chen, Y.-C., Li, L., Yu, L., El Kholy, A., Ahmed, F., Gan, Z., Cheng, Y., and Liu, J. Uniter: Universal image-text representation learning. In _European conference on computer vision_ , pp. 104–120. Springer, 2020b. * Cheng et al. (2017) Cheng, G., Han, J., and Lu, X. Remote Sensing Image Scene Classification: Benchmark and State of the Art. _Proceedings of the IEEE_ , 105(10):1865–1883, October 2017. ISSN 0018-9219, 1558-2256. 10.1109/JPROC.2017.2675998. * Cherti et al. (2023) Cherti, M., Beaumont, R., Wightman, R., Wortsman, M., Ilharco, G., Gordon, C., Schuhmann, C., Schmidt, L., and Jitsev, J. Reproducible scaling laws for contrastive language-image learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 2818–2829, 2023. * Cimpoi et al. (2013) Cimpoi, M., Maji, S., Kokkinos, I., Mohamed, S., and Vedaldi, A. Describing Textures in the Wild, November 2013. * Coates et al. (2010) Coates, A., Lee, H., and Ng, A. Y. An Analysis of Single-Layer Networks in Unsupervised Feature Learning, 2010. * Darcet et al. (2023) Darcet, T., Oquab, M., Mairal, J., and Bojanowski, P. Vision transformers need registers, 2023. * Desai et al. (2024) Desai, K., Nickel, M., Rajpurohit, T., Johnson, J., and Vedantam, R. Hyperbolic image-text representations, 2024. * Dosovitskiy et al. (2020) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_ , 2020. * Dou et al. (2022) Dou, Z.-Y., Kamath, A., Gan, Z., Zhang, P., Wang, J., Li, L., Liu, Z., Liu, C., LeCun, Y., Peng, N., Gao, J., and Wang, L. Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone, November 2022. * Fang et al. (2024) Fang, A., Jose, A. M., Jain, A., Schmidt, L., Toshev, A. T., and Shankar, V. Data filtering networks. In _The Twelfth International Conference on Learning Representations_ , 2024. URL https://openreview.net/forum?id=KAk6ngZ09F. * Fini et al. (2023) Fini, E., Astolfi, P., Romero-Soriano, A., Verbeek, J., and Drozdzal, M. Improved baselines for vision-language pre-training. _Transactions on Machine Learning Research_ , 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=a7nvXxNmdV. Featured Certification. * Foucault (1990) Foucault, M. _Les mots et les choses_. Gallimard Paris, 1990. * Gao et al. (2022) Gao, Y., Liu, J., Xu, Z., Zhang, J., Li, K., Ji, R., and Shen, C. Pyramidclip: Hierarchical feature alignment for vision-language model pretraining, 2022. * Geiger et al. (2012) Geiger, A., Lenz, P., and Urtasun, R. Are we ready for autonomous driving? the kitti vision benchmark suite.
In _2012 IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 3354–3361, 2012. 10.1109/CVPR.2012.6248074. * Helber et al. (2019) Helber, P., Bischke, B., Dengel, A., and Borth, D. EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification, February 2019. * Hua et al. (2021) Hua, T., Wang, W., Xue, Z., Ren, S., Wang, Y., and Zhao, H. On feature decorrelation in self-supervised learning. In _2021 IEEE/CVF International Conference on Computer Vision (ICCV)_ , pp. 9578–9588, Los Alamitos, CA, USA, oct 2021. IEEE Computer Society. 10.1109/ICCV48922.2021.00946. URL https://doi.ieeecomputersociety.org/10.1109/ICCV48922.2021.00946. * Ilharco et al. (2021) Ilharco, G., Wortsman, M., Wightman, R., Gordon, C., Carlini, N., Taori, R., Dave, A., Shankar, V., Namkoong, H., Miller, J., Hajishirzi, H., Farhadi, A., and Schmidt, L. Openclip, July 2021. URL https://doi.org/10.5281/zenodo.5143773. * Isola et al. (2015) Isola, P., Lim, J. J., and Adelson, E. H. Discovering states and transformations in image collections. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2015. * Jia et al. (2021) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q. V., Sung, Y., Li, Z., and Duerig, T. Scaling up visual and vision-language representation learning with noisy text supervision, 2021. * Jing et al. (2022) Jing, L., Vincent, P., LeCun, Y., and Tian, Y. Understanding dimensional collapse in contrastive self-supervised learning. In _International Conference on Learning Representations_ , 2022. URL https://openreview.net/forum?id=YevsQ05DEN7. * Kingma & Ba (2017) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization, 2017. * Krause et al. (2013) Krause, J., Deng, J., Stark, M., and Fei-Fei, L. Collecting a Large-Scale Dataset of Fine-Grained Cars, 2013. * Krizhevsky (2010) Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images, 2010. * LeCun (2022) LeCun, Y. A path towards autonomous machine intelligence version 0.9. 2, 2022-06-27, 2022. * LeCun & Cortes (2010) LeCun, Y. and Cortes, C. MNIST handwritten digit database. http://yann.lecun.com/exdb/mnist/, 2010. URL http://yann.lecun.com/exdb/mnist/. * Li et al. (2003) Li, F.-F., Andreetto, M., and Ranzato, M. A. The Caltech-UCSD Birds-200-2011 Dataset. https://authors.library.caltech.edu/records/cvm3y-5hh21, 2003. * Li et al. (2021) Li, J., Selvaraju, R. R., Gotmare, A. D., Joty, S., Xiong, C., and Hoi, S. Align before fuse: Vision and language representation learning with momentum distillation, 2021. * Li et al. (2022a) Li, J., Li, D., Xiong, C., and Hoi, S. BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation, February 2022a. * Li et al. (2023a) Li, J., Li, D., Savarese, S., and Hoi, S. BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models, June 2023a. * Li et al. (2023b) Li, J., Li, D., Savarese, S., and Hoi, S. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. _arXiv preprint arXiv:2301.12597_ , 2023b. * Li et al. (2023c) Li, J., Li, D., Savarese, S., and Hoi, S. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models, 2023c. * Li et al. (2022b) Li, L. H., Zhang, P., Zhang, H., Yang, J., Li, C., Zhong, Y., Wang, L., Yuan, L., Zhang, L., Hwang, J.-N., Chang, K.-W., and Gao, J.
Grounded language-image pre-training, 2022b. * Li et al. (2020) Li, W., Gao, C., Niu, G., Xiao, X., Liu, H., Liu, J., Wu, H., and Wang, H. Unimo: Towards unified-modal understanding and generation via cross-modal contrastive learning. _arXiv preprint arXiv:2012.15409_ , 2020. * Li et al. (2023d) Li, X., Wang, Z., and Xie, C. Clipa-v2: Scaling clip training with 81.1% zero-shot imagenet accuracy within a $10,000 budget; an extra $4,000 unlocks 81.8% accuracy, 2023d. * Li et al. (2023e) Li, Y., Fan, H., Hu, R., Feichtenhofer, C., and He, K. Scaling language-image pre-training via masking. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 23390–23400, 2023e. * Lin et al. (2014) Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. Microsoft coco: Common objects in context. In Fleet, D., Pajdla, T., Schiele, B., and Tuytelaars, T. (eds.), _Computer Vision – ECCV 2014_ , pp. 740–755, Cham, 2014. Springer International Publishing. ISBN 978-3-319-10602-1. * Liu et al. (2023) Liu, H., Li, C., Li, Y., and Lee, Y. J. Improved baselines with visual instruction tuning, 2023. * Loshchilov & Hutter (2017) Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. _arXiv preprint arXiv:1711.05101_ , 2017. * Maji et al. (2013) Maji, S., Rahtu, E., Kannala, J., Blaschko, M., and Vedaldi, A. Fine-Grained Visual Classification of Aircraft, June 2013. * Misra & van der Maaten (2020) Misra, I. and van der Maaten, L. Self-supervised learning of pretext-invariant representations. In _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pp. 6706–6716, Los Alamitos, CA, USA, jun 2020. IEEE Computer Society. 10.1109/CVPR42600.2020.00674. URL https://doi.ieeecomputersociety.org/10.1109/CVPR42600.2020.00674. * Moayeri et al. (2023) Moayeri, M., Rezaei, K., Sanjabi, M., and Feizi, S. Text-to-concept (and back) via cross-model alignment, 2023. * Moon et al. (2023) Moon, S., Madotto, A., Lin, Z., Nagarajan, T., Smith, M., Jain, S., Yeh, C.-F., Murugesan, P., Heidari, P., Liu, Y., Srinet, K., Damavandi, B., and Kumar, A. Anymal: An efficient and scalable any-modality augmented language model, 2023. * Mu et al. (2021) Mu, N., Kirillov, A., Wagner, D., and Xie, S. Slip: Self-supervision meets language-image pre-training, 2021. * Nilsback & Zisserman (2008) Nilsback, M.-E. and Zisserman, A. Automated Flower Classification over a Large Number of Classes. In _2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing_, pp. 722–729, Bhubaneswar, India, December 2008. IEEE. 10.1109/ICVGIP.2008.47. * Oquab et al. (2023) Oquab, M., Darcet, T., Moutakanni, T., Vo, H., Szafraniec, M., Khalidov, V., Fernandez, P., Haziza, D., Massa, F., El-Nouby, A., et al. Dinov2: Learning robust visual features without supervision. _arXiv preprint arXiv:2304.07193_ , 2023. * Parkhi et al. (2012) Parkhi, O. M., Vedaldi, A., Zisserman, A., and Jawahar, C. V. Cats and dogs. In _2012 IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 3498–3505, June 2012. 10.1109/CVPR.2012.6248092. * Paszke et al. (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. Pytorch: An imperative style, high-performance deep learning library. In _Advances in Neural Information Processing Systems 32_ , pp. 8024–8035. Curran Associates, Inc., 2019. * Purushwalkam & Gupta (2020) Purushwalkam, S. and Gupta, A.
Demystifying contrastive self-supervised learning: Invariances, augmentations and dataset biases. _CoRR_ , abs/2007.13916, 2020. URL https://arxiv.org/abs/2007.13916. * Radford et al. (2021) Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., and Sutskever, I. Learning Transferable Visual Models From Natural Language Supervision, February 2021. * Ramesh et al. (2021) Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. Zero-shot text-to-image generation. In _International Conference on Machine Learning_ , pp. 8821–8831. PMLR, 2021. * Recht et al. (2019) Recht, B., Roelofs, R., Schmidt, L., and Shankar, V. Do imagenet classifiers generalize to imagenet? In _International conference on machine learning_ , pp. 5389–5400. PMLR, 2019. * Russakovsky et al. (2015) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. ImageNet Large Scale Visual Recognition Challenge, January 2015. * Soomro et al. (2012) Soomro, K., Zamir, A. R., and Shah, M. UCF101: A dataset of 101 human actions classes from videos in the wild. _CoRR_ , abs/1212.0402, 2012. URL http://arxiv.org/abs/1212.0402. * Stallkamp et al. (2011) Stallkamp, J., Schlipsing, M., Salmen, J., and Igel, C. The German Traffic Sign Recognition Benchmark: A multi-class classification competition. In _IEEE International Joint Conference on Neural Networks_ , pp. 1453–1460, 2011. * Sun et al. (2023) Sun, Q., Fang, Y., Wu, L., Wang, X., and Cao, Y. Eva-clip: Improved training techniques for clip at scale, 2023. * Vaswani et al. (2023) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need, 2023. * Wang et al. (2022a) Wang, P., Yang, A., Men, R., Lin, J., Bai, S., Li, Z., Ma, J., Zhou, C., Zhou, J., and Yang, H. Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework, 2022a. * Wang et al. (2022b) Wang, Z., Yu, J., Yu, A. W., Dai, Z., Tsvetkov, Y., and Cao, Y. SimVLM: Simple visual language model pretraining with weak supervision. In _International Conference on Learning Representations_ , 2022b. URL https://openreview.net/forum?id=GUrhfTuf_3. * Xiao et al. (2010) Xiao, J., Hays, J., Ehinger, K. A., Oliva, A., and Torralba, A. SUN database: Large-scale scene recognition from abbey to zoo. In _2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition_ , pp. 3485–3492, June 2010. 10.1109/CVPR.2010.5539970. * Xu et al. (2023) Xu, H., Xie, S., Tan, X. E., Huang, P.-Y., Howes, R., Sharma, V., Li, S.-W., Ghosh, G., Zettlemoyer, L., and Feichtenhofer, C. Demystifying CLIP Data, October 2023. * Ye et al. (2020) Ye, W., Yao, J., Xue, H., and Li, Y. Weakly supervised lesion localization with probabilistic-cam pooling. _ArXiv_ , abs/2005.14480, 2020. URL https://api.semanticscholar.org/CorpusID:215776849. * Young et al. (2014) Young, P., Lai, A., Hodosh, M., and Hockenmaier, J. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. _Transactions of the Association for Computational Linguistics_ , 2:67–78, 2014. 10.1162/tacl_a_00166. URL https://aclanthology.org/Q14-1006. * Yu et al. (2022) Yu, J., Wang, Z., Vasudevan, V., Yeung, L., Seyedhosseini, M., and Wu, Y. Coca: Contrastive captioners are image-text foundation models, 2022. 
* Zhai et al. (2022) Zhai, X., Wang, X., Mustafa, B., Steiner, A., Keysers, D., Kolesnikov, A., and Beyer, L. Lit: Zero-shot transfer with locked-image text tuning, 2022. * Zhai et al. (2023) Zhai, X., Mustafa, B., Kolesnikov, A., and Beyer, L. Sigmoid Loss for Language Image Pre-Training, September 2023. * Zhang et al. (2021) Zhang, P., Li, X., Hu, X., Yang, J., Zhang, L., Wang, L., Choi, Y., and Gao, J. Vinvl: Revisiting visual representations in vision-language models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pp. 5579–5588, June 2021. * Zhou et al. (2022) Zhou, K., Yang, J., Loy, C. C., and Liu, Z. Conditional prompt learning for vision-language models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pp. 16816–16825, June 2022.

## Appendix

## 9 Training setup and hyperparameters

Table 4 compares the training setups: the training datasets, the number of samples seen, and the batch size across methods. Llip uses the same dataset as MetaCLIP and the same batch size and number of samples seen as MetaCLIP and CLIP. Notably, it sees fewer samples than the other baselines and uses a smaller dataset than SigLIP.

Table 4: Training protocol of the baselines and Llip: the dataset used, the number of samples seen during training, and the batch size.

| | data | samples seen | batch size |
|---|---|---|---|
| CLIP | WIT-400M | 12.8B | 32K |
| SigLIP | WebLI-10B | 40B | 32K |
| OpenCLIP | LAION-2B | 39B | 160K |
| MetaCLIP | MetaCLIP-2.5B | 12.8B | 32K |
| EVA CLIP | LAION-2B | 11B+9B | 144K |
| Llip | MetaCLIP-2.5B | 12.8B | 32K |

The hyperparameters used for our method are precisely the same as those used for training MetaCLIP and CLIP, with the only exceptions being the $\beta_{2}$ parameter of Adam, set to $0.95$, and the initialization of the learnable scale and of the additional bias, which is $-10$ as in SigLIP. For zero-shot evaluation, an image has to be encoded with the target caption. Since every target is encoded with every image and we do not know a priori which is the right target, the ground-truth target cannot leak into the prediction. To reduce the compute and memory overhead in zero-shot classification, we average the text predictions and the cross-attention queries over the template axis.

## 10 Additional Results

### 10.1 Robustness

In Table 5, we show additional results on robustness benchmarks, including out-of-distribution ImageNet variants, across model sizes. We also show performance on geographic diversity, broken down by region and model type, as well as attributes from MIT-States, in Table 6. We find that, while the larger Llip model was not tuned with respect to the temperature parameter, a properly tuned Llip outperforms the baselines across all DollarStreet regions with a smaller encoder.

Table 5: Robustness results on ViT-B/32, ViT-B/16 and ViT-L/14.

| | Average | Val | V2 | Sketch | R | W | A |
|---|---|---|---|---|---|---|---|
| _ViT-B/32:_ | | | | | | | |
| SigLIP | 57.8 | 67.3 | 59.1 | 56.2 | 76.7 | 58.4 | 28.9 |
| Llip$_{128}$ | 62.8 | 71.2 | 62.9 | 60.6 | 82.6 | 62.9 | 36.3 |
| _ViT-B/16:_ | | | | | | | |
| SigLIP | 66.0 | 72.1 | 65.0 | 61.2 | 84.0 | 65.4 | 48.3 |
| Llip$_{64}$ | 69.7 | 75.3 | 68.3 | 63.8 | 86.6 | 69.2 | 55.0 |
| _ViT-L/14:_ | | | | | | | |
| MetaCLIP* | 76.6 | 79.2 | 72.5 | 68.9 | 91.8 | 75.4 | 72.0 |
| Llip$_{32}$ | 79.1 | 80.9 | 74.8 | 70.5 | 93.6 | 78.0 | 76.7 |

Table 6: Diversity across geographies.
| | Africa | Asia | Europe | Americas | Overall Top5 |
|---|---|---|---|---|---|
| _ViT-B/16:_ | | | | | |
| MetaCLIP | 70.38 | 80.85 | 84.12 | 82.17 | 79.65 |
| SigLIP | 74.21 | 80.02 | 84.45 | 82.08 | 79.94 |
| Llip$_{64}$ | 74.38 | 81.26 | 85.45 | 83.17 | 80.93 |
| _ViT-L/14:_ | | | | | |
| MetaCLIP | 79.23 | 85.66 | 88.42 | 87.87 | 85.26 |
| Llip$_{32}$ | 76.94 | 84.44 | 86.33 | 85.61 | 83.55 |

Table 7: Scene and video understanding. We compare MetaCLIP to Llip on two scene understanding tasks (CLEVRCount, SUN397) and two video understanding tasks. Both models use a ViT-L/14 encoder. While Llip is competitive on both types of tasks, the results show that the gains of Llip are more pronounced on the video understanding tasks. MetaCLIP performance is reported from 1: (Xu et al., 2023).

| | CLEVR | SUN397 | Scene Avg | KITTI | UCF101 | Video Avg |
|---|---|---|---|---|---|---|
| MetaCLIP1 | 25.9 | 73.6 | 49.8 | 29.6 | 81.6 | 55.6 |
| Llip$_{32}$ | 25.5 | 74.3 | 49.9 | 34.7 | 84.5 | 59.6 |

### 10.2 Scene and video understanding

In Table 7, we focus specifically on scene and video understanding. We compare MetaCLIP to Llip on two scene understanding tasks (CLEVRCount, SUN397) and two video understanding tasks (KITTI, UCF101). We find that the gains of Llip are more pronounced on the video understanding tasks, where the model obtains $+5.0\%$ on KITTI and $+2.8\%$ on UCF101.

### 10.3 Using image tokens in the cross-attention

While the input to Llip's vision encoder is always $P$ image tokens and $K$ additional visual mixture tokens, in the standard version of Llip we only use the outputs of the visual mixture tokens in the cross-attention (Equation 2). In this experiment, we also include the outputs of the image patch tokens at the last layer of the ViT together with the visual mixture tokens in the cross-attention (so $P+K$ tokens are used in total). We use Llip with a ViT-B/32, for which we have $P=49$ image patch tokens, and we report results on ImageNet zero-shot classification varying the number of visual mixture tokens $K$ in Figure 7. We train the model with temperature $\tau=1$. We can see a similar trend as in Figure 6: the model performance increases with the number of mixture tokens $K$. Moreover, Llip with a smaller number of additional visual mixture tokens, $K=32$ (see Figure 6), is more effective than Llip using $P=49$ image patch tokens and $K=1$ mixture token (note that in the latter case the total number of tokens used in the cross-attention is higher; however, the number of additional mixture tokens used affects the performance more). We hypothesize that the additional learnable tokens enable learning more expressive features, leading to stronger performance.

Figure 7: Using image patch tokens together with additional visual mixture tokens in Llip. We report zero-shot top-1 ImageNet accuracy against the number of visual mixture tokens for a ViT-B/32 visual encoder. We train Llip with temperature $\tau=1$. Similarly to the results in Figure 6, increasing the number of mixture tokens improves downstream performance.

### 10.4 Comparison of the compute time vs. accuracy of Llip with CLIP

Inference time. Figure 1(b) shows that the additional number of FLOPs for making an ImageNet prediction with Llip becomes marginal compared to CLIP as we scale up the encoder size. The same conclusion can be made with respect to the inference time for making an ImageNet prediction.
In Figure 8(a), we report the inference time for IN1K zero-shot classification (1000 prompts per image). Llip's inference time is slightly higher than CLIP's for the same model size, while yielding a 1.7% improvement on zero-shot IN1K with a ViT-L/14, 2.2% with a ViT-H/14, and 1.4% with a ViT-G/14. Additionally, Llip outperforms larger CLIP models while requiring a significantly lower inference time.

(a) Zero-shot ImageNet top-1 accuracy against the inference time of inferring one ImageNet sample for vision encoders of various sizes. (b) Effect of increasing the number of mixture tokens on the estimated amount of compute required for pre-training a ViT-G/14 backbone using the training recipe of (Radford et al., 2021). We find that the biggest additional cost of pre-training Llip comes from the additional mixture tokens in the vision transformer. The cost of computing the objective function is negligible. Figure 8: Analysis of the compute overhead of using Llip's contextualization for (a) zero-shot inference vs. ImageNet's zero-shot transfer accuracy and (b) estimated pre-training GPU hours of Llip compared to CLIP.

Pre-training GPU hours. In Figure 8(b), we present the number of GPU hours it takes to pre-train Llip and MetaCLIP for different numbers of mixture tokens. To estimate the GPU hours, we compute the number of samples processed per hour on one A100 and extrapolate it to obtain the time it takes to process 12.8B samples. While we see an increasing cost for pre-training Llip, this increase is not due to the objective of Llip. The cost of pre-training CLIP and Llip with the ViT-G/14 is almost identical when we fix the number of mixture tokens processed by the vision transformer. Thus, the additional cost does not come from the contextualization per se, but from the additional computation of the mixture tokens.
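For reference, the extrapolation just described amounts to the following back-of-the-envelope computation; the throughput value below is a placeholder for illustration, not a measured number:

```python
# Sketch of the GPU-hour estimate: measure single-GPU throughput, then
# extrapolate to the full 12.8B samples seen during pre-training.
def estimate_gpu_hours(samples_per_hour_per_gpu: float,
                       total_samples: float = 12.8e9) -> float:
    """Total GPU hours to process `total_samples` at the measured throughput."""
    return total_samples / samples_per_hour_per_gpu

# e.g., if one A100 sustained 1.5M samples/hour (hypothetical), pre-training
# would cost 12.8e9 / 1.5e6 ≈ 8533 GPU hours; wall-clock time divides by the
# number of GPUs used.
print(estimate_gpu_hours(1.5e6))
```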
# RiemannONets: Interpretable Neural Operators for Riemann Problems

Ahmad Peyvan Vivek Oommen Ameya D. Jagtap George Em Karniadakis

###### Abstract

Developing the proper representations for simulating high-speed flows with strong shock waves, rarefactions, and contact discontinuities has been a long-standing question in numerical analysis. Herein, we employ neural operators to solve Riemann problems encountered in compressible flows for extreme pressure jumps (up to a $10^{10}$ pressure ratio). In particular, we first consider the DeepONet, which we train in a two-stage process following the recent work of [1]: in the first stage, a basis is extracted from the trunk net and orthonormalized; this basis is subsequently used in the second stage to train the branch net. This simple modification of DeepONet has a profound effect on its accuracy, efficiency, and robustness, and leads to very accurate solutions to Riemann problems compared to the vanilla version. It also enables us to interpret the results physically, as the hierarchical data-driven basis reflects all the flow features that would otherwise be introduced using ad hoc feature expansion layers. We also compare the results with another neural operator based on U-Net for low, intermediate, and very high pressure ratios; it is very accurate for Riemann problems, especially for large pressure ratios, due to its multiscale nature, but computationally more expensive. Overall, our study demonstrates that simple neural network architectures, if properly pre-trained, can achieve very accurate solutions of Riemann problems for real-time forecasting.

###### keywords: Neural operator networks, Riemann problems, Compressible flows, DeepONet, U-Net, Data-driven basis

organization=Division of Applied Mathematics, 182 George Street, Brown University, city=Providence, state=RI, postcode=02912, country=USA

## 1 Introduction

In recent years, data-driven modeling methods have shown great potential for solving many challenging problems in the fields of computational science and engineering. Some of these methods, in particular, can have a large impact on large-scale computational problems and real-time forecasting, as they are efficient and, once trained, can be used for the same or similar tasks repeatedly. Neural operators are a new paradigm for learning nonlinear mappings between input and output functions. Various neural operators are available in the literature. The Deep Operator Network (DeepONet), developed by Lu et al. [2] (first published in 2019 in [3]), is the first proposed neural operator. DeepONet has been used to solve many problems in computational science and engineering, such as stiff chemical kinetics [4], multiscale bubble dynamics [5], brittle fracture analysis [6], two-phase microstructural evolution [7], solar-thermal systems forecasting [8], electroconvection [9], etc. In addition, several extensions of DeepONet have been proposed in recent studies, including Partition-of-Unity (PoU) based DeepONet [4], physics-informed DeepONet [10], DeepONet with proper orthogonal decomposition (POD-DeepONet) [11, 12], multifidelity DeepONet [13, 14], DeepONet with UQ [15, 16, 17], multiscale DeepONet [18], etc. Other neural operator networks have also been proposed in the literature, such as the Fourier neural operator (FNO) [19], the wavelet neural operator (WNO) [20], the spectral neural operator (SNO) [21], the convolutional neural operator [22], etc.
The main difference between DeepONet and the aforementioned operators is that DeepONet learns a new basis (through the trunk net) to represent the operator, whereas other operators, e.g., FNO, use a pre-specified basis, e.g., Fourier expansions. When implemented on a computer, not all models behave as operators, raising doubts about what operator learning actually is. To address this, [23] proposed a unifying mathematical framework for representation equivalent neural operators (ReNO) to ensure that operations at the continuous and discrete levels are equivalent. In their recent work, Lee and Shin [1] proposed a novel two-step training procedure for DeepONet. The newly introduced sequential two-step training approach begins with trunk network training, which includes Gram-Schmidt orthonormalization via QR-factorization, and then advances to branch network training. U-Net-based operators [24, 25, 26, 27] are another class of neural operators that can be particularly effective for approximating mathematical operators due to their inherent multi-scale nature. U-Net is a U-shaped fully convolutional neural network [28], first proposed for biomedical image segmentation tasks. The authors of [24, 25, 26] demonstrated that conditioning the U-Net with respect to time can significantly improve the predictions of the vanilla U-Net. In our current work, we extend this idea and condition the U-Net with respect to the initial pressure and temperature states. Neural operators have been successfully used to solve high-speed viscous flows. In [29], Mao et al. employed DeepONet to solve hypersonic viscous flows that exhibit high-gradient solutions without exact shocks. In this work, we use DeepONet to solve the Riemann problem for the compressible Euler equations of gas dynamics. To the best of our knowledge, this is the first attempt to solve the Riemann problem, which has discontinuous solutions, using neural operator networks. Specifically, the Riemann problem is a hyperbolic partial differential equation with a discontinuous initial solution. The inherent complexity of dealing with discontinuities makes these problems particularly challenging to solve. Notably, the accuracy of the solution tends to deteriorate in the vicinity of shocks and contact waves. In summary, the following are our main contributions:

* • Our study leverages the capabilities of deep neural operators to investigate their efficacy in mapping input pressure ratios to the final solution at a specified time. The final solution encompasses primitive field variables such as density, pressure, and velocity.
* • We assess the impact of activation functions on prediction accuracy by investigating both fixed and adaptive (Rowdy) activation functions [30].
* • We explore the performance of two training strategies for RiemannONet: the traditional vanilla approach (one-step training) and the recently proposed novel two-step training method [1].
* • We enforce positivity-preserving constraints during the training of RiemannONet, a crucial consideration grounded in the governing physical principles.
* • We systematically compare two types of neural operators for solving Riemann problems: 1) a modified DeepONet architecture, and 2) a U-Net conditioned on the pressure and temperature initial conditions.
* • We obtain interpretable basis functions for such discontinuous solutions. To this end, we employ QR and SVD methods to investigate the solution spectrum and diverse bases.
Furthermore, we conduct a comprehensive investigation into these basis functions, delving into the influence of network architecture on their structure. The structure of this paper unfolds as follows: Section 2 elucidates the governing equations for the Riemann problem. In Section 3, we describe the two neural operators employed in RiemannONets. Section 4 is dedicated to an in-depth discussion of the results obtained across various test cases. Section 5 delves into the exploration of hierarchical and interpretable basis functions for representing the two neural operators. Our conclusions and a summary of our findings are encapsulated in Section 6. ## 2 Governing Equations In this study, we consider the Riemann problem for the one-dimensional hyperbolic Euler equations in the general form $\frac{\partial\mathbf{U}(x,t)}{\partial t}+\frac{\partial\mathbf{F}(\mathbf{U})}{\partial x}=\mathbf{S}(x,t),\quad t\in[0,t_{f}],\quad\textrm{and}\quad\mathbf{U}(x,0)=\Bigg\{\begin{array}{lc}\mathbf{U}_{L}&x\leq x_{c}\\ \mathbf{U}_{R}&x>x_{c}\end{array}$ (1) where $t$ is time, $x$ is the spatial coordinate, $\mathbf{U}$ indicates the vector of conservative variables, $\mathbf{F}$ denotes the corresponding advective fluxes, and $\mathbf{S}$ is a vector of source terms. The hyperbolic Euler equations are subject to a discontinuous initial condition composed of two constant states of the conservative variable vector, $\mathbf{U}_{L}$ and $\mathbf{U}_{R}$, separated at $x=x_{c}$. We are interested in determining the solution at a final time $t=t_{f}$. The definitions of the conservative variable, flux, and source term vectors are given below. The Euler equations take the form of Eq. (1) with $\mathbf{S}=\mathbf{0}$ and the conservative variable and flux vectors defined as $\mathbf{U}=\begin{pmatrix}\rho\\ \rho u\\ \rho E\end{pmatrix}^{T},\quad\mathbf{F}=\begin{pmatrix}\rho u\\ \rho u^{2}+p\\ u(\rho E+p)\end{pmatrix}^{T},$ (2) respectively. In Eq. (2), the total energy is defined as $\rho E=\frac{p}{\gamma-1}+\frac{1}{2}\rho u^{2},$ (3) where $\gamma=1.4$ for all the test cases without chemistry. In Eqs. (2) and (3), $\rho$ is density, $p$ is pressure, and $u$ indicates the velocity. ## 3 Methodology Within this section, we introduce two distinct neural operators: DeepONet and U-Net. We provide a comprehensive overview of the modifications essential for their effective training within the framework of RiemannONets. ### 3.1 Deep Operator Networks The development of modern machine learning models has made it possible to create rapid simulators for solving parametric PDEs. Instead of solving the PDE explicitly, a surrogate model of the PDE solution operator that can repeatedly produce PDE solutions for varied initial and boundary conditions is frequently required, e.g., in inverse problems, in design, in uncertainty quantification, etc. This promise is fulfilled by the novel idea of neural operators, which was first proposed in 2019 in [3] in the form of DeepONet [2] and is inspired by the universal approximation theorem of operators. In this section, we discuss the architecture of the DeepONet for the Riemann problem, which we name RiemannONet. Figure 1: Schematic representation of RiemannONet for the Sod problem. The input to the trunk net is the sensor location in spatial dimensions, while the input to the branch net is different realizations of the left-side pressure (keeping the right-side pressure fixed).
The output of RiemannONet is the primitive variables consisting of velocity, density, and pressure at the final time. #### 3.1.1 Training procedure ##### Single step Our RiemannONet consists of two networks, namely, the trunk net and the branch net; see Fig. 1. In the branch net, the encoded input is the different realizations of the input pressure $p_{l}$ (only the left side, as the right side has a fixed value), whereas the trunk net input consists of the spatial coordinate $x$. The aim is to learn the solution (primitive variables) at a later time. The output of the network is given by $U_{r,s,v}^{NN}=\sum_{l}B_{r,l,v}\cdot T_{s,l},$ (4) where $U_{r,s,v}^{NN}$ is the profile of the $v^{th}$ primitive variable, $B_{r,l,v}$ is the output of the branch net, and $T_{s,l}$ is the output of the trunk net. Here $l$ is the index of the latent dimension, which is a hyperparameter. RiemannONet is trained on a data set consisting of input-output pairs, which is divided into training and testing datasets. The loss function consists of a data mismatch term: $\mathcal{J}(\Theta)=\left|\left|\mathbf{U}^{\text{Exact}}-\mathbf{U}^{NN}(\Theta)\right|\right|_{L_{2}},$ (5) where $\Theta$ represents all the trainable parameters of the network. ##### Two-step The training of DeepONet can be split into two stages employing the approach of Lee and Shin [1]. First, the trunk net is trained by minimizing the loss function $\mathcal{L}^{t}(\Theta)=\left|\left|\sum_{l}T_{l}(\Theta)\mathcal{A}_{l}-\mathbf{U}^{\text{Exact}}\right|\right|_{L_{2}},$ (6) where $\Theta$ represents all the trainable parameters of the trunk network and $\mathcal{A}_{l}$ denotes the elements of a trainable matrix defined as $\mathbf{A}\in\mathbb{R}^{r,vK}$ with $K$ the number of neurons of the output layer of the trunk net. In Eq. (6), the index $l$ iterates over the neurons in the final layer of the trunk network. Let $\Theta^{*}$ and $\mathbf{A}^{*}$ be the optimal values of the trainable parameters. The second step consists of training the branch net by first performing a QR-factorization of the trunk net represented by the matrix $\mathbf{T}(\Theta^{*})$ using the following formula: $\mathbf{Q}^{*}\mathbf{R}^{*}=\mathbf{qr}\left(\mathbf{T}(\Theta^{*})\right),$ (7) where $\mathbf{qr}$ represents the QR-factorization function. In this study, we replaced the QR-factorization with the SVD decomposition, since the SVD approach provides a unique solution and generates a hierarchical set of orthonormal basis functions. We can construct the equivalent $\mathbf{Q}$ and $\mathbf{R}$ matrices for the SVD method as $\mathbf{U}\mathbf{S}\mathbf{V}^{T}=\mathbf{svd}\left(\mathbf{T}(\Theta^{*})\right),\quad\mathbf{Q}^{*}=\mathbf{U},\quad\textrm{and}\quad\mathbf{R}^{*}=\mathbf{S}\mathbf{V}^{T}.$ (8) Therefore, we can replace the QR with the SVD decomposition without any other modification. The columns of matrix $\mathbf{Q}^{*}$ form a set of orthonormal basis functions. Next, the branch net is trained to match $\mathbf{R}^{*}\mathbf{\hat{A}}^{*}$, where $\mathbf{\hat{A}}$ is the reshape of matrix $\mathbf{A}$ with $\mathbf{\hat{A}}\in\mathbb{R}^{r,K,v}$. The matrix multiplication occurs along the second axis with a dimension of $K$. Hence, we minimize the loss function for the training of the branch net as $\mathcal{L}^{b}(\mu)=\left|\left|\mathbf{B}(\mu)-\mathbf{R}^{*}\mathbf{\hat{A}}^{*}\right|\right|_{L_{2}},$ (9) where $\mathbf{B}$ represents the branch network with $\mathbf{B}\in\mathbb{R}^{r,K,v}$.
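As a minimal illustration, the trunk factorization step of Eqs. (7)-(8) can be sketched in NumPy as follows; here `trunk_out` is assumed to be the trained trunk-net output $\mathbf{T}(\Theta^{*})$ evaluated on the training grid, and the function names are ours, not the authors' code.

```python
import numpy as np

def factorize_trunk(trunk_out, method="svd"):
    """Return (Q_star, R_star) with trunk_out = Q_star @ R_star (Eqs. (7)-(8))."""
    if method == "qr":
        Q_star, R_star = np.linalg.qr(trunk_out)
    else:  # SVD yields a unique, hierarchical orthonormal basis
        U, S, Vt = np.linalg.svd(trunk_out, full_matrices=False)
        Q_star, R_star = U, np.diag(S) @ Vt
    return Q_star, R_star

def branch_target(R_star, A_star):
    """Second stage: the branch net B(mu) is regressed onto R* A* (Eq. (9))."""
    return R_star @ A_star
```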
We can now construct the trained DeepONet model by composing the trunk network defined as $\mathbf{\hat{T}}=\mathbf{T}(\Theta^{*})\left(\mathbf{R}^{*}\right)^{-1}$ with the trained branch network $\mathbf{B}(\mu^{*})$, where $\mu^{*}$ is the optimal value of the branch network parameters that minimizes the loss function in Eq. (9). #### 3.1.2 Rowdy activation functions Adaptive activation functions are state-of-the-art activation functions that can give superior performance with respect to their fixed counterparts [31]. Various adaptive activation functions have been proposed for deep and physics-informed neural networks [32]; see, for example, the work of Jagtap et al. [33, 34]. In this work, we employed the Rowdy activation function [30]. These adaptive activation functions have been successfully employed to solve problems involving high-frequency complex structures. In the Rowdy activation function, the base activation function $\phi_{1}$ is any standard activation function, such as the hyperbolic tangent. The remaining activation functions $\phi_{k},k=2,\cdots,K$ are defined as sine functions as follows: $\phi_{k}(x)=n\sin((k-1)nx),$ where $n$ is the scaling factor, and here we select $n=10$. In this work, we modify the Rowdy activation function by adding a shift parameter to the base and sine functions. We choose $K=2$ and define the activation function as $g(x)=h\left(10a\,x+c\right)+10a_{1}\sin\left(10F_{1}\,x+c_{1}\right),$ (10) where $x$ is the input to the activation function and $h\left(10a\,x+c\right)$ denotes the base function of the Rowdy activation, which is either $\cos\left(10a\,x+c\right)$ or $\tanh\left(10a\,x+c\right)$ for the various Riemann problems in this study. The Rowdy adaptive activation functions incorporate five trainable parameters: $a$, $c$, $a_{1}$, $F_{1}$, and $c_{1}$. The $a$ and $c$ coefficients are initialized with a constant value of $0.1$. The $a_{1}$ coefficient is the amplitude parameter and is initialized as zero. The phase shift parameter $c_{1}$ is also initialized to zero. $F_{1}$ is the frequency parameter and is initialized with a constant value of $0.1$. #### 3.1.3 Positivity preservation for density and pressure Ensuring the positivity of density and pressure helps maintain the physical integrity of these conservation laws. Neglecting these constraints can lead to physically unrealistic results and numerical instabilities in simulations and calculations. In this work, we enforce the positivity conditions of pressure and density during the training of the neural network. For each epoch, the predicted density and pressure are forced to be above a small positive value, $\epsilon=10^{-10}$. The same positivity conditions have been enforced in the work of Jagtap et al. [35], where they employed physics-informed neural networks to solve inverse problems in supersonic flows; see also [36, 37], and the work of Peyvan et al. [38] for hypersonic flow simulations.
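To make Eq. (10) and the positivity clamp concrete, the following is a minimal PyTorch sketch; the module name, the choice of $\tanh$ as the base function $h$, and the use of a clamp are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class RowdyActivation(nn.Module):
    """g(x) = h(10*a*x + c) + 10*a1*sin(10*F1*x + c1), cf. Eq. (10), with h = tanh."""
    def __init__(self):
        super().__init__()
        # Initializations follow the text: a, c, F1 = 0.1; a1, c1 = 0.
        self.a = nn.Parameter(torch.tensor(0.1))
        self.c = nn.Parameter(torch.tensor(0.1))
        self.a1 = nn.Parameter(torch.tensor(0.0))
        self.F1 = nn.Parameter(torch.tensor(0.1))
        self.c1 = nn.Parameter(torch.tensor(0.0))

    def forward(self, x):
        return torch.tanh(10 * self.a * x + self.c) + \
               10 * self.a1 * torch.sin(10 * self.F1 * x + self.c1)

def enforce_positivity(rho_pred, p_pred, eps=1e-10):
    """Keep predicted density and pressure above a small positive value."""
    return rho_pred.clamp(min=eps), p_pred.clamp(min=eps)
```

### 3.2 U-Net with parameter conditioning

The U-Net is a multiscale convolutional network architecture that is extensively used for solving image segmentation problems [28]. Gupta et al. [24] introduced the temporal conditioning mechanism that enables the U-Net to learn time-dependent systems. The efficiency of this approach was further extended and demonstrated in [25, 26].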
In this work, we extend the same idea and condition the U-Net with respect to the pressure states on the left side of the initial condition profile. The model consists of two networks: 1) a U-shaped fully convolutional neural network; and 2) a multi-layer perceptron (MLP). The MLP takes $p_{l}$ as the input and learns a collection of functions $\vec{f}(p_{l})$ as the output. The MLP used in this work consists of 2 hidden layers with 128 neurons and learns non-linear functions of $p_{l}$ as follows: $\vec{f}(p_{l})=w^{MLP}_{2}\sin(w^{MLP}_{1}\sin(w^{MLP}_{0}p_{l}+b^{MLP}_{0})+b^{MLP}_{1})+b^{MLP}_{2},$ (11) where $\{w^{MLP}_{i},b^{MLP}_{i}\}_{i=0}^{2}$ represents the parameters learned during the training of the operator. The U-Net takes a subset of reference fields from the training dataset ($U^{ref}$) as the input. $U^{ref}$ is a collection of discrete representations of density ($\rho$), velocity ($v$), and pressure ($p$) fields sub-sampled from the already available training dataset. In this context, $U^{ref}$ is a two-dimensional tensor with $C$ channels and a spatial dimension of $W$. The U-Net learns to project $U^{ref}$ to multiple basis functions of the same spatial resolution using a convolutional block that comprises 1) a 1D convolutional layer [39], 2) a group normalization layer [40], and 3) a non-linear activation layer. The one-dimensional convolution is performed on a two-dimensional tensor $u$, using a three-dimensional weight tensor $w^{conv}$ with an input channel size of $C$, an output channel size of $C^{\prime}$, and a kernel width of $W^{\prime}$, in the following manner: $\text{conv}(u)_{k^{\prime},i}=\sum_{k=0}^{C-1}\sum_{m=0}^{W^{\prime}-1}u_{k,i+m}\,w^{conv}_{k,k^{\prime},m},$ (12) where $w^{conv}$ is the weight tensor learned during the training process. For the group normalization operation on a two-dimensional tensor $u$, we separate the $C$ channels into $G$ groups of $\tilde{C}$ channels ($C=G\times\tilde{C}$) and compute $G$ means and standard deviations separately as $\mu_{g}=\frac{1}{\tilde{C}W}\sum_{\tilde{k}=0}^{\tilde{C}-1}\sum_{m=0}^{W-1}u_{g,\tilde{k},m},\quad g=0,1,\ldots,G-1,$ (13) $\sigma_{g}^{2}=\frac{1}{\tilde{C}W}\sum_{\tilde{k}=0}^{\tilde{C}-1}\sum_{m=0}^{W-1}(u_{g,\tilde{k},m}-\mu_{g})^{2},\quad g=0,1,\ldots,G-1.$ (14) We then normalize $u$ as $\hat{u}_{g,\tilde{k},i}=\frac{u_{g,\tilde{k},i}-\mu_{g}}{\sqrt{\sigma_{g}^{2}+\epsilon}},\quad g=0,1,\ldots,G-1,$ (15) where $\epsilon$ is a small positive constant. The output of the group normalization operation is $\text{GN}(u)_{k,i}=\gamma_{k}\,\hat{u}_{k,i}+\beta_{k},\quad k=0,1,\ldots,C-1.$ (16) Here, $\gamma_{k}$ and $\beta_{k}$ are trainable $C$-dimensional parameters that learn the ideal shift and scale operations. The conv($u$) and GN($u$) are linear transformations of $u$. We introduce non-linearity using the Gaussian Error Linear Unit (GELU) activation function [41], which can be approximately represented as $\text{GELU}(u)=0.5u\left(1+\tanh\left(\sqrt{\frac{2}{\pi}}(u+0.044715u^{3})\right)\right).$ (17) The convolutional block that non-linearly transforms $u$ can be represented as $\text{conv\_block}(u)=\text{GELU}(\text{GN}(\text{conv}(u))).$ (18) The downsampling operation is performed by a one-dimensional max-pooling layer [42]. This operation is expressed as $\text{down}(u)_{k,i}=\max_{0\leq m<W^{\prime}}u_{k,\,iS_{w}+m},$ (19) where $S_{w}$ represents the stride along the width.
In order to obtain a scale-down factor of 2 during the downsampling operation, $S_{w}=W^{\prime}=2$ during max-pooling. The up-sampling is performed by the one-dimensional transpose convolutional operation [43] in the following manner: $\text{up}(u)_{k,i}=\sum_{m=0}^{W^{\prime}-1}u_{k,\,iS_{w}+m}\,w^{tconv}_{m},$ (20) where $w^{tconv}$ is a trainable weight tensor with kernel width $W^{\prime}$. To achieve a scale-up factor of 2, $S_{w}=W^{\prime}=2$. Borrowing the notation from [25], the latent representations of $U^{ref}$ can be expressed as $\vec{z}^{\mathcal{L}_{p}}=\begin{cases}\text{conv\_block}(U^{ref})&\text{if $p$=1}\\ \text{conv\_block}(\text{down}(\vec{z}^{\mathcal{L}_{p-1}}))&\text{if $p$=2,3,4}\end{cases}$ (21) The $\vec{z}^{\mathcal{L}_{p}}$ can be interpreted as the learned basis functions at 4 different scales. We further visualize the eigenspectrum and analyze the eigenmodes of the multiscale basis functions in Section 5.2. Next, we condition the latent basis functions $\vec{z}^{\mathcal{L}_{p}}$ on the pressure initialization $p_{l}$ using an element-wise product operation as shown below: $\vec{z}^{\mathcal{L}_{p}}_{p_{l}}=\vec{z}^{\mathcal{L}_{p}}\odot w^{\mathcal{L}_{p}}\vec{f}(p_{l})\implies(z^{\mathcal{L}_{p}}_{p_{l}})_{k,i}=(z^{\mathcal{L}_{p}})_{k,i}\,(w^{\mathcal{L}_{p}}\vec{f}(p_{l}))_{k}\quad\forall p,$ (22) where $w^{\mathcal{L}_{p}}$ linearly projects $\vec{f}(p_{l})$ to the number of channels at the $p^{th}$ latent level before conditioning the latent representation. We note that the conditioned latent variable $\vec{z}^{\mathcal{L}_{p}}_{p_{l}}$ is discrete with respect to space and continuous with respect to the parameter $p_{l}$. Next, we upsample the conditioned latent vectors in the following manner: $\vec{d}^{\mathcal{L}_{p}}_{p_{l}}=\begin{cases}\text{up}(\text{conv\_block}(\vec{z}^{\mathcal{L}_{p+1}}_{p_{l}}))&\text{if $p$=3}\\ \text{up}(\text{conv\_block}(\vec{z}^{\mathcal{L}_{p+1}}_{p_{l}}\mathrel{\ooalign{$\bigcirc$\cr$+$\cr}}\vec{d}^{\mathcal{L}_{p}}_{p_{l}}))&\text{if $p$=1,2},\end{cases}$ (23) where $\mathrel{\ooalign{$\bigcirc$\cr$+$\cr}}$ concatenates the tensors along the channel dimension. The output of the model is computed as $U(p_{l})=\text{GN}(\text{conv}(\text{conv\_block}(\vec{z}^{\mathcal{L}_{1}}_{p_{l}}\mathrel{\ooalign{$\bigcirc$\cr$+$\cr}}\vec{d}^{\mathcal{L}_{1}}_{p_{l}}))).$ (24) A schematic of the architecture used in this study is presented in Fig. 2. Figure 2: U-Net conditioned on the pressure initialization ($p_{l}$). $U^{ref}\subseteq U^{\text{train}}$ is provided as input to a U-Net which behaves like a multi-scale neural operator. The output of each encoder block, $\vec{z}^{\mathcal{L}_{p}}$, is conditioned on the parameter $p_{l}$ through an element-wise product operation. The corresponding representation is concatenated with the previous decoder block’s output ($\vec{d}^{\mathcal{L}_{p-1}}_{p_{l}}$), and subsequently projected back as the output of the model.
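To make the conditioning mechanism concrete, below is a minimal PyTorch sketch of the convolutional block of Eq. (18) and the element-wise conditioning of Eq. (22); channel counts, kernel sizes, and module names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """conv_block(u) = GELU(GN(conv(u))), cf. Eq. (18)."""
    def __init__(self, c_in, c_out, groups=8):
        super().__init__()
        self.conv = nn.Conv1d(c_in, c_out, kernel_size=3, padding=1)
        self.norm = nn.GroupNorm(groups, c_out)
        self.act = nn.GELU()

    def forward(self, u):  # u: (batch, C, W)
        return self.act(self.norm(self.conv(u)))

class Conditioner(nn.Module):
    """Projects the MLP features f(p_l) to the channel count of a latent
    level and scales the latent basis functions element-wise (Eq. (22))."""
    def __init__(self, f_dim, channels):
        super().__init__()
        self.proj = nn.Linear(f_dim, channels)

    def forward(self, z, f_pl):  # z: (batch, C, W), f_pl: (batch, f_dim)
        scale = self.proj(f_pl).unsqueeze(-1)   # (batch, C, 1)
        return z * scale                        # broadcast over width

# Usage sketch: one encoder level of the conditioned U-Net.
block = ConvBlock(c_in=3, c_out=32)
cond = Conditioner(f_dim=128, channels=32)
u_ref = torch.randn(4, 3, 200)   # (rho, u, p) reference fields on 200 points
f_pl = torch.randn(4, 128)       # MLP features of p_l
z1 = block(u_ref)
z1_cond = cond(z1, f_pl)
down = nn.MaxPool1d(kernel_size=2, stride=2)(z1_cond)  # Eq. (19)
```

The same conditioning module is applied at every encoder level, mirroring Eq. (22).

## 4 Results

In the subsequent test cases, we utilize the RiemannONets to establish a mapping from the initial pressure value on the left side of the membrane to the density, velocity, and pressure profiles at a final time. This mapping is performed across 200 equidistant coordinate points situated within the one-dimensional physical domain. The RiemannONets refer to either the DeepONet or the U-Net architectures.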
Table 1 presents an overview of the results of this study and provides a comprehensive comparison of several deep neural operators trained and applied for the inference of three Riemann problems. Here, we investigate low-pressure ratio (LPR), intermediate-pressure ratio (IPR), and high-pressure ratio (HPR) Sod problems. For each test case in Table 1, we train the neural operators with ten different initializations of weights and biases and present the mean and standard deviation of the $L_{2}$ norm of the error across the ten ensembles. The mean training time of the 10 ensembles is also shown in Table 1. The training time is computed by deploying the code on NVIDIA RTX 3090 GPUs with the Ampere architecture and 24 GB of on-card memory. Both branch and trunk nets consist of five hidden layers, each with a width of 150 neurons for the two-step training DeepONet and 50 for the vanilla DeepONet. Using a single shared trunk net, we set up the branch net to infer three variables: density, velocity, and pressure. The relative $L_{2}$ norm of the error is computed over the entire test data set. Each sample in the test data set contains 200 density, velocity, and pressure values corresponding to 200 coordinate points. Let $\mathcal{D}$ be a third-order tensor indicating the inferred solution of the test data set with the shape of $\left(N_{s},N_{p},3\right)$, where $N_{s}=100$ is the number of samples, $N_{p}=200$ is the number of points, and 3 refers to the density, velocity, and pressure. The relative $L_{2}$ norm for each quantity of interest is computed as: $L_{2}\left(E_{k}\right)=\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}\frac{\sqrt{\sum_{j=1}^{N_{p}}\left(D_{i,j,k}-G_{i,j,k}\right)^{2}}}{\sqrt{\sum_{j=1}^{N_{p}}G_{i,j,k}^{2}}},\quad k=1,2,3,$ (25) where $E_{k}$ denotes the point-wise error of the predicted density ($k=1$), velocity ($k=2$), and pressure ($k=3$). In Eq. (25), $G_{i,j,k}$ refers to the ground truth for the $i^{\textrm{th}}$ test sample at the $j^{\textrm{th}}$ coordinate for the density, velocity, or pressure solution. The total relative $L_{2}$ norm of the error is the mean of the $L_{2}$ norms of the error in density, velocity, and pressure. Table 1: Mean and standard deviation of the relative $L_{2}$ norms obtained using 10 runs. The $L_{2}$ norm of the error is calculated over the entire testing dataset for density, velocity, and pressure profiles. The time reported is the training time; the inference time is negligible.
Cases | $L_{2}(\rho)$ % | $L_{2}(u)$ % | $L_{2}(p)$ % | total $L_{2}$ norm % | Time (Min)
---|---|---|---|---|---
LPR(1 step Tanh) | $0.96\pm 0.064$ | $3.92\pm 0.251$ | $0.74\pm 0.044$ | $1.88\pm 0.120$ | $12.95$
LPR(1 step Rowdy) | $\mathbf{0.70\pm 0.103}$ | $\mathbf{2.57\pm 0.580}$ | $\mathbf{0.53\pm 0.081}$ | $\mathbf{1.27\pm 0.254}$ | $38.07$
LPR(2 step Rowdy) | $\mathbf{0.41\pm 0.017}$ | $1.28\pm 0.119$ | $\mathbf{0.33\pm 0.047}$ | $0.67\pm 0.061$ | $33.45$
LPR(U-Net) | $0.49\pm 0.070$ | $\mathbf{0.86\pm 0.129}$ | $0.36\pm 0.094$ | $\mathbf{0.57\pm 0.098}$ | 1388.58
IPR(2 step Rowdy) | $\mathbf{0.33\pm 0.027}$ | $\mathbf{0.86\pm 0.071}$ | $\mathbf{0.20\pm 0.030}$ | $\mathbf{0.46\pm 0.043}$ | 33.06
IPR(U-Net) | $0.48\pm 0.069$ | $0.98\pm 0.117$ | $0.50\pm 0.156$ | $0.66\pm 0.114$ | 1257.57
HPR(2 step Rowdy(QR)) | $0.71\pm 0.113$ | $3.50\pm 0.248$ | $3.70\pm 3.79$ | $2.64\pm 1.38$ | $34.60$
HPR(2 step Rowdy(SVD)) | $\mathbf{0.66\pm 0.093}$ | $3.39\pm 0.104$ | $2.86\pm 1.680$ | $2.31\pm 0.626$ | $26.98$
HPR(U-Net) | $1.00\pm 0.208$ | $\mathbf{2.65\pm 0.115}$ | $\mathbf{2.27\pm 0.457}$ | $\mathbf{1.97\pm 0.260}$ | 1235.67

In Table 1, we first compare the accuracy of the vanilla DeepONet framework with adaptive and fixed activation functions. We employed the Rowdy adaptive activation function described in Eq. (10) with a $\tanh$ base activation function. For the first comparison, the LPR test problem is used. For the vanilla DeepONet, we use weight ($L_{2}$) regularization and apply the density and pressure positivity constraints during training. For all the variables, the vanilla DeepONet with the Rowdy activation function performs better than the DeepONet with the $\tanh$ activation function. The training time for the Rowdy activation function is longer than for $\tanh$, since the optimizer must minimize the loss function over five additional coefficients of the adaptive activation function besides the weights and biases. We have also compared the accuracy of the one-step Rowdy and two-step Rowdy DeepONet approaches for the LPR case. We can conclude that the two-step training significantly improves the inference accuracy while requiring less training time than the vanilla DeepONet. We can observe that for the LPR case, the U-Net accuracy in inferring velocity and pressure is better than that of the two-step Rowdy approach, whereas the two-step Rowdy approach provides greater accuracy for density than the U-Net. The training time for the two-step Rowdy approach is significantly less than for the U-Net, while the accuracy is comparable. For the IPR case, the two-step Rowdy method obtains better accuracy for all the variables. The two-step Rowdy method also yields a small standard deviation for the IPR case, indicating that the two-step DeepONet is largely insensitive to the initialization. For the HPR test case, we have compared the accuracy of the two-step training approach with SVD and QR-factorization of the trunk net. The SVD factorization demonstrates higher accuracy compared to the QR factorization. Here, we also compare the accuracy of the two-step approach with the U-Net. The U-Net provides higher accuracy for velocity and pressure and a higher error for density than the two-step training with the SVD approach. For all the test cases, the computational cost of the U-Net is much higher than that of the two-step training DeepONet approach.
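For concreteness, the relative $L_{2}$ metric of Eq. (25) can be computed with a few lines of NumPy; the array names and shapes below are assumptions consistent with the definition of $\mathcal{D}$ above.

```python
import numpy as np

def relative_l2(pred, truth):
    """Per-variable relative L2 norm of Eq. (25), averaged over test samples.
    pred, truth: arrays of shape (N_s, N_p, 3) = (samples, points, [rho, u, p])."""
    num = np.sqrt(np.sum((pred - truth) ** 2, axis=1))   # (N_s, 3)
    den = np.sqrt(np.sum(truth ** 2, axis=1))            # (N_s, 3)
    per_variable = np.mean(num / den, axis=0)            # (3,)
    return per_variable, per_variable.mean()             # total = mean over rho, u, p
```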
Figure 3: Low pressure ratio test case: comparison of the $\tanh$ versus the Rowdy $\tanh$ adaptive activation functions; panel (a) shows $\rho_{\textrm{test}}$ with $\tanh$ and panel (b) shows $\rho_{\textrm{test}}$ with Rowdy. The density of four samples is inferred from the testing data set. The predictive accuracy of the Rowdy $\tanh$ adaptive activation function is better than that of its fixed counterpart. ### 4.1 Low pressure ratio Sod problem In this case, we solve Eq. (1) on a spatial domain defined as $x\in[0,1]$. The initial conditions for the primitive variables are imposed as $\left(\rho,u,p\right)=\begin{cases}\left(1.0,0.0,p_{l}\right)&x\leq 0.5\\ \left(0.125,0.0,0.1\right)&x>0.5,\end{cases}$ (26) where $p_{l}\in[1.0,5.0]$. We employed the analytical method described in [44] to obtain the results for the primitive variables at $t_{f}=0.1$. From the 500 equispaced $p_{l}$ cases, we randomly choose 400 trajectories for the training and 100 trajectories for the testing data set. Figure 4: Low pressure ratio Sod problem: comparison of DeepONet and U-Net results. The first row shows the DeepONet results for density, velocity, and pressure, whereas the second row shows the corresponding results for U-Net. The density of four samples is inferred from the testing data set. We explore the utilization of adaptive activation functions in the DeepONet framework in contrast to conventional $\tanh$ activation functions. Figure 3 illustrates the testing density results for a low pressure ratio. A fixed activation function is employed in the left plot, while the adaptive Rowdy activation function is used in the right plot. It is evident that, compared to the Rowdy activation function, the fixed activation function introduces more oscillations, particularly in the vicinity of discontinuous solutions such as contact and shock waves. Despite the increased computational cost associated with the Rowdy activation, fixed activation functions alone are unlikely to reach the same predictive accuracy. Henceforth, we have employed the Rowdy activation function for all subsequent test cases. The results of the DeepONet and U-Net models are illustrated in Figure 4. Specifically, the first row displays the outcomes of DeepONet for density, velocity, and pressure, while the second row presents the corresponding results for U-Net. The density for four distinct samples is inferred from the testing dataset. Notably, both DeepONet and U-Net exhibit good performance, particularly in addressing the challenges posed by the low-pressure ratio problem. In particular, while capturing the shock, contact, and expansion waves, both architectures exhibit negligible overshoots and undershoots. Focusing on the density profiles, we can observe that the U-Net produces some low-amplitude oscillations in the blue curve compared to the DeepONet counterpart. However, DeepONet induces an overshoot in the velocity profile of $p_{l}=1.15$ at the shock wave location. The visual analysis from the figure matches the $L_{2}$ norm results shown in Table 1.
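A hedged sketch of the LPR data preparation described above follows: 500 equispaced left-state pressures in $[1,5]$, split 400/100 into training and testing sets. The exact Riemann solver is assumed to be available (e.g., following Toro [44]); `exact_riemann_solution` is a hypothetical placeholder, not a real API.

```python
import numpy as np

rng = np.random.default_rng(0)
p_left = np.linspace(1.0, 5.0, 500)     # 500 equispaced p_l values
idx = rng.permutation(500)
train_pl, test_pl = p_left[idx[:400]], p_left[idx[400:]]

x = np.linspace(0.0, 1.0, 200)          # 200 equidistant sensor points
t_final = 0.1
# data = [exact_riemann_solution(pl, x, t_final) for pl in train_pl]  # hypothetical solver
```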
Figure 5: Intermediate pressure ratio Sod problem: comparison of DeepONet and U-Net results. The first row shows the DeepONet results for density, velocity, and pressure, whereas the second row shows the corresponding results for U-Net. The density of four samples is inferred from the testing data set. ### 4.2 Intermediate pressure ratio Sod problem Next, we select a more challenging Sod problem for which we increase the pressure and density ratios. We solve Eq. (1) on a spatial domain defined as $x\in[-1,2]$. For this case, the initial conditions are selected as $\left(\rho,u,p\right)=\begin{cases}\left(2.0,0.0,p_{l}\right)&x\leq 0.5\\ \left(0.125,0.0,0.1\right)&x>0.5,\end{cases}$ (27) where $p_{l}\in[50.0,100.0]$. We again use the exact method to obtain the results at $t_{f}=0.1$. We choose 500 equispaced $p_{l}$ values and randomly assign 400 cases for training and 100 for testing. Figure 5 depicts the outcomes obtained from DeepONet and U-Net for the IPR problem, showcasing the testing results for density, velocity, and pressure. The first row presents the results of DeepONet for density, velocity, and pressure, while the second row displays the corresponding outcomes for U-Net. The density values for four distinct samples are inferred from the testing data set. Notably, in this instance, the accuracy of DeepONet is slightly better than that of U-Net, particularly evident in the absence of oscillations near the shock wave location at the right-most discontinuities in the curves. Figure 6: High pressure ratio Sod problem: comparison of DeepONet and U-Net results. The first row shows the DeepONet results for density, velocity, and pressure, whereas the second row shows the corresponding results for U-Net. The density of four samples is inferred from the testing data set. ### 4.3 High pressure ratio Sod problem (LeBlanc problem) For the last test case, we increase the pressure ratio to the extreme values employed in the so-called LeBlanc-Sod problem. We solve Eq. (1) on a spatial domain defined as $x\in[-20,20]$. For this case, the initial conditions are selected as $\left(\rho,u,p\right)=\begin{cases}\left(2.0,0.0,p_{l}\right)&x\leq-10\\ \left(0.001,0.0,1.0\right)&x>-10,\end{cases}$ (28) where $p_{l}\in[10^{9},10^{10}]$. We use the exact method to obtain the results at $t_{f}=0.0001$. We choose 500 equispaced $p_{l}$ values and randomly assign 400 cases for training and 100 for testing. For training the neural nets on this problem, we construct the loss function using the logarithm of the density and pressure values. At the inference stage, we apply the exponential function to the predicted density and pressure to convert them back to physical values.
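A minimal sketch of this log-transform (the array names are ours; the training loop itself is omitted):

```python
import numpy as np

def to_log_targets(rho, p):
    """Training targets for the HPR case: log-density and log-pressure."""
    return np.log(rho), np.log(p)

def to_physical(log_rho_pred, log_p_pred):
    """Invert the transform at inference to recover physical values."""
    return np.exp(log_rho_pred), np.exp(log_p_pred)
```

The LeBlanc problem poses a significant challenge due to its very high pressure ratio. Figure 6 presents the inferred results of DeepONet and U-Net for density, velocity, and pressure on the testing data.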
The U-Net adeptly captures discontinuous features such as shocks and contacts with higher accuracy (without oscillations) than the DeepONet. ## 5 Operator Representation This section is dedicated to the analysis of the data-driven basis functions, forming the foundation for obtaining representations of the two neural operators. It is worth highlighting that the basis for DeepONet is continuous, while for U-Net it takes a discrete form. Figure 7: QR vs. SVD spectrum and basis comparison. The top-left panel shows the spectrum comparison between QR and SVD; the remaining panels compare the QR and SVD basis functions for modes 0, 10, 30, 80, and 120. ### 5.1 Basis functions - DeepONet In Fig. 7 we compare the spectrum of singular values obtained using the QR and SVD decompositions of the trunk net output, using the optimized trunk parameters from the training of the HPR problem. The QR eigenvalues are sorted after shifting them to a positive set by adding the absolute value of the minimum (negative) eigenvalue to all mode eigenvalues. Some of the basis functions for five different modes are depicted for comparison. We can observe that the SVD basis functions exhibit a hierarchical structure from the lowest to the highest modes, where we observe high-frequency oscillations in the basis function profile. In contrast, the basis functions computed using the QR factorization are not hierarchical, since mode zero exhibits high-frequency features like the higher modes. Moreover, the QR spectrum is flat across all the modes, indicating that all modes contribute equally. Both sets of 150 eigenfunctions obtained from the QR and SVD decompositions are orthonormal; however, the spectrum of the SVD approaches zero at the highest mode, indicating fewer oscillations than for the QR spectrum. Therefore, the SVD factorization is more robust than the QR factorization and can be used for feature extraction, since the low-mode eigenfunctions mimic the physical shape of the solution profiles. Figure 8: Effect of the jump in pressure ratio: we compare the third basis for (a) low, (b) intermediate, and (c) high pressure ratios. In Fig. 8, we compare the third-mode basis functions for three problems from low to high pressure ratios. The pressure ratio range varies such that $p_{l}\in[10,50]$, $p_{l}\in[500,1000]$, and $p_{l}\in[10^{9},10^{10}]$ for the LPR, IPR, and HPR problems, respectively. We can observe that the shape of the low modes remains approximately the same from the LPR to the IPR case, indicating that the data-driven basis functions obtained using the low-pressure training data set can be used to train a branch net for the intermediate-pressure data set, at least for the low modes. Figure 9: The top-left panel shows the SVD spectrum for the intermediate pressure ratio, and the remaining panels show the SVD basis functions for modes 0, 10, 30, 80, and 120. In Fig. 9 we present the eigenvalue spectrum of the basis functions for the IPR problem across 150 modes, corresponding to the number of neurons in the last layer of the trunk net. According to Fig. 9(a), the eigenvalues decay as the mode number increases. The descending trend of the spectrum is reflected in the shapes of the basis functions for modes $0$, $10$, $30$, $80$, and $120$.
From mode $0$, it is evident that the expansion waves are the first features extracted from the data set. The highest-mode basis functions capture the wide range of frequencies exhibited at discontinuities such as contact and shock waves. Considering Fig. 9(c)-(e), we see that these modes contribute hierarchically to resolving the shock wave feature. The ordering of the SVD orthonormal basis functions can be exploited to construct the inferred solution with a chosen number of modes in order to avoid oscillations at the discontinuities of the solution. In the following section, we explain an approach that adapts the number of basis functions to infer the solution accurately. Figure 10: Comparison of the SVD spectrum and the basis functions (modes 0, 1, 10, 20, and 49) for different numbers of neurons. It is evident that as the number of neurons increases, more high frequencies are captured, which is useful for accurately resolving the shock structure present in the solution. Fig. 10(a)-(f) depicts the spectrum of eigenvalues for trunk nets constructed from five hidden layers with a constant width of 50, 100, or 150 neurons, corresponding to 50, 100, and 150 eigenvalues and basis functions. By comparing the spectra of eigenvalues, we observe that using a larger number of neurons recovers more information content in the higher modes. Employing a larger number of neurons in the trunk net layers decreases the $L_{2}$ norm of the error, but it could induce oscillations from the highest modes. According to Fig. 10(b) and (c), the first feature extracted from the data is the expansion wave, while the second one is the shape of the velocity profile. Considering Fig. 10(e) and (f), we can deduce that increasing the number of neurons increases the capacity to capture the shock wave more accurately: for the 50-neuron case, high-frequency jumps already appear at mode $49$ in the basis function close to the location of the shock wave near the right boundary, while there is no high-frequency oscillation at that mode for the 100- and 150-neuron cases. The shock wave feature is captured by modes higher than $49$ in the 100- and 150-neuron cases. Figure 11: Layer-wise contribution to the spectrum and the basis (modes 0, 1, and 2): the figure shows the contribution of each hidden layer to the final basis. We have plotted the contribution of the first five layers. We see that with each layer, more high frequencies are captured, and the output of the final layer (layer 5, excluding the linear layer) is almost identical to the basis function. In Fig. 11, we attempt to reveal the contribution of each hidden layer to the shape of the final basis functions of the HPR problem. For this purpose, we performed an SVD decomposition of the output of the hidden layers, from the first layer to the last linear layer. Using the eigenvalues of each layer output, we can observe the flow of information learned by the trunk net. Considering Fig. 11(a), the red line shows the spectrum of eigenvalues of the output of the last linear layer of the trunk net. The label “$m$ layers” refers to the output of the $m^{th}$ hidden layer. The spectrum shows that the first layers always contribute to learning the lowest-mode basis functions.
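A brief NumPy sketch of this layer-wise analysis follows; `layer_outputs` is an assumed list of per-layer activation matrices (shape: n_points × width) evaluated on the input grid.

```python
import numpy as np

def layerwise_spectra(layer_outputs):
    """Singular-value spectrum of each hidden layer's output."""
    return [np.linalg.svd(h, compute_uv=False) for h in layer_outputs]

# Example with dummy activations for a five-layer trunk of width 150:
spectra = layerwise_spectra([np.random.randn(200, 150) for _ in range(5)])
```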
According to Fig. 11(a), the flow of information does not reach the highest modes until the $4^{th}$ hidden layer, where we can see that there is no plateau region in the spectra of the outputs of the first four layers, the first five layers, and the last linear layer. In Fig. 11(b)-(d), the shapes of the first three modes of the layer outputs are shown. From Fig. 11(b), we see that the hidden layers keep adding more information as we move from the trunk input to the trunk net output. Figure 12: Layer-wise contribution to the spectrum and the basis (mode 0) for the rectangle and cone architectures: the cone architecture refers to a trunk shape such as [30, 70, 108, 150, 150], and the rectangle network refers to a [150, 150, 150, 150, 150] hidden-layer arrangement. According to Fig. 11(a), we can use a smaller number of neurons for the first hidden layer of the neural net, as the eigenvalues of most of the neurons are approximately zero. Thus, we can generate a cone-shaped configuration of the hidden layers according to the non-zero eigenvalues and use the downsized trunk net for the training procedure. To this end, we chose $[30,70,108,150,150]$ neurons for the hidden layers of the trunk net. We computed the number of neurons for each layer based on the points at which the spectrum reaches its plateau. A comparison of the original rectangle-shaped versus cone-shaped architectures is depicted in Fig. 12. Comparing the rectangle with the cone shape (Fig. 12(a) and (b)), we can see that when we use the cone shape, the flow of information to the last layers is delayed for the higher modes. This can be seen by comparing the shapes of the basis functions of the first five hidden layers. We can see that the shapes of the basis functions change for the higher modes in the rectangle architecture. Figure 13: Contribution of the coefficients of the basis functions to the final inferred solution of the IPR problem. We plot the values of the branch coefficients for the intermediate pressure ratio case. The top row shows the basis coefficients for training, and the bottom row shows the basis coefficients for testing. The first, second, and third columns give the basis coefficients for density, velocity, and pressure. The insets show representative basis spatial modes. Figure 13 depicts the values of the coefficients of the basis functions that are predicted by the branch net. The bar plots show the values on a logarithmic vertical scale. The values are sorted from the lowest to the highest modes for density, velocity, and pressure. The first row depicts the branch coefficients for the training data set, and the second row shows the branch coefficients for the test data set. The inset plots show the shape of the basis function at a particular mode. Here, we investigate the contribution of various features to the inferred solution of the IPR problem. According to Fig. 13(a), the highest contribution to the density profile comes from mode $0$, while the highest contribution for the velocity is at mode $1$ (see Fig. 13(b) and (e)).
Another interesting observation is the large contribution of the highest modes to the inferred profiles of density, velocity, and pressure. This large contribution results from predicting a discontinuous solution consisting of shock and contact waves. Figure 14: (a) Eigenspectra of the U-Net at $\mathcal{L}_{1}$, $\mathcal{L}_{2}$, $\mathcal{L}_{3}$, $\mathcal{L}_{4}$. (b)-(f) Orthogonally decomposed representation of the basis learned by the U-Net at different latent levels, for modes 0, 10, 20, 30, and 40. ### 5.2 Basis functions - U-Net In this section, we visualize and analyze the basis functions learned by the U-Net conditioned on $p_{l}$. Specifically, we consider the IPR test case and train the U-Net to learn the dependence of the density field ($\rho$) on $p_{l}$. From Eq. (21), $\vec{z}^{\mathcal{L}_{p}}$ represents the discrete basis functions learned by the U-Net at the $p^{th}$ latent level spanning $\mathcal{L}_{p}$. We perform the SVD on $\vec{z}^{\mathcal{L}_{p}}$ separately for $p=1,2,3,4$ and visualize the eigenvalues and eigenmodes: $\vec{z}^{\mathcal{L}_{p}}=U_{p}\Sigma_{p}V_{p}^{T},$ (29) where $\Sigma_{p}$ represents the ordered list of eigenvalues and $V_{p}$ represents the corresponding eigenmodes at the latent space $\mathcal{L}_{p}$. We visualize the decay of the eigenvalues $\Sigma_{p}$ for $p=1,2,3,4$ in Fig. 14(a). We also plot the scaled basis functions for modes 0, 10, 20, 30, and 40 for all $p$ in Fig. 14(b)-(f). In Fig. 14(a) we observe that the rate of decay of the eigenvalues is greater for $\mathcal{L}_{4}$ than for $\mathcal{L}_{3}$, $\mathcal{L}_{2}$, and $\mathcal{L}_{1}$, in that order. Therefore, the basis functions $\vec{z}^{\mathcal{L}_{p}}$ with a larger $p$ only learn the high-energy modes with lower frequencies, and subsequently, the basis functions with a lower $p$ become increasingly responsible for learning the lower-energy modes that carry higher frequencies. Furthermore, from Fig. 14(e) and (f), the basis functions corresponding to $\mathcal{L}_{4}$ are constant, which again indicates its inability to capture high frequencies. The $0^{th}$ modes of $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ shown in Fig. 14(b) have a striking similarity with the pattern of the density fields learned by the U-Net, as well as with the $0^{th}$ mode learned by DeepONet shown in Fig. 9(b). ### 5.3 Constructing optimal sets of basis functions for higher accuracy We investigate the effect of the trunk net width on the accuracy of the two-step approach for learning the solution of the HPR problem. According to Table 2, we can conclude that increasing the number of neurons in the trunk network can lead to higher accuracy. Figure 10 shows that this accuracy improvement is achieved by incorporating higher-mode basis functions to construct the discontinuous solution of the Riemann problems. However, by increasing the number of neurons, we are adding highly oscillatory basis functions to the set, which can give rise to unwanted oscillations in the inferred solution. Therefore, at the inference stage, we can take advantage of the hierarchical structure of the SVD basis functions and remove the higher modes one by one from the basis set. We can monitor the corresponding accuracy by computing the $L_{2}$ norm of the error of the solution predicted by the truncated set of basis functions. We then use the set of basis functions that provides the lowest $L_{2}$ norm of the error.
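A hedged NumPy sketch of this truncation search follows: drop the highest SVD modes one at a time and keep the count that minimizes the relative $L_{2}$ error. `Q`, `coeffs`, and `truth` are illustrative names for the orthonormal basis, the branch-net coefficients, and the reference solutions, respectively.

```python
import numpy as np

def best_truncation(Q, coeffs, truth, k_min=60):
    """Q: (n_points, K); coeffs: (n_samples, K); truth: (n_samples, n_points)."""
    K = Q.shape[1]
    errors = []
    for k in range(k_min, K + 1):
        pred = coeffs[:, :k] @ Q[:, :k].T          # reconstruction with k modes
        rel = np.linalg.norm(pred - truth, axis=1) / np.linalg.norm(truth, axis=1)
        errors.append(rel.mean())
    k_opt = k_min + int(np.argmin(errors))
    return k_opt, errors
```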
In brief, we can design a procedure to use the two-step DeepONet for the best inferred solution of the Riemann problems as follows:

Table 2: Mean and standard deviation of the relative $L_{2}$ norms obtained using 10 runs. The $L_{2}$ norm of the error is calculated over the entire testing dataset for density, velocity, and pressure profiles for the HPR test cases using 50, 100, and 150 neurons for the width of the trunk net. The time reported is the training time; the inference time is negligible.

Cases | $L_{2}(\rho)$ % | $L_{2}(u)$ % | $L_{2}(p)$ % | total $L_{2}$ norm % | Time (Min)
---|---|---|---|---|---
HPR(2 step Rowdy(50)) | $2.96\pm 0.050$ | $6.51\pm 0.046$ | $8.47\pm 0.147$ | $5.98\pm 0.080$ | $33.23$
HPR(2 step Rowdy(100)) | $0.92\pm 0.043$ | $3.83\pm 0.147$ | $3.28\pm 0.839$ | $2.67\pm 0.343$ | $34.51$
HPR(2 step Rowdy(150)) | $\mathbf{0.66\pm 0.093}$ | $\mathbf{3.39\pm 0.104}$ | $\mathbf{2.86\pm 1.680}$ | $\mathbf{2.31\pm 0.626}$ | $26.98$

1. Increase the number of neurons in the last layer of the trunk net to obtain the lowest value of the loss during training.
2. If oscillations appear near the discontinuous solution, remove the highest-mode basis functions from the set one by one and compute the error for each truncated set.
3. Use the optimal number of basis functions with the trained model for future inference.

Figure 15: The $L_{2}$ norm of the error in density, velocity, and pressure for the HPR problem using the optimal number of basis functions to avoid numerical artifacts due to the presence of high-mode basis functions in the orthonormal set.

We have followed this procedure on the HPR test case with ten ensembles to verify our claim. First, we took the best-trained DeepONet from the set of 10 ensembles, removed the highest-mode basis functions from the set, and used the remaining basis functions to infer the solution. We then computed the $L_{2}$ norm of the variables using the first 60 to 150 basis functions from the orthonormal basis set. The resulting errors are depicted in Fig. 15. We can observe that the optimal choice here is the first 114 basis functions. We applied this approach to the ten ensembles of the two-step-trained DeepONets and recalculated $\%L_{2}(\rho)=0.573\pm 0.091$, $\%L_{2}(u)=3.39\pm 0.105$, and $\%L_{2}(p)=2.29\pm 1.78$. We improved the accuracy of $\rho$ by $13.6$ percent and of $p$ by $20.0$ percent. The accuracy of the velocity remained unchanged, as we may need to use even more than 150 neurons to see a trend similar to that of the pressure and density inference results. ## 6 Summary Enhancing the prediction of solutions for high-speed flows governed by the compressible Euler equations holds significant implications for the design of aerospace vehicles, including airplanes, re-entry vehicles, missiles, and more. In this study, we leveraged the properties of deep neural operators – RiemannONets – to tackle Riemann problems, which are crucial for simulating high-speed flows. These problems entail discontinuous solutions, such as shocks and contact discontinuities, representing some of the most challenging aspects in the realm of scientific computing. We devised the RiemannONets by incorporating two distinct neural operators. The first operator is built upon DeepONet, which underwent modifications for a two-stage training approach, enhancing prediction accuracy. The second operator is a U-Net, tailored to be conditioned on the pressure initialization.
Our training and testing of RiemannONets focused on the Sod shock tube problem, encompassing pressure ratios ranging from 10 to $10^{10}$. Specifically, RiemannONets were trained on input-output data sets, establishing a mapping from the initial solution to the final time step. Once trained, RiemannONets can predict solutions for unseen datasets in real time, requiring no additional optimization. The predictions from both neural operators exhibited remarkable accuracy across low, intermediate, and very high pressure ratios, with an error margin below $2\%$. However, it is noteworthy that the U-Net's computational speed lags behind that of DeepONet by orders of magnitude. These results correspond to a single mapping from the initial condition to the final time. We have also shown that the use of an adaptive activation function in the structure of the DeepONet increases the accuracy of the prediction compared to fixed activation functions. Notably, we achieved similarly accurate results for the time-evolving Sod problem using the DeepONet trained in two stages. We systematically explored the basis functions generated by the trunk net of DeepONet for representing the operator in a continuous manner. Using an orthonormalization process, we constructed a set of orthonormal basis functions. For the orthonormalization, we performed QR-factorization and singular value decomposition (SVD) on the output of the trunk net. Our in-depth investigation of the data-driven basis functions has led us to the following conclusions: * • The SVD decomposition results in a hierarchical orthonormal basis with distinctive features, which can be used to remove Gibbs oscillations at the discontinuities during inference. * • A comparison of the eigenvalue spectra of QR and SVD revealed that the QR eigenvalues contribute equally to the final solution, while the SVD eigenvalues show a descending trend in contribution from low to high modes. * • The unique basis of the SVD decomposition proves particularly advantageous for high-speed flows. * • The SVD basis functions exhibit a similar shape for the low, intermediate, and high pressure ratio Sod problems. Consequently, we can utilize a trunk network trained on a fixed range of pressure ratios to train the branch network for higher pressure-ratio ranges. * • Using a larger width of the trunk network in the two-step training improves the accuracy of the inference by capturing more information in the high modes. * • The first hidden layers of the trunk network are responsible for learning the low modes, while the later hidden layers contribute to learning the high-frequency features. * • The specific contribution of each basis function to constructing the density, velocity, and pressure fields is explored, revealing the distinct hierarchy of feature learning for the density, velocity, and pressure profiles. * • Employing the hierarchy of the SVD basis functions, we constructed a procedure that can effectively remove high-frequency artifacts near the discontinuous regions of the Riemann problem solutions. At present, no existing numerical method can attain such SVD data-driven orthonormal basis functions; instead, this requires the incorporation of ad hoc features in the basis functions, such as the utilization of enrichment methods in finite elements or the integration of a feature layer in a neural network, to address discontinuous, singular, or multiscale solutions. Additionally, we visualized the basis functions learned by the different latent levels of the U-Net.
This revealed that the first mode of the U-Net is similar to the first mode of DeepONet. Moreover, the deeper latent levels focus on learning the low modes, whereas the levels closer to full resolution capture the highest modes. Our ongoing research is focused on examining basis functions tailored for addressing two- and three-dimensional high-speed flow problems characterized by discontinuous solutions. ## Acknowledgments This work was supported by the U.S. Army Research Laboratory W911NF-22-2-0047 and by the MURI-AFOSR FA9550-20-1-0358. ## References * [1] S. Lee, Y. Shin, On the training and generalization of deep operator networks, arXiv preprint arXiv:2309.01020 (2023). * [2] L. Lu, P. Jin, G. Pang, Z. Zhang, G. E. Karniadakis, Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators, Nature Machine Intelligence 3 (3) (2021) 218–229. * [3] L. Lu, P. Jin, G. E. Karniadakis, DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators, arXiv preprint arXiv:1910.03193 (2019). * [4] S. Goswami, A. D. Jagtap, H. Babaee, B. T. Susi, G. E. Karniadakis, Learning stiff chemical kinetics using extended deep neural operators, Computer Methods in Applied Mechanics and Engineering 419 (2024) 116674. * [5] C. Lin, Z. Li, L. Lu, S. Cai, M. Maxey, G. E. Karniadakis, Operator learning for predicting multiscale bubble growth dynamics, The Journal of Chemical Physics 154 (10) (2021). * [6] S. Goswami, M. Yin, Y. Yu, G. E. Karniadakis, A physics-informed variational DeepONet for predicting crack path in quasi-brittle materials, Computer Methods in Applied Mechanics and Engineering 391 (2022) 114587. * [7] V. Oommen, K. Shukla, S. Goswami, R. Dingreville, G. E. Karniadakis, Learning two-phase microstructure evolution using neural operators and autoencoder architectures, npj Computational Materials 8 (1) (2022) 190. * [8] J. D. Osorio, Z. Wang, G. Karniadakis, S. Cai, C. Chryssostomidis, M. Panwar, R. Hovsapian, Forecasting solar-thermal systems performance under transient operation using a data-driven machine learning approach based on the deep operator network architecture, Energy Conversion and Management 252 (2022) 115063. * [9] S. Cai, Z. Wang, L. Lu, T. A. Zaki, G. E. Karniadakis, DeepM&Mnet: Inferring the electroconvection multiphysics fields based on operator approximation by neural networks, Journal of Computational Physics 436 (2021) 110296. * [10] S. Wang, H. Wang, P. Perdikaris, Learning the solution operator of parametric partial differential equations with physics-informed DeepONets, Science Advances 7 (40) (2021) eabi8605. * [11] L. Lu, X. Meng, S. Cai, Z. Mao, S. Goswami, Z. Zhang, G. E. Karniadakis, A comprehensive and fair comparison of two neural operators (with practical extensions) based on FAIR data, Computer Methods in Applied Mechanics and Engineering 393 (2022) 114778. * [12] S. Venturi, T. Casey, SVD perspectives for augmenting DeepONet flexibility and interpretability, Computer Methods in Applied Mechanics and Engineering 403 (2023) 115718. * [13] A. A. Howard, M. Perego, G. E. Karniadakis, P. Stinis, Multifidelity deep operator networks, arXiv preprint arXiv:2204.09157 (2022). * [14] L. Lu, R. Pestourie, S. G. Johnson, G. Romano, Multifidelity deep neural operators for efficient learning of partial differential equations with application to fast inverse design of nanoscale heat transport, Physical Review Research 4 (2) (2022) 023210. * [15] Y. Yang, G. Kissas, P.
Perdikaris, Scalable uncertainty quantification for deep operator networks using randomized priors, Computer Methods in Applied Mechanics and Engineering 399 (2022) 115399. * [16] C. Moya, S. Zhang, G. Lin, M. Yue, Deeponet-grid-uq: A trustworthy deep operator framework for predicting the power grid’s post-fault trajectories, Neurocomputing 535 (2023) 166–182. * [17] G. Lin, C. Moya, Z. Zhang, B-DeepONet: An enhanced Bayesian DeepONet for solving noisy parametric PDEs using accelerated replica exchange SGLD, Journal of Computational Physics 473 (2023) 111713. * [18] L. Liu, W. Cai, Multiscale DeepONet for nonlinear operators in oscillatory function spaces for building seismic wave responses, arXiv preprint arXiv:2111.04860 (2021). * [19] Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, A. Anandkumar, Fourier neural operator for parametric partial differential equations, arXiv preprint arXiv:2010.08895 (2020). * [20] T. Tripura, S. Chakraborty, Wavelet neural operator: a neural operator for parametric partial differential equations, arXiv preprint arXiv:2205.02191 (2022). * [21] V. Fanaskov, I. Oseledets, Spectral neural operators, arXiv preprint arXiv:2205.10573 (2022). * [22] B. Raonić, R. Molinaro, T. Rohner, S. Mishra, E. de Bezenac, Convolutional neural operators, arXiv preprint arXiv:2302.01178 (2023). * [23] F. Bartolucci, E. de Bézenac, B. Raonić, R. Molinaro, S. Mishra, R. Alaifari, Are neural operators really neural operators? frame theory meets operator learning, arXiv preprint arXiv:2305.19913 (2023). * [24] J. K. Gupta, J. Brandstetter, Towards multi-spatiotemporal-scale generalized PDE modeling, arXiv preprint arXiv:2209.15616 (2022). * [25] V. Oommen, K. Shukla, S. Desai, R. Dingreville, G. E. Karniadakis, Rethinking materials simulations: Blending direct numerical simulations with neural operators (2023). arXiv:2312.05410. * [26] O. Ovadia, V. Oommen, A. Kahana, A. Peyvan, E. Turkel, G. E. Karniadakis, Real-time inference and extrapolation via a diffusion-inspired temporal transformer operator (DiTTO) (Dec 2023). * [27] M. A. Rahman, Z. E. Ross, K. Azizzadenesheli, U-NO: U-shaped neural operators, arXiv preprint arXiv:2204.11127 (2022). * [28] O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, Springer, 2015, pp. 234–241. * [29] Z. Mao, L. Lu, O. Marxen, T. A. Zaki, G. E. Karniadakis, DeepM&Mnet for hypersonics: Predicting the coupled flow and finite-rate chemistry behind a normal shock using neural-network approximation of operators, Journal of computational physics 447 (2021) 110698. * [30] A. D. Jagtap, Y. Shin, K. Kawaguchi, G. E. Karniadakis, Deep kronecker neural networks: A general framework for neural networks with adaptive activation functions, Neurocomputing 468 (2022) 165–180. * [31] M. Raissi, P. Perdikaris, G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational physics 378 (2019) 686–707. * [32] A. D. Jagtap, G. E. Karniadakis, How important are activation functions in regression and classification? a survey, performance comparison, and future directions, Journal of Machine Learning for Modeling and Computing 4 (1) (2023). * [33] A. D. Jagtap, K. Kawaguchi, G. E. 
Karniadakis, Adaptive activation functions accelerate convergence in deep and physics-informed neural networks, Journal of Computational Physics 404 (2020) 109136. * [34] A. D. Jagtap, K. Kawaguchi, G. Em Karniadakis, Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks, Proceedings of the Royal Society A 476 (2239) (2020) 20200334. * [35] A. D. Jagtap, Z. Mao, N. Adams, G. E. Karniadakis, Physics-informed neural networks for inverse problems in supersonic flows, Journal of Computational Physics 466 (2022) 111402. * [36] Z. Mao, A. D. Jagtap, G. E. Karniadakis, Physics-informed neural networks for high-speed flows, Computer Methods in Applied Mechanics and Engineering 360 (2020) 112789. * [37] A. D. Jagtap, E. Kharazmi, G. E. Karniadakis, Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems, Computer Methods in Applied Mechanics and Engineering 365 (2020) 113028. * [38] A. Peyvan, K. Shukla, J. Chan, G. Karniadakis, High-order methods for hypersonic flows with strong shocks and real chemistry, Journal of Computational Physics 490 (2023) 112310. * [39] A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems 25 (2012). * [40] Y. Wu, K. He, Group normalization, in: Proceedings of the European conference on computer vision (ECCV), 2018, pp. 3–19. * [41] D. Hendrycks, K. Gimpel, Gaussian error linear units (gelus), arXiv preprint arXiv:1606.08415 (2016). * [42] K. Yamaguchi, K. Sakamoto, T. Akabane, Y. Fujimoto, A neural network for speaker-independent isolated word recognition., in: First International Conference on Spoken Language Processing, ICSLP, 1990, pp. 1077–1080. * [43] V. Dumoulin, F. Visin, A guide to convolution arithmetic for deep learning, arXiv preprint arXiv:1603.07285 (2016). * [44] E. F. Toro, Riemann Solvers and Numerical Methods for Fluid Dynamics: a Practical Introduction, Springer Science & Business Media, 2013.
IAC-22 A6.IPB, 73rd International Astronautical Congress, Paris, France, 18-22 September 2022. Copyright 2022 by the authors.

Celina Pasiecznik, SM Candidate, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, MA 02139, <EMAIL_ADDRESS>
Andrea D’Ambrosio, Postdoctoral Associate, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, MA 02139, <EMAIL_ADDRESS>
Daniel Jang, PhD Candidate, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, MA 02139, <EMAIL_ADDRESS>
Richard Linares, Rockwell International Career Development Professor, Associate Professor of Aeronautics and Astronautics, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, MA 02139, <EMAIL_ADDRESS>

# A Dynamical Systems Analysis of the Effects of the Launch Rate Distribution on the Stability of a Source-Sink Orbital Debris Model

###### Abstract

Future launches are projected to significantly increase both the number of active satellites and the aggregate collision risk in Low Earth Orbit (LEO). Ensuring the long-term sustainability of the space environment demands an accurate model to understand and predict the effect of the launch rate distribution as a major driver of the evolution of the LEO orbital population. In this paper, a dynamical systems theory approach is used to analyze the effect of the launch rate distribution on the stability of the LEO environment. A multi-shell, three-species source-sink model of the LEO environment, referred to as MOCAT-3 (MIT Orbital Capacity Assessment Tool - 3 Species), is used to study the evolution of the species populations. The three species included in the model are active satellites, derelict satellites, and debris. Each shell is modeled by a system of three equations, representing each species, that are coupled through coefficients related to atmospheric drag, collision rate, mean satellite lifetime, post-mission disposal probability, and active debris removal rate. The major sink in the model is atmospheric drag, whereas the only source apart from collision fragments is the launch rate, making it the critical manageable factor impacting the orbital capacity. Numerical solutions of the system of differential equations are computed, and an analysis of the stability of the equilibrium points is conducted for numerous launch rate distributions. The stability of the equilibrium points is used to test the sensitivity of the environment to run-away debris growth, known as Kessler syndrome, that occurs at the instability threshold. Various bounding cases are studied, from business-as-usual launch rates based on historic launch data to high launch rates wherein a fraction of the satellite proposals filed with the International Telecommunication Union (ITU) are launched. An analysis of the environment’s response to perturbations in launch rate and debris population is conducted. The maximum perturbation in the debris population from the equilibrium state, for which the system remains in a stable configuration, is calculated. Plots of the phase space about the equilibrium points are generated. The results will help to better understand the orbital capacity of LEO and the stability of the space environment, as well as provide improved guidelines on future launch plans to avoid detrimental congestion of LEO.
Keywords: Source-Sink, Launch Rate, System Dynamics, Kessler Syndrome, Debris Evolutionary Model

## 1 Introduction

### 1.1 Motivation

The unprecedented launch rate of satellites into LEO may have severe consequences for the stability of the orbital environment for decades to come. In the $200-900$ km altitude range alone, the number of satellites launched per year has increased over the last decade, as shown in Figure 1. Companies such as Amazon and SpaceX have announced plans to launch constellations of thousands of satellites into LEO over the next decade [1], with hundreds of satellites already launched. A rising launch rate will increase orbital congestion, which in turn raises the chance of debris-generating collisions. A growing debris population can have catastrophic consequences for space missions. It is important to study how increased launch activities affect the evolution of the orbital environment and the production of debris, as well as how increased debris populations affect the stability of LEO.

Figure 1: Number of objects launched to $200-900$ km altitudes over the past decade [2].

### 1.2 Literature Review

Various analytic models of the orbital environment have been proposed in the literature that make use of differential equations to represent the evolution of the number of objects in space and their interactions. Some of these models have also been used to test the LEO environment’s sensitivity to run-away debris growth. This run-away debris growth is known as Kessler syndrome, wherein the congestion of the orbital environment is large enough to cause a chain reaction of debris generation. To study such debris growth caused by collisions, Kessler and Cour-Palais developed a source-sink model to predict detrimental debris population growth [3]. Furthermore, Talent [4] used one ODE to represent the total number of objects in space and studied various evolutionary cases for different launch rates to look for catastrophic behaviour. Zhang et al. [5] developed a model using partial differential equations and solved the equations numerically to study the long-term evolution of the space debris environment. A dynamical systems analysis was conducted by Drmola and Hubik [6], in which three different classes of debris were used to study various debris accumulation scenarios and whether they lead to Kessler syndrome. The MOCAT model was inspired by the three-population model JASON given in reference [7]. The MOCAT-3 model has been used to calculate the intrinsic capacity of LEO in reference [8]. The model was extended to include a differentiation between slotted and unslotted satellites in reference [9]. Here, a brief overview of the MOCAT-3 model is given, and various numerical analyses are conducted using the model for the study of launch rates and debris generation.

### 1.3 Paper objectives

The objective of this paper is to study the evolution of the LEO environment for various launch rate distributions and to determine the stability of the environment in response to perturbations in the debris population. The intent of finding the stability boundary is to determine how large the debris population can grow before run-away debris growth, referred to as Kessler syndrome, occurs.
### 1.4 Paper structure

The remainder of the paper is arranged as follows: Section 2 gives an overview of the source-sink model and the methods used for conducting the stability analysis; Section 3 introduces various launch rate distribution cases; Section 4 presents the equilibrium solutions and the results of perturbations to these solutions; Section 5 shows the analysis of the instability threshold where Kessler syndrome occurs; and finally, Section 6 remarks on the meaning of the analysis and draws conclusions about the future of the orbital environment.

## 2 Methods

### 2.1 MOCAT-3 Model

The MOCAT-3 model was developed in reference [8], wherein a detailed description of each parameter is given. Here we provide a brief overview of the model and the key equations describing the evolution of each species. MOCAT-3 is a probabilistic source-sink model with three species: active satellites (S), derelict satellites (D), and debris (N). The orbital environment within the altitude range of $200-900$ km is divided into $20$ spherical orbital shells with a shell thickness of $35$ km, represented by the variable $d$. The evolution of each species is represented by a set of differential equations per shell $\{\dot{S}(h)$, $\dot{D}(h)$, $\dot{N}(h)\}$, where $h$ is a value from $1$ to $20$ indicating the shell number. The shell with $h=1$ is the lowest, covering the altitude range $200-235$ km, and the shell with $h=20$ is the highest, covering $865-900$ km. We assume each object has a near-circular orbit. The launch rate per year is represented by $\lambda(h)$ and only appears as a source in $\dot{S}$. New active satellites appear instantly in their orbital shell $h$ and do not cross through lower shells. Dropping the explicit dependence on shell number, the set of coupled ordinary differential equations representing the evolution of each species is given by equations (1), (2), (3).

$\dot{S}=\lambda-S/\Delta t-\phi_{SN}(\delta+\alpha)NS-\phi_{SD}(\delta+\alpha)DS-\alpha_{a}\phi_{SS}S^{2}$ (1)

$\dot{D}=\frac{(1-P)S}{\Delta t}+\phi_{SD}\delta DS+\phi_{SN}\delta NS-\phi_{DN}ND-\phi_{DD}D^{2}+\frac{D_{+}v_{D+}}{d}-\frac{Dv_{D}}{d}$ (2)

$\dot{N}=K0_{SN}\phi_{SN}\alpha NS+K0_{SD}\phi_{SD}\alpha DS+K0_{DN}\phi_{DN}ND+K0_{DD}\phi_{DD}D^{2}+\alpha_{a}K0_{SS}\phi_{SS}S^{2}+K0_{NN}\phi_{NN}N^{2}+\frac{N_{+}v_{N+}}{d}-\frac{Nv_{N}}{d}$ (3)

Here $D_{+}$ and $N_{+}$ refer to the populations of $D$ and $N$ in the shell directly above the current shell, namely $D_{+}=D(h+1)$ and $N_{+}=N(h+1)$. For the highest shell, we assume these parameters represent the current shell: $D_{+}=D(h)$ and $N_{+}=N(h)$, for reasons given in Section 2.2. In general, once an object is in orbit it can only flow into lower altitude shells and not into upper shells. The de-orbiting of objects from higher shells to lower shells is dictated by a static exponential model for the atmospheric density described in [8]. Only derelict and debris objects are assumed to de-orbit from atmospheric drag effects, as active satellites are assumed to have station-keeping capabilities that counteract these drag effects. These drag effects are accounted for in the terms containing the variable $v$, which represents the change in the semi-major axis. Active satellites $S$ can become derelict $D$ or debris $N$ through collisions, but no species can become an active satellite $S$; the only source of $S$ is the launch rate $\lambda$.
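To make the structure of equations (1)-(3) concrete, here is a minimal Python sketch of the right-hand side for a single shell. The function name `mocat3_rhs` and the parameter-dictionary keys are our own placeholders rather than MOCAT-3 source code; in the actual model the $\phi$ and $K0$ coefficients come from the NASA standard break-up model and the $v$ terms from the static exponential atmosphere.

```python
def mocat3_rhs(S, D, N, lam, p, D_up, N_up):
    """Time derivatives (Sdot, Ddot, Ndot) for one altitude shell, eqs. (1)-(3).

    p is a dict of model coefficients; D_up, N_up are the derelict and debris
    populations of the shell directly above (or of this shell, if it is the top one).
    """
    # Eq. (1): launches minus retirements and collision losses of active satellites.
    Sdot = (lam - S / p["dt"]
            - p["phi_SN"] * (p["delta"] + p["alpha"]) * N * S
            - p["phi_SD"] * (p["delta"] + p["alpha"]) * D * S
            - p["alpha_a"] * p["phi_SS"] * S**2)
    # Eq. (2): failed disposals, disabling collisions, and drag flux of derelicts.
    Ddot = ((1 - p["P"]) * S / p["dt"]
            + p["phi_SD"] * p["delta"] * D * S
            + p["phi_SN"] * p["delta"] * N * S
            - p["phi_DN"] * N * D
            - p["phi_DD"] * D**2
            + (D_up * p["v_D_up"] - D * p["v_D"]) / p["d"])
    # Eq. (3): fragments from every collision pair plus drag flux of debris.
    Ndot = (p["K0_SN"] * p["phi_SN"] * p["alpha"] * N * S
            + p["K0_SD"] * p["phi_SD"] * p["alpha"] * D * S
            + p["K0_DN"] * p["phi_DN"] * N * D
            + p["K0_DD"] * p["phi_DD"] * D**2
            + p["alpha_a"] * p["K0_SS"] * p["phi_SS"] * S**2
            + p["K0_NN"] * p["phi_NN"] * N**2
            + (N_up * p["v_N_up"] - N * p["v_N"]) / p["d"])
    return Sdot, Ddot, Ndot
```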
Furthermore, active satellites directly exit the environment at a rate of $1/\Delta t$ with a success probability of $P$, with the rest becoming derelict satellites. The lifetime of each active satellite is taken as $\Delta t=5$ years, whereas the probability of successful post-mission disposal is $P=0.95$. The number of fragments created during collisions between the species is determined by the NASA standard break-up model [10], which determines the values of $K0$ and $\phi$. These variables are specified for each type of collision between all species combinations. For the collision model, the average mass, area, and diameter values used for each species were taken from reference [11] and are shown in Table 1. The variables $\delta$, $\alpha$, $\alpha_{a}$ set the proportionality of collisions that become debris objects. Specifically, $\delta=10$ gives the ratio of collisions that produce disabling versus lethal debris, $\alpha=0.2$ is the fraction of derelict and debris objects that an active satellite fails to avoid, and $\alpha_{a}=0.01$ is the fraction of active satellites that another active satellite fails to avoid. This section has summarized the parameters of the MOCAT-3 model, which was used for the numerical analysis given in the rest of the paper.

| | Active | Derelict | Debris |
|---|---|---|---|
| Mass (kg) | 223 | 223 | 0.640 |
| Area ($m^{2}$) | 1.741 | 1.741 | 0.020 |
| Diameter (m) | 1.490 | 1.490 | 0.180 |

Table 1: Physical characteristics of each species.

### 2.2 Equilibrium Solutions and Stability

The equilibrium points for the set of coupled differential equations (1), (2), (3) were solved for each shell by finding the population of each species $\{S_{eq}(h),D_{eq}(h),N_{eq}(h)\}$ for which the differential equations equal zero: $\dot{S}=0,\hskip 4.0pt\dot{D}=0,\hskip 4.0pt\dot{N}=0.$ For this set of values $\{S_{eq}(h),D_{eq}(h),N_{eq}(h)\}$, the sources and sinks of the environment balance each other and the system is in equilibrium. A change in the launch rate generates a new set of equilibrium solutions because the launch rate is a major source in the active satellite population. Since each differential equation (1), (2), (3) has degree $2$, the set of $3$ coupled equations has $2^{3}=8$ equilibrium solutions per shell. We eliminate solutions that are complex or that have a negative real part, as these are non-physical species populations. For the launch rate cases we studied, we found that each shell has two sets of positive, real-valued equilibrium solutions $\{S_{1}(h),D_{1}(h),N_{1}(h)\}$ and $\{S_{2}(h),D_{2}(h),N_{2}(h)\}$, with one solution set having a larger number of active satellites than the other: $S_{1}>S_{2}$. We used this solution set $\{S_{1}(h),D_{1}(h),N_{1}(h)\}$ as the influx populations to the next lower shell $\{S_{1}(h-1),D_{1}(h-1),N_{1}(h-1)\}$. For the highest shell, we assumed the influx of objects from higher altitudes was equal to the outflow of objects from that shell. This assumption may differ from reality, as there are many objects located above $900$ km, but it is difficult to estimate how many of these objects would de-orbit per year due to atmospheric drag at such high altitudes.
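As an illustration of this per-shell root-finding, the sketch below solves a toy single-shell version of equations (1)-(3) with made-up coefficients (the drag fluxes are collapsed into linear sinks so the example stays self-contained) and keeps only the physical roots; it is not the calibrated MOCAT-3 setup.

```python
import numpy as np
from scipy.optimize import fsolve

lam, dt, P = 100.0, 5.0, 0.95
phi, K0, delta, alpha, alpha_a = 1e-6, 50.0, 10.0, 0.2, 0.01  # illustrative only

def f(x):
    S, D, N = x
    Sdot = lam - S/dt - phi*(delta + alpha)*(N + D)*S - alpha_a*phi*S**2
    Ddot = (1 - P)*S/dt + phi*delta*(D + N)*S - phi*N*D - phi*D**2 - 0.1*D
    Ndot = (K0*phi*alpha*(N + D)*S + K0*phi*N*D + K0*phi*D**2
            + alpha_a*K0*phi*S**2 - 0.1*N)
    return [Sdot, Ddot, Ndot]

# Each equation has degree 2, so there are at most 2**3 = 8 roots; scan many
# starting points and keep the distinct real, non-negative (physical) ones.
roots = set()
for guess in np.random.uniform(0.0, 1e4, size=(200, 3)):
    x, _, ok, _ = fsolve(f, guess, full_output=True)
    if ok == 1 and np.all(x >= -1e-6):
        roots.add(tuple(np.round(x, 3)))
print(sorted(roots))
```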
By finding the equilibrium solutions $\{S,D,N\}$ per shell, starting with the highest altitude shell and ending with the lowest altitude shell, we guarantee the equilibrium of the entire orbital environment in the $200-900$ km altitude range, because each shell depends only on the species populations within that shell and directly above it. Our model does not contain any flows from lower shells into higher shells, so the lower shells do not affect the equilibrium of the higher shells. For a given launch rate and set of initial conditions $\{S_{i},D_{i},N_{i}\}$, we integrate the differential equations (1), (2), (3) with respect to time to find the number of years required for the source-sink model to reach equilibrium. The initial conditions we used were based on Two-Line Element (TLE) data from space-track.org (the TLE catalog was downloaded from space-track.org, accessed on August 18, 2022). We used the process of reference [9], which classified each object as an active satellite, derelict satellite, or debris according to its mass, diameter, and area. We note that, unlike reference [9], we do not distinguish between slotted and unslotted satellites; thus we combined these two populations into the active satellite population. The initial populations of each species are displayed per altitude shell in Figure 2.

Figure 2: Population of each species from TLE data as of August 2022.

### 2.3 Basin of Attraction

_Basin of Attraction Definition:_ “The set of points in the space of system variables such that initial conditions chosen in this set dynamically evolve to a particular attractor” [12]. In our model, an attractor is a stable equilibrium point.

Once the source-sink model is at a steady equilibrium state, we studied how the model reacts to perturbations in $\{S,D,N\}$ from equilibrium. In particular, we studied how a sudden increase in debris $N$ alters the orbital environment by looking at how the system evolved after the perturbation and whether or not it tended back toward equilibrium. We found the basin of attraction about the stable equilibrium point for various initial conditions of debris $N$. One reason we decided to focus on analyzing the stability of the orbital environment with respect to changes in the debris species, rather than changes in active or derelict satellites, is that the exact amount of debris currently in LEO is unknown. Another reason is that we wanted to study the instability threshold for debris creation, known as Kessler syndrome, in which the amount of debris in the orbital environment is large enough that it continues to generate more and more debris, creating a chain reaction. To visualize the basin of attraction, we plotted phase space diagrams.

## 3 Launch Rate Distributions

_Launch Rate Distribution Definition:_ The number of active satellites launched into orbit per year per altitude shell.

We used various launch rate distributions to study the stability of the LEO environment. The two types of launch rate distributions that we studied are _static_ and _dynamic_ launch rates. A static launch rate represents a constant influx of active satellites per altitude shell per year for a given number of years. A dynamic launch rate represents a variable launch rate per altitude shell per year. For each launch rate, a unique set of equilibrium solutions is found. Thus, a static launch rate is used to study the behaviour of the system of equations with respect to the equilibrium solutions.
However, a varying launch rate represents a more realistic scenario, since the number of satellites launched per year has changed drastically over the past few decades. A dynamic launch rate was studied as a separate case wherein the equilibrium solutions changed with variations in the launch rate.

### 3.1 Static Launch Rate

We studied two cases of static launch rates. For the first case, we used the maximum number of satellites launched in one year per altitude shell over the past ten years. For the second case, we used the ‘As Received’ filings database from the International Telecommunication Union (ITU).

#### 3.1.1 _Case 1: Past Launch Rates_

We used the maximum number of satellites launched within one year over the past ten years for each altitude shell. We use the maximum historic launch rate per shell instead of the launch rate from a specific year because particular years have a low number of launches to certain shells, which does not represent the general launch cadence. This launch rate distribution allowed us to analyze the stability of the current LEO environment to see if current launch activities are sustainable or if we are already in danger of run-away debris growth. The number of satellites launched into each altitude shell was taken from the Union of Concerned Scientists database [2], and is displayed in Figure 3.

Figure 3: Maximum launch rate over the last ten years per altitude shell [2].

#### 3.1.2 _Case 2: ITU Filings_

The ‘As Received’ ITU filings database [13] is a list of satellite notices filed with the ITU that have not yet been reviewed or published by the ITU. It should be noted that the ITU states this database is not regulated. However, this database allows for some forecasting of satellite launches over the next few years. Each filing includes the altitude and number of satellites that an organization intends to launch. We have filtered this database to eliminate duplicate filings made by the same organization. We used the average of the apogee and perigee altitudes to bin the satellites into altitude shells. The number of satellites forecast to be launched into each shell is shown in Figure 4.

Figure 4: ‘As Received’ ITU filings of satellite notices [13].

These filings are valid for several years, and the deployment of a satellite or constellation of satellites into orbit can take over a year. Thus, in using ITU filings to estimate a launch rate per year, we divided the total number of satellites in each shell by a number of years $n$.

### 3.2 Dynamic Launch Rate

We studied a dynamic launch rate as a third launch rate distribution. As shown in Figure 1, the number of objects launched into orbit each year has not remained constant. By using a dynamic launch rate, we can represent such a change in launch rate per year.

#### 3.2.1 Case 3: Varying Launch Rate per Year

We followed a similar approach to [4] in modeling a dynamic launch rate. We took the launch rate displayed in Figure 3 as the base case and then increased this launch rate by 0%, 1%, 3%, 5%, and 7% each year for $50$ years. Then we set the launch rate to be constant at the rate reached at the end of the $50$ years and let the environment evolve for another $800$ years at this constant launch rate. The total number of objects per year for each incremental launch rate is displayed in Figure 5. The total number of launched satellites and the species populations at the end of the 800 years for each incremental launch rate are given in Table 2.
As can be seen in the table, the various percentage increases in launch rate over the first 50 years produce drastically different total numbers of objects in the environment at the end of the simulation. For example, a 7% increase in launch rate produces a total number of objects at the end of the 800-year simulation that is about two orders of magnitude larger than the total produced by a 1% increase in launch rate. This numerical analysis shows how a few percent difference in the yearly growth of the launch rate creates large differences in the population of each species when propagated over time.

Figure 5: Total number of objects in orbit for launch rates growing by 0-7% per year for 50 years and then remaining constant for 800 years.

| % Increase | Launch $\lambda$ | Active S | Derelict D | Debris N | Total |
|---|---|---|---|---|---|
| 0% | 1965 | 9820 | 1383 | 1402 | 12606 |
| 1% | 3232 | 16146 | 1984 | 1743 | 19873 |
| 3% | 8614 | 42976 | 4621 | 4317 | 51914 |
| 5% | 22533 | 111830 | 12387 | 22061 | 146270 |
| 7% | 57883 | 270980 | 60571 | 741220 | 1072800 |

Table 2: Population of each species and total number of objects in orbit at the end of a period of constant growth rate in launch as shown in Figure 5.

## 4 Results and Discussion

### 4.1 Equilibrium Solutions

For each launch rate, we determined the set of equilibrium solutions. We computed the equilibrium solutions per shell for the ‘business as usual’ case 3.1.1; the results are presented in Figure 6. Since equilibrium solutions exist, we can conclude that our current launch activity is sustainable for the $200-900$ km altitude range and that run-away debris growth will not occur if launch rates remain at these levels.

Figure 6: Equilibrium solutions per species for the constant launch rate case 3.1.1.

We also computed the equilibrium solutions for case 3.1.2 with launch rate $\lambda_{itu}/n$ for $n=7$ years and found that no positive, real-valued solutions existed. This may seem alarming, but realistically we should not assume that all the satellite notices filed will be used; only a fraction of the satellite notice filings will actually be launched. We increased $n$ to $n=21$ and found a set of equilibrium points for each shell, as shown in Figure 7.

Figure 7: Equilibrium solutions per species for a constant launch rate proportional to ITU filings given in case 3.1.2.

For the dynamic launch case 3.2.1, positive, real-valued equilibrium solutions could only be found for the 1% increase in launch rate per year. A summary of the total populations of each species at equilibrium for each launch case is given in Table 3.

| Rate | Total Launch $\lambda$ | Total Active S | Total Derelict D | Total Debris N | Total All |
|---|---|---|---|---|---|
| Case 1 | 1965 | 9820 | 1383 | 1402 | 12606 |
| Case 2 | 56410 | 278847 | 22704 | 157870 | 459421 |
| Case 3: 1% | 3232 | 16145 | 2292 | 4281 | 22719 |

Table 3: Total population of each species at equilibrium for various launch rate distributions.

Overall, the number of active satellites $S$ at equilibrium is proportional to the number of satellites launched. The amounts of debris $N$ and derelict satellites $D$, however, vary with respect to the number of active satellites, with higher shells acquiring a larger number of derelict and debris objects because the sink caused by atmospheric drag removes fewer objects per year at higher altitudes due to the lower atmospheric density.
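The per-shell equilibria above are computed by the top-down cascade described in Section 2.2. A minimal sketch of that cascade is given below; `shell_rhs` is a stand-in for a per-shell evaluation of equations (1)-(3) (for instance, a wrapper around the `mocat3_rhs` sketch above), not the actual MOCAT-3 code.

```python
import numpy as np
from scipy.optimize import fsolve

def cascade_equilibrium(shell_rhs, lam, n_shells=20, guess=(1e3, 1e2, 1e2)):
    """Solve each shell's equilibrium from the top shell down, feeding the
    solved (S, D, N) in as the inflow of the shell below."""
    eq = np.zeros((n_shells, 3))
    for h in reversed(range(n_shells)):
        # Top shell: inflow taken equal to the shell's own outflow; shell_rhs
        # is assumed to interpret above=None accordingly.
        above = eq[h + 1] if h < n_shells - 1 else None
        eq[h] = fsolve(lambda x: shell_rhs(x, lam[h], above, h), guess)
        guess = eq[h]  # warm-start the next (lower) shell
    return eq  # rows are shells, columns are (S, D, N) at equilibrium
```

Note that `fsolve` returns a single root near its starting guess, whereas the paper enumerates all eight roots per shell and keeps the physical branch with the larger $S$; a root scan as in the earlier sketch would be needed to reproduce that selection.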
### 4.2 Stability Analysis

The stability of the equilibrium solutions found for Case 1 (3.1.1), Case 2 (3.1.2), and Case 3 (3.2.1) at the 1% increment only, was determined by computing the eigenvalues of the system linearized about each equilibrium solution. All eigenvalues were found to be negative, indicating that all of these equilibrium solutions are stable. As an example, Table 4 displays the eigenvalues for the Case 2 launch rate of Section 3.1.2.

| Active S | Derelict D | Debris N |
|---|---|---|
| -0.200 | -1.028 | -0.212 |
| -221.167 | -0.591 | -0.211 |
| -885.271 | -0.432 | -0.211 |
| -345.598 | -0.350 | -0.200 |
| -149.086 | -0.253 | -0.200 |
| -86.341 | -0.119 | -0.209 |
| -37.246 | -0.089 | -0.208 |
| -70.472 | -0.077 | -0.208 |
| -36.780 | -0.056 | -0.205 |
| -19.147 | -0.043 | -0.200 |
| -17.605 | -0.001 | -0.203 |
| -10.008 | -0.003 | -0.205 |
| -9.180 | -0.004 | -0.201 |
| -5.522 | -0.007 | -0.201 |
| -4.784 | -0.017 | -0.201 |
| -3.129 | -0.024 | -0.204 |
| -2.350 | -0.026 | -0.203 |
| -1.306 | -0.027 | -0.203 |
| -1.776 | -0.027 | -0.201 |
| -0.777 | -0.147 | -0.202 |

Table 4: The eigenvalues of each population for the equilibrium solutions displayed in Figure 7.

Additionally, we used phase portraits to depict the phase space about the stable equilibrium point per shell. The phase portraits for the equilibrium points of the Case 1 launch rate, shown in Figure 6, are displayed for two altitude shells in Figure 8.

Figure 8: Phase portraits about the stable equilibrium state at different altitudes.

We analyzed how many years were needed for the orbital environment to settle into its equilibrium state for the launch rate $\lambda_{itu}/21$, assuming the initial conditions given by Figure 2. We integrated the set of differential equations using these initial conditions. The results are shown for active satellites, derelict satellites, and debris in Figures 9, 10, and 11, respectively. The population of active satellites settles into equilibrium across all shells within 10 years. The populations of derelict satellites and debris require more than 50 years to reach equilibrium for particular altitude shells.

Figure 9: Number of active satellites in orbit over time for the launch rate given in Section 3.1.1.
Figure 10: Number of derelict satellites in orbit over time for the launch rate given in Section 3.1.1.
Figure 11: Amount of debris in orbit over time for the launch rate given in Section 3.1.1.

### 4.3 Perturbing Launch Rate for One Year

We studied how the orbital environment would react to a one-time drastic increase in launch. This perturbation in launch rate allows for all ITU filings displayed in Figure 4 to be launched in one year. The utility of this approach is that we can study how the orbital environment reacts to a launch rate for which no equilibrium solutions exist, as stated in Section 2.2. We started with initial conditions given by Figure 2 and a launch rate of $\lambda_{itu}/21$ as shown in Figure 7. We allowed the system of equations to evolve for 20 years at this constant launch rate, and then we increased the launch rate by a factor of twenty for one year. These two launch rates are displayed in Figure 12.

Figure 12: Launch forecast proportional to ITU ‘As Received’ filings.

After this one-year increase in launch activity, we decreased the launch rate back to the original rate of $\lambda_{itu}/21$ and allowed the system to evolve for another 20 years. The results are shown for each species in Figures 13, 14, and 15. Overall, the system evolved back toward the stable equilibrium solution given in Figure 7 within the 20 years following the perturbation in launch rate.
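For reference, a minimal sketch of the eigenvalue check used in Section 4.2 is shown below: linearize the stacked shell equations numerically about an equilibrium and require every eigenvalue of the Jacobian to have a negative real part. The helper names are our own; `f` is assumed to map the stacked state $[S_{1},D_{1},N_{1},\dots,S_{20},D_{20},N_{20}]$ to its time derivative.

```python
import numpy as np

def numerical_jacobian(f, y, eps=1e-6):
    """Forward-difference Jacobian of f at y."""
    y = np.asarray(y, dtype=float)
    f0 = np.asarray(f(y))
    J = np.zeros((f0.size, y.size))
    for j in range(y.size):
        yp = y.copy()
        h = eps * max(1.0, abs(y[j]))  # scale the step to the state magnitude
        yp[j] += h
        J[:, j] = (np.asarray(f(yp)) - f0) / h
    return J

def is_stable(f, y_eq):
    """Return (stable, eigvals): stable when all eigenvalues have negative real part."""
    eigvals = np.linalg.eigvals(numerical_jacobian(f, y_eq))
    return bool(np.all(eigvals.real < 0)), eigvals
```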
A change in the launch rate in effect perturbs the population of each species away from equilibrium.

Figure 13: Number of active satellites over time with a one-time increase in launch at 20 years.
Figure 14: Number of derelict satellites over time with a one-time increase in launch at 20 years.
Figure 15: Number of debris objects over time with a one-time increase in launch at 20 years.

### 4.4 Perturbations of Equilibrium Solutions

We studied how the orbital environment reacts to a sudden increase in debris. We analyzed two cases: the first depicts the effect of a uniform increase in debris across all shells, and the second depicts the effect of a debris increase in a single shell, comparing a perturbation in debris at a high altitude with one at a low altitude.

#### 4.4.1 Equal Perturbation Across All Shells

Starting with the system at the equilibrium depicted in Figure 7 for the constant launch rate given by case 3.1.2, we perturbed the amount of debris in each shell by $10,000$ objects at $t=20$ years and allowed the system to evolve for 200 years. The change in the amount of debris is shown in Figure 16. Table 5 summarizes these results by displaying the total amount of each species before the increase in debris, at the time of the event, and 200 years after. From Figure 16, we note that such an event creates a minimal effect in lower altitude shells, with each species returning to its equilibrium state within a few years. However, for higher altitude shells the scenario is drastically different, with the system remaining out of equilibrium for at least $200$ years. This analysis shows how an ‘explosion’-type event that produces a large amount of debris across all shells greatly affects the amount of debris present at higher altitudes for many years following the event.

| Species | Initial | At Event | After 200 Years | $\Delta$ |
|---|---|---|---|---|
| S | 292830 | - | 292629 | -201 |
| D | 21290 | - | 22597 | +1308 |
| N | 107712 | 307712 | 145379 | +37667 |

Table 5: Population of each species before and after a sudden increase in debris across all shells.

Figure 16: Debris population over time with an impulsive increase in debris by 10,000 fragments at 20 years.

#### 4.4.2 Impulsive Debris Perturbations in a High Shell vs. a Low Shell

From the equilibrium values displayed in Figure 7, the amount of debris was increased by 10,000 objects in the second-highest shell ($830-865$ km) and in the shell with altitude range $410-445$ km. We chose the second-highest altitude shell rather than the highest shell since it contained significantly more active satellites. For the lower altitude shell, we chose the shell that contained the greatest number of active satellites overall, since this shell is the most sensitive to collisions between debris and active satellites. We set the orbital environment to equilibrium and then added a perturbation in debris at $t=20$ years. After this perturbation, we allowed the system to evolve for 200 years. The results are displayed in Figure 17. Increasing the amount of debris by 10,000 objects in a high altitude shell has a much more significant impact on the overall LEO environment than increasing the debris in a lower altitude shell. In the left side of Figure 17 we see that the environment quickly returns to its near-equilibrium state after the perturbation occurs, whereas the environment does not recover to equilibrium if such a perturbation occurs at a high altitude shell, as shown in the right side of the figure.
Rather than the system returning to its equilibrium state, a large perturbation of the debris population at a high altitude causes the debris population to keep growing across multiple shells over the course of 200 years. It could be that over a longer period, $t>200$ years, the system would return to its equilibrium state, or it could be that the debris population has reached a level at which collisions occur continuously and debris grows without end. We study this behaviour in more detail in Section 5.

Figure 17: A comparison of the evolution of the debris population after a sudden increase in debris at $t=20$ years in two different shells.

## 5 Instability Threshold: Kessler Syndrome

Using the launch rate given in case 3.1.2, we calculated the instability threshold as the maximum perturbation in debris away from equilibrium for which the population of debris does not increase without bound over 1,000 years. The two types of perturbations we used were perturbing debris in all shells simultaneously, as done in Section 4.4.1, and perturbing debris in each shell individually, similar to the approach used in Section 4.4.2. The system does not necessarily need to return to its equilibrium solution within 1,000 years of the perturbation, but for the perturbation to be considered part of the stable region, the amounts of debris and derelict satellites need to be decreasing at the end of the 1,000 years. In other words, at $t=1000$ years the set of $\{S,D,N\}$ must satisfy $\dot{D}\leq 0,\hskip 4.0pt\dot{N}\leq 0$ for all shells. The number of active satellites can be decreasing or increasing at the end time. In this way, we calculated the threshold at which run-away debris growth, referred to as Kessler syndrome, occurs (a minimal sketch of this search procedure is given after Figure 18).

### 5.1 Perturbing All Shells Simultaneously

To the nearest thousand, the largest perturbation to the debris population for which the system reverted toward the equilibrium state after 1,000 years was found to be 29,000 debris objects. Perturbing the debris population by 30,000 objects was found to cause run-away debris growth. These two cases are displayed in Figure 18. In the left side of Figure 18 it is clear that the perturbation in debris causes debris growth for about 400 years, but the system begins to return to equilibrium as it evolves for the remaining 600 years. This is not true for a perturbation of 30,000 debris objects, as displayed in the right side of Figure 18, wherein the population of debris continues to grow, reaching a few quintillion objects before the integration fails at $t=600$ years. Such run-away debris growth displays Kessler syndrome, as the orbital environment is unstable and continues to diverge away from its equilibrium state. Through collisions with active satellites, a perturbation in debris also causes a change in the population of derelict satellites. The evolution of the derelict population for perturbations of 29,000 and 30,000 debris objects is displayed in Figure 19. Overall, the run-away debris growth occurred in higher altitude shells, but we are also interested in the instability threshold of lower altitude shells. Thus, rather than simultaneously perturbing all shells away from equilibrium, we studied the instability threshold of each altitude shell in the next section.

Figure 18: A comparison of the evolution of the debris population after a perturbation in debris occurs across all shells. The stable regime is displayed on the left and the unstable regime is displayed on the right.
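A minimal sketch of the threshold search, assuming helper names of our own choosing: `f(t, y)` stacks equations (1)-(3) for all shells, `y_eq` is the equilibrium state, and `bump(y, dN)` adds `dN` debris objects to the perturbed shell(s). The bisection resolves the largest stable perturbation to the stated tolerance.

```python
import numpy as np
from scipy.integrate import solve_ivp

def is_stable_after(f, y0, t_end=1000.0):
    """Stability test of Section 5: Ddot <= 0 and Ndot <= 0 in every shell at t_end."""
    sol = solve_ivp(f, (0.0, t_end), y0, method="LSODA", rtol=1e-6)
    if not sol.success:
        return False  # integration blow-up is treated as run-away growth
    deriv = np.asarray(f(t_end, sol.y[:, -1]))
    ddot, ndot = deriv[1::3], deriv[2::3]  # D and N slots of each (S, D, N) triple
    return bool(np.all(ddot <= 0.0) and np.all(ndot <= 0.0))

def instability_threshold(f, y_eq, bump, lo=0.0, hi=1e8, tol=1e3):
    """Largest debris perturbation that stays stable, to the nearest `tol` objects."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_stable_after(f, bump(y_eq, mid)):
            lo = mid  # still stable: the threshold lies above mid
        else:
            hi = mid  # run-away growth: the threshold lies below mid
    return lo
```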
Figure 19: A comparison of the evolution of the derelict population after a perturbation in debris across all shells.

### 5.2 Perturbing Shells Individually

By perturbing the amount of debris in each shell individually, we were able to analyze how sensitive each altitude shell is to a sudden increase in debris. We calculated, to the nearest thousand, the maximum perturbation in the debris population away from equilibrium for which the system evolved back toward equilibrium within 1,000 years. The instability threshold hence lies at this maximum perturbation. The results are presented in Table 6. Altitudes below 410 km are not included in the table because these shells could withstand a perturbation of debris equal to $10^{8}$ objects. We conclude that the stability of the orbital environment at lower altitude shells is much more resilient to perturbations in debris than at higher altitude shells. This result concurs with the result of Section 4.4.2. The reason for this behaviour is that the sink of the model, namely atmospheric drag, is much stronger at lower altitude shells; it removes debris from the environment, preventing collisions with active and derelict satellites that would create more debris. We would like to note that this analysis was done for a particular launch rate taken as a fraction of the ITU filings, $\lambda_{itu}/21$, as shown in Figure 7. Debris creation is directly affected by the launch rate, since launch activity is the source of active satellites per shell, and a higher density of active satellites per shell creates a greater likelihood of collision with debris. Thus, a different launch rate would produce a different evolution of the debris population.

| Altitude Shell (km) | Debris at Equilibrium | Max. Perturbation in Debris |
|---|---|---|
| 410-445 | 2462 | 53442000 |
| 445-480 | 1445 | 28478000 |
| 480-515 | 1145 | 17331000 |
| 515-550 | 1953 | 9861000 |
| 550-585 | 3077 | 5775000 |
| 585-620 | 5085 | 3348000 |
| 620-655 | 7983 | 2001000 |
| 655-690 | 12046 | 1216000 |
| 690-725 | 17185 | 610000 |
| 725-760 | 14725 | 414000 |
| 760-795 | 18644 | 263000 |
| 795-830 | 18621 | 91000 |
| 830-865 | 703 | 71000 |
| 865-900 | 25 | 63000 |

Table 6: Maximum perturbation in debris per shell before Kessler syndrome occurs.

## 6 Remarks and Conclusions

Given a launch rate distribution that is based on historic launch activities (3.1.1), the current orbital environment will evolve to a stable equilibrium state. In such a state, the sources of the model, namely launches and collisions, balance the sinks of the model, namely post-mission disposal and atmospheric drag. Thus, if launch activities remain at current levels, Kessler syndrome will not occur in the $200-900$ km altitude range of the orbital environment, given the assumptions of our model. This is also true for an increased but constant launch rate (3.1.2) for which a stable equilibrium state exists. We note that the evolution of the environment from the current populations of active, derelict, and debris objects to this equilibrium state would take decades. Given a dynamic launch rate distribution, an ever-increasing launch rate entails that the system will not reach an equilibrium state. However, if the launch rate becomes constant after a period of continuous growth, then the system may evolve toward the equilibrium state if the growth rate was small enough. In our analysis, only a growth rate of 1% in launch rate per year over 50 years led to a stable equilibrium state.
Larger growth rates in launch rate entailed that no equilibrium state would be reached, with each species population ever increasing. Perturbations in the debris population away from equilibrium showed how sensitive the environment is to an increase in debris. In general, perturbations in the debris populations in higher altitude shells had more drastic consequences than perturbations in lower altitude shells, due to the stronger atmospheric drag forces at lower altitudes. Run-away debris growth is more common at high altitudes, with Kessler syndrome resulting from significantly smaller perturbations in debris than at lower altitudes. Thus a debris-generating event occurring at a high altitude is more dangerous than one occurring at a low altitude, since at higher altitudes such an event can trigger Kessler syndrome.

## Acknowledgments

The authors would like to thank Thomas Roberts for sharing processed ITU filing data. The authors wish to acknowledge the support of this work by the Defense Advanced Research Projects Agency (Grant N66001-20-1-4028). The content of the information does not necessarily reflect the position or the policy of the Government. No official endorsement should be inferred. Distribution statement A: Approved for public release; distribution is unlimited.

## References

* [1] C. Davenport, “Thousands more satellites could soon be launched into space. Can the federal government keep up?,” Jul 2020.
* [2] UCS Satellite Database, 5-1-2022. http://www.ucsusa.org/satellite_database.
* [3] D. J. Kessler and B. G. Cour-Palais, “Collision frequency of artificial satellites: The creation of a debris belt,” Journal of Geophysical Research: Space Physics, vol. 83, no. A6, pp. 2637–2646, 1978.
* [4] D. L. Talent, “Analytic model for orbital debris environmental management,” Journal of Spacecraft and Rockets, vol. 29, no. 4, pp. 508–513, 1992.
* [5] B. Zhang, Z. Wang, and Y. Zhang, “Discrete evolution model based on mean spatial density for space debris environment,” Astrophysics and Space Science, vol. 364, 2019.
* [6] J. Drmola and T. Hubik, “Kessler syndrome: System dynamics model,” Space Policy, vol. 44-45, pp. 29–39, 2018.
* [7] H. G. Lewis, G. G. Swinerd, R. J. Newland, and A. Saunders, “The fast debris evolution model,” Advances in Space Research, vol. 44, pp. 568–578, 2009.
* [8] A. D’Ambrosio, M. Lifson, and R. Linares, “The capacity of low earth orbit computed using source-sink modeling,” 2022.
* [9] M. Lifson, A. D’Ambrosio, D. Arnas, and R. Linares, “How many satellites can we fit in low earth orbit?: Capacity integrating risk-based and intrinsic methods,” Astrodynamics Specialist Conference, preprint.
* [10] P. Krisko, “Proper implementation of the 1998 NASA breakup model,” Orbital Debris Quarterly News, vol. 15, no. 4, pp. 1–10, 2011.
* [11] G. L. Somma, Adaptive remediation of the space debris environment using feedback control. PhD thesis, University of Southampton, 2019.
* [12] M. Crisan, “Convergence towards a dynamic theory of linguistics and semantics,” in Convergence and Hybrid Information Technologies (M. Crisan, ed.), ch. 3, Rijeka: IntechOpen, 2010.
* [13] ITU e-Submission of Satellite Network Filings ‘As Received’. https://www.itu.int/ITU-R/space/asreceived/Publication/AsReceived.
$\displaystyle\deg_{\tilde{m}_{0}}(\tilde{m}^{\prime}):=\sum_{i=1}^{r}a_{i}.$ (0.7.2)

Then, by (a), we have

$\displaystyle\deg_{\tilde{m}_{0}}(\tilde{m}_{0})=0<\deg_{\tilde{m}_{0}}(\tilde{m}_{1})<\cdots<\deg_{\tilde{m}_{0}}(\tilde{m}_{J})=\deg_{\tilde{m}_{0}}(\tilde{m}_{\gamma}).$ (0.7.3)

(d). For the segment $\gamma_{j}$ in the condition (5), suppose that $\gamma_{j}$ crosses walls $[\mathfrak{d}_{\lambda},f_{{\lambda}}]_{n_{j}}$ ($\lambda\in\Lambda_{j}$) at $t_{j}$. Note that $\gamma^{\prime}=-m_{j-1}$ just before $\gamma$ crosses these walls. Then, the intersection signs in (0.2.6) are given by

$\displaystyle\epsilon_{j}=\begin{cases}1&\langle n_{j},m_{j-1}\rangle>0,\\ -1&\langle n_{j},m_{j-1}\rangle<0.\end{cases}$ (0.7.4)

Thus, $\epsilon_{j}\langle n_{j},m_{j-1}\rangle=|\langle n_{j},m_{j-1}\rangle|=|\langle n_{j},\tilde{m}_{j-1}\rangle_{1}|$. Therefore, by Definition 0.2.6, we have

$\displaystyle\mathfrak{p}_{\gamma_{j},\mathfrak{D}}(c_{j-1}x^{\tilde{m}_{j-1}})=c_{j-1}x^{\tilde{m}_{j-1}}\prod_{\lambda\in\Lambda_{j}}f_{{\lambda}}^{|\langle\delta(n_{j})n_{j},\tilde{m}_{j-1}\rangle_{1}|}.$ (0.7.5)

It is crucial that there is _no division_ by $f_{{\lambda}}$ in this expression for the forthcoming positivity of theta functions.

###### Definition 0.7.2 (General position).

We say that $Q\in M_{\mathbb{R}}\setminus\mathrm{Supp}(\mathfrak{D})$ is _in general position_ if for each $\ell>0$ there is some neighborhood $U_{\ell}$ of $Q$ such that, for the reduction $\mathfrak{D}_{\ell}$, any broken line for $Q^{\prime}\in U_{\ell}$ converges to a broken line for $Q$ in the limit $Q^{\prime}\rightarrow Q$. (Namely, no broken line crosses $\mathrm{Sing}(\mathfrak{D}_{\ell})$ in the limit.) Below we always and implicitly assume that $Q$ is in general position.

###### Definition 0.7.3 (Theta function).

Under the same assumption and notations in Definition 0.7.1, the _theta function_ $\vartheta_{Q,\tilde{m}_{0}}$ for $\tilde{m}_{0}$ with endpoint $Q$ is defined by

$\displaystyle\vartheta_{Q,\tilde{m}_{0}}:=\sum_{\gamma\in B(Q,\tilde{m}_{0})}c_{\gamma}x^{\tilde{m}_{\gamma}},$ (0.7.6)

where $B(Q,\tilde{m}_{0})$ is the set of all broken lines for $\tilde{m}_{0}$ with endpoint $Q$. We also set

$\displaystyle\vartheta_{Q,0}=1.$ (0.7.7)

The function $\vartheta_{Q,\tilde{m}_{0}}$ only depends on the equivalence class of $\mathfrak{D}$, though the condition for $Q$ to be in general position depends on the choice of a representative of $\mathfrak{D}$.

###### Lemma 0.7.4 ([GHKK18, Proposition 3.4]).

For any scattering diagram $\mathfrak{D}$, we have

$\displaystyle\vartheta_{Q,\tilde{m}_{0}}\in x^{\tilde{m}_{0}}\mathbbm{k}[[P_{1}]].$ (0.7.8)

###### Proof.

By Remark (b), each term $c_{\gamma}x^{\tilde{m}_{\gamma}}$ belongs to $x^{\tilde{m}_{0}}\mathbbm{k}[[P_{1}]]$. Thus, it is enough to show that, for a given $\tilde{m}^{\prime}\in\tilde{m}_{0}+P_{1}$, there are only finitely many $\gamma$ such that $\tilde{m}_{\gamma}=\tilde{m}^{\prime}$. We note the following facts:

* By the constraint (0.7.3), such $\gamma$ bends at most $\deg_{\tilde{m}_{0}}(\tilde{m}^{\prime})$ times.
* The walls of $\mathfrak{D}$ that contribute to $\tilde{m}^{\prime}$ belong to the reduction $\mathfrak{D}_{\ell}$ at $\ell=\deg_{\tilde{m}_{0}}(\tilde{m}^{\prime})$, which has only finitely many walls.
* At each bending there are only finitely many possibilities of bending.

Therefore, there are only finitely many possibilities for such $\gamma$. ∎

###### Proposition 0.7.5 ([GHKK18, Proposition 3.8]).
Let $\mathfrak{D}$ be any scattering diagram for a given seed $\mathfrak{s}$, and let $\mathcal{C}^{+}_{\mathfrak{s}}$ be the positive orthant in (0.2.40). For $\tilde{m}_{0}\in\tilde{M}^{\circ}$, suppose that $m_{0}\in\mathcal{C}^{+}_{\mathfrak{s}}\cap M^{\circ}$. Also, suppose that $Q\in\mathrm{Int}(\mathcal{C}^{+}_{\mathfrak{s}})$. Then, we have

$\displaystyle\vartheta_{Q,\tilde{m}_{0}}=x^{\tilde{m}_{0}}.$ (0.7.9)

###### Proof.

We may assume that $\tilde{m}_{0}\neq 0$. It is clear that $Q+\mathbb{R}_{\geq 0}m_{0}$ is a broken line for $\tilde{m}_{0}$ with endpoint $Q$, and it does not intersect any walls of $\mathfrak{D}_{\mathfrak{s}}$; therefore, the associated monomial is $x^{\tilde{m}_{0}}$. We claim that under the condition there is no broken line $\gamma$ with bending. Suppose that such $\gamma$ exists. We use the notation in Definition 0.7.1. For each break point $t_{j}$ therein, let $n_{j}\in N^{+}$ be a unique (not necessarily primitive) element such that $\gamma$ intersects walls in $n_{j}^{\perp}$ at $t_{j}$ and that the velocity of $\gamma$ is shifted by $-p^{*}(n_{j})$. Let $L_{j}$ be the segment of $\gamma$ for the interval $I_{j}$. Then, we claim the following:

$\displaystyle L_{j}\subset H_{j}^{-}:=\biggl\{z\in M_{\mathbb{R}}\biggm{|}\biggl\langle\sum_{i=1}^{j}n_{i},z\biggr\rangle\leq 0\biggr\}\quad(j=1,\dots,J).$ (0.7.10)

We show it by induction on $j$. This is true for $j=1$, because the support of the wall crossed by $\gamma$ at $t_{1}$ is in $n_{1}^{\perp}$, and $\gamma$ is in $\mathcal{C}^{+}_{\mathfrak{s}}$ for $t\rightarrow-\infty$. Now suppose that (0.7.10) holds up to $j$. Then, we have $\langle\sum_{i=1}^{j}n_{i},\gamma(t_{j+1})\rangle\leq 0$. Also, $\langle n_{j+1},\gamma(t_{j+1})\rangle=0$. Thus, $\gamma(t_{j+1})\in H_{j+1}^{-}$. Moreover, the new velocity vector $\gamma^{\prime}_{j+1}=-m_{0}-\sum_{i=1}^{j+1}p^{*}(n_{i})$ satisfies the inequality

$\displaystyle\begin{split}\biggl\langle\sum_{i=1}^{j+1}n_{i},\gamma^{\prime}_{j+1}\biggr\rangle&=\biggl\langle\sum_{i=1}^{j+1}n_{i},-m_{0}\biggr\rangle+\biggl\{\sum_{i=1}^{j+1}n_{i},\sum_{i=1}^{j+1}n_{i}\biggr\}\\ &=\biggl\langle\sum_{i=1}^{j+1}n_{i},-m_{0}\biggr\rangle\leq 0.\end{split}$ (0.7.11)

Thus, (0.7.10) holds for $j+1$. Since $\mathrm{Int}(\mathcal{C}^{+}_{\mathfrak{s}})\cap H_{j}^{-}=\emptyset$, $\gamma$ cannot reach $Q\in\mathrm{Int}(\mathcal{C}^{+}_{\mathfrak{s}})$. ∎

### 0.7.2 Transitivity on theta functions

Now we assume that $\mathfrak{D}$ is consistent.

###### Proposition 0.7.6 ([CPS, Lemma 4.9]).

Let $\mathfrak{D}$ be a consistent scattering diagram. Let $\tilde{m}_{0}\in\tilde{M}^{\circ}$ and $Q,\,Q^{\prime}\in M_{\mathbb{R}}\setminus\mathrm{Supp}(\mathfrak{D})$. Then, for any admissible curve $\gamma$ from $Q$ to $Q^{\prime}$, we have

$\displaystyle\vartheta_{Q^{\prime},\tilde{m}_{0}}=\mathfrak{p}_{\gamma,\mathfrak{D}}(\vartheta_{Q,\tilde{m}_{0}}).$ (0.7.12)

The rest of this subsection is devoted to the proof of this proposition. We may assume that $\tilde{m}_{0}\neq 0$. Let $J$ be the maximal ideal of $\mathbbm{k}[[P_{1}]]$ generated by $P_{1}\setminus\{0\}$.

###### Lemma 0.7.7 ([CPS, Lemma 4.7]).

Let $\mathfrak{D}_{\ell}$ be the reduction of $\mathfrak{D}$ at $\ell$. Suppose that $Q$ and $Q^{\prime}$ are in the same chamber of $\mathrm{Supp}(\mathfrak{D}_{\ell})$. Then, we have the equality

$\displaystyle\vartheta_{Q,\tilde{m}_{0}}\equiv\vartheta_{Q^{\prime},\tilde{m}_{0}}\mod x^{\tilde{m}_{0}}J^{\ell+1}.$ (0.7.13)

###### Proof.
Since we consider $\vartheta_{Q,\tilde{m}_{0}}$ only modulo $x^{\tilde{m}_{0}}J^{\ell+1}$, one can replace $\mathfrak{D}$ with $\mathfrak{D}_{\ell}$ to calculate $\vartheta_{Q,\tilde{m}_{0}}$. First, suppose that $Q$ and $Q^{\prime}$ are connected by a curve $\beta$ in the same chamber such that any point on $\beta$ is in general position for $\mathfrak{D}_{\ell}$. Then, each broken line with endpoint $Q$ is continuously deformed to a broken line with endpoint $Q^{\prime}$. Therefore, the theta function is unchanged. Suppose that there is no such curve between $Q$ and $Q^{\prime}$. Let us consider the special case where $Q$ and $Q^{\prime}$ are connected by a curve $\beta$ in the same chamber such that there is a unique point $Q_{0}$ on $\beta$ that is not in general position. (The general case reduces to this case.) Then, there is at least one broken line $\gamma$ with endpoint $Q$ such that, in the limit $Q\rightarrow Q_{0}$, the broken line intersects with some boundary $\partial\mathfrak{d}$ or joint $\mathfrak{j}=\mathfrak{d}_{1}\cap\mathfrak{d}_{2}$ of $\mathfrak{D}_{\ell}$.

Figure 17: Broken lines in the proof of Lemma 0.7.7.

First, let us consider the case $\mathfrak{j}$. See Figure 17 (a). Suppose that $\gamma$ crosses the walls that contain the joint $\mathfrak{j}$ with the incoming attached monomial $c_{\mathrm{in}}x^{\tilde{m}_{\mathrm{in}}}$ and the outgoing attached monomial $c_{\mathrm{out}}x^{\tilde{m}_{\mathrm{out}}}$. There may be several broken lines $\gamma_{1}=\gamma$, …, $\gamma_{p}$ with endpoint $Q$ such that their bending patterns differ from $\gamma$ only inside these walls. They have the common incoming attached monomial $c_{\mathrm{in}}x^{\tilde{m}_{\mathrm{in}}}$ and the common exponent $\tilde{m}_{\mathrm{out}}$ for the outgoing attached monomials. Then, all these broken lines have a common limit for $Q\rightarrow Q_{0}$. Let $\gamma_{A}$ be the half of a small loop around $\mathfrak{j}$ which is on the same side of $\mathfrak{j}$ as the $\gamma_{i}$’s, as in Figure 17 (a). Then, the sum of the outgoing attached monomials for $\gamma_{1}$, …, $\gamma_{p}$ is exactly the sum of the terms in $\mathfrak{p}_{\gamma_{A},\mathfrak{D}_{\ell}}(c_{\mathrm{in}}x^{\tilde{m}_{\mathrm{in}}})$ with exponent $\tilde{m}_{\mathrm{out}}$. We repeat the same argument on the opposite side of $\mathfrak{j}$ with the opposite half $\gamma_{B}$ of the small loop around $\mathfrak{j}$. Then, by the consistency of $\mathfrak{D}$, we have

$\displaystyle\mathfrak{p}_{\gamma_{A},\mathfrak{D}_{\ell}}(c_{\mathrm{in}}x^{\tilde{m}_{\mathrm{in}}})\equiv\mathfrak{p}_{\gamma_{B},\mathfrak{D}_{\ell}}(c_{\mathrm{in}}x^{\tilde{m}_{\mathrm{in}}})\mod x^{\tilde{m}_{0}}J^{\ell+1}.$ (0.7.14)

Thus, passing through $Q_{0}$ does not change $\vartheta_{Q,\tilde{m}_{0}}$ modulo $x^{\tilde{m}_{0}}J^{\ell+1}$. Therefore, we obtain the equality (0.7.13). Next, we consider the remaining case $\partial\mathfrak{d}$. See Figure 17 (b). We may concentrate on a face $\mathfrak{c}$ of $\mathfrak{d}$ of codimension 2 such that $\mathfrak{c}$ is not a joint. Then, $\mathfrak{c}$ is contained in some walls of $\mathfrak{D}_{\ell}$ with the common normal vector $n$.
In this case, for each broken line for $Q$, there is a unique broken line for $Q^{\prime}$ with the same attached incoming and outgoing monomials modulo $x^{\tilde{m}_{0}}J^{\ell+1}$, thanks to the consistency of $\mathfrak{D}$. Thus, passing through $Q_{0}$ does not change $\vartheta_{Q,\tilde{m}_{0}}$ modulo $x^{\tilde{m}_{0}}J^{\ell+1}$. ∎

###### Lemma 0.7.8 ([CPS, Lemma 4.8]).

Suppose that $Q$ and $Q^{\prime}$ are in different chambers of $\mathrm{Supp}(\mathfrak{D}_{\ell})$. Then, for any admissible curve $\gamma$ from $Q$ to $Q^{\prime}$, we have

$\displaystyle\vartheta_{Q^{\prime},\tilde{m}_{0}}\equiv\mathfrak{p}_{\gamma,\mathfrak{D}_{\ell}}(\vartheta_{Q,\tilde{m}_{0}})\mod x^{\tilde{m}_{0}}J^{\ell+1}.$ (0.7.15)

###### Proof.

We may assume that $Q$ and $Q^{\prime}$ are in adjacent chambers of $\mathrm{Supp}(\mathfrak{D}_{\ell})$, since the general case follows from this case by composing the automorphisms $\mathfrak{p}_{\gamma,\mathfrak{D}_{\ell}}$. Let $n\in N_{\mathrm{pr}}^{+}$ be the normal vector of the walls separating $Q$ and $Q^{\prime}$. We decompose

$\displaystyle\vartheta_{Q,\tilde{m}_{0}}=\sum_{\tilde{m}\in\tilde{M}^{\circ}}c_{\tilde{m}}x^{\tilde{m}}=\vartheta_{+}+\vartheta_{0}+\vartheta_{-},$ (0.7.16)

$\displaystyle\vartheta_{+}=\sum_{\scriptstyle\tilde{m}\in\tilde{M}^{\circ}\atop\scriptstyle\langle n,m\rangle>0}c_{\tilde{m}}x^{\tilde{m}},\quad\vartheta_{0}=\sum_{\scriptstyle\tilde{m}\in\tilde{M}^{\circ}\atop\scriptstyle\langle n,m\rangle=0}c_{\tilde{m}}x^{\tilde{m}},\quad\vartheta_{-}=\sum_{\scriptstyle\tilde{m}\in\tilde{M}^{\circ}\atop\scriptstyle\langle n,m\rangle<0}c_{\tilde{m}}x^{\tilde{m}}.$ (0.7.17)

Also, we decompose $\vartheta_{Q^{\prime},\tilde{m}_{0}}=\vartheta^{\prime}_{+}+\vartheta^{\prime}_{0}+\vartheta^{\prime}_{-}$ in the same way. Thanks to Lemma 0.7.7, we may take $Q$ and $Q^{\prime}$ to be very close to the wall and also move them parallel to the wall inside each chamber as we like. Without losing generality, one can assume that $\langle n,Q\rangle>0$ and $\langle n,Q^{\prime}\rangle<0$.

Figure 18: Broken lines in the proof of Lemma 0.7.8.

(i). $\vartheta_{+}$ and $\vartheta^{\prime}_{+}$. For any exponent $\tilde{m}$ of $\vartheta^{\prime}_{+}$, $\langle n,m\rangle>0$ holds. Then, any broken line for $\vartheta^{\prime}_{+}$ is obtained by extending a broken line for $\vartheta_{+}$. See Figure 18. Thus, we have

$\displaystyle\vartheta_{+}^{\prime}\equiv\mathfrak{p}_{\gamma,\mathfrak{D}_{\ell}}(\vartheta_{+})\mod x^{\tilde{m}_{0}}J^{\ell+1}.$ (0.7.18)

(ii). $\vartheta_{-}$ and $\vartheta^{\prime}_{-}$. The situation is opposite, and we have

$\displaystyle\vartheta_{-}\equiv\mathfrak{p}_{\gamma,\mathfrak{D}_{\ell}}^{-1}(\vartheta^{\prime}_{-})\mod x^{\tilde{m}_{0}}J^{\ell+1}.$ (0.7.19)

(iii). $\vartheta_{0}$ and $\vartheta^{\prime}_{0}$. The broken lines for both do not cross the walls separating $Q$ and $Q^{\prime}$ in the final step. Then, the proof of Lemma 0.7.7 applies to $\vartheta_{0}$ and $\vartheta^{\prime}_{0}$, and we have

$\displaystyle\vartheta_{0}^{\prime}\equiv\vartheta_{0}\equiv\mathfrak{p}_{\gamma,\mathfrak{D}_{\ell}}(\vartheta_{0})\mod x^{\tilde{m}_{0}}J^{\ell+1}.$ (0.7.20)

We can unify the above three equalities into a single one,

$\displaystyle\vartheta_{Q^{\prime},\tilde{m}_{0}}\equiv\mathfrak{p}_{\gamma,\mathfrak{D}_{\ell}}(\vartheta_{Q,\tilde{m}_{0}})\mod x^{\tilde{m}_{0}}J^{\ell+1},$ (0.7.21)

as desired.
∎

Since $\ell$ is arbitrary in (0.7.15), we have the desired equality (0.7.12). This completes the proof of Proposition 0.7.6.

### 0.7.3 Positivity and mutation invariance of theta functions

Now let us specialize to a CSD $\mathfrak{D}_{\mathfrak{s}}$ for a given seed $\mathfrak{s}$. We have the following fundamental result on the positivity of theta functions for a CSD.

###### Theorem 0.7.9 (Positivity of theta functions [GHKK18, Theorem 1.13 & Remark 3.2]).

For a CSD $\mathfrak{D}_{\mathfrak{s}}$, every theta function $\vartheta_{Q,\tilde{m}_{0}}\in x^{\tilde{m}_{0}}\mathbbm{k}[[P_{1}]]$ has only positive integer coefficients.

###### Proof.

We choose a positive realization of $\mathfrak{D}_{\mathfrak{s}}$ in Theorem 0.5.2, so that every wall function $f[tn]^{s}$ of $\mathfrak{D}_{\mathfrak{s}}$ has only positive integer coefficients. Then, the product in the right hand side of (0.7.5) also has only positive integer coefficients. Thus, $c_{\gamma}$ in (0.7.1) is a positive integer. Therefore, the theta function defined by (0.7.6) has only positive integer coefficients. ∎

We conclude the section by showing the mutation invariance of theta functions of a CSD and its consequences. Let us define a piecewise-linear transformation in parallel to $T_{k}$ in (0.6.25),

$\displaystyle\begin{matrix}\tilde{T}_{k}=\tilde{T}_{k,\mathfrak{s}}:&\tilde{M}^{\circ}&\rightarrow&\tilde{M}^{\circ}\\\ &\tilde{m}&\mapsto&\tilde{m}+[\langle\delta_{k}e_{k},\tilde{m}\rangle_{1}]_{+}p_{1}^{*}(e_{k})=\begin{cases}\tilde{S}_{k}(\tilde{m})&m\in\mathcal{H}_{k}^{+},\\\ \tilde{m}&m\in\mathcal{H}_{k}^{-},\end{cases}\end{matrix}$ (0.7.22)

where $\tilde{S}_{k}(\tilde{m})$ is the one in (0.6.10).

###### Proposition 0.7.10 ([GHKK18, Proposition 3.6]).

For a given seed $\mathfrak{s}$, let $\mathfrak{s}^{\prime}=\mu_{k}(\mathfrak{s})$ be the mutation of $\mathfrak{s}$ in direction $k$.

(a). There is a one-to-one correspondence between the broken lines for $\tilde{m}_{0}$ with endpoint $Q$ with respect to a CSD $\mathfrak{D}_{\mathfrak{s}}$ and the broken lines for $\tilde{T}_{k}(\tilde{m}_{0})$ with endpoint $T_{k}(Q)$ with respect to a CSD $\mathfrak{D}_{\mathfrak{s}^{\prime}}=T_{k}(\mathfrak{D}_{\mathfrak{s}})$. The locus of the line is transformed by the piecewise-linear map $T_{k}$ in (0.6.25).

(b). (Mutation invariance of theta functions.) The theta function $\vartheta^{\mathfrak{s}}_{Q,\tilde{m}_{0}}$ with respect to $\mathfrak{D}_{\mathfrak{s}}$ and the one $\vartheta^{\mathfrak{s}^{\prime}}_{Q,\tilde{m}_{0}}$ with respect to $\mathfrak{D}_{\mathfrak{s}^{\prime}}$ are related by

$\displaystyle\vartheta^{\mathfrak{s}^{\prime}}_{T_{k}(Q),\tilde{T}_{k}(\tilde{m}_{0})}=\begin{cases}\tilde{S}_{k}(\vartheta_{Q,\tilde{m}_{0}}^{\mathfrak{s}})&Q\in\mathcal{H}_{k}^{+},\\\ \vartheta_{Q,\tilde{m}_{0}}^{\mathfrak{s}}&Q\in\mathcal{H}_{k}^{-},\end{cases}$ (0.7.23)

where $\mathcal{H}_{k}^{\pm}$ is the one in (0.6.23), and $\tilde{S}_{k}$ acts on the exponents of $x$ in $\vartheta_{Q,\tilde{m}_{0}}^{\mathfrak{s}}$.

###### Proof.

(a). In $\mathcal{H}_{k}^{-}$ the walls and broken lines are unchanged, while in $\mathcal{H}_{k}^{+}$ the walls and broken lines are transformed by the linear map $S_{k}$. Thus, the rule for broken lines is preserved in each $\mathcal{H}_{k}^{\pm}$. Therefore, we only need to check the rule when a broken line crosses the wall $\mathbf{w}_{e_{k}}=[e_{k}^{\perp},1+x^{p_{1}^{*}(e_{k})}]_{e_{k}}$.

Case (1).
Suppose that a broken line $\gamma$ with respect to $\mathfrak{D}_{\mathfrak{s}}$ crosses $e_{k}^{\perp}$ from $\mathcal{H}_{k}^{-}$ with attached monomial $cx^{\tilde{m}}$. Then, $\langle e_{k},-\tilde{m}\rangle_{1}>0$. After crossing $e_{k}^{\perp}$, the new attached monomial is a term in $cx^{\tilde{m}}(1+x^{p_{1}^{*}(e_{k})})^{|\langle\delta_{k}e_{k},\tilde{m}\rangle_{1}|}$. Applying $\tilde{S}_{k}$, it transforms to a term in

$\displaystyle\begin{split}&\quad\ cx^{\tilde{m}+\langle\delta_{k}e_{k},\tilde{m}\rangle_{1}p_{1}^{*}(e_{k})}(1+x^{p_{1}^{*}(e_{k})})^{|\langle\delta_{k}e_{k},\tilde{m}\rangle_{1}|}\\\ &=cx^{\tilde{m}-|\langle\delta_{k}e_{k},\tilde{m}\rangle_{1}|p_{1}^{*}(e_{k})}(1+x^{p_{1}^{*}(e_{k})})^{|\langle\delta_{k}e_{k},\tilde{m}\rangle_{1}|}\\\ &=cx^{\tilde{m}}(1+x^{p_{1}^{*}(e^{\prime}_{k})})^{|\langle\delta_{k}e^{\prime}_{k},\tilde{m}\rangle_{1}|}.\end{split}$ (0.7.24)

This is the rule for a broken line with respect to $T_{k}(\mathfrak{D}_{\mathfrak{s}})$.

Case (2). Suppose that a broken line $\gamma$ with respect to $\mathfrak{D}_{\mathfrak{s}}$ crosses $e_{k}^{\perp}$ from $\mathcal{H}_{k}^{+}$ with attached monomial $cx^{\tilde{m}}$. Then, $\langle e_{k},-\tilde{m}\rangle_{1}<0$. After crossing $e_{k}^{\perp}$, the new attached monomial is a term in $cx^{\tilde{m}}(1+x^{p_{1}^{*}(e_{k})})^{|\langle\delta_{k}e_{k},\tilde{m}\rangle_{1}|}$, which is rewritten as

$\displaystyle cx^{\tilde{m}+\langle\delta_{k}e_{k},\tilde{m}\rangle_{1}p_{1}^{*}(e_{k})}(1+x^{p_{1}^{*}(e^{\prime}_{k})})^{|\langle\delta_{k}e^{\prime}_{k},\tilde{m}\rangle_{1}|}.$ (0.7.25)

Since the attached monomial for the incoming segment is transformed from $cx^{\tilde{m}}$ to $cx^{\tilde{m}+\langle\delta_{k}e_{k},\tilde{m}\rangle_{1}p_{1}^{*}(e_{k})}$, this is the correct rule for a broken line with respect to $T_{k}(\mathfrak{D}_{\mathfrak{s}})$.

(b). Let $\gamma$ be a broken line with respect to $\mathfrak{D}_{\mathfrak{s}}$, and let $\gamma^{\prime}=T_{k}(\gamma)$ be the corresponding broken line with respect to $\mathfrak{D}_{\mathfrak{s}^{\prime}}$. Let $c_{\gamma}x^{\tilde{m}_{\gamma}}$ and $c^{\prime}_{\gamma}x^{\tilde{m}^{\prime}_{\gamma}}$ be the monomials attached to $\gamma$ and $\gamma^{\prime}$, respectively. Then, by (a), we have

$\displaystyle c^{\prime}_{\gamma}x^{\tilde{m}^{\prime}_{\gamma}}=\begin{cases}c_{\gamma}x^{\tilde{S}_{k}(\tilde{m}_{\gamma})}&Q\in\mathcal{H}_{k}^{+},\\\ c_{\gamma}x^{\tilde{m}_{\gamma}}&Q\in\mathcal{H}_{k}^{-}.\end{cases}$ (0.7.26)

Thus, we have (0.7.23). ∎

We have the following corollary of Propositions 0.7.5 and 0.7.10.

###### Corollary 0.7.11 ([GHKK18, Corollary 3.9]).

Let $\mathfrak{D}_{\mathfrak{s}}$ be a CSD, and let $\mathcal{C}^{\mathfrak{s}}_{\mathfrak{s}_{i}}$ be the $G$-cone in (0.6.75). For $\tilde{m}_{0}\in\tilde{M}^{\circ}$, suppose that $m_{0}\in\mathcal{C}^{\mathfrak{s}}_{\mathfrak{s}_{i}}\cap M^{\circ}$. Also, suppose that $Q\in\mathrm{Int}(\mathcal{C}^{\mathfrak{s}}_{\mathfrak{s}_{i}})$. Then, we have

$\displaystyle\vartheta_{Q,\tilde{m}_{0}}=x^{\tilde{m}_{0}}.$ (0.7.27)

###### Proof.

By (0.6.75),

$\displaystyle\mathcal{C}^{+}_{\mathfrak{s}_{i}}=(T_{k_{i-1},\mathfrak{s}_{i-1}}\circ\cdots\circ T_{k_{0},\mathfrak{s}_{0}})(\mathcal{C}^{\mathfrak{s}}_{\mathfrak{s}_{i}}).$ (0.7.28)

Let $T:=T_{k_{i-1},\mathfrak{s}_{i-1}}\circ\cdots\circ T_{k_{0},\mathfrak{s}_{0}}$ and $\tilde{T}:=\tilde{T}_{k_{i-1},\mathfrak{s}_{i-1}}\circ\cdots\circ\tilde{T}_{k_{0},\mathfrak{s}_{0}}$.
Let $S:M_{\mathbb{R}}\rightarrow M_{\mathbb{R}}$ be the linear map that locally coincides with $T$ on the cone $\mathcal{C}^{\mathfrak{s}}_{\mathfrak{s}_{i}}$, and let $\tilde{S}$ be the corresponding map for $\tilde{M}^{\circ}\rightarrow\tilde{M}^{\circ}$. Then, applying (0.7.23) repeatedly, we obtain

$\displaystyle\vartheta^{\mathfrak{s}_{i}}_{T(Q),\tilde{T}(\tilde{m}_{0})}=\tilde{S}(\vartheta_{Q,\tilde{m}_{0}}),$ (0.7.29)

where $\tilde{S}$ acts on the exponents of $x$. Meanwhile, $\vartheta^{\mathfrak{s}_{i}}_{T(Q),\tilde{T}(\tilde{m}_{0})}=x^{\tilde{T}(\tilde{m}_{0})}$ by Proposition 0.7.5. Thus, we have $\vartheta_{Q,\tilde{m}_{0}}=x^{\tilde{m}_{0}}$. ∎

Combining Proposition 0.7.6 and Corollary 0.7.11, we have another fundamental theorem on theta functions for a CSD, which provides the identification of theta functions with cluster monomials or cluster variables for the corresponding cluster pattern.

###### Theorem 0.7.12 ([GHKK18, Theorem 4.9]).

Let $\mathfrak{D}_{\mathfrak{s}}$ be a CSD, and let $\mathcal{C}^{\mathfrak{s}}_{\mathfrak{s}_{i}}$ be the $G$-cone in (0.6.75). For $\tilde{m}_{0}\in\tilde{M}^{\circ}$, suppose that $m_{0}\in\mathcal{C}^{\mathfrak{s}}_{\mathfrak{s}_{i}}\cap M^{\circ}$. Also, suppose that $Q\in\mathrm{Int}(\mathcal{C}_{\mathfrak{s}}^{+})$. Then, for any admissible curve $\gamma$ from any point in $\mathrm{Int}(\mathcal{C}^{\mathfrak{s}}_{\mathfrak{s}_{i}})$ to $Q$, we have

$\displaystyle\vartheta_{Q,\tilde{m}_{0}}=\mathfrak{p}_{\gamma,\mathfrak{D}_{\mathfrak{s}}}(x^{\tilde{m}_{0}}).$ (0.7.30)

Notes. All results are taken from [GHKK18, §3, §4] and [CPS, §4], adapted to the present formulation of scattering diagrams.

## 0.8 Source code of Ordering Algorithm

For the reader's convenience we present source code for the Ordering Algorithm (Algorithm 0.5.7) written for SageMath 9.4 (Sage Mathematics Software, The Sage Development Team, https://www.sagemath.org). It is an almost direct translation of Algorithm 0.5.7, and no effort has been made toward efficiency or sophistication. (If any minor or major bug is found, please let us know.) The following examples explain how to use it in a Sage Notebook.

###### Example 0.8.1.

(a). Let us order the product $\Psi[e_{2}]^{5}\Psi[e_{1}]^{3}$ modulo $G^{>5}$.

In: order(5,[[0,1,5],[1,0,3]])
Out: [[1, 0, 3], [3, 1, 5], [2, 1, 15], [3, 2, 125], [1, 1, 15], [2, 2, 60], [2, 3, 270], [1, 2, 30], [1, 3, 30], [1, 4, 15], [0, 1, 5]]

(b). One can order any (not necessarily anti-ordered) product with factors of the form $\Psi[tn]^{s/t}$ $(n\in N_{\mathrm{pr}}^{+},\ s,t\in\mathbb{Z}_{>0})$ as specified in Algorithm 0.5.7.

In: order(6,[[0,1,1],[1,2,1],[2,1,1],[2,2,1/2],[1,0,2]])
Out: [[1, 0, 2], [4, 1, 1], [3, 1, 2], [2, 1, 2], [4, 2, 19], [3, 2, 16], [1, 1, 2], [2, 2, 13/2], [3, 3, 33], [2, 3, 10], [1, 2, 1], [2, 4, 9/2], [1, 3, 1], [0, 1, 1]]

Enjoy exploring the Badlands!
Source code of Ordering Algorithm for SageMath 9.4

```python
# Ordering Algorithm for dilogarithm products
# written by Tomoki Nakanishi for SageMath 9.4
# ver.2021.10.30.b
# usage: order(L,C)

import math  # for math.gcd (preloaded in Sage; stated here for completeness)

def bf(a, b):
    # bilinear form {a,b} such that {e_2,e_1}=1
    # a, b: rank 2 integer vectors
    res = a[1]*b[0] - a[0]*b[1]
    return res

def check_ordered(C):
    # check dilog product C is ordered completely
    # C has factors of [n]^{c}, c>0 any, len(C)>1
    l = len(C)  # l>1
    for i in range(l-1):
        if bf(C[i][:2], C[i+1][:2]) > 0:  # if {n',n}>0
            return False  # not ordered
        elif bf(C[i][:2], C[i+1][:2]) == 0:  # {n',n}=0
            if C[i][0]+C[i][1] > C[i+1][0]+C[i+1][1]:  # deg(n')>deg(n)
                return False  # not ordered
    return True  # ordered

def decompose_initial(C):
    # decompose dilog product C for the main routine (1)
    # C has factors [tn]^{s/t}
    res = []
    l = len(C)
    for i in range(l):
        if C[i][2].is_integer() and C[i][2] > 1:  # if the power is integer >1
            for j in range(C[i][2]):
                res = res + [[C[i][0], C[i][1], 1]]  # decompose to [tn]'s
        else:
            res = res + [C[i]]
    return res

def decompose(p, C):
    # decompose dilog product C for subroutine (p), p>1
    # p>0: integer
    # C=[n']^{c'}[n]^{c} is a p-exchangeable pair
    res = []
    c0 = C[0][2]  # multiple of 1/p
    c1 = C[1][2]  # multiple of 1/p
    for i in range(p*c0):
        res = res + [C[0][:2] + [1/p]]  # decompose to [n]^{1/p}
    for i in range(p*c1):
        res = res + [C[1][:2] + [1/p]]  # decompose to [n]^{1/p}
    return res

def join(C):
    # join factors with common [n] in dilog product C
    # C has factors of [n]^{c}, len(C)>1
    res = C
    l = len(res)  # l>1
    flag_joined = False
    while not flag_joined:
        l = len(res)
        for i in range(l-1):
            if res[i][:2] == res[i+1][:2]:
                res = res[:i] + [res[i][:2] + [res[i][2]+res[i+1][2]]] + res[i+2:]
                break
            if i == l-2:  # if this is the last pair, it is joined
                flag_joined = True
    return res

def order_p_partial(L, p, C):
    # order dilog product C up to G^{>L} by p-pentagon (i.e., {n',n}=p)
    # L,p>0: integer
    # C has factors of [n]^{1/p}, len(C)>1
    res = C
    flag_ordered_p = False
    while not flag_ordered_p:  # finish if p-ordered
        l = len(res)  # l>1
        for i in range(l-1):
            if bf(res[i][:2], res[i+1][:2]) < 0:  # if ordered, do nothing
                pass
            elif bf(res[i][:2], res[i+1][:2]) == 0:  # if parallel
                if res[i][0]+res[i][1] > res[i+1][0]+res[i+1][1]:
                    # if deg(left)>deg(right), commute it
                    res = res[:i] + [res[i+1]] + [res[i]] + res[i+2:]
                    break
            elif bf(res[i][:2], res[i+1][:2]) > 0:  # if anti-ordered
                if res[i][0]+res[i][1]+res[i+1][0]+res[i+1][1] > L:
                    # if deg(n+n')>L, commute it
                    res = res[:i] + [res[i+1]] + [res[i]] + res[i+2:]
                    break
                elif bf(res[i][:2], res[i+1][:2]) == p:
                    # else if {n',n}=p, apply p-pentagon relation
                    if res[i][2] == 1/p and res[i+1][2] == 1/p:
                        res = res[:i] + [res[i+1]] + [[res[i][0]+res[i+1][0],
                              res[i][1]+res[i+1][1], 1/p], res[i]] + res[i+2:]
                        break
            if i == l-2:  # if we reach the last pair, it is p-ordered
                flag_ordered_p = True
    return res

def order_p(L, p, C):
    # subroutine (p); recursively defined
    # L,p>0: integer
    # for p=1, C has factors [tn]^{s/t} (n: primitive vector, s,t>0 integers)
    # for p>1, C has factors of [n]^{1/p}
    # len(C)>1
    res = C
    while not check_ordered(res):  # do until completely ordered
        if p == 1:  # only in the main routine (1)
            res = decompose_initial(res)  # decompose to [n]
        res = order_p_partial(L, p, res)  # p-order the product
        l = len(res)  # l>1
        for i in range(l-1):
            if bf(res[i][:2], res[i+1][:2]) > 0:  # find q-admissible pair
                q = bf(res[i][:2], res[i+1][:2])
                Cin = decompose(q, [res[i], res[i+1]])  # decompose to [n]^{1/q}
                Cout = order_p(L, q, Cin)  # go to subroutine (q) (recursively)
                res = res[:i] + Cout + res[i+2:]
                break
        res = join(res)
    return res

def order(L, C):
    """L: integer > 0, C: list of [n1,n2,c];
    n1,n2: integers >= 0, c: rational > 0 such that c*gcd(n1,n2) = integer > 0"""
    # main routine (1)
    # L>0: integer
    # C has factors [tn]^{s/t} (n: primitive positive vector, s,t>0 integers)
    if not (L > 0 and L.is_integer()):  # check L
        return print("the first argument is illegal")
    l = len(C)  # l>=1
    for i in range(l):
        n1 = C[i][0]
        n2 = C[i][1]
        if not (n1.is_integer() and n2.is_integer()
                and n1 >= 0 and n2 >= 0 and n1+n2 > 0):  # check (n1,n2)
            return print("the second argument is illegal")
        s = C[i][2]*math.gcd(n1, n2)
        if not (s > 0 and s.is_integer()):  # check c
            return print("the second argument is illegal")
    if l == 1:  # for a single factor, just return
        return C
    res = order_p(L, 1, C)  # enter the main routine (1)
    return res
```

## References

* [Bri17] T. Bridgeland, _Scattering diagrams, Hall algebras and stability conditions_, Algebraic Geometry 4 (2017), 523–561; arXiv:1603.00416 [math.AG].
* [CI12] G. Cerulli Irelli, _Cluster algebras of type $A_{2}^{(1)}$_, Algebras and Representation Theory 15 (2012), 977–1021; arXiv:0904.2543 [math.RA].
* [CPS] M. Carl, M. Pumperla, and B. Siebert, _A tropical view of Landau-Ginzburg models_, preprint, 2010, available at https://www.math.uni-hamburg.de/home/siebert/preprints/LGtrop.pdf.
* [DM21] B. Davison and T. Mandel, _Strong positivity for quantum theta bases of quantum cluster algebras_, Invent. Math. (2021), published online; arXiv:1910.12915 [math.RT].
* [FG09] V. V. Fock and A. B. Goncharov, _Cluster ensembles, quantization and the dilogarithm_, Annales Sci. de l'École Norm. Sup. 42 (2009), 865–930; arXiv:math/0311245 [math.AG].
* [FG16] V. Fock and A. Goncharov, _Cluster poisson varieties at infinity_, Selecta Math. (N.S.) 22 (2016), 2569–2589; arXiv:1104.0407 [math.AG].
* [FZ02] S. Fomin and A. Zelevinsky, _Cluster algebras I. Foundations_, J. Amer. Math. Soc. 15 (2002), 497–529 (electronic); arXiv:math/0104151 [math.RT].
* [FZ03a] , _Cluster algebras II. Finite type classification_, Invent. Math. 154 (2003), 63–121; arXiv:math/0208229 [math.RA].
* [FZ03b] , _Y-systems and generalized associahedra_, Ann. of Math. 158 (2003), 977–1018; arXiv:hep-th/0111053.
* [FZ07] , _Cluster algebras IV. Coefficients_, Compositio Mathematica 143 (2007), 112–164; arXiv:math/0602259 [math.RT].
* [GHK15] M. Gross, P. Hacking, and S. Keel, _Birational geometry of cluster algebras_, Algebr. Geom. 2 (2015), 137–175; arXiv:1309.2573 [math.AG].
* [GHKK18] M. Gross, P. Hacking, S. Keel, and M. Kontsevich, _Canonical bases for cluster algebras_, J. Amer. Math. Soc. 31 (2018), 497–608; arXiv:1411.1394 [math.AG].
* [GPS10] M. Gross, R. Pandharipande, and B. Siebert, _The tropical vertex_, Duke Math. J. 153 (2010), 297–362; arXiv:0902.0779 [math.AG].
* [Gro11] M. Gross, _Tropical geometry and mirror symmetry_, CBMS Regional Conf. Ser. in Math., no. 114, Amer. Math. Soc., 2011.
* [GS11] M. Gross and B. Siebert, _From affine geometry to complex geometry_, Annals of Math. 174 (2011), 95–138; arXiv:math/0703822.
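As a quick consistency check of the listing (this example is our addition, not part of the original code), the smallest instance of the pentagon relation (Proposition 0.1.14) is recovered on a Sage Notebook:

In: order(3,[[0,1,1],[1,0,1]])
Out: [[1, 0, 1], [1, 1, 1], [0, 1, 1]]

That is, ordering the anti-ordered product $\Psi[e_{2}]\Psi[e_{1}]$ yields the three factors with $n=e_{1}$, $e_{1}+e_{2}$, $e_{2}$, each with exponent $1$, in agreement with the pentagon relation.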
* [IOTW] K. Igusa, K. Orr, G. Todorov, and J. Weyman, _Modulated semi-invariants_, arXiv:1507.03051 [math.RT].
* [IT21] K. Igusa and G. Todorov, _Picture groups and maximal green sequences_, Electronic Research Archive 29 (2021), 3031–3068; arXiv:2007.14584 [math.RT].
* [ITW] K. Igusa, G. Todorov, and J. Weyman, _Picture groups of finite type and cohomology in type $A_{n}$_, arXiv:1609.02636 [math.RT].
* [Jac79] N. Jacobson, _Lie algebras_, Dover Publications, New York, 1979.
* [KS06] M. Kontsevich and Y. Soibelman, _Affine structures and non-Archimedean analytic spaces_, Prog. Math. 244 (2006), 321–385; arXiv:math/0406564.
* [KS08] , _Stability structures, motivic Donaldson-Thomas invariants and cluster transformations_, 2008, arXiv:0811.2435 [math.AG].
* [KS14] , _Wall-crossing structures in Donaldson-Thomas invariants, integrable systems and mirror symmetry_, Homological mirror symmetry and tropical geometry, Lect. Notes Unione Ital., vol. 15, Springer, 2014, pp. 197–308; arXiv:1303.3253 [math.AG].
* [Lew81] L. Lewin, _Polylogarithms and associated functions_, North-Holland, Amsterdam, 1981.
* [Mag54] W. Magnus, _On the exponential solution of differential equations for a linear operator_, Commun. Pure Appl. Math. 7 (1954), 649–673.
* [Mat21] K. Matsushita, _Consistency relations of rank 2 cluster scattering diagrams of affine type and the pentagon relation_, 2021, arXiv:2112.04743 [math.QA].
* [Mul16] G. Muller, _The existence of a maximal green sequence is not invariant under quiver mutation_, Electron. J. Combin. 23 (2016), #P2.47, 23 pages; arXiv:1503.04675.
* [Nak21] T. Nakanishi, _Synchronicity phenomenon in cluster patterns_, J. London Math. Soc. 103 (2021), 1120–1152; arXiv:1906.12036 [math.RA].
* [NC13] A. Nájera Chávez, _On the c-vectors and g-vectors of the Markov cluster algebra_, Séminaire Lotharingien de Combinatoire 69 (2013), B29d; arXiv:1112.2357.
* [Rea] N. Reading, _Dominance phenomena: mutation, scattering, and cluster algebras_, arXiv:1802.10107 [math.CO].
* [Rea14] , _Universal geometric cluster algebras_, Mathematische Zeitschrift 277 (2014), 499–547; arXiv:1209.3987.
* [Rea20a] , _A combinatorial approach to scattering diagrams_, Algebraic Combinatorics 3 (2020), 603–636; arXiv:1806.05094 [math.CO].
* [Rea20b] , _Scattering fans_, Int. Math. Res. Notices 23 (2020), 9640–9673; arXiv:1712.06968.
* [Rei09] M. Reineke, _Poisson automorphisms and quiver moduli_, J. Inst. Math. Jussieu 9 (2009), 653–667; arXiv:0804.3214 [math.RT].

## Index

* admissible curve Definition 0.2.5
* admissible region §0.6.5
* $B$-equivalent Definition 0.1.22
* Badlands §0.3.5, 5th item
* Baker-Campbell-Hausdorff (BCH) formula §0.1.2
* broken line Definition 0.7.1
* chamber item (a).
* $G$- Definition 0.6.23
* cluster element Definition 0.3.13
* cluster scattering diagram (CSD) Definition 0.3.11
* cone Definition 0.2.1
* convex rational polyhedral Definition 0.2.1
* $G$- Definition 0.6.23
* positive/negative §0.2.4
* strongly convex Definition 0.2.1
* decomposition of $G$
* at $z$ §0.2.3
* by $e_{i}$ §0.3.4
* by $n$ §0.3.1
* degree function §0.1.1
* dilogarithm, see Euler dilogarithm
* dilogarithm element Definition 0.1.10
* Euler dilogarithm §0.1.4
* fan
* $G$- §0.6.6
* finiteness condition Definition 0.2.3
* fixed data Definition 0.1.1
* nondegenerate Definition 0.1.1
* $G$-chamber Definition 0.6.23
* $G$-cone Definition 0.6.23
* $G$-fan §0.6.6
* general point (in $M_{\mathbb{R}}$) Definition 0.2.8
* general position Definition 0.7.2
* intersection sign Definition 0.2.6
* joint Definition 0.5.13
* parallel Definition 0.5.13
* perpendicular Definition 0.5.13
* matrix
* exchange §0.1.1
* skew-symmetrizable §0.1.1
* mutation
* of a cluster scattering diagram Definition 0.6.7
* seed (for a fixed data) Definition 0.6.1
* mutation invariance
* of a cluster scattering diagram Theorem 0.6.9
* of theta functions Proposition 0.7.10
* normal vector (of a wall) Definition 0.2.2
* normalization factor Definition 0.1.18
* normalized automorphism Definition 0.4.9
* normalized form §0.1.5
* ordered/anti-ordered §0.3.5, §0.5.2
* Ordering Algorithm Algorithm 0.5.7
* source code §0.8
* Ordering Lemma Proposition 0.5.4
* $p$-exchangeable pair Definition 0.5.5
* parallel subgroup Definition 0.1.5
* path-ordered product Definition 0.2.6
* pentagon relation Proposition 0.1.14
* positive element §0.1.1
* positive realization §0.5.1
* positivity (of theta functions) Theorem 0.7.9
* primitive §0.1.1
* principal extension
* of a fixed data Definition 0.4.1
* principal $x$-representation §0.4.2
* reduction (of a scattering diagram) Definition 0.2.3
* rescaling (of a fixed data) Definition 0.1.16
* scattering diagram Definition 0.2.3
* cluster Definition 0.3.11
* consistent Definition 0.2.13
* equivalent Definition 0.2.7
* extended Definition 0.6.14
* trivial Definition 0.2.12
* with minimal support Definition 0.2.23
* seed
* for a fixed data Definition 0.1.2
* singular locus (of a scattering diagram) Definition 0.2.4
* structure group §0.1.2
* extended §0.6.2
* support
* of a scattering diagram Definition 0.2.4
* of a wall Definition 0.2.2
* theta function Definition 0.7.3
* total wall element Definition 0.2.10
* wall Definition 0.2.2
* attached to a joint Construction 0.5.14
* incoming Definition 0.3.9
* outgoing Definition 0.3.9
* unreachable Example 0.6.25
* wall element Definition 0.2.2
* wall function Definition 0.4.10
* $x$-representation §0.1.3
* principal §0.4.2
* $x$-variable (cluster variable) §0.1.1
* $y$-representation §0.1.3
* $y$-variable (coefficient) §0.1.1
* Zassenhaus formula §0.1.2
# Leveraging knowledge graphs to update scientific word embeddings using latent semantic imputation

Jason Hoelscher-Obermaier*, Edward Stevinson*, Valentin Stauber, Ivaylo Zhelev, Victor Botev, Ronin Wu†, Jeremy Minton†

Iris AI, Bekkestua, Norway

<EMAIL_ADDRESS>

(* Co-first authors; † Co-PIs)

###### Abstract

The most interesting words in scientific texts will often be novel or rare. This presents a challenge for scientific word embedding models to determine quality embedding vectors for useful terms that are infrequent or newly emerging. We demonstrate how latent semantic imputation (LSI) can address this problem by imputing embeddings for domain-specific words from up-to-date knowledge graphs while otherwise preserving the original word embedding model. We use the Medical Subject Headings (MeSH) knowledge graph to impute embedding vectors for biomedical terminology without retraining and evaluate the resulting embedding model on a domain-specific word-pair similarity task. We show that LSI can produce reliable embedding vectors for rare and out of vocabulary (OOV) terms in the biomedical domain.

## 1 Introduction

Word embeddings are powerful representations of the semantic and syntactic properties of words that facilitate high performance in natural language processing (NLP) tasks. Because these models completely rely on a training corpus, they can struggle to reliably represent words which are infrequent, or missing entirely, in that corpus. The latter will happen for any new terminology emerging after training is complete.

Rapid emergence of new terminology and a long tail of highly significant but rare words are characteristic of technical domains, but these terms are often of particular importance to NLP tasks within these domains. This drives a need for methods to generate reliable embeddings of rare and novel words. At the same time, there are efforts in many scientific fields to construct large, highly specific and continuously updated knowledge graphs that capture information about these exact terms. Can we leverage these knowledge graphs to mitigate the shortcomings of word embeddings on rare, novel and domain-specific words?

We investigate one method for achieving this information transfer, latent semantic imputation (LSI) (Yao et al., 2019). In LSI the embedding vector for a given word, $w$, is imputed as a weighted average of existing embedding vectors, where the weights are inferred from the local neighborhood structure of a corresponding embedding vector, $\mathbf{w}_{d}$, in a domain-specific embedding space. We study how to apply LSI in the context of the biomedical domain using the Medical Subject Headings (MeSH) knowledge graph (Lipscomb, 2000), but expect the methodology to be applicable to other scientific domains.

## 2 Related work

Embeddings for rare/out of vocabulary (OOV) words. Early methods for embedding rare words relied on explicitly provided morphological information (Alexandrescu and Kirchhoff, 2006; Sak et al., 2010; Lazaridou et al., 2013; Botha and Blunsom, 2014; Luong and Manning, 2016; Qiu et al., 2014). More recent approaches avoid dependence on explicit morphological information by learning representations for fixed-length character n-grams that do not have a direct linguistic interpretation (Bojanowski et al., 2017; Zhao et al., 2018).
Alternatively, the subword structure used for generalization beyond a fixed vocabulary can be learnt from data using techniques such as byte-pair encoding (Sennrich et al., 2016; Gage, 1994) or the WordPiece algorithm (Schuster and Nakajima, 2012). Embeddings for arbitrary strings can also be generated using character-level recurrent networks (Ling et al., 2015; Xie et al., 2016; Pinter et al., 2017). These approaches, as well as the transformer-based methods mentioned below, provide some OOV generalization capability but are unlikely to be a general solution since they will struggle with novel terms whose meaning is not implicit in the subword structure, such as eponyms. Note that we experimented with fastText and it performed worse than our approach.

Word embeddings for the biomedical domain. Much research has focused on how to best generate biomedical-specific embeddings and provide models to improve performance on downstream NLP tasks (Major et al., 2018; Pyysalo et al., 2013; Chiu et al., 2016; Zhang et al., 2019). Work in the biomedical domain has investigated optimal hyperparameters for embedding training (Chiu et al., 2016), the influence of the training corpus (Pakhomov et al., 2016; Wang et al., 2018; Lai et al., 2016), and the advantage of subword-based embeddings (Zhang et al., 2019). Word embeddings for clinical applications have been proposed (Ghosh et al., 2016; Fan et al., 2019) and an overview was provided in Kalyan and Sangeetha (2020). More recently, transformer models have been successfully adapted to the biomedical domain, yielding contextual, domain-specific embedding models (Peng et al., 2019; Lee et al., 2019; Beltagy et al., 2019; Phan et al., 2021). Whilst these works highlight the benefits of domain-specific training corpora, this class of approaches requires retraining to address the OOV problem.

Improving word embeddings using domain information. Our task requires improving a provided embedding model for a given domain, without detrimental effects on other domains. Zhang et al. (2019) use random walks over the MeSH headings knowledge graph to generate additional training text to be used during the word embedding training. Similar ideas have led to regularization terms that leverage an existing embedding during training on a new corpus, so that information from the original embedding is preserved (Yang et al., 2017). Of course, these methods require the complete training of one or more embedding models. Faruqui et al. (2014) achieve a similar result more efficiently by defining a convex objective function that balances preserving an existing embedding with decreasing the distance between related vectors, based on external data sources such as a lexicon; the form of this objective is sketched below.
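For context, the retrofitting objective of Faruqui et al. (2014) can be written as follows (notation ours): $\hat{q}_{i}$ are the original vectors, $q_{i}$ the retrofitted ones, $E$ the lexicon-derived edge set, and $\alpha_{i},\beta_{ij}$ balancing weights.

$$\Psi(Q)=\sum_{i=1}^{n}\Bigl[\alpha_{i}\,\lVert q_{i}-\hat{q}_{i}\rVert^{2}+\sum_{(i,j)\in E}\beta_{ij}\,\lVert q_{i}-q_{j}\rVert^{2}\Bigr]$$

Setting $\alpha_{i}=0$ for a word without an original vector makes its optimum a weighted average of its lexicon neighbors, which is why the method cannot genuinely introduce new vocabulary.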
This technique has been applied in the biomedical domain (Yu et al., 2016, 2017), but has limited ability to infer new vocabulary because, without the contribution from the original embedding, this reduces to an average of related vectors. Another approach is to extend the embedding dimension to create space for encoding new information. This can be as simple as vector concatenation from another embedding (Yang et al., 2017), possibly followed by dimensionality reduction (Shalaby et al., 2018). Alternatively, new dimensions can be derived from existing vectors based on external information like synonym pairs (Jo and Choi, 2018). Again, this has limited ability to infer new vocabulary.

All of these methods change the original embedding, which limits applicability in use cases where the original embedding quality must be retained or where incremental updates from many domains are required. The optimal alignment of two partially overlapping word embedding spaces has been studied in the literature on multilingual word embeddings (Nakashole and Flauger, 2017; Jawanpuria et al., 2019; Alaux et al., 2019) and provides a mechanism to patch an existing embedding with information from a domain-specific embedding. Unfortunately, it assumes the embedding spaces have the same structure, meaning it is not suitable when the two embeddings encode different types of information, such as semantic information from text and relational information from a knowledge base.

## 3 Latent Semantic Imputation

LSI, the approach pursued in this paper, represents embedding vectors for new words as weighted averages over existing word embedding vectors, with the weights derived from a domain-specific feature matrix (Yao et al., 2019). This process draws insights from Locally Linear Embedding (Roweis and Saul, 2000). Specifically, a local neighborhood in a high-dimensional word embedding space $E_{s}$ ($s$ for semantic) can be approximated by a lower-dimensional manifold embedded in that space. Hence, an embedding vector $\mathbf{w}_{s}$ for a word $w$ in that local neighborhood can be approximated as a weighted average over a small number of neighboring vectors. This would allow us to construct a vector for a new word $w$ if we could determine the weights of the average over neighboring terms. But since, by assumption, we do not know $w$'s word embedding vector $\mathbf{w}_{s}$, we also do not know its neighborhood in $E_{s}$.

The main insight of LSI is that we can use the local neighborhood of $w$'s embedding $\mathbf{w}_{d}$ in a domain-specific space, $E_{d}$, as a proxy for that neighborhood in the semantic space of our word-embedding model, $E_{s}$. The weights used for constructing an embedding for $w$ in $E_{s}$ are calculated from the domain space as shown in Fig. 1: a k-nearest-neighbors minimum-spanning-tree (kNN-MST) is built from the domain space features. Then the L2-distance between $\mathbf{w}_{d}$ and a weighted average over its neighbors in the kNN-MST is minimized using non-negative least squares. The resulting weights are used to impute the missing embedding vectors in $E_{s}$ using the power iteration method. This procedure crucially relies on the existence of words with good representations in both $E_{s}$ and $E_{d}$, referred to as anchor terms, which serve as data from which the positions of the derived embedding vectors are constructed.

Figure 1: Latent Semantic Imputation. $\mathbb{R}^{d}$ is the domain space and $\mathbb{R}^{s}$ is the semantic space.
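To make the procedure concrete, the following is a minimal sketch of the imputation step. It is our illustration rather than the reference implementation of Yao et al. (2019): function and variable names are ours, the kNN-MST neighborhoods are assumed to be precomputed (`neighbors[w]` lists the neighbors of $w$), and a fixed iteration count stands in for the $\eta$-based stopping criterion.

```python
import numpy as np
from scipy.optimize import nnls

def imputation_weights(target, neighbor_matrix):
    """Non-negative least-squares weights w >= 0 minimizing
    ||neighbor_matrix.T @ w - target||_2, normalized to sum to one."""
    w, _ = nnls(neighbor_matrix.T, target)
    total = w.sum()
    return w / total if total > 0 else w

def impute_embeddings(oov_words, neighbors, domain_vecs, semantic_vecs, n_iter=200):
    """Power-iteration-style imputation: each unknown semantic vector is
    repeatedly replaced by the weighted average of its neighbors' vectors.
    (The paper stops once updates fall below eta = 1e-4; a fixed iteration
    count is used here for simplicity.)"""
    dim = len(next(iter(semantic_vecs.values())))
    est = {w: np.zeros(dim) for w in oov_words}  # initial guesses
    # Weights come from the *domain* space (node embeddings), per Fig. 1.
    weights = {
        w: imputation_weights(domain_vecs[w],
                              np.stack([domain_vecs[n] for n in neighbors[w]]))
        for w in oov_words
    }
    for _ in range(n_iter):
        for w in oov_words:
            vecs = np.stack([semantic_vecs.get(n, est.get(n, np.zeros(dim)))
                             for n in neighbors[w]])
            est[w] = weights[w] @ vecs  # weighted average in the semantic space
    return est
```

Anchor terms enter through `semantic_vecs`: wherever a neighbor already has a trained semantic vector, that fixed vector anchors the iteration, which is what makes the procedure converge to meaningful positions.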
## 4 Methodology

We extend the original LSI procedure described above in a few key ways. Instead of using a numeric data matrix as the domain data source of LSI, we use a node embedding model trained on a domain-specific knowledge graph to obtain $E_{d}$. As knowledge graphs are used as a source of structured information in many fields, we expect our method to be applicable to many scientific domains. Knowledge graphs are prevalent in scientific fields as they serve as a means to organise and store scientific data, as well as to aid downstream tasks such as reasoning and exploration. Their structure and ability to represent different relationship types make it relatively easy to integrate new data, meaning they can evolve to reflect changes in a field as new data becomes available.

We use the 2021 RDF dump of the MeSH knowledge graph (available at https://id.nlm.nih.gov/mesh/). The complete graph consists of 2,327,188 nodes and 4,272,681 edges, which we reduce into a simpler, smaller, and undirected graph to be fed into a node embedding algorithm. We extract a subgraph consisting solely of the nodes of type "ns0__TopicalDescriptor" and the nodes of type "ns0__Concept" that are directly connected to the topical descriptors via any relationship type. The relationship types and directionality were removed. This results in 58,695 nodes and 113,094 edges. We use the node2vec graph embedding algorithm (Grover and Leskovec, 2016) on this subgraph to produce an embedding matrix of 58,695 vectors with dimension 200 (orange squares in Fig. 2). The hyperparameters are given in Appendix 8.1. These node embeddings form the domain-specific space, $E_{d}$, as described in the previous section. We note that in preliminary experiments, the adjacency matrix of the knowledge graph was used directly as $E_{d}$, but this yielded imputed embeddings that performed poorly.

To provide the mapping between the MeSH nodes and the word embedding vocabulary, we normalize the human-readable "rdfs__label" node property by replacing spaces with hyphens and lower-casing. The anchor terms are then identified as the normalized words that match between the graph labels and the vocabulary of the word-embedding model, resulting in 12,676 anchor terms. As an example, "alpha-2-hs-glycoprotein" appears as both a node in the reduced graph and in the word-embedding model, along with its neighbors in the kNN-MST, which include "neoglycoproteins" and "alpha-2-antiplasmin". During the LSI procedure, these anchor terms stabilise the positions of the imputed word embedding vectors for domain-space nodes that have no corresponding representation in the semantic space.

LSI has one key hyper-parameter: the minimal degree of the kNN-MST graph, $k$. The stopping criterion of the power iteration method is controlled by another parameter, $\eta$, but any sufficiently small value should allow adequate convergence and have minimal impact on the resulting vectors. Following Yao et al. (2019) we set $\eta=10^{-4}$, but we use a larger $k=50$ since initial experiments showed better performance for larger values of $k$.

Figure 2: Extended latent semantic imputation pipeline. A knowledge graph is simplified to a smaller, undirected graph. This is used to derive the node embedding model used in LSI (see Fig. 1) to impute missing terms in the semantic space.
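The graph reduction and anchor-term matching can be sketched as follows. This is our reading of the procedure: the data structures and helper names are assumptions, and only the type strings and the normalization rule are taken from the description above.

```python
import networkx as nx

def normalize_label(label):
    """Match MeSH 'rdfs__label' values against the embedding vocabulary:
    lower-case and replace spaces with hyphens."""
    return label.lower().replace(" ", "-")

def reduce_mesh_graph(edges, node_types):
    """Keep TopicalDescriptor nodes plus Concept nodes adjacent to them;
    drop relation types and direction. `edges` is an iterable of
    (source, target) pairs, `node_types` maps node -> type string."""
    descriptors = {n for n, t in node_types.items()
                   if t == "ns0__TopicalDescriptor"}
    g = nx.Graph()  # undirected, untyped
    for u, v in edges:
        u_desc, v_desc = u in descriptors, v in descriptors
        u_conc = node_types.get(u) == "ns0__Concept"
        v_conc = node_types.get(v) == "ns0__Concept"
        if (u_desc and (v_desc or v_conc)) or (v_desc and u_conc):
            g.add_edge(u, v)
    return g

def anchor_terms(node_labels, vocab):
    """Anchors: normalized node labels that also occur in the word-embedding
    vocabulary (12,676 terms in the setup described above)."""
    vocab = set(vocab)
    return {node: normalize_label(lbl) for node, lbl in node_labels.items()
            if normalize_label(lbl) in vocab}
```

The reduced graph is then passed to node2vec (hyperparameters in Appendix 8.1) to produce the 200-dimensional domain space $E_{d}$.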
## 5 Experiments

We aim to answer two questions to evaluate our imputation approach: Do the imputed embeddings encode semantic similarity and relatedness information as judged by domain experts? And, can the imputed embeddings be reliably used alongside the original, non-imputed word embeddings? We use the UMNSRS dataset to answer these questions (Pakhomov et al., 2010). It is a collection of medical word-pairs annotated with a relatedness and similarity score by healthcare professionals, such as medical coders and clinicians; some examples are shown in Table 1. For each word-pair we calculate the cosine similarity between the corresponding word embedding vectors and report the Pearson correlation between these cosine similarities and the human scores.

| Term1 | Term2 | Similarity | Relatedness |
|---|---|---|---|
| Acetylcysteine | Adenosine | 256.25 | 586.50 |
| Anemia | Coumadin | 623.75 | 926.50 |
| Rales | Lasix | 742.00 | 1379.50 |
| Tuberculosis | Hemoptysis | 789.50 | 1338.50 |

Table 1: Examples of UMNSRS word pairs. Scores range from 0 to 1600 (larger = more similar/related).

To obtain additional insight into the performance of the imputation procedure, we split the words in the UMNSRS dataset into two groups of roughly the same size: one group of words (trained) which we train directly as part of the word embedding training, and another group of words (imputed) which we obtain via imputation. This split results in three word-pair subsets that contain imputed/imputed word pairs, trained/trained word pairs, and imputed/trained word pairs. Note that due to an incomplete overlap of the UMNSRS test vocabulary with both the MeSH node labels and our word embedding vocabulary, we cannot evaluate on every word pair in UMNSRS (see Table 4 for more details). Applying the UMNSRS evaluation to these three groups of word pairs, we aim to measure the extent to which the imputation procedure encodes domain-specific semantic information.

For word embedding training we prepare a corpus of 74.4M sentences from open access publications on PubMed (from https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/; accessed on 2021-08-30). To simulate the problem of missing words as realistically as possible, we then prepare a filtered version of this corpus by removing any sentence containing one of the imputed terms (in either singular or plural form). This filtering removes 2.36M of the 74.4M sentences (3.2%). We then train 200-dimensional skip-gram word embedding models on both the full and the filtered version of the training corpus. In addition, we also train fastText embeddings (Bojanowski et al., 2017) on both the full and the filtered corpus. For details on the hyper-parameters see Appendix 8.2. Since fastText, which represents words as n-grams of their constituent characters, has been shown to give reasonable embedding vectors for words which are rare or missing in the training corpus, it represents a suitable baseline to which we can compare our imputation procedure. We check that the embedding models (both skip-gram and fastText) trained on the filtered corpus perform roughly on par with those trained on the full corpus when evaluated using the trained/trained subset of the UMNSRS test data. We also check that the skip-gram model trained on the full corpus performs comparably to the BioWordVec model (Zhang et al., 2019) across all subsets of UMNSRS. See Appendix 8.3 for details.

LSI is a means of leveraging the domain space to create OOV embedding vectors. As a simple alternative baseline, we directly use the domain space embeddings for the OOV words. We need to align the domain space onto the semantic space, which we do with a rotation matrix derived from the anchor term embeddings in the two spaces via singular value decomposition.
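This alignment is the standard orthogonal Procrustes construction; a minimal sketch, where `domain_anchors` and `semantic_anchors` are assumed to be row-aligned matrices holding the anchor-term vectors in the two (equal-dimensional) spaces:

```python
import numpy as np

def procrustes_rotation(domain_anchors, semantic_anchors):
    """Orthogonal matrix R minimizing ||domain_anchors @ R - semantic_anchors||_F,
    obtained from the SVD of the anchor-term cross-covariance."""
    u, _, vt = np.linalg.svd(domain_anchors.T @ semantic_anchors)
    return u @ vt

# Baseline embedding for an OOV word: rotate its node2vec vector into the
# word-embedding space (variable names are ours).
# w_baseline = domain_vec @ procrustes_rotation(domain_anchors, semantic_anchors)
```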
### 5.1 Results

The main results are displayed in Fig. 3, which shows the Pearson correlation between cosine similarities and human annotator scores for UMNSRS similarity and relatedness. The error bars are standard deviations across 1,000 bootstrap resamples of the test dataset. From left to right we show results for the trained/trained, imputed/trained, and imputed/imputed subsets. We compare two models trained on the filtered corpus (which does not contain any mentions of the imputed words): a skip-gram model extended by LSI and a fastText model. For reference we also show the correlation strengths obtained when directly using the MeSH node embeddings, which form the basis of the imputation. Note that for this last model, the test cases we evaluate are different, since the MeSH model cannot represent all word pairs in UMNSRS (see Appendix 8.3 for details). Uncertainties on the MeSH model are high for the trained/trained subset due to the limited overlap of the MeSH model with the words in the trained subset (see Table 4). In Fig. 3 the imputed/trained group also includes the performance of the simple baseline, Skip-gram (filtered) + MeSH, formed of a mixture of aligned embeddings. We do not show the performance of this baseline on the other two groups since, by construction, it is identical to that of Skip-gram (filtered) + LSI for trained/trained and that of MeSH node2vec for imputed/imputed.

Figure 3: Correlation with UMNSRS scores; (a) UMNSRS similarity, (b) UMNSRS relatedness.

Three things stand out:

1. The LSI-based model is competitive on novel vocabulary: it performs significantly better than the fastText model on word pairs containing only imputed terms (imputed/imputed) and modestly better on mixed word pairs (imputed/trained). It also outperforms the simple but surprisingly strong baseline, Skip-gram (filtered) + MeSH.
2. There is a significant difference in Pearson correlation between the different word pair categories. Note that the same trend in correlation across word pair categories can be seen in the word embedding model trained on the full corpus without imputation (see Fig. 6).
3. The LSI-based model obtains better scores than the underlying MeSH node embeddings across most categories. This proves that the similarity and relatedness information directly encoded in the domain embedding does not limit the similarity and relatedness information encoded in the resulting imputed model.

### 5.2 Discussion

In this paper we use a significantly larger subset of the MeSH graph compared to related work on MeSH-based embeddings (Guo et al., 2021; Zhang et al., 2019) by including more than just the topical descriptor nodes. Using a larger graph for the imputation allows us to impute a wider variety of words and evaluate the imputation procedure on a larger subset of UMNSRS. The graph we use for imputation is also much larger than the domain data used in previous work on LSI (Yao et al., 2019). This shows that LSI can apply to knowledge graphs and scale to larger domain spaces, which is crucial for real-world applications.

We observe that the UMNSRS similarity and relatedness correlations of the MeSH node embedding models do not constitute an upper bound on the correlations obtained for the imputed word embeddings. This is intuitively plausible since LSI combines the global structure of the trained word embedding vectors with the local structure of the domain embeddings. This is in contrast to the original LSI paper, in which the domain data alone was sufficient to obtain near perfect scores on the evaluation task and, as such, could have been used directly, obviating the need for LSI. This observation reduces the pressure for an optimal knowledge graph and associated embedding, although a systematic search for better subgraphs to use is likely to yield improved imputation results.

It is also of note that most of the trends displayed by the LSI model hold for both the similarity and relatedness scores, despite these being distinctly separate concepts.
Relatedness is a more general measure of association between two terms whilst similarity is a narrower concept tied to their likeness. This might not be the case if the graph construction had been limited to particular relationship types or if direction of the relations had been retained.

There are noteworthy differences between our experiment and the use cases we envisage for LSI. The words we impute in our experiment are taken from the constituent words of the UMNSRS word pairs rather than being solely defined by training corpus statistics. This is a necessary limitation of our evaluation methodology. It remains a question for further research to establish ways of evaluating embedding quality on a larger variety of OOV words and use this for a broader analysis of the performance of LSI.

## 6 Strengths and weaknesses of LSI

Our experiments highlight several beneficial features of LSI. It is largely independent of the nature of the domain data as long as embeddings for the domain entities can be inferred. It does not rely on retraining the word embedding and is therefore applicable to cases where retraining is not an option due to limitations in compute or because of lack of access to the training corpus. It allows word embeddings to be improved on demand for specific OOV terms, thus affording a high level of control. In particular, it allows controlled updates of word embeddings in light of new emerging research.

The current challenges we see for LSI are driven by limited research in the constituent steps of the imputation pipeline. Specifically, there is not yet a principled answer for the optimal selection of a subgraph from the full knowledge graph or the optimal choice of node embedding architecture. The answer to these may depend on the domain knowledge graph. Also, there are not yet generic solutions for quality control of LSI. This problem is likely intrinsically hard since the words which are most interesting for imputation are novel or rare and thus exactly the words for which little data is available.

## 7 Conclusion

In this paper, we show how LSI can be used to improve word embedding models for the biomedical domain using domain-specific knowledge graphs. We use an intrinsic evaluation task to demonstrate that LSI can yield good embeddings for domain-specific out of vocabulary words. We significantly extend the work of Yao et al. (2019) by showing that LSI is applicable to scientific text where problems with rare and novel words are particularly acute.

Yao et al. (2019) assumed a small number of domain entities and a numeric domain data feature matrix. This immediately yields the metric structure required to determine the nearest neighbors and minimum spanning tree graph used in LSI. We extend this to a much larger number of domain entities and to domain data which does not have an a priori metric structure but is instead given by a graph structure. We demonstrate that LSI can also work with relational domain data, thus opening up a broader range of data sources. The metric structure induced by node embeddings trained on a domain knowledge graph provides an equally good starting point for LSI. This shows that LSI is a suitable methodology for controlled updates and improvements of scientific word embedding models based on domain-specific knowledge graphs.

## 8 Future work

We see several fruitful directions for further research on LSI and would like to see LSI applied to other scientific domains, thereby testing the generalizability of our methodology.
This would also provide more insight into how the domain knowledge graph as well as the node embedding architecture impact the imputation results. The use of automatic methods for creating medical term similarity datasets (Schulz and Juric, 2020) would facilitate the creation of large-scale test sets. The UMNSRS dataset, along with the other human-annotated, biomedical word pair similarity test sets used in the literature, all consist of fewer than one thousand word pairs (Pakhomov et al., 2016, 2010; Chiu et al., 2018). The use of larger test sets would remove the aforementioned evaluation limitations.

Further research could elucidate how to best utilize the full information of the domain knowledge graph in LSI. This includes information about node and edge types, as well as literal information such as human-readable node labels and numeric node properties (such as measurement values). It also remains to be studied how to optimally choose the anchor terms (to be used in the imputation step) to maximize LSI performance.

Our methodology could also be generalized from latent semantic imputation to what might be called latent semantic information fusion, where domain information is used for incremental updates instead of outright replacement of word embedding vectors. Finally, LSI could also be extended to provide alignment between knowledge graphs and written text by using the spatial distance between imputed vectors of knowledge graph nodes and trained word embedding vectors as an alignment criterion.

## Acknowledgements

This paper was supported by the AI Chemist funding (Project ID: 309594) from the Research Council of Norway (RCN). We thank Shibo Yao for helpful input and for sharing raw data used in (Yao et al., 2019) and Dr. Zhiyong Lu and Dr. Yijia Zhang of the National Institute of Health for sharing their word embedding models. We thank the three anonymous reviewers for their careful reading and helpful comments.

## References

* Alaux et al. (2019) Jean Alaux, Edouard Grave, Marco Cuturi, and Armand Joulin. 2019. Unsupervised Hyperalignment for Multilingual Word Embeddings.
* Alexandrescu and Kirchhoff (2006) Andrei Alexandrescu and Katrin Kirchhoff. 2006. Factored Neural Language Models. In _Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers_, pages 1–4, New York City, USA. Association for Computational Linguistics.
* Beltagy et al. (2019) Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A Pretrained Language Model for Scientific Text. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 3615–3620, Hong Kong, China. Association for Computational Linguistics.
* Bojanowski et al. (2017) Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. _Transactions of the Association for Computational Linguistics_, 5:135–146.
* Botha and Blunsom (2014) Jan A. Botha and Phil Blunsom. 2014. Compositional Morphology for Word Representations and Language Modelling.
* Chiu et al. (2016) Billy Chiu, Gamal Crichton, Anna Korhonen, and Sampo Pyysalo. 2016. How to Train good Word Embeddings for Biomedical NLP. In _Proceedings of the 15th Workshop on Biomedical Natural Language Processing_, pages 166–174, Berlin, Germany. Association for Computational Linguistics.
* Chiu et al. (2018) Billy Chiu, Sampo Pyysalo, Ivan Vulić, and Anna Korhonen. 2018.
Bio-simverb and bio-simlex: wide-coverage evaluation sets of word similarity in biomedicine. _BMC Bioinformatics_, 19(1):33.
* Fan et al. (2019) Yadan Fan, Serguei Pakhomov, Reed McEwan, Wendi Zhao, Elizabeth Lindemann, and Rui Zhang. 2019. Using word embeddings to expand terminology of dietary supplements on clinical notes. _JAMIA Open_, 2(2):246–253.
* Faruqui et al. (2014) Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2014. Retrofitting word vectors to semantic lexicons. _arXiv preprint arXiv:1411.4166_.
* Gage (1994) Philip Gage. 1994. A new algorithm for data compression. _The C Users Journal archive_, 12:23–38.
* Ghosh et al. (2016) Saurav Ghosh, Prithwish Chakraborty, Emily Cohn, John S. Brownstein, and Naren Ramakrishnan. 2016. Characterizing diseases from unstructured text: A vocabulary driven word2vec approach. In _Proceedings of the 25th ACM International on Conference on Information and Knowledge Management_, CIKM '16, page 1129–1138, New York, NY, USA. Association for Computing Machinery.
* Grover and Leskovec (2016) Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks.
* Guo et al. (2021) Zhen-Hao Guo, Zhu-Hong You, De-Shuang Huang, Hai-Cheng Yi, Kai Zheng, Zhan-Heng Chen, and Yan-Bin Wang. 2021. MeSHHeading2vec: A new method for representing MeSH headings as vectors based on graph embedding algorithm. _Briefings in Bioinformatics_, 22(2):2085–2095.
* Jawanpuria et al. (2019) Pratik Jawanpuria, Arjun Balgovind, Anoop Kunchukuttan, and Bamdev Mishra. 2019. Learning Multilingual Word Embeddings in Latent Metric Space: A Geometric Approach. _Transactions of the Association for Computational Linguistics_, 7:107–120.
* Jo and Choi (2018) Hwiyeol Jo and Stanley Jungkyu Choi. 2018. Extrofitting: Enriching Word Representation and its Vector Space with Semantic Lexicons. _arXiv:1804.07946 [cs]_.
* Kalyan and Sangeetha (2020) Katikapalli Subramanyam Kalyan and S. Sangeetha. 2020. SECNLP: A survey of embeddings in clinical natural language processing. _Journal of Biomedical Informatics_, 101:103323.
* Lai et al. (2016) Siwei Lai, Kang Liu, Shizhu He, and Jun Zhao. 2016. How to generate a good word embedding. _IEEE Intelligent Systems_, 31(6):5–14.
* Lazaridou et al. (2013) Angeliki Lazaridou, Marco Marelli, Roberto Zamparelli, and Marco Baroni. 2013. Compositional-ly Derived Representations of Morphologically Complex Words in Distributional Semantics. In _Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 1517–1526, Sofia, Bulgaria. Association for Computational Linguistics.
* Lee et al. (2019) Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: A pre-trained biomedical language representation model for biomedical text mining. _Bioinformatics_, page btz682.
* Ling et al. (2015) Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramón Fermandez, Silvio Amir, Luís Marujo, and Tiago Luís. 2015. Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_, pages 1520–1530, Lisbon, Portugal. Association for Computational Linguistics.
* Lipscomb (2000) Carolyn E. Lipscomb. 2000. Medical Subject Headings (MeSH). _Bulletin of the Medical Library Association_, 88(3):265–266.
* Luong and Manning (2016) Minh-Thang Luong and Christopher D. Manning.
2016. Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 1054–1063, Berlin, Germany. Association for Computational Linguistics.
* Major et al. (2018) Vincent Major, Alisa Surkis, and Yindalon Aphinyanaphongs. 2018. Utility of General and Specific Word Embeddings for Classifying Translational Stages of Research. _AMIA Annual Symposium Proceedings_, 2018:1405–1414.
* Nakashole and Flauger (2017) Ndapandula Nakashole and Raphael Flauger. 2017. Knowledge Distillation for Bilingual Dictionary Induction. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_, pages 2497–2506, Copenhagen, Denmark. Association for Computational Linguistics.
* Pakhomov et al. (2016) Serguei V. S. Pakhomov, Greg Finley, Reed McEwan, Yan Wang, and Genevieve B. Melton. 2016. Corpus domain effects on distributional semantic modeling of medical terms. _Bioinformatics (Oxford, England)_, 32(23):3635–3644.
* Pakhomov et al. (2010) Serguei V. S. Pakhomov, Bridget T. McInnes, T. Adam, Y. Liu, Ted Pedersen, and G. Melton. 2010. Semantic Similarity and Relatedness between Clinical Terms: An Experimental Study. In _AMIA … Annual Symposium Proceedings. AMIA Symposium_.
* Peng et al. (2019) Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets.
* Phan et al. (2021) Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, and Grégoire Altan-Bonnet. 2021. SciFive: A text-to-text transformer model for biomedical literature.
* Pinter et al. (2017) Yuval Pinter, Robert Guthrie, and Jacob Eisenstein. 2017. Mimicking Word Embeddings using Subword RNNs. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_, pages 102–112, Copenhagen, Denmark. Association for Computational Linguistics.
* Pyysalo et al. (2013) Sampo Pyysalo, Filip Ginter, Hans Moen, Tapio Salakoski, and Sophia Ananiadou. 2013. Distributional Semantics Resources for Biomedical Text Processing. In _Proceedings of LBM 2013_, page 5.
* Qiu et al. (2014) Siyu Qiu, Qing Cui, Jiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Co-learning of Word Representations and Morpheme Representations. In _Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers_, pages 141–150, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.
* Roweis and Saul (2000) Sam T. Roweis and Lawrence K. Saul. 2000. Nonlinear dimensionality reduction by locally linear embedding. _Science_, 290(5500):2323–2326.
* Sak et al. (2010) Haşim Sak, Murat Saraçlar, and Tunga Güngör. 2010. Morphology-based and sub-word language modeling for Turkish speech recognition. In _2010 IEEE International Conference on Acoustics, Speech and Signal Processing_, pages 5402–5405.
* Schulz and Juric (2020) Claudia Schulz and Damir Juric. 2020. Can embeddings adequately represent medical terminology? new large-scale medical term similarity datasets have the answer! _Proceedings of the AAAI Conference on Artificial Intelligence_, 34(05):8775–8782.
* Schuster and Nakajima (2012) Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In _2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, pages 5149–5152.
* Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. _arXiv:1508.07909 [cs]_.
* Shalaby et al. (2018) W. Shalaby, Wlodek Zadrozny, and Hongxia Jin. 2018. Beyond word embeddings: Learning entity and concept representations from large scale knowledge bases. _Information Retrieval Journal_.
* Wang et al. (2018) Yanshan Wang, Sijia Liu, Naveed Afzal, Majid Rastegar-Mojarad, Liwei Wang, Feichen Shen, Paul Kingsbury, and Hongfang Liu. 2018. A comparison of word embeddings for the biomedical natural language processing. _Journal of Biomedical Informatics_, 87:12–20.
* Xie et al. (2016) Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. 2016. Representation Learning of Knowledge Graphs with Entity Descriptions.
* Yang et al. (2017) Wei Yang, Wei Lu, and Vincent Zheng. 2017. A Simple Regularization-based Algorithm for Learning Cross-Domain Word Embeddings. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_, pages 2898–2904, Copenhagen, Denmark. Association for Computational Linguistics.
* Yao et al. (2019) Shibo Yao, Dantong Yu, and Keli Xiao. 2019. Enhancing Domain Word Embedding via Latent Semantic Imputation. _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pages 557–565.
* Yu et al. (2016) Zhiguo Yu, Trevor Cohen, Byron C. Wallace, Elmer Bernstam, and Todd Johnson. 2016. Retrofitting word vectors of MeSH terms to improve semantic similarity measures. In _Proceedings of the Seventh International Workshop on Health Text Mining and Information Analysis_, pages 43–51.
* Yu et al. (2017) Zhiguo Yu, Byron C. Wallace, Todd Johnson, and Trevor Cohen. 2017. Retrofitting concept vector representations of medical concepts to improve estimates of semantic similarity and relatedness. _Studies in health technology and informatics_, 245:657.
* Zhang et al. (2019) Yijia Zhang, Qingyu Chen, Zhihao Yang, Hongfei Lin, and Zhiyong Lu. 2019. BioWordVec, improving biomedical word embeddings with subword information and MeSH. _Scientific Data_, 6(1):52.
* Zhao et al. (2018) Jinman Zhao, Sidharth Mudgal, and Yingyu Liang. 2018. Generalizing Word Embeddings using Bag of Subwords.

## Appendix

### 8.1 Hyper-parameters for MeSH node2vec

We train node2vec (https://github.com/thibaudmartinez/node2vec) embeddings with the hyperparameters shown in Table 2 from a subgraph of MeSH containing 58,695 nodes and 113,094 edges.

| Hyperparameter | Variable name | Value |
|---|---|---|
| Training epochs | epochs | 50 |
| No. of random walks | n_walks | 10 |
| Return parameter | p | 0.5 |
| Inout parameter | q | 0.5 |
| Context window | context_size | 15 |
| Dimension | dimension | 200 |

Table 2: Hyperparameters for MeSH node2vec training

### 8.2 Hyper-parameters for word embeddings

We use gensim (https://radimrehurek.com/gensim; version 4.1.2) for training skipgram and fastText word embedding models with the hyperparameters provided in Table 3. All other hyperparameters are set to the default values of the gensim implementation. For the skipgram model we use the hyperparameters from Chiu et al. (2016), which are reported to be optimal for the biomedical domain. For fastText we are not aware of literature on optimal hyperparameters for the biomedical domain, so we use the default values except for the embedding dimension, which we set to 200 to ease comparison with the skipgram model.
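For reference, this training setup amounts to a few lines of gensim. The sketch below is ours, not the authors' script: the corpus path is a placeholder, and everything else follows Table 3.

```python
from gensim.models import Word2Vec, FastText

# Placeholder: path to a tokenized corpus, one sentence per line.
corpus_file = "corpus_tokenized.txt"

# Skipgram model with the Chiu et al. (2016) hyperparameters (Table 3).
skipgram = Word2Vec(
    corpus_file=corpus_file,
    sg=1,  # skipgram rather than CBOW
    vector_size=200, window=30, negative=10,
    alpha=0.05, sample=1e-4, epochs=10,
)

# fastText model with the Table 3 hyperparameters; a single epoch is used
# because UMNSRS performance saturates after epoch 1 (see below).
fasttext = FastText(
    corpus_file=corpus_file,
    sg=1,
    vector_size=200, window=20, negative=5,
    alpha=0.025, sample=1e-3, epochs=1,
)
```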
We trained the fastText models for 10 epochs but found that the performance of the fastText model on UMNSRS saturates after epoch 1. We use the fastText model after the first epoch for the remainder of our experiments and analysis.

| Variable name | fastText | skipgram |
|---|---|---|
| epochs | 1 | 10 |
| negative | 5 | 10 |
| vector_size | 200 | 200 |
| alpha | 0.025 | 0.05 |
| sample | 1E-03 | 1E-04 |
| window | 20 | 30 |

Table 3: Hyperparameters for skipgram and fastText training. See the gensim documentation for the definition of the hyperparameters.

Figure 4: UMNSRS correlations for skipgram models ((a) similarity, (b) relatedness).

Figure 5: UMNSRS correlations for fastText models ((c) similarity, (d) relatedness).

Figure 6: UMNSRS correlations for BioWordVec ((a) similarity, (b) relatedness).

| Model | relatedness: trained/trained | relatedness: imputed/trained | relatedness: imputed/imputed | similarity: trained/trained | similarity: imputed/trained | similarity: imputed/imputed |
|---|---|---|---|---|---|---|
| MeSH node2vec | 28 | 70 | 133 | 30 | 72 | 135 |
| all other models | 83 | 99 | 124 | 84 | 101 | 126 |

Table 4: Number of test cases per model and test set split for UMNSRS evaluation ("relatedness" and "similarity" refer to the UMNSRS relatedness and similarity test sets).

### 8.3 Details on the UMNSRS evaluation

Table 4 shows the number of test cases per model and UMNSRS test data split. All models have been evaluated on the same subsets of UMNSRS, except for the MeSH node embeddings model, where limited overlap with the UMNSRS test vocabulary prevents us from evaluating on exactly the same subsets.

The embedding models (both skip-gram and fastText) trained on the filtered corpus perform roughly on par with those trained on the full corpus when evaluated using the trained/trained subset of the UMNSRS test data (see Figs. 4 and 5). When comparing the performance of the filtered skipgram model + LSI to the full skipgram model on the subset of test data involving imputed words (imputed/trained and imputed/imputed), the full model outperforms LSI (see Fig. 4). This suggests that, if training text for the OOV words were available, we should make use of it. Similarly, and as expected, when comparing the performance of the filtered and full fastText models on the subset of test data involving imputed words (imputed/trained and imputed/imputed), the full model again outperforms the filtered model (see Fig. 5). As a sanity check, we also compare the skip-gram model trained on the full corpus to BioWordVec, a recent state-of-the-art word embedding model for the biomedical domain (Zhang et al., 2019), and find similar performance across all subsets of UMNSRS (see Fig. 6).
# Modelling Social Care Provision in An Agent-Based Framework with Kinship Networks

Umberto Gostoli
MRC/CSO Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK G2 3AX
<EMAIL_ADDRESS>

Eric Silverman
MRC/CSO Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK G2 3AX
<EMAIL_ADDRESS>

###### Abstract

Current demographic trends in the UK include a fast-growing elderly population and dropping birth rates, and demand for social care amongst the aged is rising. The UK depends on informal social care – family members or friends providing care – for some 50% of care provision. However, lower birth rates and a graying population mean that care availability is becoming a significant problem, causing concern amongst policy-makers that substantial public investment in formal care will be required in decades to come. In this paper we present an agent-based simulation of care provision in the UK, in which individual agents can decide to provide informal care, or pay for private care, for their loved ones. Agents base these decisions on factors including their own health, employment status, financial resources, relationship to the individual in need, and geographical location. Results demonstrate that the model can produce patterns of care need and availability similar to those observed in the real world, despite the model containing minimal empirical data. We propose that our model better captures the complexities of social care provision than other methods, due to the socioeconomic details present and the use of kinship networks to distribute care amongst family members.

_Keywords:_ Social Care, Kinship Networks, Agent-Based Modelling

## 1 Introduction

As human lifespans continue to lengthen and birth-rates drop throughout much of the developed world, many nations are experiencing an increase in demand for social care – the provision of personal and medical care for people in need of assistance due to age, disability or other factors. In the UK, the elderly consume the largest share of social care, and lower birth-rates mean that the supply of available carers is decreasing over time even as demand is growing rapidly [1]. As a result, social care is a frequent topic in UK policy debates, with widespread concern that the country will be unable to afford the significant public investment needed to fulfill this growing care need. Age UK notes that nearly 50% of those aged 75 and over are living with a long-term limiting health condition – and the fastest-growing age group in the UK is the over-85s, so the problem will only get worse in the coming decades [2]. Consequently, social care need is rising at a pace that outstrips the growth of public and private care supply.

Unmet social care need is therefore an increasingly important and widespread social problem in the UK. The 2017 Ipsos MORI report _Unmet Need for Care_ showed that "more than half of those with care needs had unmet need for at least some of their needs" [3]. This means that less than half of those elderly individuals in need of assistance with activities of daily living (ADLs) were able to receive sufficient care. Large numbers of UK citizens are thus living without sufficient care; an estimated 1.2 million people did not receive sufficient care for their needs in 2017, an increase of 48% since 2010 [2]. The increased pressure on the social care system also has an impact on the health care system, with delayed discharges from hospital being a particularly expensive consequence of unmet care need.
Patients in need of social care frequently stay in hospital longer than necessary due to a lack of care availability, leading to a shortage of beds for other patients and increased costs to hospitals. According to Age UK, between 2010 and 2016 the number of additional bed-days attributable to a lack of available home-care packages increased by 181.7%, while wait times for residential care placements increased by 40% [2]. In 2016 the National Audit Office estimated the total delays attributable to care shortages amounted to 2.7 million bed-days per year, resulting in an annual cost to the National Health Service (NHS) of approximately £820 million [4].

The UK is largely dependent on informal social care, or care provided free-of-charge by family members and loved ones, to meet the needs of the population. Informal care is enormously widespread in the UK and is much larger than the formal care infrastructure. The Family Resources Survey 2013/14 showed that there were 5.3 million informal carers in the UK [5], while projections indicate that the number of people receiving informal care will increase by 60% in the period 2015-2035 [6]. The increase in demand is such that Carers UK proposes that carer numbers would need to increase by 40% over the next two decades to meet demand [7]. At the same time, the number of people using privately-funded social care is expected to rise by almost 50% by 2035, and private expenditure on social care is projected to increase from £6.8 billion to almost £20 billion in the same period, almost a three-fold increase [6].

Understanding how the need for social care evolves, and the process through which informal and formal care is provided, is a vital component in developing sustainable social care policies. In this regard, the importance of support and care-giving networks has long been recognized [8]. In particular, research has shown that informal care is provided mostly through care networks with an average of three to five members [9]. These networks are predominantly composed of an individual's close relatives. Aldridge and Hughes [5] report that 72% of carers provide care to a member of their immediate family, whether a parent (40%), partner (18%), or son or daughter (14%), while Petrie and Kirkup [10] show that 51% of carers provide care to someone in their household. Wettstein and Zulkarnain [11] show that, in 2011, 31% of informal care in the US was provided by spouses, 47% by children and 18% by other relatives (sons-in-law, daughters-in-law and grandchildren), with only 4% provided by non-relatives.

Moreover, empirical research has shown that the kind of social care provided is affected by socioeconomic status. Petrie and Kirkup [10] report that people working in routine occupations and those with lower qualifications are more likely to provide informal care. Moreover, Laing and Buisson [12] find that, while across the UK an estimated 24% of care home residents are funded through private 'top-ups', in North-East England only 18% are funded privately, as opposed to 54% in the wealthier South-East. Overall, these findings suggest that informal care becomes less common, and formal care more common, as socioeconomic status rises. Finally, the social care literature also reveals a significant gender gap in social care provision. Wettstein and Zulkarnain [11] report that in the US daughters are almost twice as likely (31%) to provide care as sons (16%).
These findings are confirmed by Petrie and Kirkup [10], who find that 59% of family carers in the UK are women. At the population level, 16% of women provide informal care, compared to 12% of men.

In this paper, we propose an agent-based model of the UK informal and privately-funded formal social care system. This model reflects the complexity of a system in which demographic, social and economic processes interact to determine the dynamics of social care demand and supply. Our aim is to provide a theoretical framework that allows us to improve our understanding of the mechanisms driving unmet social care need. Moreover, using ABMs enables us to model scenarios of economic and social policy change in virtual populations, providing a means to test complex policies targeted at multiple levels of society. Such models can allow policy-makers to experiment with potential policy interventions and reveal any possible unintended side-effects of those policies prior to implementing them in the real world.

Previous work has attempted to address the social care problem using agent-based simulation approaches [13] [14]. Here we present a simulation that significantly extends these previous efforts. The model provides a more comprehensive simulation of social care provision behaviour, via the inclusion of a detailed socioeconomic model and the representation of care provision as not just a simple one-to-one exchange of resources, but a complex negotiation taking place between members of the care receivers' kinship networks.

## 2 The Model

### 2.1 Motivations

Here we present an agent-based model as a formal representation of the complex demographic and social processes affecting informal social care demand and supply, and the dynamics of the social care outcomes resulting from their interaction over time. Our primary concern at this stage was not the precise replication of empirical, real-world data but, rather, the development of a theoretical framework potentially capable of representing the full complexity of the social care system. In our results we aim for _qualitative similarity_ to real-world social care trends, rather than precise numerical replication, in order to determine whether our modelling of the underlying processes is producing appropriate outcomes.

With these motivations in mind, at this initial stage the behavioural assumptions made in this paper should not be considered necessarily accurate, but rather should be seen as a first approximation, i.e. a necessary point of departure allowing us to provide proof-of-concept results and demonstrate the model's potential as a policy-making tool. Further work and more specialized expertise will be needed to refine and revise the model's behavioural assumptions in order for the model to be used in this way. Ultimately, our aim is for this simulation to facilitate the development and evaluation of alternative social care policies, including complex interventions aimed at behavioural change. Our addition of health care costs and a detailed economic model to the simulation can also clarify the impact of interventions on other key areas of policy, enable more robust policy evaluation, and reduce the incidence of unintended consequences derived from otherwise well-meaning policy prescriptions. Future versions of this framework can also be applied to nations other than the UK, simply by altering the simulated geography and population data.
### 2.2 Basics of the model

The model itself is complex and contains many detailed economic and social processes and sub-processes; this section provides a high-level overview of the model's functionality. For details of the previous models which formed the initial basis for this simulation, please refer to Noble et al. [13] and Silverman et al. [14]. (For those who wish to examine the model more closely, or run it yourself, the annotated Python code is available at https://github.com/UmbertoGostoli/ABM-for-social-care/releases/tag/v.09.)

The simulated agents occupy a space roughly based on UK geography. Agents live in houses which form towns containing clusters of up to 1225 houses. These clusters vary in size in rough proportion to real-world population density, which varies across the 8$\times$12 grid composing the model's geography. The agent population is scaled down from real UK levels at a factor of roughly 1:10,000. The model updates in one-year time steps. Initial populations are generated and randomly distributed in the year 1860, and the model then runs until 2040, when final social care costs are calculated and recorded. We start the model in 1860 to ensure the population dynamics have time to settle before empirical population data (UK Census data) is integrated into the model in 1951.

### 2.3 Agent Life-Course

Newborn agents are classed as dependent children until age 16, at which point they reach adulthood (the minimum working age in the UK). The agents then decide whether to continue to study or to start looking for a job. This choice is repeated every two years until the age of 24, and is a probabilistic choice that depends on the household's income level and on the parents' education level. When the agent decides to look for a job (or when it reaches age 24 at the latest), the agent enters the workforce. During their working life, agents can be hired, fired and can change jobs. When an agent is unemployed, they find a job with a certain probability which depends on the unemployment rate (an input of the model), then start earning an income and paying tax. At a certain age, agents retire from the workforce (at age 65 when default parameter settings are used), at which point they stop paying tax and begin receiving a pension.

This version of the simulation uses a Gompertz-Makeham mortality model to approximate mortality rates until 1951, as in Noble et al. [13]. From 1951 the simulation uses mortality rates from the Human Mortality Database [15]. After 2009 a Lee-Carter model is used to generate future mortality rates, as in Silverman et al. [14].
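As an illustration, the pre-1951 baseline mortality can be written as a Gompertz-Makeham hazard. The parameter values below are placeholders for illustration, not the calibrated values used in the model:

```python
import math

# Gompertz-Makeham hazard: mu(age) = makeham + a * exp(b * age).
# All three parameters are illustrative placeholders.
MAKEHAM = 0.0003      # age-independent background mortality
GOMPERTZ_A = 0.00002  # baseline senescent mortality
GOMPERTZ_B = 0.105    # exponential growth of mortality with age

def annual_death_probability(age: float) -> float:
    """Convert the continuous hazard into a one-year death probability."""
    hazard = MAKEHAM + GOMPERTZ_A * math.exp(GOMPERTZ_B * age)
    return 1.0 - math.exp(-hazard)
```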
#### 2.3.1 Partnership Formation

Upon reaching adulthood, agents can form partnerships with other agents (hereafter we use 'partnership' as shorthand for relationships capable of producing children). Employed male agents and adult female agents are randomly paired with probabilities based on the agents' socioeconomic difference, age difference and geographical distance (with the relative weight of these three factors depending on the model's parameters). Age-specific annual divorce probabilities determine whether a couple dissolves their partnership. Fertility rates follow the procedures outlined in Silverman et al. [14], in which data from [16] and the Office for National Statistics [17] are used from 1950-2009, at which point Lee-Carter projections are used.

#### 2.3.2 Migration

As in the real world, agents can migrate for a variety of reasons. An agent becomes independent when they leave the parental home. This can happen when they form a partnership, when they find a job in a different town from the family home, or, with a certain probability, when they find a job in the same town as their parental home. When a partnership dissolves, the male agent will move elsewhere on the map, while any dependent children resulting from that partnership will stay with the mother. A family will relocate to a new house if one of the two parents finds a job in another town or when the family needs a larger house due to its increased size. Retired agents with social care needs may elect to move into the household of one of their children, with a probability directly proportional to the amount of care provided by that household. Rarely, dependent children will be orphaned before reaching adulthood; in that case the agent will be adopted by a household in their kinship network, or by a randomly-selected couple if no such household is available.

### 2.4 Health status and care need

Agents begin in a healthy state and have no need of additional care. They may enter different categories of care need depending on age- and gender-specific probabilities. Table 1 shows the five possible categories of care need and the amount of care required per week at each level of need. Agents who enter a state of care need do not recover and return to normal health, but instead continue to progress to higher levels of need over time. The probability of progressing to higher care need levels depends positively on the agent's age and on the discounted sum of the agent's unmet care need in past periods (the assumption being that a prolonged period of unmet care will increase the agent's frailty and, therefore, the probability that their health will further deteriorate).

| Care need category | Weekly hours of care required |
|---|---|
| None | 0 |
| Low | 8 |
| Moderate | 16 |
| Substantial | 32 |
| Critical | 80 |

Table 1: The different care need categories, with the number of hours of care required per week

Social care provision is linked to informal care availability in an agent's kinship network. An agent's kinship network's nodes consist of the households of agents with a familial relationship to the agent; the degree of kinship is defined as the network distance between the household and the agent. If they have time or available income, agents will provide informal or formal care to anyone in their kinship network with care need. The amount of care agents are available to provide depends on their status (through their income), their kinship relationship with the receiver and their geographical distance from the receiver (see the Kinship Networks sub-section below for details).

### 2.5 Model Enhancements

This updated version of the Linked Lives model presented in Silverman et al. [14] has been rewritten from the ground up and substantially extended for greater detail and realism. The following features have been introduced:

* The population is composed of five socioeconomic status groups.
* The care supply is provided by the agent's kinship network (a network of households which have a kinship relationship to the agent).
* Households allocate part of their income to care provision, which can be in the form of both informal and formal care.
* A salary function implying an inverse relationship between time taken off work to provide informal care and the agent's hourly wage.
* Unmet social care needs affect the agents' hospitalization probability (and the associated health care costs).

#### 2.5.1 Socioeconomic Status Groups

The population is composed of 5 distinct socioeconomic status (SES) groups. These categories follow the Approximated Social Grade, a socioeconomic classification produced by the Office for National Statistics, which is composed of 6 categories (A, B, C1, C2, D and E). For convenience, we redistributed category E (state pensioners, casual and lowest grade workers, unemployed with state benefits only) into the categories D (semi-skilled and unskilled manual workers) and C2 (skilled manual workers), to maintain a unimodal distribution. We initialised the groups' distribution to roughly reflect the 2016 UK distribution. The SES groups are characterized by different education levels, different career paths (represented by income growth curves) and different unemployment rates.

The introduction of SES groups has a number of effects on the various stages of agent life-courses. A higher socioeconomic position is associated with lower mortality and fertility rates, and with a lower probability of developing care need. In the marriage market, the probability that two opposite-sex individuals will form a couple depends on their SES distance (which determines the marriage probability together with the partners' geographical distance and age difference); this relationship is asymmetric, in that the probability of getting married decreases less rapidly with the SES distance if the higher-status individual is male rather than female. In the job market, a higher SES is associated with higher starting and maximum salaries (but a lower salary growth rate); a higher probability of finding a job and a lower probability of being fired (probabilities which are reflected in a lower unemployment rate); and a higher probability of changing jobs if the job offer comes from a town other than the current hometown.

Given that in this model care supply and relocation decisions depend on income level, the socioeconomic position of an agent affects its behaviour (and that of the household it belongs to) through the agent's income, as a higher socioeconomic position is associated with a higher income level. For example, the share of income the household allocates to care supply depends positively on the household's per capita income (see the Formal Care section below).

We included an inter-generational mobility process which allows agents to move to a different SES group from their parents'. Each SES group is associated with an education level. From the age of 16, an agent can decide whether to continue its studies (a choice that will allow the agent to reach a higher education level and therefore a higher SES group) or start searching for a job (in which case the agent is assigned the SES group associated with the education level reached). This choice is made by the agents every two years, until the age of 24 (i.e. at ages 16, 18, 20 and 22); we assume each education step lasts 2 years, with the educational stages corresponding to A-level, Higher National Diploma, Degree and Higher Degree.

The probability that an agent keeps studying depends on three factors: the per capita available income (i.e., net of social care costs) of the agent's household; the difference between the maximum education level reached by the agent's parents and the agent's current education level; and the amount of time the agent allocates to informal social care supply. A sketch of this decision is given below.
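The paper does not give a closed form for this probability, so the following is only a plausible sketch: a logistic combination of the three factors named above, with hypothetical weights.

```python
import math
import random

def keeps_studying(per_capita_income: float,
                   parent_edu_gap: int,
                   care_hours: float) -> bool:
    """Sketch of the biennial education decision (ages 16-22).

    per_capita_income: household income net of social care costs
    parent_edu_gap:    max parental education level minus agent's current level
    care_hours:        weekly informal care supplied by the agent

    The functional form and weights are illustrative assumptions,
    not the calibrated rule used in the published model.
    """
    score = (0.6 * math.log1p(per_capita_income)  # richer households study longer
             + 0.8 * parent_edu_gap               # pull towards parents' education
             - 0.05 * care_hours                  # care duties discourage study
             - 4.0)                               # baseline offset
    p = 1.0 / (1.0 + math.exp(-score))            # logistic squashing
    return random.random() < p
```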
In this highly stylized inter-generational mobility process, an agent's SES group is determined by the education level the agent reaches: the SES groups of agents ending their studies at ages 16, 18, 20, 22 and 24 are, respectively, D, C2, C1, B and A.

#### 2.5.2 Kinship Networks

Each agent is associated with a kinship network – a network of households containing at least one agent with a kinship relation to them. This network includes:

* the agent's household (distance 0);
* the households of the agent's parents and of children who are not part of the agent's household (distance I);
* the households of the agent's grandparents, grandchildren, brothers and sisters who are not part of the agent's household nor of the households at distance I (distance II);
* the households of the agent's uncles, aunts, nephews and nieces who are not part of the agent's household nor of the households at distance I and II (distance III).

The kinship network has two functions. First, the network defines the total care supply available for a particular agent in need. The care receiver's total care supply is the sum of the available care supply of all the members of all the households that are part of the kinship network and meet certain conditions. These conditions represent kinship- and space-specific limitations in the supply of care. Kinship relationship and distance determine the quantity of informal care that each member of the receiver's kinship network is available to supply, as shown in Table 2:

| Agent status | Household (D-0) | D-I | D-II | D-III |
|---|---|---|---|---|
| Teenager | 16 | 0 | 0 | 0 |
| Student | 24 | 16 | 8 | 4 |
| Employed | 28* | 20* | 12* | 8* |
| Unemployed | 32 | 24 | 16 | 8 |
| Retired | 48 | 36 | 20 | 10 |

Table 2: Amount of care agents can provide depending on their status and distance from the receiver. *These are the minimum amounts the employed can offer, representing the informal care provided outside of working hours. If needed, additional informal care can be supplied by these agents taking time off their working hours. Moreover, they can use their income to pay for formal care. See the Formal Care section for details.

In particular, we assume first that the amount of care which an agent is available to provide to another agent depends positively on their kinship's closeness. Agents living with the care receiver are an exception, as these agents' available care supply is assumed to be equal to that of the receiver's next of kin (i.e. spouse, children and parents), independently of the kinship degree (although most of the time the members of the receiver's household are his/her next of kin). The second factor affecting informal care supply availability is the physical distance from the receiver's household. With regard to physical distance we can distinguish three classes of care suppliers:

* agents living in the receiver's household;
* relatives living in another household in the same town as the care receiver;
* relatives living in a different town.

We assume that only households living in the same town as the care receiver can provide informal care; Table 2 can thus be read as a lookup table, as sketched below.
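For concreteness, a minimal encoding of Table 2 as a lookup. The dictionary layout and function name are our own, but the hour values are those of Table 2:

```python
# Weekly hours of informal care an agent can offer, by agent status and
# kinship-network distance of the receiver's household (values from Table 2).
# For the employed these are minima; extra care can come from unpaid time off
# work or from paying for formal care (see the Formal Care section).
CARE_SUPPLY_HOURS = {
    #              D-0  D-I  D-II  D-III
    "teenager":   (16,   0,    0,    0),
    "student":    (24,  16,    8,    4),
    "employed":   (28,  20,   12,    8),
    "unemployed": (32,  24,   16,    8),
    "retired":    (48,  36,   20,   10),
}

def available_care(status: str, distance: int, same_town: bool) -> int:
    """Hours of informal care offered; only same-town households supply it."""
    if not same_town:
        return 0
    return CARE_SUPPLY_HOURS[status][distance]
```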
In the case of formal care, we further assume that only the care receiver's household and the households at distance I are available to provide it; in other words, only the care receiver's parents and children are available to pay for formal care if they cannot provide informal care.

The care allocation process proceeds in a series of steps in which a 4-hour 'quantum' of care supply is transferred to an agent needing care from one of the households with available supply in its kinship network. The care allocation function first samples a care receiver from the pool of people with unmet social care need who have a kinship network with available care supply. The probability of each care receiver being sampled is directly proportional to the care receiver's quantity of unmet care need. Then the allocation function samples a household from the selected care receiver's kinship network with a probability that is directly proportional to a distance-weighted measure of the household's available supply (for a household at kinship-network distance x from the care receiver, the weight is the reciprocal of the exponential function of the product of x and a parameter). Once the supplying household has been selected, a 'quantum' of care is transferred from one of the household's members with available supply to the care receiver.

The member who is to provide care within the selected household is determined in two steps. First, one of the household's six possible care sources is selected with a probability proportional to the residual available care of each source. The six care sources consist of the five groups that can provide informal care as shown in Table 2 – teenager, student, retired, unemployed and employed – plus a sixth source which represents the amount of care which the household is available to supply by allocating part of the household income (a category we call out-of-income care). The household's out-of-income care supply is the share of income that the household has available to allocate to care, either directly in the form of formal care, or indirectly in the form of informal care provided by employed household members (who provide care by taking unpaid time off work). If one of the first five sources is selected, the household member with the greater residual available care within that category will provide care. If the out-of-income category is selected, the household will provide formal care if the lowest hourly wage among the employed household members is higher than the hourly cost of social care; otherwise, the employed household member with the lowest hourly wage will provide the 'quantum' of care. Given that the quantity of out-of-income care supply depends on the household's per-capita income, this care allocation mechanism implies that the probability that members of the household will spend time providing informal care is inversely related to the household's per-capita income; in other words, the wealthier the supplying household, the more likely it is to provide formal care.

At the end of this step, both the care receiver's care need and the supplying household's availability are reduced by the amount of care transferred (i.e. the 4-hour 'quantum' of care). The allocation process is repeated until the set of care receivers with unmet care need and available care supply is empty (i.e. there are no more care receivers with outstanding care need and available care supply).
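The allocation procedure can be summarised in Python. The class layout and helper names (`unmet_need`, `network_supply`, `transfer_quantum`, and the decay parameter value) are hypothetical; only the control flow and the proportional sampling follow the description above:

```python
import math
import random

QUANTUM = 4   # hours of care transferred per allocation step
DECAY = 0.5   # hypothetical distance-decay parameter

def weight(distance: int) -> float:
    """Distance weight: reciprocal of exp(distance * parameter)."""
    return 1.0 / math.exp(distance * DECAY)

def allocate_care(receivers):
    """One round of care allocation across all kinship networks."""
    pool = [r for r in receivers if r.unmet_need > 0 and r.network_supply() > 0]
    while pool:
        # Sample a receiver with probability proportional to unmet need.
        receiver = random.choices(pool, weights=[r.unmet_need for r in pool])[0]
        # Sample a supplying household proportionally to its
        # distance-weighted available supply.
        households = receiver.network_households()
        household = random.choices(
            households,
            weights=[h.available_supply() * weight(h.distance) for h in households],
        )[0]
        # Pick the source and member inside the household and reduce its supply.
        household.transfer_quantum(receiver, QUANTUM)
        receiver.unmet_need -= QUANTUM
        pool = [r for r in pool if r.unmet_need > 0 and r.network_supply() > 0]
    # Loop ends when no receiver has both outstanding need and available supply.
```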
The ultimate result of this care allocation function is that units of social care are distributed from potential providers to receivers across the receiving agents' kinship networks, and decisions about who provides care, and whether informal or formal care is provided, are made according to the composition, socioeconomic position and employment status of potential care providers. We suggest this detailed decision-making process better represents the complexities of care decisions that families need to navigate. The specifics of this complex process can be adjusted further by incorporating insights received from qualitative and quantitative data on informal care-givers and their decision-making.

The kinship network also allows households to compute the informal care attraction associated with each town, which represents a rough measure of the informal care that a household expects to get or supply in a given town. The households use this town-specific attraction to make relocation decisions (in this model, agents face the choice of relocating to other towns either if they receive a job offer or partner with someone from another town). More precisely, for a particular household $H_{i}$, the informal care attraction of a town $T$ is a function of the sum of the members of the households with a kin relationship with $H_{i}$ living in the town $T$ (the household's kinship network is obtained by joining the kinship networks of the household's members). The contribution of a household $H_{j}$ to the social care attraction is weighted according to the degree of kinship between $H_{i}$ and $H_{j}$. This weighted sum is then multiplied by the complement to one of the share of care provided by the government.

Two assumptions characterise the towns' informal care attraction: first, the larger the local kinship network in a town (in terms of the number of people who are part of it), the higher the amount of social care the agent can expect to receive (or to supply) and the higher the social care 'value' associated with that town; second, the higher the share of care supplied by the government, the lower the importance of potential informal care in the relocation decision.

Apart from this 'network size' factor, a household's probability of relocating depends on the relocation cost, which increases with the household's size and the number of years the household's members have lived in the current town, and on the town's homophily attraction, which depends on the town's share of people belonging to other SES groups. The relocation cost $R$ can be thought of as a measure of the social capital developed by the household in its current town; this social capital would be lost by relocating to another town, so it acts as a barrier to relocation. Formally, the relocation cost is computed as

$R=K\sum\limits_{i=1}^{n}y_{i}^{p}$

where the sum runs over the household's $n$ members, $y_{i}$ is the number of years member $i$ has spent in the current town, $p$ is a parameter with value smaller than 1 (as additional years increase the member's social capital by increasingly smaller amounts), and $K$ is a scaling parameter. Towns with a larger share of unoccupied houses are more likely to be chosen for relocation.
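The relocation cost and the government-share damping of the care attraction transcribe directly (the parameter values are placeholders, not the calibrated ones):

```python
# Relocation cost R = K * sum_i(y_i ** p): social capital lost by moving.
K = 1.0   # scaling parameter (placeholder value)
P = 0.5   # diminishing-returns exponent, p < 1 (placeholder value)

def relocation_cost(years_in_town: list[float]) -> float:
    """years_in_town holds, for each household member, the years spent in the
    current town; extra years add ever-smaller amounts of social capital."""
    return K * sum(y ** P for y in years_in_town)

def care_attraction(weighted_kin_sum: float, gov_share: float) -> float:
    """Kin-weighted network size, damped by the share of state-provided care."""
    return weighted_kin_sum * (1.0 - gov_share)
```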
#### 2.5.3 Formal Care

Both informal and formal care are allocated through the care recipient's kinship network. Each household allocates a share of its income to care, which increases with the household's per capita income (more precisely, with $x$ being the household's per-capita income, the share allocated to care supply is the complement to one of the reciprocal of the exponential function of the product of $x$ and a parameter). We assume that formal care can be provided only by the care receiver's own household or by the households in the care receiver's kinship network at distance I; in other words, only the care receiver's parents and children are available to pay for formal care if they cannot provide informal care.

As explained above, the choice between informal and formal care is stochastic, with probabilities equal to the relative availability of the two kinds of care. In order for the household to supply formal care, the hourly wage of the working member with the lowest wage (and available time left) must be higher than the price of formal social care. If the hourly wage of this member is lower than the price of formal social care, they will take time off work to provide informal care (as in this case it is cheaper for the household to give up the agent's salary than to pay for formal care). However, the supplying household can provide informal care only if it is in the same town as the care receiver; otherwise it will provide only formal care.

#### 2.5.4 Salary function

We model the hourly salary that an agent receives (on average) when it finds a job using the Gompertz function. This function is a double exponential that takes the following three SES-specific arguments:

* the initial salary level;
* the final salary level;
* the salary growth rate.

The salary growth rate is multiplied by the agent's cumulative work experience, which is the discounted sum of all the fractions of the working week allocated to work (if an agent works full time, this fraction is equal to 1). Formally, the salary function is

$w=Fe^{ce^{-rt}}$

where $c=\ln{\frac{I}{F}}$, $F$ is the maximum hourly wage, $I$ is the initial hourly wage, $r$ is the wage growth rate and $t$ is the discounted cumulative work experience. This equation implies that if an agent takes time off work to provide informal care, this will result in less work experience and, therefore, a lower hourly salary; a sketch of this function closes this subsection. Given the properties of the care allocation mechanism, the employed agents with the lowest hourly salary will be more likely to provide informal care in the future (if there are no retired, student or unemployed agents, or if the household's retired, student and unemployed agents do not have available informal care supply).

In this model new mothers devote their time to caring for their newborn, meaning that we see a gender pay gap emerge due to the interaction between:

* the allocation of part of the household's income to care supply;
* the choice of the care giver within the supplying household;
* the salary function.

Retired agents receive a pension which is proportional to their final income level. We assume that when an agent needs care they retire due to sickness and thereafter receive a pension. If their care need is low or moderate, their ill-health pension is their normal pension reduced in accordance with the ratio between lost working years and the maximum number of working years (in other words, the working years of a person in good health). If the agent's care need is substantial or critical, the ratio is computed by reducing the number of lost working years by 50%.
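The salary function transcribes directly; the argument values in the check below are illustrative rather than the SES-specific calibrations:

```python
import math

def hourly_wage(initial: float, final: float, growth: float,
                experience: float) -> float:
    """Gompertz salary curve: w = F * exp(c * exp(-r * t)), c = ln(I / F).

    initial: starting hourly wage I; final: maximum hourly wage F;
    growth: wage growth rate r; experience: discounted cumulative work
    experience t (full-time years count as 1, part-time years as less).
    """
    c = math.log(initial / final)
    return final * math.exp(c * math.exp(-growth * experience))

# At t = 0 the wage equals the initial level I; it rises towards F as t grows,
# so lost work experience (e.g. unpaid time off for care) lowers the wage.
assert abs(hourly_wage(8.0, 20.0, 0.2, 0.0) - 8.0) < 1e-9
```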
#### 2.5.5 Hospitalisation

We assume that agents with care need will spend additional time in hospital, with a duration that is a function of the agent's care need level and the average discounted share of unmet care need. The higher the agent's care need level and the higher their average share of unmet care need, the greater the number of days the agent is expected to spend in hospital each year. By multiplying the sum of the hospitalisations' durations by the hospitalisation cost per day, we can determine the cost of unmet care need for the public health service.

#### 2.5.6 Simulation steps

More specifically, the simulation unfolds through the following eleven steps:

1. deaths: with a given probability which depends on age, SES and care need level, some agents are removed from the population;
2. adoptions: children without parents are adopted;
3. births: with a probability which depends on age and SES, married women give birth to new agents;
4. divorces: some couples dissolve (and the male relocates);
5. marriages: with a certain probability which depends on age, SES and geographical location, some singles get married and go to live together;
6. social care allocation: social care is transferred from agents with available hours (or income) to agents with social care needs;
7. age transitions: the age of the agents is incremented, and their age-related status is updated;
8. social transitions: students decide whether to start working or keep studying, depending on their family income per capita, their parents' SES and their care responsibilities. If they start looking for a job, they are assigned the SES associated with the education level they have reached;
9. job market: employed and unemployed agents receive new job offers (and eventually accept them) and employed agents are fired, with probabilities which depend on the age- and SES-specific unemployment rate;
10. relocations: agents who have accepted job offers from towns different from their current town relocate;
11. care transitions: with a probability which depends on age, SES, current care need level and unmet care need, agents pass to higher care need levels or are hospitalised.

A skeleton of one simulated year implementing these steps is sketched below.
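The method names in this skeleton are hypothetical (they need not match the published code), but the ordering follows the eleven steps above:

```python
class SocialCareSim:
    """Skeleton of the yearly update loop (step bodies elided as stubs)."""

    STEPS = [
        "deaths",             # 1. age/SES/care-need dependent mortality
        "adoptions",          # 2. orphans join kin or random couples
        "births",             # 3. age/SES dependent fertility
        "divorces",           # 4. dissolutions; male partner relocates
        "marriages",          # 5. age/SES/distance dependent pairing
        "care_allocation",    # 6. 4-hour quanta across kinship networks
        "age_transitions",    # 7. increment ages, update statuses
        "social_transitions", # 8. study-vs-work decisions, SES assignment
        "job_market",         # 9. hires, job changes and firings
        "relocations",        # 10. moves following accepted job offers
        "care_transitions",   # 11. care-need progression, hospitalisation
    ]

    def step_year(self) -> None:
        for step in self.STEPS:
            getattr(self, step, lambda: None)()  # no-op stub if undefined

    def run(self, start: int = 1860, end: int = 2040) -> None:
        for _ in range(start, end + 1):
            self.step_year()
```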
## 3 Social Policy Experiments

ABM allows us to investigate the effects of virtual social or economic policies. This model is characterised by various policy-related parameters, the values of which can be directly or indirectly related to specific measures of social care policy. Therefore, by varying these parameters, we can examine 'what if' scenarios and simulate social care outcomes and costs under alternative social care policies.

For illustrative purposes, we investigated the effect on social care of two policies: the introduction of tax-deductible social care expenses, and direct governmental funding of care for people above a certain level of care need. For the first policy, we assume that carers are allowed to deduct 100% of social care expenses from their tax. In the second policy experiment, we assume that the government directly pays for the social care expenses of those agents with the highest social care need (i.e., care need level at 'critical', see Table 1). We assume that the two policies are implemented from simulation year 2020 and compare the outputs of these two policy scenarios with the benchmark no-policy scenario in the period 2020-2040.

We present two results: the hours of unmet care need in each scenario and the incremental cost-effectiveness ratio (ICER) of the two policies considered. In this paper we formally define the ICER as

$e=\frac{C_{p}-C_{b}}{U_{b}-U_{p}}$

where $C$ is the total additional cost of policy implementation, $U$ is the total unmet care need and the subscripts $p$ and $b$ indicate two scenarios (the policy scenario and the benchmark scenario, respectively). In this case, $b$ being the no-policy scenario, $C_{b}$ is equal to $0$.

Figure 1: Hours of informal and formal care received and of unmet care need in the simulated population.

Figure 2: Hours of informal and formal care received and unmet care need per recipient.

## 4 Results

As described above, this simulation is highly complex, with numerous processes at play and a number of parameters governing system behaviour. For these early-stage results, we present figures from a representative single run at default parameter values, plus a comparison of this run, taken as the benchmark, with two simulations where we introduce two alternative policy interventions in the year 2020. The single-run charts displayed here were chosen to highlight the key features of this new simulation framework, namely the modelling of formal care, additional economic and labour market details such as socioeconomic status groups, the interaction between social care need and health care demand, and agent kinship networks.

Figure 1 shows the evolution of hours of informal and formal care delivered and of unmet social care need per week for the whole population over the period 1990-2040. In our simulations, the total population reaches a peak of around 10,000 agents around 2025, and then decreases slowly to around 8000 by 2040. As in Bijak et al. [18], the simulation's population projections roughly match those of the Office for National Statistics in the UK if we assume no international migration.

Figure 3: Hours of informal and formal care received and unmet care need per recipient by care need level.

Figure 4: Hours of informal and formal care received and unmet care need per recipient by SES group (with I being the poorest SES group).

Figure 5: Informal care supplied by women as share of total informal care supplied, for the population.

Figure 6: Ratio of women's to men's income, for the population as a whole.

In Figure 1 we can see that from 2010 informal care cannot keep pace with the growth in care need, with a consequent rapid increase of unmet care need, which reaches a peak in 2025. In Figure 2 we show the weekly hours of informal and formal care delivered and the unmet social care need per recipient. Here we can see that the negative trend of the unmet care need after 2025 in Figure 1 is entirely due to the decrease of population: in fact, the unmet care need per recipient keeps growing quite steadily up to 2033.

Figure 3 shows the mean informal and formal care received and the unmet care need by care need level over the period 2020-2040 (note that the heights of the bars correspond to the weekly hours of care required as shown in Table 1). We can see that most of the unmet care need is due to the agents with the highest care need. Figure 4 shows the mean informal and formal care received and the unmet care need by SES group (with I being the poorest SES group). We can see that the relative weight of informal care with respect to formal care decreases from the poorest to the wealthiest SES group, a result which reflects empirical findings.
As expected, by comparing the green bars in Figure 4, we can see that the poorest SES group has a somewhat higher level of unmet care need compared to the wealthiest group, but also that unmet care need is a significant share of total care need across SES groups.

Figure 5 shows the share of total informal care provided by women. At the population level this figure starts from just below 75% in 1990, then decreases steadily until 2020, when it starts fluctuating just above 60%, a value which is in line with empirical findings. Moreover, there is a marked difference between SES groups, with the poorest SES group having the highest (and growing) gender inequality. As shown in Figure 6, this inequality in care provision affects income inequality between genders. At the population level, female agents' income is about 5% lower than male agents' income. Therefore our model can explain at least part of the gender pay gap.

Figure 7 shows the total informal and formal care provided in the period 2020-2040 according to the kinship distance between caregiver and receiver: household (distance 0); parents and children (distance I); grandparents, grandchildren and brothers (distance II); aunts, uncles and nephews (distance III). We can see that most social care is supplied within the household; however, a significant part comes from outside the household, especially formal care.

Figure 8 shows the annual per capita health care cost due to the hospitalization of people with care needs. We can see that although the total unmet care need levels off after 2020 (as shown in Figure 1), the per capita health care cost keeps growing until the late 2030s. This is due to the decrease in population after 2015, resulting in the per capita health care cost reaching its peak 15 years after the unmet social care need.

Figure 7: Total informal and formal care supplied by different groups of receivers' suppliers: household, parents and children (I), grandparents, grandchildren and brothers (II) and aunts, uncles and nephews (III).

Figure 8: Per capita health care cost due to the hospitalisation of people with social care needs (per year).

In the last two charts we investigate and compare the efficacy of two policy interventions. These two policy experiments are illustrative examples that show how this model may be used as a tool for the design and evaluation of social care policies. In these examples we consider a tax deduction policy, in which households are allowed to deduct all social care expenses from their tax base, and a direct funding policy, in which the social care needs of people with the highest need level (the 'critical' level in Table 1) are met directly by the state. The policies are implemented starting in simulation year 2020, and the two scenarios are compared to the benchmark no-policy scenario.

In Figure 9 we can see that the two policies have a positive impact on total unmet social care need, as expected. However, while the tax deduction policy has a marginal effect, decreasing the total unmet care need by about 11%, the direct funding policy has a drastic effect, reducing the total unmet care to 23% of the original level. Although the tax deduction policy has a smaller effect on unmet social care need, Figure 10 shows that the tax deduction policy is more cost-effective than the direct funding policy.

Figure 9: Unmet care need per recipient in the no-policy scenario and with the two alternative social policies.

Figure 10: Cost-effectiveness of 'tax deduction' and 'direct funding' policies.
This is because, while the effect of direct funding on total unmet social care is seven times the effect of the tax deduction, the cost of the former policy compared to the latter is much higher, as shown in Figure 11. We note, however, that notwithstanding its cost-effectiveness, the effect of the tax deduction policy on unmet care demand is limited, and thus it is not a solution to the growing social care demand on its own. Finally, Figure 12 shows the discounted value of the hospitalization costs in the period 2020-2040 in the three policy scenarios. We can see that, apart from the direct costs associated with each policy, our model allows us to estimate the policies' spillover effects (in this case, the effect on hospitalization cost), effects which we need to take into account to assess the relative advantages and disadvantages of alternative policies.

Figure 11: Costs of the 'tax deduction' and 'direct funding' policies.

Figure 12: Discounted hospitalization costs in the three policy scenarios, 2020-2040.

## 5 Discussion

While the results presented here are still early, the outcomes of these simulation runs suggest that this model can produce broadly realistic portraits of the coming trends in UK social care. The overall population dynamics largely mirror those in evidence in the real world, with the notable exception of international migration, which is not modelled here. The simulation also produces inequalities in care provision by both SES and gender; while the data presently available is not sufficient to determine if the socioeconomic inequalities are accurate, the gender pay gap is reflective of current UK norms.

The simulation results on unmet care need lend credence to the worries expressed by UK policy-makers. The simulation shows that unmet care need will continue to grow over time, and given that 1.2 million older people in England alone are not receiving sufficient care [2], further growth in these figures could lead to severe consequences for public health. The results also show marked inequalities in care provision, with the wealthy capable of supporting their aged relatives in need through privately-funded care while still staying in work and receiving high wages. Among the lower-income groups women are providing an overwhelmingly larger share of informal care as compared to men, meaning that women at the lower end of the socioeconomic scale are more likely to be pushed out of work and toward unpaid care provision.

Our use of kinship networks as the mechanism for distributing care illustrates that kinship distance impacts care provision behaviour, and thus is an important aspect of care decision-making that should be taken into account in future research. We observed a significant difference in caring behaviours between within-household and kinship distance I agents, with the latter providing much more formal care than informal. This suggests that future models in this area should consider kinship networks and their impact on caring behaviours. Understanding the negotiation process within families regarding care provision will be an important aspect for policy-makers to examine as well, as policies put into place to support and encourage informal care may need to take account of these complex social aspects of care.
Finally, the speculative policy scenarios we provide here demonstrate the efficacy of this platform as an aid to policy-makers who wish to examine the impact and possible unintended side-effects (spillover effects) of their planned policy interventions. The scenarios showed that while some policies such as tax deductions may seem like an easy 'win' for the policy-maker, the actual impact on social care need is minimal. For significant reductions in care need and related health care costs, more expansive – and expensive – interventions are needed. The direct funding scenario here is a relatively simplistic example used to demonstrate this point; the model is capable of simulating the results of much more nuanced policy interventions, due to the detailed modelling of key social care mechanisms and related economic and social processes.

## 6 Future Work

While the current simulation does broadly reflect the expected trends in social care provision, validating the results is difficult. Ongoing projects in Scotland are linking administrative data sources to develop a clearer picture of social care provision and receipt across the country. In future revisions of the model we intend to make use of these data by focusing the simulation on Scotland rather than the whole UK. As the framework matures and incorporates more real-world data, we will use Gaussian process emulators to perform detailed sensitivity analyses, following the example of Silverman et al. [14].

Apart from the validation process, this model can be expanded in various directions. First, according to the Office for National Statistics the proportion of informal childcare to GDP increased from 13.8% to 17.6% in the period 2005-2014 [19]. Child care represents a significant part of the total household's informal care need and, therefore, it affects the care supply which can be allocated to social care needs. Our next planned update to the model will include a child care mechanism which will allow us to model the interaction with social care supply. Second, according to the Office for National Statistics, in 2016, 28.2% of births in England and Wales were to women who were not born in the UK [20]. Moreover, projections show that post-2016 immigration accounts for 77% of total population growth until 2041 [21]. The addition of international migration to the model will allow us to understand UK demographic dynamics in future decades.

Following on from the simplistic policy comparison presented here, in future work we will consult with social care policy-makers in Scotland to model proposed social care policies in more detail. Social care policy is very complex, with the programmes available varying not only by region (England, Wales, Scotland and Northern Ireland), but by individual local councils. The model is capable of representing these details by varying relevant policy parameters across the simulated UK geography. Once we are able to replicate the current state of social care policy in the UK, we will then be able to develop detailed evaluations of the possible impacts of new policy interventions. Our model's ability to capture detailed individual-level care decision-making will enable policy-makers to examine policies aimed at behavioural change as well as broader economic and social policies. Finally, while the current simulation is very UK-centric, the core simulation engine can be altered easily to examine the situation in other countries.
Ultimately we hope to produce a simulation framework that is capable of modelling informal and formal social care across a variety of cultural and economic contexts.

## 7 Data Availability

The code is available on GitHub: https://github.com/UmbertoGostoli/ABM-for-social-care/tree/v.09

The output data used for the figures is available in this Dryad repository: https://datadryad.org/review?doi=doi:10.5061/dryad.6tm6183

## 8 Authors’ Contributions

Umberto Gostoli developed the model (based on a previous version developed by Eric Silverman), ran the simulations, gathered the data and produced the charts. Eric Silverman conceived the research and helped develop the model. Both authors contributed to drafting the paper. All authors gave final approval for publication.

## 9 Funding

The authors are part of the Complexity in Health Improvement Programme supported by the Medical Research Council (MC_UU_12017/14) and the Chief Scientist Office (SPHSU14).

## 10 Acknowledgements

We would like to thank Chris Patterson and Lauren White from the MRC/CSO Social and Public Health Sciences Unit for their helpful comments.

## References

* [1] Coleman DA. 2002. Replacement migration, or why everyone is going to have to live in Korea: a fable for our times from the United Nations. Philos Trans R Soc London B Biol Sci 357.
* [2] Age UK. 2017. The Health and Care of Older People in England 2017. London, UK: Age UK.
* [3] Ipsos MORI. 2017. Unmet Need for Care, Report 15-042098-01. London, UK: Ipsos Public Affairs.
* [4] National Audit Office. 2016. Discharging older patients from hospital. London, UK: National Audit Office.
* [5] Aldridge H, Hughes C. 2016. Informal carers and poverty in the UK. London, UK: New Policy Institute.
* [6] Wittenberg R, Hu B. 2015. Projections of demand for and costs of social care for older people and younger adults in England, 2015 to 2035. London, UK: London School of Economics.
* [7] Carers UK. 2015. Facts About Carers - Policy Briefing. London, UK: Carers UK.
* [8] Keating N, Otfinowski P, Wenger C, Fast J, Derksen L. 2003. Understanding the caring capacity of informal networks of frail seniors: a case for care networks. Ageing Soc 23, 1.
* [9] Tennstedt SL, McKinlay JB, Sullivan LM. 1989. Informal care for frail elders: The role of secondary caregivers. Gerontologist 29, 5.
* [10] Petrie K, Kirkup J. 2018. Caring for carers. London, UK: The Social Market Foundation.
* [11] Wettstein G, Zulkarnain A, et al. 2017. How Much Long-Term Care Do Adult Children Provide? Issue in Brief.
* [12] Laing and Buisson. 2016. Care of Older People: UK market report. London, UK: LaingBuisson.
* [13] Noble J, Silverman E, Bijak J, Rossiter S, Evandrou M, Bullock S, Vlachantoni A, Falkingham J. 2012. Linked lives: the utility of an agent-based approach to modelling partnership and household formation in the context of social care. Proceedings of the 2012 Winter Simulation Conference. Eds: C. Laroque, J. Himmelspach, R. Pasupathy, O. Rose and J.M. Uhrmacher. Publisher: IEEE.
* [14] Silverman E, Hilton J, Noble J, Bijak J. 2013. Simulating the cost of social care in an ageing population. Proceedings of the 27th European Conference on Modelling and Simulation. Eds: W. Rekdalsbakken, R.T. Bye, H. Zhang. Publisher: Digitaldruck Pirrot.
* [15] Human Mortality Database 2011. url: http://www.mortality.org/cgi-bin/hmd. Accessed 26/07/2011.
* [16] Eurostat Statistics Database: Domain Population and Social Conditions. url: http://epp.eurostat.ec.europa.eu.
Accessed 27/10/2011.
* [17] Office for National Statistics 1998. Birth Statistics, Series FM1 (27). London, UK: Office for National Statistics.
* [18] Bijak J, Hilton J, Silverman E, Cao VD. 2013. Reforging the Wedding Ring: Exploring a Semi-Artificial Model of Population for the United Kingdom with Gaussian process emulators. Demogr Res 29, 27.
* [19] Office for National Statistics 2016. Chapter 2: Home produced childcare services. London, UK: Office for National Statistics.
* [20] Office for National Statistics 2017. Births by parents’ country of birth, England and Wales: 2016. London, UK: Office for National Statistics.
* [21] Cangiano A. 2018. The Impact of Migration on UK Population Growth. Oxford, UK: University of Oxford. url: https://migrationobservatory.ox.ac.uk/resources/briefings/the-impact-of-migration-on-uk-population-growth/
Feature Attribution Explanations for Spiking Neural Networks

Elisa Nguyen, University of Twente, Enschede, Netherlands
Meike Nauta, University of Twente, Enschede, Netherlands
Gwenn Englebienne, University of Twente, Enschede, Netherlands
Christin Seifert, University of Marburg, Marburg, Germany

Third-generation artificial neural networks, Spiking Neural Networks (SNNs), can be efficiently implemented on hardware. Their implementation on neuromorphic chips opens a broad range of applications, such as machine learning-based autonomous control and intelligent biomedical devices. In critical applications, however, insight into the reasoning of SNNs is important, thus SNNs need to be equipped with the ability to explain how decisions are reached. We present Temporal Spike Attribution (TSA), a local explanation method for SNNs. To compute the explanation, we aggregate all information available in model-internal variables: spike times and model weights. We evaluate TSA on artificial and real-world time series data and measure explanation quality w.r.t. multiple quantitative criteria. We find that TSA correctly identifies a small subset of input features relevant to the decision (i.e., is output-complete and compact) and generates similar explanations for similar inputs (i.e., is continuous). Further, our experiments show that incorporating the notion of absent spikes improves explanation quality. Our work can serve as a starting point for explainable SNNs, with future implementations on hardware yielding not only predictions but also explanations in a broad range of application scenarios. Source code is available at <https://github.com/ElisaNguyen/tsa-explanations>.

Explainability, feature attribution, spiking neural network

§ INTRODUCTION

Spiking neural networks (SNNs), also known as third-generation artificial neural networks [1], consist of spiking neurons. Spiking neurons emit spikes at certain points in time to transmit information, similar to action potentials in biological neurons, and are thus close to biological reality [2]. SNNs are at least as powerful as deep artificial neural networks with continuous activation functions (ANNs) [1]. Their applicability to supervised, unsupervised and reinforcement learning is an active research area [3]. However, the predictive performance of SNNs is not yet on par with ANNs due to the non-differentiability of spikes, making SNN optimization an active research field [4]. Nonetheless, SNNs are interesting as they yield the potential to be implemented in neuromorphic hardware, which is energy- and memory-efficient [5]. Moreover, studies show improved adversarial robustness of SNNs [6]. Their inherent temporal nature also lends itself naturally to processing temporal data, making them suitable for critical domains relying on sensor data such as autonomous control [7] and applications using biomedical signals [8].

Critical domains, for example medical applications, pose specific requirements on machine learning models. In addition to high predictive performance, models should make predictions based on the right reasons and be transparent about their decision-making process [9]. Exposing important information of machine learning models is the focus of research on EXplainable Artificial Intelligence (XAI) [10, 11]. Model explanations address the requirement for algorithm transparency and provide methods to inspect model behavior [12].
While various explanation methods exist for second-generation artificial neural networks (ANNs) [11, 10], to the best of our knowledge, the current body of work in explaining SNNs only comprises two major works, namely [13, 14]. XAI for SNNs is thus still a sparsely studied research area. If left unaddressed, this research gap could lead to situations where SNNs are methodologically mature for real-world deployment but remain unused because they lack transparency.

We contribute to the field of XAI for SNNs by presenting Temporal Spike Attributions (TSA), an SNN-specific explanation method. The resulting explanations are local, i.e., they explain a particular prediction and answer the question: `Why did the model make this decision?' [10]. We build on the explanation method of [14], which uses the model's spike trains. We additionally include the SNN's weights to consider all model-internal variables, and regard the absence of spikes as informative since it also impacts the resulting spike patterns. TSA results in more complete and correct explanations due to the utilization of comprehensive model-internal information. We demonstrate TSA on time series data. In contrast to anecdotal evidence, which is mainly used to evaluate XAI methods [15], we systematically evaluate TSA quantitatively w.r.t. multiple aspects relevant to explanation quality: the correctness of the explanations (correctness), the explanation's ability to capture the complete model behavior (output-completeness), sensitivity to input changes (continuity), and explanation size (compactness). In summary, our contributions are as follows:

* We present Temporal Spike Attribution (TSA), a local feature attribution method for SNNs inferred from all model-internal variables.
* We apply Kim & Panda's [14] explanation method, which uses only spike train information, to time series data as a baseline to show the impact of incorporating all model-internal information in TSA.
* We thoroughly validate TSA's explanation performance using a multi-faceted quantitative evaluation of feature attribution explanations for SNNs, evaluating correctness, completeness, continuity and compactness.

Because SNNs are more popular in neuroscience than machine learning, we briefly introduce SNNs in Section <ref>. Section <ref> reviews related works on SNN explainability. We reflect on the effect of SNN-internal variables on a prediction and present our explanation method, TSA, capturing these effects in Section <ref>. The multi-faceted evaluation in Sections <ref> and <ref> shows the improved explanation performance of TSA. We discuss results in Section <ref> and conclude in Section <ref>.

§ SPIKING NEURAL NETWORKS

This section introduces SNNs and their components. SNNs are characterized by their computational units, spiking neurons [2]. Analogous to traditional ANNs, SNNs are networks of spiking neurons with weighted connections. SNNs process information in the form of spike trains, i.e., spikes over time. Neuron models differ in their spike generation mechanisms. Our proposed method is independent of the chosen spike generation mechanism. Without loss of generality, we employ the commonly used leaky integrate-and-fire (LIF) neuron model (cf. Figure <ref>). The membrane potential $u$ of a LIF neuron can be modeled with a linear differential equation:

\begin{equation} \label{eq:lifintegrate} \tau_{m}\frac{d u}{d t} = -[u(t)-u_{\textrm{rest}}] + I(t), \end{equation}

where $I(t)$ describes the amount by which $u$ changes in response to external input. $\tau_m$ is the time constant of the neuron, which dictates the decay of $u$ in time. A LIF neuron fires when $u(t)$ crosses a threshold $\theta$ from below. Upon firing a spike, the membrane potentials of the postsynaptic neurons are changed by the weight value, as the spikes are propagated forward in the network. The sign of the weight defines the synapse's nature (i.e., inhibitory or excitatory) and the weight value defines the strength of the postsynaptic potential. After firing, $u$ is reset to a low reset potential $u_{r}$ and then slowly increases back to its resting value $u_{\textrm{rest}}$.
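For intuition, the following is a minimal sketch of an Euler-discretized LIF neuron in Python/NumPy. All parameter values here are illustrative assumptions of ours, not the settings of the models used later in this paper.

```python
import numpy as np

def simulate_lif(I, dt=0.001, tau_m=0.01, u_rest=0.0, u_r=-0.1, theta=1.0):
    """Euler-discretized LIF neuron: tau_m * du/dt = -(u - u_rest) + I(t).

    I: array of input currents, one per time step.
    Returns the membrane potential trace and the binary spike train.
    """
    u = u_rest
    u_trace, spikes = [], []
    for I_t in I:
        # Integrate the membrane equation for one step of size dt.
        u = u + (dt / tau_m) * (-(u - u_rest) + I_t)
        fired = u >= theta          # spike when u crosses the threshold from below
        if fired:
            u = u_r                 # reset to a low potential after firing
        u_trace.append(u)
        spikes.append(int(fired))
    return np.array(u_trace), np.array(spikes)

# Example: a constant input current drives periodic spiking.
u, s = simulate_lif(I=np.full(100, 2.0))
```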
SNNs internally process spike trains and thus require input data in the form of spikes. The translation of non-spiking to spiking data is called neural coding. Different neural codes exist, with temporal and rate coding being the most common. Temporal coding is biologically more plausible than rate coding because it emphasizes exact spike times as information carriers [2].

Schematic overview of an SNN with LIF neurons. Input spikes $x_i$ are fed to the SNN. Internally, it transmits information as spike patterns $s_i$, which are propagated forward with weights $w_{j,i}^{(l)}$ to determine the postsynaptic potential (PSP). The state of the postsynaptic neuron $u_j$ is changed by the PSP.

§ RELATED WORK

Our work is positioned in the field of EXplainable Artificial Intelligence (XAI) for SNNs. XAI researches methods to address the black-box nature of machine learning models and explain their reasoning to laypersons and experts [12]. Machine learning models can either be explained globally, by providing an overview of the whole model, or locally, by explaining single predictions. Global explanations aim at providing a global understanding of how input relates to an outcome distribution, addressing the model explanation and model inspection problems [10, 11]. Examples of global explanation methods are [16] and [17]. The complexity of global explanations increases with the number of input features and model parameters and is therefore a challenging problem in XAI. Local explanations target individual model predictions and address the outcome explanation problem [10, 11], i.e., they explain the relation between a specific model input and output. Two prominent examples of local explanations are LIME [18] and SHAP [19]. We aim to explain predictions of SNNs and develop a local explanation method.

§.§ Explaining Spiking Neural Networks

Comparison of our method (TSA) to related work, FSF [13] and SAM [14], in terms of XAI taxonomy [10] and evaluation [15]. Completeness refers to output-completeness.

              FSF [13]       SAM [14]             TSA (Ours)
XAI Method    Post-hoc       Post-hoc             Post-hoc
Scope         Global         Local                Local
Data type     Tabular        Images               Time-series
Model         MC-SEFRON      Convolutional SNNs   Specific to SNNs
Correctness   $\checkmark$   $\times$             $\checkmark$
Completeness  $\times$       $\times$             $\checkmark$
Continuity    $\times$       $\times$             $\checkmark$
Compactness   $\times$       $\times$             $\checkmark$
Coherence     $\times$       $\checkmark$         $\times$

Few works have studied explaining SNNs, which we introduce in the following. Jeyasothy et al. [13] present feature strength functions (FSFs) to explain a specific SNN architecture, MC-SEFRON, with a population encoding layer, no hidden layers, and time-dependent weights. FSFs invert the population coding scheme to link the explanation back to input features and extract interpretable knowledge. FSFs are functions of the input, i.e., they live in a human-understandable domain rather than the temporal domain of spike trains.
The FSFs are a global and model-specific explanation method, which addresses the model inspection problem [11]. In contrast, we target local explanations of model decisions that are applicable to a wider range of SNN models.

Kim & Panda [14] present Spike Activation Map (SAM), a local explanation method for SNNs. SAM generates visual heatmaps based on a calculation of input feature importance (in the image classification case, these are pixels) and was studied on deep convolutional SNNs with LIF neurons on image data. SAM is inspired by the biological observation that short inter-spike intervals are deemed to carry information because they likely cause a postsynaptic spike. The authors define contribution scores of single spikes and aggregate them to represent the spike train contribution in the neuronal contribution score (NCS). The final activation map is computed at time $t$ by a forward pass in the network, multiplying NCSs at $t$ and summing NCSs across the channel axis of convolutional layers. In contrast to the model-specific FSFs [13], NCSs are model-agnostic because they are solely based on spike information, which is part of all SNNs. Our explanation method TSA is model-agnostic but not model-independent, because TSA also takes the model's weights into account, which represent what the SNN has learned. Furthermore, we aim to cater to the intrinsic temporal design of SNNs and therefore designed TSA for explaining predictions of a time series classification task. We look at time series data as opposed to image data to better fit the temporal nature of SNNs. Thus, we contribute to local explanations for SNNs and compare TSA to SAM. Table <ref> presents a concise comparison of the related work and our explanation method. We do not compare to model-agnostic methods for ANNs (e.g., LIME [18]) because their application to SNNs is not trivial. Moreover, ANN-based explanations do not rely on SNN model internals and hence might not capture the true model behavior [20]. Our aim is an SNN-specific explanation method.

§.§ Evaluating SNN Explanations

In contrast to evaluating the predictive performance of models with quasi-standard evaluation metrics (e.g., F-score, AUC), evaluating explanations is an ongoing research topic. Since the recipients of explanations are humans and explanations are usually context-dependent, there is no standard evaluation protocol [12]. In addition, a good explanation fulfills several different properties, e.g., correctness (faithfulness in explaining the model behavior) and human-comprehensibility, among others [15, 21, 22]. Moreover, evaluating explanations is challenging because the ground truth (what the model actually learned) is rarely known. To overcome this issue, one could apply the “Controlled Synthetic Data Check” [15], where a model is applied to (structured) synthetic datasets such that the true data distribution is known, e.g., [23]. We apply this method by constructing a synthetic data set for a classification problem on two input sensors (cf. Section <ref>).

The evaluations of FSFs and SAM were each focused on one aspect of explanation quality: Jeyasothy et al. [13] evaluated “reliability” by using FSFs instead of model weights in the same prediction task, testing how correctly the FSFs capture the global model behavior. Kim & Panda [14] tested the “accuracy” of SAM by comparison with an existing heatmap explanation on ANNs, i.e., how coherent and aligned SAM explanations are with other explanations.
In our work, we perform a multi-faceted evaluation based on the Co-12 framework for evaluating XAI [15] on a synthetic and a real-world data set. More specifically, we evaluate correctness, output-completeness, continuity, and compactness as defined by Nauta et al. [15]. We chose this subset of Co-12 properties as we focus on studying the content of TSA explanations first, before considering presentation- and user-related properties.

§ TEMPORAL SPIKE ATTRIBUTION (TSA)

SNNs learn internal weights during training and process data as spike trains (cf. Section <ref>). Whereas the Spike Activation Map (SAM) [14] considers only spike trains to generate explanations, TSA captures all information available in the model for a prediction $\hat{y}$ at time $T$ of one $D$-dimensional input $x^{D\times T}$. This information comprises (i) spike times $S^{(l)}$, (ii) learned weights $W$, and (iii) membrane potentials at the output layer $U^{(L)}$. Each component has an influence on the output $\hat{y}$ and should therefore be included in the explanation. We describe the individual components in Sections <ref> to <ref> and their integration into a feature attribution explanation in Section <ref>.

§.§ Influence of Hidden Neuron Spike Times

In temporal coding, the information about the data is assumed to be in the exact spike times of a neuron [2]. The spike times indicate the attribution of neurons $I$ to their downstream neurons $J$ and represent the model's activation in a prediction. Hence, the spike times influence the prediction. The relationship between the spike times of $I$ and their attribution to $J$ is captured in [14]'s neuronal contribution score (NCS). The NCS is characterized by $\gamma$, which specifies the steepness of the exponential decay over time. We define the decay at the same rate as the decay of the LIF neuron's membrane potential $u$ to reflect the dynamics of the model. While we build on the NCS, our spike time component $N_{i}(t)$ additionally considers the absence of spikes as an information carrier.

In a fully connected SNN, each neuron of layer $l$ is connected with each neuron of layer $l+1$. The weighted sum of the neurons' spiking behavior in $l$ determines the amount by which the membrane potentials of neurons in $l+1$ are changed. Absent spikes do not contribute to this sum. Hence, if a neuron $i$ does not spike at time $t$, it does not contribute actively to a change in $J$, allowing a natural decay. Absent spikes can thus be understood in two ways: (i) an absent spike does not affect $u_j$ (cf. Eq. <ref>), or (ii) an absent spike affects $u_j$ by not changing $u_j$ and letting it decay naturally (cf. Eq. <ref>). In the second case, the attribution of absent spikes to the postsynaptic neuron is negative. However, absent spikes should not weigh as much as spikes, because their effect is highly dependent on other incoming synapses. We weigh the contribution of absent spikes by $\frac{1}{B}$ as an approximation of their attribution factor, with $B$ being the size of the preceding layer. This approximation is simple and reflects the relative magnitude of a non-spiking neuron's attribution.
Formally, we calculate the spike time component $N_{i}(t)$ as follows:

\begin{align} \small \label{eq:ingredient_spikes_s} N^{S}_{i}(t) &= \sum_{t'=0}^{t} \begin{cases} \exp (-\gamma |t - t'|) & \text{if } x_{i, t'}=1\\ 0 & \text{Otherwise} \end{cases} \end{align}

only considering the presence of spikes, and

\begin{align} \small \label{eq:ingredient_spikes_ns} N^{NS}_{i}(t) &= \sum_{t'=0}^{t} \begin{cases} \exp (-\gamma |t - t'|) & \text{if } x_{i, t'}=1\\ -\frac{1}{B}\exp (-\gamma |t - t'|) & \text{Otherwise} \end{cases} \end{align}

when including the information about absent spikes. In our experiments, we compare both variants as TSAS (spikes only, according to Eq. <ref>) and TSANS (non-spikes included, according to Eq. <ref>).

§.§ Influence of Model Weights

The weights $W$ represent the strengths of connections of an SNN and have so far not been considered in feature attribution explanations for SNNs. We specifically include weights to capture the contribution of connections to the predictions of an SNN. Weights determine the impact of spikes on downstream neurons $J$ directly: the weight value indicates the weight's attribution to a neuron $j \in J$, and the sign specifies whether the synapse is excitatory or inhibitory, i.e., whether it leads to an increase or decrease of $u_j$. Since weights are a property of the model, i.e., independent of the input, the weight contribution obtains its meaning in combination with the other components.

§.§ Influence of Output Layer

The output layer is the last computational layer, i.e., the basis for prediction. The output layer consists of spiking neurons, thus the prediction is dictated either by spike patterns $S^{(L)}$ or membrane potentials $U^{(L)}$. In the first case, the influence of the output layer can be captured by the NCS. The SNN models in our work follow [24]'s architecture and make predictions based on the latter, i.e., the largest $u_i^{(L)}$ determines the predicted class. We capture the effect of the output layer on a model prediction by considering the classification softmax probability $P(t)$ in the computation of an explanation:

\begin{equation} \Vec{P}(t) = \text{softmax}(U^{(L)}) \end{equation}

§.§ Computation of TSA

TSA combines neuron spike times, model weights, and output layer information in a forward pass into a final score, as shown in Algorithm <ref>. A neuron $i$ transmits its spike train $s_i$ to the downstream computational layers. The neuron is connected to the next layer $l+1$ via the weight matrix $W^{(l)}$. Spike time information is captured by $\Vec{N}^{(l)}(t)$, model weights are contained in $W$, and output layer information, i.e., membrane potentials, is encoded in $\Vec{P}(t)$. The spike times and the weights are combined by multiplying the diagonal matrix of $\Vec{N}^{(l)}(t')$ with $W^{(l)}$ per layer (line 7 in Algorithm <ref>). The resulting matrix can be computed for the input layer and all hidden layers. The result is a set of matrices consisting of scores that represent how the presynaptic neurons contribute to the postsynaptic neurons under direct consideration of the weights. The values are aggregated in a forward pass through the model. This represents how the input influences the model's neurons and is captured by the input contribution $C_I(t)$ (line 8). The final feature attribution $A(x, t) \in \mathbb{R}^{O\times D \times t}$ is computed by multiplying $C_I(t)$ with the softmax probabilities (line 10). The absence of spikes can be understood as either not affecting (Eq. <ref>) or affecting downstream neurons (Eq. <ref>); we compare both variants as TSAS (spikes only) and TSANS (non-spikes included), and the computation in line 6 of Algorithm <ref> differs accordingly.

Algorithm <ref>: Temporal Spike Attribution. Let $x\in\mathbb{R}^{D\times T}$ be an input to SNN $f$ with $L$ layers, $S^{(l)}$ a layer's spike trains, $U^{(L)}$ the output layer's membrane potentials, $W^{(l)}$ the weight matrix connecting layers $l$ and $l+1$, and $t$ the explanation time.

1:  $S^{(1)}, ..., S^{(L-1)}, U^{(L)} \gets f(x)$
2:  $\Vec{P}(t) \gets \text{softmax}(U^{(L)})$
3:  for $t' = 0, 1, 2, ..., t$ do
4:      Initialize $C_I(t') = I \in \mathbb{R}^{D\times D}$
5:      for $l = 1, 2, ..., L-1$ do
6:          Compute $\Vec{N}^{(l)}(t')$
7:          $N_W^{(l)}(t') \gets \text{diag}[\Vec{N}^{(l)}(t')]\cdot W^{(l)}$
8:          $C_I(t') \gets C_I(t') \cdot N_W^{(l)}(t')$
9:      end for
10:     $A(x, t') \gets C_I(t')\cdot \text{diag}(\Vec{P}(t'))$
11:     Concatenate $A(x, t')$ to attribution map $A(x, t)$
12: end for
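To make Algorithm <ref> concrete, the following is a minimal NumPy sketch of the TSA computation at a single explanation time $t$ for a fully connected SNN. It reflects our reading of the algorithm and is not the released code (see our GitHub repository for that); the names `ncs` and `tsa`, the array layout, and the numerically stabilized softmax are our own choices, and the decay constant `gamma` is assumed given. The default implements the TSANS variant (Eq. for $N^{NS}$); setting `include_absent=False` yields TSAS.

```python
import numpy as np

def ncs(spikes, t, gamma, include_absent=True):
    """Spike time component N_i(t) for one layer.
    spikes: (num_neurons, T) binary spike trains. Emitted spikes contribute
    exp(-gamma * |t - t'|); absent spikes contribute -1/B times that decay."""
    B = spikes.shape[0]                            # size of the preceding layer
    decay = np.exp(-gamma * np.arange(t, -1, -1))  # exp(-gamma*(t - t')), t' = 0..t
    present = (spikes[:, : t + 1] * decay).sum(axis=1)
    if not include_absent:
        return present                             # TSAS (Eq. N^S)
    absent = -((1 - spikes[:, : t + 1]) * decay).sum(axis=1) / B
    return present + absent                        # TSANS (Eq. N^NS)

def tsa(x_spikes, hidden_spikes, weights, u_out, t, gamma, include_absent=True):
    """Attribution of D input dimensions to O output classes at time t.
    x_spikes: (D, T); hidden_spikes: list of (H_l, T) arrays; weights: list of
    W^(l) with shapes (D, H_1), ..., (H_{L-1}, O); u_out: (O,) potentials at t."""
    p = np.exp(u_out - u_out.max())
    p /= p.sum()                                   # P(t) = softmax(U^(L))
    C = np.eye(x_spikes.shape[0])                  # C_I initialized to identity
    for S_l, W_l in zip([x_spikes] + hidden_spikes, weights):
        N_l = ncs(S_l, t, gamma, include_absent)   # line 6 of Algorithm 1
        C = C @ (np.diag(N_l) @ W_l)               # lines 7-8: aggregate forward
    return C @ np.diag(p)                          # line 10: (D, O) attribution map
```

Looping this computation over $t' = 0, ..., t$ and stacking the results reproduces the full attribution map over time.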
§ EXPERIMENTAL SETUP

We demonstrate TSAS and TSANS on both synthetic and real-world data using fully connected SNNs of different depths. Additionally, we compare the quality of the extracted explanations to SAM [14].

§.§ Data Sets

Synthetic data sets are commonly used in XAI research as the data's true distribution is known [23]. For the synthetic data set, we chose a simple task of classifying two-dimensional, binary time series data of varying length, i.e., $x_{i,t} \in \{0,1\}$, into one of four classes, equivalent to a logical OR. An example is shown in Figure <ref>. The underlying idea of such a simple, synthetic dataset is that the SNN should learn and use the same reasoning as the data generation process, which is known a priori. We can then evaluate whether the explanation shows the expected reasoning. We generate the data by sampling both the duration and activation of $x_i$ at random. The maximum duration for an activity is set at 600 time steps, and we generate 900,000 time steps sequentially as the entire data set. Once the data is generated, we add labels per time step according to the data, as shown below. We perform a 70%-30% sequential train-test split, i.e., the first 70% of the data constitute the training and the remaining 30% the test set.

Label   $x_1$   $x_2$
A       0       0
B       1       1
C       0       1
D       1       0

Synthetic data set. Class label assignment (top) and example time series (bottom).
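For concreteness, here is a minimal sketch of this generation and labeling process (our own illustrative code, not the released generation script; the label map follows the table above):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def generate_synthetic(total_steps=900_000, max_duration=600):
    """Two binary input channels built from randomly sampled constant blocks,
    labeled per time step: A=(0,0), B=(1,1), C=(0,1), D=(1,0)."""
    x = np.zeros((2, total_steps), dtype=np.int8)
    for i in range(2):
        t = 0
        while t < total_steps:                          # sample blocks sequentially
            duration = int(rng.integers(1, max_duration + 1))
            x[i, t:t + duration] = rng.integers(0, 2)   # block is active or inactive
            t += duration
    label_map = np.array(["A", "C", "D", "B"])          # indexed by 2*x1 + x2
    y = label_map[2 * x[0] + x[1]]
    return x, y

x, y = generate_synthetic()
split = int(0.7 * x.shape[1])                           # sequential 70%-30% split
x_train, y_train = x[:, :split], y[:split]
x_test, y_test = x[:, split:], y[split:]
```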
For a real-world scenario, we use the “Activities of Daily Living Recognition using Binary Sensors” data set (ADL) [25]. ADL is an imbalanced multivariate time series data set that can be used for multi-class classification. The data was collected from a wireless binary sensor network installed in the homes of two subjects over 35 days at a time granularity of one second. The data set includes 10 classes specifying different activities of the subjects, e.g., Sleeping or Toileting. ADL is openly available in the UCI machine learning repository. The sensors are human-interpretable, e.g., activation of the Bed sensor means that the subject is lying in their bed (cf. Figure <ref>). Since the data is human-understandable, feature attributions in this input space are easily understandable (e.g., “The most attributing feature is the bed sensor activation at time $t$. Hence, it is important for the model that the subject lies in bed at $t$ for predicting the activity ‘Sleeping’ at $t$.”). The SNN models are trained to predict the activity at each time step. The input data is converted to spikes, and we apply no other data transformations. Thus, the neural coding is a direct mapping of spike times. As a preprocessing step, we add a constantly spiking sensor as a bias input to the data. Gaps between activities were filled with inactivity in all sensors. We introduce the Other class for these time gaps.[2] We treat the data as one long time series per subject and split the data set sequentially into a training (60%), validation (20%), and test (20%) set to preserve temporal dependencies.

[2] We found two cases where the activity end precedes the start (Index 78, 80 of subject A). We excluded these activities from the data set, as the data collection or labeling was faulty, and filled their time with inactivity.

Data sample of subject A from the ADL dataset with either the Basin or the Seat sensor active.

§.§ SNN Models

We train three SNNs with 1, 2 and 3 hidden layers, denoted SNN-1L, SNN-2L and SNN-3L, to evaluate the effect of network depth on explanation quality, where the respective size of the hidden layers is determined by hyperparameter tuning. The models are implemented as recurrent networks with binary activations, using discretized formulas of the model dynamics as in [24]. We train the models in a similar fashion, i.e., using backpropagation with surrogate gradients. We adapt the training procedure to our data set, which exhibits a temporal order (i.e., activities follow one after another in time). The membrane potential state $u(t)$ is retained between data samples to reflect the temporal dependencies, so the model state is initialized with the final state of the preceding simulation. While the main focus of our work is explaining SNNs and not their optimization, the models should perform reasonably well, so that the models have learned something worth explaining. Therefore, we do a hyperparameter search in a greedy optimization process for 20 epochs under the assumption of independence of hyperparameters for the ADL task (Table <ref>). Due to the simplicity of the synthetic task, we omit hyperparameter tuning.

Results of greedy hyperparameter search. Tested ranges are: {0.01, 0.001, 0.0001} for $\Delta t$ and learning rate; {0.1, 0.01, 0.001} for $\tau_{syn}$ and $\tau_{mem}$; {128, 256, 512} for batch size; {25, 50, 100, 200} for hidden layer sizes.

                          ADL                         Synthetic
Hyperparameter    SNN-1L    SNN-2L    SNN-3L    All SNNs
$\Delta t$        0.001     0.001     0.001     0.001
$\tau_{syn}$      0.01      0.01      0.01      0.01
$\tau_{mem}$      0.01      0.001     0.01      0.001
Learning rate     0.01      0.001     0.001     0.001
Batch size        128       256       512       128
Hidden size 1     -         100       50        10
Hidden size 2     -         -         25        10

The final models were fully retrained on the training set with early stopping with a patience of 10 epochs, monitoring the validation loss. We compare the SNN models against the majority baseline, i.e., a classifier that always predicts the most represented class in the training set, in terms of balanced accuracy [26] at the 95% confidence interval (CI):

\begin{equation} \small \textrm{CI} = 1.96\sqrt{\frac{\text{Balanced Accuracy}*(1-\text{Balanced Accuracy})}{N}} \end{equation}

where $N$ is the number of samples. The synthetic data set is balanced with four classes, thus the majority baseline has a balanced accuracy of .25, while in ADL the majority class (Sleeping) leads to a balanced accuracy of .09.
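For reference, the interval half-width from the equation above reduces to a one-line helper (a sketch of ours; the example numbers are illustrative):

```python
import math

def balanced_accuracy_ci(bacc: float, n: int) -> float:
    """95% CI half-width for a balanced accuracy estimate over n samples,
    using the normal approximation in the equation above."""
    return 1.96 * math.sqrt(bacc * (1 - bacc) / n)

# Example: bacc = 0.516 over n = 100,000 test steps gives a half-width of ~0.003.
print(balanced_accuracy_ci(0.516, 100_000))
```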
SNN model performance for the synthetic and ADL tasks in balanced accuracy (%) at the 95% CI. Results are based on one training run. The synthetic data set contains no validation split (n.a.). The baseline is a majority vote.

Data        Model      Test              Train             Validation
Synthetic   SNN-1L     98.3 $\pm$ 0.0    98.3 $\pm$ 0.0    n.a.
            SNN-2L     95.6 $\pm$ 0.1    95.5 $\pm$ 0.1    n.a.
            SNN-3L     93.3 $\pm$ 0.1    93.2 $\pm$ 0.1    n.a.
            Baseline   25.0              25.0              n.a.
ADL         SNN-1L     51.6 $\pm$ 0.1    50.6 $\pm$ 0.1    53.6 $\pm$ 0.1
            SNN-2L     51.7 $\pm$ 0.1    51.5 $\pm$ 0.1    54.9 $\pm$ 0.1
            SNN-3L     50.0 $\pm$ 0.1    49.0 $\pm$ 0.1    52.0 $\pm$ 0.1
            Baseline   9.1               9.1               9.1

Table <ref> reports model performance for both data sets. In the synthetic task, all models achieve high accuracy ($>$ 0.9), where greater model depth correlates with lower performance. Since learning in SNNs is an active research area itself [4], the reason for this phenomenon is unclear. Still, all models are able to solve the synthetic task well and are therefore useful for our analysis of well-performing SNNs. All models learn the ADL task similarly well, significantly outperforming the baseline (selecting the majority class) while not overfitting (cf. Table <ref>). While other work reports higher performance on ADL with complex ANNs [27], our SNN models are sufficiently accurate for evaluating TSA in a real-world setting.

§.§ Evaluation

Because explanation quality is a multidimensional property (e.g., [15]), we quantify the explanation quality of TSAS and TSANS objectively, using mainly content-related properties of the Co-12 framework [15]. We use SAM as a baseline to investigate whether the incorporation of model weights improves explanations.[3] To the best of our knowledge, we are the first to apply such quantitative evaluation measures to explanations for SNNs.

[3] We note that [14] developed SAM to explain image data and did not claim that these maps are applicable to other data types. Still, we believe it is applicable, as it uses only spike times, and applied it to our time series data sets.

§.§.§ Evaluation Setup

Because we simulate our SNN models in our experiments as recurrent models with binary activations [24], i.e., in a non-neuromorphic environment, calculating TSA on all test data is computationally intractable. For an accurate and balanced evaluation of our explanations, we sub-sample the test data, choosing the same number of samples per class, to identify an explanation evaluation set. We assume that relevant features for the prediction are contained within a fixed time window of $[t-1000, t]$, where $t$ is the prediction time. For the synthetic data, we randomly select 25 samples per class (i.e., 100 overall). For each ADL subject, nine samples across the start, middle, and end of an activity are chosen per class in the test set. The start and end of the activity are defined as the first and last minute of this activity, respectively. Given these constraints, we sample $t$ at random, resulting in 180 samples (81 and 99 for subjects A and B).

§.§.§ Evaluation Measures

Correctness refers to whether the explanation reflects the true behavior of the model, which is universally desirable. To measure correctness, we incrementally delete ranked feature segments (i.e., most attributing first) and record model prediction performance [15]. The area under the curve of the resulting graph represents explanation correctness as explanation selectivity [28]. A low score is desirable, as the model performance is expected to drop significantly when highly attributing feature segments are deleted. We define feature segments as runs of contiguous, strictly positively or negatively attributing time steps within one input dimension $d$ of $x$ that are at most 10 seconds long. This duration is assumed to capture the temporal dependencies at an appropriate information coarseness for both classification tasks posed by the synthetic and ADL data sets.
Moreover, the attribution values are not expected to vary strongly within 10 seconds if they are uniformly positive or negative. We implement this incremental feature deletion in temporal data as an inversion of the input spike train, similar to the perturbations proposed by [29] for the evaluation of XAI methods on time series data.

Output-completeness assesses the extent to which the set of identified important features $F$ is sufficient for prediction $\hat{y}$, i.e., $F\rightarrow\hat{y}$. A perfectly output-complete explanation covers all important features relevant to the prediction but might include more features than necessary (cf. the compactness criterion). We evaluate output-completeness by deleting unimportant feature segments and reporting the model performance upon deletion [15]. We define unimportant feature segments as having zero attribution, because TSA produces unscaled feature attribution explanations for which an importance threshold is difficult to define. Contrary to the evaluation of correctness, we do not invert the unimportant feature segments to delete them. For correctness, we incrementally change small parts of the data, so the perturbation is not large in the beginning but accumulates. For evaluating output-completeness, however, we delete many features at once; if we inverted all the unimportant features at once, the data would become unrealistic and not reflect the original data distribution (e.g., multiple sensors active around the house while the subject cannot be in multiple locations at once). Instead, we shuffle the unimportant features randomly in the time domain to delete any effect they have at their original time. This resembles the permutation importance method for structured data [30], where the importance of features is measured by how much a score, in our case the model performance, changes upon random permutation of the features. If feature segments are truly unimportant, the model predictions should not change.

Continuity refers to an explanation method's capability to generalize. An explanation method is continuous if it generates similar explanations for similar inputs. As non-continuous behavior is difficult for a user to understand, continuity is desirable. We measure continuity by inspecting stability under slight variations of the input data. We use max-sensitivity, defined as the maximum Frobenius norm of the difference between the explanations on original and slightly varied data [31]:

\begin{equation} \label{eq: max-sensitivity} \textrm{Sens}_{\textrm{max}}(e, f, x, t, r) = \max_{||x'-x|| \leq r}||e(f,x', t)-e(f,x, t)|| \end{equation}

where $e$ refers to the explanation, $f$ to the model, $x$ to the input data, $t$ to the current time step, and $r$ defines the neighborhood region in which perturbed data $x'$ is still considered similar to $x$. We vary the data randomly by perturbing the duration of active sensors in the range of 10% of the original duration. Such perturbations are realistic given the task of activity prediction, where the task duration is not rigid and different instances of the same activity can have a different pace, e.g., taking a bit more time for the activity “shower”.

Compactness refers to the size of an explanation, where a compact, in our case sparse, attribution map is desirable. We measure compactness as the mean size of all extracted explanations, i.e., the sum of absolute attribution values:

\begin{equation} \textrm{Compactness}(A) = \frac{1}{N}\sum_i^D\sum_j^t |a_{i,j}| \end{equation}

where $A$ is the attribution matrix of dimension $D\times t$, i.e., input dimensionality $D$ times the time step to be explained, and $N$ denotes the total number of explanations extracted for the experiment.
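As an illustration, minimal sketches of the continuity and compactness measures follow; the function names and the Monte-Carlo sampling of perturbations are our own simplifying assumptions, and `explain` and `perturb` are assumed callables (the explanation function $e$ and a perturbation within radius $r$, respectively).

```python
import numpy as np

def max_sensitivity(explain, f, x, t, perturb, n_samples=10):
    """Monte-Carlo estimate of max-sensitivity (Eq. above): the largest Frobenius
    norm between the explanation on x and explanations on perturbed inputs x'."""
    a = explain(f, x, t)
    return max(np.linalg.norm(explain(f, perturb(x), t) - a)
               for _ in range(n_samples))

def compactness(attribution_maps):
    """Mean explanation size: average sum of absolute attribution values
    over all extracted attribution maps A of shape (D, t)."""
    return float(np.mean([np.abs(A).sum() for A in attribution_maps]))
```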
§ RESULTS

In addition to the quantitative analysis in Section <ref>, we present visual examples of explanations in Section <ref> to give an impression of TSA and analyze its coherence [15].

§.§ Quantitative Results

Quantitative evaluation results at the 95% CI of our TSAS and TSANS explanations compared to SAM [14] on the synthetic and ADL data sets. Arrows indicate the direction of better performance. Continuity is measured as max-sensitivity (no CI).

Synthetic                        TSAS                TSANS               Baseline (SAM)
Correctness $\downarrow$
  SNN-1L                         0.404 $\pm$ 0.096   0.115 $\pm$ 0.063   0.628 $\pm$ 0.095
  SNN-2L                         0.809 $\pm$ 0.077   0.426 $\pm$ 0.097   0.762 $\pm$ 0.083
  SNN-3L                         0.813 $\pm$ 0.076   0.520 $\pm$ 0.098   0.822 $\pm$ 0.075
Output-completeness $\uparrow$
  SNN-1L                         0.726 $\pm$ 0.087   1.000 $\pm$ 0.000   0.737 $\pm$ 0.086
  SNN-2L                         0.692 $\pm$ 0.090   1.000 $\pm$ 0.000   0.724 $\pm$ 0.088
  SNN-3L                         0.600 $\pm$ 0.096   0.989 $\pm$ 0.020   0.680 $\pm$ 0.091
Continuity $\downarrow$
  SNN-1L                         0.282               0.334               0.679
  SNN-2L                         0.021               0.023               0.863
  SNN-3L                         0.001               0.002               0.676
Compactness $\downarrow$
  SNN-1L                         0.344 $\pm$ 0.009   0.411 $\pm$ 0.005   1.143 $\pm$ 0.029
  SNN-2L                         0.007 $\pm$ 0.000   0.011 $\pm$ 0.000   0.760 $\pm$ 0.031
  SNN-3L                         0.000 $\pm$ 0.000 (a)  0.001 $\pm$ 0.000   0.321 $\pm$ 0.011

ADL                              TSAS                TSANS               Baseline (SAM)
Correctness $\downarrow$
  SNN-1L                         0.086 $\pm$ 0.041   0.006 $\pm$ 0.011   0.329 $\pm$ 0.069
  SNN-2L                         0.633 $\pm$ 0.070   0.170 $\pm$ 0.055   0.655 $\pm$ 0.069
  SNN-3L                         0.505 $\pm$ 0.073   0.154 $\pm$ 0.053   0.377 $\pm$ 0.071
Output-completeness $\uparrow$
  SNN-1L                         0.996 $\pm$ 0.009   1.000 $\pm$ 0.000   0.624 $\pm$ 0.071
  SNN-2L                         0.658 $\pm$ 0.069   0.957 $\pm$ 0.030   0.649 $\pm$ 0.070
  SNN-3L                         0.631 $\pm$ 0.070   0.991 $\pm$ 0.014   0.553 $\pm$ 0.073
Continuity $\downarrow$
  SNN-1L                         0.652               0.474               2.715
  SNN-2L                         0.011               0.009               4.338
  SNN-3L                         0.002               0.002               53.287
Compactness $\downarrow$
  SNN-1L                         0.730 $\pm$ 0.109   1.651 $\pm$ 0.138   9.634 $\pm$ 5.337
  SNN-2L                         0.002 $\pm$ 0.000   0.003 $\pm$ 0.000   2.183 $\pm$ 0.633
  SNN-3L                         0.001 $\pm$ 0.000   0.001 $\pm$ 0.000   49.197 $\pm$ 293.630

(a) While very small, the explanation size is $>0$. This value is rounded.

The results of the quantitative analysis are presented in Table <ref>. Similar explanation performance trends can be observed in the synthetic and ADL experiments: in terms of correctness and output-completeness, TSANS is clearly superior to TSAS and SAM. This shows that the SNN's output is also based on the absence of spikes and that considering the absence of spikes is relevant for the explanation (cf. the detailed discussion in Section <ref>). TSAS, however, does not show a clear improvement over SAM in correctness, implying that adding weight information to spike times alone does not significantly improve how well the explanation captures the model's true behavior. SAM is slightly better than TSAS in terms of output-completeness in the synthetic task, while TSAS is better in the ADL experiments. Investigating the extent to which the inclusion of model weights in the computation of a feature attribution explanation for SNNs improves the explanation's completeness in finding all relevant parts of the model input is left for future work.
The results of the continuity evaluation show that both TSAS and TSANS are more stable than SAM in producing similar explanations for similar but slightly different input data. Also in terms of explanation size, TSAS and TSANS generate smaller explanations than SAM, with TSAS producing the most compact explanations.

§.§ Visual Inspection

TSA generates feature attribution explanations, which can be visualized and overlaid with the input data. We visually inspect TSA explanations in the case of correct prediction, misclassification, and deep SNN models. In all examples, we display only the last 7 seconds (synthetic) and 40 seconds (ADL) of the sample due to space constraints. The visualizations show positive and negative class attributions in red and blue, respectively. White corresponds to an attribution value of zero. The color scale is explanation-specific and dictated by the largest absolute attribution value. The $y$-axis shows the input dimensions, i.e., the sensors of the data set. Sensor activation is visualized by spikes (vertical lines) at a time step.

§.§.§ Correct Predictions

Example visualizations for correct predictions of SNN-1L in the synthetic task are shown in Figure <ref>. Examples of the ADL task are shown in Figure <ref>.

TSANS (left), TSAS (center), and SAM (right) activation of class C for SNN-1L's correct prediction of $y=$ C for the synthetic task. Best viewed in color.

Attribution maps of our method (TSAS and TSANS breakfast class activation) compared to SAM [14] (SAM breakfast class activation) for the same example of a correct prediction of SNN-1L ($y=$ Breakfast). Best viewed in color.

Overall, the explanations generated by TSANS, TSAS, and SAM for SNN-1L seem quite similar: recent time steps attribute more strongly than time steps further in the past. All explanations also recognize the spiking input as the important part, while TSANS additionally assigns attribution to non-spiking parts of the data. Both TSAS and TSANS show different attribution strengths between different input dimensions for the same time step, while SAM seems to assign similar non-zero values to the same time steps regardless of the input dimension. Explanations generated by SAM do not differentiate between positive and negative attributions.

§.§.§ Misclassifications

In Figure <ref>, explanations for an incorrect prediction of the model are displayed for the true class (Breakfast) as well as the predicted class (Lunch). While both maps look very similar, there are subtle differences. In both cases, the Cupboard sensor activation attributes positively to the classes. This makes sense, because a kitchen sensor is likely to be connected to meal-related classes. Still, the map shows that the model connects this sensor's activation with the Lunch class rather than the Breakfast class at this point in the data. It is also noticeable that the model bias attributes negatively to the prediction in both cases, with the negative attribution in the last time step being larger for Lunch. However, in the Breakfast case, the (negative) bias is slightly stronger across time, suggesting that the model is biased against Breakfast. Given the stronger positive attribution of the Cupboard sensor activation and the slight bias against Breakfast, Lunch, a likely similar-looking class, is predicted. This example highlights the informativeness of negative and positive class attribution, as SAM would be unable to make this distinction.
Visualizations of TSAS for an incorrect prediction (TSAS lunch class activation and TSAS breakfast class activation). The numerical values at time steps 3577 and 3600 were added manually. The attribution of the cupboard is stronger for the lunch class than for the breakfast class. Best viewed in color.

§.§.§ Deep SNNs

Figure <ref> shows examples of TSANS explanations extracted from SNN-2L and SNN-3L on the same sample as Figure <ref>. The examples show that the decay rate $\gamma$ is important for SNN models, as it dictates how far into the past spikes can influence the model prediction at time $t$. SNN-2L and SNN-3L have different $\gamma$: SNN-2L, with a steeper decay, can only consider the last two to three time steps, while SNN-3L can look further into the past. Furthermore, the examples also show a limitation of TSA for deep models. Attribution values tend to alternate within an input dimension between positive and negative values, which could be a result of the repeated multiplication of values in $[-1, 1]$ during the aggregation of attribution scores across layers. This indicates a need to further study TSA on deep SNNs.

TSANS explanations from SNN-2L and SNN-3L (TSANS breakfast class activation for SNN-2L and for SNN-3L) on the same example of a correct prediction. Best viewed in color.

§ DISCUSSION

The quantitative evaluation shows that TSA achieves a significant improvement over SAM on both the synthetic and ADL data sets. More specifically, TSANS outperforms SAM in all tested properties, while TSAS is better at explaining SNN-1L and roughly on par with SAM for explaining SNN-2L and SNN-3L. Overall, no local explanation method outperforms all others in all tested properties, which demonstrates the multi-dimensionality of explanation quality.

In comparison to SAM, TSAS and TSANS both consider the model weights directly in the computation of attribution. Since the weights are an essential part of a neural network, we hypothesized that their direct consideration would improve explanation quality. Our results show that this is largely the case, especially for explanations of SNN-1L. For this model, both TSAS and TSANS showed significant improvements over SAM in all tested properties except for output-completeness in the synthetic task. The continuity of TSA could also be positively impacted by considering the weights $W$, as $W$ is a constant factor across time which scales the NCS. Our SNN models exhibit weights with absolute values below 1, hence the attribution values are kept small, leading to more compact explanations. In cases where weight values are large, weights could be normalized for explanation compactness. Additionally, TSA distinguishes between positive and negative attribution because we consider the excitatory and inhibitory nature of $W$, whereas SAM is unable to make this distinction. This can also be observed in the examples shown (Figure <ref>), where TSAS and TSANS both assign a negative bias attribution while SAM marks the bias as positively attributing. Nevertheless, $W$ could potentially cause the decreased explanation performance of TSA on deeper SNNs. Signs may cancel each other out as weighted NCSs are multiplied across layers. Furthermore, $W$ could lead to vanishing attribution for deep models in our case (e.g., indicated by the compactness of deep SNNs), due to repeated multiplication of values with magnitude below 1. This can be investigated in future work.

The results show that absent spikes are relevant. We extended the definition of the NCS with a contribution score for absent spikes in TSANS.
The quantitative results clearly show that inactive input dimensions are relevant to the model prediction. While the models mainly use spikes for their prediction, the inactivity of other neurons is also relevant. Our synthetic data set with logical OR is designed such that the absence of spikes is decisive for the class label, and such reasoning should be correctly reflected in the explanation. In the real-world ADL data set, for example, the activation of the Toaster sensor and the simultaneous inactivity of the bathroom sensors are both important to the model for classifying Breakfast. As both the correctness and output-completeness of TSANS present significant improvements over SAM and TSAS, it is likely that the model has learned the connection between non-spikes and classes. Particularly the experiments with synthetic data verify that this observation is not specific to the models trained on the ADL task, but also holds for accurate temporally coded SNN models. Since SAM and TSAS do not consider absent spikes, they generally perform worse in the evaluation of content-related measures. The evaluation of continuity and compactness uses absolute values to determine the explanation's performance. As SAM and TSAS are restricted to defining attribution only for spikes, the amount of change in the explanation is also limited. Therefore, TSAS explanations are more compact than TSANS explanations. However, the two do not exhibit a large difference in continuity, which indicates that TSA itself is continuous.

§.§ Limitations

While the evaluation results indicate that TSA improves upon SAM in terms of explanation quality, there are limitations to consider. First, TSA, like SAM, is a post-hoc explanation method. This means that it is applied to trained SNN models. Any unreasonable-looking or unexpected explanation could therefore either be rooted in erroneous model behavior or in errors of the explanation method [32]. To ensure that the latter is not the case, we systematically and quantitatively evaluated TSA in our work, where we found that the applicability of TSA to deep SNNs likely requires further research. Second, we tested only SNNs based on LIF neurons. While TSA relies only on spike times, membrane potentials and weights, and is therefore applicable to any spike generation mechanism, the computation of the NCS requires a specified decay parameter $\gamma$. With LIF and other integrate-and-fire models, the choice of $\gamma$ is straightforward. For more complex models that do not specify $\gamma$ directly (e.g., Hodgkin-Huxley neurons [33]), $\gamma$ must be defined before TSA can be used for explanation. Third, the evaluation does not consider user-related tasks, since the technical feasibility of the method was in focus first. We did not explicitly test for comprehensibility, but it is an important property of explanations [11]. Hence, TSA as an explanation method for SNNs is not yet mature enough for deployment. Instead, it offers an effective starting point for further research on explanations for SNNs.

§ CONCLUSION

We present a local explanation method to address the outcome explanation problem for SNNs. We define Temporal Spike Attribution (TSA), a feature attribution explanation that uses model-internal variables to explain predictions in time series classification. The two variants of TSA, TSAS and TSANS, differ in whether they consider the contribution of absent spikes. We demonstrated TSA with three SNNs of different depths on temporal data and evaluated it in a multi-faceted quantitative analysis.
We found that TSANS is superior in correctness and output-completeness, which shows the importance of considering absent spikes. There is no substantial difference between the TSA variants in terms of continuity and compactness. In addition, a decrease in the quality of TSA is observed for deep SNNs, which we attribute to the aggregation across layers. We find that it is advantageous to consider all available information for explaining SNN predictions. Future studies could focus on the explanation of deep SNNs and on human-comprehensibility based on TSA.

[1] W. Maass, “Networks of spiking neurons: The third generation of neural network models,” Neural Networks, vol. 10, no. 9, pp. 1659–1671, 1997.
[2] W. Gerstner, W. M. Kistler, R. Naud, and L. Paninski, Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge: Cambridge University Press, 2014.
[3] F. Ponulak and A. Kasinski, “Introduction to spiking neural networks: Information processing, learning and applications,” Acta Neurobiol. Exp. (Wars.), vol. 71, no. 4, pp. 409–433, 2011.
[4] X. Wang, X. Lin, and X. Dang, “Supervised learning in spiking neural networks: A review of algorithms and evaluations,” Neural Networks, vol. 125, pp. 258–280, 2020.
[5] A. F. Murray, Pulse-Based Computation in VLSI Neural Networks, pp. 87–109. Cambridge, MA, USA: MIT Press, 1999.
[6] S. Sharmin, P. Panda, S. S. Sarwar, C. Lee, W. Ponghiran, and K. Roy, “A comprehensive analysis on adversarial robustness of spiking neural networks,” in 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–8, 2019.
[7] Z. Bing, C. Meschede, F. Röhrbein, K. Huang, and A. C. Knoll, “A survey of robotics control based on learning-inspired spiking neural networks,” Frontiers in Neurorobotics, vol. 12, 2018.
[8] M. R. Azghadi, C. Lammie, J. K. Eshraghian, M. Payvand, E. Donati, B. Linares-Barranco, and G. Indiveri, “Hardware implementation of deep network accelerators towards healthcare and biomedical applications,” IEEE Transactions on Biomedical Circuits and Systems, vol. 14, no. 6, pp. 1138–1159, 2020.
[9] J. He, S. L. Baxter, J. Xu, J. Xu, X. Zhou, and K. Zhang, “The practical implementation of artificial intelligence technologies in medicine,” Nature Medicine, vol. 25, no. 1, pp. 30–36, 2019.
[10] A. Adadi and M. Berrada, “Peeking inside the black-box: A survey on explainable artificial intelligence (XAI),” IEEE Access, vol. 6, pp. 52138–52160, 2018.
[11] R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi, “A survey of methods for explaining black box models,” ACM Computing Surveys, vol. 51, no. 5, pp. 1–42, 2019.
[12] C. Molnar, G. Casalicchio, and B. Bischl, “Interpretable machine learning – a brief history, state-of-the-art and challenges,” in ECML PKDD 2020 Workshops, (Cham), pp. 417–431, Springer International Publishing, 2020.
[13] A. Jeyasothy, S. Sundaram, S. Ramasamy, and N. Sundararajan, “A novel method for extracting interpretable knowledge from a spiking neural classifier with time-varying synaptic weights,” CoRR, vol. abs/1904.11367, 2019.
[14] Y. Kim and P. Panda, “Visual explanations from spiking neural networks using inter-spike intervals,” Scientific Reports, vol. 11, no. 1, 2021.
[15] M. Nauta, J. Trienes, S. Pathak, E. Nguyen, M. Peters, Y. Schmitt, J. Schlötterer, M. van Keulen, and C. Seifert, “From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI,” ACM Computing Surveys, Feb. 2023. Just Accepted.
[16] B. Kim, M. Wattenberg, J. Gilmer, C. Cai, J. Wexler, F. Viegas, and R. Sayres, “Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV),” in Proceedings of the 35th International Conference on Machine Learning (J. Dy and A. Krause, eds.), vol. 80 of Proceedings of Machine Learning Research, pp. 2668–2677, PMLR, 10–15 Jul 2018.
[17] M. Nauta, R. van Bree, and C. Seifert, “Neural prototype trees for interpretable fine-grained image recognition,” in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 14928–14938, 2021.
[18] M. T. Ribeiro, S. Singh, and C. Guestrin, “"Why should I trust you?": Explaining the predictions of any classifier,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, (New York, NY, USA), pp. 1135–1144, Association for Computing Machinery, 2016.
[19] S. M. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS '17, (Red Hook, NY, USA), pp. 4768–4777, Curran Associates Inc., 2017.
[20] R. Poyiadzi, X. Renard, T. Laugel, R. Santos-Rodríguez, and M. Detyniecki, “On the overlooked issue of defining explanation objectives for local-surrogate explainers,” ArXiv, vol. abs/2106.05810, 2021.
[21] U. Bhatt, A. Weller, and J. M. F. Moura, “Evaluating and aggregating feature-based model explanations,” in Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020 (C. Bessiere, ed.), pp. 3016–3022, ijcai.org, 2020.
[22] R. Srinivasan and A. Chander, “Explanation perspectives from the cognitive sciences – a survey,” in Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020 (C. Bessiere, ed.), pp. 4812–4818, ijcai.org, 2020.
[23] Y. Liu, S. Khandagale, C. White, and W. Neiswanger, “Synthetic benchmarks for scientific research in explainable machine learning,” in Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (J. Vanschoren and S. Yeung, eds.), vol. 1, 2021.
[24] E. O. Neftci, H. Mostafa, and F. Zenke, “Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks,” IEEE Signal Processing Magazine, vol. 36, no. 6, pp. 51–63, 2019.
[25] F. J. Ordóñez, P. de Toledo, and A. Sanchis, “Activity recognition using hybrid generative/discriminative models on home environments using binary sensors,” Sensors, vol. 13, no. 5, pp. 5460–5477, 2013.
[26] L. Mosley, A balanced approach to the multi-class imbalance problem.
[27] R. A. Hamad, M. Kimura, L. Yang, W. L. Woo, and B. Wei, “Dilated causal convolution with multi-head self attention for sensor human activity recognition,” Neural Computing and Applications, 2021.
[28] G. Montavon, W. Samek, and K. Müller, “Methods for interpreting and understanding deep neural networks,” Digital Signal Processing, vol. 73, pp. 1–15, 2018.
[29] U. Schlegel, H. Arnout, M. El-Assady, D. Oelke, and D. A. Keim, “Towards a rigorous evaluation of XAI methods on time series,” in 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp. 4197–4201, 2019.
[30] L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[31] C.-K. Yeh, C.-Y. Hsieh, A. Suggala, D. I. Inouye, and P. K. Ravikumar, “On the (in)fidelity and sensitivity of explanations,” in Advances in Neural Information Processing Systems (H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, eds.), vol. 32, Curran Associates, Inc., 2019.
[32] L. H. Gilpin, D. Bau, B. Z. Yuan, A. Bajwa, M. Specter, and L. Kagal, “Explaining explanations: An overview of interpretability of machine learning,” in 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pp. 80–89, 2018.
[33] A. L. Hodgkin and A. F. Huxley, “A quantitative description of membrane current and its application to conduction and excitation in nerve,” The Journal of Physiology, vol. 117, no. 4, pp. 500–544, 1952.
[34] F. Zenke and S. Ganguli, “Superspike: Supervised learning in multilayer spiking neural networks,” Neural Computation, vol. 30, no. 6, pp. 1514–1541, 2018.

§ DETAILS ON DATA SETS

All code for synthetic data set generation as well as preprocessing of the real-world data is available in our GitHub repository <https://github.com/ElisaNguyen/tsa-explanations>.

§.§ Synthetic Data

The synthetic data set is generated as a smaller and simpler version of the real-world data, designed to be easy to learn. It consists of two-dimensional time series data with four classes. The data is binary (i.e., $x_{i,t} \in \{0,1\}$). We generate the data by sampling both the duration and activation of $x_i$ at random. The maximum duration of an activity is set to 600 time steps, and we generate 900 000 time steps sequentially as the entire data set. Once the data is generated, we add labels per time step according to the data, as specified in Figure <ref> of the main paper. There are no mislabeled time steps.

§.§ Real-world Data

The real-world data set we use is the Binary ADL data set [25], which is openly available in the UCI machine learning repository. The data set consists of the start and end times of activities and their corresponding labels for two subjects, A and B. Given this information, we generate continuous time series across the complete duration of the data set. Gaps between activities were filled with inactivity in all sensors, and we introduce the Other class for these time gaps. We found two cases where the activity end precedes the start (indices 78 and 80 of subject A). We excluded these activities from the data set, as either the data collection or the labeling is faulty, and filled their time with inactivity.

§ SNN MODEL DEFINITION AND TRAINING

The SNN models (SNN-1L, SNN-2L and SNN-3L) were built as fully connected recurrent networks with binary activations, using discretized formulas of the network dynamics in accordance with [24]. The models are likewise trained with surrogate gradient descent using a fast sigmoid surrogate. Moreover, we adapt the training procedure to our data set, which exhibits a temporal order (i.e., activities follow one after another in time). The membrane potential state $u(t)$ is retained between data samples to reflect the temporal dependencies, so the state variables of the model are initialized with the final states from the preceding sample. The maximum membrane potential at the output layer at each time step $\Delta t$ determines the prediction. This allows the use of regular loss functions for optimization. Similar to [34], the models are optimized with a negative log-likelihood loss.
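To make the discretized dynamics concrete, the following is a minimal PyTorch sketch of a single fully connected LIF layer with a fast sigmoid surrogate gradient in the spirit of [24]; the decay constant, the unit threshold, the surrogate steepness, and the omission of a separate synaptic current state are simplifying assumptions for illustration, not the exact implementation from our repository.

```python
import torch

class FastSigmoidSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast sigmoid surrogate in the backward pass."""
    scale = 10.0  # assumed surrogate steepness

    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # derivative of the fast sigmoid: 1 / (scale*|u| + 1)^2
        return grad_output / (FastSigmoidSpike.scale * u.abs() + 1.0) ** 2

def lif_layer(x, w, beta=0.95, u0=None):
    """Simulate one fully connected LIF layer over time.

    x: (batch, T, n_in) binary inputs; w: (n_in, n_out) weights.
    u0 carries the membrane state over from the previous sample,
    reflecting the temporal order of the data."""
    batch, T, _ = x.shape
    u = torch.zeros(batch, w.shape[1], device=x.device) if u0 is None else u0
    spikes = []
    for t in range(T):
        u = beta * u + x[:, t].float() @ w   # leaky integration of the input current
        s = FastSigmoidSpike.apply(u - 1.0)  # spike when the membrane crosses threshold 1
        u = u * (1.0 - s)                    # reset the membrane after a spike
        spikes.append(s)
    return torch.stack(spikes, dim=1), u     # spike train and final state

```

Re-feeding the returned membrane state as `u0` for the next sample mirrors the state carry-over described above.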
While the main focus of our work is explaining SNNs and not their optimization, the models should demonstrate a clear improvement in predictive performance over pure chance and perform reasonably well, so that they have learned something worth explaining. Due to the simplicity of the synthetic classification task, we omit hyperparameter tuning; the model hyperparameters were determined across all models beforehand (Table <ref>).

Hyperparameter          Choice
$\Delta t$              0.001
$\tau_{syn}$            0.01
$\tau_{mem}$            0.001
Learning rate           0.001
Batch size              128
Size of hidden layer    10

Hyperparameters used for model building with the synthetic data set.

Standard hyperparameters are not sufficient for the ADL task. Therefore, the hyperparameters of the networks are tuned in a greedy optimization process for 20 epochs, under the assumption that the hyperparameters are independent for this task. With the tuned hyperparameters, the final models were fully retrained on the training set. As regularization, early stopping with a patience of 10 epochs was used, monitoring the validation loss. The final hyperparameters are shown in Table <ref>.

Hyperparameter            SNN-1L   SNN-2L   SNN-3L
$\Delta t$                0.001    0.001    0.001
$\tau_{syn}$              0.01     0.01     0.01
$\tau_{mem}$              0.01     0.001    0.01
Learning rate             0.01     0.001    0.001
Batch size                128      256      512
Size of hidden layer 1    -        100      50
Size of hidden layer 2    -        -        25

Results of greedy hyperparameter optimization for all models on the binary ADL task. Tested ranges are: {0.01, 0.001, 0.0001} for $\Delta t$ and learning rate, {0.1, 0.01, 0.001} for $\tau_{syn}$ and $\tau_{mem}$, {128, 256, 512} for batch size, {25, 50, 100, 200} for hidden layer sizes.

The scripts used for hyperparameter optimization and model training are provided in our GitHub repository <https://github.com/ElisaNguyen/tsa-explanations>.

§ EVALUATION MEASURE COMPUTATION

In this section, pseudocode and formulas are given for this work's quantitative evaluation of explainability performance. The scripts used in our experiments are also published in our GitHub repository (<https://github.com/ElisaNguyen/tsa-explanations>). Correctness is computed as explanation selectivity [28], shown in algorithm <ref>. The deletion of feature segments is performed as an inversion of the time series, in line with [29].

Algorithm: Explanation selectivity
Let $e$ be the explanation function that results in a feature attribution map $A(x, t)$ describing the attributions to the predicted class, and let $f(x, t)$ denote the model's prediction on an input $x\in X$ at time $t$. Let $R$ be the total number of feature segments of $x$, $N$ the size of the test set $X$, and $Y$ the corresponding ground-truth labels for $X$.
for each $x\in X$:
    for $t = 1,2,...,T$, with $T$ being the duration of $x$:
        $A(x, t) \gets e(f, x, t)$
    Define $R$ feature segments.
    Sort the feature segments in descending order by their mean attribution values.
    for rank $r=0,1,...,R$:
        $x^{\text{inv@}r} \gets$ invert feature segment $x^{(r)}$ so that $x^{(r)}=|x^{(r)}-1|$
        $\hat{y}^{\text{inv@}r}\gets f(x^{\text{inv@}r}, t)$
Let $X^{\text{inv@}r}$ denote $X$ with feature segments up to rank $r$ inverted.
for rank $r=0,1,...,R$:
    compute the balanced accuracy of $\hat{Y}^{\text{inv@}r}$ against $Y$
Compute explanation selectivity as the AUC of the graph resulting from the performance of the model depending on the number of feature segments inverted.

Output-completeness is measured with a model preservation check upon deletion of unimportant feature segments, specified in algorithm <ref>.
Algorithm: Model preservation check
Let $f(x, t)$ be an SNN model's prediction on input $x\in X$ at time $t$, and let $e$ be the explanation function which results in an attribution map $A(x, t)$ that describes the attribution to the predicted class. Let $\epsilon$ be the threshold for feature importance (in our experiments $\epsilon = 0$).
for each $x\in X$:
    for $t = 1,2,...,T$, with $T$ being the duration of $x$:
        $A(x, t) \gets e(f, x, t)$
        Mask $A$ where $|a|>\epsilon$.
        $x_p\gets$ perturb the unmasked area of $A$
        $\hat{y}_p \gets f(x_p, t)$
Compute the balanced accuracy of $\hat{Y}_p$ against $\hat{Y}$ as a model preservation check.

Continuity is measured in max-sensitivity [31], denoted as Sens$_{\textrm{max}}$:
\begin{equation} \label{eq: max-sensitivity} \textrm{Sens}_{\textrm{max}}(e, f, x, t, r) = \max_{||x'-x|| \leq r}||e(f,x', t)-e(f,x, t)|| \end{equation}
where $e$ refers to the explanation, $f$ to the model, $x$ the input data, $t$ the current time step, and $r$ defines the neighborhood region in which perturbed data $x'$ is still considered similar to $x$. Compactness is computed as follows:
\begin{equation} \textrm{Compactness}(A) = \frac{1}{N}\sum_{i=1}^{D}\sum_{j=1}^{t} |a_{i,j}| \end{equation}
where $A$ is the matrix of dimension $D\times t$, i.e., input dimensionality $D$ $\times$ the time step to be explained, and $N$ denotes the total number of explanations extracted for the experiment.
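As a complement to the formulas, here is a small NumPy sketch of the compactness measure (reading the $1/N$ as an average over the $N$ extracted explanations) and a Monte-Carlo estimate of max-sensitivity; the number of perturbation samples and the uniform noise projected onto the $r$-ball are illustrative assumptions — for strictly binary inputs a discrete perturbation scheme would be substituted.

```python
import numpy as np

def compactness(attribution_maps):
    """Mean total absolute attribution over N explanations.

    attribution_maps: list of (D, t) arrays, one per explanation."""
    return np.mean([np.abs(a).sum() for a in attribution_maps])

def max_sensitivity(explain, f, x, t, r, n_samples=50, rng=None):
    """Monte-Carlo estimate of Sens_max over the r-neighborhood of x."""
    rng = np.random.default_rng() if rng is None else rng
    base = explain(f, x, t)
    worst = 0.0
    for _ in range(n_samples):
        noise = rng.uniform(-1.0, 1.0, size=x.shape)
        noise *= r / max(np.linalg.norm(noise), 1e-12)  # project onto the r-ball
        worst = max(worst, np.linalg.norm(explain(f, x + noise, t) - base))
    return worst
```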
# Improving dual-microphone speech enhancement by learning cross-channel features with multi-head attention

###### Abstract

Hand-crafted spatial features, such as inter-channel intensity difference (IID) and inter-channel phase difference (IPD), play a fundamental role in recent deep learning based dual-microphone speech enhancement (DMSE) systems. However, the mutual relationship between such artificially designed spatial features and spectral features is hard to learn in end-to-end DMSE. In this work, a novel architecture for DMSE using a multi-head cross-attention based convolutional recurrent network (MHCA-CRN) is presented. The proposed MHCA-CRN model includes a channel-wise encoding structure for preserving intra-channel features and a multi-head cross-attention mechanism for fully exploiting cross-channel features. In addition, the proposed approach specifically formulates the decoder with an extra SNR estimator to estimate frame-level SNR under a multi-task learning framework, which is expected to avoid the speech distortion introduced by the end-to-end DMSE module. Finally, a spectral gain function is adopted to further suppress unnatural residual noise. Experimental results demonstrate superior performance of the proposed model against several state-of-the-art models.

Index Terms— dual-microphone speech enhancement, multi-head cross-attention, SNR estimator, spatial cues extraction, channel-independent encoding

## 1 Introduction

The speech communication function of mobile devices is well developed and, owing to the portability of such devices, widely used as a convenient tool for contacting others. The quality and intelligibility of the received speech can be severely degraded by background noise if the far-end talker is in an adverse acoustic environment. To attenuate background noise, a two-channel microphone array is typically deployed, with a primary microphone placed on the bottom of a mobile phone and a secondary microphone on the top. Conventional dual-channel speech enhancement approaches are based on signal processing and can be divided into two categories: blind source separation (BSS) approaches [1, 2] and beamforming approaches [3, 4]. Although these conventional approaches are fast and lightweight, their performance and robustness are not reliable in complex acoustic environments. Recently, following the success of deep learning based single-channel speech enhancement [5, 6], dual-channel speech enhancement research has explored combining deep learning with conventional enhancement methods. In one line of work, a deep neural network (DNN) is used to enhance each microphone signal separately, after which a beamformer linearly integrates the dual-channel signals [7, 8]. Experimental results show that the DNN-beamformer approach outperforms conventional approaches and is robust across various noise types and SNR ranges. However, in the DNN-beamformer approach the DNN only learns the temporal-spectral features of each channel while ignoring the spatial features of the target speech. In order to sufficiently leverage the spatial features of dual-channel data, several approaches have been proposed that apply spatial features, such as the inter-channel phase or intensity difference (IPD, IID) [9], as additional inputs for improving objective intelligibility and perceptual quality.
Despite the performance improvement obtained by learning spectral features together with spatial features, the mutual relationship between spatial and spectral information is difficult for a simple DNN to learn, which may cause under-utilization of the spatial information.

Fig. 1: Schematic diagram of MHCA-CRN. The feature maps generated by the encoding layer are interchanged between channels after each convolutional block, as shown in the gray box. MHCA denotes multi-head cross-attention.

To overcome the aforementioned limitation, we design a convolutional recurrent network (CRN) that processes each channel separately to preserve intra-channel features while interchanging information between the encoded channels to fully exploit the spatial information of dual-channel data. For this purpose, this paper proposes a channel-wise encoding structure that processes each input channel independently to preserve intra-channel features, and a multi-head cross-attention (MHCA) mechanism that boosts network performance by effectively aggregating cross-channel spatial information. In addition, to maintain superior speech quality, the proposed model formulates the decoder as a multi-task learning framework with an auxiliary task of SNR estimation, which has proven beneficial to perceived speech quality [10]. The rest of this paper is organized as follows: the model architecture is presented in Section 2, Section 3 describes the dataset and experimental settings, Section 4 presents the results and analysis, and Section 5 concludes.

## 2 Proposed MHCA-CRN Model

The proposed MHCA based CRN model (MHCA-CRN) treats dual-microphone enhancement as a supervised learning task, as shown in Fig. 1. First, the proposed model separately encodes the extracted feature from each channel of the noisy signal and interchanges information between encoded channels by using MHCA after each downsampling block. Then the encoded features of both channels are concatenated and fed to LSTM blocks for aggregating temporal contexts. The output of the LSTM blocks is separately fed to the SNR estimator and the decoder blocks under a multi-task learning framework. Finally, the output of the SNR estimator block is used to compute a frame-level spectral gain function that removes residual noise from the estimated spectrum.

### 2.1 Encoder-decoder structure

The encoder processes each channel independently, in order to preserve the intra-channel features of the dual-channel data and to explicitly utilize the cross-channel relationship. Each encoder contains several stacked 2-D convolutional layers, each of which is followed by batch normalization [11] and an exponential linear unit (ELU) [12]. Dilation is applied to the layers along the frequency axis. The feature maps generated by the encoder of each channel are used as inputs to the MHCA block and are then interchanged between the two channels. The main objective of the cross-channel attention block is to derive the relationship between the two channels. The decoder is the mirror of each encoder, except that all convolution layers are replaced with deconvolution layers. Skip connections are introduced to compensate for the information loss during the encoding of the primary microphone.

### 2.2 Multi-head cross-attention

The MHCA module (shown in Fig. 2) is designed to synchronize the time delay between the two channels, which carries the spatial information of the target speaker.
The MHCA takes the transformed feature maps of the primary-channel encoder, $\textbf{X}_{1}$, processed by a $1\times 1$ convolution block, to form the query, and takes the transformed feature maps of the reference-channel encoder, $\textbf{X}_{2}$, to form the key-value pair by using two $1\times 1$ convolution blocks. The MHCA first computes the query and the key-value pair to obtain the attention component A, given by

$\textbf{A}=\text{softmax}(\textbf{Q}\textbf{K}^{\top})\textbf{V}$ (1)

where Q, K, and V denote the query, key, and value, respectively. Intuitively, the multiplication between Q and K emphasizes the regions which are slowly varying in time and have high power. The output of the MHCA block, Z, is then computed by:

$\textbf{Z}=\text{sigmoid}(\textbf{X}_{1}+\textbf{A})$ (2)

Consequently, Z is a weight map that is re-scaled between 0 and 1 through a sigmoid activation function.

Fig. 2: Multi-head cross-attention module.

### 2.3 SNR estimator

Previous research has shown that directly training DNN models may inevitably cause a certain amount of speech distortion [13]. To this end, we propose to utilize an SNR estimator that estimates frame-level SNR under a multi-task learning framework, maintaining speech quality while reducing noise. The input of the SNR estimator is the feature maps produced by two LSTM layers, which are fed to a convolution layer with sigmoid activation to estimate the frame-level SNR. The training target for the SNR estimator is the mapped SNR [14], a mapped version of the instantaneous SNR. The instantaneous SNR is defined as

$\xi_{dB}(t,f)=20\log_{10}(|S(t,f)|/|N_{1}(t,f)|)$ (3)

where $t$ and $f$ denote the time frame and frequency index, $\xi_{dB}(t,f)$ can be viewed as an a priori SNR in dB, and $S(t,f)$ and $N_{1}(t,f)$ are respectively the clean speech and noise spectra of the primary channel. It is assumed that $\xi_{dB}(t,f)$ is distributed normally with mean $\mu_{f}$ and variance $\sigma^{2}_{f}$: $\xi_{dB}(t,f)\sim\mathcal{N}(\mu_{f},\sigma^{2}_{f})$. The mapped SNR, which is scaled to $[0, 1]$, is given by

$\bar{\xi}(t,f)=\frac{1}{2}\Big[1+\text{erf}\Big(\frac{\xi_{dB}(t,f)-\mu_{f}}{\sigma_{f}\sqrt{2}}\Big)\Big]$ (4)

where “erf” is the error function. During inference (shown in Fig. 3), the SNR estimate $\hat{\xi}(t,f)$ is computed by

$\hat{\xi}(t,f)=10^{((\sigma_{f}\sqrt{2}\,\text{erf}^{-1}(2\hat{\bar{\xi}}(t,f)-1)+\mu_{f})/10)}$ (5)

where $\hat{\bar{\xi}}(t,f)$ is the output of the SNR estimator.

### 2.4 Loss function

Since the proposed MHCA-CRN model formulates the decoder with an extra SNR estimator under a multi-task learning framework, it is trained with a combination of two losses. First, an MSE loss guides the learning of the SNR estimator,

$L_{SNR}=\text{MSE}(\bar{\xi}(t,f),\hat{\bar{\xi}}(t,f))$ (6)

Second, the loss [15] for target speech spectrum reconstruction is given by:

$L_{f}=\frac{1}{T\times F}\sum_{t=1}^{T}\sum_{f=1}^{F}\big||S(t,f)|-|\hat{S}(t,f)|\big|$ (7)

Finally, the total loss is

$L=L_{f}+\alpha L_{SNR}$ (8)

Since the two loss values are not on the same scale, we empirically set $\alpha$ to 10.

### 2.5 Target speech reconstruction

To further suppress the residual noise, the proposed model adopts the spectral gain function $G^{SNR}(t,f)$ for dual-microphone speech enhancement, which is represented as

$G^{SNR}(t,f)=\frac{\hat{\xi}(t,f)}{\hat{\xi}(t,f)+1}$ (9)

where $\hat{\xi}(t,f)$ is the estimated SNR obtained by the SNR estimator.
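Eqs. (3)-(5) and (9) can be summarized in a short NumPy sketch; the per-frequency statistics $\mu_f$ and $\sigma_f$ are assumed to be precomputed over the training set and broadcast over time frames, and the clipping constant is an illustrative safeguard rather than part of the formulation.

```python
import numpy as np
from scipy.special import erf, erfinv

def mapped_snr(S, N1, mu, sigma):
    """Training target (Eqs. 3-4): instantaneous SNR in dB mapped to [0, 1].

    S, N1: complex STFTs of shape (T, F); mu, sigma: arrays of shape (F,)."""
    xi_db = 20.0 * np.log10(np.abs(S) / np.maximum(np.abs(N1), 1e-12))
    return 0.5 * (1.0 + erf((xi_db - mu) / (sigma * np.sqrt(2.0))))

def snr_gain(xi_bar_hat, mu, sigma):
    """Invert the mapping (Eq. 5) and form the spectral gain (Eq. 9)."""
    xi_bar_hat = np.clip(xi_bar_hat, 1e-6, 1.0 - 1e-6)  # keep erfinv finite
    xi_db = sigma * np.sqrt(2.0) * erfinv(2.0 * xi_bar_hat - 1.0) + mu
    xi = 10.0 ** (xi_db / 10.0)
    return xi / (xi + 1.0)
```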
The computed final gain $G^{SNR}(t,f)$ is then multiplied with the estimated spectrum $\hat{S}(t,f)$ to suppress the residual noise, and the result is combined with the noisy phase to resynthesize the time-domain waveform of the enhanced speech, as shown in Fig. 3.

Fig. 3: Diagram of target speech reconstruction.

Table 1: PESQ and STOI comparison for the different models. A higher score means better performance; bold text indicates the best performance for each metric.

Test SNR | Channel | -5 dB | | 0 dB | | 5 dB | | 10 dB |
---|---|---|---|---|---|---|---|---|---
Metric | - | STOI(%) | PESQ | STOI(%) | PESQ | STOI(%) | PESQ | STOI(%) | PESQ
Unprocessed | Dual | 57.06 | 1.37 | 69.33 | 1.88 | 80.59 | 2.12 | 87.63 | 2.49
DeepXi [14] | Single | 77.48 | 1.88 | 90.27 | 2.36 | 92.17 | 2.66 | 95.74 | 3.11
CB-NR [16] | Dual | 54.42 | 1.46 | 68.31 | 2.03 | 77.57 | 2.38 | 88.10 | 2.74
CRN-PSM [9] | Dual | 78.20 | 1.76 | 87.30 | 2.17 | 92.76 | 2.59 | 95.76 | 2.99
DC-CRN [17] | Dual | 86.54 | 2.48 | 92.64 | 2.94 | 95.88 | 3.20 | 97.47 | 3.43
MHCA-CRN | Dual | 86.58 | 2.51 | 92.83 | 3.03 | 95.96 | 3.24 | 97.52 | 3.46
-without spectral mapping | Dual | 84.98 | 2.17 | 89.80 | 2.57 | 93.82 | 2.92 | 95.47 | 3.19
-without SNR estimator | Dual | 85.76 | 2.39 | 92.34 | 2.88 | 95.47 | 3.11 | 97.02 | 3.38
-without MHCA blocks | Dual | 82.93 | 2.08 | 90.06 | 2.64 | 94.16 | 2.98 | 95.84 | 3.16

## 3 Experimental setup

### 3.1 Data preparation

29 hours and 1 hour of speech are selected from the LibriSpeech corpus as training and validation sets, respectively. The noises are from the DEMAND dataset. In addition, we simulate room impulse responses (RIRs) with the image method [18]. Specifically, two microphones with a 2 cm interval are placed at the center of a $5m$ (length) $\times$ $5m$ (width) $\times$ $3m$ (height) room, and noise sources are placed 1.5 m away from the center of the two microphones, at angles ranging from $0^{\circ}$ to $360^{\circ}$ spaced by $10^{\circ}$. For each mixture, a speech utterance and a slice of noise are randomly chosen and placed at two different positions, and the speech and noise are mixed at SNR levels randomly drawn from -5 dB to 10 dB. The frame length is 32 ms and the hop size is 16 ms. The Hanning window is used as the analysis window. The sampling rate is 16 kHz. A 512-point discrete Fourier transform is used to extract complex short-time Fourier transform (STFT) spectrograms.

### 3.2 Baselines and training details

The proposed MHCA-CRN model is compared with four baselines: (1) CB-NR [16]: a coherence-based dual-channel noise reduction algorithm; (2) DeepXi [14]: a minimum mean-square error (MMSE) approach for single-channel speech enhancement using deep learning; (3) CRN-PSM [9]: a CRN approach that predicts a phase sensitive mask (PSM) for dual-microphone speech enhancement; (4) DC-CRN [17]: a densely-connected CRN approach for mobile communication based on dual-channel complex spectral mapping. To better validate the proposed structure and strategies, we add three ablation experiments. First, we remove the SNR estimator and keep the deconvolution layers to predict the clean speech spectrum directly. Second, we keep the SNR estimator only and remove the deconvolution layers, in order to compare the single-channel deep learning based MMSE approach, i.e., DeepXi, with a dual-channel counterpart. Finally, we remove the MHCA blocks to evaluate the effectiveness of MHCA.
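For reference, the core of the MHCA block removed in the last ablation can be sketched as a single-head module implementing Eqs. (1)-(2); the channel-attention reading of Eq. (1) (consistent with the per-channel masks visualized in Fig. 4), the single head, and the layer sizes are our assumptions for illustration, not the exact implementation.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Single-head sketch of the MHCA block (Eqs. (1)-(2)).

    x1: encoder features of the primary channel (forms the query).
    x2: encoder features of the reference channel (forms key and value)."""

    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, kernel_size=1)
        self.k = nn.Conv2d(channels, channels, kernel_size=1)
        self.v = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x1, x2):
        b, c, t, f = x1.shape
        # flatten the time-frequency map so each feature-map channel is one row
        q = self.q(x1).reshape(b, c, t * f)
        k = self.k(x2).reshape(b, c, t * f)
        v = self.v(x2).reshape(b, c, t * f)
        a = torch.softmax(q @ k.transpose(1, 2), dim=-1) @ v   # Eq. (1)
        z = torch.sigmoid(x1 + a.reshape(b, c, t, f))          # Eq. (2)
        return z
```

A usage example: `z = CrossAttention(64)(x1, x2)` for feature maps of shape (batch, 64, T, F); the full model wraps multiple heads and the interchange between the two encoder branches around this core.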
For the training step, all models are trained with the Adam optimizer for stochastic gradient descent (SGD) based optimization. The learning rate is set to 0.001. All training samples are zero-padded to the same number of time steps as the longest sample.

Fig. 4: From left to right: Visualization of the MHCA mask at layer 1, layer 3, and layer 5.

## 4 Results and analysis

The speech enhancement systems are evaluated using Perceptual Evaluation of Speech Quality (PESQ) and Short-Time Objective Intelligibility (STOI). Experimental results are summarized in Table 1, from which we make the following observations. Firstly, the deep learning based methods significantly improve both STOI and PESQ and outperform the conventional approach, i.e., CB-NR. Secondly, MHCA-CRN without spectral mapping performs better than DeepXi in terms of PESQ and STOI, which indicates that the dual-channel based model is more robust than the single-channel based model. Finally, the proposed MHCA-CRN achieves better results than MHCA-CRN without the SNR estimator, which indicates the effectiveness of the SNR estimator, and it consistently outperforms the state-of-the-art model, i.e., DC-CRN, in both metrics.

Table 1 furthermore shows that MHCA significantly improves network performance. The visualization of MHCA masks at different convolution layers is shown in Fig. 4. In early layers, the masks pay more attention to certain feature-map channels; for example, the $4^{th}$ and $9^{th}$ channels of the feature map are highlighted by the mask at the first layer. Moreover, as the layers go deeper, the shape of the MHCA mask changes in order to synchronize the time delay between the two channels. For example, the mask at the $3^{rd}$ layer highlights several channels but at different time frames, while the mask at the $7^{th}$ layer highlights certain channels at all time frames. This demonstrates that the spatial cues between the two channels can be implicitly exploited by MHCA.

## 5 Conclusion

In this paper, we propose MHCA-CRN for dual-microphone speech enhancement, aiming to straightforwardly and efficiently exploit spatial information. The model adopts a channel-wise encoding structure to process each input channel independently, preserving intra-channel features, and uses the MHCA mechanism to aggregate cross-channel spatial information. Furthermore, an SNR estimator is adopted along with the decoder to estimate frame-level SNR under a multi-task learning framework, further improving speech quality. Finally, a spectral gain function is adopted to remove unnatural residual noise. Experimental results show that our proposed method can suppress the noise while maintaining better intelligibility.

## References

* [1] Mohamed Djendi and Rédha Bendoumia, “A new adaptive filtering subband algorithm for two-channel acoustic noise reduction and speech enhancement,” Computers & Electrical Engineering, vol. 39, no. 8, pp. 2531–2550, 2013.
* [2] Rahima Henni, Mohamed Djendi, and Mustapha Djebari, “A new efficient two-channel fast transversal adaptive filtering algorithm for blind speech enhancement and acoustic noise reduction,” Computers & Electrical Engineering, vol. 73, pp. 349–368, 2019.
* [3] S. Applebaum and Dean Chapman, “Adaptive arrays with main beam constraints,” IEEE Transactions on Antennas and Propagation, vol. 24, no. 5, pp. 650–662, 1976.
* [4] K. Buckley and L. Griffiths, “An adaptive generalized sidelobe canceller with derivative constraints,” IEEE Transactions on Antennas and Propagation, vol. 34, no. 3, pp. 311–319, 1986.
* [5] J. Rouat, “Computational auditory scene analysis: Principles, algorithms, and applications (Wang, D. and Brown, G. J., eds.; 2006) [book review],” IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 199–199, 2008.
* [6] Xinmeng Xu, Yang Wang, Dongxiang Xu, Yiyuan Peng, Cong Zhang, Jie Jia, and Binbin Chen, “Multi-stage progressive speech enhancement network,” Proc. Interspeech 2021, pp. 2691–2695, 2021.
* [7] Xiong Xiao, Shinji Watanabe, Hakan Erdogan, Liang Lu, John Hershey, Michael L. Seltzer, Guoguo Chen, Yu Zhang, Michael Mandel, and Dong Yu, “Deep beamforming networks for multi-channel speech recognition,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016, pp. 5745–5749.
* [8] Hakan Erdogan, John R. Hershey, Shinji Watanabe, Michael I. Mandel, and Jonathan Le Roux, “Improved mvdr beamforming using single-channel mask prediction networks,” in Interspeech, 2016, pp. 1981–1985.
* [9] Ke Tan, Xueliang Zhang, and DeLiang Wang, “Real-time speech enhancement using an efficient convolutional recurrent network for dual-microphone mobile phones in close-talk scenarios,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 5751–5755.
* [10] Aaron Nicolson and Kuldip K. Paliwal, “Masked multi-head self-attention for causal speech enhancement,” Speech Communication, vol. 125, pp. 80–96, 2020.
* [11] Sergey Ioffe and Christian Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International Conference on Machine Learning. PMLR, 2015, pp. 448–456.
* [12] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter, “Fast and accurate deep network learning by exponential linear units (elus),” arXiv preprint arXiv:1511.07289, 2015.
* [13] Chengyu Zheng, Xiulian Peng, Yuan Zhang, Sriram Srinivasan, and Yan Lu, “Interactive speech and noise modeling for speech enhancement,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, vol. 35, pp. 14549–14557.
* [14] Aaron Nicolson and Kuldip K. Paliwal, “Deep Xi as a front-end for robust automatic speech recognition,” in 2020 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE). IEEE, 2020, pp. 1–6.
* [15] Ashutosh Pandey and DeLiang Wang, “On adversarial training and loss functions for speech enhancement,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 5414–5418.
* [16] Youna Ji, Jun Byun, and Young-cheol Park, “Coherence-based dual-channel noise reduction algorithm in a complex noisy environment,” in INTERSPEECH, 2017, pp. 2670–2674.
* [17] Ke Tan, Xueliang Zhang, and DeLiang Wang, “Real-time speech enhancement for mobile communication based on dual-channel complex spectral mapping,” in ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021, pp. 6134–6138.
* [18] Jont B. Allen and David A. Berkley, “Image method for efficiently simulating small-room acoustics,” The Journal of the Acoustical Society of America, vol. 65, no. 4, pp. 943–950, 1979.
# Computron: Serving Distributed Deep Learning Models with Model Parallel Swapping

Daniel Zou${}^{\text{1, *}}$ Xinchen Jin${}^{\text{2}}$ Xueyang Yu${}^{\text{2}}$ Hao Zhang${}^{\text{3}}$ James Demmel${}^{\text{1}}$
${}^{\text{1}}$UC Berkeley ${}^{\text{2}}$ShanghaiTech University ${}^{\text{3}}$UC San Diego

###### Abstract

Many of the most performant deep learning models today in fields like language and image understanding are fine-tuned models that contain billions of parameters. In anticipation of workloads that involve serving many such large models to handle different tasks, we develop Computron, a system that uses memory swapping to serve multiple distributed models on a shared GPU cluster. Computron implements a model parallel swapping design that takes advantage of the aggregate CPU-GPU link bandwidth of a cluster to speed up model parameter transfers. This design makes swapping large models feasible and can improve resource utilization. We demonstrate that Computron successfully parallelizes model swapping on multiple GPUs, and we test it on randomized workloads to show how it can tolerate real world variability factors like burstiness and skewed request rates. Computron’s source code is available at https://github.com/dlzou/computron.

<EMAIL_ADDRESS>

## 1 Introduction

In recent years, researchers and practitioners have dramatically improved the performance of deep learning models, particularly large language models (LLMs), using two techniques: massive parameterization and fine-tuning. Many pre-trained models with billions of parameters have been released, and each of them is being customized for a myriad of tasks through fine-tuned variants. In a plausible scenario, organizations would host many of these large models, each similar in architecture and size but tuned to some specific task, to serve the needs of their internal personnel and external users. The usual way to serve a model using GPUs is to keep all of its parameters in GPU memory so that inference runs directly on the accelerator device. When a model is too large to fit in a single GPU’s memory, a common technique is to distribute it to multiple GPUs through model parallelism. The amount of memory onboard each GPU is limited, so an organization would need to purchase many GPUs to serve all of its models, which could be quite expensive. Worse, the costly hardware is underutilized when some models receive requests at low or irregular rates. Among existing ML serving systems, some such as AlpaServe [7] and Energon-AI [4] employ model parallelism to serve large models in a distributed fashion. There are also systems like Clockwork [5] that use memory swapping to overcome the limitation of GPU memory and improve utilization. In this paper, we present Computron, a prototype serving system that unifies model parallelism and swapping. Computron makes it possible to serve multiple large distributed models that, in total, can exceed the memory capacity of the GPU cluster they share. In terms of usability, Computron supports Colossal-AI’s [6] functionality for easy model parallelism during model development, and it integrates with asynchronous Python frameworks for service deployment. We discuss several ordering and synchronization problems that constrain the design of such a system, and we explain how our design solves these problems to achieve parallelized swapping on multiple GPUs. We evaluate Computron in two ways.
First, we isolate the swapping component to demonstrate that model parallel swapping does in fact reduce the time taken to load a distributed model into GPU memory. Second, we test Computron under more realistic conditions on randomly generated workloads that simulate scenarios where requests may be bursty and skewed to a subset of models.

## 2 Background and Problem

Deep learning is being rapidly adopted in countless business and scientific applications. In many of these applications where some form of service deployment is involved, the serving system is a crucial component of the deep learning workflow. These systems generally operate in a request-response paradigm by listening to inference requests, running the requested model on inputs on specialized hardware such as GPUs, then responding with the output. The design of such a system revolves around the tradeoff of reducing latency so that end users experience less waiting versus increasing efficiency to save operational cost, all without compromising model accuracy. Recent research in language models has popularized the practice of fine-tuning, where either a portion or all of a pre-trained model’s parameters are trained on new data from a specific task in order to achieve higher accuracy. For example, the pre-trained BERT [3] model can be fine-tuned to a variety of language understanding tasks—from text classification to part-of-speech tagging to natural language inference—just by retraining the last layers, and GPT-3 [2] has been fine-tuned on human feedback so that the resulting InstructGPT [10] model can better align to user intentions. Fine-tuning generally involves no or only minor modifications to the model architecture. As fine-tuned models become commonplace, so will workloads that involve serving multiple models with highly similar memory footprints and access patterns. A second trend spearheaded by the language modeling community has been to increase model size in pursuit of better accuracy and generalizability. At the extreme, Megatron-Turing NLG [11] contains 530 billion parameters, and the comparatively space-efficient LLaMA [12] still contains up to 65 billion parameters. On Chatbot Arena [14], a platform where humans rate the quality of language model outputs, most models with competitive performance have at least 6 billion parameters. Serving multiple instances of such large models would exceed the memory capacity of all consumer GPUs and many high-end ones. The standard solution is to distribute large models across multiple GPUs using model parallelism. Two forms, tensor parallelism (TP) and pipeline parallelism (PP), are well-studied and commonly used in training workloads [8], but they are just recently beginning to see use in serving systems as well. Even when all models can fit into aggregate GPU memory, the AlpaServe [7] team found that there is still reason to use model parallelism in a serving system because it can reduce latency in real world workloads. Another challenge of serving deep learning models is that real world serving workloads are often unpredictable. Request arrival distributions may be bursty. Furthermore, across multiple models, request rates may be skewed (some models may receive far more requests than others) and the rates may shift over time. Hosting all of these models in GPU memory leads to underutilization in the face of dynamic request patterns, as resources are over-provisioned to models with low request rates.
The resource allocation imbalance is worse for localized services that expect irregular traffic, and the hardware cost is higher when larger models distributed across multiple devices are involved. Both of these factors act against the interests of smaller organizations. We survey a number of prior works and find that while there are many designs from which to take inspiration, to the best of our knowledge, no single system addresses both the model parallelism and the resource utilization challenges that we have outlined. AlpaServe [7] and Energon-AI [4] are capable of serving large models using model parallelism, but they host all models on GPUs following some form of static assignment and are thus bounded by the amount of available GPU memory. Clockwork [5] serves many deep learning models on a limited number of GPUs by swapping models between CPU and GPU memory, which works well when the models are on the order of millions of parameters or less. However, this approach is not suited for larger models with billions of parameters that can take several seconds to transfer. ZeRO-Inference [1] parallelizes parameter transfers across GPUs in order to multiply CPU-GPU bandwidth, but it is meant for individual layers within a single massive model. ## 3 Design In §2, we provide motivation for the problem of serving multiple distributed deep learning models, and we identify the key issue of resource underutilization when handling bursty, skewed requests. To deal with these challenges, we propose a serving system that co-locates multiple models on the same cluster of GPUs and dynamically swaps distributed model parameters between CPU and GPU memories. Active models are swapped into GPU memory so that requests can be served quickly, while unused models are swapped out to reduce the unnecessary consumption of resources. The underlying technique of offloading unused models to CPU memory has already been applied by state-of-the-art serving systems like Clockwork [5] to great effect. Our particular approach is comparable to demand paging in the context of virtual memory management; when a model is requested whose parameters do not currently reside in GPU memory, a replacement policy is used to pick another model to swap out, and then the requested model is swapped in. Like other systems, we assume the existence of large CPU memory to hold unused model parameters. More sophisticated fetching algorithms can be used here instead, but are beyond the scope of this paper. Inspired by prior works, we seek to investigate whether model parallelism can also be used to reduce the latency of model swapping. We hypothesize that on systems where GPUs have independent PCIe links to the CPU, by specifying higher degrees of TP and PP to distribute model shards across more GPUs, model parameter shards can be loaded in parallel to take advantage of greater aggregate link bandwidth between the CPU and GPUs. A similar optimization is used by ZeRO-Inference. Should this prove true in practice, it would become feasible to perform dynamic swapping while serving large models that share a group of devices, just like smaller models. However, a number of design considerations arise when we put our hypothesis to the test. ### 3.1 Architecture Figure 1: Computron architecture for $TP=2$, $PP=2$. One worker is launched per GPU. Two models, A and B, are co-located in the same parallel configuration. Our system uses an engine-worker architecture to manage multiple distributed model instances at the same time. 
The centralized engine receives, queues, and waits for the completion of all model requests. Workers are launched per GPU in accordance with a user-provided parallel configuration (TP and PP dimensions) to manage shards of model parameters. Because we assume all models have a similar size and architecture, we simplify by co-locating each distributed model instance onto the GPU cluster using the same configuration. Fig. 1 gives an example of the architecture for a $TP=2$, $PP=2$ configuration. When the engine receives a request for some model, it pushes the request object along with a timestamp into a queue specifically for that model. Concurrently, the engine repeatedly picks a queue to pop the oldest request objects, then packs and submits them to workers in the first pipeline stage as a single batch entry. Workers at each pipeline stage evaluate batch entries in submitted order, up to the last stage, at which point last-stage workers send the batch output back to the engine. PP communication occurs through FIFO pipes, while TP communication is done through distributed collectives, as represented by the arrows in Fig. 1.

### 3.2 Model Parallel Swapping

Figure 2: Broadcasted load entry violates load dependency.

Figure 3: Synchronous load entry reduces loading parallelism and causes unnecessary blocking.

In our design, the responsibility of making swapping decisions is delegated to the engine, so in addition to submitting batch entries, the engine can initiate another type of action through what we refer to as load entries. A load entry commands a worker to either load or offload the parameters of an instance. Challenges arise when designing how load entries should be submitted to and processed by distributed workers. A model can only be evaluated on a batch entry after the model’s parameters are loaded into GPU memory, so as the engine schedules batch and load entries in some order it deems correct, workers must respect the load dependencies of that schedule. Furthermore, data dependencies between adjacent pipeline stage workers delay when later stage workers receive batch entries, ruling out certain designs like broadcasting the load entry, as illustrated in Fig. 2. These load and data dependencies are resolved if workers synchronously process load entries in pipeline order just like batch entries, but this naive solution has two issues: a batch entry to some model is unnecessarily blocked by load entries to another unrelated model, and no loading parallelism is achieved by workers of different stages in the same pipeline, as shown in Fig. 3.

Figure 4: Comparison of how batch entries and load entries are processed in a linear worker pipeline.

We propose an asynchronous mechanism for handling load entries that mitigates these issues. After being submitted by the engine, load entries are pipelined through worker stages just like batch entries, but a worker does not wait for loading to complete before passing the load entry forward to the next stage. This can be done using the stream feature of the CUDA programming model. On top of the default CUDA stream that executes kernels for model inference, each worker launches two additional streams to run loading and offloading operations concurrently. A load entry is completed when every worker finishes loading/offloading and sends a response back to the engine. The engine is responsible for avoiding load dependency violations by making sure batch entries for a model are submitted to workers only after that model has been fully loaded.
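A simplified sketch of one worker's side of this mechanism is shown below; the class and method names are hypothetical, GPU memory allocation and freeing are glossed over, and the LRU replacement policy matches the one our implementation uses.

```python
import torch
from collections import OrderedDict

class WorkerSwapper:
    """Sketch of one worker's shard management: pinned CPU buffers plus
    dedicated CUDA streams so loads/offloads run concurrently with inference."""

    def __init__(self, shards, capacity):
        self.capacity = capacity                   # max shards resident on this GPU
        self.resident = OrderedDict()              # resident models in LRU order
        self.load_stream = torch.cuda.Stream()     # runs loads off the default stream
        self.offload_stream = torch.cuda.Stream()  # runs offloads concurrently
        # one page-locked CPU copy per parameter avoids a paged->pinned
        # staging copy on every transfer (see the pinned-memory note below)
        self.cpu_copy = {name: [p.detach().cpu().pin_memory() for p in params]
                         for name, params in shards.items()}
        self.shards = shards                       # name -> list of GPU tensors

    def handle_load_entry(self, name):
        """Process a load entry; returns an event the engine waits on
        before submitting batch entries for this model."""
        if name in self.resident:
            self.resident.move_to_end(name)        # refresh LRU position
        else:
            if len(self.resident) >= self.capacity:
                victim, _ = self.resident.popitem(last=False)
                with torch.cuda.stream(self.offload_stream):
                    for p, buf in zip(self.shards[victim], self.cpu_copy[victim]):
                        buf.copy_(p, non_blocking=True)   # GPU -> pinned CPU
            with torch.cuda.stream(self.load_stream):
                for p, buf in zip(self.shards[name], self.cpu_copy[name]):
                    p.copy_(buf, non_blocking=True)       # pinned CPU -> GPU
            self.resident[name] = None
        done = torch.cuda.Event()
        done.record(self.load_stream)
        return done
```

Before inference kernels touch the newly loaded parameters, the inference stream must also be ordered after the returned event, e.g. via `torch.cuda.current_stream().wait_event(done)`.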
This asynchronous design allows a later batch entry to proceed without waiting for a previous load entry involving another model to complete, and it also enables workers of different stages to load shards of a model’s parameters in parallel. The paths of batch and load entries through one branch of the system pipeline are depicted in Fig. 4. One more detail is the use of pinned memory. CUDA requires the CPU-side data buffer to be in page-locked memory for asynchronous CPU-GPU data transfers, which prevents interruptions caused by paging. Data objects on the CPU are stored in paged memory by default, so data transfers would incur an extra copy on the CPU side from paged memory to page-locked memory. We eliminate this extra data movement by making sure that when a model is offloaded, its parameters are kept pinned in CPU memory.

## 4 Implementation

We build Computron as a serving system that supports model parallel swapping based on the considerations presented in §3. We borrow some components from Energon-AI [4], such as the RPC-based FIFO pipe implementation used for communication between pipelined worker stages. Just like Energon-AI, Computron is compatible with Colossal-AI [6] functionality, meaning that users can easily incorporate model parallelism in their models with minimal changes to PyTorch source code. As Computron launches, Colossal-AI automatically handles setting up the context and communication groups for model parallelism, and it does so using the same configuration for each instance. The engine is implemented using Python’s asyncio library, and request scheduling is done in a completely asynchronous fashion. Because of this, Computron integrates with asynchronous Python web frameworks such as FastAPI. Requests are scheduled in batches based on the oldest timestamp, and model swapping uses an LRU replacement policy.

## 5 Evaluation

We design two separate sets of experiments in order to characterize Computron’s performance. In the first set of experiments, we intentionally induce the worst case for handling each request and measure how the time to swap models scales with model parallelism. In the second set, we generate simulated request workloads using a random arrival process to study how Computron handles more realistic scenarios. We conduct experiments on a single GPU node of the Perlmutter supercomputer managed by NERSC. The GPU node has one AMD EPYC 7763 CPU and four NVIDIA A100 GPUs, each connected to the CPU through a PCIe 4.0 x16 link [9].

### 5.1 Swapping Latency

In §3, we hypothesize that model parallelism linearly decreases the time taken to load and offload model parameters between CPU and GPU memory. To check this hypothesis, we design an experiment that forces the worst case scenario where each request must perform a swap. We launch two models concurrently and configure the engine to allow only one model to reside in GPU memory at any given time. We then send alternating blocking requests to the two models and measure the times taken to swap and execute a model at each request. The model size is fixed in order to test the strong scaling properties of CPU-GPU swapping. The model chosen is OPT-13B [13], an open-source pre-trained transformer language model released by Meta AI. Using half-precision floats, OPT-13B has a memory footprint of about 24 GB. This model is chosen because it can fit into the memory of a single A100 GPU, which serves as a baseline for swapping time. Before running the experiment, we estimate the lower bound for how long swapping should take for comparison.
Each CPU-GPU link has a bandwidth of 32 GB/s, so a single GPU is expected to load or offload an OPT-13B model instance in $24/32=0.75$ seconds. On our test system, aggregate CPU-GPU bandwidth increases linearly with the number of GPUs, so as the model is distributed to more GPUs using either TP or PP, the swapping time is expected to decrease in inverse proportion. Swapping time includes both the offloading of one model and the loading of another, and because our asynchronous implementation overlaps the two, we measure from when the offload entry is submitted to when both offload and load entries are completed.

Figure 5: Swapping latency with changing TP scale.

With the theoretical lower bound in mind, we first run the experiment with three trials that scale the degree of model TP. We use a small input token length of 2. Fig. 5 visualizes the results of these trials on $TP=1$, $TP=2$, and $TP=4$, all with $PP=1$. The left plot examines how average time spent swapping scales with TP. These trials confirm that the swap latency does decrease as TP increases, as we hypothesized. However, the latency on a single GPU is noticeably higher than the lower bound, and the scaling appears to be less than linear; this difference may be explained by the alpha-beta communication model. Model parameters are transmitted not as one long stream, but as separate messages for the individual tensors. Each TP shard still contains the same number of tensors as the original model, albeit smaller ones, so the same number of messages must be sent by each worker when loading. Taking the expression $\alpha+\beta n$ for the latency of a single message: while the message size $n$ is reduced per worker, the per-message latency term $\alpha$ remains the same, leading to sublinear scaling. The right plot shows swapping and execution times in proportion to the end-to-end latency. From the plot, it is clear that swapping latency remains the bottleneck in all cases, but as the number of GPUs increases, the proportion of overall time spent swapping decreases; this highlights how model parallelism benefits swapping even more than execution.

Figure 6: Swapping latency with changing PP scale.

Next, we run an experiment that varies PP degree between 1, 2, and 4 worker stages. Similar to the TP experiments, Fig. 6 shows that increasing PP also decreases the swapping latency. We postulate that in this case, sublinear scaling stems from delays as a load entry is pipelined through workers. Since workers process batch entries synchronously, load entries, despite being asynchronous, must still wait for their turn.

Figure 7: Swapping latency for $TP=2$, $PP=2$.

TP and PP are often used together in practice, so we ran an additional trial with $TP=2$, $PP=2$. From Fig. 7, we see that the mixed parallelism configuration has lower latency than both pure TP and pure PP for the same number of workers, and it in fact approaches the ideal scaling target. The positive effect of mixing parallelism may be because the previously described overheads in the TP case and in the PP case are lessened at smaller degrees.

### 5.2 Simulated Workloads

We characterize the practical performance of our model parallel swapping design with simulated workloads of serving multiple OPT-13B models on the same cluster of four A100 GPUs. For all experiments conducted, we follow the configuration of $TP=2$, $PP=2$. Each simulation trial begins with several warm-up requests that are not recorded.
Then, requests are sent to all models over a 30-second period, with the distribution of requests to each model following a random independent Gamma arrival process. Each request has an input token length of 8. Across simulations, we vary two parameters: the assignment of mean arrival rates to each model, and the coefficient of variation (CV) that is shared by all models. For our purposes, assigning different mean arrival rates simulates how request rates skew toward a subset of models, while CV adjusts the burstiness of requests. For instance, $CV=4$ is a high degree of burstiness, and $Rates=(10,1,1)$ represents a skew toward the first model relative to the other two. Our first set of simulations serves three models at once, limiting to at most two models in GPU memory at all times, and we check that GPU memory usage approximately matches the footprint of two OPT-13B models. The maximum batch size is 8. Tab. 1 summarizes the average end-to-end latencies for the grid of parameters we measured, with three variations in skew and three variations in CV. Fig. 8 visualizes request latency CDFs of all models combined for each pair of (Rates, CV). We observe a common pattern that as CV increases from 0.25 to 4, the latency tends to decrease, which can be seen both in the table of average latencies and in the CDF curves in each plot shifting toward the top left corner. This suggests Computron performs better when request patterns are bursty. Intuitively, bursty request distributions mean a higher likelihood of consecutive requests to the same model, so fewer model swaps occur, because the engine schedules the oldest request first with LRU replacement. In the three-model simulation, changing the skew of request rates only marginally increases the maximum latency, and in general has little impact on the overall latency distribution. This provides evidence that Computron can tolerate workloads with imbalanced request rates. Though our simulations only test static request rate assignments, we expect that this tolerance can also extend to dynamic scenarios where the skewness of request rates changes over time.

Skew | $CV=0.25$ | $CV=1$ | $CV=4$
---|---|---|---
$(1,1,1)$ | 1.262 | 0.606 | 0.518
$(10,1,1)$ | 1.172 | 0.886 | 0.550
$(10,10,1)$ | 1.014 | 0.716 | 0.374

Table 1: Average latencies for combinations of (Rates, CV) when serving 3 models with only 2 models in GPU memory.

Figure 8: Latency CDFs for combinations of (Rates, CV) when serving 3 models with only 2 models in GPU memory.

The second set of simulations serves six models at once, limiting to at most four models in GPU memory at all times and with the maximum batch size set to 32. The results in Fig. 9 show similar patterns as the previous three-model simulations. When $CV=4$, the latency distribution of serving six models is actually lower on average than that of serving three models, based on Tab. 2, which indicates that good resource utilization can be achieved when requests are bursty. On the other hand, latencies of lower CV trials are scaled by approximately a factor of two. A possible explanation for this is that in lower CV trials, GPUs have already been maximally utilized conditioned on the request distribution and scheduling order, so doubling the workload leads to doubled latency. Looking at the bigger picture, both the three-model and the six-model simulations reveal that many requests take longer than the isolated latency measurements from §5.1 would suggest.
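The arrival processes above can be reproduced in a few lines; the parameterization uses the standard identities for a Gamma renewal process (shape $k=1/CV^2$, scale $\theta=CV^2/\lambda$, so the mean inter-arrival time is $1/\lambda$ and the coefficient of variation is $CV$), while the function name and fixed 30-second horizon are ours.

```python
import numpy as np

def gamma_arrivals(rate, cv, horizon=30.0, rng=None):
    """Arrival times of a Gamma renewal process with the given mean
    rate (requests/s) and coefficient of variation."""
    rng = np.random.default_rng() if rng is None else rng
    shape = 1.0 / cv**2        # CV of Gamma(k, theta) is 1/sqrt(k)
    scale = cv**2 / rate       # mean inter-arrival = shape * scale = 1/rate
    times, t = [], 0.0
    while True:
        t += rng.gamma(shape, scale)
        if t > horizon:
            return np.array(times)
        times.append(t)

# independent processes, one per model, e.g. Rates=(10, 1, 1) with CV=4
workload = {m: gamma_arrivals(r, 4.0) for m, r in enumerate((10, 1, 1))}
```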
Two possible causes for these inflated latencies are the scheduling algorithm and the choice of maximum batch size. Our simple oldest-request-first scheduling algorithm overlooks global information that may be used to reduce average latency. The maximum batch size trades off between the rate at which a model’s request queue is drained and the compute time of that batch, and it may have some interactions with the request arrival distributions. We defer more thorough investigation of these effects to future work.

Skew | $CV=0.25$ | $CV=1$ | $CV=4$
---|---|---|---
$(1,1,1,1,1,1)$ | 1.847 | 1.282 | 0.174
$(10,10,1,1,1,1)$ | 2.017 | 1.413 | 0.229
$(10,10,10,10,1,1)$ | 1.535 | 1.470 | 0.312

Table 2: Average latencies for combinations of (Rates, CV) when serving 6 models with only 4 models in GPU memory.

Figure 9: Latency CDFs for combinations of (Rates, CV) when serving 6 models with only 4 models in GPU memory.

## 6 Conclusion

We design and implement Computron, a system that is capable of serving multiple deep learning models with billions of parameters on the same cluster of GPUs, and it can exceed aggregate GPU memory capacity through model parallel swapping. In isolated tests, we demonstrate that our design takes advantage of both TP and PP to speed up the swapping of distributed models. Simulated random workloads show that our system can tolerate bursty and skewed request patterns. These features enable organizations to efficiently serve many cutting-edge large models for different tasks when compute resources are limited. An optimization that may significantly improve serving performance is to speculatively load or offload models. In real world scenarios, requests to different models are often not independent processes, but instead have predictable patterns, such as the same model being requested many times consecutively to generate a sequence, a subset of models often being requested in some fixed order, or a model being more frequently requested at a certain time of day. More sophisticated load scheduling algorithms with predictive capabilities can drastically reduce the number of on-demand swaps, and by extension, serving latency. A problem that has not been resolved in this work is handling models with different sizes, and even different model parallelism configurations. Our system currently assumes that every model instance is evenly distributed across the cluster in the same way and with the same memory footprint. Removing that assumption brings many complexities, such as the decision problem of what to load/offload when swapping and whether workers should handle each model differently.

## References

* [1] Reza Yazdani Aminabadi, Samyam Rajbhandari, Ammar Ahmad Awan, Cheng Li, Du Li, Elton Zheng, Olatunji Ruwase, Shaden Smith, Minjia Zhang, Jeff Rasley, and Yuxiong He. Deepspeed-inference: Enabling efficient inference of transformer models at unprecedented scale. In Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, SC ’22. IEEE Press, 2022.
* [2] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
Quantification of Ebola virus replication kinetics in vitro

Laura E. Liao^{a,1}, Jonathan Carruthers^{b,1}, Sophie J. Smither^{c}, CL4 Virology Team^{c,2}, Simon A. Weller^{c}, Diane Williamson^{c}, Thomas R. Laws^{c}, Isabel García-Dorival^{d}, Julian Hiscox^{d}, Benjamin P. Holder^{e}, Catherine A. A. Beauchemin^{f,g}, Alan S. Perelson^{a}, Martín López-García^{b}, Grant Lythe^{b}, John Barr^{h}, Carmen Molina-París^{b,3}

a Theoretical Biology and Biophysics, Los Alamos National Laboratory, Los Alamos, NM, USA 87545
b Department of Applied Mathematics, School of Mathematics, University of Leeds, Leeds LS2 9JT, UK
c Defence Science and Technology Laboratory, Salisbury SP4 0JQ, UK
d Institute of Infection and Global Health, University of Liverpool, Liverpool, L69 7BE, UK
e Department of Physics, Grand Valley State University, Allendale, MI, USA 49401
f Department of Physics, Ryerson University, Toronto, ON, Canada M5B 2K3
g Interdisciplinary Theoretical and Mathematical Sciences (iTHEMS) Research Program at RIKEN, Wako, Saitama, Japan, 351-0198
h School of Molecular and Cellular Biology, University of Leeds, Leeds LS2 9JT, UK

1 These authors contributed equally to this work.
2 Membership list can be found in the Acknowledgments section.
3 <EMAIL_ADDRESS>

## Abstract

Mathematical modelling has successfully been used to provide quantitative descriptions of many viral infections, but for the Ebola virus, which requires biosafety level 4 facilities for experimentation, modelling can play a crucial role. Ebola modelling efforts have primarily focused on in vivo virus kinetics, e.g., in animal models, to aid the development of antivirals and vaccines. But, thus far, these studies have not yielded a detailed specification of the infection cycle, which could provide a foundational description of the virus kinetics and thus a deeper understanding of their clinical manifestation. Here, we obtain a diverse experimental data set of Ebola virus infection in vitro, and then make use of Bayesian inference methods to fully identify parameters in a mathematical model of the infection. Our results provide insights into the distribution of time an infected cell spends in the eclipse phase (the period between infection and the start of virus production), as well as the rate at which infectious virions lose infectivity. We suggest how these results can be used in future models to describe co-infection with defective interfering particles, which are an emerging alternative therapeutic.

## Author summary

The two deadliest Ebola epidemics have both occurred in the past five years, with one of these epidemics still ongoing. Mathematical modelling has already provided insights into the spread of disease at the population level as well as the effect of antiviral therapy in Ebola-infected animals. However, a quantitative description of the replication cycle is still missing. Here, we report results from a set of in vitro experiments involving infection with the Ecran strain of Ebola virus. By parameterizing a mathematical model, we are able to determine robust estimates for the duration of the replication cycle, the infectious burst size, and the viral clearance rate.

## Introduction

The world's second largest Ebola outbreak is currently underway in the Democratic Republic of Congo. Ebola virus (EBOV) causes severe and fatal disease with death rates of up to 90% [1]. There is an urgent need to prevent and treat EBOV infections, but no antiviral drugs or monoclonal antibodies have been approved in Africa, the EU, or the US.
Recently, the first EBOV vaccine was approved by European regulators [2]. Experimental therapies [3], including antiviral drugs (remdesivir [4] and favipiravir [5, 6]) and a cocktail of monoclonal antibodies (ZMapp) [7], were assessed in the 2013–2016 West Africa Ebola virus disease outbreak. Other promising monoclonal antibody therapies, called mAb114 and REGN-EB3, have been deployed in the current 2018–2019 Kivu Ebola epidemic [8]. A better understanding of the precise infection kinetics of EBOV is warranted.

Mathematical modelling of viral dynamics has provided a quantitative understanding of within-host viral infections, such as HIV [9], influenza [10], Zika [11], and more recently, EBOV. Mathematical modelling studies have analyzed the plasma viral load dynamics of EBOV-infected animals (mice [12], non-human primates [13, 14]) while under therapy with favipiravir, and have identified estimates of favipiravir efficacy and target drug concentrations. In addition, mechanistic models of innate and adaptive immune responses were used to provide an explanation of EBOV infection dynamics in non-human primates [14], and of differences between fatal and non-fatal cases of human infection [15]. Moreover, mathematical models have been used to predict the effect of treatment initiation time on indicators of disease severity [12, 15] and survival rates [14], to predict the clearance of EBOV from seminal fluid of survivors [16], and to theoretically explore treatment of EBOV-infected humans with antivirals that possess different mechanisms of action (i.e., nucleoside analog, siRNA, antibody) [15].

Alongside the progress made in understanding within-host infections, a complementary view of infection can be provided by mathematical modelling of infections at the in vitro level. Combined with in vitro time course data, mathematical models (MMs) have provided a detailed quantitative description of the viral replication cycle of influenza A virus [17, 18], SHIV [19, 20], HIV [21], and other viruses [22, 23, 24, 25, 26]. Such studies yield estimates of key quantities such as the basic reproductive number (defined as the number of secondary infections caused by one infected cell in a population of fully susceptible cells), the half-life of infected cells, and the viral burst size, which cannot be obtained directly from data [27]. In the context of in vitro infections, parameterized MMs have been used to predict the outcome of competition experiments between virus strains [28, 29, 30] (i.e., which strain dominates in a mixed infection), map differences in genotype to changes in phenotype [28, 30] (e.g., associate a single mutation with ten-fold faster viral production), quantify fitness differences between virus strains [31, 32] (e.g., which strain has a larger infectious burst size), quantify the contribution of different modes of transmission (cell-to-cell versus cell-free) [21], and identify the target of antiviral candidates [33] (e.g., whether a drug inhibits viral entry or viral production). One prior study [34] utilized in vitro infection data from the literature to estimate EBOV infection parameters, but had several parameter identifiability issues due to insufficient data. Our goal is to obtain robust estimates of viral infection parameters that characterize the EBOV replication cycle. We follow a mathematical modelling approach that has been successfully applied in the analysis of other viral infections in vitro [17].
To this end, we performed a suite of in vitro infection assays (single-cycle, multiple-cycle, and viral infectivity decay assays) using EBOV and Vero cells, and collected detailed extracellular infectious and total virus time courses. The viral kinetic data were simulated with a multicompartment ordinary differential equation MM, and posterior distributions of the MM parameters were estimated using a Markov chain Monte Carlo (MCMC) approach. We estimate that one EBOV-infected cell spends $\sim 30\,\mathrm{h}$ in an eclipse phase before it releases infectious virions at a rate of $13\,\mathrm{h}^{-1}$ over its infectious lifetime of $\sim 83\,\mathrm{h}$. The number of infectious virions produced over an infected cell's lifetime is $\sim 1000$, with an estimated basic reproductive number of $\sim 600$. We also discuss challenges in collecting other types of virus dynamic data (e.g., intracellular viral RNA or cell counts).

## Results

#### Ebola Virus Kinetics In Vitro

Vero cell monolayers were infected with EBOV at a multiplicity of infection (MOI) of 5, 1, or 0.1. Infectious and total virus concentrations were determined from extracellular virus harvested from the supernatant of each well at various times post-infection (Fig. 1 A–C, E–F). At the start of infection, the virus concentrations do not rise for some time, reflecting the time it takes for viral entry, replication, and release. After $24\,\mathrm{h}$, the virus concentrations grow exponentially as infected cells begin producing virus. When all cells in the well are infected, the virus concentrations peak (total virus at approximately $10^{13}$ copy/mL), and the peak is sustained for $\sim 72\,\mathrm{h}$. Thereafter, the virus concentrations decline when virus production ceases, presumably due to the death of infected cells. Additionally, the kinetics of viral infectivity decay and virus degradation were assessed with a mock yield assay (Fig. 1 D, G). In the mock yield assay, an inoculum of virus was incubated in wells under the same conditions as the growth assays, but in the absence of cells, and sampled over time.

Fig 1: Kinetics of EBOV infection in vitro and mock yield assays. Vero cell monolayers were infected with EBOV at a multiplicity of infection (MOI) of 5, 1, or 0.1, as indicated. At various times post-infection, the infectious ($\text{TCID}_{50}$/mL; A–C) and total virus (copy/mL; E–G) in the supernatant were determined. A mock yield assay was also performed to quantify the decay of infectious (D) and total virus (G). In each assay, the experimental data (circles) were collected either in duplicate (MOI 5) or triplicate (all other assays). Note that the total virus concentration collected in the MOI 5 infection was omitted from the analysis due to inconsistencies in the peak value (S1 Appendix, Fig. A). The lines represent the pointwise median of the time courses simulated from our MM, which are bracketed by 68% (light grey) and 95% (dark grey) credible regions (CR). These data were used to extract the posterior probability likelihood distributions of the infection parameters (Fig. 2). Note that parameters of the calibration curve used to convert cycle threshold values (Ct) to total virus (copy/mL) were also estimated (Fig. 3). The variability introduced from this conversion is shown by two error bars on each total virus data point, indicating the 68% (same colour) and 95% (black) CR.
#### Mathematical Model of Viral Infection and Parameter Estimates

The in vitro EBOV infection kinetics were captured with a MM that has been used successfully in past works to capture influenza A virus infection kinetics in vitro [30, 28, 31]. The MM is given by the system of ordinary differential equations:

$$
\begin{aligned}
\frac{\mathrm{d}T}{\mathrm{d}t} &= -\beta T V_{\text{inf}}\\
\frac{\mathrm{d}E_{1}}{\mathrm{d}t} &= \beta T V_{\text{inf}} - \frac{n_{E}}{\tau_{E}}E_{1}\\
\frac{\mathrm{d}E_{i}}{\mathrm{d}t} &= \frac{n_{E}}{\tau_{E}}\left(E_{i-1}-E_{i}\right), \qquad i=2,3,\ldots,n_{E}\\
\frac{\mathrm{d}I_{1}}{\mathrm{d}t} &= \frac{n_{E}}{\tau_{E}}E_{n_{E}} - \frac{n_{I}}{\tau_{I}}I_{1}\\
\frac{\mathrm{d}I_{j}}{\mathrm{d}t} &= \frac{n_{I}}{\tau_{I}}\left(I_{j-1}-I_{j}\right), \qquad j=2,3,\ldots,n_{I}\\
\frac{\mathrm{d}V_{\text{inf}}}{\mathrm{d}t} &= p_{\text{inf}}\sum_{j=1}^{n_{I}}I_{j} - c_{\text{inf}}V_{\text{inf}}\\
\frac{\mathrm{d}V_{\text{tot}}}{\mathrm{d}t} &= p_{\text{tot}}\sum_{j=1}^{n_{I}}I_{j} - c_{\text{tot}}V_{\text{tot}}
\end{aligned}
\tag{1}
$$

In this MM, susceptible uninfected target cells $T$ can be infected by infectious virus $V_{\text{inf}}$ with infection rate constant $\beta$, and subsequently enter the non-productive eclipse phase $E_{i=1,\ldots,n_{E}}$, followed by a transition into the productively infectious phase $I_{j=1,\ldots,n_{I}}$. The eclipse and infectious phases are divided into a number of compartments given by $n_{E}$ and $n_{I}$, respectively, such that the time spent in each phase follows an Erlang distribution with an average duration of $\tau_{E,I}\pm\frac{\tau_{E,I}}{\sqrt{n_{E,I}}}$. While cells are in the infectious phase, they produce infectious (total) virus $V_{\text{inf}}$ ($V_{\text{tot}}$) at a rate $p_{\text{inf}}$ ($p_{\text{tot}}$), which loses infectivity (viability) at rate $c_{\text{inf}}$ ($c_{\text{tot}}$).

The MM Eq. (1) captures both infectious virus, quantified by $\text{TCID}_{50}$ measurements of supernatant samples, and total virus, quantified by quantitative, real-time, reverse transcriptase PCR (hereafter, RT-qPCR). The latter experimental quantity was obtained by converting cycle threshold (Ct) values from RT-qPCR to copy number (Fig. 3) using Eq. (3) (Methods). Predicted virus time courses from the MM are shown in Fig. 1, where the solid lines represent the pointwise median and the grey bands show narrow 95% credible regions (CR), indicating that the MM reproduces the viral kinetic data well. Using a Markov chain Monte Carlo (MCMC) approach, we obtained posterior probability likelihood distributions (PostPLDs) for each of the MM parameters (Fig. 2). Narrow PostPLDs were extracted with mild correlations between parameters (S1 Appendix, Fig. C), indicating good practical identifiability of all parameters.

#### A Quantitative Description of the EBOV Lifecycle

The MCMC analysis gives us the following quantitative description of the EBOV lifecycle within Vero cells. An EBOV-infected Vero cell spends approximately $30\,\mathrm{h}$, with a 95% credible region of [$26\,\mathrm{h}$, $37\,\mathrm{h}$], in the eclipse phase before progeny EBOV successfully bud. Subsequently, infectious virus is produced at a rate of 13 [10, 20] virions per cell per hour over a duration of $83\,\mathrm{h}$ [$64\,\mathrm{h}$, $95\,\mathrm{h}$] before virus production ceases due to cell death.
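For concreteness, here is a minimal numerical sketch of Eq. (1) in Python/SciPy, using the posterior modes from Table 1 (Methods) as illustrative parameter values. One caveat: while the fit normalizes $T(0)=1$, this sketch sets $T(0)$ to the seeding density of $10^{5}$ cells/mL so that the per-cell production rates and the reported $R_{0}\approx 589$ stay dimensionally consistent; that choice is our assumption, not part of the published model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Posterior modes from Table 1 (Methods); illustrative values only.
beta = 10**-6.48                    # infection rate, mL/(TCID50 * h)
tauE, nE = 30.5, 13                 # eclipse phase: mean length (h), compartments
tauI, nI = 83.2, 14                 # infectious phase: mean length (h), compartments
p_inf, p_tot = 10**1.12, 10**6.46   # per-cell production rates (/h)
c_inf, c_tot = 0.0614, 0.00817      # viral loss rates (/h)
T0 = 1e5                            # ASSUMPTION: seeding density (cells/mL),
                                    # rather than the paper's T(0)=1 normalization

def rhs(t, y):
    T, E, I = y[0], y[1:1 + nE], y[1 + nE:1 + nE + nI]
    V_inf = y[-2]
    kE, kI = nE / tauE, nI / tauI
    dE = np.empty(nE)
    dI = np.empty(nI)
    dE[0] = beta * T * V_inf - kE * E[0]
    dE[1:] = kE * (E[:-1] - E[1:])          # eclipse compartment chain
    dI[0] = kE * E[-1] - kI * I[0]
    dI[1:] = kI * (I[:-1] - I[1:])          # infectious compartment chain
    return np.concatenate((
        [-beta * T * V_inf],                 # dT/dt
        dE, dI,
        [p_inf * I.sum() - c_inf * V_inf],   # dV_inf/dt
        [p_tot * I.sum() - c_tot * y[-1]],   # dV_tot/dt
    ))

# Sanity checks against reported secondary quantities.
print("burst size p_inf*tauI:", p_inf * tauI)                              # ~1.1e3
print("R0 = beta*p_inf*tauI*T0/c_inf:", beta * p_inf * tauI * T0 / c_inf)  # ~589

# MOI 1 infection: inocula from Table 1, all cells susceptible.
y0 = np.zeros(1 + nE + nI + 2)
y0[0], y0[-2], y0[-1] = T0, 10**5.39, 10**11.5
sol = solve_ivp(rhs, (0, 240), y0, method="LSODA",
                t_eval=np.linspace(0, 240, 241))
```

Because the compartment chains make the eclipse and infectious residence times Erlang-distributed, raising $n_{E}$ or $n_{I}$ narrows those distributions around $\tau_{E}$ or $\tau_{I}$, which is exactly the property exploited in the estimates discussed next.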
An infectious burst size of 1096 progeny virions [1000, 1259] is released from each infected cell over its virus-producing lifetime. Once infectious virus enters the cell culture medium, it loses infectivity at a rate of $0.06\,\mathrm{h}^{-1}$ [$0.055\,\mathrm{h}^{-1}$, $0.068\,\mathrm{h}^{-1}$], which is comparable to other viruses such as influenza A virus [31] or SHIV [19]. Overall, the in vitro spread of infection is rapid, as characterized by an infecting time of $2\,\mathrm{h}$ [$1.6\,\mathrm{h}$, $2.7\,\mathrm{h}$], defined as the time required for a single virus-producing cell to infect one more [35]. These dynamics imply a large basic reproductive number of 589 [398, 1000], defined as the number of secondary infections caused by a single infected cell in a population of fully susceptible cells.

Notably, we find that the durations of both the eclipse and infectious phases follow a normal-like distribution, as given by $n_{E}$ of 13 [8, 23] and $n_{I}$ of 14 [3, 85]. This implies that the eclipse phase comprises a sequence of many distinct steps of short duration, without any one step lasting significantly longer than the rest. Likewise, the same interpretation applies to the infectious phase. The normal-like distribution of the eclipse phase resembles that of influenza A virus [30], but contrasts with the fat-tailed eclipse phase distribution of SHIV [20], which is likely due to a process in that phase that is longer than the rest (e.g., integration). Moreover, neither the eclipse nor the infectious phase is exponentially distributed ($n=1$), as is commonly assumed in analyses with MMs. Such an assumption has been shown to impact estimates of antiviral efficacy that are based on patterns of viral load decay under simulated therapy in HIV patients [20].

Fig 2: Estimated parameter distributions of EBOV infection in vitro. Posterior probability likelihood distributions (PostPLDs) of parameters in the MM (A–G) were estimated using MCMC and the data in Fig. 1. Secondary parameters were derived from these estimates (H–J). Note that the PostPLDs corresponding to the number of eclipse and infectious phase compartments are integer-valued. The remaining PostPLDs of parameters describing the total virus and calibration curve are in S1 Appendix, Fig. B.

## Discussion

In this work we performed time-course Ebola virus (EBOV) infection experiments at multiple MOIs in vitro and applied MCMC methods to precisely parameterize a mathematical model (MM) of the infection. We extracted fundamental quantities concerning the timing and viral production of EBOV replication. This theoretical-experimental approach maximized the output of the costly and difficult experiments, which must be performed in biosafety level 4 facilities. Previous studies of the EBOV lifecycle rely on safer virus-like particles [36]. The only previously known MM of EBOV infection in vitro [34] is restricted in its use due to problems with parameter identifiability; specifically, the existence of strong correlations between parameters, such as the rates of virus degradation and virus production. By obtaining a more complete set of experimental observations, we have provided the first detailed quantitative characterization of EBOV infection kinetics. Some of our estimates of timescales in the EBOV infection kinetics fill gaps in the knowledge of this virus, while others expose some tension with prior mathematical modelling work.
The eclipse phase, excluded from the previous in vitro MM [34], has been found to be a significant part of the replication cycle. Lasting approximately $30\,\mathrm{h}$, it is longer than the eclipse phase for influenza A virus and HIV infections in humans (4–$24\,\mathrm{h}$) [10, 37]. Although the eclipse phase is included in existing MMs of EBOV-infected animals, its duration has never been estimated, and the assumed values used in these studies were considerably shorter than the value we identify here [12, 14]. Moreover, the observation that the length of the eclipse phase follows an Erlang distribution is contrary to these previous MMs, where it has been represented more simply as an exponentially distributed time. These MMs also fix the value of the decay rate of infectious virus to ensure that other parameters remain identifiable [12]. Here, the robust estimate of this decay rate demonstrates the benefit of performing a mock yield assay.

Existing MMs of in vivo EBOV infection in humans and non-human primates provide considerably shorter estimates of the infection cycle (12.5–$15.3\,\mathrm{h}$) compared to the estimate of $114\,\mathrm{h}$ ($\tau_{E}+\tau_{I}$) obtained here [12, 15]. Such a difference is likely attributable to the inclusion of an implicit immune response in these in vivo models, thereby accounting for the enhanced clearance of infected cells by immune cells, such as CD8+ T cells [38]. This also explains why a faster viral decay rate can be expected in vivo, and subsequently why estimates of the basic reproduction number are greater here than those obtained from in vivo MMs (5.96–9.01) [12, 15]. It remains to be determined whether Vero cells are representative of the cells targeted by EBOV in vivo, but by understanding EBOV replication in Vero cells, we have a foundation from which more complex cell culture models might be developed.

In addition to virus measurements, previous studies have included susceptible and infected cell measurements to fully parameterize the MM and obtain robust estimates of the viral kinetics parameters [19, 39]. We initially set out to obtain a more diverse data set that also included the kinetics of dead cells and intracellular RNA over the course of infection, but encountered unexpected challenges. To quantify cell viability, we treated infected monolayers at various times with Trypan blue, which stains cells that have lost the ability to exclude dye. Unfortunately, we were unable to associate this marker of cell death with a stage of the viral lifecycle in our MM without making additional assumptions. Ultimately, when we extended the MM to include these data, the newly introduced parameters were dependent on these assumptions, and the extracted values of the original parameters were largely unaffected (S1 Appendix). To determine intracellular viral kinetics, the supernatants from infected cell cultures were removed and the remaining monolayers were washed and trypsinized for quantification via $\text{TCID}_{50}$ assay and RT-qPCR. These samples showed a high level of EBOV RNA and $\text{TCID}_{50}$ as early as 4 hours post-infection, which remained at a constant level up to 1 day post-infection, but rose thereafter (S1 Appendix). Additionally, the ratio of RNA-to-$\text{TCID}_{50}$ resembled the ratio observed in the supernatant. Thus, these measurements likely reflect the large amount of cell-associated virions that remained after washing, effectively obscuring the intracellular RNA signal.
While a highly controlled in vitro system was necessary to achieve our precise characterization of the EBOV infection kinetics, the applicability of these results to a clinical situation is not immediately obvious, and this represents a serious limitation of the study. Nevertheless, our findings have some relevance to understanding the EBOV infection in vivo. EBOV initially replicates within macrophages and dendritic cells in subcutaneous and submucosal compartments, but dissemination in the blood results in the infection of multiple organs throughout the body [40]. Many different cell types are infected with varying susceptibility to infection, as well as varying levels of viral replication. While the infection unfolds, EBOV blocks IFN production early on [6, 41]. In this sense, studying the infection of Vero cells—which are IFN-deficient—narrowly models the infection of one type of epithelial cell during the early stages of an EBOV infection in vivo. Vero cells serve as a standard host cell for replication and are widely used for testing antivirals in vitro [42], as well as in the development of viral vaccines [43, 44, 45].

Mathematical modelling of EBOV infections in vitro using Vero cells has relevance to such applications, particularly in the study of emerging therapeutics. While we provided a quantitative depiction of extracellular infection by EBOV as a valuable first step, we envisioned that the MM could be extended to include intracellular viral RNA kinetics had the appropriate data been collected. Such multiscale modelling approaches have been used to provide insight into virus growth and also into the understanding of direct-acting antivirals [46, 47, 48, 49]. We hope that these experiences might help guide future efforts to obtain informative cell and intracellular data.

As an alternative antiviral strategy, there has been renewed interest in pursuing defective interfering particles (DIPs) [50] of highly pathogenic viruses. A DIP is a viral particle that contains defective interfering RNA (DI RNA), which can be a shortened version of the parent genome that renders a DIP replication-incompetent on its own (because it may lack the gene for an essential viral component such as the viral polymerase), but which also elicits virus-interfering properties. Within a cell co-infected by both DIPs and virus, the DI RNA has a replicative advantage over the full-length RNA and outcompetes it to produce more DIPs than virus progeny, effectively reducing the infectious virus yield. EBOV DI RNA has been observed [51], but much remains to be understood. As with any other antiviral, MMs can be used to determine the efficacy and mechanism of action of candidate DI RNAs, and to explore the impact of dose and timing [15]. In particular, our estimates of EBOV infection kinetics parameters are directly applicable to future mathematical modelling of the interactions between EBOV and EBOV DIPs in vitro. Our estimated EBOV infection parameters may also describe certain aspects of EBOV DIP infection. For example, since DIPs have the same viral proteins and capsid as virions, they would infect cells with the same infection rate constant, $\beta$. Since DIPs also piggyback on the virus' replication cycle, we might expect the same eclipse and infectious phase lengths ($\tau_{E},\tau_{I}$) in a DIP and virus co-infected cell.
In summary, the MM described here characterizes the replication cycle of EBOV in a quantitative manner that will be beneficial for those creating in vitro models to aid the development of antivirals and vaccines. We have made use of a valuable set of in vitro results, carefully considering the structure of the MM in order to maximize the information we can extract from them.

## Materials and Methods

### Cells and Virus

Vero C1008 cells (ECACC Cat. No. 85020206) were obtained from Culture Collection, Public Health England, UK. Vero C1008 cells were maintained in Dulbecco's minimum essential media supplemented with 10% (v/v) foetal calf serum, 1% (v/v) L-glutamine, and 1% (v/v) penicillin/streptomycin (Sigma). For experimental purposes, the foetal calf serum concentration was reduced to 2% (v/v). Ebola virus H. sapiens-tc/COD/1976/Yambuku-Ecran, hereafter referred to as EBOV, was used in all studies. This virus, previously known as EBOV "E718" [52], was supplied by Public Health England. Passage 5 material was used to infect Vero C1008 cells. Virus was harvested on day 5 post-inoculation and titrated to produce a working stock.

### Quantification of Virus

EBOV was titrated in 96-well plates using the endpoint fifty percent tissue culture infectious dose ($\text{TCID}_{50}$) assay [53]. Briefly, virus was ten-fold serially diluted in 96-well plates of Vero C1008 cells. After one week of incubation at $37\,^{\circ}\mathrm{C}$/5% $\mathrm{CO}_{2}$, all wells were observed under the microscope and scored for the presence or absence of cytopathic effects. The 50% endpoint was then calculated using the method of Reed & Muench [54]. RNA extractions were performed using the QIAamp Viral RNA Mini Kit (Qiagen, UK). Two $50\,\mathrm{\mu L}$ elutions were performed for each sample to increase the volume available for RT-PCR. The genetic material of EBOV was quantified using the RealStar® Filovirus Screen RT-PCR Kit (Altona Diagnostics, Germany) following the instructions of the manufacturer. This assay has been performed many times against a standard curve of plasmid containing the L gene from EBOV. The number of genomes can be estimated from the Ct values as described in Eq. (3). In this context, the number of genomes might include incomplete negative-sense RNA molecules encoding this sequence of the L gene. However, we do not believe that these will be common ($<5\%$) based upon observations made with next generation sequencing (paper in preparation). MOI 5 experiments were analysed using a BIORAD CFX Connect Real-Time System, while samples for the remaining MOIs were analysed using a QuantStudio 7 Flex Real-Time PCR System. Signal from control RNA was compared between experiments and machines, and we found no evidence of differences. The parameters of the calibration curve required to convert Ct values to total virus were estimated using samples from the MOI 5 experiments.

### Infections

Twenty-four-well plates were seeded with Vero C1008 cells at $10^{5}$ cells/mL. EBOV was added at an MOI of either 5, 1, or 0.1. Vero cells were grown to 90% confluence for all infections. The cell culture medium was not changed during the experiment and all cultures reached confluence within $24\,\mathrm{h}$ (S1 Appendix). At pre-determined intervals post-infection, samples were taken by aspiration of supernatant from wells.
Samples were stored at $-80\,^{\circ}\mathrm{C}$ prior to enumeration by $\text{TCID}_{50}$ assay and RNA extraction for PCR. Note that the RNA from the MOI 5 infection was omitted from further analysis due to inconsistencies in the peak viral RNA compared to the MOI 1 and 0.1 infections (S1 Appendix, Fig. A). The viability of Vero cells in the absence of infection is not known under these conditions; however, we have observed these cells for 168 h at 24 h intervals and observed only occasional cells that can be stained with the viability stain Trypan blue.

### Mock yield or infectivity decay assay

EBOV was added to twenty-four-well plates at a final estimated density of $5\times 10^{5}$ $\text{TCID}_{50}$. At pre-determined intervals post-infection, samples were taken by aspiration of supernatant from wells. Samples were stored at $-80\,^{\circ}\mathrm{C}$ prior to enumeration by $\text{TCID}_{50}$ assay and RNA extraction for PCR.

### Construction of the standard RT-qPCR curve

The concentration of viral genome copies (copy/mL) in a standard sample $i$ ($V_{\text{STD},i}$) and the number of doubling RT-qPCR cycles ($C_{t,\text{STD},i}$) required for this concentration of copies to reach an arbitrarily chosen, fixed threshold concentration ($Q_{t}$) are linked by the equation

$$Q_{t} = V_{\text{STD},i}\,(2\varepsilon)^{C_{t,\text{STD},i}}$$

$$\ln(V_{\text{STD},i}) = \underbrace{\ln(Q_{t})}_{y\text{-intercept}} - \underbrace{\ln(2\varepsilon)}_{\text{slope}}\,C_{t,\text{STD},i} \tag{2}$$

where $\varepsilon$ is the efficiency of the RT-qPCR doubling, which should ideally be equal to one (i.e., the quantity exactly doubles at each cycle) but can vary about this value. In constructing the standard curve, we took five standard samples ($V_{\text{STD},i=1,\ldots,5}$) with known copy concentrations (via their mass) and determined their corresponding $C_{t,\text{STD},i}$. These data are shown in Fig. 3.

Fig 3: Standard RT-qPCR curve. Cycle threshold values (Ct) were converted to total virus (copy/mL) using the above calibration curve, where the parameters of the curve were estimated as a part of the analysis. The lines represent the pointwise median of the time courses simulated from our MM, which are bracketed by 68% (light grey) and 95% (dark grey) CR.

### Conversion of sample RT-qPCR $C_{t}$ values into $V_{\text{tot}}$

In quantifying the concentration of total virus, $V_{\text{tot}}$ (copy/mL), in the extracellular virus samples collected from infection experiments, Eq. (2) was used as follows:

$$\ln(V_{\text{tot},i}) = \ln(Q_{t}) - \ln(2\varepsilon)\,C_{t,\text{sample},i} \equiv \mathcal{F}(C_{t,\text{sample},i}) \tag{3}$$

where $V_{\text{tot},i}$ is the concentration of copies in sample $i$, given its RT-qPCR-determined $C_{t,\text{sample},i}$ value. Here, $\ln(Q_{t})$ and $\ln(2\varepsilon)$ are two parameters to be estimated as part of the MCMC parameter estimation process described later in this section. As different values for these two parameters are sampled in the MCMC process, the total virus concentration data points vary. The variation in the conversion is denoted by error bars on each total virus data point in Fig. 1 E–G.

### Mock-yield assay model

Loss of virus infectivity or loss of viral genome integrity over time typically follows an exponential decay [22].
As such, the mock-yield (MY) or infectivity decay assay can be captured via $V(t)=V_{0}\,\mathrm{e}^{-c\,t}$, such that the experimental MY data are expected to follow

$$\ln(V_{\text{inf}}(t)) = \ln(V^{\text{MY}}_{\text{inf},0}) - c_{\text{inf}}\,t \tag{4}$$

$$\ln(V_{\text{tot}}(t)) = \ln(V^{\text{MY}}_{\text{tot},0}) - c_{\text{tot}}\,t \tag{5}$$

where $V_{\text{inf}}(t)$ and $V_{\text{tot}}(t)$ are the concentrations of infectious ($\text{TCID}_{50}$/mL) and total (copy/mL) virus after an incubation of time $t$ under the same conditions used during the infection experiments, given the EBOV rate of loss of infectivity ($c_{\text{inf}}$) or integrity ($c_{\text{tot}}$), and initial concentrations $V^{\text{MY}}_{\text{inf},0}$ and $V^{\text{MY}}_{\text{tot},0}$. These data are shown in Fig. 1 D, G.

### Simulated infections and parameter estimation

In estimating the MM parameters, the following experimental data were considered simultaneously: the RT-qPCR standard curve (5 data points), the MY assays (24 data points: 4 time points in triplicate for $C_{t}$ and $V_{\text{inf}}$), and three infection assays at an MOI of 5 (24 data points: 6 time points in duplicate for $C_{t}$ and $V_{\text{inf}}$), an MOI of 1 (54 data points: 9 time points in triplicate for $C_{t}$ and $V_{\text{inf}}$), and an MOI of 0.1 (53 data points: 9 time points in triplicate for $C_{t}$ and $V_{\text{inf}}$, minus one contaminated sample in $C_{t}$). Eq. (2) was used to capture the RT-qPCR standard curve, and its agreement with the 5 experimental data points was computed as the sum-of-squared residuals (SSR)

$$\text{SSR}^{\text{STD}} = \frac{\sum_{i=1}^{5}\left[\mathcal{F}(C_{t,\text{STD},i}) - \ln(V_{\text{STD},i})\right]^{2}}{\sigma_{V_{\text{tot}}}^{2}}$$

where $\sigma_{V_{\text{tot}}}^{2}$ is the variance (the square of the standard error) in the experimentally measured $V_{\text{tot}}$, discussed in more detail below. Eq. (4) and Eq. (5) were used to capture the MY experiment, performed in triplicate and sampled at 4 time points for each of $C_{t}$ and $V_{\text{inf}}$, and agreement was computed as

$$\text{SSR}^{\text{MY}} = \frac{\sum_{i=1}^{12}\left[\ln(V^{\text{MY}}_{\text{inf},0}) - c_{\text{inf}}\,t_{i} - \ln(V_{\text{inf}}(t_{i}))\right]^{2}}{\sigma_{V_{\text{inf}}}^{2}} + \frac{\sum_{i=1}^{12}\left[\ln(V^{\text{MY}}_{\text{tot},0}) - c_{\text{tot}}\,t_{i} - \mathcal{F}(C_{t,i})\right]^{2}}{\sigma_{V_{\text{tot}}}^{2}}$$

Finally, MM Eq. (1) was used to reproduce the infection experiments, performed in duplicate (MOI 5) or triplicate (MOI 1 and 0.1) and quantified via both $\text{TCID}_{50}$ and RT-qPCR. In reproducing the infections, the initial conditions (at $t=0$) were such that $T(0)=1$, $E_{i=1,\ldots,n_{E}}(0)=I_{j=1,\ldots,n_{I}}(0)=0$, and the initial infectious and total virus concentrations for the 3 MOIs were computed as

$$V_{\text{inf}}(0) = V^{\text{INF}}_{\text{inf},0}\times\text{MOI}, \qquad V_{\text{tot}}(0) = V^{\text{INF}}_{\text{tot},0}\times\text{MOI}$$

where MOI was either 5, 1, or 0.1, and ($V^{\text{INF}}_{\text{inf},0}$, $V^{\text{INF}}_{\text{tot},0}$) are 2 parameters to be estimated. Agreement between MM Eq. (1) and experimental infection data was computed as
$$\text{SSR}^{\text{INF}} = \sum_{\text{MOI}\in\{5,1,0.1\}} \left( \frac{\sum_{i}\left[\ln(V_{\text{inf}}^{\text{MM}}(t_{i})) - \ln(V_{\text{inf}}(t_{i}))\right]^{2}}{\sigma_{V_{\text{inf}}}^{2}} + \frac{\sum_{i}\left[\ln(V_{\text{tot}}^{\text{MM}}(t_{i})) - \mathcal{F}(C_{t,i})\right]^{2}}{\sigma_{V_{\text{tot}}}^{2}} \right)$$

where $\sigma_{V_{\text{inf}}}^{2}=0.1$ and $\sigma_{V_{\text{tot}}}^{2}=0.1$ correspond to the variance in $\ln(V_{\text{inf}})$ and $\ln(V_{\text{tot}})$, respectively, estimated as the variance of the residuals between the 2 to 3 replicates of $\ln(V_{\text{inf}})$ or $\ln(V_{\text{tot}})$ measured at each time point and their corresponding mean, across all (STD, MY, and INF) experimental data collected.

A total of 15 parameters — 6 parameters associated with experimental conditions ($\ln(Q_{t})$, $\ln(2\varepsilon)$, $\ln(V^{\text{MY}}_{\text{inf},0})$, $\ln(V^{\text{MY}}_{\text{tot},0})$, $V^{\text{INF}}_{\text{inf},0}$, $V^{\text{INF}}_{\text{tot},0}$) and 9 parameters more closely associated with EBOV infection kinetics ($c_{\text{inf}}$, $c_{\text{tot}}$, $p_{\text{inf}}$, $p_{\text{tot}}$, $\beta$, $\tau_{E}$, $\tau_{I}$, $n_{E}$, $n_{I}$) — were estimated (Table 1) from 160 experimental data points using the python MCMC implementation phymcmc [55], a wrapping library for emcee [56]. Posterior probability likelihood distributions (PostPLDs) were obtained based on the parameter likelihood function

$$\ln(\mathcal{L}(\vec{p})) = -\frac{1}{2}\left[\text{SSR}^{\text{STD}}(\vec{p}) + \text{SSR}^{\text{MY}}(\vec{p}) + \text{SSR}^{\text{INF}}(\vec{p})\right]$$

and the assumption of linearly uniform or $\ln$-uniform priors, where $\vec{p}$ is the 15-parameter vector.

Table 1: Estimated parameters of EBOV infection in vitro.

Parameter | Mode [95% CR]
---|---
Infectiousness, $\beta$ ($\frac{\mathrm{mL}}{\text{TCID}_{50}\cdot\mathrm{h}}$) | $10^{-6.48\ [-6.7,-6.3]}$
Eclipse phase length, $\tau_{E}$ (h) | $30.5\ [26,37]$
Number of eclipse compartments, $n_{E}$ | $13\ [8,23]$
Infectious phase length, $\tau_{I}$ (h) | $83.2\ [64,95]$
Number of infectious compartments, $n_{I}$ | $14\ [3,85]$
Infectious virus production rate, $p_{\text{inf}}$ ($\frac{\text{TCID}_{50}}{\mathrm{cell}\cdot\mathrm{h}}$) | $10^{1.12\ [1,1.3]}$
Total virus production rate, $p_{\text{tot}}$ ($\frac{\mathrm{RNA}}{\mathrm{cell}\cdot\mathrm{h}}$) | $10^{6.46\ [6.3,6.7]}$
Rate of loss of infectious virus, $c_{\text{inf}}$ (/h) | $0.0614\ [0.055,0.068]$
Rate of virus degradation, $c_{\text{tot}}$ (/h) | $0.00817\ [0.0035,0.013]$
Initial infectious virus inoculum, $V^{\text{INF}}_{\text{inf},0}$ ($\frac{\text{TCID}_{50}}{\mathrm{mL}}$) | $10^{5.39\ [5.3,5.5]}$
Initial total virus inoculum, $V^{\text{INF}}_{\text{tot},0}$ ($\frac{\mathrm{RNA}}{\mathrm{mL}}$) | $10^{11.5\ [11,12]}$
MY initial infectious virus inoculum, $\ln(V^{\text{MY}}_{\text{inf},0})$ | $13.7\ [13,14]$
MY initial total virus inoculum, $\ln(V^{\text{MY}}_{\text{tot},0})$ | $28.2\ [28,29]$
Standard RT-qPCR curve $y$-intercept, $\ln(Q_{t})$ | $37.8\ [37,39]$
Standard RT-qPCR curve slope, $\ln(2\varepsilon)$ | $0.613\ [0.57,0.66]$
Basic reproductive number, $R_{0}$ | $10^{2.77\ [2.6,3]}$
Infectious burst size, $p_{\text{inf}}\tau_{I}$ ($\frac{\text{TCID}_{50}}{\mathrm{cell}}$) | $10^{3.04\ [3,3.1]}$
Infecting time, $t_{\text{inf}}$ (h) | $10^{0.335\ [0.21,0.43]}$

## Supporting information

##### S1 Appendix. Kinetics of cell-associated virus and cell viability.
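To illustrate the estimation machinery described above, the following minimal sketch shows the overall shape of such a fit using emcee directly (the study itself used the phymcmc wrapper). The functions `ssr_std`, `ssr_my`, `ssr_inf`, `in_bounds`, and the initial guess `p_init` are hypothetical placeholders standing in for the SSR terms and prior supports defined above; the walker and step counts are arbitrary choices for illustration.

```python
import numpy as np
import emcee

def ct_to_copies(ct, ln_Qt=37.8, ln_2eps=0.613):
    """Eq. (3): ln(V_tot) = ln(Qt) - ln(2*eps)*Ct, returned as copy/mL.
    Defaults are the posterior modes from Table 1."""
    return np.exp(ln_Qt - ln_2eps * np.asarray(ct, dtype=float))

def log_prob(p):
    """ln L(p) = -(SSR_STD + SSR_MY + SSR_INF)/2 under (ln-)uniform priors.
    ssr_std, ssr_my, ssr_inf and in_bounds are placeholders for the SSR
    terms and prior supports defined in the text (not real library calls)."""
    if not in_bounds(p):
        return -np.inf
    return -0.5 * (ssr_std(p) + ssr_my(p) + ssr_inf(p))

ndim, nwalkers, nsteps = 15, 64, 20_000               # 15-parameter vector p
p0 = p_init + 1e-4 * np.random.randn(nwalkers, ndim)  # jitter around a guess
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, nsteps, progress=True)
chain = sampler.get_chain(discard=nsteps // 4, thin=10, flat=True)
```

The flattened `chain` is then what yields the PostPLDs of Fig. 2 and the modes and 95% credible regions reported in Table 1.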
## Acknowledgments

Members of the CL4 Virology Team include Lin Eastaugh, Lyn M. O'Brien, James S. Findlay, Mark S. Lever, Amanda Phelps, Sarah Durley-White, Jackie Steward and Ruth Thom. The authors would like to acknowledge Joseph Gillard for helpful discussions and the International Centre for Mathematical Sciences (ICMS), where the mathematical model was developed during a Research-in-Groups programme.

## References

* 1. Feldmann H, Geisbert TW. Ebola haemorrhagic fever. Lancet. 2011;377(9768):849–862. doi:10.1016/S0140-6736(10)60667-8.
* 2. Callaway E. Make Ebola a thing of the past: First vaccine against deadly virus approved. Nature. 2019; doi:10.1038/d41586-019-03490-8.
* 3. Rojek A, Horby P, Dunning J. Insights from clinical research completed during the west Africa Ebola virus disease epidemic. Lancet Infect Dis. 2017;17(9):e280–e292. doi:10.1016/S1473-3099(17)30234-7.
* 4. Dörnemann J, Burzio C, Ronsse A, Sprecher A, De Clerck H, Van Herp M, et al. First newborn baby to receive experimental therapies survives Ebola virus disease. J Infect Dis. 2017;215(2):171–174. doi:10.1093/infdis/jiw493.
* 5. Sissoko D, Laouenan C, Folkesson E, M'Lebing AB, Beavogui AH, Baize S, et al. Experimental treatment with favipiravir for Ebola virus disease (the JIKI trial): A historically controlled, single-arm proof-of-concept trial in Guinea. PLoS Med. 2016;13(3):e1001967. doi:10.1371/journal.pmed.1001967.
* 6. Perez SC, Folkesson E, Anglaret X, Beavogui AH, Berbain E, Camara AM, et al. Challenges in preparing and implementing a clinical trial at field level in an Ebola emergency: A case study in Guinea, West Africa. PLoS Negl Trop Dis. 2017;11(6):e0005545. doi:10.1371/journal.pntd.0005545.
* 7. The PREVAIL II Writing Group. A randomized, controlled trial of ZMapp for Ebola virus infection. N Engl J Med. 2016;375(15):1448–1456. doi:10.1056/NEJMoa1604330.
* 8. Maxmen A. Experimental Ebola drugs face tough test in war zone. Nature. 2018;561(7721):14. doi:10.1038/d41586-018-06132-7.
* 9. Perelson AS, Neumann AU, Markowitz M, Leonard JM, Ho DD. HIV-1 dynamics in vivo: Virion clearance rate, infected cell life-span, and viral generation time. Science. 1996;271(5255):1582–1586. doi:10.1126/science.271.5255.1582.
* 10. Baccam P, Beauchemin CAA, Macken CA, Hayden FG, Perelson AS. Kinetics of influenza A virus infection in humans. J Virol. 2006;80(15):7590–7599. doi:10.1128/JVI.01623-05.
* 11. Best K, Guedj J, Madelain V, de Lamballerie X, Yon Lim S, Osuna CE, et al. Zika plasma viral dynamics in nonhuman primates provides insights into early infection and antiviral strategies. Proc Natl Acad Sci USA. 2017;114(33):8847–8852. doi:10.1073/pnas.1704011114.
* 12. Madelain V, Oestereich L, Graw F, Nguyen THT, de Lamballerie X, Mentré F, et al. Ebola virus dynamics in mice treated with favipiravir. Antiviral Res. 2015;123:70–77. doi:10.1016/j.antiviral.2015.08.015.
* 13. Guedj J, Piorkowski G, Jacquot F, Madelain V, Nguyen T, Rodallec A, et al. Antiviral efficacy of favipiravir against Ebola virus: A translational study in cynomolgus macaques. PLoS Med. 2018;15(3):e1002535. doi:10.1371/journal.pmed.1002535.
* 14. Madelain V, Baize S, Jacquot F, Reynard S, Fizet A, Barron S, et al. Ebola viral dynamics in nonhuman primates provides insights into virus immuno-pathogenesis and antiviral strategies. Nat Commun. 2018;9(1):4013. doi:10.1038/s41467-018-06215-z.
* 15. Martyushev A, Nakaoka S, Sato K, Noda T, Iwami S. Modelling Ebola virus dynamics: Implications for therapy. Antiviral Res. 2016;135:62–73.
doi:10.1016/j.antiviral.2016.10.004.
* 16. Sissoko D, Duraffour S, Kerber R, Kolie JS, Beavogui AH, Camara AM, et al. Persistence and clearance of Ebola virus RNA from seminal fluid of Ebola virus disease survivors: a longitudinal analysis and modelling study. Lancet Glob Health. 2017;5(1):e80–e88. doi:10.1016/S2214-109X(16)30243-1.
* 17. Handel A, Liao LE, Beauchemin CAA. Progress and trends in mathematical modelling of influenza A virus infections. Curr Opin Syst Biol. 2018;12:30–36. doi:10.1016/j.coisb.2018.08.009.
* 18. Möhler L, Flockerzi D, Sann H, Reichl U. Mathematical model of influenza A virus production in large-scale microcarrier culture. Biotechnol Bioeng. 2005;90(1):46–58. doi:10.1002/bit.20363.
* 19. Iwami S, Holder BP, Beauchemin CAA, Morita S, Tada T, Sato K, et al. Quantification system for the viral dynamics of a highly pathogenic simian/human immunodeficiency virus based on an in vitro experiment and a mathematical model. Retrovirology. 2012;9:18. doi:10.1186/1742-4690-9-18.
* 20. Beauchemin CAA, Miura T, Iwami S. Duration of SHIV production by infected cells is not exponentially distributed: Implications for estimates of infection parameters and antiviral efficacy. Sci Rep. 2017;7:42765. doi:10.1038/srep42765.
* 21. Iwami S, Takeuchi JS, Nakaoka S, Mammano F, Clavel F, Inaba H, et al. Cell-to-cell infection by HIV contributes over half of virus infection. Elife. 2015;4. doi:10.7554/eLife.08150.
* 22. Beauchemin CAA, Kim YI, Yu Q, Ciaramella G, DeVincenzo JP. Uncovering critical properties of the human respiratory syncytial virus by combining in vitro assays and in silico analyses. PLoS ONE. 2019;14(4):e0214708. doi:10.1371/journal.pone.0214708.
* 23. Gonzàlez-Parra G, De Ridder F, Huntjens D, Roymans D, Ispas G, Dobrovolny HM. A comparison of RSV and influenza in vitro kinetic parameters reveals differences in infecting time. PLoS ONE. 2018;13(2):e0192645. doi:10.1371/journal.pone.0192645.
* 24. Fukuhara M, Iwami S, Sato K, Nishimura Y, Shimizu H, Aihara K, et al. Quantification of the dynamics of enterovirus 71 infection by experimental-mathematical investigation. J Virol. 2013;87(1):701–705. doi:10.1128/JVI.01453-12.
* 25. Gonzàlez-Parra G, Dobrovolny HM, Aranda DF, Chen-Charpentier B, Rojas RAG. Quantifying rotavirus kinetics in the REH tumor cell line using in vitro data. Virus Res. 2018;244:53–63. doi:10.1016/j.virusres.2017.09.023.
* 26. Wethington D, Harder O, Uppulury K, Stewart WCL, Chen P, Kang T, et al. Mathematical modeling identifies the role of adaptive immunity as a key controller of respiratory syncytial virus (RSV) titer in cotton rats; 2018.
* 27. Iwami S, Sato K, Boer RJD, Aihara K, Miura T, Koyanagi Y. Identifying viral parameters from in vitro cell cultures. Front Microbiol. 2012;3:319. doi:10.3389/fmicb.2012.00319.
* 28. Pinilla LT, Holder BP, Abed Y, Boivin G, Beauchemin CAA. The H275Y neuraminidase mutation of the pandemic A/H1N1 virus lengthens the eclipse phase and reduces viral output of infected cells, potentially compromising fitness in ferrets. J Virol. 2012;86(19):10651–10660. doi:10.1128/JVI.07244-11.
* 29. Song H, Pavlicek JW, Cai F, Bhattacharya T, Li H, Iyer SS, et al. Impact of immune escape mutations on HIV-1 fitness in the context of the cognate transmitted/founder genome. Retrovirology. 2012;9:89. doi:10.1186/1742-4690-9-89.
* 30. Paradis EG, Pinilla L, Holder BP, Abed Y, Boivin G, Beauchemin CAA.
Impact of the H275Y and I223V mutations in the neuraminidase of the 2009 pandemic influenza virus in vitro and evaluating experimental reproducibility. PLoS ONE. 2015;10(5):e0126115. doi:10.1371/journal.pone.0126115.
* 31. Simon PF, de La Vega MA, Paradis E, Mendoza E, Coombs KM, Kobasa D, et al. Avian influenza viruses that cause highly virulent infections in humans exhibit distinct replicative properties in contrast to human H1N1 viruses. Sci Rep. 2016;6:24154. doi:10.1038/srep24154.
* 32. Iwanami S, Kakizoe Y, Morita S, Miura T, Nakaoka S, Iwami S. A highly pathogenic simian/human immunodeficiency virus effectively produces infectious virions compared with a less pathogenic virus in cell culture. Theor Biol Med Model. 2017;14(1):9. doi:10.1186/s12976-017-0055-8.
* 33. Ikeda H, Godinho-Santos A, Rato S, Vanwalscappel B, Clavel F, Aihara K, et al. Quantifying the antiviral effect of IFN on HIV-1 replication in cell culture. Sci Rep. 2015;5:11761. doi:10.1038/srep11761.
* 34. Nguyen VK, Binder SC, Boianelli A, Meyer-Hermann M, Hernandez-Vargas EA. Ebola virus infection modeling and identifiability problems. Front Microbiol. 2015;6:257. doi:10.3389/fmicb.2015.00257.
* 35. Holder BP, Beauchemin CAA. Exploring the effect of biological delays in kinetic models of influenza within a host or cell culture. BMC Public Health. 2011;11 Suppl 1:S10. doi:10.1186/1471-2458-11-S1-S10.
* 36. Biedenkopf N, Hoenen T. Modeling the Ebolavirus life cycle with transcription and replication-competent viruslike particle assays. In: Ebolaviruses. Springer; 2017. p. 119–131.
* 37. Dixit NM, Markowitz M, Ho DD, Perelson AS. Estimates of intracellular delay and average drug efficacy from viral load data of HIV-infected individuals under antiretroviral therapy. Antivir Ther. 2004;9(2):237–246.
* 38. Gupta M, Greer P, Mahanty S, Shieh WJ, Zaki SR, Ahmed R, et al. CD8-mediated protection against Ebola virus infection is perforin dependent. J Immunol. 2005;174(7):4198–4202.
* 39. Schulze-Horsel J, Schulze M, Agalaridis G, Genzel Y, Reichl U. Infection dynamics and virus-induced apoptosis in cell culture-based influenza vaccine production: Flow cytometry and mathematical modeling. Vaccine. 2009;27:2712–2722. doi:10.1016/j.vaccine.2009.02.027.
* 40. Chertow DS, Shekhtman L, Lurie Y, Davey RT, Heller T, Dahari H. Modeling challenges of Ebola virus–host dynamics during infection and treatment. Viruses. 2020;12(1):106.
* 41. Edwards MR, Liu G, Mire CE, Sureshchandra S, Luthra P, Yen B, et al. Differential regulation of interferon responses by Ebola and Marburg virus VP35 proteins. Cell Rep. 2016;14(7):1632–1640.
* 42. Postnikova E, Cong Y, DeWald LE, Dyall J, Yu S, Hart BJ, et al. Testing therapeutics in cell-based assays: Factors that influence the apparent potency of drugs. PLoS ONE. 2018;13(3):e0194880. doi:10.1371/journal.pone.0194880.
* 43. Barrett NP, Mundt W, Kistner O, Howard MK. Vero cell platform in vaccine production: moving towards cell culture-based viral vaccines. Expert Rev Vaccines. 2009;8(5):607–618. doi:10.1586/erv.09.19.
* 44. Barrett NP, Terpening SJ, Snow D, Cobb RR, Kistner O. Vero cell technology for rapid development of inactivated whole virus vaccines for emerging viral diseases. Expert Rev Vaccines. 2017;16(9):883–894.
* 45. Paillet C, Forno G, Kratje R, Etcheverrigaray M. Suspension-Vero cell cultures as a platform for viral vaccine production. Vaccine. 2009;27(46):6464–6467.
* 46. Guedj J, Dahari H, Rong L, Sansone ND, Nettles RE, Cotler SJ, et al.
Modeling shows that the NS5A inhibitor daclatasvir has two modes of action and yields a shorter estimate of the hepatitis C virus half-life. Proc Natl Acad Sci USA. 2013;110(10):3991–3996. doi:10.1073/pnas.1203110110.
* 47. Heldt FS, Frensing T, Pflugmacher A, Gröpler R, Peschel B, Reichl U. Multiscale modeling of influenza A virus infection supports the development of direct-acting antivirals. PLoS Comput Biol. 2013;9:e1003372. doi:10.1371/journal.pcbi.1003372.
* 48. de M Quintela B, Conway JM, Hyman JM, Guedj J, dos Santos RW, Lobosco M, et al. A new age-structured multiscale model of the hepatitis C virus life-cycle during infection and therapy with direct-acting antiviral agents. Front Microbiol. 2018;9:601. doi:10.3389/fmicb.2018.00601.
* 49. Zitzmann C, Kaderali L. Mathematical analysis of viral replication dynamics and antiviral treatment strategies: From basic models to age-based multi-scale modeling. Front Microbiol. 2018;9:1546. doi:10.3389/fmicb.2018.01546.
* 50. Rezelj VV, Levi LI, Vignuzzi M. The defective component of viral populations. Curr Opin Virol. 2018;33:74–80. doi:10.1016/j.coviro.2018.07.014.
* 51. Calain P, Monroe MC, Nichol ST. Ebola virus defective interfering particles and persistent infection. Virology. 1999;262(1):114–128. doi:10.1006/viro.1999.9915.
* 52. Kuhn JH, Lofts LL, Kugelman JR, Smither SJ, Lever MS, Groen Gv, et al. Reidentification of Ebola virus E718 and ME as Ebola virus/H.sapiens-tc/COD/1976/Yambuku-Ecran. Genome Announc. 2014;2(6):pii: e01178-14. doi:10.1128/genomeA.01178-14.
* 53. Smither SJ, Lear-Rooney C, Biggins J, Pettitt J, Lever MS, Olinger GG Jr. Comparison of the plaque assay and 50% tissue culture infectious dose assay as methods for measuring filovirus infectivity. J Virol Methods. 2013;193(2):565–571. doi:10.1016/j.jviromet.2013.05.015.
* 54. Reed LJ, Muench H. A simple method of estimating fifty per cent endpoints. Am J Epidemiol. 1938;27(3):493–497. doi:10.1093/oxfordjournals.aje.a118408.
* 55. Beauchemin CAA. phymcmc: A convenient wrapper for emcee; 2019. https://github.com/cbeauc/phymcmc.
* 56. Foreman-Mackey D, Hogg DW, Lang D, Goodman J. emcee: The MCMC hammer. Publ Astron Soc Pac. 2013;125(925):306–312. doi:10.1086/670067.
# KoCHET: a Korean Cultural Heritage corpus for Entity-related Tasks

Gyeongmin Kim*, Jinsung Kim*, Junyoung Son*, Heuiseok Lim†
Korea University, Korea
{totoro4007, jin62304, s0ny}<EMAIL_ADDRESS>

* These authors have contributed equally to this work.
† Corresponding author.

###### Abstract

As digitized traditional cultural heritage documents have rapidly increased, resulting in an increased need for preservation and management, practical recognition of entities and typification of their classes has become essential. To achieve this, we propose KoCHET, a Korean cultural heritage corpus for the typical entity-related tasks, i.e., named entity recognition (NER), relation extraction (RE), and entity typing (ET). Advised by cultural heritage experts based on the data construction guidelines of government-affiliated organizations, KoCHET consists of 112,362, 38,765, and 113,198 examples for the NER, RE, and ET tasks, respectively, covering all entity types related to Korean cultural heritage. Moreover, unlike the existing public corpora, modification and redistribution of KoCHET are permitted for both domestic and foreign researchers. Our experimental results demonstrate the practical usability of KoCHET in the cultural heritage domain. We also provide practical insights into KoCHET in terms of statistical and linguistic analysis. Our corpus is freely available at https://github.com/Gyeongmin47/KoCHET.

## 1 Introduction

Recently, there has been increasing interest in the preservation of national historical artifacts and traditional cultural heritage, and the importance of managing them effectively through digitization and archiving has grown accordingly. As the amount of digitized information increases rapidly, information extraction (IE) tasks in natural language processing (NLP), such as named entity recognition (NER), relation extraction (RE), and entity typing (ET), have become an essential and fundamental step in the field of historical document analysis.

Despite the necessity of a well-refined entity-centric corpus specialized in domestic cultural heritage, unfortunately, no cultural heritage domain-specialized corpus exists in Korean. Moreover, conventional entity-related systems deal only with a coarse set of entity types such as person, location, and organization, which is significantly limited in terms of application. This absence of a cultural heritage domain-specialized corpus and the narrow coverage of entity types hinder the effective digitization of domestic historical documents, because a model trained on a general corpus for entity-related tasks cannot learn significant entity types such as pagodas, historical sites, and intangible heritage, or their relations. Furthermore, even beyond the cultural heritage domain, the existing entity-related datasets supervised by public institutions involve a complicated procedure for data acquisition, and they are also restricted from modification and redistribution. These cumbersome procedures and restrictions have been stumbling blocks for researchers in the face of the rapid increase in digitized cultural heritage materials over the past few decades.

To address these difficulties in the conservation of Korean cultural heritage, we introduce a new dataset collection called KoCHET (Korean Cultural Heritage corpus for Entity-related Tasks), a high-quality Korean cultural heritage domain-specialized dataset for NER, RE, and ET tasks.
For corpus construction, we crawled the e-museum digitized data of the National Museum of Korea (https://www.emuseum.go.kr/; including data from all 50 museums) as the source text, which is open to the interested public. We selectively used resources from museums in which the details of artifacts were registered; moreover, for the completeness of the attribute data, we limited the chronological range of the data from the prehistoric era to the Korean Empire era, excluding the Japanese colonial period. For the annotation, an appropriate categorization of classes and attributes was defined and developed following the 2020 Named Entity Corpus Research Analysis (https://www.korean.go.kr), published as guidelines by institutional organizations. As our corpus focuses on entity features, it has more detailed and abundant entity types, including diverse cultural heritage artifacts, than the existing accessible datasets that target several downstream tasks in addition to entity-related ones. Furthermore, the ET portion of KoCHET is the first freely available corpus for the ET task in Korean. In addition, this paper provides detailed statistics and a linguistic analysis of KoCHET for each entity-related task to demonstrate its applicability and enhance understanding of the data, along with baseline experiments with pre-trained language models. Our contributions are summarized as follows:

* • We introduce KoCHET, a corpus designed for entity-related tasks. It guarantees high quality without restrictions regarding modification and redistribution. Moreover, to the best of our knowledge, its ET corpus is the first proposed for Korean.
* • We categorize detailed entity types specialized for the cultural heritage domain, which is essential for preserving cultural and historical artifacts, thereby answering the increased demand for digitized archiving of cultural heritage documents.
* • We demonstrate the applicability of our entity-abundant corpus to each task by providing statistics and linguistic analysis, along with experiments with pre-trained language models.

## 2 Related Works

As domains that require expertise, such as cultural heritage, contain entities and relationships that rarely appear in general domains, the necessity of a domain-specialized corpus is obvious. Despite such demand, Korean does not yet have a corpus specialized in the cultural heritage area, unlike other languages.

### 2.1 General cultural heritage corpora

Corpora have been disclosed in an effort to preserve traditional culture, including cultural heritage, with data composed from the perspective of the entity-related tasks that we deal with. Examples include a Czech NER corpus constructed from public optical character recognition data of Czech historical newspapers (Hubková et al., 2020), a Chinese corpus suitable for the computational analysis of historical lexicon and semantic change (Zinin and Xu, 2020), and an English corpus that is one of the most commonly used large corpora in diachronic studies of English (Alatrash et al., 2020).

### 2.2 Korean public corpora

##### The National Institute of Korean Language

This institution, which has established the norms of Korean linguistics, constructed a large-scale dataset (https://stdict.korean.go.kr/) for new computational-linguistic studies of Korean (Kim, 2006).
##### AI HUB

AI HUB is a massive dataset integration platform (https://aihub.or.kr/) hosted by the National Information Society Agency (NIA, https://www.nia.or.kr/), a government-affiliated organization. To support the development of the Korean artificial intelligence industry in the NLP field, the NIA has disclosed domain-specific corpora; 27 datasets have been released or are being prepared.

##### Electronics and Telecommunications Research Institute

As part of the Exo-brain project (http://exobrain.kr/pages/ko/result/outputs.jsp), the institute provides corpora for NLP tasks such as morphological analysis, entity recognition, dependency parsing, and question answering, together with guidelines for building such high-quality corpora (https://www.etri.re.kr/). In addition to the datasets opened by public institutions, there is a Korean dataset publicly available for free without requiring an access request.

##### Korean Language Understanding Evaluation (KLUE) dataset

KLUE was recently released to evaluate the ability of Korean models to understand natural language with eight diverse and typical tasks (Park et al., 2021b). The tasks include natural language inference, semantic textual similarity, dependency parsing, NER, and RE.

## 3 KoCHET

Following the guidelines of Korean institutional organizations, KoCHET is a domain-specialized corpus for cultural heritage that ensures quality and can be freely accessed. In this section, we report the annotation process and guidelines in detail.

### 3.1 Annotation Process

To improve the quality of annotations on our entity-rich corpus related to cultural heritage, we conducted the annotation process based on expertise in the cultural heritage domain.

##### Annotation Guidelines

The raw corpus was divided equally among the annotators by category. The annotators were instructed to follow two types of rules from the entity guidelines mentioned in Section 1: one concerns tagging units and categories, and the other is the principle of unique tagging. For tagging units and categories, the minimum unit is one word. Tagging applies only to text written in Korean script; Chinese characters and English are not tagged, but if such text is read in Korean, it is included in the tagging range. The principle of unique tagging handles entities that belong to two or more semantic regions: a single, semantically suitable tag is assigned to a word according to a pre-defined priority. This principle applies in two cases: first, when an entity belongs to two semantic categories regardless of context, and second, when the category may vary depending on context. In both cases, tagging is determined by the pre-defined priority.

##### Annotator Training and Cross-Checking

We recruited 34 college and graduate annotators who had been professionally educated in the Korean cultural heritage domain. All annotators were trained for a week, during which each of them was familiarized with the annotation guideline and performed practice annotation on test samples. The annotation team met once a week to review and discuss each member's work during the annotation process. All entity types and relations were reviewed by four cross-checking annotators and then additionally checked by two expert supervisors.
Discrepancies between annotators on the annotated entity types and relations were also discussed and resolved in this period. These procedures improved the reliability and validity of KoCHET for cultural heritage objects.

### 3.2 Schema for Task Annotation

#### 3.2.1 Named Entity Recognition

| Label | Train | Dev | Test |
|---|---|---|---|
| Artifacts (AF) | 91,453 (35.57) | 11,374 (35.54) | 11,366 (35.35) |
| Person (PS) | 51,758 (20.13) | 6,455 (20.17) | 6,744 (20.97) |
| Term (TM) | 25,781 (10.02) | 3,175 (9.92) | 3,159 (9.82) |
| Date (DT) | 23,636 (9.19) | 2,943 (9.20) | 3,078 (9.57) |
| Political location (LCP) | 20,076 (7.80) | 2,375 (7.42) | 2,384 (7.41) |
| Civilization (CV) | 15,404 (5.99) | 1,929 (6.03) | 1,835 (5.71) |
| Material (MT) | 8,893 (3.45) | 1,160 (3.62) | 1,046 (3.25) |
| Location (LC) | 6,881 (2.67) | 857 (2.68) | 881 (2.74) |
| Animal (AM) | 4,376 (1.70) | 578 (1.81) | 566 (1.76) |
| Plant (PT) | 3,952 (1.53) | 549 (1.72) | 498 (1.55) |
| Geographical location (LCG) | 2,821 (1.09) | 354 (1.11) | 348 (1.08) |
| Event (EV) | 2,045 (0.79) | 254 (0.79) | 248 (0.77) |

Table 1: The counts of entities and their distributions (%) in our NER data.

As described in Table 1, we defined 12 entity types. They were tagged with the character-level beginning-inside-outside (BIO) tagging scheme, the generally adopted method for sequence labeling problems. For example, “아시아 (Asia): Geographical Location (LCG)” is tagged as “아: B-LCG,” “시: I-LCG,” “아: I-LCG” (see the sketch after the label descriptions below). Therefore, we evaluated the models not only with the entity-level F1 score but also with the character-level F1 score (Park et al., 2021b).

##### Label Description

* • Artifacts (AF) generally refers to objects created by humans, covering common and proper nouns, and also includes cultural properties. Artificial constructions such as buildings, civil engineering works, playground names, apartments, and bridges fall under this category.
* • Person (PS) is a category for content related to people, including real persons, mythical figures, fictional characters in games/novels, occupations, and human relationships.
* • Term (TM) includes the color, direction, shape, or form that describes an artifact. Patterns and drawings are classified as TM, owing to the characteristics of movable cultural properties.
* • Civilization (CV) is defined as terms related to civilization/culture. It targets words classified by detailed civilizations/cultures, such as clothing and food.
* • Date (DT) includes all entities related to date and time, such as dates, periods, specific days, seasons, months, years, and eras/dynasties. However, an unclear period that cannot be tagged as a separate entity is not tagged.
* • Material (MT) includes a substance used as a material or an expression for that substance. In other words, it covers entities corresponding to the detailed classification of a substance (metal, rock, wood, etc.). When an entity can be tagged both as a natural object (AM, PT) and as MT, tagging as MT takes precedence.
* • Geographical location (LCG), Political location (LCP), and Location (LC) are defined as geographical names, administrative districts, and other places, respectively.
* • Animal (AM) and Plant (PT) are defined as animals and plants, respectively, excluding humans. Animals and plants that appear as the subject of a picture are also included.
* • Event (EV) contains entities for a specific event/accident. In principle, social movements and declarations, wars, revolutions, events, festivals, etc., fall under this category and are classified only if they exist as a separate entity.
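To make the character-level BIO scheme above concrete, here is a minimal sketch in Python; the helper name and the span format are illustrative assumptions, not part of the KoCHET release.

```python
# Illustrative character-level BIO tagger; spans are (start, end, label)
# character offsets with an exclusive end.
def char_bio_tags(sentence, spans):
    tags = ["O"] * len(sentence)
    for start, end, label in spans:
        tags[start] = f"B-{label}"           # first character of the entity
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"           # remaining characters
    return tags

# "아시아" (Asia) tagged as a geographical location (LCG):
print(char_bio_tags("아시아", [(0, 3, "LCG")]))  # ['B-LCG', 'I-LCG', 'I-LCG']
```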
#### 3.2.2 Relation Extraction

Unlike other existing corpora, ours has the advantage of capturing various relationships between the multiple entities in a sentence, because more than one relation can exist per raw sentence. We consider the relations between the entities annotated in the NER annotation procedure. A given token can act as a subject or an object depending on its relationship with other tokens, and no self-relationship between identical tokens exists.

| Label | Train | Dev | Test |
|---|---|---|---|
| A depicts B | 14,157 (22.09) | 1,803 (22.45) | 1,711 (21.85) |
| A documents B | 10,214 (15.94) | 1,244 (15.49) | 1,220 (15.58) |
| A hasSection B | 6,542 (10.21) | 818 (10.19) | 776 (9.91) |
| A servedAs B | 6,546 (10.22) | 780 (9.71) | 740 (9.45) |
| A hasCreated B | 6,136 (9.58) | 759 (9.45) | 744 (9.50) |
| A OriginatedIn B | 5,456 (8.51) | 679 (8.45) | 663 (8.47) |
| A consistsOf B | 4,331 (6.76) | 569 (7.09) | 586 (7.48) |
| A isConnectedWith B | 3,489 (5.44) | 501 (6.24) | 461 (5.89) |
| A fallsWithin B | 3,454 (5.39) | 415 (5.17) | 483 (6.17) |
| A isUsedIn B | 1,906 (2.97) | 238 (2.96) | 244 (3.12) |
| A hasTime B | 934 (1.46) | 111 (1.38) | 95 (1.21) |
| A wears B | 798 (1.25) | 97 (1.21) | 86 (1.10) |
| A hasCarriedOut B | 112 (0.17) | 15 (0.19) | 19 (0.24) |
| A hasDestroyed B | 5 (0.01) | 2 (0.02) | 3 (0.04) |

Table 2: Relation counts and distributions (%) for our RE corpus.

Figure 1: Visualization of the labels that cover 84% of the entity types (left) and the 106 general and fine-grained entities with their distributions (%) (right).

As shown in Table 2, our RE corpus consists of 14 labels, defined based on the Encyves ontology research of the National Culture Research Institute (http://dh.aks.ac.kr/Encyves/wiki).

##### Label Description

* • “A depicts B” expresses the relationship between an object and its color, shape, pattern, etc.; for example, “Green Door” corresponds to this relation. It can also represent a descriptive relationship such as “Picture of a place - the place where it was taken” or “Picture of a person - the person who is the subject of the painting.”
* • “A documents B” means “~ records -”; a relationship such as “Record - the person who recorded it” can be represented by it. It also covers a record written on an object, such as “Postcard - Explanation,” or the specific language a document is written in, such as “Record - Chinese characters.”
* • “A hasSection B” indicates “~ is located at -.” It represents the relationship between a statue, building, or specific attraction and a location, such as a certain city or place.
* • “A servedAs B” means “~ is the role of -,” which corresponds to the relationship between a person and his/her position, occupation, etc.
* • “A hasCreated B” covers, for example, “Person - Document” or “Person - Painting,” i.e., the relationship between a person and a document such as a book, map, or drawing, or his/her activities in creating works.
* • “A OriginatedIn B” means “~ is discovered at -” or “~ is produced at - (time).” It indicates that a cultural property was produced at a specific time, such as “Craft-Year,” discovered at a particular place, such as “Object-Place,” or produced at a certain site, such as “Document-Place.” For example, the relation between earrings and a tomb, or between a newspaper and its publishing company, falls into this category.
* • “A consistsOf B” refers to the relation between an object and its raw ingredients, such as the soil, iron, or wood that constitute it.
* • “A isConnectedWith B” represents a person-to-person association. Relationships between two positions, or between a person and the position he or she holds, do not fall into this category.
* • “A fallsWithin B” means “~ is denominated as -.” It indicates a relationship of alternative names, such as “Person-Specific name,” between a name and the designation in front of it, or between words referring to synonymous concepts, such as “Verse-Poetry.”
* • “A isUsedIn B” indicates “~ is used for the purpose of -” or literally “~ is used in -.” For example, it can indicate the material used for a certain object, such as “Raw material-Clothes.” The relationship between an object and the place where it is used, such as a signboard and a palace, or between a means of performing a function and an object, such as “Bowl-Rice cake,” can also correspond to this category.
* • “A hasTime B” means “~ happened at -.” For example, it can indicate the relationship between a particular event and a specific date, such as “Presidential election-1928.” The relation between a specific date and a certain work, such as the year of production of a work or the year of construction of a building, can also fall under this category, e.g., “Year-Craftwork.”
* • “A wears B” means “~ puts - on.” Not only clothes such as school uniforms but also crafts, etc., may correspond to the object argument.
* • “A hasCarriedOut B” indicates “- is caused by ~.” It can represent the relationship between a specific organization or group and an event it conducted, such as a festival or social movement.
* • “A hasDestroyed B” covers an event that caused destruction, such as “War-Destroyed place,” the collapse of a country in a specific year, such as “Country-Year,” or the relationship in which a building, structure, monument, etc., was destroyed in a particular period.

| Sentence with Entity Mention | Entity Types |
|---|---|
| 조선시대에는 전통 관습을 잇기 위한 많은 향로가 제작되었다. (In the Joseon dynasty, many fragrance burners were created for traditional customs.) | DT_DYNASTY, DT_DURATION, LCP_COUNTRY, LCP_CITY, LCP_COUNTY, LC_OTHERS, AF_DOCUMENTS |
| 노란 바탕의 모란이 양쪽에 그려져 있다. (The yellow background peony is drawn on both sides.) | PT_FLOWER, PT_TYPE, PT_OTHERS, TM_SHAPE |
| 19세기 후반 청주의 재정을 파악할 수 있는 자료가 있다. (There are data to comprehend the finances of Cheongju in the late 19th century.) | DT_YEAR, DT_DYNASTY, DT_DURATION |

Table 3: Example sentences with entity mentions and their candidate fine-grained entity types; for each mention, only the context-appropriate type among the candidates is correct. All fine-grained entity types are shown in Figure 1.

#### 3.2.3 Fine-grained Entity Typing

Given a sentence and an entity mention within it, the ET task predicts a set of noun phrases that describe the mention's type.
For example, in “김홍도는 조선 후기의 화가이다.” (Kim Hong-do was a painter of the Joseon era of Korea.), Joseon should be typed as “dynasty/Date” and not “country/Location.” This typification is crucial for context-sensitive tasks such as RE, coreference resolution, and question answering (e.g., “In which era was Kim Hong-do, an artist?”). Unlike for high-resource languages, we found that no Korean corpus for the ET task had been released. To address this data scarcity and promote broader studies, we release, to the best of our knowledge, the first Korean corpus for the ET task.

The schema for the ET task was designed with reference to the data construction process of the Fine-Grained Entity Recognition dataset (Ling and Weld, 2012). Considering the properties of the cultural heritage domain, we categorized the 12 general entity types from the NER task (Section 3.2.1) into a fine-grained set of 94 types with detailed meanings. In particular, the cultural taxonomy defined in the Cultural Properties Protection Law (www.cha.go.kr) was applied to AF, and Cavalier-Smith's 2004 classification system (Cavalier-Smith, 2004) was applied to the biological scope of PT and AM. All fine-grained entity types are detailed in Figure 1.

The fine-grained entities for entity-related downstream tasks in the cultural heritage domain enable a more detailed contextualized representation of each entity mention than previous typing schemas, which only predict relatively coarse entity types. Table 3 lists three example sentences with entity mentions that can represent several fine-grained types; given a sentence with an entity mention, the appropriate type describing the role of the entity span in the sentence should be predicted. Our fine-grained entity types embrace all the existing general types and categorize them in greater detail. Accordingly, they let models understand noun phrases containing entities more richly than when models are trained to predict only relatively coarse types. In Figure 1, the circle on the left visualizes the fine-grained entity types that make up approximately 84% of all labels in the corpus, and the set on the right shows the detailed distributions of all fine-grained types. Each example includes 2.94 fine-grained entities on average, and an entity can have up to nine fine-grained entity types. The category with the most entities is “AF_DOCUMENTS,” with 17.9%, followed by “PS_NAME,” with 16.7%.

##### Label Description

* • 12 general types: PS, AF, AM, CV, DT, EV, PT, MT, TM, LC, LCG, LCP.
* • 94 fine-grained types, mapped to cultural heritage-specialized fine-grained entity labels and inspired by prior works (Ling and Weld, 2012; Gillick et al., 2014; Choi et al., 2018).

| Index | Example sentence |
|---|---|
| 1 | 앞면 좌측 하단에 ‘한번사신레꼬-드는승질상밧고거-나믈느지는안슴니다’ 문구가 있음. (There is a phrase ‘한번사신레꼬-드는승질상밧고거-나믈느지는안슴니다’ (archaic Korean) in the lower left corner of the front side.) |
| 2 | 1면에는 안창호씨(安昌浩氏)의 연설, 편집실 여언(餘言) 등의 기사가, ···, 인쇄됨. (On the first page, articles such as Mr. Changho Ahn (安昌浩氏, Chinese characters)'s speech and editorial comments (餘言, Chinese characters), ···, were printed.) |
| 3 | ‘戰爭の訓示’, ···, 등의 기사와 일본 언어학자 가나자와 쇼자부로(金澤庄三郞, 1872~1967)의 현대 국어 음운에 대한 연구물인 「朝鮮語發音篇」의 일부를 게재함. (Articles such as ‘戰爭の訓示’ (Japanese), ···, and part of 「朝鮮語發音篇」 (Chinese characters), a study on modern Korean phonology by the Japanese linguist Kanazawa Shouzaburou (金澤庄三郎, 1872~1967), were published.) |
Table 4: Example sentences contained in our corpus. These examples include not only Korean but also Japanese and Chinese characters, as well as archaic expressions no longer used in modern times; these characteristics make the corpus particularly suitable for learning the cultural heritage domain. Note that some words in the sentences are omitted for brevity.

### 3.3 Analysis on KoCHET

#### 3.3.1 Diachronic and Linguistic Analysis

There are mainly two differences between the entities in the proposed corpus and those commonly used. First, archaic expressions that are not used in modern times appear frequently in our corpus; such expressions continually appear when ancient documents or historical artifacts are quoted. Consider the phrase “한번사신레꼬-드는승질상밧고거-나믈느지는안슴니다” in sentence 1 of Table 4. Although it is written with the syllables of modern Korean, its grammar and vocabulary are fairly dissimilar from contemporary Korean, e.g., in word spacing and syllabification (the separation rules between the units of a word). Translated into modern Korean, the quoted sentence can be expressed as “한번 사신 레코드는 성질상 바꾸거나 무르지는 않습니다 (Once a record is purchased, it cannot be exchanged or refunded due to its characteristics).” Second, several Korean entities in KoCHET are followed by descriptions written in Chinese or Japanese characters. For example, as shown in sentence 2 of Table 4, a description in Chinese characters in parentheses follows the entity “안창호씨,” typically written as “안창호씨(安昌浩氏).” Japanese characters are also present throughout the corpus, enhancing its polyglot property, as shown in sentence 3. Therefore, to fully understand such expressions in our corpus, the multilingual capability of language models should be considered, particularly for token classification tasks, in which the meaning of each token directly affects model performance.

| Task | | Train | Dev | Test |
|---|---|---|---|---|
| NER | # of examples | 89,884 | 11,245 | 11,233 |
| | # of entities | 393,076 | 32,003 | 32,153 |
| RE | # of examples | 31,012 | 3,876 | 3,877 |
| | # of relations | 64,080 | 8,031 | 7,831 |
| ET | # of examples | 90,558 | 11,320 | 11,320 |
| | # of mentions | 266,209 | 33,226 | 33,395 |

Table 5: Statistics of KoCHET for each task.

| Model | NER Entity F1 (σ) | NER Character F1 (σ) | RE F1 (σ) | ET F1 (σ) |
|---|---|---|---|---|
| _Multilingual fine-tuned models_ | | | | |
| Multilingual BERT | 59.81 (0.09) | 71.80 (0.12) | 80.85 (0.39) | 91.64 (0.10) |
| XLM-RoBERTa-base | **76.57** (0.13) | **82.69** (0.09) | 80.29 (0.53) | 91.13 (0.16) |
| _Korean fine-tuned models_ | | | | |
| KLUE-BERT-base | 39.31 (0.10) | 55.63 (0.15) | **82.44** (0.18) | **93.08** (0.27) |
| KLUE-RoBERTa-base | 38.92 (0.28) | 55.47 (0.21) | 82.42 (0.57) | 92.80 (0.17) |

Table 6: Results on the NER, RE, and ET tasks. F1 score (%) is the evaluation metric; σ is the standard deviation. Baselines are divided into multilingual and Korean fine-tuned models; the highest performance per column is in bold.

#### 3.3.2 Statistics

The overall statistics of KoCHET are shown in Table 5. For the NER corpus, 457,232 entities from 112,362 examples were annotated in total.
For the RE corpus, 79,942 relations from 38,765 examples were annotated in total, and for the ET corpus, 332,830 entity mentions from 113,198 examples. The annotated corpus was divided into three subsets for each task, at a ratio of 8:1:1 for training, development, and testing, respectively. In this section, we describe our corpus statistically, in the order of NER, RE, and ET.

First, as shown in Table 1, we used 12 entity types for our cultural heritage NER corpus. Owing to the properties of the cultural heritage domain, the three primary entity types, i.e., artifacts (AF), person (PS), and term (TM), account for the majority of the total entity population: AF, PS, and TM entities take up approximately 36%, 20%, and 10%, respectively, and carry crucial information in the cultural heritage domain. The AF type includes cultural assets and historical landmarks, the TM type includes patterns or traces engraved on cultural assets, and the PS type includes not only ordinary people but also particular types of persons such as mythical figures. On the other hand, the EV type occupies the smallest proportion, approximately 0.8%, because our corpus concentrates on cultural heritage.

Second, Table 2 shows the distribution of the 14 RE labels. “A depicts B” and “A documents B” cover cultural assets left in a specific form such as records, drawings, and photographs, whereas “A hasSection B” covers cultural heritage or historical landmarks located at a specific place. Among the labels, “A depicts B,” “A documents B,” and “A hasSection B” are the most frequent, with approximately 22%, 16%, and 10% of the total, respectively. “A hasDestroyed B” has the smallest proportion, with ten relations in total, because in actual history significant events such as the collapse of a nation or the loss of cultural properties are not as diverse as the types of general cultural assets.

Finally, among the fine-grained entity types, the “AF_DOCUMENTS” type, e.g., historical documents, occupies the largest part with 17.9%, and “PS_NAME,” covering the names of historical figures, takes second place with 11.5%. On the other hand, the entity types belonging to AM, PT, MT, and EV mostly account for under 1.0% each.

## 4 Experiment

The detailed experimental settings are given in Appendix A.

##### Experimental results

According to Table 6, two tendencies are observed. First, in the NER task, the multilingual models, i.e., multilingual BERT and XLM-RoBERTa-base, showed better performance, with a difference of more than 30%, in both Entity F1 and Character F1 scores compared to the Korean models, i.e., KLUE-BERT-base and KLUE-RoBERTa-base. Second, in the RE and ET tasks, the performance of the Korean models was at least 1.1% higher than that of the multilingual models.

##### Experimental Analysis

As token classification tasks are directly affected by segmentation (Kim et al., 2021; Park et al., 2021a), models with linguistic knowledge of Chinese and Japanese perform better on such tasks (Pires et al., 2019). In other words, the multilingual models are considered to segment tokens composed of various languages better, especially in the NER corpus. In addition, as shown in Table 7, the Korean models, i.e., KLUE-BERT-base and KLUE-RoBERTa-base, show a significantly higher ratio of unknown tokens than the multilingual models; a sketch of how this ratio can be computed follows Table 7. This reflects that the NER task requires more polyglot capability from the model than the other tasks, i.e., RE and ET, which behave like sentence classification tasks. On the other hand, as the RE and ET tasks do not classify every token in a sentence, the correct answer can be satisfactorily inferred from the given Korean words alone; thereby, the language models pre-trained on Korean perform better on these two tasks than the multilingual models.

| Model | UNK_dev (%) | UNK_test (%) |
|---|---|---|
| Multilingual BERT | 0.8156 | 0.7684 |
| XLM-RoBERTa-base | 0.1952 | 0.1810 |
| KLUE-BERT-base | 5.8670 | 5.9677 |
| KLUE-RoBERTa-base | 5.8670 | 5.9677 |

Table 7: Unknown (UNK) token ratio (%) of each model on the development and test sets of the corpus. The baseline models pre-trained on Korean show identical proportions because they use the same vocabulary and tokenizer.
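The UNK ratios in Table 7 can be reproduced in a few lines. The following is a sketch of the presumed procedure (not the authors' released code); the model identifiers are public Hugging Face checkpoints, and `dev_sentences` is a placeholder for the raw development sentences.

```python
# Sketch of the UNK-ratio computation behind Table 7.
from transformers import AutoTokenizer

def unk_ratio(model_id, sentences):
    tok = AutoTokenizer.from_pretrained(model_id)
    ids = [t for s in sentences
           for t in tok(s, add_special_tokens=False)["input_ids"]]
    return 100.0 * ids.count(tok.unk_token_id) / len(ids)

# for m in ["bert-base-multilingual-cased", "xlm-roberta-base",
#           "klue/bert-base", "klue/roberta-base"]:
#     print(m, unk_ratio(m, dev_sentences))
```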
## 5 Conclusion

In this paper, we introduced KoCHET, a Korean cultural heritage corpus for three typical entity-related tasks, i.e., NER, RE, and ET. Unlike the existing public Korean datasets with additional restrictions, KoCHET obviates the cumbersome prerequisites and can be freely modified and redistributed. Furthermore, we demonstrated the applicability of our entity-abundant corpus through experiments with various pre-trained language models and provided practical insights through statistical, diachronic, and linguistic analysis. Above all, we expect the disclosure of our corpus to serve as a cornerstone for the development of IE research on traditional cultural heritage. We hope this research encourages the continuous effort to preserve cultural heritage through the effective management of digitized documents containing cultural artifacts.

## Acknowledgements

This research is supported by the Ministry of Culture, Sports and Tourism and the Korea Creative Content Agency (Project Number: R2020040045); by MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-0-01405) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation); and by an IITP grant funded by the Korea government (MSIT) (No. 2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques).

## References

* Alatrash et al. (2020) Reem Alatrash, Dominik Schlechtweg, Jonas Kuhn, and Sabine Schulte im Walde. 2020. CCOHA: Clean corpus of historical American English. In _Proceedings of the 12th Language Resources and Evaluation Conference_, pages 6958–6966.
* Cavalier-Smith (2004) Thomas Cavalier-Smith. 2004. Only six kingdoms of life. _Proceedings of the Royal Society of London. Series B: Biological Sciences_, 271(1545):1251–1262.
* Choi et al. (2018) Eunsol Choi, Omer Levy, Yejin Choi, and Luke Zettlemoyer. 2018. Ultra-fine entity typing. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 87–96.
* Conneau et al. (2020) Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Édouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 8440–8451.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Gillick et al. (2014) Dan Gillick, Nevena Lazic, Kuzman Ganchev, Jesse Kirchner, and David Huynh. 2014. Context-dependent fine-grained entity type tagging. _arXiv preprint arXiv:1412.1820_.
* Hubková et al. (2020) Helena Hubková, Pavel Král, and Eva Pettersson. 2020. Czech historical named entity corpus v 1.0. In _Proceedings of the 12th Language Resources and Evaluation Conference_, pages 4458–4465.
* Kim et al. (2021) Gyeongmin Kim, Junyoung Son, Jinsung Kim, Hyunhee Lee, and Heuiseok Lim. 2021. Enhancing Korean named entity recognition with linguistic tokenization strategies. _IEEE Access_, 9:151814–151823.
* Kim (2006) Hansaem Kim. 2006. Korean national corpus in the 21st century Sejong project. In _Proceedings of the 13th NIJL International Symposium_, pages 49–54. National Institute for Japanese Language, Tokyo.
* Ling and Weld (2012) Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. In _Twenty-Sixth AAAI Conference on Artificial Intelligence_.
* Park et al. (2021a) Chanjun Park, Sugyeong Eo, Hyeonseok Moon, and Heuiseok Lim. 2021a. Should we find another model?: Improving neural machine translation performance with ONE-piece tokenization method without model modification. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers_, pages 97–104, Online. Association for Computational Linguistics.
* Park et al. (2021b) Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Jiyoon Han, Jangwon Park, Chisung Song, Junseong Kim, Yongsook Song, Taehwan Oh, et al. 2021b. KLUE: Korean language understanding evaluation. _arXiv preprint arXiv:2105.09680_.
* Pedregosa et al. (2011) Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. 2011. Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_, 12(85):2825–2830.
* Pires et al. (2019) Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 4996–5001.
* Wang et al. (2020) Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Guihong Cao, Daxin Jiang, Ming Zhou, et al. 2020. K-Adapter: Infusing knowledge into pre-trained models with adapters. _arXiv preprint arXiv:2002.01808_.
* Zinin and Xu (2020) Sergey Zinin and Yang Xu. 2020. Corpus of Chinese dynastic histories: Gender analysis over two millennia. In _Proceedings of the 12th Language Resources and Evaluation Conference_, pages 785–793, Marseille, France. European Language Resources Association.
## Appendix A Experimental Setup

As baseline models, we employed two multilingual language models, multilingual BERT (Devlin et al., 2019) and the cross-lingual XLM-RoBERTa-base (Conneau et al., 2020), both of which cover Korean, and two KLUE language models, KLUE-BERT-base and KLUE-RoBERTa-base, which were recently published for various Korean downstream tasks. In all experiments, the performance of each model was measured five times and the average was reported as the final result. The experiments were run on four A6000 GPUs with 384 GB of memory.

The hyperparameters in the fine-tuning step were set as follows. The learning rate and weight decay were set to 5e-5 and 0.01 across all three tasks. The number of training epochs was 10 for NER and RE and 3 for ET. The batch size for training and testing was 128 for NER and RE and 256 for ET. Maximum sequence lengths of 256 and 128 were used, depending on the task.

We evaluated our systems with the F1 score, a standard metric for classification tasks. For the NER task, we used Entity F1 and Character F1 following previous research (Park et al., 2021b): Entity F1 counts a prediction as correct only when the entire entity and all its types are matched exactly, whereas Character F1 evaluates the predicted type of each syllable individually. For the RE task, we used the F1 score as implemented in the scikit-learn library (Pedregosa et al., 2011). For ET, we adopted the loose F1 score, following the evaluation criteria of previous works (Ling and Weld, 2012; Wang et al., 2020); a sketch of this metric follows below.
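As a reference for the loose F1 mentioned above, here is a small sketch of the loose macro variant in the spirit of Ling and Weld (2012); the paper does not specify the exact variant beyond the citation, so treat this as an assumption.

```python
# Loose macro F1 for entity typing; golds/preds are per-mention sets of
# fine-grained types. May differ in detail from the authors' exact script.
def loose_macro_f1(golds, preds):
    p = sum(len(g & s) / len(s) for g, s in zip(golds, preds) if s) / len(preds)
    r = sum(len(g & s) / len(g) for g, s in zip(golds, preds) if g) / len(golds)
    return 2 * p * r / (p + r) if p + r else 0.0

print(loose_macro_f1([{"DT_DYNASTY", "DT_DURATION"}], [{"DT_DYNASTY"}]))
# precision 1.0, recall 0.5 -> 0.666...
```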
# On Network Design and Planning 2.0 for Optical-computing-enabled Networks

Dao Thanh Hai (School of Science, Engineering and Technology, RMIT University Vietnam, <EMAIL_ADDRESS>)
Isaac Woungang (Department of Computer Science, Toronto Metropolitan University, Toronto, ON, Canada, <EMAIL_ADDRESS>)

###### Abstract

In accommodating the continued explosive growth of Internet traffic, optical core networks have been evolving thanks to numerous technological and architectural innovations. From an architectural perspective, the adoption of optical-bypass networking over the last two decades has resulted in substantial cost savings, owing to the elimination of massive optical-electrical-optical interfaces. In the optical-bypass framework, the basic functions of optical nodes are adding (dropping) and cross-connecting transitional lightpaths. Moreover, in the process of cross-connecting transiting lightpaths through an intermediate node, these lightpaths must be separated from one another in either the time, frequency, or spatial domain to avoid unwanted interference, which would deteriorate signal quality. In light of the recent enormous advances in photonic signal processing/computing technologies enabling precisely controlled interference of optical channels for various computing functions, we propose a new architectural paradigm for future optical networks, namely, optical-computing-enabled networks. Our proposal is defined by the added capability of optical nodes to superimpose transitional lightpaths for computing purposes, so as to achieve greater capacity efficiency. Specifically, we present two illustrative examples highlighting the potential benefits of in-network optical computing functions based on optical aggregation and the optical XOR gate. The new optical computing capabilities available at optical nodes call for a radical change in formulating networking problems and designing the accompanying algorithms, collectively referred to as optical network design and planning 2.0, so that capital and operational efficiency can be fully unlocked. To this end, we perform a case study on network coding-enabled optical networks, demonstrating the efficacy of optical-computing-enabled networks and the greater complexity of their network design problems compared to the optical-bypass counterpart.

## 1 Introduction

From an architectural perspective, optical networking has shifted from the optical-electrical-optical mode to optical-bypass operation, so that transiting lightpaths can be optically cross-connected from one end to the other rather than undergoing unnecessary optical-electrical-optical conversions [efficient]. Optical-bypass networking has gone a long way from a conceptual proposal to a technology widely adopted by network operators over the last two decades [all-optical]. However, one of the major challenges in scaling up optical networks to support explosive traffic growth more efficiently is that signal processing functions are mostly implemented in the electrical domain using digital signal processing. This involves a number of well-established procedures, mainly optical-to-electrical conversion, electronic sampling, digital signal processing, and finally back-conversion to the optical domain.
Here, the critical bottleneck lies in the electronic sampling rate, and to circumvent this concern, solutions are directed towards migrating certain signal processing functions to the optical domain. Indeed, photonic signal processing/computing technologies offer a powerful new way of handling high-speed signals thanks to their inherent merits of wide bandwidth, transparency, and energy efficiency [optical_processing_1].

In optical-bypass networking, the basic functions of optical nodes are to add (drop) and cross-connect transiting lightpaths. Moreover, in cross-connecting transiting lightpaths through an intermediate node, these lightpaths must be maximally separated from one another in either the time, frequency, or spatial domain [all-optical] to avoid unwanted interference, which would deteriorate signal quality. This turns out to be a fundamental limitation, as various optical computing operations could be performed between such transitional lightpaths to generate output signals that are spectrally more efficient than the inputs. In light of the recent enormous advances in photonic computing technologies enabling controlled interference of optical channels for various computing capabilities [agg1; agg2; xor3; nc_others10], we propose a new architectural paradigm for future optical networks, namely, optical-computing-enabled networks. Our proposal is defined by the added capability of optical nodes to superimpose transitional lightpaths for computing purposes, so as to realize greater capacity efficiency. Specifically, we present two illustrative examples highlighting the potential benefits of in-network optical computing based on optical aggregation [agg1; agg2] and the optical XOR gate [xor3; nc_others10]. These new optical computing capabilities at optical nodes call for a radical change in optical network design and planning in order to fully reap the spectral, cost, and operational benefits. To this end, we perform a case study on network coding-enabled optical networks, demonstrating the efficacy of optical-computing-enabled networks and the greater complexity of the associated network design problems compared to the optical-bypass counterpart.

The paper is structured as follows. In Section 2, we introduce the new concept of the optical-computing-enabled paradigm and highlight the applications of two optical computing operations, optical aggregation and optical XOR, whose enabling technologies have been progressing fast enough that their integration into future optical networks can be foreseen. We also address the computational impact and intricacies of network design and planning under the optical-computing-enabled paradigm. Next, as a case study revealing the more complicated network design problems arising in optical-computing-enabled networks, we focus on network coding-enabled scenarios and formulate the routing, wavelength, and network coding assignment problem in Section 3. Section 4 presents numerical evaluations comparing our proposal, which leverages optical XOR encoding within the optical-computing-enabled framework, to traditional optical-bypass networking on the realistic COST239 and NSFNET topologies. Finally, Section 5 concludes the paper.
## 2 Optical-computing-enabled Paradigm

The optical-computing-enabled paradigm is characterized by the key property that optical nodes are empowered with optical computing capability. Specifically, two or more optical channels can be optically mixed together to compute a new optical channel and thus potentially achieve greater capacity efficiency [hai_tnsm; hai_ro; hai_springer3; hai_mttw21]. In light of the tremendous progress in optical computing technologies permitting precisely controlled interference between optical channels, in-transit lightpaths traversing the same optical node have a unique opportunity to be optically superimposed on one another to generate output signals that are spectrally more efficient than their inputs. Such optical operations involving the interaction of transitional optical channels pave the way for redefining the optical network architecture, disrupting the conventional assumption of keeping transitional lightpaths untouched. In this context, the optical-computing-enabled framework is foreseen as the next evolution of optical-bypass networking.

In this section, we highlight the efficient use and impact of introducing two optical computing operations, optical aggregation (de-aggregation) and the optical XOR gate, into optical networks. The enabling technologies for these two capabilities have been accelerating, so their integration into optical nodes is technically foreseeable. Of course, there are many other ways of mixing two optical channels, and as photonic computing technologies move forward, a wide range of computing functions could be technologically realized. These advances are expected to have massive impacts on optical networks in terms of design, planning, operation, and management.

### 2.1 Optical Aggregation and De-aggregation

Aggregating lower-speed channels into a single higher-speed one has been a key function in the operation of optical networks; its goal is to achieve greater capacity efficiency by freeing up lightpaths with low wavelength utilization. Traditionally, this function has been performed in the electronic domain by terminating optical channels, re-assembling, re-modulating, and finally converting back to the optical domain. There are clearly many limitations to this approach, and it does not scale to higher bit-rate operations. To mitigate this issue, the concept of optical aggregation has recently been proposed, implemented, and pushed forward [agg11; agg12]. The main industrial player in this effort is INFINERA, whose goal is to develop a new ecosystem of devices and components capable of transforming the traditional operation of optical nodes. In terms of functionality, an optical aggregator can combine two or more optical channels of lower bit rate and/or lower-order modulation format into a single channel of higher bit rate and higher-order modulation format. In this example, we consider an optical aggregator and de-aggregator whose function is to combine two QPSK signals into a single 16-QAM channel and vice versa. Fig. 1 illustrates the schematic diagram for aggregating two lower bit-rate QPSK channels into a single higher bit-rate 16-QAM channel; in doing so, the spectral efficiency is doubled. A toy numerical sketch of this two-QPSK-to-16-QAM mapping follows below.
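The mapping can be illustrated numerically: two 2-bit QPSK symbols are packed into one 4-bit 16-QAM symbol and recovered at the destination. The toy sketch below uses assumed constellation scalings (a coarse grid for one channel, a fine grid for the other) and is not the transfer function of any specific aggregator device.

```python
# Toy model of 2xQPSK -> 16-QAM aggregation and de-aggregation; the
# constellation scalings are illustrative assumptions.
import numpy as np

QPSK = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}

def aggregate(sym_a, sym_b):
    # Channel a selects the quadrant (coarse grid), channel b the point
    # within the quadrant (fine grid): a 16-point constellation results.
    return 2 * sym_a + 0.5 * sym_b

def deaggregate(sym):
    a = np.sign(sym.real) + 1j * np.sign(sym.imag)  # recover the quadrant
    b = 2 * (sym - 2 * a)                           # strip it off, rescale
    return a, b

a, b = QPSK[(0, 1)], QPSK[(1, 0)]
assert deaggregate(aggregate(a, b)) == (a, b)
```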
Figure 1: Schematic illustration of the aggregation of two QPSK signals into a single 16-QAM one and vice versa.
Figure 2: Traffic provisioning in optical-bypass networking.
Figure 3: Optical-computing-enabled paradigm with optical aggregation and de-aggregation.

To leverage the aforementioned aggregator in optical networks, we first consider the conventional way of accommodating traffic demands in optical-bypass networking. Fig. 2 shows the routing and wavelength assignment for two demands $a$ and $b$ of the same line rate (100G) and format (QPSK). Due to the wavelength uniqueness constraint on a link, two wavelengths are needed on links $XI$ and $IC$. Now consider the case where optical aggregation is enabled at node $X$. By permitting optical aggregation, the two 100G QPSK transitional lightpaths crossing node $X$ can be optically combined to generate an output signal of 200G modulated in the 16-QAM format. As Fig. 3 clearly shows, with a single 200G wavelength channel, greater capacity efficiency is realized. At the common destination node $C$, the aggregated lightpath can be decomposed into its constituent signals, an operation that can be performed in either the optical or the electrical domain. It is important to note that, to maximize aggregation opportunities, new network design and planning algorithms must be developed to determine the pairing of demands for aggregation, the respective aggregation node, and, most importantly, the transmission parameters of the aggregated lightpaths.

### 2.2 Optical XOR Encoding and Decoding

Technologies for realizing all-optical logic gates have been advancing in recent years, permitting bit-wise exclusive-or (XOR) between optical signals of very high bit rates and/or different modulation formats [xor3; nc_others10]. Unlike the aggregation operation, the optical XOR output is kept at the same bit rate and/or format as the inputs. A functional description of such a device is shown in Fig. 5(a), where two 100G QPSK optical signals on different wavelengths are coded together to generate an output $X$ of the same bit rate and format.

Figure 4: Traffic provisioning in optical-bypass networking.
Figure 5: Optical-computing-enabled paradigm with optical XOR encoding and decoding.

To exploit the optical XOR gate, we focus on the protection scenario and assume two demands with dedicated protection. The provisioning of these two demands under the optical-bypass framework is shown in Fig. 4. Because the protection lightpaths of demands $a$ and $b$ cross the same links $XI$ and $IC$, at least two wavelengths are required on those links. Looking at Fig. 5(b) for the optical-computing-enabled paradigm, when node $X$ is armed with optical XOR encoding capability, the protection signals of demands $a$ and $b$ can be optically encoded to produce the signal $X=a\oplus b$. This encoded output is routed all the way from node $X$ to the shared destination node $C$. By doing so, a single wavelength channel on links $XI$ and $IC$ suffices, resulting in a spectral saving of $50\%$. Although the protection signals of demands $a$ and $b$ are encoded, the original signal of each demand can always be recovered under any single-link failure. The recovery is as simple as the encoding, using the XOR operation on the two remaining signals: if the working signal of demand $b$ is lost, it can be retrieved as $b=(a\oplus b)\oplus a$, as the minimal sketch below illustrates.
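At the bit level, the protection scheme above reduces to the familiar XOR identity. A minimal sketch, with arbitrary example payloads standing in for the protection copies of demands $a$ and $b$:

```python
# Minimal illustration of XOR-based protection and recovery.
a = 0b10110010       # protection copy of demand a (example payload)
b = 0b01101100       # protection copy of demand b (example payload)

x = a ^ b            # encoded signal X = a XOR b, carried on one wavelength
assert x ^ a == b    # if b's working path fails, recover b from X and a
assert x ^ b == a    # if a's working path fails, recover a from X and b
```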
The combination of optical encoding and dedicated protection is thus well matched to attaining greater capacity efficiency while keeping near-immediate recovery speed. Nevertheless, realizing the encoding benefits raises more complicated network design problems; the added intricacies concern determining the pairs of demands to encode and selecting the routes and/or transmission parameters of the encoded lightpaths.

### 2.3 Impact on Network Design and Planning

It should be noted that the optical-computing-enabled paradigm introduces more networking flexibility by permitting precisely controlled interference among two or more favorable lightpaths, and this has important ramifications for the network design algorithms that maximize the potential benefits [hai_comletter; hai_systems; hai_comcom; hai_comcom2; hai_rtuwo; hai_springer2]. In optical-bypass networking, the central problem of network design and planning is routing and resource allocation: the route and transmission parameters, including the wavelength/spectrum and/or format, are determined for each individual demand. In optical-computing-enabled networking, more complicated design problems arise due to the interaction of transitional lightpaths. Specifically, in addition to determining the route and transmission parameters of each demand, the pairing of demands, and subsequently the route and transmission parameters of the special lightpaths produced by the interaction of two or more demands, must also be identified. This represents a radical change in network design and planning, disrupting the conventional set of algorithms developed for optical-bypass networking over many years. In recognizing this disruption, we call for a new framework, optical network design and planning 2.0, which encompasses the new problems emerging from the various ways transitional lightpaths can be optically mixed, together with accompanying exact and heuristic algorithms for solving them. In the following section, we formulate the routing, wavelength, and network coding assignment problem arising from the application of optical XOR in optical-computing-enabled networks and highlight how it differs from its counterpart, routing and wavelength assignment, in optical-bypass networks.

## 3 Routing, Wavelength and Network Coding Assignment Problem Formulation in Optical-computing-enabled Networks

In this section, we consider the design of network coding-enabled networks to support a set of traffic demands with minimum wavelength link cost. The optical encoding scheme used is the simple XOR, where input signals of the same wavelength, line rate, and format are XOR-coded to produce an output signal of the same wavelength and format. The main advantage of this scheme, XOR coding between signals of the same wavelength, is the elimination of a probe signal; it can therefore be highly cost-efficient [xor-model]. Moreover, for ease of operation, encoding is restricted to the protection signals of demands sharing the same destination node, decoding takes place only at the destination, and each demand is permitted at most one encoding operation.
Under this framework, a pair of demands is code-able if the following sufficient constraints are satisfied: (i) the two demands share a common destination; (ii) the two demands use the same wavelength; (iii) their working paths are link-disjoint, and each demand's working path is link-disjoint from the other demand's protection path; and (iv) their protection paths share a common sub-path with one end at the shared destination. A sketch of this feasibility check is given after Algorithm 1 below.

Inputs:

* • $G(V,E)$: a graph representing the physical network topology, with $|V|$ nodes and $|E|$ fiber links.
* • $D$: the set of traffic demands, indexed by $d$. Each demand $d\in D$ requests one wavelength of capacity (e.g., 100 Gbps).
* • $W$: the set of wavelengths on each fiber link, indexed by $w$. The link capacity, measured in wavelengths, is $|W|$.

Outputs:

* • the routing and wavelength assignment of each lightpath;
* • the pairs of demands selected for optical XOR encoding and the respective coding nodes;
* • the routing and wavelength assignment of the encoded lightpaths;
* • the wavelength usage on each link.

Objective: minimize the wavelength link usage.

The mathematical model is formulated as an integer linear program (ILP). In addition to the typical variables and constraints accounting for selecting the route and assigning the wavelength of each demand, new variables and constraints emerge because interactions between demands are introduced, making the model an order of magnitude computationally harder than its counterpart, the traditional routing and wavelength assignment in optical-bypass networking [Algorithm1]. Acknowledging the NP-hard nature of the model, we propose the following scalable heuristic (Algorithm 1), which can be used in large networks.

Algorithm 1: Heuristic solution
Input: $G(V,E)$, $D$, $W$
Output: $\alpha_{e,w}^{d}$, $\beta_{e,w}^{d}$, $\theta_{w}^{d}$, $z_{e,w}^{d,v}$, $\delta_{v}^{d}$, $f_{d_{1}}^{d_{2}}$, $\gamma_{e,w}$
1. For each node $v\in V$: find the demands $d\in D$ with $r(d)=v$ and insert them into the set $X_{v}$.
2. Sort the sets $X_{v}$ by their size $|X_{v}|$ in descending order.
3. For each demand $d\in X_{v}$: find $k$ shortest cycles (modified Suurballe algorithm), each comprising a working route and a protection route for $d$.
4. Sort the demands $d\in X_{v}$ by the length of their $k$ cycles in descending order.
5. For each sorted demand $d\in X_{v}$, perform routing, encoding, and wavelength assignment:
   (a) select one cycle for demand $d$ out of its $k$ cycles;
   (b) perform encoding with a suitable demand, if one exists;
   (c) assign wavelengths using first-fit.
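Constraints (i)-(iv) can be read as a feasibility predicate over a pair of demands. The sketch below, in Python, is our illustration rather than part of the ILP; it assumes each demand is a dictionary carrying a destination, a wavelength, and working/protection paths as edge lists ordered towards the destination.

```python
# Sketch of the code-ability test implied by constraints (i)-(iv); the
# demand representation (dicts with dst/wl/work/prot) is an assumption.
def codeable(d1, d2):
    # (i) common destination and (ii) same wavelength
    if d1["dst"] != d2["dst"] or d1["wl"] != d2["wl"]:
        return False
    # (iii) link-disjointness: working vs. working, and each working path
    # vs. the other demand's protection path
    w1, w2 = set(d1["work"]), set(d2["work"])
    p1, p2 = set(d1["prot"]), set(d2["prot"])
    if w1 & w2 or w1 & p2 or w2 & p1:
        return False
    # (iv) protection paths must share a common sub-path ending at the
    # destination: walk both edge lists backwards until they diverge
    shared = 0
    for e1, e2 in zip(reversed(d1["prot"]), reversed(d2["prot"])):
        if e1 != e2:
            break
        shared += 1
    return shared >= 1
```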
## 4 Numerical Results

This section presents numerical evaluations comparing our proposal, which leverages optical XOR encoding within the framework of optical-computing-enabled networks, to traditional optical-bypass networking. The comparison is drawn on the realistic COST239 and NSFNET network topologies shown in Fig. 6, and the comparison metric is the traditional one, wavelength link cost. Two designs, WNC and NC, are performed, where WNC refers to routing and wavelength assignment in optical-bypass networking and NC refers to the more advanced problem, routing, wavelength and network coding assignment in optical-computing-enabled networks. We first test the solutions from the ILP models and the heuristic on a small-scale topology of 6 nodes, all of degree 3, as shown in Fig. 6 (a).

The traffic is randomly generated between node pairs with unit capacity (i.e., one wavelength), and the fiber capacity is 40 wavelengths. We consider three increasing loads corresponding to $30\%$, $70\%$ and $100\%$ (full mesh) node-pair traffic exchange. The results in Table 1 (except for the full mesh) are averaged over 20 samples. For the network-coding-based design under full-mesh traffic, the computation is overly long, so the reported results are those obtained after 10 hours of running. The well-studied heuristic for WNC achieves optimal results on a par with its ILP model, while the heuristic for NC produces reasonably good solutions with tight gaps to its ILP model while avoiding the overly long computational time. Due to the sub-optimal nature of the heuristic algorithms, the gain they obtain is slightly reduced compared to that obtained from the ILP.

Figure 6: Network Topologies under Test

Table 1: Performance Comparison between Exact and Heuristic Solutions

| Load | WNC (ILP) | NC (ILP) | Gain (ILP) | WNC (heuristic) | NC (heuristic) | Gain (heuristic) |
|---|---|---|---|---|---|---|
| $30\%$ | $32.6$ | $30.6$ | Max: $9\%$, Mean: $6\%$ | $32.6$ | $31.2$ | Max: $9\%$, Mean: $4\%$ |
| $70\%$ | $75.8$ | $67.9$ | Max: $12\%$, Mean: $10\%$ | $75.8$ | $70.6$ | Max: $9\%$, Mean: $7\%$ |
| $100\%$ | $108$ | $99$ | Max = Mean = $8\%$ | $108$ | $100$ | Max = Mean = $7\%$ |

We apply the heuristic algorithm to the larger NSFNET and COST239 topologies with the same settings as in the 6-node topology for traffic generation, fiber capacity and number of traffic samples. The results are presented in Table 2. Up to about $8\%$ gain is achieved on the NSFNET network. For the more densely connected COST239 network, a lower gain is obtained, up to $5\%$. The solution in the NC cases is always better than that of WNC, resulting in improved capacity efficiency. Compared to the findings for the O-E-O case icc , where gains of up to $20\%$ were reported, the gain in the all-optical case is reduced. This may be due to the wavelength-related constraints on network coding assignment, which curb the coding opportunities among demands. Moreover, it should be noted that the gain depends strongly on the structure of the network topology, the traffic and the network design algorithms.

Table 2: Numerical Results for Realistic Topologies

| Topology | Load | WNC | NC | Gain | No. of Coding Operations |
|---|---|---|---|---|---|
| NSFNET | $30\%$ | 318.3 | 299.5 | Max = $8\%$, Mean = $6\%$ | 9.7 |
| NSFNET | $70\%$ | 730.4 | 682.4 | Max = $8\%$, Mean = $7\%$ | 24.5 |
| NSFNET | $100\%$ | 1048 | 981 | Max = Mean = $6\%$ | 35 |
| COST239 | $30\%$ | 126.2 | 123.3 | Max = $3\%$, Mean = $2\%$ | 1.4 |
| COST239 | $70\%$ | 295.6 | 285.4 | Max = $5\%$, Mean = $3\%$ | 5.1 |
| COST239 | $100\%$ | 420 | 404 | Max = Mean = $4\%$ | 8 |

## 5 Conclusion

This paper presents a new networking paradigm for future optical networks, the optical-computing-enabled framework. As a potential candidate for the next evolution of the optical-bypass architecture, our proposal aims to exploit optical computing capabilities at optical nodes so that greater capacity efficiency can be achieved. To highlight the potential benefits of the optical-computing-enabled framework, we presented two revealing examples leveraging the efficient use of optical aggregation and the optical XOR gate. A numerical case study for the network-coding-enabled design demonstrated both the efficacy and the more complicated network design algorithms of optical-computing-enabled networking compared to its optical-bypass counterpart.
Albeit still at an early stage, the prospect of permitting optical mixing at intermediate nodes heralds a reinvention of optical networking, altering the way we think about optical network architecture. Although experiments and practical demonstrations of optical-computing-enabled networks still lie ahead, the disconnect between theoretical studies and implementation realities is beginning to be rectified as the enabling technologies mature and substantial benefits are foreseen. It should be pointed out that network design algorithms play a key role in achieving the operational efficiency of a network, and thus more robust, carefully designed algorithms should be developed to maximize the advantages of optical-computing-enabled networking. As future work, we plan to develop an ecosystem of new research problems and accompanying algorithms, collectively referred to as optical network design and planning 2.0, to capture a wide range of optical computing operations between in-transit lightpaths in their optimal usage scenarios.

## References

* (1) Saleh, A., Simmons, J.M.: Technology and architecture to enable the explosive growth of the internet. IEEE Communications Magazine 49(1), 126–132 (2011). DOI 10.1109/MCOM.2011.5681026
* (2) Saleh, A., Simmons, J.M.: All-optical networking: Evolution, benefits, challenges, and future vision. Proceedings of the IEEE 100(5), 1105–1117 (2012). DOI 10.1109/JPROC.2011.2182589
* (3) Willner, A.E., Fallahpour, A., Alishahi, F., Cao, Y., Mohajerin-Ariaei, A., Almaiman, A., Liao, P., Zou, K., Willner, A.N., Tur, M.: All-optical signal processing techniques for flexible networks. Journal of Lightwave Technology 37(1), 21–35 (2019). DOI 10.1109/JLT.2018.2873245
* (4) Wang, H., Pan, L., Ji, Y.: All-optical aggregation and de-aggregation of 4×BPSK-16QAM using nonlinear wave mixing for flexible optical network. IEEE Journal of Selected Topics in Quantum Electronics 27(2), 1–8 (2021). DOI 10.1109/JSTQE.2019.2943375
* (5) Li, Q., Yang, X., Yang, J.: All-optical aggregation and de-aggregation between 8QAM and BPSK signal based on nonlinear effects in HNLF. Journal of Lightwave Technology 39(17), 5432–5438 (2021). DOI 10.1109/JLT.2021.3084353
* (6) Chen, L.K., Li, M., Liew, S.C.: Breakthroughs in photonics 2014: Optical physical-layer network coding, recent developments, and challenges. IEEE Photonics Journal 7(3), 1–6 (2015). DOI 10.1109/JPHOT.2015.2418264
* (7) Kotb, A., Zoiros, K.E., Guo, C.: 1 Tb/s all-optical XOR and AND gates using quantum-dot semiconductor optical amplifier-based turbo-switched Mach–Zehnder interferometer. Journal of Computational Electronics (2019). DOI 10.1007/s10825-019-01329-z
* (8) Hai, D.T.: On routing, wavelength, network coding assignment, and protection configuration problem in optical-processing-enabled networks. IEEE Transactions on Network and Service Management 20(3), 2504–2514 (2023). DOI 10.1109/TNSM.2023.3283880
* (9) Hai, D.T.: Optical-computing-enabled network: An avant-garde architecture to sustain traffic growth. Results in Optics 13, 100504 (2023). DOI 10.1016/j.rio.2023.100504. URL https://www.sciencedirect.com/science/article/pii/S2666950123001566
* (10) Hai, D.T.: Optical networking in future-land: from optical-bypass-enabled to optical-processing-enabled paradigm. Optical and Quantum Electronics 55(864) (2023). DOI 10.1007/s11082-023-05123-x. URL https://doi.org/10.1007/s11082-023-05123-x
* (11) Hai, D.T.: Quo vadis, optical network architecture? Towards an optical-processing-enabled paradigm. In: 2022 Workshop on Microwave Theory and Techniques in Wireless Communications (MTTW), pp. 193–198 (2022). DOI 10.1109/MTTW56973.2022.9942542
* (12) Welch, D., Napoli, A., Bäck, J., Sande, W., Pedro, J., Masoud, F., Fludger, C., Duthel, T., Sun, H., Hand, S.J., Chiang, T.K., Chase, A., Mathur, A., Eriksson, T.A., Plantare, M., Olson, M., Voll, S., Wu, K.T.: Point-to-multipoint optical networks using coherent digital subcarriers. Journal of Lightwave Technology 39(16), 5232–5247 (2021). DOI 10.1109/JLT.2021.3097163
* (13) Bäck, J., Wright, P., Ambrose, J., Chase, A., Jary, M., Masoud, F., Sugden, N., Wardrop, G., Napoli, A., Pedro, J., Iqbal, M.A., Lord, A., Welch, D.: Capex savings enabled by point-to-multipoint coherent pluggable optics using digital subcarrier multiplexing in metro aggregation networks. In: 2020 European Conference on Optical Communications (ECOC), pp. 1–4 (2020). DOI 10.1109/ECOC48923.2020.9333233
* (14) Hai, D.T.: Leveraging the survivable all-optical WDM network design with network coding assignment. IEEE Communications Letters 21(10), 2190–2193 (2017). DOI 10.1109/LCOMM.2017.2720661
* (15) Hai, D.T., Chau, L.H., Hung, N.T.: A priority-based multiobjective design for routing, spectrum, and network coding assignment problem in network-coding-enabled elastic optical networks. IEEE Systems Journal 14(2), 2358–2369 (2020). DOI 10.1109/JSYST.2019.2938590
* (16) Hai, D.T.: A bi-objective integer linear programming model for the routing and network coding assignment problem in WDM optical networks with dedicated protection. Computer Communications 133, 51–58 (2019). DOI 10.1016/j.comcom.2018.08.006
* (17) Hai, D.T.: On routing, spectrum and network coding assignment problem for transparent flex-grid optical networks with dedicated protection. Computer Communications (2019). DOI 10.1016/j.comcom.2019.08.005
* (18) Hai, D.T.: Re-designing dedicated protection in transparent WDM optical networks with XOR network coding. In: 2018 Advances in Wireless and Optical Communications (RTUWO), pp. 118–123 (2018). DOI 10.1109/RTUWO.2018.8587873
* (19) Hai, D.T.: Network coding for improving throughput in WDM optical networks with dedicated protection. Optical and Quantum Electronics 51(387) (2019). DOI 10.1007/s11082-019-2104-5. URL https://doi.org/10.1007/s11082-019-2104-5
* (20) Porzi, C., Scaffardi, M., Potì, L., Bogoni, A.: All-optical XOR gate by means of a single semiconductor optical amplifier without assist probe light. In: 2009 IEEE LEOS Annual Meeting Conference Proceedings, pp. 617–618 (2009). DOI 10.1109/LEOS.2009.5343425
* (21) Varvarigos, E., Christodoulopoulos, K.: Algorithmic aspects of optical network design. In: 2011 15th International Conference on Optical Network Design and Modeling (ONDM), pp. 1–6 (2011)
* (22) Øverby, H., Biczók, G., Babarczi, P., Tapolcai, J.: Cost comparison of 1+1 path protection schemes: A case for coding. In: 2012 IEEE International Conference on Communications (ICC), pp. 3067–3072 (2012). DOI 10.1109/ICC.2012.6363928
# Elastic Bayesian Model Calibration

Devin Francom (Los Alamos National Laboratory), J. Derek Tucker (Sandia National Laboratories), Gabriel Huerta (Sandia National Laboratories), Kurtis Shuler (Sandia National Laboratories), Daniel Ries (Sandia National Laboratories)

###### Abstract

Functional data are ubiquitous in scientific modeling. For instance, quantities of interest are modeled as functions of time, space, energy, density, etc. Uncertainty quantification methods for computer models with functional response have resulted in tools for emulation, sensitivity analysis, and calibration that are widely used. However, many of these tools do not perform well when the model's parameters control both the amplitude variation of the functional output and its alignment (or phase variation). This paper introduces a framework for Bayesian model calibration when the model responses are misaligned functional data. The approach generates two types of data from the misaligned functional responses: one that isolates the amplitude variation and one that isolates the phase variation. These two types of data are created for the computer simulation data (both of which may be emulated) and for the experimental data. The calibration approach uses both types, so that it seeks to match both the amplitude and the phase of the experimental data. The framework respects the constraints that arise, especially when modeling phase variation, in a way that can be implemented with readily available calibration software. We demonstrate the techniques on a simulated data example and on two dynamic material science problems: a strength model calibration using flyer plate experiments and an equation of state model calibration using experiments performed on the Sandia National Laboratories' Z-machine.

Keywords: amplitude/phase variability, Bayesian model calibration, functional data analysis, material strength calibration.

## 1 Introduction

In domains of science and engineering where modeling is an important part of investigation and discovery, quantifying uncertainty in model inferences and predictions can be essential for model performance to be trusted. When a model has uncertain parameters (or inputs), model calibration is the act of tuning the parameters so that the model produces a desired response. Most often, models are calibrated to experimental or observational measurements, so that calibration seeks to make the model response reflect reality. Model calibration (also known as inversion) is often a poorly identified problem, where multiple combinations of inputs produce equally valid solutions. Kennedy and O'Hagan (2001) proposed Bayesian model calibration as a systematic approach to calibrating a model in the face of all of the sources of uncertainty, so that calibration uncertainty is quantified. These sources of uncertainty are parameter uncertainty, measurement uncertainty, emulation or surrogate model uncertainty (an emulator is a fast surrogate for a more expensive model), and model form error (i.e., model misspecification, model discrepancy or model bias). Often, computer model outputs are functional in nature, producing an output measured over space and/or time. The majority of calibration work for these types of outputs has been done on features extracted from the outputs, such as peak points or critical values (Walters et al., 2018). However, the process of extracting these features can be tedious, error prone, and problem specific.
Other calibration approaches have been developed for multivariate or functional response (Higdon et al., 2008; Bayarri et al., 2007; Francom et al., 2019), though they are prone to problems when presented with misaligned functional response data. Our goal is to extend functional emulation and calibration methods for use when the response is misaligned, and to do so without human-intensive feature engineering.

There has been considerable effort in statistics to develop methods that can analyze functional data objects without loss of information. Such methodology is known as functional data analysis and has a rich history; excellent introductions are given in several books, including Ramsay and Silverman (2005), Horvath and Kokoszka (2012), and Srivastava and Klassen (2016). An interesting aspect of most functional data is that the underlying variability can be ascribed to two sources, termed the amplitude ($y$, or vertical) variability and the phase ($x$, horizontal, or warping) variability. Capturing these two sources of variability is crucial when modeling and monitoring functional data in a process control architecture, and can greatly affect the construction of statistics (e.g., tolerance bounds, model parameters). In this work, we refer to methods that handle both amplitude and phase variability in functional data as _elastic_.

This important concept is illustrated in Figure 1 through a simulated example of two functions, each containing a peak and a trough. The functions vary in the heights of their peaks and troughs and, more markedly, in their placement. The relative heights of the peaks can be attributed to amplitude variability, while the different locations of the peaks constitute phase variability. The phase variability can be accounted for by first aligning the functions: the right panel of Figure 1 shows time-aligned functions, where the alignment involves a transformation of the horizontal axis via the warping function shown in the middle panel. The aligned function ($f_{1}(\gamma(t))$) captures the amplitude variability while the warping function ($\gamma$) captures the phase variability. The Bayesian model calibration method introduced in this paper considers the shape of the data by accounting for both directions of variability.

Figure 1: Demonstration of amplitude and phase variability in functional data. Left: original functions. Middle: warping function ("phase variability"). Right: aligned functions ("amplitude variability").

While standard calibration solutions are still applicable to misaligned functions, especially when emulation is unnecessary, we demonstrate that functional metrics that respect the misaligned nature of the data produce more accurate calibration solutions and more reasonable model formulations. Specifically, we use elastic functional data analysis methods (Srivastava and Klassen, 2016; Tucker et al., 2013; Marron et al., 2015) to construct a metric measuring the distance between functions and to specify a likelihood. We do this by decomposing our functional responses into aligned functions and warping functions, as shown in Figure 1. For expensive models we can then build an emulator or surrogate for the aligned functional responses and, under a suitable transformation, an emulator for the warping functions. These two emulators can be used to calibrate in such a way that a proper distance is used.
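The decomposition just described is available in software. The following minimal sketch uses the fdasrsf Python package (referenced again in Section 3.3.5) to align a bundle of toy curves and return the aligned functions and warping functions; the toy data are ours, and the exact API calls may differ across fdasrsf versions.

```python
import numpy as np
import fdasrsf as fs

# Toy misaligned data: Gaussian bumps with varying peak height and location.
t = np.linspace(0, 1, 101)
rng = np.random.default_rng(0)
f = np.column_stack([
    (0.8 + 0.4 * rng.random())
    * np.exp(-(t - 0.35 - 0.3 * rng.random()) ** 2 / 0.005)
    for _ in range(20)
])  # shape (len(t), n_curves), the layout fdasrsf expects

warp = fs.fdawarp(f, t)
warp.srsf_align()          # alignment in SRVF space

aligned = warp.fn          # amplitude variability (aligned functions)
gammas = warp.gam          # phase variability (warping functions)
print(aligned.shape, gammas.shape)
```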
The strengths of this approach are that (1) emulation is likely to be more accurate when applied to aligned data instead of misaligned data (Francom et al., 2022), (2) discrepancy modeling can be done in such a way that it is isolated to the phase or the amplitude part of the model, and (3) the alignment procedure ensures the isometry property holds, and therefore the calibration is performed using a proper distance. Similarly, Kleiber et al. (2014) proposed a functional calibration approach using a deformation that relies on the $\mathbb{L}^{2}$ metric in the standard function space. However, this metric is degenerate and exhibits a "pinching effect" which the elastic methods avoid, as demonstrated in Srivastava and Klassen (2016). We apply our approach to two dynamic material science applications described in the following subsections.

### 1.1 Equation of state exploration using Z-machine experiments

This problem is motivated by data from Brown et al. (2014), who conducted dynamic materials experiments on tantalum (Ta) with pulsed magnetic fields generated by Sandia National Laboratories' Z-machine (Savage et al., 2007). A basic version of the experiments performed at the Z-machine is shown in Figure 2 (left). The Z-machine delivers electrical currents along an aluminum (Al) panel, creating massive pressure that drives an impulse into a tantalum sample. Stress waves flow through the tantalum, and the experiment results in measurements of velocity on the far side of the sample as a function of time. More details on the execution of the experiment can be found in Brown et al. (2014) and Lemke et al. (2005). Figure 2 (right) shows the functional responses of these nine experiments.

Figure 2: Left: example of experiments at the Z-machine. Right: velocity response functions from nine experiments.

The computer model input parameters consist of calibration parameters of key interest and experiment-specific parameters that are not completely known.

### 1.2 Material strength exploration using gas gun experiments

Boteler and Dandekar (2006) performed a series of experiments in which a flyer was shot from a gas gun into a plate and the velocity of the opposite surface of the plate was measured (as a function of time) as the resultant shock wave moved through it. This experiment is similar to the Z-machine example above, except that the shock waves propagating through the plate are driven by the flyer's impact instead of the more continuous electrical-current drive. Figure 3 shows three velocimetry curves measured during three of these flyer plate gas gun experiments. Walters et al. (2018) used these experiments to parameterize a strength model for aluminum. As with the Z-machine computer model, flyer plate impact simulations include both inputs to be calibrated (strength model parameters) and experiment-specific parameters that are uncertain (e.g., the exact velocity of the flyer).

Figure 3: From Walters et al. (2018), the response from three gas gun experiments where both flyer and plate were aluminum alloys.

The structure of the rest of this paper is as follows. Section 2 gives a high-level overview of model calibration based on the Kennedy and O'Hagan (2001) framework. Section 3.1 describes the types of variability in functional output, functional alignment with a proper distance metric, and how to measure the amount of variability in amplitude and phase space.
Section 3.2 describes the elastic Bayesian model calibration method, with various modeling choices discussed in Section 3.3. Section 4 gives two simulation studies comparing the proposed elastic functional calibration method with functional response calibration that does not account for misalignment. Section 5 applies the proposed method to the two material model calibration problems detailed above. Lastly, Section 6 provides general conclusions and considerations for future work.

## 2 Bayesian Model Calibration

### 2.1 Univariate Response

The traditional approach to Bayesian model calibration, introduced by Kennedy and O'Hagan (2001) and expanded in Higdon et al. (2004), seeks to calibrate the parameters of a model using observations. Let $y(\bm{x},\bm{u})$ denote the model, where $\bm{x}$ denotes conditions that are certain, often fixed conditions of an experiment, and $\bm{u}$ denotes uncertain parameters in need of calibration. Let $z(\bm{x})$ denote an observation. In these approaches, $\bm{x}$ could include functional variables like space or time and/or conditions at which an experiment was performed. Let $n$ be the number of observations or experimental conditions measured, so that our observed data are $z(\bm{x}_{1}),\dots,z(\bm{x}_{n})$. Then the calibration model is

$$z(\bm{x}_{i}) = y(\bm{x}_{i},\bm{\theta}) + \delta(\bm{x}_{i}) + \epsilon(\bm{x}_{i}), \qquad \epsilon(\bm{x}_{i}) \sim \mathcal{N}(0,\sigma_{\epsilon}^{2}), \qquad (1)$$

where $\bm{\theta}$ denotes the best set of calibration parameters, $\delta$ denotes (latent) error in the form of the model (often called model discrepancy), and $\epsilon$ denotes measurement or observation error in $z$. The Gaussian likelihood specified here assumes that measurement errors are independent and identically distributed, but other measurement error models can be used. The unknowns in this model are the calibration parameters $\bm{\theta}$, the measurement error variance $\sigma^{2}_{\epsilon}$, and the form of the discrepancy function. In order for this model to be identifiable, priors for each of the unknowns need to be chosen carefully. If $n$ is small, the prior for $\sigma^{2}_{\epsilon}$ will be influential. Additionally, there is a natural trade-off between the calibration parameters and the model discrepancy, so at least one of these needs to be well constrained by the prior. For instance, we could use a Gaussian process prior for the discrepancy function $\delta$ that prefers small-valued smooth functions, or we could specify priors that prefer positive values or monotone functions (Higdon et al., 2004; Brynjarsdóttir and O'Hagan, 2014).

In many realistic scenarios the model $y$ is expensive to evaluate, requiring powerful computing and non-trivial amounts of time. This slows down the evaluation of the likelihood and makes inference impractical. In these scenarios, we build a surrogate model (or emulator) to use in place of $y$, trained using a (relatively) small number of model evaluations, $\{y(\bm{x}_{j},\bm{u}_{j})\}_{j=1}^{N_{sim}}$ (Sacks et al., 1989). The full Bayesian approach to inference then seeks to learn all the unknowns (calibration parameters, model discrepancy, measurement error, and emulator parameters) conditional on all of the data ($n$ observations and $N_{sim}$ model runs).
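To make the univariate formulation in equation (1) concrete, here is a minimal random-walk Metropolis sampler with the discrepancy $\delta$ set to zero; the toy simulator, priors, and tuning constants are illustrative assumptions of ours, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def y(x, theta):
    """Toy stand-in for the computer model (illustrative only)."""
    return theta[0] * np.sin(x) + theta[1] * x

# Synthetic observations z(x_i) from a 'true' theta plus measurement noise.
x = np.linspace(0, 1, 15)
z = y(x, np.array([0.7, 0.3])) + rng.normal(0, 0.05, x.size)

def log_post(theta, log_sig2):
    if np.any(theta < 0) or np.any(theta > 1):   # uniform prior on [0,1]^2
        return -np.inf
    sig2 = np.exp(log_sig2)
    resid = z - y(x, theta)
    # Gaussian likelihood from eq. (1) with delta = 0, plus a weak
    # standard-normal prior on log(sigma^2).
    return (-0.5 * x.size * np.log(sig2) - 0.5 * resid @ resid / sig2
            - 0.5 * log_sig2 ** 2)

theta, ls2 = np.array([0.5, 0.5]), np.log(0.01)
lp = log_post(theta, ls2)
draws = []
for _ in range(5000):
    th_p = theta + rng.normal(0, 0.05, 2)        # random-walk proposals
    ls2_p = ls2 + rng.normal(0, 0.1)
    lp_p = log_post(th_p, ls2_p)
    if np.log(rng.random()) < lp_p - lp:         # Metropolis accept/reject
        theta, ls2, lp = th_p, ls2_p, lp_p
    draws.append(theta.copy())
print("posterior mean of theta:", np.mean(draws[1000:], axis=0))
```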
Higdon et al. (2004) use this approach with a Gaussian process emulator, while Kennedy and O'Hagan (2001) fix some of the Gaussian process emulator and/or discrepancy parameters in advance for computational and identifiability reasons. More generally, Liu et al. (2009) describe cases where modularization of the model results in better inference, and many practitioners use modularized approaches for philosophical or computational reasons.

### 2.2 Functional Response

Various authors have extended the Bayesian model calibration approach above to be more explicitly suited to functional response. Functional response can vastly increase the training data size, but Gaussian processes with Kronecker covariance structures can help produce scalable models (Williams et al., 2006). A somewhat different approach is used by Gu and Berger (2016), which fits a Gaussian process for each output while sharing some parameters across models; with a suitable discrepancy model, the calibration can then be obtained. The most common approach is to project the functional response onto basis functions and build functional models for calibration in the reduced-dimension basis coefficient space (Higdon et al., 2008; Bayarri et al., 2007; Francom et al., 2019). Let $t$ denote the functional variable (e.g., time), and let $z(t,\bm{x}_{i})$ denote an experimental measurement from the $i^{th}$ experiment at functional variable $t$. Similarly, let $y(t,\bm{x}_{i},\bm{u})$ denote a simulation of the $i^{th}$ experiment at functional variable $t$ with input parameters $\bm{u}$. Then an approach to Bayesian model calibration with functional response specifies

$$z(t,\bm{x}_{i}) = y(t,\bm{x}_{i},\bm{\theta}) + \delta(t,\bm{x}_{i}) + \epsilon(t,\bm{x}_{i}), \qquad \epsilon(t,\bm{x}_{i}) \sim \mathcal{N}(0,\sigma_{\epsilon}^{2}). \qquad (2)$$

The typical approach to inference is to discretize $t$ onto a grid $t_{1},\dots,t_{N_{T}}$, which generates $N_{T}$-dimensional vectors $\bm{z}(\bm{x}_{i})$, $\bm{y}(\bm{x}_{i},\bm{\theta})$, and $\bm{\delta}(\bm{x}_{i})$ of the respective functions evaluated on the discretized grid. This simplifies the model to a multivariate representation,

$$\bm{z}(\bm{x}_{i}) = \bm{y}(\bm{x}_{i},\bm{\theta}) + \bm{\delta}(\bm{x}_{i}) + \bm{\epsilon}(\bm{x}_{i}), \qquad \bm{\epsilon}(\bm{x}_{i}) \sim \mathcal{N}(\bm{0},\sigma_{\epsilon}^{2}\bm{I}). \qquad (3)$$

As in the univariate case, when $y$ is expensive to evaluate, we require an emulator or surrogate model in order to evaluate the likelihood function quickly. Higdon et al. (2008) project the model runs onto functional principal components and project the discrepancy onto a separate flexible basis, with inference carried out in the resulting low-dimensional space. Bayarri et al. (2007) project the model runs and discrepancy onto a wavelet basis and carry out the inference in the low-dimensional space. Francom et al. (2019) project the model runs onto functional principal components, allow the discrepancy to be full-dimensional, and fit the emulator in a modular fashion. All of these calibration approaches can be applied to misaligned functional data, but all will have difficulty with emulation and with the specification of model discrepancy, as they deal explicitly only in amplitude variation. Additionally, distances computed between the different functions will not be proper distances, since they do not account for the phase variability.
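A minimal sketch of the basis-projection idea behind these functional approaches: project the $N_{sim} \times N_{T}$ matrix of discretized model runs onto a few functional principal components via the SVD and work in the low-dimensional coefficient space. The emulator in that space is omitted here, and the data and variance threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discretized model runs: N_sim curves on a common grid of N_T time points.
N_sim, N_T = 200, 100
t = np.linspace(0, 1, N_T)
u = rng.random((N_sim, 2))
Y = np.stack([np.exp(-(t - 0.3 - 0.2 * a) ** 2 / 0.01) * (1 + b)
              for a, b in u])

# Functional principal components via SVD of the centered run matrix.
mu = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - mu, full_matrices=False)
k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1  # 99% variance
B = Vt[:k]                  # k basis functions of length N_T
W = (Y - mu) @ B.T          # N_sim x k coefficients, emulated as functions of u

recon = mu + W @ B          # reconstruction from k components
print(k, np.max(np.abs(recon - Y)))
```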
## 3 Elastic Bayesian Model Calibration

In this section, we review elastic functional data analysis, introduce our elastic approach to calibration, and suggest various modeling choices and practical considerations.

### 3.1 Elastic Functional Data Analysis

#### 3.1.1 Types of variability and model discrepancy in functional outputs

When model outputs are functional, we must consider two types of variability: amplitude and phase variability. Amplitude variability is variability in the output at a fixed time $t$, or, more simply, $y$-axis variation. Phase variability is variability in time, or, more simply, $x$-axis variability. In computer model applications, input variables can induce both phase and amplitude variability in the model outputs, resulting in fundamentally different shapes over the range of plausible inputs. Additionally, model discrepancy can result in an imperfect match between the computer model prediction and the observed data even at the correct value of the model inputs. With misaligned functional data, we must consider model discrepancy in both phase and amplitude variability to accurately represent discrepancy-induced shape distortions in the functional predictions. When model discrepancy is not solely driven by amplitude variability, pointwise (non-elastic) calibration (as in Section 2.2) will produce biased estimates of the calibration parameters.

Another significant problem with the use of the $\mathbb{L}^{2}$ metric is known as the pinching problem (Ramsay and Li, 1998). Specifically, if we have two functions $f_{1}$ and $f_{2}$ and the range of $f_{1}$ is entirely above the range of $f_{2}$, the $\mathbb{L}^{2}$ metric becomes degenerate and pinches the warped function. To address this problem, Srivastava et al. (2011) introduced a mapping for functional data called the square-root velocity function (SRVF) that improves functional alignment and provides the fundamental mathematical identities underpinning the formal development of this topic. Moreover, the metric used in the alignment is a proper distance and avoids the pinching effects of the standard $\mathbb{L}^{2}$ metric in function space without the use of a penalty. We propose functional calibration metrics that account for both phase and amplitude variation while properly measuring the distance between functions.

#### 3.1.2 Functional Alignment

To explain metrics for comparing functional data in a calibration setting, we simplify to the comparison of two functions of $t$, $z(t,\bm{x}_{i})$ and $y(t,\bm{x}_{i},\bm{\theta})$. Varying $\bm{\theta}$ changes the shape of $y(t,\bm{x}_{i},\bm{\theta})$, so we seek $\bm{\theta}$ such that $z(t,\bm{x}_{i})$ and $y(t,\bm{x}_{i},\bm{\theta})$ are optimally matched, where the optimality criterion considers the distance between the functions in both amplitude and phase. For notational convenience, and to emphasize that we are comparing these functions for fixed $\bm{x}_{i}$ and $\bm{\theta}$, we write $z(t,\bm{x}_{i})$ and $y(t,\bm{x}_{i},\bm{\theta})$ as $z(t)$ and $y(t)$ for the rest of this subsection. To measure the distance between $z(t)$ and $y(t)$, we use elastic functional data analysis (EFDA) (Srivastava and Klassen, 2016). The main premise behind EFDA is to construct a proper distance metric between the computational prediction $y(t)$ and the experimental data $z(t)$.
To construct this metric, a continuous mapping $\gamma_{y\rightarrow z}(t):[0,1]\rightarrow[0,1]$ between $y(t)$ and the experimental data $z(t)$ is constructed such that $\gamma$ is a diffeomorphism. The function $\gamma_{y\rightarrow z}(t)$ is referred to as a warping function, as it measures phase distortions in $y(t)$ such that $y\circ\gamma_{y\rightarrow z}(t)=y(\gamma_{y\rightarrow z}(t))$ aligns with $z(t)$. Srivastava et al. (2011) and Tucker et al. (2013) show that by applying a specific transformation to the original functions $z(t)$ and $y(t)$, there exist simple expressions for the amplitude and phase distance between functions. We now describe how to construct this transformation and how to define the distance metrics on the transformed data.

The functions $z(t)$ and $y(t)$ are transformed to their square-root velocity functions (SRVFs). That is, we define the SRVF of $f(t)$ as

$$q(t) = \operatorname{sgn}(\dot{f}(t))\sqrt{|\dot{f}(t)|}, \qquad (4)$$

where $\dot{f}$ denotes the time derivative of $f$. The SRVF is a bijective mapping up to a translation; that is, $f(t)$ can be uniquely determined from $q(t)$ and a single point on the curve $f(t)$. Let $q_{z}(t)$ and $q_{y}(t)$ denote the SRVFs of $z(t)$ and $y(t)$. The warping function that aligns $y$ to $z$, denoted $\gamma_{y\rightarrow z}$, can be estimated by solving the optimization problem

$$\gamma_{y\rightarrow z} = \operatorname*{arg\,inf}_{\gamma\in\Gamma}\|q_{z}-(q_{y}\circ\gamma)\sqrt{\dot{\gamma}}\| \qquad (5)$$

via dynamic programming (Tucker et al., 2013) or using a Bayesian approach (Cheng et al., 2016; Lu et al., 2017). An advantage of this approach to warping function estimation is that the analyst does not have to specify landmarks for function alignment; the estimation of $\gamma$ is achieved by using the group structure of $\Gamma$. When we use the optimization in Equation 5 in later sections, we will refer to it as a decomposition (specifically, the warping decomposition) because it decomposes a misaligned function into an aligned function and a warping function. Given an alignment function $\gamma(t)$ and the aligned SRVF, $(q_{y}\circ\gamma)(t)\sqrt{\dot{\gamma}(t)}$, we can construct measures of phase and amplitude variability. Specifically, amplitude variability is measured as

$$d_{a}(q_{z},q_{y}) = \|q_{z}-(q_{y}\circ\gamma_{y\rightarrow z})\sqrt{\dot{\gamma}_{y\rightarrow z}}\|^{2}. \qquad (6)$$

Srivastava et al. (2011) show that this $\mathbb{L}^{2}$ distance on the transformed and aligned SRVF functions is a proper distance metric for amplitude.
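A small numpy sketch of the SRVF transform in equation (4) and the aligned amplitude distance in equation (6). The warping function is taken as given here (in practice it comes from the dynamic-programming solution of equation (5), e.g. via fdasrsf), and the test curves are illustrative. Note that computing the SRVF of the warped curve equals $(q_{y}\circ\gamma)\sqrt{\dot{\gamma}}$, since $\dot{\gamma}>0$.

```python
import numpy as np

def srvf(f, t):
    """Square-root velocity function q(t) = sgn(f'(t)) sqrt(|f'(t)|), eq. (4)."""
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

def amplitude_dist(fz, fy, gamma, t):
    """Squared L2 distance between SRVFs after warping y by gamma, eq. (6)."""
    qz = srvf(fz, t)
    y_warp = np.interp(gamma, t, fy)   # y o gamma evaluated on the grid t
    q_warp = srvf(y_warp, t)           # equals (q_y o gamma) * sqrt(gamma')
    return np.trapz((qz - q_warp) ** 2, t)

# Illustrative check: a time-shifted bump has near-zero amplitude distance
# once the shift is undone by the (here, known) warping function.
t = np.linspace(0, 1, 501)
fz = np.exp(-(t - 0.5) ** 2 / 0.01)
fy = np.exp(-(t - 0.4) ** 2 / 0.01)
gamma = np.clip(t - 0.1, 0, 1)         # toy warp undoing the 0.1 shift
print(amplitude_dist(fz, fy, gamma, t))  # ~ 0
```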
#### 3.1.3 Measuring phase distance

Defining a measure of phase variability is more difficult than for amplitude variability because the space of warping functions, $\Gamma$, is an infinite-dimensional nonlinear manifold and therefore cannot be treated as a standard Hilbert space. Since we would like to exploit Riemannian-geometric structure when making inferences about the warping functions, we again apply a specific transformation to the warping functions such that we can use a standard distance metric (the $\mathbb{L}^{2}$ norm) to measure distance. Specifically, we represent an element $\gamma\in\Gamma$ by the square root of its derivative, $\psi=\sqrt{\dot{\gamma}}$. Note that this is the same as the SRVF defined earlier, and takes this form since $\dot{\gamma}>0$. The identity warping $\gamma_{z\rightarrow z}$ maps to the constant function $\psi_{z\rightarrow z}(t)=1$, which corresponds to no warping. Since $\gamma(0)=0$, the mapping from $\gamma$ to $\psi$ is a bijection, and one can reconstruct $\gamma$ from $\psi$ using $\gamma(t)=\int_{0}^{t}\psi(s)^{2}ds$. An important advantage of this transformation is that since $\|\psi\|^{2}=\int_{0}^{1}\psi(t)^{2}dt=\int_{0}^{1}\dot{\gamma}(t)dt=\gamma(1)-\gamma(0)=1$, the set of all such $\psi$'s is the positive orthant of the Hilbert sphere $\Psi=\mathbb{S}_{\infty}^{+}$ (i.e., a unit sphere in the Hilbert space $\mathbb{L}^{2}$). In other words, the square-root representation simplifies the complicated geometry of $\Gamma$ to a unit sphere. The distance between any two warping functions, i.e., the phase distance, is exactly the arc length between their corresponding SRVFs on the unit sphere $\mathbb{S}_{\infty}$:

$$d_{p}(\gamma_{1},\gamma_{2}) = d_{\psi}(\psi_{1},\psi_{2}) \equiv \cos^{-1}\left(\int_{0}^{1}\psi_{1}(t)\psi_{2}(t)\,dt\right).$$

While the geometry of $\Psi\subset\mathbb{S}_{\infty}$ is more tractable, it is still a nonlinear manifold and computing distances remains difficult. Instead, we use a tangent (vector) space at a fixed point for further analysis. The tangent space at any point $\psi\in\Psi$ is given by $T_{\psi}(\Psi)=\{v\in\mathbb{L}^{2} \mid \int_{0}^{1}v(t)\psi(t)\,dt=0\}$. To map between the representation space $\Psi$ and tangent spaces, one requires the exponential and inverse-exponential mappings. The exponential map at a point $\psi\in\Psi$, denoted $\exp_{\psi}:T_{\psi}(\Psi)\mapsto\Psi$, is defined as

$$\exp_{\psi}(v) = \cos(\|v\|)\psi + \sin(\|v\|)\frac{v}{\|v\|}, \qquad (7)$$

where $v\in T_{\psi}(\Psi)$. Thus, $\exp_{\psi}(v)$ maps points from the tangent space at $\psi$ to the representation space $\Psi$. Similarly, the inverse-exponential map, denoted $\exp_{\psi}^{-1}:\Psi\mapsto T_{\psi}(\Psi)$, is defined as

$$\exp_{\psi}^{-1}(\psi_{1}) = \frac{\kappa}{\sin(\kappa)}(\psi_{1}-\cos(\kappa)\psi), \qquad (8)$$

where $\kappa=d_{p}(\gamma_{1},\gamma)$. This mapping takes points from the representation space to the tangent space at $\psi$. The tangent space representation $v$ is sometimes referred to as a _shooting vector_. A sensible point at which to define the tangent space for calibration on $\Psi$ is the identity element $\psi_{z\rightarrow z}$. Since we are aligning the simulations relative to the experimental data $z$, phase variability is defined relative to $q_{z}$. Therefore, the warping function for the experimental data is $\gamma_{z\rightarrow z}$ (no warping is required) and the inverse-exponential map of this function is 0 by definition, so deviations of $v$ from 0 represent deviations in phase from the experimental data.
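A numpy sketch of the sphere geometry just described: mapping a warping function to its SRVF $\psi=\sqrt{\dot{\gamma}}$, computing the phase distance, and moving between the sphere and the tangent space via equations (7) and (8). The grid and the test warp are illustrative.

```python
import numpy as np

t = np.linspace(0, 1, 501)

def to_psi(gamma):
    """SRVF of a warping function: psi = sqrt(gamma')."""
    return np.sqrt(np.gradient(gamma, t))

def inner(a, b):
    return np.trapz(a * b, t)

def phase_dist(psi1, psi2):
    """Arc length on the unit Hilbert sphere (phase distance)."""
    return np.arccos(np.clip(inner(psi1, psi2), -1.0, 1.0))

def inv_exp(psi, psi1):
    """Inverse-exponential map, eq. (8): shooting vector from psi to psi1."""
    kappa = phase_dist(psi1, psi)
    if kappa < 1e-12:
        return np.zeros_like(psi)
    return kappa / np.sin(kappa) * (psi1 - np.cos(kappa) * psi)

def exp_map(psi, v):
    """Exponential map, eq. (7): tangent vector v back onto the sphere."""
    nv = np.sqrt(inner(v, v))
    if nv < 1e-12:
        return psi.copy()
    return np.cos(nv) * psi + np.sin(nv) * v / nv

psi_id = np.ones_like(t)                # identity warp gamma(t) = t
gamma = t ** 1.5                        # a toy warping function
v = inv_exp(psi_id, to_psi(gamma))      # shooting vector at the identity
back = exp_map(psi_id, v)               # round trip recovers psi
print(phase_dist(back, to_psi(gamma)))  # ~ 0 up to discretization error
```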
### 3.2 Using Elastic FDA within Bayesian Model Calibration

With the distance metrics defined in the previous section, we are ready to detail how to do Bayesian model calibration with misaligned functional responses, which we call elastic Bayesian model calibration. We decompose the observations from experiment $i$ into aligned functions and warping functions (each specific to experiment $i$) so that

$$z(t,\bm{x}_{i}) = \tilde{z}(t,\bm{x}_{i}) \circ_{t} \gamma_{z\rightarrow z}(t,\bm{x}_{i}), \qquad (9)$$

where $\circ_{t}$ emphasizes that the composition is only in $t$, such that $f(a,b)\circ_{a}g(a,c)=f(g(a,c),b)$. This is, of course, the identity warping, so nothing is happening in this step, but we are explicit about it to allow for generalizations later. We similarly decompose each simulation (e.g., simulation $j$) with

$$y(t,\bm{x}_{j},\bm{u}_{j}) = \tilde{y}(t,\bm{x}_{j},\bm{u}_{j}) \circ_{t} \gamma_{y\rightarrow z}(t,\bm{x}_{j},\bm{u}_{j}), \qquad (10)$$

using the warping decomposition of Equation 5. To facilitate modeling with proper distance metrics, we transform the warping functions into shooting vector space with

$$\bm{v}_{z\rightarrow z}(\bm{x}_{i}) = \exp_{\psi}^{-1}\left(\sqrt{\dot{\gamma}_{z\rightarrow z}(\bm{x}_{i})}\right), \qquad (11)$$

$$\bm{v}_{y\rightarrow z}(\bm{x}_{j},\bm{u}_{j}) = \exp_{\psi}^{-1}\left(\sqrt{\dot{\gamma}_{y\rightarrow z}(\bm{x}_{j},\bm{u}_{j})}\right). \qquad (12)$$

We can then use these aligned model runs and associated shooting vectors for emulator building, if desired.

#### 3.2.1 Calibration when Emulation is Unnecessary

Our calibration model using the aligned data and shooting vectors has likelihood components

$$\tilde{z}(t,\bm{x}_{i}) = \tilde{y}(t,\bm{x}_{i},\bm{\theta}) + \delta_{\tilde{y}}(t,\bm{x}_{i}) + \epsilon_{\tilde{z}}(t,\bm{x}_{i}), \qquad \epsilon_{\tilde{z}}(t,\bm{x}_{i}) \sim \mathcal{N}(0,\sigma_{\tilde{z}}^{2}), \qquad (13)$$

$$\bm{v}_{z\rightarrow z}(\bm{x}_{i}) = \bm{v}_{y\rightarrow z}(\bm{x}_{i},\bm{\theta}) + \bm{\delta}_{v}(\bm{x}_{i}) + \bm{\epsilon}_{v}(\bm{x}_{i}), \qquad \bm{\epsilon}_{v}(\bm{x}_{i}) \sim \mathcal{N}(\bm{0},\sigma_{v}^{2}\bm{I}). \qquad (14)$$

In order to do inference with this model, we discretize $t$ so that Equation 13 can be rewritten in vector form, $\tilde{\bm{z}}(\bm{x}_{i})=\tilde{\bm{y}}(\bm{x}_{i},\bm{\theta})+\bm{\delta}_{\tilde{y}}(\bm{x}_{i})+\bm{\epsilon}_{\tilde{z}}(\bm{x}_{i})$, and the likelihood can then be written as

$$f\left(\tilde{\bm{z}}(\bm{x}_{1}),\dots,\tilde{\bm{z}}(\bm{x}_{n}),\bm{v}_{z\rightarrow z}(\bm{x}_{1}),\dots,\bm{v}_{z\rightarrow z}(\bm{x}_{n}) \mid \bm{\theta},\sigma^{2}_{\tilde{z}},\sigma^{2}_{v},\bm{\beta}_{\tilde{y}},\bm{\beta}_{v}\right) = \prod_{i=1}^{n}\left[\mathcal{N}\left(\tilde{\bm{z}}(\bm{x}_{i}) \mid \tilde{\bm{y}}(\bm{x}_{i},\bm{\theta})+\bm{\delta}_{\tilde{y}}(\bm{x}_{i}),\,\sigma_{\tilde{z}}^{2}\bm{I}\right)\,\mathcal{N}\left(\bm{v}_{z\rightarrow z}(\bm{x}_{i}) \mid \bm{v}_{y\rightarrow z}(\bm{x}_{i},\bm{\theta})+\bm{\delta}_{v}(\bm{x}_{i}),\,\sigma_{v}^{2}\bm{I}\right)\right],$$

where $\bm{\beta}_{\tilde{y}}$ and $\bm{\beta}_{v}$ parameterize the discrepancy. With priors specified for $\bm{\theta}$, $\sigma^{2}_{\tilde{z}}$, $\sigma^{2}_{v}$, $\bm{\beta}_{\tilde{y}}$, and $\bm{\beta}_{v}$, the posterior $\pi\left(\bm{\theta},\sigma^{2}_{\tilde{z}},\sigma^{2}_{v},\bm{\beta}_{\tilde{y}},\bm{\beta}_{v} \mid \tilde{\bm{z}}(\bm{x}_{1}),\dots,\tilde{\bm{z}}(\bm{x}_{n}),\bm{v}_{z\rightarrow z}(\bm{x}_{1}),\dots,\bm{v}_{z\rightarrow z}(\bm{x}_{n})\right)$ is proportional to the likelihood multiplied by the priors and can be sampled with MCMC. The main benefit of this approach when an emulator is unnecessary is that the discrepancy model can be specified in a more reasonable way and is used with a proper distance metric in the likelihood. Note that this approach requires the warping decomposition to be called for each likelihood evaluation. Even though this is just the decomposition of a single curve and fairly fast, the expense can make emulation more desirable.
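Before turning to emulation, here is a compact sketch of one evaluation of the elastic calibration log-likelihood (vectorized forms of equations 13 and 14), with the discrepancies set to zero for simplicity. The "emulator" calls are placeholders for any predictor of aligned curves and shooting vectors at $\bm{\theta}$, and all names are illustrative.

```python
import numpy as np

def gauss_loglik(resid, sig2):
    """Independent Gaussian log-likelihood for a residual vector."""
    return (-0.5 * resid.size * np.log(2 * np.pi * sig2)
            - 0.5 * resid @ resid / sig2)

def elastic_loglik(theta, sig2_z, sig2_v, data, emu_aligned, emu_shoot):
    """Sum of eq. (13) and eq. (14) contributions over n experiments,
    with delta_y = delta_v = 0 for simplicity."""
    ll = 0.0
    for (z_aligned, v_obs, x) in data:      # one tuple per experiment i
        y_aligned = emu_aligned(x, theta)   # predicted aligned curve at theta
        v_pred = emu_shoot(x, theta)        # predicted shooting vector at theta
        ll += gauss_loglik(z_aligned - y_aligned, sig2_z)
        ll += gauss_loglik(v_obs - v_pred, sig2_v)
    return ll

# Toy usage with trivial stand-in 'emulators'.
t = np.linspace(0, 1, 50)
emu_a = lambda x, th: np.exp(-(t - th[0]) ** 2 / 0.01)
emu_v = lambda x, th: th[1] * np.sin(np.pi * t)
rng = np.random.default_rng(3)
data = [(emu_a(None, [0.5, 0.2]) + 0.01 * rng.normal(size=t.size),
         emu_v(None, [0.5, 0.2]), None)]
print(elastic_loglik([0.5, 0.2], 0.01 ** 2, 1e-4, data, emu_a, emu_v))
```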
#### 3.2.2 Calibration when Emulation is Necessary

Assume that the set of $N_{sim}$ model runs has been decomposed into aligned functions and warping functions. We can then use separate or joint emulators with training inputs $\left\{\bm{x}_{j},\bm{u}_{j}\right\}_{j=1}^{N_{sim}}$ and outputs $\left\{\tilde{\bm{y}}(\bm{x}_{j},\bm{u}_{j}),\bm{v}_{y\rightarrow z}(\bm{x}_{j},\bm{u}_{j})\right\}_{j=1}^{N_{sim}}$. A full Bayesian approach to emulation and calibration results in a joint likelihood of the observation data and model runs, while a modular Bayesian approach fits the emulator first and then uses the emulator in the calibration model. Because these models result in likelihoods and posteriors that differ little from the likelihood without an emulator, we omit the explicit likelihood here. For Gaussian process emulators, the likelihood under the full Bayesian model can be seen in Higdon et al. (2004, 2008).

### 3.3 Modeling Choices and Practical Considerations

To this point, we have discussed how using standard functional response calibration tools with misaligned functional responses is possible but prone to problems. We also noted that emulation techniques for functional response do not perform well when applied to misaligned functional responses, as demonstrated in Francom et al. (2022). We then showed how elastic functional data analysis methods use proper distance metrics by aligning functions in SRVF space and using (1) the distance between aligned functions and (2) the distance between shooting vectors (transformed warping functions). We then framed functional response computer model calibration so that it uses these proper elastic metrics. Simply put, we use the warping decomposition to separate our misaligned functional responses into aligned functional responses and shooting vectors, and we use these two new datasets instead of the original data when calibrating. Below we discuss additional modeling choices and practical considerations for using this methodology.

#### 3.3.1 Warping decomposition

Uncertainty: Because we use an optimization technique to obtain the warping functions, we are fixing a part of the model that could be considered uncertain. This is a modeling choice, similar to the choice of Kennedy and O'Hagan (2001) to fix some emulator parameters at maximum likelihood values, or to the choice of Higdon et al. (2008) not to allow for uncertainty in the basis representation of the functional response (i.e., the functional principal components are fixed). If the warping functions were inferred jointly with all of the other unknowns in a Bayesian framework, this would lead to a much greater computational burden and would require much more specialized calibration software. A possible shortcut could be to use modular Bayes techniques to propagate uncertainty from the warping decomposition to calibration uncertainty while cutting the feedback from the calibration to the warping decomposition (Liu et al., 2009; Plummer, 2015).

Regularization: Another practical consideration is the family of warping functions allowed in the warping decomposition, $\Gamma$. If these functions are not smooth or regularized, their variation can be difficult to predict using the parameters, resulting in emulators with large residual variance. However, if they are over-regularized, they may not align the functional responses enough. If one wants to control the amount of warping or elasticity, this can be done as described in Wu and Srivastava (2011) by adding a penalty to Equation 5.
Choice of alignment reference: The decomposition can be performed either by aligning the model to the experiment, as described above, or by aligning to some other common element (e.g., one of the model runs). Hence, the quantities $\gamma_{y\rightarrow z}$, $\gamma_{z\rightarrow z}$, $\bm{v}_{y\rightarrow z}$, and $\bm{v}_{z\rightarrow z}$ used above can be replaced with $\gamma_{y\rightarrow y^{*}}$, $\gamma_{z\rightarrow y^{*}}$, $\bm{v}_{y\rightarrow y^{*}}$ and $\bm{v}_{z\rightarrow y^{*}}$ for some common reference $y^{*}(t)$, such as the first model run. There are a few reasons why this can be useful. First, the model runs have no discrepancy or noise, which means that warping to them can be more stable. Second, exploration of the computer model is often of interest even when there is no calibration data, in which case emulation is frequently used and can be performed more accurately when accounting for misalignment.

Modeling transformed aligned curves: Recall that we use the SRVF transformation of the original curves to obtain the warping decomposition. However, we opt not to build the model for the aligned data in the SRVF space. Modeling in derivative space means that when models are transformed back to the native space, errors are integrated and unrealistic heteroskedasticity arises. Note that this problem is muted when transforming from shooting vectors to warping functions because of the highly constrained shape of the warping functions.

#### 3.3.2 Emulation

Francom et al. (2022) showed that taking alignment into account can significantly improve emulator accuracy and efficiency, though their approach relied on a model for landmarks (rather than shooting vectors) to build warping functions. They found that, in many cases, training separate emulators for aligned functions and warping functions is more desirable than training a single emulator for both. This is because the warping function model will often have larger unexplained variation (i.e., larger residual variance than the aligned data model, even under appropriate standardization) because of the latent nature of the warping functions. Subtle variations in the warping function regularization level can result in fairly significant variations in the shooting vectors, and while this variation is less pronounced when transforming back into warping function space, the larger variation can "corrupt" (Liu et al., 2009) the modeling of the aligned data unless careful precautions are taken. The easiest precaution is simply to create the two emulators independently, which works well in practice.

#### 3.3.3 Discrepancy models

The decomposition of misaligned data into aligned data and shooting vectors not only facilitates better emulation and a better calibration likelihood (distance metrics), it also facilitates more realistic discrepancy modeling. For instance, if the discrepancy is a time shift, it can be expressed through the shooting vector discrepancy model $\bm{\delta}_{v}$. If the discrepancy is a change to one feature of the curves, it can be handled directly through the aligned data discrepancy model $\bm{\delta}_{\tilde{y}}$. The specification of $\bm{\delta}_{\tilde{y}}$ can be reasoned about in a natural way; for instance, a set of basis functions could be used as in Higdon et al. (2008). However, the specification of $\bm{\delta}_{v}$ is more nuanced because it is in a transformed space.
For instance, adding a constant in that space (e.g., $\bm{\delta}_{v}(\bm{x}_{i})=\bm{1}$) has no effect, because the shooting vectors are scaled by their norm in the exponential map that transforms them back to an SRVF on the unit sphere. Hence, a constant time-shift discrepancy is not achieved by adding a constant to the shooting vectors. We have found that a piecewise linear basis for the shooting vector discrepancy is effective at capturing time shifts, as we demonstrate in the flyer plate analysis below.

#### 3.3.4 Residual error models

We typically assume independence between $\bm{\epsilon}_{\tilde{z}}$ and $\bm{\epsilon}_{v}$, but this can be relaxed. Of greater interest is the assumption of independence within $\bm{\epsilon}_{\tilde{z}}$ and within $\bm{\epsilon}_{v}$. As mentioned above, adding a correlation structure to the residual model still results in appropriate distance metrics when the Mahalanobis distance is used in the likelihood calculation. This can be done explicitly to avoid the problems identified in Brown and Hund (2018).

#### 3.3.5 Using Standard Functional Calibration Tools for Elastic Calibration

Our formulation results in a typical Gaussian-likelihood pointwise calibration, though using the aligned responses and the shooting vectors instead of the original misaligned functional responses (the likelihood function for the elastic calibration is formed by combining vector versions of Equations 13 and 14). This means that the same tools discussed in Section 2.2 can be used for elastic calibration, making the application of these methods very practical. For instance, SEPIA (Gattiker et al., 2023) implements the model of Higdon et al. (2008), and Francom et al. (2023) implements a similar model that also allows for modularization of the emulator as in Francom et al. (2019), as well as MCMC that is robust to posterior multimodality (via tempering). Both of these tools can be used to do elastic calibration, and the warping decomposition can be achieved using fdasrsf (Tucker, 2023). Further, several choices are available for emulating computer models with functional response, some variations of which are described in Francom et al. (2022) and Hutchings et al. (2023). In the examples that follow, we use a Bayesian multivariate adaptive regression spline (BMARS) emulator (Francom et al., 2018) for both the aligned simulated curves and their corresponding shooting vectors, paired with the calibration approach in Francom et al. (2023). We chose BMARS due to its performance and its ability to scale to the large number of training points in our equation of state and material strength examples, though the methods are agnostic to emulator choice.

## 4 Simulated Examples

We create two simulated examples, both of which have three parameters to calibrate and misaligned functional responses simulated using a Gaussian density function. One example is formulated without model discrepancy and the other with discrepancy.

### 4.1 Example 1

The first simulated data set is constructed from the following model, where each function is a parameterized Gaussian pdf:

$$y(t,\bm{u}) = \frac{u_{1}}{0.05\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{t-(\sin(2\pi u_{0}^{2})/4-u_{0}/10+0.5)}{0.05}\right)^{2}\right)+0\cdot u_{2}.$$

The calibration parameters are $\bm{u}=[u_{0},u_{1},u_{2}]$, though the model only uses the first two (the term $0\cdot u_{2}$ makes explicit that $u_{2}$ is a nuisance parameter with no effect). There are no experimental parameters ($\bm{x}$), so we omit these from the notation used in the previous sections.
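Since the simulator is fully specified, the training ensemble described in the next paragraph is easy to reproduce. This sketch generates the 100 model runs and the synthetic experiment at $\bm{u}^{*}$; the grid resolution and random seed are our choices.

```python
import numpy as np

def y_model(t, u):
    """Parameterized Gaussian pdf of Example 1; u[2] is a nuisance input."""
    mu = np.sin(2 * np.pi * u[0] ** 2) / 4 - u[0] / 10 + 0.5
    return (u[1] / (0.05 * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * ((t - mu) / 0.05) ** 2) + 0 * u[2])

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 100)                    # time grid (our choice)
U = rng.random((100, 3))                      # 100 runs, u ~ U[0,1]^3
runs = np.stack([y_model(t, u) for u in U])   # simulated ensemble

u_star = np.array([0.1028, 0.5930, 0.0])
z = y_model(t, u_star)                        # 'experimental' data, no noise
print(runs.shape, z.max())
```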
We generated a set of 100 functions (model runs) using $\bm{u}_{1},\dots,\bm{u}_{100}$, where each $\bm{u}_{j}$ was sampled uniformly within the unit cube. The "experimental data" $z(t)$ were generated using the parameter values $\bm{u}^{*}=[0.1028,0.5930,0]$ without any model discrepancy, so that $z(t)=y(t,\bm{u}^{*})$. Figure 4 presents the simulated model runs in grey and the experimental data in black.

Figure 4: Simulated curves $y(t,\bm{u}_{1}),\dots,y(t,\bm{u}_{100})$ and experimental data $z(t)$.

We separated the phase and amplitude by aligning the simulated curves to the experimental data curve using the elastic methodology from Section 3.1. Figure 5(a) presents the curves aligned to the experimental data, while Figure 5(b) presents the corresponding warping functions of both the simulated curves ($\gamma_{y\rightarrow z}(t,\bm{u}_{j})$ for $j=1,\dots,100$) and the experimental data ($\gamma_{z\rightarrow z}(t)$, which is the identity). Figure 5(c) presents the corresponding shooting vectors for the simulations ($\bm{v}_{y\rightarrow z}(\bm{u}_{j})$) and the experimental data ($\bm{v}_{z\rightarrow z}$). As noted previously, when the simulated curves are aligned to the experimental data, the corresponding warping function for the experimental data is the identity function and its shooting vector is equal to zero.

Figure 5: Alignment of simulated curves to experimental data. (a) Aligned simulated functions. (b) Warping functions. (c) Shooting vectors.

We then built separate BMARS emulators for the aligned curves and the shooting vectors and performed a (modular) elastic Bayesian model calibration as described in Section 3.2 to infer the posterior.

Figure 6: Predictive posterior samples after calibration of the simulated data. (a) Calibrated and aligned curves. (b) Calibrated warping functions. (c) Calibrated shooting vectors.

Figure 6 presents posterior predictive samples after calibration for the aligned curves, the corresponding warping functions and the shooting vectors (in blue). Figure 7 presents posterior predictive samples in the original data space (in blue) with the simulated curves/model runs (in grey) and the experimental data (in black). The predictive samples cover the experimental data with small uncertainty, showing good performance of the emulator at $\bm{\theta}$. A pairwise plot summarizing the posterior samples of $\bm{\theta}$ (marginal and bivariate) is shown in Figure 8. The true value of $\bm{\theta}$ is represented by the mark (x) in the lower triangle and by the vertical line on the diagonal. The true parameter values are covered by high-density regions of the posterior, indicating they are recovered well by the model. Additionally, the distribution of the nuisance parameter resembles a uniform distribution (its prior) and does not affect the calibration.

Figure 7: Predictive posterior samples after calibration of simulated data in the original data space.

Figure 8: Pair plot of parameters after calibration of simulated data, with the truth shown by the x and vertical line.

To compare our elastic method to the standard method, we fitted a BMARS emulator to the original (misaligned) simulated computer model output and directly performed a modular Bayesian model calibration. Figure 9 presents the posterior predictive samples after calibration of the simulated data in the original data space. The predictions do not resemble the experimental data well and have more than one peak, since phase variability is not taken into account.
The mean computed from this emulator is not representative of the underlying data. Additionally, Figure 10 presents the pairwise plot of the calibrated parameters with the true value shown by the mark. The resulting posterior distribution covers the true value of the parameters, but with much larger uncertainty. In comparison to Figure 8, we immediately notice the benefit of adopting Bayesian model calibration within an elastic framework. This is shown more clearly in Figure 11, where the marginal posteriors and the bivariate posterior contours (at 95%) are compared for the traditional calibration (not aligned) and the elastic functional data calibration (aligned).

Figure 9: Predictive posterior samples after calibration of the simulated data in the original data space, where the emulator was fitted on the original data space.

Figure 10: Pairs plot of the posterior samples of the parameters after calibration of the simulated data, with the true value marked by the x and vertical line. The emulator was fitted directly on the original data.

Figure 11: Pairs plot of the marginal posteriors and bivariate posterior contours (95%) for the calibrations with alignment and without alignment.

### 4.2 Example 2

To explore how the methodology handles model discrepancy, another simulated data set was generated based on the model

$$y(t,\bm{u}) = \frac{u_{1}}{0.05\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{t-(u_{0}+0.1)}{0.05}\right)^{2}\right),$$

for which 300 functions were generated with $u_{0}$ and $u_{1}$ drawn from the $U[0,0.3]$ distribution. These functions are considered the "computer model output." An additional nuisance parameter, $u_{2}$, was drawn from the standard uniform distribution $U[0,1]$. The experimental data were generated according to

$$z(t) = \frac{u^{*}_{1}}{0.05\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{t-(u^{*}_{0}+0.3)}{0.05}\right)^{2}\right),$$

with $\bm{u}^{*}=[u^{*}_{0},u^{*}_{1}]=[0.3,0.2]$; the time shift of $0.2$ (an offset of $0.3$ in the data-generating model versus $0.1$ in the computer model) constitutes model discrepancy. As in the previous example, the 300 generated functions were aligned to the experimental data, and emulators were built from the aligned functions and shooting vectors and used for calibration.

A more traditional approach to this problem would be to pick out important features of the misaligned curves and perform the emulation and calibration (including discrepancy) using those features. As an example, we compare against a calibration that reduces each curve to its maximum value and the timing of that maximum. We then build an emulator to jointly model these features, and we specify a constant-shift discrepancy model. Due to the inclusion of the discrepancy model, posterior predictive samples from both the elastic and the standard method show good agreement with, and coverage of, the experimental curve. As shown in the right panel of Figure 12, the posteriors of $\bm{\theta}$ for the standard method have credible intervals that cover the true parameter values. However, the left panel shows that the posteriors of $\bm{\theta}$ from the elastic functional method also have good coverage while providing tighter credible intervals. This is shown more clearly in Figure 13, where the posteriors from both approaches are compared via marginal distributions and bivariate contours.

Figure 12: Predictive posterior samples after calibration of the simulated data with discrepancy, in the original data space. (a) Elastic method. (b) Standard method.
## 5 Dynamic Material Model Calibration We explore the application of the elastic functional Bayesian calibration method to two real-world applications: a tantalum equation of state calibration and a material strength calibration for an aluminum alloy. ### 5.1 Tantalum Z-Machine In this application, we seek to calibrate the equation of state (EoS) of tantalum (Ta) generated from pulsed magnetic fields (Brown et al., 2014). We aim to estimate parameters describing the compressibility (the relationship between pressure and density) to better understand how materials compress at extreme pressures. Ta is an ideal material for this study as it is able to remain in its initial crystal structure at pressures up to 10 million times standard atmospheric pressure (Soderlind and Moriarty, 1998). A description of the Ta experiments is shown in Figure 2. The experiments were conducted using Sandia National Laboratories’ Z-machine, which is a pulsed power driver that can deliver massive electric currents over short time scales. These currents were forced to flow along an aluminum (Al) panel, producing a large magnetic pressure that drives a time-dependent stress wave (impulse) into the system. Ta samples and transparent lithium fluoride (LiF) windows were glued to the panel such that the stress wave propagates sequentially through each of these materials. The material properties are modeled using a physically motivated form given by Vinet et al. (1989). This form describes the pressure-density $(P-\rho)$ response as $P(\rho)=3B_{0}\left(\frac{1-\eta}{\eta^{2}}\right)\exp\left(\frac{3}{2}(B^{\prime}_{0}-1)(1-\eta)\right),$ where $\eta=\sqrt[3]{\rho_{0}/\rho}$, $\rho_{0}$ is the initial density, and $B_{0}$ and $B^{\prime}_{0}$ are the bulk modulus and its pressure derivative at ambient conditions. From the computer experiment perspective, we will work with 6 inputs (3 EoS parameters ($\rho_{0},B_{0},B^{\prime}_{0}$) and 3 experiment-specific parameters) and output velocity curves on a grid of 100 equidistant time points. As described in Brown et al. (2014), the main goal is to provide inferences on the 3 EoS parameters with uncertainty quantification and to propagate these inferences to the Vinet model. As was done in the simulation examples, the computer model output was aligned to the experimental data, and an emulator was fitted to the aligned computer model output and the corresponding shooting vectors. We then performed a modular elastic Bayesian model calibration utilizing the formulation above. There are a total of 9 experiments; the 3 EoS parameters were calibrated across the experiments, along with 4 experiment-specific parameters. Figure 14 presents the calibration results for the Ta experiments. The black curves shown in the figure correspond to the experimental velocity curves for the 9 experiments. The shaded colored regions are the 95% prediction intervals that result from the elastic functional Bayesian calibration. The prediction intervals exhibit good agreement with each experimental curve. The residuals, defined as the difference between the experimental data and the calibrated predicted mean, are shown in the right panel of Figure 14. Each residual is color-coded to the corresponding experiment in the left panel.
The full functional approach has tighter coverage of the experimental curves and smaller residual values compared to the resulting predictions and residuals that stem from the scaled ESS approach of Brown and Hund (2018). Figure 14: (Left) Experimental velocities of Ta shown in black compared with 95% prediction intervals from the elastic functional Bayesian calibration. (Right) Corresponding color-coded residuals (difference between experiment and calibration mean) for each experiment. Furthermore, Figure 15 presents a pairwise plot of the posterior distribution samples of the calibrated EoS parameters for Ta with the elastic approach. The posteriors are well concentrated within the parameter space, and the medians are comparable to those found in Brown and Hund (2018), with reduced uncertainty around the same parameter values. It should be remarked that the elastic FDA calibration approach does not involve any likelihood scaling. To assess the calibration further, we compared the posterior estimates of the EoS to those reported in the literature for the Vinet model. Figure 16 presents a plot of the pressure-versus-density Vinet curve, with a 95% credible interval generated from the elastic calibration shown in red. The blue curve corresponds to the loading path determined analytically by using the standard techniques in the dynamic materials community as described in Brown et al. (2014). The dashed curve is the state-of-the-art theoretical calculation given by Greeff et al. (2009), which has been used to simulate these types of Ta experiments. The posterior estimate of the curve from the elastic calibration lies just above the theoretical calculation and below the analytical result, which is consistent with what is expected when using the analytic approach (Kraus et al., 2016). The estimates of the physical parameters resulting from the calibration tend to agree well with those from previous work and provide uncertainty quantification. Figure 15: Pair plot of the posterior densities of the EoS parameters for tantalum. Figure 16: Calibrated Ta material response to the Vinet model compared with the analytic results in Brown et al. (2014) and the theoretical calculations in Greeff et al. (2009).
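Once the three EoS parameters are fixed, the Vinet response is simple to evaluate, which is how posterior draws are propagated to pressure-density curves such as those in Figure 16. A minimal sketch (the parameter values shown are illustrative placeholders, not the calibrated estimates):

```python
import numpy as np

def vinet_pressure(rho, rho0, B0, B0_prime):
    """Vinet et al. (1989): P = 3*B0*(1 - eta)/eta**2
    * exp(1.5*(B0' - 1)*(1 - eta)), with eta = (rho0/rho)**(1/3)."""
    eta = np.cbrt(rho0 / rho)
    return 3.0 * B0 * (1.0 - eta) / eta**2 * \
        np.exp(1.5 * (B0_prime - 1.0) * (1.0 - eta))

# Illustrative compression curve (placeholder values; B0 in GPa).
rho = np.linspace(17.0, 30.0, 200)                          # g/cc
P = vinet_pressure(rho, rho0=16.7, B0=195.0, B0_prime=3.6)
```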
### 5.2 Flyer Plate Impact Material strength characterizes how a material temporarily or permanently deforms as it experiences pressure. This is of interest in various areas of science and engineering, with applications in the aerospace, medical, and automotive industries (Gray et al., 2005). When experimentation is difficult, material strength models are an important tool for predicting how a material will react to pressure. One such model is the Johnson-Cook strength model (Johnson and Cook, 1983), whose behavior is dictated by a relatively small collection of material-specific physical constants. To set these constants, scientists and engineers rely on experimentation, hence calibrating the material strength model parameters to experimental measurements. In this application, we consider the calibration of the Johnson-Cook material strength model for an aluminum alloy using a set of plate impact experiments. Walters et al. (2018) performed a Bayesian model calibration using plate impact experiments by reducing the velocimetry curves to a small set of features considered to be important to strength. We show how elastic Bayesian model calibration relies on the entire curve without requiring human-intensive feature engineering. Plate impact experiments achieve high pressure on a material sample by shooting a flyer at high velocity into the sample (plate). Lasers measure how the free surface of the sample moves as the shock wave moves through it. The result is a trace of the velocity of the free surface of the sample over time, called a velocimetry curve. The first column of Figure 17 shows measured velocimetry curves along with 1000 simulated velocimetry curves, using different settings of the Johnson-Cook model, for three experiments. To obtain these simulations, the Johnson-Cook model is used within a larger hydrodynamics code that is expensive to evaluate. Additionally, Figure 17 shows the aligned functions and warping functions along with the shooting vectors corresponding to the measured and simulated curves. The misalignment of the experimental curves relative to the simulated ones is clear. Figure 17: Measured and simulated velocimetry curves; aligned functions, warping functions, and shooting vectors. Figure 18: Basis functions used to capture a time shift discrepancy in shooting vector space for the flyer plate example. Figure 19: (a) The original measured velocimetry curves with 1000 computer model runs and calibrated posterior predictive samples in the original data space; (b) calibrated posterior predictive samples for the aligned curves; (c) calibrated posterior predictive samples for the warping functions; (d) calibrated posterior predictive samples for the shooting vectors. Figure 20: (a) Posterior discrepancy in shooting vector space; (b) posterior discrepancy in warping function space. From the computer experiment perspective, we will work with 11 inputs: 5 Johnson-Cook parameters plus 6 experiment-specific parameters (two for each experiment) and the output velocimetry curves on a grid of 200 time points. The 1,000 velocimetry simulations are aligned to the experimental data and used to build an emulator and perform the elastic calibration. To allow for a time shift discrepancy, especially relevant to experiments 2 and 3, we use three piecewise constant and one piecewise linear basis functions in shooting vector space, as shown in Figure 18. More specifically, we parameterize $\bm{\delta}_{v}(\bm{x}_{i})=\bm{D}\bm{d}(\bm{x}_{i})$, where $\bm{D}$ is the $200\times 4$ matrix of basis functions and $\bm{d}(\bm{x}_{i})$ is a vector of $4$ basis coefficients that can vary by experiment. In the approach of Higdon et al. (2008), $d_{k}(\cdot)$ would be given a Gaussian process prior so that discrepancy could be predicted for a new experiment (with experimental settings $\bm{x}^{*}$). In this case, we are not interested in predicting the discrepancy at new experimental settings, so we specify a standard normal prior for each $d_{k}(\bm{x}_{i})$. This indicates that we want to allow for a time shift, but would prefer to have none ($d_{k}(\bm{x}_{i})=0$) if possible. The prior variance of $d_{k}(\bm{x}_{i})$ can be modified in order to favor more or less discrepancy. The form of the basis functions in Figure 18 is carefully chosen to be constant in domains where the aligned data are roughly constant and a linear function in the region where the aligned data are most active.
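A basis matrix of the kind sketched in Figure 18 takes only a few lines to assemble. The breakpoints below are hypothetical; in the application they are chosen from where the aligned data are flat versus most active:

```python
import numpy as np

n_t = 200                                   # grid of the velocimetry output
t = np.linspace(0.0, 1.0, n_t)
b1, b2, b3 = 0.25, 0.5, 0.75                # hypothetical breakpoints

D = np.zeros((n_t, 4))                      # the 200 x 4 basis matrix D
D[t < b1, 0] = 1.0                          # piecewise constant
D[(t >= b1) & (t < b2), 1] = 1.0            # piecewise constant
D[t >= b3, 2] = 1.0                         # piecewise constant
ramp = (t >= b2) & (t < b3)
D[ramp, 3] = (t[ramp] - b2) / (b3 - b2)     # piecewise linear

# delta_v(x_i) = D d(x_i); each coefficient gets a standard normal prior.
d_i = np.random.default_rng(1).standard_normal(4)
delta_v = D @ d_i                           # shooting-vector-space discrepancy
```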
Figure 19(a) presents the experimental data, the computer model simulations shown in grey, and the posterior predictive samples of the velocimetry curve after calibration shown in blue. In a similar way, Figures 19(b), 19(c), and 19(d) show the posterior predictive samples of the aligned curves, warping functions, and shooting vectors, respectively. In all of these cases, the predictive samples have good coverage of the experimental data. Furthermore, Figure 21 presents a pairs plot of the posterior distribution of the 11 calibrated parameters, with the diagonal panels representing marginal distributions, the lower triangle showing bivariate contours, and the upper triangle showing pairwise sample plots. Compared to Walters et al. (2018), we are able to use the entirety of the functional data while they used a few hand-selected features. Figure 20(a) shows the posterior distribution of shooting vector discrepancies, $\bm{\delta}_{v}(\bm{x}_{1})$, $\bm{\delta}_{v}(\bm{x}_{2})$, and $\bm{\delta}_{v}(\bm{x}_{3})$. Figure 20(b) shows the difference between two sets of warping functions: those that include the discrepancy in Figure 20(a) and those that do not, indicating the type of shift that these discrepancies induce. Figure 21: Pairwise plot and marginal densities for the posterior distribution of the 11 calibrated parameters for the flyer plate experiment. ## 6 Conclusion Model calibration for functional responses is typically more difficult than in more traditional settings. Adjusting input parameters so model output matches experimental output despite discrepancy is seldom trivial, but can be even more challenging for functional data when altering input parameters results in amplitude and phase variability in the responses. Traditional methods ignore these aspects, which are unique to models with functional responses, putting them at risk of higher bias and lower efficiency. In this paper we develop methods to handle amplitude and phase variability in a systematic way, resulting in more efficient and more accurate estimates of the model parameters. The improvements in these estimators are achieved through better handling of the functional responses, as opposed to collecting more data. The elastic Bayesian model calibration procedure presented here uses information from both the amplitude and phase space to calibrate the parameters, in contrast with more common calibration methods, which only look at the amplitude space. The benefits of this approach are demonstrated in simulation studies where amplitude and phase variability are present. The simulation studies show the elastic Bayesian model calibration approach results in superior estimation of the model parameters and the predicted functional responses. Information about the amount of warping necessary to align the functions provides an additional indirect benefit, offering valuable insight into the model’s input parameters. Standard methods, which do not handle the phase and amplitude variability separately, may result in predicted functional responses that do not fit the data well and suggest much larger uncertainty in the calibration parameters. These results make a strong case for the elastic modeling approach, as most functional data have some misalignment and phase discrepancy. Applying the method to the Z-machine data and flyer-plate data generated similar results to previous studies without the need for likelihood scaling as in Brown and Hund (2018) or feature selection as in Walters et al. (2018). The elastic Bayesian calibration model yields credible intervals in good agreement with the experimental data using a principled approach.
From a theoretical perspective, the elastic Bayesian calibration approach is satisfying in that it treats the model’s output as functional throughout the calibration procedure. Future work may investigate some of the additional modeling assumptions made in the elastic calibration approach. For example, the modeling assumption that $\tilde{y}$ and $v$ are independent needs further assessment. Additionally, uncertainty introduced from the warping procedure is not considered in the analysis presented here. A more complete treatment of this uncertainty may be integrated throughout the calibration procedure. However, the additional uncertainty from warping is small in the noiseless cases presented in this paper. ## Acknowledgment This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories; a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525. ## References * Bayarri et al. [2007] M. J. Bayarri, J. O. Berger, R. Paulo, J. Sacks, J. A. Cafeo, J. Cavendish, C. Lin, and J. Tu. A framework for validation of computer models. _Technometrics_ , 49(2), 2007. * Boteler and Dandekar [2006] J. M. Boteler and D. P. Dandekar. Dynamic response of two strain-hardened aluminum alloys. _Journal of applied physics_ , 100(5):054902, 2006\. * Brown and Hund [2018] J. Brown and L. Hund. Estimating material properties under extreme conditions by using bayesian model calibration with functional outputs. _Journal of the Royal Statistical Society, Series C_ , 67(4):1023–1045, 2018. * Brown et al. [2014] J. L. Brown, C. S. Alexander, J. R. Asay, T. J. Vogler, D. H. Dolan, and J. L. Belof. Flow strength of tantalum under ramp compression to 250 gpa. _Journal of Applied Physics_ , 115(4):043530, 2014\. doi: 10.1063/1.4863463. * Brynjarsdóttir and O’Hagan [2014] J. Brynjarsdóttir and A. O’Hagan. Learning about physical parameters: The importance of model discrepancy. _Inverse Problems_ , 30(11):114007, 2014. * Cheng et al. [2016] W. Cheng, I. L. Dryden, X. Huang, et al. Bayesian registration of functions and curves. _Bayesian Analysis_ , 11(2):447–475, 2016. * Francom et al. [2018] D. Francom, B. Sanso, A. Kupresanin, and G. Johannesson. Sensitivity analysis and emulation for functional data using bayesian adaptive splines. _Statistica Sinica_ , 28(2):791–816, 2018. * Francom et al. [2019] D. Francom, B. Sanso, V. Bulaevskaya, D. Lucas, and M. Simpson. Inferring atmospheric release characteristics in a large computer experiment using bayesian adaptive splines. _Journal of the American Statistical Association_ , 114(528):1450–1465, 2019. * Francom et al. [2022] D. Francom, B. Sanso, and A. Kupresanin. Landmark-warped emulators for models with misaligned functional response. _SIAM/ASA Journal on Uncertainty Quantification_ , 10(1):125–150, 2022. * Francom et al. [2023] D. Francom, P. Trubey, et al. LANL/impala, 2023. URL https://github.com/lanl/impala. * Gattiker et al. [2023] J. Gattiker, N. Klein, E. Lawrence, and G. Hutchings. LANL/SEPIA, 2023. URL https://doi.org/10.5281/zenodo.4048801. * Gray et al. [2005] G. T. 
R. Gray, P. J. Maudlin, L. M. Hull, Q. K. Zuo, and S. Chen. Predicting material strength, damage, and fracture the synergy between experiment and modeling. _Journal of Failure Analysis and Prevention_ , 5(3):7–17, 2005. * Greeff et al. [2009] C. W. Greeff, S. P. Rudin, S. D. Crockett, and J. M. Wills. The cold equation of state of tantalum. _Melville: American Institute of Physics_ , pages 681–684, 2009. * Gu and Berger [2016] M. Gu and J. O. Berger. Parallel partial gaussian process emulation for computer models with massive output. _The Annals of Applied Statistics_ , 10(3):1317–1347, 2016. * Higdon et al. [2004] D. Higdon, M. Kennedy, J. C. Cavendish, J. A. Cafeo, and R. D. Ryne. Combining field data and computer simulations for calibration and prediction. _SIAM Journal on Scientific Computing_ , 26(2):448–466, 2004. * Higdon et al. [2008] D. Higdon, J. Gattiker, B. Williams, and M. Rightley. Computer model calibration using high-dimensional output. _Journal of the American Statistical Association_ , 103(482), 2008. * Horvath and Kokoszka [2012] L. Horvath and P. Kokoszka. _Inference for Functional Data with Applications_. Springer, 2012. * Hutchings et al. [2023] G. Hutchings, B. Sansó, J. Gattiker, D. Francom, and D. Pasqualini. Comparing emulation methods for a high-resolution storm surge model. _Environmetrics_ , 34(3):e2796, 2023. * Johnson and Cook [1983] G. R. Johnson and W. H. Cook. A constitutive model and data for metals subjected to large strains, high strain rates, and high temperatures. _Proceedings 7th International Symposium on Ballistics_ , pages 541–547, 1983. * Kennedy and O’Hagan [2001] M. C. Kennedy and A. O’Hagan. Bayesian calibration of computer models. _Journal of the Royal Statistical Society. Series B, Statistical Methodology_ , pages 425–464, 2001. * Kleiber et al. [2014] W. Kleiber, S. R. Sain, and M. J. Wiltberger. Model calibration via deformation. _SIAM/ASA Journal on Uncertainty Quantification_ , 2(1):545–563, 2014. * Kraus et al. [2016] R. G. Kraus, J. P. Davis, C. T. Seagle, D. E. Fratanduono, D. C. Swift, J. L. Brown, and J. H. Eggert. Dynamic compression of copper to over 450 gpa: a high-pressure standard. _Phys. Rev. B_ , 93(13):134105, 2016. * Lemke et al. [2005] R. W. Lemke, M. D. Knudson, D. E. Bliss, K. Cochrane, J. Davis, A. A. Giunta, H. C. Harjes, and S. A. Slutz. Magnetically accelerated, ultrahigh velocity flyer plates for shock wave experiments. _Journal of Applied Physics_ , 98(7):073530, 2005\. doi: 10.1063/1.2084316. * Liu et al. [2009] F. Liu, M. J. Bayarri, J. O. Berger, et al. Modularization in Bayesian analysis, with emphasis on analysis of computer models. _Bayesian Analysis_ , 4(1):119–150, 2009. * Lu et al. [2017] Y. Lu, R. Herbei, and S. Kurtek. Bayesian registration of functions with a gaussian process prior. _Journal of Computational and Graphical Statistics_ , 26(4):894–904, 2017. * Marron et al. [2015] J. S. Marron, J.O. Ramsay, L.M. Sangalli, and A. Srivastava. Functional data analysis of amplitude and phase variation. _Statistical Science_ , 30(4):468–484, 2015. * Plummer [2015] M. Plummer. Cuts in bayesian graphical models. _Statistics and Computing_ , 25(1):37–43, 2015\. * Ramsay and Li [1998] J. O. Ramsay and X. Li. Curve registration. _Journal of the Royal Statistical Society, Ser. B_ , 60(2):351–363, 1998. * Ramsay and Silverman [2005] J. O. Ramsay and B. W. Silverman. _Functional Data Analysis_. Springer, 2005. * Sacks et al. [1989] J. Sacks, W. J. Welch, T. J. Mitchell, and H. P. Wynn. 
Design and analysis of computer experiments. _Statistical Science_ , pages 409–423, 1989. * Savage et al. [2007] M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, et al. An overview of pulse compression and power flow in the upgraded z pulsed power driver. In _2007 16th IEEE International Pulsed Power Conference_ , volume 2, pages 979–984. IEEE, 2007. doi: 10.1103/PhysRevSTAB.13.010402. * Soderlind and Moriarty [1998] P. Soderlind and J. A. Moriarty. First-principles theory of ta up to 10 mbar pressure: structural and mechanical properties. _Phys. Rev. B,_ , 57:10340–10350, 1998. * Srivastava and Klassen [2016] A. Srivastava and E. Klassen. _Functional and Shape Data Analysis_. Springer, 2016. * Srivastava et al. [2011] A. Srivastava, W. Wu, S. Kurtek, E. Klassen, and J. S. Marron. Registration of functional data using Fisher-Rao metric. _arXiv:1103.3817v2 [math.ST]_ , 2011. URL http://arxiv.org/abs/1103.3817v2. * Tucker [2023] J. D. Tucker. fdasrsf, 2023. URL https://github.com/jdtuck/fdasrsf_python. * Tucker et al. [2013] J. D. Tucker, W. Wu, and A. Srivastava. Generative models for functional data using phase and amplitude separation. _Computational Statistics and Data Analysis_ , 61:50–66, 2013. * Vinet et al. [1989] P. Vinet, J. Rose, J. Ferrante, and J. Smith. Universal features of the equation of state of solids. _J. Phys. Condnsd Matt._ , 1:1941–1963, 1989. * Walters et al. [2018] D. J. Walters, A. Biswas, E. C. Lawrence, D. C. Francom, D. J. Luscher, D. A. Fredenburg, K. R. Moran, C. M. Sweeney, R. L. Sandberg, J. P. Ahrens, et al. Bayesian calibration of strength parameters using hydrocode simulations of symmetric impact shock experiments of al-5083. _Journal of Applied Physics_ , 124(20):205105, 2018. * Williams et al. [2006] B. Williams, D. Higdon, J. Gattiker, L. Moore, M. McKay, S. Keller-McNulty, et al. Combining experimental data and computer simulations, with an application to flyer plate experiments. _Bayesian Analysis_ , 1(4):765–792, 2006. * Wu and Srivastava [2011] W. Wu and A. Srivastava. An information-geometric framework for statistical inferences in the neural spike train space. _Journal of Computational Neuroscience_ , 31:725–748, 2011\.
# Practically Enhanced Hyperentanglement Concentration for Polarization-Spatial Hyperentangled Bell States with Linear Optics and Common Single-Photon Detectors Gui-Long Jiang,1 Wen-Qiang Liu,2 and Hai-Rui Wei1<EMAIL_ADDRESS>1 School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083, China 2 Center for Quantum Technology Research and Key Laboratory of Advanced Optoelectronic Quantum Architecture and Measurements (MOE), School of Physics, Beijing Institute of Technology, Beijing 100081, China (Received 22 September 2022; revised 10 December 2022; accepted 13 February 2023; published 14 March 2023) ###### Abstract Hyperentanglement, defined as the simultaneous entanglement in several independent degrees of freedom (DOFs) of a quantum system, is a fascinating resource in quantum information processing owing to its outstanding merits. Here we propose heralded hyperentanglement concentration protocols (hyper-ECPs) to concentrate partially hyperentangled polarization-spatial Bell states with unknown parameters, using available linear optics and common single-photon detectors. By introducing time-delay DOFs, the schemes are highly efficient in that their success can be accurately heralded by the detection signatures, and the postselection techniques or photon-number-resolving detectors necessary in previous experiments are not required. Additionally, our linear optical architectures allow certain states for which concentration fails to be recycled, and a trick makes the success probabilities of our schemes higher than those of previous linear optical hyper-ECPs. ###### pacs: 03.67.Hk, 03.65.Ud, 03.67.Mn, 03.67.Pp ## I Introduction Entanglement, as an extremely characteristic quantum mechanical phenomenon, plays a significant part in quantum information processing and has been widely applied to quantum communication and quantum computation 1 ; 2 . Entangled photons, which can carry quantum information through various qubitlike degrees of freedom (DOFs), have been recognized as optimal candidates for long-distance quantum communication tasks, such as quantum key distribution 3 ; 4 ; 5 , quantum teleportation 6 ; 7 , quantum dense coding 8 ; 9 , quantum secret sharing 10 ; 11 ; 12 , and quantum secure direct communication 13 ; 14 ; 15 . Encoding qubits in the polarization DOF of photons has been particularly appealing, as arbitrary single-qubit manipulation can be easily achieved with two quarter-wave plates and one half-wave plate in an extremely fast and accurate manner single-qubit . Recently, hyperentanglement, defined as the simultaneous entanglement in multiple independent DOFs of a quantum system, has captured much attention as it can considerably increase the capacity of quantum communication and speed up quantum computation 17 ; 18 ; momentum1 ; momentum2 ; OAM1 ; OAM2 ; OAM3 ; HPQC1 ; HPQC2 ; HPQC3 ; HBSA1 . Diverse kinds of hyperentanglement have been experimentally demonstrated in detail, e.g., polarization-spatial 17 , polarization-spatial-time 18 , polarization-momentum momentum1 ; momentum2 , and polarization-orbital-angular-momentum OAM1 ; OAM2 DOFs.
Hyperentangled states can break the channel capacity limit of superdense coding OAM3 and provide substantial applications in hyperparallel optical quantum computing HPQC1 ; HPQC2 ; HPQC3 and hyperparallel communication, such as hyperentanglement swapping HBSA1 ; HBSA2 , hyperentangled Bell state analysis 17 ; HBSA1 ; HBSA2 ; HBSA3 ; HBSA4 ; HBSA-wei , and hyperentanglement purification HEPP1 ; HEPP2 ; HEPP3 . Moreover, hyperentangled states can be used to accomplish some specific tasks that are impossible to achieve in single-DOF systems, such as complete and deterministic Bell state analysis with linear optics momentum2 ; BSA1 ; BSA2 ; BSA3 . However, entangled photons will inevitably be affected by environmental noise during the transmission and storage processes in long-distance quantum communication, which degrades the fidelity and security of the protocols with high probability. One effective method to counter the degradation of entanglement in photon systems is the entanglement concentration protocol (ECP), which distills maximally entangled states from less-entangled pure states. Another approach is the entanglement purification protocol (EPP), which is applied to extract maximally entangled states with high fidelity from a number of mixed entangled states EPP1 ; EPP2 ; EPP3 ; EPP4 ; EPP5 . In 1996, Bennett _et al._ ECP1 proposed an ECP by using the Schmidt projection method, and various interesting ECPs and improvements were later proposed and experimentally demonstrated ECP2 ; ECP2-experiment1 ; ECP3 ; ECP4 ; ECP5 ; ECP6 ; ECP7 . The current ECPs are mostly focused on single-DOF systems. In 2013, Ren _et al._ hyper-ECP1 proposed an interesting polarization-spatial hyper-ECP for known hyperentangled Bell states with linear optics by using the parameter-splitting structure. In 2015, Li and Ghose hyper-ECP2 presented two hyper-ECPs for hyperentangled states in the polarization and time-bin DOFs with known and unknown parameters by using linear elements and similar methods. Later, Ren and Wang _et al._ hyper-ECP3 designed polarization-spatial-time-bin hyper-ECPs for hyperentangled Bell states and hyperentangled Greenberger-Horne-Zeilinger (GHZ) states hyper-ECP4 with linear optical elements. In 2019, Li and Shen hyper-ECP02 reported asymmetrical hyper-ECPs for unknown and known photon systems entangled in polarization and orbital angular momentum DOFs. For less-entangled states with known parameters, parameter splitting is the current optimal method for entanglement concentration with linear optics hyper-ECP1 . For less-entangled states with unknown parameters, polarization beam splitters (PBSs) are usually employed to complete a parity-check measurement on the polarization photon pair hyper-ECP2 ; hyper-ECP3 ; hyper-ECP4 ; hyper-ECP02 , and postselection or sophisticated photon-number-resolving detectors are necessary to accomplish the scheme exactly with linear optics. In addition, some deterministic hyper-ECPs were proposed assisted by nonlinear platforms, such as cross-Kerr media hyper-ECP5 , quantum dots hyper-ECP6 , or diamond nitrogen-vacancy centers hyper-ECP7 ; hyper-ECP8 , but these methods are challenged by experimental impracticality with current technology. In this paper, we present two practical hyper-ECPs for partially hyperentangled states with unknown parameters.
A maximally hyperentangled polarization-spatial Bell (GHZ) state is exactly extracted from a partially hyperentangled Bell (GHZ) state by using linear optics and common single-photon detectors. Using the method of introducing the temporal DOF in Ref. BSA3 , we orchestrate unbalanced interferometers that allow postselection or sophisticated photon-number-resolving detectors to be avoided and the success of the schemes to be completely heralded by the detection signatures. As the heralded concentration setup is composed only of linear optical elements and requires fewer components than previous schemes, our schemes are quite applicable and practical with current technology. Moreover, the success probability of our hyper-ECPs can be higher than that of the existing hyper-ECPs with linear optics. ## II Hyper-ECP for photon systems in unknown hyperentangled pure states In this section, we construct heralded hyper-ECPs for partially hyperentangled Bell states and GHZ states in both polarization and spatial DOFs with unknown parameters by using linear optics. The basic principles of our hyper-ECPs for the hyperentangled Bell and GHZ states are shown in Fig. 1 and Fig. 3, respectively. ### II.1 Hyper-ECP for partially hyperentangled Bell states with unknown parameters Assume that two initial two-photon states $|\phi\rangle_{AB}$ and $|\phi\rangle_{A^{\prime}B^{\prime}}$, maximally entangled in the polarization and spatial DOFs simultaneously, are generated from $S_{1}$ and $S_{2}$, respectively. Here $|\phi\rangle_{AB}$ and $|\phi\rangle_{A^{\prime}B^{\prime}}$ are given by the following forms: $\displaystyle\begin{split}&|\phi\rangle_{AB}=\frac{1}{2}(|HH\rangle+|VV\rangle)\otimes(|a_{1}b_{1}\rangle+|a_{2}b_{2}\rangle),\\\ &|\phi\rangle_{A^{\prime}B^{\prime}}=\frac{1}{2}(|HH\rangle+|VV\rangle)\otimes(|a_{1^{\prime}}b_{1^{\prime}}\rangle+|a_{2^{\prime}}b_{2^{\prime}}\rangle),\end{split}$ (1) where $H$ ($V$) denotes the horizontally (vertically) polarized photon. The $|a_{1}\rangle$ and $|a_{2}\rangle$ ($|a_{1^{\prime}}\rangle$ and $|a_{2^{\prime}}\rangle$) are the two spatial modes of photon $A$ ($A^{\prime}$), and $|b_{1}\rangle$ and $|b_{2}\rangle$ ($|b_{1^{\prime}}\rangle$ and $|b_{2^{\prime}}\rangle$) are those of photon $B$ ($B^{\prime}$). Then the initial state of the four-photon system consisting of photons $A$, $B$, $A^{\prime}$, and $B^{\prime}$ is given by $\displaystyle\begin{split}|\Phi_{0}\rangle=&|\phi\rangle_{AB}\otimes|\phi\rangle_{A^{\prime}B^{\prime}}\\\ =&\frac{1}{4}(|HH\rangle+|VV\rangle)\otimes(|HH\rangle+|VV\rangle)\\\ &\otimes(|a_{1}b_{1}\rangle+|a_{2}b_{2}\rangle)\otimes(|a_{1^{\prime}}b_{1^{\prime}}\rangle+|a_{2^{\prime}}b_{2^{\prime}}\rangle).\end{split}$ (2) As shown in Fig. 1, photon $B$ in the spatial mode $|b_{1}\rangle$ ($|b_{2}\rangle$) and photon $A^{\prime}$ in the spatial mode $|a_{1^{\prime}}\rangle$ ($|a_{2^{\prime}}\rangle$) first pass through a 50:50 beam splitter BS1 (BS2), where the matrix of the balanced BS1 or BS2 is given by $\displaystyle\text{BS}=\frac{1}{\sqrt{2}}\left(\begin{array}[]{cc}1&1\\\ 1&-1\\\ \end{array}\right)$ (5) in the spatial-mode basis $\{|b_{i}\rangle,|a_{i^{\prime}}\rangle\}$ ($i=1,2$).
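As a quick check of the convention in Eq. (5), the short sketch below applies the 50:50 BS unitary to the two spatial basis modes and reproduces the mode transformations used in $|\Phi_{1}\rangle$ in the next step (a minimal numerical illustration; the mode ordering is our own choice):

```python
import numpy as np

# 50:50 beam-splitter matrix of Eq. (5) in the basis {|b_i>, |a_i'>},
# represented here as e_b = (1, 0) and e_a = (0, 1).
BS = np.array([[1.0, 1.0],
               [1.0, -1.0]]) / np.sqrt(2.0)

print(np.allclose(BS @ BS.T, np.eye(2)))  # True: BS is unitary (orthogonal)
print(BS @ np.array([1.0, 0.0]))          # |b_i>  -> (|b_i> + |a_i'>)/sqrt(2)
print(BS @ np.array([0.0, 1.0]))          # |a_i'> -> (|b_i> - |a_i'>)/sqrt(2)
```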
After passing through BS1 and BS2, $|\Phi_{0}\rangle$ is transformed into $\displaystyle\begin{split}|\Phi_{1}\rangle=&\frac{1}{8}(|HH\rangle+|VV\rangle)\otimes(|HH\rangle+|VV\rangle)\\\ &\otimes[|a_{1}\rangle(|b_{1}\rangle+|a_{1^{\prime}}\rangle)+|a_{2}\rangle(|b_{2}\rangle+|a_{2^{\prime}}\rangle)]\\\ &\otimes[(|b_{1}\rangle-|a_{1^{\prime}}\rangle)|b_{1^{\prime}}\rangle+(|b_{2}\rangle-|a_{2^{\prime}}\rangle)|b_{2^{\prime}}\rangle].\end{split}$ (6) Then, photons $A$ and $A^{\prime}$ ($B$ and $B^{\prime}$) are sent to Alice (Bob), and owing to the influence of noise, $|\Phi_{1}\rangle$ may decay to a partially entangled state $\displaystyle\begin{split}|\Phi_{2}\rangle=&\frac{1}{2}(\alpha|HH\rangle+\beta|VV\rangle)\otimes(\alpha|HH\rangle+\beta|VV\rangle)\\\ &\otimes[\gamma|a_{1}\rangle(|b_{1}\rangle+|a_{1^{\prime}}\rangle)+\delta|a_{2}\rangle(|b_{2}\rangle+|a_{2^{\prime}}\rangle)]\\\ &\otimes[\gamma(|b_{1}\rangle-|a_{1^{\prime}}\rangle)|b_{1^{\prime}}\rangle+\delta(|b_{2}\rangle-|a_{2^{\prime}}\rangle)|b_{2^{\prime}}\rangle].\end{split}$ (7) Here the coefficients satisfy the normalization conditions $|\alpha|^{2}+|\beta|^{2}=1$ and $|\gamma|^{2}+|\delta|^{2}=1$, and they are unknown to the communicating parties. The purpose of the two distant parties is to distill the maximally hyperentangled Bell state $\displaystyle\begin{split}|\phi^{++}\rangle_{AB}=\frac{1}{2}(|HH\rangle+|VV\rangle)\otimes(|a_{1}b_{1}\rangle+|a_{2}b_{2}\rangle)\end{split}$ (8) from $|\Phi_{2}\rangle$. For clarity of exposition of the working principles of our ECP, based on the Hong-Ou-Mandel effect, we rewrite $|\Phi_{2}\rangle$ in the following normalized form: $\displaystyle\begin{split}|\Phi_{2}\rangle=&\frac{1}{\sqrt{2}}(\alpha^{2}|HHHH\rangle+\beta^{2}|VVVV\rangle)\\\ &\otimes[\gamma^{2}|a_{1}b_{1^{\prime}}\rangle(|b_{1}b_{1}\rangle-|a_{1^{\prime}}a_{1^{\prime}}\rangle)\\\ &+\delta^{2}|a_{2}b_{2^{\prime}}\rangle(|b_{2}b_{2}\rangle-|a_{2^{\prime}}a_{2^{\prime}}\rangle)]\\\ &+\frac{\alpha\beta}{2}(|HVHV\rangle+|VHVH\rangle)\\\ &\otimes[\gamma^{2}|a_{1}b_{1^{\prime}}\rangle(|b_{1}b_{1}\rangle-|a_{1^{\prime}}a_{1^{\prime}}\rangle-|b_{1}a_{1^{\prime}}\rangle\\\ &+|a_{1^{\prime}}b_{1}\rangle)+\delta^{2}|a_{2}b_{2^{\prime}}\rangle(|b_{2}b_{2}\rangle-|a_{2^{\prime}}a_{2^{\prime}}\rangle\\\ &-|b_{2}a_{2^{\prime}}\rangle+|a_{2^{\prime}}b_{2}\rangle)]\\\ &+\frac{\gamma\delta}{2}(\alpha^{2}|HHHH\rangle+\beta^{2}|VVVV\rangle\\\ &+\alpha\beta|HVHV\rangle+\beta\alpha|VHVH\rangle)\\\ &\otimes[|a_{1}b_{2^{\prime}}\rangle(|b_{1}b_{2}\rangle-|a_{1^{\prime}}a_{2^{\prime}}\rangle-|b_{1}a_{2^{\prime}}\rangle+|a_{1^{\prime}}b_{2}\rangle)\\\ &+|a_{2}b_{1^{\prime}}\rangle(|b_{2}b_{1}\rangle-|a_{2^{\prime}}a_{1^{\prime}}\rangle-|b_{2}a_{1^{\prime}}\rangle+|a_{2^{\prime}}b_{1}\rangle)].\end{split}$ (9) Figure 1: (Color online) Schematic diagram of the hyper-ECP for a polarization-spatial-based hyperentangled Bell state with unknown parameters. $S_{1}$ and $S_{2}$ are partial hyperentanglement sources for $|\phi\rangle_{AB}$ and $|\phi\rangle_{A^{\prime}B^{\prime}}$, respectively. $\textrm{PBS}_{i,i^{\prime}}(i=1,2,\cdots,6)$ represents a polarization beam splitter, which transmits the horizontal polarization state $|H\rangle$ and reflects the vertical polarization state $|V\rangle$. $\textrm{HWP}^{45^{\circ}}$ represents a half-wave plate oriented at $45^{\circ}$, which is used to exchange the vertical and horizontal polarizations, i.e., $|H\rangle\rightarrow|V\rangle$, $|V\rangle\rightarrow|H\rangle$.
$\textrm{HWP}^{22.5^{\circ}}_{i}(i=1,2)$ is used to perform a Hadamard operation on polarization, i.e., $|H\rangle\rightarrow\frac{1}{\sqrt{2}}(|H\rangle+|V\rangle)$, $|V\rangle\rightarrow\frac{1}{\sqrt{2}}(|H\rangle-|V\rangle)$. $\textrm{BS}_{i,i^{\prime}}(i=1,2,\cdots,5)$ denotes the 50:50 beam splitter. $D_{i}$ $(i=1,2,\cdots,8)$ represents the common single-photon detector. $Z^{S}$ and $Z^{P}$ are the classical feed-forward operations, which can be achieved by setting a PS and an $\textrm{HWP}^{0^{\circ}}$, respectively. As shown in Fig. 1, Bob firstly flips the polarization and spatial states of photon $B^{\prime}$, which causes $|\Phi_{2}\rangle$ to change into $\displaystyle\begin{split}|\Phi_{3}\rangle=&\frac{1}{\sqrt{2}}(\alpha^{2}|HVHH\rangle+\beta^{2}|VHVV\rangle)\\\ &\otimes[\gamma^{2}|a_{1}b_{2^{\prime}}\rangle(|b_{1}b_{1}\rangle-|a_{1^{\prime}}a_{1^{\prime}}\rangle)\\\ &+\delta^{2}|a_{2}b_{1^{\prime}}\rangle(|b_{2}b_{2}\rangle-|a_{2^{\prime}}a_{2^{\prime}}\rangle)]\\\ &+\frac{\alpha\beta}{2}(|HHHV\rangle+|VVVH\rangle)\\\ &\otimes[\gamma^{2}|a_{1}b_{2^{\prime}}\rangle(|b_{1}b_{1}\rangle-|a_{1^{\prime}}a_{1^{\prime}}\rangle-|b_{1}a_{1^{\prime}}\rangle\\\ &+|a_{1^{\prime}}b_{1}\rangle)+\delta^{2}|a_{2}b_{1^{\prime}}\rangle(|b_{2}b_{2}\rangle-|a_{2^{\prime}}a_{2^{\prime}}\rangle\\\ &-|b_{2}a_{2^{\prime}}\rangle+|a_{2^{\prime}}b_{2}\rangle)]\\\ &+\frac{\gamma\delta}{2}(\alpha^{2}|HVHH\rangle+\beta^{2}|VHVV\rangle\\\ &+\alpha\beta|HHHV\rangle+\beta\alpha|VVVH\rangle)\\\ &\otimes[|a_{1}b_{1^{\prime}}\rangle(|b_{1}b_{2}\rangle-|a_{1^{\prime}}a_{2^{\prime}}\rangle-|b_{1}a_{2^{\prime}}\rangle+|a_{1^{\prime}}b_{2}\rangle)\\\ &+|a_{2}b_{2^{\prime}}\rangle(|b_{2}b_{1}\rangle-|a_{2^{\prime}}a_{1^{\prime}}\rangle-|b_{2}a_{1^{\prime}}\rangle+|a_{2^{\prime}}b_{1}\rangle)],\end{split}$ (10) where the polarization-based bit-flip operation is realized by two half-wave plates oriented at $45^{\circ}$ (HWP${}^{45^{\circ}}$s), and the spatial-based bit-flip operation is realized by the block composed of BS3, BS4, and one phase shifter (PS). The PS and the HWP${}^{45^{\circ}}$ matrices are given by $\displaystyle\text{PS}=\left(\begin{array}[]{cc}-1&0\\\ 0&-1\\\ \end{array}\right),\;\;\text{HWP}^{45^{\circ}}=\left(\begin{array}[]{cc}0&1\\\ 1&0\\\ \end{array}\right),$ (15) in the $\{|H\rangle,|V\rangle\}$ basis, respectively. Secondly, as shown in Fig. 1, by using four unbalanced interferometers, each consisting of two PBSs, Alice introduces time delays $t$ to $|H\rangle|a_{1^{\prime}}\rangle$, $2t$ to $|V\rangle|a_{1^{\prime}}\rangle$, $3t$ to $|V\rangle|a_{2^{\prime}}\rangle$, and $4t$ to $|H\rangle|a_{2^{\prime}}\rangle$, respectively, while Bob introduces time delays $t$ to $|V\rangle|b_{2}\rangle$, $2t$ to $|H\rangle|b_{2}\rangle$, $3t$ to $|H\rangle|b_{1}\rangle$, and $4t$ to $|V\rangle|b_{1}\rangle$. The time delays can be achieved by lengthening the optical paths, and we assume that the phase $\omega t=2n\pi$, where $n$ is a nonzero integer and the time for photons to pass through the entire device is much less than the two-photon coherence time.
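The wave-plate matrices in Eq. (15) above and Eq. (23) below follow from standard Jones calculus; a minimal sketch (global phases omitted) verifying that an HWP at $45^{\circ}$ acts as a polarization bit flip and an HWP at $22.5^{\circ}$ acts as a polarization Hadamard:

```python
import numpy as np

def hwp(theta):
    """Jones matrix of a half-wave plate with its fast axis at angle
    theta, in the {|H>, |V>} basis (global phase omitted)."""
    c, s = np.cos(2.0 * theta), np.sin(2.0 * theta)
    return np.array([[c, s],
                     [s, -c]])

PS = -np.eye(2)                                          # phase shifter, Eq. (15)
X = np.array([[0.0, 1.0], [1.0, 0.0]])                   # polarization bit flip
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # polarization Hadamard

print(np.allclose(hwp(np.pi / 4.0), X))   # True: HWP at 45 deg, Eq. (15)
print(np.allclose(hwp(np.pi / 8.0), H))   # True: HWP at 22.5 deg, Eq. (23)
```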
We can verify that after photons $A^{\prime}$ and $B$ are emitted from the four interferometers, $|\Phi_{3}\rangle$ becomes $\displaystyle|\Phi_{4}\rangle=|\Omega_{0}\rangle+|\Omega_{1}\rangle+|\Omega_{2}\rangle+|\Omega_{3}\rangle,$ (16) where $\displaystyle\begin{split}|\Omega_{0}\rangle=&\frac{\alpha\beta}{2}[\gamma^{2}(|HH\rangle|a_{1}b_{2^{\prime}}\rangle+|VV\rangle|a_{1}b_{2^{\prime}}\rangle)\\\ &\otimes(|HV\rangle|b_{1}b_{1}\rangle|3t,4t\rangle-|HV\rangle|a_{1^{\prime}}a_{1^{\prime}}\rangle|t,2t\rangle)\\\ &+\delta^{2}(|HH\rangle|a_{2}b_{1^{\prime}}\rangle+|VV\rangle|a_{2}b_{1^{\prime}}\rangle)\\\ &\otimes(|HV\rangle|b_{2}b_{2}\rangle|2t,t\rangle-|HV\rangle|a_{2^{\prime}}a_{2^{\prime}}\rangle|4t,3t\rangle)\\\ &+\gamma^{2}(|HH\rangle|a_{1}b_{2^{\prime}}\rangle-|VV\rangle|a_{1}b_{2^{\prime}}\rangle)\\\ &\otimes(|HV\rangle|a_{1^{\prime}}b_{1}\rangle|t,4t\rangle-|HV\rangle|b_{1}a_{1^{\prime}}\rangle|3t,2t\rangle)\\\ &+\delta^{2}(|HH\rangle|a_{2}b_{1^{\prime}}\rangle-|VV\rangle|a_{2}b_{1^{\prime}}\rangle)\\\ &\otimes(|HV\rangle|a_{2^{\prime}}b_{2}\rangle|4t,t\rangle-|HV\rangle|b_{2}a_{2^{\prime}}\rangle|2t,3t\rangle)],\end{split}$ (17) $\displaystyle\begin{split}|\Omega_{1}\rangle=&\frac{\gamma\delta}{2}[\alpha^{2}(|HV\rangle|a_{1}b_{1^{\prime}}\rangle+|HV\rangle|a_{2}b_{2^{\prime}}\rangle)\\\ &\otimes(|HH\rangle|b_{1}b_{2}\rangle|3t,2t\rangle-|HH\rangle|a_{1^{\prime}}a_{2^{\prime}}\rangle|t,4t\rangle)\\\ &+\alpha^{2}(|HV\rangle|a_{1}b_{1^{\prime}}\rangle-|HV\rangle|a_{2}b_{2^{\prime}}\rangle)\\\ &\otimes(|HH\rangle|a_{1^{\prime}}b_{2}\rangle|t,2t\rangle-|HH\rangle|b_{1}a_{2^{\prime}}\rangle|3t,4t\rangle)\\\ &+\beta^{2}(|VH\rangle|a_{1}b_{1^{\prime}}\rangle+|VH\rangle|a_{2}b_{2^{\prime}}\rangle)\\\ &\otimes(|VV\rangle|b_{1}b_{2}\rangle|4t,t\rangle-|VV\rangle|a_{1^{\prime}}a_{2^{\prime}}\rangle|2t,3t\rangle)\\\ &+\beta^{2}(|VH\rangle|a_{1}b_{1^{\prime}}\rangle-|VH\rangle|a_{2}b_{2^{\prime}}\rangle)\\\ &\otimes(|VV\rangle|a_{1^{\prime}}b_{2}\rangle|2t,t\rangle-|VV\rangle|b_{1}a_{2^{\prime}}\rangle|4t,3t\rangle)],\end{split}$ (18) $\displaystyle\begin{split}|\Omega_{2}\rangle=&\frac{\alpha\beta\gamma\delta}{2}[(|HH\rangle|a_{1}b_{1^{\prime}}\rangle+|VV\rangle|a_{2}b_{2^{\prime}}\rangle)\\\ &\otimes(|HV\rangle|b_{1}b_{2}\rangle|3t,t\rangle-|HV\rangle|a_{1^{\prime}}a_{2^{\prime}}\rangle|t,3t\rangle)\\\ &+(|HH\rangle|a_{2}b_{2^{\prime}}\rangle+|VV\rangle|a_{1}b_{1^{\prime}}\rangle)\\\ &\otimes(|VH\rangle|b_{1}b_{2}\rangle|4t,2t\rangle-|VH\rangle|a_{1^{\prime}}a_{2^{\prime}}\rangle|2t,4t\rangle)\\\ &+(|HH\rangle|a_{1}b_{1^{\prime}}\rangle-|VV\rangle|a_{2}b_{2^{\prime}}\rangle)\\\ &\otimes(|HV\rangle|a_{1^{\prime}}b_{2}\rangle|t,t\rangle-|HV\rangle|b_{1}a_{2^{\prime}}\rangle|3t,3t\rangle)\\\ &+(|HH\rangle|a_{2}b_{2^{\prime}}\rangle-|VV\rangle|a_{1}b_{1^{\prime}}\rangle)\\\ &\otimes(|HV\rangle|a_{2^{\prime}}b_{1}\rangle|4t,4t\rangle-|HV\rangle|b_{2}a_{1^{\prime}}\rangle|2t,2t\rangle)].\end{split}$ (19) $\displaystyle\begin{split}|\Omega_{3}\rangle=&\frac{1}{\sqrt{2}}[\alpha^{2}\gamma^{2}|HV\rangle|a_{1}b_{2^{\prime}}\rangle\\\ &\otimes(|HH\rangle|b_{1}b_{1}\rangle|3t,3t\rangle-|HH\rangle|a_{1^{\prime}}a_{1^{\prime}}\rangle|t,t\rangle)\\\ &+\beta^{2}\gamma^{2}|VH\rangle|a_{1}b_{2^{\prime}}\rangle\\\ &\otimes(|VV\rangle|b_{1}b_{1}\rangle|4t,4t\rangle-|VV\rangle|a_{1^{\prime}}a_{1^{\prime}}\rangle|2t,2t\rangle)\\\ &+\alpha^{2}\delta^{2}|HV\rangle|a_{2}b_{1^{\prime}}\rangle\\\ &\otimes(|HH\rangle|b_{2}b_{2}\rangle|2t,2t\rangle-|HH\rangle|a_{2^{\prime}}a_{2^{\prime}}\rangle|4t,4t\rangle)\\\ 
&+\beta^{2}\delta^{2}|VH\rangle|a_{2}b_{1^{\prime}}\rangle\\\ &\otimes(|VV\rangle|b_{2}b_{2}\rangle|t,t\rangle-|VV\rangle|a_{2^{\prime}}a_{2^{\prime}}\rangle|3t,3t\rangle)],\end{split}$ (20) Based on Eqs. (17-20), we can see that $|\Omega_{0}\rangle$ and $|\Omega_{1}\rangle$ indicate the failure of the protocol, that is, the scheme is terminated; $|\Omega_{2}\rangle$ has the potential to yield the maximal hyperentangled Bell state since it has the same coefficient for each item, whereas $|\Omega_{3}\rangle$ is promising for recycling to improve the success probability as it can yield a partially hyperentangled Bell state hyper-ECP5 ; hyper-ECP6 ; hyper-ECP7 . The rigorous argument is provided below. Thirdly, Alice (Bob) performs spatial-based and polarization-based Hadamard operations on photon $A^{\prime}$ ($B$) by $\text{BS}_{5^{\prime}}$ and $\text{HWP}_{1^{\prime},2^{\prime}}^{22.5^{\circ}}$ ($\text{BS}_{5}$ and $\text{HWP}_{1,2}^{22.5^{\circ}}$), respectively. Here the $\text{HWP}^{22.5^{\circ}}$ matrix is given by $\displaystyle\text{HWP}^{22.5^{\circ}}=\frac{1}{\sqrt{2}}\left(\begin{array}[]{cc}1&1\\\ 1&-1\\\ \end{array}\right),$ (23) in the $\\{|H\rangle,|V\rangle\\}$ basis. Here, $\text{BS}_{5^{\prime}}$ and $\text{BS}_{5}$ complete the transformations $\displaystyle\begin{split}&|a_{1^{\prime}}\rangle\rightarrow\frac{1}{\sqrt{2}}(|a_{1^{\prime}}\rangle+|a_{2^{\prime}}\rangle),\;\;|b_{1}\rangle\rightarrow\frac{1}{\sqrt{2}}(|b_{1}\rangle+|b_{2}\rangle),\\\ &|a_{2^{\prime}}\rangle\rightarrow\frac{1}{\sqrt{2}}(|a_{1^{\prime}}\rangle-|a_{2^{\prime}}\rangle),\;\;|b_{2}\rangle\rightarrow\frac{1}{\sqrt{2}}(|b_{1}\rangle-|b_{2}\rangle).\end{split}$ (24) For the desired state $|\Omega_{2}\rangle$ shown in Eq. (19), after photon $A^{\prime}$ ($B$) passes through $\text{BS}_{5^{\prime}}$ ($\text{BS}_{5}$) and $\text{HWP}_{1^{\prime},2^{\prime}}^{22.5^{\circ}}$ ($\text{HWP}_{1,2}^{22.5^{\circ}}$) in succession, if both photons have the same polarization, spatial mode, and time interval, they will interfere constructively or destructively with each other, e.g., $|HH\rangle|a_{1^{\prime}}b_{1}\rangle|t,t\rangle-|HH\rangle|a_{1^{\prime}}b_{1}\rangle|3t,3t\rangle\rightarrow 0$, $|HV\rangle|a_{1^{\prime}}b_{1}\rangle|t,t\rangle+|VH\rangle|b_{1}a_{1^{\prime}}\rangle|3t,3t\rangle\rightarrow 2|HV\rangle|a_{1^{\prime}}b_{1}\rangle D(0,0)$, and $|HV\rangle|b_{1}b_{2}\rangle|3t,t\rangle+|HV\rangle|b_{1}b_{2}\rangle|4t,2t\rangle\rightarrow 2|HV\rangle|b_{1}b_{2}\rangle D(2t,0)$ HBSA-wei ; BSA3 . Here $D(2t,0)$ represents that the first photon has a relative time delay of $2t$ compared with the second photon, while $D(0,0)$ means that the two photons have no relative time delay. Therefore, Eq. 
(19) is converted into $\displaystyle\begin{split}|\Omega_{2}^{\prime}\rangle=&\frac{\alpha\beta\gamma\delta}{4}[|\phi_{0}^{++}\rangle_{AB^{\prime}}\otimes(|HH\rangle|b_{1}b_{1}\rangle-|HH\rangle|b_{2}b_{2}\rangle\\\ &-|HH\rangle|b_{1}b_{2}\rangle+|HH\rangle|b_{2}b_{1}\rangle-|VV\rangle|b_{1}b_{1}\rangle\\\ &+|VV\rangle|b_{2}b_{2}\rangle+|VV\rangle|b_{1}b_{2}\rangle-|VV\rangle|b_{2}b_{1}\rangle\\\ &-|HH\rangle|a_{1^{\prime}}a_{1^{\prime}}\rangle+|HH\rangle|a_{2^{\prime}}a_{2^{\prime}}\rangle-|HH\rangle|a_{1^{\prime}}a_{2^{\prime}}\rangle\\\ &+|HH\rangle|a_{2^{\prime}}a_{1^{\prime}}\rangle+|VV\rangle|a_{1^{\prime}}a_{1^{\prime}}\rangle-|VV\rangle|a_{2^{\prime}}a_{2^{\prime}}\rangle\\\ &+|VV\rangle|a_{1^{\prime}}a_{2^{\prime}}\rangle-|VV\rangle|a_{2^{\prime}}a_{1^{\prime}}\rangle)D(2t,0)\\\ &+|\phi_{0}^{--}\rangle_{AB^{\prime}}\otimes(|VH\rangle|b_{1}b_{1}\rangle-|VH\rangle|b_{2}b_{2}\rangle\\\ &-|VH\rangle|b_{1}b_{2}\rangle+|VH\rangle|b_{2}b_{1}\rangle+|HV\rangle|b_{2}b_{2}\rangle\\\ &-|HV\rangle|b_{1}b_{1}\rangle+|HV\rangle|b_{1}b_{2}\rangle-|HV\rangle|b_{2}b_{1}\rangle\\\ &-|HV\rangle|a_{1^{\prime}}a_{1^{\prime}}\rangle+|HV\rangle|a_{2^{\prime}}a_{2^{\prime}}\rangle-|HV\rangle|a_{1^{\prime}}a_{2^{\prime}}\rangle\\\ &+|HV\rangle|a_{2^{\prime}}a_{1^{\prime}}\rangle+|VH\rangle|a_{1^{\prime}}a_{1^{\prime}}\rangle-|VH\rangle|a_{2^{\prime}}a_{2^{\prime}}\rangle\\\ &+|VH\rangle|a_{1^{\prime}}a_{2^{\prime}}\rangle-|VH\rangle|a_{2^{\prime}}a_{1^{\prime}}\rangle)D(2t,0)\\\ &+2|\phi_{0}^{+-}\rangle_{AB^{\prime}}\otimes(|HH\rangle|a_{2^{\prime}}b_{1}\rangle-|HH\rangle|a_{1^{\prime}}b_{2}\rangle\\\ &-|VV\rangle|a_{2^{\prime}}b_{1}\rangle+|VV\rangle|a_{1^{\prime}}b_{2}\rangle)D(0,0)\\\ &+2|\phi_{0}^{-+}\rangle_{AB^{\prime}}\otimes(|VH\rangle|a_{1^{\prime}}b_{1}\rangle-|HV\rangle|a_{1^{\prime}}b_{1}\rangle\\\ &-|VH\rangle|a_{2^{\prime}}b_{2}\rangle+|HV\rangle|a_{2^{\prime}}b_{2}\rangle)D(0,0)],\end{split}$ (25) where $\displaystyle\begin{split}&|\phi_{0}^{+\pm}\rangle_{AB^{\prime}}=\frac{1}{2}(|HH\rangle+|VV\rangle)\otimes(|a_{1}b_{1^{\prime}}\rangle\pm|a_{2}b_{2^{\prime}}\rangle),\\\ &|\phi_{0}^{\pm+}\rangle_{AB^{\prime}}=\frac{1}{2}(|HH\rangle\pm|VV\rangle)\otimes(|a_{1}b_{1^{\prime}}\rangle+|a_{2}b_{2^{\prime}}\rangle).\end{split}$ (26) For the recycling state $|\Omega_{3}\rangle$ shown in Eq.
(20), we can infer that after the spatial-based and polarization-based Hadamard operations are applied on photons $A^{\prime}$ and $B$, it is transformed into $\displaystyle\begin{split}|\Omega_{3}^{\prime}\rangle=&\frac{\sqrt{(|\alpha|^{4}+|\beta|^{4})(|\gamma|^{4}+|\delta|^{4})}}{2\sqrt{14}}\\\ &\times[|\phi_{1}^{++}\rangle_{AB^{\prime}}\otimes(|HH\rangle|b_{1}b_{1}\rangle+|HH\rangle|b_{2}b_{2}\rangle\\\ &+|VV\rangle|b_{1}b_{1}\rangle+|VV\rangle|b_{2}b_{2}\rangle\\\ &-|HH\rangle|a_{1^{\prime}}a_{1^{\prime}}\rangle-|HH\rangle|a_{2^{\prime}}a_{2^{\prime}}\rangle\\\ &-|VV\rangle|a_{1^{\prime}}a_{1^{\prime}}\rangle-|VV\rangle|a_{2^{\prime}}a_{2^{\prime}}\rangle)D(0,0)\\\ &+2|\phi_{1}^{-+}\rangle_{AB^{\prime}}\otimes(|HV\rangle|b_{1}b_{1}\rangle+|HV\rangle|b_{2}b_{2}\rangle\\\ &-|HV\rangle|a_{1^{\prime}}a_{1^{\prime}}\rangle-|HV\rangle|a_{2^{\prime}}a_{2^{\prime}}\rangle)D(0,0)\\\ &+2|\phi_{1}^{+-}\rangle_{AB^{\prime}}\otimes(|HH\rangle|b_{1}b_{2}\rangle+|VV\rangle|b_{1}b_{2}\rangle\\\ &-|HH\rangle|a_{1^{\prime}}a_{2^{\prime}}\rangle-|VV\rangle|a_{1^{\prime}}a_{2^{\prime}}\rangle)D(0,0)\\\ &+2|\phi_{1}^{--}\rangle_{AB^{\prime}}\otimes(|HV\rangle|b_{1}b_{2}\rangle+|HV\rangle|b_{2}b_{1}\rangle\\\ &-|HV\rangle|a_{1^{\prime}}a_{2^{\prime}}\rangle-|HV\rangle|a_{2^{\prime}}a_{1^{\prime}}\rangle)D(0,0)],\end{split}$ (27) where $\displaystyle\begin{split}|\phi_{1}^{+\pm}\rangle_{AB^{\prime}}=&(\alpha^{\prime}|HV\rangle+\beta^{\prime}|VH\rangle)\otimes(\gamma^{\prime}|a_{1}b_{2^{\prime}}\rangle\\\ &\pm\delta^{\prime}|a_{2}b_{1^{\prime}}\rangle),\\\ |\phi_{1}^{\pm+}\rangle_{AB^{\prime}}=&(\alpha^{\prime}|HV\rangle\pm\beta^{\prime}|VH\rangle)\otimes(\gamma^{\prime}|a_{1}b_{2^{\prime}}\rangle\\\ &+\delta^{\prime}|a_{2}b_{1^{\prime}}\rangle),\end{split}$ (28) with $\alpha^{\prime}=\frac{\alpha^{2}}{\sqrt{|\alpha|^{4}+|\beta|^{4}}}$, $\beta^{\prime}=\frac{\beta^{2}}{\sqrt{|\alpha|^{4}+|\beta|^{4}}}$, $\gamma^{\prime}=\frac{\gamma^{2}}{\sqrt{|\gamma|^{4}+|\delta|^{4}}}$, and $\delta^{\prime}=\frac{\delta^{2}}{\sqrt{|\gamma|^{4}+|\delta|^{4}}}$. Finally, the polarization and spatial information of photons $A^{\prime}$ and $B$ are measured by the PBSs and the common single-photon detectors $D_{i}$ $(i=1,2,\cdots,8)$. For $|\Omega_{0}\rangle$ or $|\Omega_{1}\rangle$, after passing through $\text{BS}_{5}$, $\text{BS}_{5^{\prime}}$, $\text{HWP}^{22.5^{\circ}}_{1,2}$, and $\text{HWP}^{22.5^{\circ}}_{1^{\prime},2^{\prime}}$, an arbitrary single-photon detector pair $(D_{i},D_{j})$ $(i,j=1,2,\cdots,8)$ is triggered with a time interval of $t$ or $3t$, which means the scheme has failed. For $|\Omega_{2}^{\prime}\rangle$, as shown in Tab. 1, either an arbitrary detector pair $(D_{i},D_{j})$ is fired with a time interval of $2t$, or Alice and Bob each have one detector triggered with no time interval, and $|\Omega_{2}^{\prime}\rangle$ collapses to $|\phi_{0}^{\pm\pm}\rangle_{AB^{\prime}}$. After some classical feed-forward operations (see Tab. 1) are performed on photon $B^{\prime}$, Alice and Bob obtain the desired maximal hyperentangled Bell state $|\phi_{0}^{++}\rangle_{AB^{\prime}}$ with a success probability of $P_{1}=4|\alpha\beta\gamma\delta|^{2}$. As for $|\Omega_{3}^{\prime}\rangle$, either only one common single-photon detector $D_{i}$ is fired, or two single-photon detectors on one side, Alice's or Bob's, are fired simultaneously. Meanwhile, $|\Omega_{3}^{\prime}\rangle$ collapses to one of the states given in Eq. (28).
After implementing the corresponding feed-forward operations on photon $B^{\prime}$, Alice and Bob obtain the state $|\phi_{1}^{++}\rangle_{AB^{\prime}}$ with a probability of $(|\alpha|^{4}+|\beta|^{4})(|\gamma|^{4}+|\delta|^{4})$. Table 1: The relations between the detection signatures and the classical feed-forward operations to complete the hyper-ECP for the hyperentangled Bell state. The spatial-based feed-forward operation $Z^{S}=|b_{1^{\prime}}\rangle\langle b_{1^{\prime}}|-|b_{2^{\prime}}\rangle\langle b_{2^{\prime}}|$ can be achieved by using a PS, and the polarization-based $Z^{P}=|H\rangle\langle H|-|V\rangle\langle V|$ can be accomplished by employing an $\text{HWP}^{0^{\circ}}$. The superscript $2t$ on the detector $D_{i}$ indicates a relative time delay of $2t$. $|\phi_{0}^{\pm\pm}\rangle_{AB^{\prime}}$ and $|\phi_{1}^{\pm\pm}\rangle_{AB^{\prime}}$ are the desired and recyclable outcomes, respectively.

| Time interval | Single-photon detectors | Outcomes of $A$ and $B^{\prime}$ | Feed-forward on $B^{\prime}$ | Success probability |
|---|---|---|---|---|
| 0 | $D_{1}$, $D_{2}$, $D_{3}$, $D_{4}$, $D_{5}$, $D_{6}$, $D_{7}$, $D_{8}$ | $|\phi_{1}^{++}\rangle_{AB^{\prime}}$ | none | $(|\alpha|^{4}+|\beta|^{4})(|\gamma|^{4}+|\delta|^{4})$ |
| 0 | $(D_{1},D_{2})$, $(D_{3},D_{4})$, $(D_{5},D_{6})$, $(D_{7},D_{8})$ | $|\phi_{1}^{-+}\rangle_{AB^{\prime}}$ | $Z^{P}$ | |
| 0 | $(D_{1},D_{4})$, $(D_{2},D_{3})$, $(D_{5},D_{8})$, $(D_{6},D_{7})$ | $|\phi_{1}^{+-}\rangle_{AB^{\prime}}$ | $Z^{S}$ | |
| 0 | $(D_{1},D_{3})$, $(D_{2},D_{4})$, $(D_{5},D_{7})$, $(D_{6},D_{8})$ | $|\phi_{1}^{--}\rangle_{AB^{\prime}}$ | $Z^{S}$, $Z^{P}$ | |
| 0 | $(D_{1},D_{8})$, $(D_{2},D_{7})$, $(D_{3},D_{6})$, $(D_{4},D_{5})$ | $|\phi_{0}^{+-}\rangle_{AB^{\prime}}$ | $Z^{S}$ | $4|\alpha\beta\gamma\delta|^{2}$ |
| 0 | $(D_{1},D_{6})$, $(D_{2},D_{5})$, $(D_{3},D_{8})$, $(D_{4},D_{7})$ | $|\phi_{0}^{-+}\rangle_{AB^{\prime}}$ | $Z^{P}$ | |
| $2t$ | $(D_{1}^{2t},D_{2})$, $(D_{1},D_{2}^{2t})$, $(D_{3}^{2t},D_{4})$, $(D_{3},D_{4}^{2t})$, $(D_{1}^{2t},D_{3})$, $(D_{1},D_{3}^{2t})$, $(D_{2}^{2t},D_{4})$, $(D_{2},D_{4}^{2t})$, $(D_{5}^{2t},D_{6})$, $(D_{5},D_{6}^{2t})$, $(D_{7}^{2t},D_{8})$, $(D_{7},D_{8}^{2t})$, $(D_{5}^{2t},D_{7})$, $(D_{5},D_{7}^{2t})$, $(D_{6}^{2t},D_{8})$, $(D_{6},D_{8}^{2t})$ | $|\phi_{0}^{--}\rangle_{AB^{\prime}}$ | $Z^{S}$, $Z^{P}$ | |
| $2t$ | $(D_{1}^{2t},D_{1})$, $(D_{2}^{2t},D_{2})$, $(D_{3}^{2t},D_{3})$, $(D_{4}^{2t},D_{4})$, $(D_{5}^{2t},D_{5})$, $(D_{6}^{2t},D_{6})$, $(D_{7}^{2t},D_{7})$, $(D_{8}^{2t},D_{8})$, $(D_{1}^{2t},D_{4})$, $(D_{1},D_{4}^{2t})$, $(D_{2}^{2t},D_{3})$, $(D_{2},D_{3}^{2t})$, $(D_{5}^{2t},D_{8})$, $(D_{5},D_{8}^{2t})$, $(D_{6}^{2t},D_{7})$, $(D_{6},D_{7}^{2t})$ | $|\phi_{0}^{++}\rangle_{AB^{\prime}}$ | none | |

After flipping the polarization and spatial mode of photon $B^{\prime}$ of the state $|\phi_{1}^{++}\rangle_{AB^{\prime}}$, it is transformed into $\displaystyle\begin{split}|\phi_{2}^{++}\rangle_{AB^{\prime}}=&(\alpha^{\prime}|HH\rangle+\beta^{\prime}|VV\rangle)\otimes(\gamma^{\prime}|a_{1}b_{1^{\prime}}\rangle\\\ &+\delta^{\prime}|a_{2}b_{2^{\prime}}\rangle).\end{split}$ (29) It is obvious that Eq. (29) has a similar form to a partially hyperentangled Bell state $(\alpha|HH\rangle+\beta|VV\rangle)\otimes(\gamma|a_{1}b_{1}\rangle+\delta|a_{2}b_{2}\rangle)$ as described in Ref. hyper-ECP1 . Then, applying the method in Ref.
hyper-ECP1 , Alice and Bob can further extract the maximal hyperentangled Bell state $|\phi_{0}^{++}\rangle_{AB^{\prime}}$ from $|\phi_{2}^{++}\rangle_{AB^{\prime}}$ with a success probability of $4|\alpha^{\prime}\beta^{\prime}\gamma^{\prime}\delta^{\prime}|^{2}$. Therefore, this improves the total success probability of our hyper-ECP from $P_{1}$ to $\displaystyle\begin{split}P_{2}&=P_{1}+4|\alpha^{\prime}\beta^{\prime}\gamma^{\prime}\delta^{\prime}|^{2}(|\alpha|^{4}+|\beta|^{4})(|\gamma|^{4}+|\delta|^{4})\\\ &=4|\alpha\beta\gamma\delta|^{2}+\frac{4|\alpha\beta\gamma\delta|^{4}}{(|\alpha|^{4}+|\beta|^{4})(|\gamma|^{4}+|\delta|^{4})}.\end{split}$ (30) Figure 2: The total success probability of the presented hyper-Bell state concentration protocol (a) without and (b) with recycling the state $|\phi_{1}^{++}\rangle_{AB^{\prime}}$. Figure 2 plots the success probabilities $P_{1}$ and $P_{2}$ as functions of $|\alpha|^{2}$ and $|\gamma|^{2}$, where $|\alpha|^{2}=1-|\beta|^{2}\in(0,0.5]$ and $|\gamma|^{2}=1-|\delta|^{2}\in(0,0.5]$ are taken. As shown in Fig. 2(a), the maximum of $P_{1}$ is $0.25$ without recycling $|\phi_{1}^{++}\rangle_{AB^{\prime}}$, which equals the result shown in Ref. hyper-ECP1 . However, after concentrating the state $|\phi_{1}^{++}\rangle_{AB^{\prime}}$ again, as shown in Fig. 2(b), the maximum success probability can be increased to $P_{2}=0.3125$ in principle.
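Before turning to the GHZ case, the maxima quoted above for Fig. 2 can be checked numerically from Eq. (30); a minimal sketch (the function names are our own):

```python
import numpy as np

def p1(a2, g2):
    """P1 = 4|alpha beta gamma delta|^2, with |alpha|^2 = a2, |gamma|^2 = g2."""
    return 4.0 * a2 * (1.0 - a2) * g2 * (1.0 - g2)

def p2(a2, g2):
    """Eq. (30): P1 plus the contribution from recycling |phi_1^{++}>."""
    num = 4.0 * (a2 * (1.0 - a2) * g2 * (1.0 - g2)) ** 2
    den = (a2**2 + (1.0 - a2)**2) * (g2**2 + (1.0 - g2)**2)
    return p1(a2, g2) + num / den

print(p1(0.5, 0.5))   # 0.25   -- maximum without recycling
print(p2(0.5, 0.5))   # 0.3125 -- maximum with one round of recycling
```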
### II.2 Hyper-ECP for partially hyperentangled GHZ states with unknown parameters

Suppose two maximally hyperentangled GHZ states from $S_{1}$ and $S_{2}$ are given by $\displaystyle\begin{split}|\psi\rangle_{ABC}=&\frac{1}{2}(|HHH\rangle+|VVV\rangle)\\\ &\otimes(|a_{1}b_{1}c_{1}\rangle+|a_{2}b_{2}c_{2}\rangle),\\\ |\psi\rangle_{A^{\prime}B^{\prime}C^{\prime}}=&\frac{1}{2}(|HHH\rangle+|VVV\rangle)\\\ &\otimes(|a_{1^{\prime}}b_{1^{\prime}}c_{1^{\prime}}\rangle+|a_{2^{\prime}}b_{2^{\prime}}c_{2^{\prime}}\rangle).\end{split}$ (31) The $|a_{k}\rangle$ ($|a_{k^{\prime}}\rangle$), $|b_{k}\rangle$ ($|b_{k^{\prime}}\rangle$), and $|c_{k}\rangle$ ($|c_{k^{\prime}}\rangle$), $k=1,2$, are spatial modes of photons $A$ ($A^{\prime}$), $B$ ($B^{\prime}$), and $C$ ($C^{\prime}$), respectively. As shown in Fig. 3, after photon $B$ in the spatial mode $|b_{1}\rangle$ ($|b_{2}\rangle$) and photon $A^{\prime}$ in the spatial mode $|a_{1^{\prime}}\rangle$ ($|a_{2^{\prime}}\rangle$) pass through the BS, photons $A$, $B$, and $C$ are distributed to the three distant parties Alice, Bob, and Charlie, respectively. Due to the operations of the BSs and the noisy channels, the six-photon system composed of photons $A$, $B$, $C$, $A^{\prime}$, $B^{\prime}$, and $C^{\prime}$ is given by $\displaystyle\begin{split}|\Psi_{0}\rangle=&\frac{1}{\sqrt{2}}(\alpha^{2}|HHHHHH\rangle+\beta^{2}|VVVVVV\rangle)\\\ &\otimes[\gamma^{2}|a_{1}b_{1^{\prime}}c_{1}\rangle(|b_{1}b_{1}\rangle-|a_{1^{\prime}}a_{1^{\prime}}\rangle)|c_{1^{\prime}}\rangle\\\ &+\delta^{2}|a_{2}b_{2^{\prime}}c_{2}\rangle(|b_{2}b_{2}\rangle-|a_{2^{\prime}}a_{2^{\prime}}\rangle)|c_{2^{\prime}}\rangle]\\\ &+\frac{\alpha\beta}{2}(|HVHVHV\rangle+|VHVHVH\rangle)\\\ &\otimes[\gamma^{2}|a_{1}b_{1^{\prime}}c_{2}\rangle(|b_{1}b_{1}\rangle-|a_{1^{\prime}}a_{1^{\prime}}\rangle-|b_{1}a_{1^{\prime}}\rangle\\\ &+|a_{1^{\prime}}b_{1}\rangle)|c_{1^{\prime}}\rangle+\delta^{2}|a_{2}b_{2^{\prime}}c_{2}\rangle(|b_{2}b_{2}\rangle\\\ &-|a_{2^{\prime}}a_{2^{\prime}}\rangle-|b_{2}a_{2^{\prime}}\rangle+|a_{2^{\prime}}b_{2}\rangle)|c_{2^{\prime}}\rangle]\\\ &+\frac{\gamma\delta}{2}(\alpha^{2}|HHHHHH\rangle+\beta^{2}|VVVVVV\rangle\\\ &+\alpha\beta|HVHVHV\rangle+\beta\alpha|VHVHVH\rangle)\\\ &\otimes[|a_{1}b_{2^{\prime}}c_{1}\rangle(|b_{1}b_{2}\rangle-|a_{1^{\prime}}a_{2^{\prime}}\rangle-|b_{1}a_{2^{\prime}}\rangle\\\ &+|a_{1^{\prime}}b_{2}\rangle)|c_{1^{\prime}}\rangle+|a_{2}b_{1^{\prime}}c_{2}\rangle(|b_{2}b_{1}\rangle\\\ &-|a_{2^{\prime}}a_{1^{\prime}}\rangle-|b_{2}a_{1^{\prime}}\rangle+|a_{2^{\prime}}b_{1}\rangle)|c_{2^{\prime}}\rangle].\end{split}$ (32) The architecture for distilling the maximally hyperentangled GHZ states from $|\Psi_{0}\rangle$ is shown in Fig. 3, where Alice and Bob perform the same operations as in Fig. 1. After Alice, Bob, and Charlie have completed the concentration operations, in a manner similar to that described in Sec. II.1, the relationship between the output states and the corresponding triggered detectors is shown in Tab. 2. The states appearing in Tab. 2 are as follows: $\displaystyle\begin{split}|\psi^{+\pm}_{0}\rangle_{AB^{\prime}C}=&\frac{1}{2}(|HHH\rangle+|VVV\rangle)\\\ &\otimes(|a_{1}b_{1^{\prime}}c_{1}\rangle\pm|a_{2}b_{2^{\prime}}c_{2}\rangle),\\\ |\psi^{\pm+}_{0}\rangle_{AB^{\prime}C}=&\frac{1}{2}(|HHH\rangle\pm|VVV\rangle)\\\ &\otimes(|a_{1}b_{1^{\prime}}c_{1}\rangle+|a_{2}b_{2^{\prime}}c_{2}\rangle),\\\ |\psi^{+\pm}_{1}\rangle_{AB^{\prime}C}=&(\alpha^{\prime}|HVH\rangle+\beta^{\prime}|VHV\rangle)\\\ &\otimes(\gamma^{\prime}|a_{1}b_{2^{\prime}}c_{1}\rangle\pm\delta^{\prime}|a_{2}b_{1^{\prime}}c_{2}\rangle),\\\ |\psi^{\pm+}_{1}\rangle_{AB^{\prime}C}=&(\alpha^{\prime}|HVH\rangle\pm\beta^{\prime}|VHV\rangle)\\\ &\otimes(\gamma^{\prime}|a_{1}b_{2^{\prime}}c_{1}\rangle+\delta^{\prime}|a_{2}b_{1^{\prime}}c_{2}\rangle).\end{split}$ (33)

Table 2: The relations between the detection signatures, the output states, and the classical feed-forward operations to complete the hyper-ECP for a hyperentangled GHZ state with unknown parameters. $|\psi_{0}^{\pm\pm}\rangle_{AB^{\prime}C}$ and $|\psi_{1}^{\pm\pm}\rangle_{AB^{\prime}C}$ are the desired and recyclable outcomes, respectively.
Detectors of Charlie | Detectors of Alice and Bob | Outcomes of $A$, $B^{\prime}$, and $C$ | Feed-forward on $B^{\prime}$
---|---|---|---
$D_{H1}$ | $D_{1},D_{2},D_{3},D_{4},D_{5},D_{6},D_{7},D_{8}$ | $|\psi_{1}^{++}\rangle_{AB^{\prime}C}$ | none
$D_{H2}$ | $(D_{2},D_{3}),(D_{1},D_{4}),(D_{6},D_{7}),(D_{5},D_{8})$ | |
$D_{V1}$ | $(D_{1},D_{2}),(D_{3},D_{4}),(D_{5},D_{6}),(D_{7},D_{8})$ | |
$D_{V2}$ | $(D_{1},D_{3}),(D_{2},D_{4}),(D_{5},D_{7}),(D_{6},D_{8})$ | |
$D_{H2}$ | $D_{1},D_{2},D_{3},D_{4},D_{5},D_{6},D_{7},D_{8}$ | $|\psi_{1}^{+-}\rangle_{AB^{\prime}C}$ | $Z^{S}$
$D_{H1}$ | $(D_{2},D_{3}),(D_{1},D_{4}),(D_{6},D_{7}),(D_{5},D_{8})$ | |
$D_{V2}$ | $(D_{1},D_{2}),(D_{3},D_{4}),(D_{5},D_{6}),(D_{7},D_{8})$ | |
$D_{V1}$ | $(D_{1},D_{3}),(D_{2},D_{4}),(D_{5},D_{7}),(D_{6},D_{8})$ | |
$D_{V1}$ | $D_{1},D_{2},D_{3},D_{4},D_{5},D_{6},D_{7},D_{8}$ | $|\psi_{1}^{-+}\rangle_{AB^{\prime}C}$ | $Z^{P}$
$D_{V2}$ | $(D_{2},D_{3}),(D_{1},D_{4}),(D_{6},D_{7}),(D_{5},D_{8})$ | |
$D_{H1}$ | $(D_{1},D_{2}),(D_{3},D_{4}),(D_{5},D_{6}),(D_{7},D_{8})$ | |
$D_{H2}$ | $(D_{1},D_{3}),(D_{2},D_{4}),(D_{5},D_{7}),(D_{6},D_{8})$ | |
$D_{V2}$ | $D_{1},D_{2},D_{3},D_{4},D_{5},D_{6},D_{7},D_{8}$ | $|\psi_{1}^{--}\rangle_{AB^{\prime}C}$ | $Z^{S},Z^{P}$
$D_{V1}$ | $(D_{2},D_{3}),(D_{1},D_{4}),(D_{6},D_{7}),(D_{5},D_{8})$ | |
$D_{H2}$ | $(D_{1},D_{2}),(D_{3},D_{4}),(D_{5},D_{6}),(D_{7},D_{8})$ | |
$D_{H1}$ | $(D_{1},D_{3}),(D_{2},D_{4}),(D_{5},D_{7}),(D_{6},D_{8})$ | |
$D_{H2}$ | $(D_{1},D_{8}),(D_{2},D_{7}),(D_{3},D_{6}),(D_{4},D_{5})$ | $|\psi_{0}^{++}\rangle_{AB^{\prime}C}$ | none
$D_{V1}$ | $(D_{1},D_{6}),(D_{2},D_{5}),(D_{3},D_{8}),(D_{4},D_{7})$ | |
$D_{H1}$ | $(D_{1}^{2t},D_{1}),(D_{2}^{2t},D_{2}),(D_{3}^{2t},D_{3}),(D_{4}^{2t},D_{4}),(D_{1}^{2t},D_{4}),(D_{2}^{2t},D_{3}),(D_{1},D_{4}^{2t}),(D_{2},D_{3}^{2t}),(D_{5}^{2t},D_{5}),(D_{6}^{2t},D_{6}),(D_{7}^{2t},D_{7}),(D_{8}^{2t},D_{8}),(D_{5}^{2t},D_{8}),(D_{6}^{2t},D_{7}),(D_{5},D_{8}^{2t}),(D_{6},D_{7}^{2t})$ | |
$D_{V2}$ | $(D_{1}^{2t},D_{2}),(D_{3}^{2t},D_{4}),(D_{1}^{2t},D_{3}),(D_{2}^{2t},D_{4}),(D_{1},D_{2}^{2t}),(D_{3},D_{4}^{2t}),(D_{1},D_{3}^{2t}),(D_{2},D_{4}^{2t}),(D_{5}^{2t},D_{6}),(D_{7}^{2t},D_{8}),(D_{5}^{2t},D_{7}),(D_{6}^{2t},D_{8}),(D_{5},D_{6}^{2t}),(D_{7},D_{8}^{2t}),(D_{5},D_{7}^{2t}),(D_{6},D_{8}^{2t})$ | |
$D_{H1}$ | $(D_{1},D_{8}),(D_{2},D_{7}),(D_{3},D_{6}),(D_{4},D_{5})$ | $|\psi_{0}^{+-}\rangle_{AB^{\prime}C}$ | $Z^{S}$
$D_{V2}$ | $(D_{1},D_{6}),(D_{2},D_{5}),(D_{3},D_{8}),(D_{4},D_{7})$ | |
$D_{H2}$ | $(D_{1}^{2t},D_{1}),(D_{2}^{2t},D_{2}),(D_{3}^{2t},D_{3}),(D_{4}^{2t},D_{4}),(D_{1}^{2t},D_{4}),(D_{2}^{2t},D_{3}),(D_{1},D_{4}^{2t}),(D_{2},D_{3}^{2t}),(D_{5}^{2t},D_{5}),(D_{6}^{2t},D_{6}),(D_{7}^{2t},D_{7}),(D_{8}^{2t},D_{8}),(D_{5}^{2t},D_{8}),(D_{6}^{2t},D_{7}),(D_{5},D_{8}^{2t}),(D_{6},D_{7}^{2t})$ | |
$D_{V1}$ | $(D_{1}^{2t},D_{2}),(D_{3}^{2t},D_{4}),(D_{1}^{2t},D_{3}),(D_{2}^{2t},D_{4}),(D_{1},D_{2}^{2t}),(D_{3},D_{4}^{2t}),(D_{1},D_{3}^{2t}),(D_{2},D_{4}^{2t}),(D_{5}^{2t},D_{6}),(D_{7}^{2t},D_{8}),(D_{5}^{2t},D_{7}),(D_{6}^{2t},D_{8}),(D_{5},D_{6}^{2t}),(D_{7},D_{8}^{2t}),(D_{5},D_{7}^{2t}),(D_{6},D_{8}^{2t})$ | |
$D_{V2}$ | $(D_{1},D_{8}),(D_{2},D_{7}),(D_{3},D_{6}),(D_{4},D_{5})$ | $|\psi_{0}^{-+}\rangle_{AB^{\prime}C}$ | $Z^{P}$
$D_{H1}$ | $(D_{1},D_{6}),(D_{2},D_{5}),(D_{3},D_{8}),(D_{4},D_{7})$ | |
$D_{V1}$ | $(D_{1}^{2t},D_{1}),(D_{2}^{2t},D_{2}),(D_{3}^{2t},D_{3}),(D_{4}^{2t},D_{4}),(D_{1}^{2t},D_{4}),(D_{2}^{2t},D_{3}),(D_{1},D_{4}^{2t}),(D_{2},D_{3}^{2t}),(D_{5}^{2t},D_{5}),(D_{6}^{2t},D_{6}),(D_{7}^{2t},D_{7}),(D_{8}^{2t},D_{8}),(D_{5}^{2t},D_{8}),(D_{6}^{2t},D_{7}),(D_{5},D_{8}^{2t}),(D_{6},D_{7}^{2t})$ | |
$D_{H2}$ | $(D_{1}^{2t},D_{2}),(D_{3}^{2t},D_{4}),(D_{1}^{2t},D_{3}),(D_{2}^{2t},D_{4}),(D_{1},D_{2}^{2t}),(D_{3},D_{4}^{2t}),(D_{1},D_{3}^{2t}),(D_{2},D_{4}^{2t}),(D_{5}^{2t},D_{6}),(D_{7}^{2t},D_{8}),(D_{5}^{2t},D_{7}),(D_{6}^{2t},D_{8}),(D_{5},D_{6}^{2t}),(D_{7},D_{8}^{2t}),(D_{5},D_{7}^{2t}),(D_{6},D_{8}^{2t})$ | |
$D_{V1}$ | $(D_{1},D_{8}),(D_{2},D_{7}),(D_{3},D_{6}),(D_{4},D_{5})$ | $|\psi_{0}^{--}\rangle_{AB^{\prime}C}$ | $Z^{S},Z^{P}$
$D_{H2}$ | $(D_{1},D_{6}),(D_{2},D_{5}),(D_{3},D_{8}),(D_{4},D_{7})$ | |
$D_{V2}$ | $(D_{1}^{2t},D_{1}),(D_{2}^{2t},D_{2}),(D_{3}^{2t},D_{3}),(D_{4}^{2t},D_{4}),(D_{1}^{2t},D_{4}),(D_{2}^{2t},D_{3}),(D_{1},D_{4}^{2t}),(D_{2},D_{3}^{2t}),(D_{5}^{2t},D_{5}),(D_{6}^{2t},D_{6}),(D_{7}^{2t},D_{7}),(D_{8}^{2t},D_{8}),(D_{5}^{2t},D_{8}),(D_{6}^{2t},D_{7}),(D_{5},D_{8}^{2t}),(D_{6},D_{7}^{2t})$ | |
$D_{H1}$ | $(D_{1}^{2t},D_{2}),(D_{3}^{2t},D_{4}),(D_{1}^{2t},D_{3}),(D_{2}^{2t},D_{4}),(D_{1},D_{2}^{2t}),(D_{3},D_{4}^{2t}),(D_{1},D_{3}^{2t}),(D_{2},D_{4}^{2t}),(D_{5}^{2t},D_{6}),(D_{7}^{2t},D_{8}),(D_{5}^{2t},D_{7}),(D_{6}^{2t},D_{8}),(D_{5},D_{6}^{2t}),(D_{7},D_{8}^{2t}),(D_{5},D_{7}^{2t}),(D_{6},D_{8}^{2t})$ | |

After performing the feed-forward operations on photon $B^{\prime}$ shown in Tab. 2, Alice, Bob, and Charlie obtain the maximally hyperentangled GHZ state $|\psi^{++}_{0}\rangle_{AB^{\prime}C}$ with a success probability of $4|\alpha\beta\gamma\delta|^{2}$, or the recyclable hyperentangled state $|\psi^{++}_{1}\rangle_{AB^{\prime}C}$ with a probability of $(|\alpha|^{4}+|\beta|^{4})(|\gamma|^{4}+|\delta|^{4})$. It is easy to verify that, by recycling $|\psi^{++}_{1}\rangle_{AB^{\prime}C}$, the total success probability of the scheme shown in Fig. 3 equals that of the Bell-state scheme plotted in Fig. 2. Additionally, whenever any single-photon detector pair $(D_{i},D_{j})$ $(i,j=1,2,\cdots,8)$ belonging to Alice and Bob is triggered with a time interval of $t$ or $3t$, the concentration process fails and is terminated, regardless of the detection signatures of Charlie.

Figure 3: (Color online) Schematic diagram of the hyper-ECP for a hyperentangled GHZ state with unknown parameters. $S_{1}$ and $S_{2}$ are partial hyperentanglement sources for $|\psi\rangle_{ABC}$ and $|\psi\rangle_{A^{\prime}B^{\prime}C^{\prime}}$, respectively.

## III Discussion and Summary

Hyperentanglement concentration is one of the methods to reduce the influence of channel noise in high-capacity long-distance quantum communication. An ECP or hyper-ECP handles the case in which the transmitted state remains pure after passing through the noisy channels. In practice, when affected by noise, the input state is not necessarily still pure but may be a mixed state, in which case EPPs are required.
In this article, we have designed a hyper-ECP for unknown hyperentangled Bell states in both the polarization and spatial DOFs that can be unambiguously heralded by the detection signatures. Remarkably, this heralded hyper-ECP can be extended to the GHZ-state case. In the constructions of previous hyper-ECPs using linear optics, PBSs are employed to complete the parity-check measurement of polarization photon pairs hyper-ECP1; hyper-ECP2; hyper-ECP3; hyper-ECP4. Meanwhile, postselection is necessary to explicitly identify the instances where each of the spatial modes contains exactly one photon without destruction of the incident photon, which leads to schemes that cannot be accomplished with linear optical elements alone. We avoid postselection and sophisticated photon-number-resolving detectors by introducing the time-delay DOF, as in Ref. BSA3. By the detection signatures of the common single-photon detector pairs $(D_{i},D_{j})$, the instances $\{|HH\rangle|a_{1}b_{1^{\prime}}\rangle$, $|HH\rangle|a_{2}b_{2^{\prime}}\rangle$, $|VV\rangle|a_{1}b_{1^{\prime}}\rangle$, $|VV\rangle|a_{2}b_{2^{\prime}}\rangle\}$ and $\{|HV\rangle|a_{1}b_{2^{\prime}}\rangle$, $|HV\rangle|a_{2}b_{1^{\prime}}\rangle$, $|VH\rangle|a_{1}b_{2^{\prime}}\rangle$, $|VH\rangle|a_{2}b_{1^{\prime}}\rangle\}$, as well as all the other terms, can be perfectly distinguished without destruction of the incident photons. Additionally, the success probability of our linear-optics architectures can be increased from $\frac{1}{4}$ to $\frac{5}{16}$ by recycling partially hyperentangled states. In previous works, only hyper-ECPs assisted by nonlinear media allowed such a recycling procedure to efficiently improve the success probability hyper-ECP5; hyper-ECP6; hyper-ECP7; hyper-ECP8. That is attributed to the quantum anti-Zeno effect, which accelerates the dynamic evolution towards the target state by frequent measurement Zeno1. Although the success probability of $\frac{5}{16}$ is still lower than that of nonlinear hyper-ECPs, linear-optics implementations of our schemes are readily available in practice with current technology. In the overall process, we assume that the performances of the linear optical elements and single-photon detectors are perfect. This assumption does not hold in a practical experiment, and the schemes may be negatively affected by noise during application Wang2018. The quantum Zeno effect can enhance the measurement accuracy of entangled probes to resist these disadvantages Zeno2.

In summary, we have presented a practically enhanced hyper-ECP for the hyperentangled Bell state in both the polarization and spatial-mode DOFs with unknown parameters using available linear optical elements and common single-photon detectors. By introducing the time-delay DOF into unbalanced interferometers, the practicality of our protocol is efficiently enhanced, as the scheme can be perfectly heralded by the unique detection signatures, and neither postselection nor sophisticated photon-number-resolving detectors are needed. Moreover, the success probability of our linear-optics hyper-ECP is higher than that of Ref. hyper-ECP1 thanks to the recycling of partially hyperentangled states. Note that, unlike in previous works, the hyperparallel parity-check measurements in our scheme acting on the polarization and spatial DOFs are accomplished simultaneously in one step. We have also extended the hyper-ECP for hyperentangled Bell states to generic hyperentangled GHZ states. We will study nonpostselection EPPs in the future.
## ACKNOWLEDGEMENTS This work is supported by the Fundamental Research Funds for the Central Universities under Grant No. FRF-TP-19-011A3 and the National Natural Science Foundation of China under Grant No. 11604012. ## References * (1) M. A. Nielsen and I. L. Chuang, _Quantum Computation and Quantum Information_ (Cambridge University Press, Cambridge, 2000). * (2) D. P. DiVincenzo, Quantum computation, Science 270, 255 (1995). * (3) A. K. Ekert, Quantum cryptography based on Bell’s theorem, Phys. Rev. Lett. 67, 661 (1991). * (4) X. F. Ma, P. Zeng, and H. Y. Zhou, Phase-Matching Quantum Key Distribution, Phys. Rev. X 8, 031043 (2018). * (5) S. T. Ren, Y. Wang, and X. L. Su, Hybrid quantum key distribution network, Sci. China-Inf. Sci. 65, 200502 (2022). * (6) X. L. Wang, X. D. Cai, Z. E. Su, M. C. Chen, D. Wu, L. Li, N. L. Liu, C. Y. Lu, and J. W. Pan, Quantum teleportation of multiple degrees of freedom of a single photon, Nature (London) 518, 516 (2015). * (7) P. Lipka-Bartosik and P. Skrzypczyk, Catalytic Quantum Teleportation, Phys. Rev. Lett. 127, 080502 (2021). * (8) C. H. Bennett and S. J. Wiesner, Communication via one- and two-particle operators on Einstein-Podolsky-Rosen states, Phys. Rev. Lett. 69, 2881 (1992). * (9) Y. X. Chen, S. S Liu, Y. B. Lou, and J. T. Jing, Orbital Angular Momentum Multiplexed Quantum Dense Coding, Phys. Rev. Lett. 127, 093601 (2021). * (10) J. Bogdanski, N. Rafiei, and M. Bourennane, Experimental quantum secret sharing using telecommunication fiber, Phys. Rev. A 78, 062307 (2008). * (11) K. Senthoor and P. K. Sarvepalli, Communication efficient quantum secret sharing, Phys. Rev. A 100, 052313 (2019). * (12) S. M. Lee, S. W. Lee, H. Jeong, and H. S. Park, Quantum Teleportation of Shared Quantum Secret, Phys. Rev. Lett. 124, 060501 (2020). * (13) W. Zhang, D. S. Ding, Y. B. Sheng, L. Zhou, B. S. Shi, and G. C. Guo, Quantum Secure Direct Communication with Quantum Memory, Phys. Rev. Lett. 118, 220501 (2017). * (14) Z. W. Cao, L. Wang, K. X. Liang, G. Chai, and J. Y. Peng, Continuous-Variable Quantum Secure Direct Communication Based on Gaussian Mapping, Phys. Rev. Applied 16, 024012 (2021). * (15) Y. B. Sheng, L. Zhou, and G. L. Long, One-step quantum secure direct communication, Sci. Bull. 67, 367 (2022). * (16) B. N. Simon, C. M. Chandrashekar, and S. Simon, Hamilton’s turns as a visual tool kit for designing single-qubit unitary gates, Phys. Rev. A 85, 022323 (2012). * (17) T. J. Wang, Y. Lu, and G. L. Long, Generation and complete analysis of the hyperentangled Bell state for photons assisted by quantum-dot spins in optical microcavities, Phys. Rev. A 86, 042337 (2012). * (18) J. T. Barreiro, N. K. Langford, N. A. Peters, and P. G. Kwiat, Generation of Hyperentangled Photon Pairs, Phys. Rev. Lett. 95, 260501 (2005). * (19) M. Barbieri, F. D. Martini, P. Mataloni, G. Vallone, and A. Cabello, Enhancing the Violation of the Einstein-Podolsky-Rosen Local Realism by Quantum Hyperentanglement, Phys. Rev. Lett. 97, 140407 (2006). * (20) M. Barbieri, G. Vallone, P. Mataloni, and F. D. Martini, Complete and deterministic discrimination of polarization Bell states assisted by momentum entanglement, Phys. Rev. A 75, 042317 (2007). * (21) A. Mair, A. Vaziri, G. Weihs, and A. Zeilinger, Entanglement of the orbital angular momentum states of photons, Nature (London) 412, 313 (2001). * (22) D. Bhatti, J. von Zanthier, and G. S. Agarwal, Entanglement of polarization and orbital angular momentum, Phys. Rev. A 91, 062303 (2015). * (23) J. T. Barreiro, T. C. Wei, and P. 
G. Kwiat, Beating the channel capacity limit for linear photonic superdense coding, Nat. Phys. 4, 282 (2008). * (24) B. C. Ren, G. Y. Wang, and F. G. Deng, Universal hyperparallel hybrid photonic quantum gates with dipole-induced transparency in the weak-coupling regime, Phys. Rev. A 91, 032328 (2015). * (25) T. Li and G. L. Long, Hyperparallel optical quantum computation assisted by atomic ensembles embedded in double-sided optical cavities, Phys. Rev. A 94, 022343 (2016). * (26) B. C. Ren and F. G. Deng, Robust hyperparallel photonic quantum entangling gate with cavity QED, Opt. Express 25, 10863 (2017). * (27) T. C. Wei, J. T. Barreiro, and P. G. Kwiat, Hyperentangled Bell-state analysis, Phys. Rev. A 75, 060305 (2007). * (28) Y. B. Sheng, F. G. Deng, and G. L. Long, Complete hyperentangled-Bell-state analysis for quantum communication, Phys. Rev. A 82, 032318 (2010). * (29) Q. Liu and M. Zhang, Generation and complete nondestructive analysis of hyperentanglement assisted by nitrogen-vacancy centers in resonators, Phys. Rev. A 91, 062321 (2015). * (30) G. Y. Wang, Q. Ai, B. C. Ren, T. Li, and F. G. Deng, Error-detected generation and complete analysis of hyperentangled Bell states for photons assisted by quantum-dot spins in double-sided optical microcavities, Opt. Express 24, 28444 (2016). * (31) X. J. Zhou, W. Q. Liu, H. R. Wei, Y. B. Zheng, and F. F. Du, Deterministic and complete hyperentangled Bell states analysis assisted by frequency and time interval degrees of freedom, Front. Phys. (Beijing) 17, 41502 (2022). * (32) G. Y. Wang, Q. Liu, and F. G. Deng, Hyperentanglement purification for two-photon six-qubit quantum systems, Phys. Rev. A 94, 032319 (2016). * (33) T. J. Wang, S. C. Mi, and C. Wang, Hyperentanglement purification using imperfect spatial entanglement, Opt. Express 25, 2969 (2017). * (34) F. F. Du, Y. T. Liu, Z. R. Shi, Y. X. Liang, J. Tang, and J. Liu, Efficient hyperentanglement purification for three-photon systems with the fidelity-robust quantum gates and hyperentanglement link, Opt. Express 27, 27046 (2019). * (35) S. P. Walborn, S. Pádua, and C. H. Monken, Hyperentanglement-assisted Bell-state analysis, Phys. Rev. A 68, 042313 (2003). * (36) C. Schuck, G. Huber, C. Kurtsiefer, and H. Weinfurter, Complete Deterministic Linear Optics Bell State Analysis, Phys. Rev. Lett. 96, 190501 (2006). * (37) B. P. Williams, R. J. Sadlier, and T. S. Humble, Superdense Coding over Optical Fiber Links with Complete Bell-State Measurements, Phys. Rev. Lett. 118, 050501 (2017). * (38) C. H. Bennett, G. Brassard, S. Popescu, B. Schumacher, J. A. Smolin, and W. K. Wootters, Purification of Noisy Entanglement and Faithful Teleportation via Noisy Channels, Phys. Rev. Lett. 76, 722 (1996). * (39) Y. B. Sheng and F. G. Deng, One-step deterministic polarization-entanglement purification using spatial entanglement, Phys. Rev. A 82, 044305 (2010). * (40) X. H. Li, Deterministic polarization-entanglement purification using spatial entanglement, Phys. Rev. A 82, 044304 (2010). * (41) F. Riera-Sàbat, P. Sekatski, A. Pirker, and W. Dür, Entanglement-Assisted Entanglement Purification, Phys. Rev. Lett. 127, 040502 (2021). * (42) C. X. Huang, X. M. Hu, B. H. Liu, L. Zhou, Y. B. Sheng, C. F. Li, and G. C. Guo, Experimental one-step deterministic polarization entanglement purification, Sci. Bull. 67, 593 (2022). * (43) C. H. Bennett, H. J. Bernstein, S. Popescu, and B. Schumacher, Concentrating partial entanglement by local operations, Phys. Rev. A 53, 2046 (1996). * (44) T. Yamamoto, M.
Koashi, and N. Imoto, Concentration and purification scheme for two partially entangled photon pairs, Phys. Rev. A 64, 012304 (2001). * (45) Z. Zhao, T. Yang, Y. A. Chen, A. N. Zhang, and J. W. Pan, Experimental Realization of Entanglement Concentration and a Quantum Repeater, Phys. Rev. Lett. 90, 207901 (2003). * (46) M. Yang, Y. Zhao, W. Song, and Z. L. Cao, Entanglement concentration for unknown atomic entangled states via entanglement swapping, Phys. Rev. A 71, 044302 (2005). * (47) Y. B. Sheng, F. G. Deng, and H. Y. Zhou, Nonlocal entanglement concentration scheme for partially entangled multipartite systems with nonlinear optics, Phys. Rev. A 77, 062325 (2008). * (48) Y. B. Sheng, L. Zhou, and S. M. Zhao, Efficient two-step entanglement concentration for arbitrary $W$ states, Phys. Rev. A 85, 042302 (2012). * (49) H. Zhang and H. B. Wang, Entanglement concentration of microwave photons based on the Kerr effect in circuit QED, Phys. Rev. A 95, 052314 (2017). * (50) S. S. Chen, H. Zhang, Q. Ai, and G. J. Yang, Phononic entanglement concentration via optomechanical interactions, Phys. Rev. A 100, 052306 (2019). * (51) B. C. Ren, F. F. Du, and F. G. Deng, Hyperentanglement concentration for two-photon four-qubit systems with linear optics, Phys. Rev. A 88, 012302 (2013). * (52) X. H. Li and S. Ghose, Hyperentanglement concentration for time-bin and polarization hyperentangled photons, Phys. Rev. A 91, 062302 (2015). * (53) B. C. Ren, H. Wang, F. Alzahrani, A. Hobiny, and F. G. Deng, Hyperentanglement concentration of nonlocal two-photon six-qubit systems with linear optics, Ann. Phys. (New York) 385, 86 (2017). * (54) H. Wang, B. C. Ren, A. H. Wang, A. Alsaedi, T. Hayat, and F. G. Deng, General hyperentanglement concentration for polarization-spatial-time-bin multi-photon systems with linear optics, Front. Phys. (Beijing) 13, 130315 (2018). * (55) C. Y. Li and Y. Shen, Asymmetrical hyperentanglement concentration for entanglement of polarization and orbital angular momentum, Opt. Express 27, 13172 (2019). * (56) X. H. Li and S. Ghose, Efficient hyperconcentration of nonlocal multipartite entanglement via the cross-Kerr nonlinearity, Opt. Express 23, 3550 (2015). * (57) B. C. Ren and G. L. Long, General hyperentanglement concentration for photon systems assisted by quantum dot spins inside optical microcavities, Opt. Express 22, 6547 (2014). * (58) B. C. Ren and F. G. Deng, Hyperentanglement purification and concentration assisted by diamond NV centers inside photonic crystal cavities, Laser Phys. Lett. 10, 115201 (2013). * (59) B. C. Ren and G. L. Long, Highly efficient hyperentanglement concentration with two steps assisted by quantum swap gates, Sci. Rep. 5, 16444 (2015). * (60) Q. Ai, Y. Li, H. Zheng, and C. P. Sun, Quantum anti-Zeno effect without rotating wave approximation, Phys. Rev. A 81, 042116 (2010). * (61) B. X. Wang, M. J. Tao, Q. Ai, T. Xin, N. Lambert, D. Ruan, Y. C. Cheng, F. Nori, F. G. Deng, and G. L. Long, Efficient quantum simulation of photosynthetic light harvesting, npj Quantum Inf. 4, 52 (2018). * (62) X. Y. Long, W. T. He, N. N. Zhang, K. Tang, Z. D. Lin, H. F. Liu, X. F. Nie, G. R. Feng, J. Li, T. Xin, Q. Ai, and D. W. Lu, Entanglement-Enhanced Quantum Metrology in Colored Noise by Quantum Zeno Effect, Phys. Rev. Lett. 129, 070502 (2022).
# Towards Behavioral-aware Crowd Management System Yixin Zhang Carnegie Mellon University4616 Henry StreetPittsburghUSA <EMAIL_ADDRESS>, Tianyu Zhao University of California, IrvineIrvine, CA 92697IrvineUSA<EMAIL_ADDRESS>and Salma Elmalaki University of California, IrvineIrvine, CA 92697IrvineUSA<EMAIL_ADDRESS> (2023) ###### Abstract. Instances of casualties resulting from large crowds persist, highlighting the existing limitations of current crowd management practices. One notable drawback is the insufficient provision for disadvantaged individuals who may require additional time to evacuate due to their slower running speed. Moreover, the existing escape strategies may fall short of ensuring the safety of all individuals during a crowd surge. To address these pressing concerns, this paper proposes two crowd management methodologies. Firstly, we advocate for the implementation of a fair evacuation strategy following a surge event, which takes into account the diverse needs of all individuals, ensuring inclusivity and mitigating potential risks. Secondly, we propose a preventative approach involving the adjustment of attraction locations and switching between stage performances in large-crowded events to minimize the occurrence of surges and enhance crowd dispersion. To assess the effectiveness of our proposals, we used high-fidelity crowd management simulators. Our findings demonstrate the positive impact of the fair evacuation strategy on safety measures and inclusivity, which increases fairness by $41.8\%$ on average. Furthermore, the adjustment of attraction locations and stage performances has shown a significant reduction in the incidence of surges by $34\%$ on average, thereby enhancing overall crowd safety. crowd management, human-in-the-loop, fairness, surge prevention ††copyright: acmcopyright††journalyear: 2023††doi: XXXXXXX.XXXXXXX††conference: Under Review; ; ## 1\. Introduction Crowd surge incidents, which occur when a large number of people enter or exit a confined space, pose significant risks to public safety (Illiyas et al., 2013). Recent stampede incidents in Itaewon (Sharma et al., 2023) and at the Astroworld Festival (The New York Times, 2021) highlight the need for improved crowd management methods. In Itaewon, trapped individuals struggled to move or breathe, leading to $156$ deaths and $170$ injuries. At the Astroworld Festival, the crush near the stage caused $10$ deaths and multiple injuries. Limited entry points, uneven terrain, and unexpected choke points increase the risk of stampedes. Event organizers must implement strategies, such as careful scheduling, venue planning, sufficient staffing, and clear communication, to ensure attendees' safety and prevent property damage. These tragedies emphasize the importance of effective crowd management at mass gathering events. A "Smart Crowd Management and Control System" (CMS) is tasked with monitoring, directing, and managing large groups of people, with an eye toward safety, efficiency, and satisfaction. CMS requires a diverse range of knowledge, including engineering, technology, and understanding of crowd behavior (Sharma et al., 2018). The goal is to prevent crowd surge incidents through meticulous planning and execution. Effective crowd management involves multiple stages: pre-event planning, event monitoring and control, post-event feedback, and improvement. This holistic approach ensures continuous enhancement in crowd management strategies for future events.
However, we believe that each of these approaches has different effects on healthy individuals, people with disabilities, children, pregnant women, and neurodivergent communities. Therefore, while managing the crowd flow, a CMS should not only aim to maximize efficiency but also ensure fairness in distributing the adverse effects of the CMS control actions among different individuals. This vision is attainable with the advancement of sensor technologies that estimate the human state through wearable devices, and of decision-making algorithms that provide trade-offs between system performance, fairness, and privacy in multi-human environments (Elmalaki, 2021; Taherisadr et al., 2023). This technological leap in human sensing and decision-making should be exploited in CMS to ensure inclusivity in crowd management. In this paper, we advocate for implementing a fair evacuation strategy and prevention approaches that account for the diverse needs of all individuals. By embracing an inclusive approach, we can provide the necessary time and assistance to disadvantaged individuals, helping them to evacuate safely and efficiently. Through thoughtful planning and coordination, we can mitigate potential risks and minimize casualties. The contribution of this paper is twofold: (1) fair evacuation strategy: we investigated the correlation between the assignment of escape exits and the evacuation efficiency and fairness, measured in time-to-escape, particularly after a crowd surge incident; and (2) prevention approach: we performed experiments on timely adjustments of stage locations in large gathering events to prevent crowd surge incidents. We utilize high-fidelity CMS simulators, including Vadere (Kleinmeier et al., 2019) and NetLogo (Tisue and Wilensky, 2004), to simulate various crowd scenarios. ## 2\. Background & Related Work Crowd surges pose a significant risk when a large number of individuals attempt to enter or exit a confined area, leading to increased pressure and potential danger. In tightly packed crowds, people lose control of their movements and face difficulties breathing. The lack of space for recovery makes stumbling or falling particularly hazardous, putting individuals at risk of suffocation and injuries from being crushed. The probability of a surge occurring is closely related to crowd density. When there are 4-5 people per square meter, the crowd remains relatively safe, with enough space for individuals to make movement decisions. However, when the number exceeds 6 per square meter, the limited available space forces tight packing and diminishes individual control, significantly increasing the likelihood of a surge (Abuarafah et al., 2012). A single stumble or jolt within the crowd can trigger a chain reaction, creating voids that disrupt the crowd's equilibrium. Subsequently, more people stumble into these voids, setting off a domino effect and generating additional voids. This interplay of forces can cause abrupt collapses, intensifying pressure and chaos within the crowd, potentially leading to injuries or fatalities if not managed properly (Aalami and Kattan, 2020). Throughout history, large-scale stampedes have taken place worldwide, leading to severe loss of life and property damage. On October 29, 2022, a Halloween event occurred in Seoul, South Korea, attracting tens of thousands of costumed attendees to the Itaewon district. This marked the first unrestricted Halloween celebration in over two years due to COVID-19 lockdowns.
The massive crowd in the narrow streets, coupled with limited entry and exit points, created a dangerous situation. Videos from that night show trapped individuals struggling to move or breathe, fueling panic that spiraled out of control. This catastrophe led to one of South Korea's worst stampede disasters, with 156 deaths and 170 crush injuries (Sharma et al., 2023). Similarly, on April 30, 2021, a devastating crowd surge occurred at Mount Meron, Israel, during an annual pilgrimage on the Jewish holiday of Lag BaOmer. The event attracted around 100,000 attendees and resulted in 45 deaths and approximately 150 injuries, making it Israel's deadliest civilian disaster. The tragedy unfolded as people exited a mountainside compound, moving through a wet, sloping passageway towards a staircase. Witnesses described attendees tripping and slipping, while others unknowingly continued, causing trampling, crushing, and asphyxiation. Israeli media reported that COVID-19 precautions created unanticipated choke points. Additionally, bonfires were not lit simultaneously, leading to people attending multiple lightings and increasing the number of attendees. These tragic incidents highlight the importance of crowd management at mass gathering events. The sheer number of people in a confined space can create a dangerous situation that can quickly spiral out of control, resulting in stampedes and crush injuries. Factors such as limited entry and exit points, uneven terrain, and unexpected choke points can exacerbate the risk of a stampede. Therefore, it is crucial for event organizers and authorities to implement effective crowd management strategies to prevent such incidents. This comprises actions such as appropriate scheduling of event timing, meticulous event venue planning and design, sufficient staffing, and unambiguous communication and signage. Effective crowd management not only ensures the safety and well-being of attendees but also helps to prevent damage to property and infrastructure. In light of the recent stampede incidents, it is clear that crowd management should be given the utmost importance in planning and executing mass gathering events. Crowd management is a multifaceted field that necessitates knowledge of engineering and technology, as well as comprehension of crowd behavior and crowd flow management, encompassing psychological and sociological aspects (Sharma et al., 2018). Through meticulous planning and execution, crowd management aims to prevent crowd incidents (Martella et al., 2017). Effective crowd management is a holistic process that includes several stages. It begins with meticulous planning before the event, considering all potential scenarios and preparing for them. During the event, the crowd needs to be closely monitored and controlled to ensure everyone's safety. After the event, it is important to gather feedback to understand what worked well and what did not. Finally, these insights and lessons learned should be reported and used to improve crowd management strategies for future events. This approach ensures continuous improvement in managing crowds effectively (Sharma et al., 2018). In the pre-event planning stage, two primary technologies play a crucial role: crowd modeling and simulation, and social and web data mining. Crowd modeling and simulation enable the creation of virtual crowd scenarios, which serve as testing grounds for various crowd management strategies and their effectiveness.
On the other hand, social and web data mining provides valuable insights into crowd demographics, behaviors, and trends. These insights help inform decision-making and enable the customization of crowd-management strategies to suit specific audience profiles. By leveraging these technologies in pre-event planning, crowd management can be approached with a greater level of knowledge, strategy, and effectiveness (Sharma et al., 2018). As for the in-event control period, the acquisition of crowd data during monitoring, decision-making based on data analysis, and the implementation of crowd control measures are three key steps for success (Sharma et al., 2018). The primary goal of crowd control during the event is to detect instances of mass panic and respond quickly to dangerous situations. Various existing studies have proposed numerous methods for detecting crowd density to prevent surge incidents or to enforce social distancing. For instance, infrared thermal video sequences have been employed to monitor and estimate the density of crowds in real-time during large-scale public events (Abuarafah et al., 2012). In addition, given the widespread availability of Wi-Fi, it has been used to monitor crowd behavior and interaction (Zhou et al., 2020; Weppner et al., 2016). Post-event feedback is crucial for preventing future incidents, and in this regard, social media data plays a pivotal role. A situational awareness system can be enhanced by integrating feedback from the crowd and crisis-related information coming from social media. For example, in a system known as HADRian, social media data was scrutinized after the Boston Marathon bombing in April 2013 to identify any unexploded or additional bombs (Ulicny et al., 2013). Another example is Ushahidi, a versatile data collection, management, and visualization tool that enables data collection from multiple sources such as SMS, email, web, Twitter, and RSS, and offers robust features for post management and triaging through filters and workflows (Ushahidi, [n. d.]). Systems built on Ushahidi have been implemented worldwide in numerous situations, for instance, to oversee disaster relief efforts after the Haiti Earthquake in January 2010 (Yuan et al., 2013). Moreover, the outbreak of COVID-19 necessitated new real-time approaches to crowd monitoring and management for social distancing. Furthermore, Virtual Reality (VR) technology has been applied to replicate the crucial parts of the 2010 Love Parade tragedy. Analyzing the emotional responses and stress levels of participants helps decision-makers gain enhanced insights into crowd management strategies for comparable occurrences (Zhao, 2016). Additionally, some studies have devised crowd management approaches that consider balancing crowd density and movement efficiency. One such study suggested a method that combines Geographic Information Systems and Agent-Based Modeling to simulate the movement of pilgrims during the Hajj days. The study simulated and evaluated five different scheduling plans for pilgrim movement, and the results allow Hajj authorities to make informed decisions about the most appropriate scheduling plans in terms of safety and effectiveness (Yaagoubi et al., 2023). ## 3\. Evacuation Fairness Experiment The importance of fairness in evacuating crowds lies in achieving a balanced distribution of evacuees across routes, ensuring equitable waiting times for different groups.
Vulnerable groups, such as the elderly or pregnant women, are often overlooked in standard evacuation plans due to their physical limitations (United Nations, 2004). The motivation for this section is to propose evacuation plans that ensure similar evacuation times for all individuals, regardless of their physical condition. ### 3.1. Modeling and Simulation Our experimental goal is to explore the design of evacuation routes for different groups of people in various crowd-gathering locations, aiming to achieve both high evacuation efficiency and fairness towards vulnerable populations. In real-life scenarios, the different running speeds of vulnerable groups and normal individuals can influence evacuation times, potentially leading to hazards like pushing or tripping. To address this, we propose a strategy where a dedicated evacuation exit is designated exclusively for vulnerable groups, guided by mobile notifications or other means, while other individuals can use the nearest exit. We hypothesize that this design can reduce overall evacuation time, especially for vulnerable groups, ensuring efficiency and fairness simultaneously. To validate our hypothesis, we proposed $3$ strategies across multiple crowded event scenarios to assign the evacuation gate (a minimal sketch of these assignment rules is given after the scenario list in Section 3.2): * • Random gate assignment (RGA): Individuals evacuate by randomly selecting a gate without any specific guidance. * • Vulnerable-people exclusive gate assignment (VEGA): Vulnerable individuals are directed to a designated gate exclusively, while normal people are assigned the closest gate. * • Closest gate assignment (CGA): All individuals are assigned the closest gate regardless of their physical state. We used Vadere (Kleinmeier et al., 2019) to simulate various crowd event setups, each of which we call a map. Four evacuation exits were placed at the corners of the map. On the map, there are $1363$ people, consisting of $340$ vulnerable people and $1023$ healthy normal people. The average slow running speed of healthy and young individuals (aged 20-45 years) is $\approx$ 5.4 miles per hour or $\approx$ 2.4 meters per second (Lung et al., 2021). Hence, we set the average speed of normal people to be $1.0-1.3$ meters per timestep. Each timestep represents $0.48$ seconds. Vulnerable people, such as the elderly, move at a slower pace. We exploited the Optimal Steps Model (OSM) in Vadere, which incorporates the psychological principle of "social distance" into its mathematical framework, meaning that agents strive to avoid encroaching on others' personal or intimate space and to prevent physical contact (Kleinmeier et al., 2019). Figure 1. The four scenarios of crowd distribution in Vadere, where blue dots represent individuals, and orange blocks represent exit locations. (a): Center crowd gathering; (b): Non-center crowd gathering; (c): Evenly crowd dispersing; (d): Unevenly crowd dispersing ### 3.2. Evaluation We simulated various scenarios for crowd distribution on the map illustrated in Figure 1: * • Scenario 1: Center crowd gathering: Serving as an exemplar for an event with a setup focused on the center stage. * • Scenario 2: Non-center crowd gathering: Serving as an exemplar for an event with a setup focused on a non-center stage. * • Scenario 3: Evenly crowd dispersing: Representing a carnival event spread across the entire map. * • Scenario 4: Unevenly crowd dispersing: Representing a carnival event with varying crowd densities across areas: $75\%$ in the top-left, $10\%$ in the top-right, $10\%$ in the bottom-left, and $5\%$ in the bottom-right.
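The gate-assignment logic of the three strategies reduces to a few lines. The sketch below is illustrative only: the function and variable names are ours rather than Vadere's, and it assumes 2-D positions and a designated index for the vulnerable-only exit.

```python
import random
from math import dist  # Euclidean distance between 2-D points (Python 3.8+)

# Hypothetical sketch of the RGA / VEGA / CGA assignment rules.
# `gates` is a list of exit positions; `vip_gate` is the index of the
# vulnerable-only exit assumed under VEGA.
def assign_gate(person_pos, is_vulnerable, gates, strategy, vip_gate=0):
    closest = min(range(len(gates)), key=lambda g: dist(person_pos, gates[g]))
    if strategy == "RGA":                      # random gate, no guidance
        return random.randrange(len(gates))
    if strategy == "VEGA" and is_vulnerable:   # vulnerable -> dedicated exit
        return vip_gate
    return closest                             # CGA, and normal people under VEGA

# Example: a vulnerable person near gate 3 is still routed to gate 0 under VEGA.
gates = [(0, 0), (0, 50), (50, 0), (50, 50)]
assert assign_gate((48, 48), True, gates, "VEGA") == 0
assert assign_gate((48, 48), True, gates, "CGA") == 3
```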
Scenario | Strategy | $(V,N,Ratio)$ | $all$ | $(G1,G2,G3,G4)$
---|---|---|---|---
$S_{1}$ | RGA | (149.9, 80.9, 0.54) | 98.1 | (201, 187, 195, 195)
$S_{1}$ | VEGA | (154.3, 72.4, 0.47) | 92.8 | (198, 99, 88, 98)
$S_{1}$ | CGA | (125.7, 68.3, 0.54) | 82.6 | (143, 144, 146, 150)
$S_{2}$ | RGA | (153.4, 83.8, 0.55) | 101.1 | (143, 193, 250, 200)
$S_{2}$ | VEGA | (97.0, 83.1, 0.86) | 86.5 | (140, 105, 115, 103)
$S_{2}$ | CGA | (132.1, 70.9, 0.54) | 86.2 | (90, 153, 204, 158)
$S_{3}$ | RGA | (138.0, 70.5, 0.51) | 87.3 | (263, 254, 227, 257)
$S_{3}$ | VEGA | (137.4, 33.0, 0.24) | 59.0 | (262, 61, 63, 63)
$S_{3}$ | CGA | (64.7, 33.5, 0.52) | 41.3 | (122, 116, 129, 112)
$S_{4}$ | RGA | (140.7, 69.5, 0.49) | 87.3 | (226, 233, 258, 250)
$S_{4}$ | VEGA | (91.2, 58.3, 0.64) | 66.5 | (260, 93, 60, 90)
$S_{4}$ | CGA | (79.0, 41.8, 0.53) | 51.1 | (132, 122, 120, 116)

Table 1. Comparison of average evacuation times for vulnerable ($V$) and normal ($N$) people and across all people ($all$), the fairness index $Ratio=\frac{N}{V}$, and the gate times $(G1,G2,G3,G4)$ using $3$ different strategies under $4$ different scenarios ($S_{1},\dots,S_{4}$). Time is measured in simulation steps. We measure the fairness of the time-to-evacuation across all individuals using the fairness index (FI), defined as the ratio of normal healthy people's time-to-evacuation to vulnerable people's time-to-evacuation; the higher the FI value, the fairer the evacuation. Table 1 presents the results of the average time-to-evacuation and FI using the three different strategies. The gate time $(G1,G2,G3,G4)$ shows the time at which the last person exits through each gate, providing insight into gate utilization. RGA shows the worst time-to-evacuation for all people in all scenarios. In scenarios 1 and 3, CGA demonstrates the best average time-to-evacuation for all people. In scenarios 2 and 4, VEGA achieves the highest FI of $86\%$ and $64\%$, respectively, with a slightly slower average time-to-evacuation compared to CGA for all individuals. #### Takeaways: Our preliminary experiment aims to investigate whether a specific evacuation strategy for vulnerable groups, such as VEGA, enhances fairness while maintaining overall efficiency. In crowd scenarios 1 and 3, the presence of a vulnerable-exclusive gate may not improve fairness or efficiency when people initially gather around the center of each exit. Nonetheless, when a considerable number of vulnerable individuals congregate near a single exit, such as in scenarios 2 and 4, the VEGA strategy leads to a fairness improvement of $41.8\%$ on average, compared to both RGA and CGA. In scenario 2, the VEGA strategy demonstrates a fairness increase of $56\%$ and $59\%$ compared to RGA and CGA, while in scenario 4, it shows a fairness improvement of $31\%$ and $21\%$ compared to RGA and CGA. VEGA can decrease the time gap between vulnerable groups and others, and the average time-to-evacuation either reduces or remains relatively unchanged. In real-world scenarios, it is anticipated that some individuals may not adhere to the recommended guidelines set by the organizers or the automated crowd management system. Therefore, a comprehensive study into the social psychological dynamics of the crowd becomes imperative to better understand and address such situations.
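As a quick arithmetic cross-check (ours, not part of the paper's tooling), the FI values from Table 1 reproduce the per-scenario fairness gains quoted above; using the two-decimal FI values, the average comes out near $41.7\%$, consistent with the $41.8\%$ reported from unrounded data.

```python
# FI values for scenarios 2 and 4, read off Table 1 (rounded to two decimals).
fi = {2: {"RGA": 0.55, "VEGA": 0.86, "CGA": 0.54},
      4: {"RGA": 0.49, "VEGA": 0.64, "CGA": 0.53}}

gains = [fi[s]["VEGA"] / fi[s][b] - 1.0 for s in fi for b in ("RGA", "CGA")]
print([round(100 * g) for g in gains])          # -> [56, 59, 31, 21] percent
print(round(100 * sum(gains) / len(gains), 1))  # -> 41.7 (vs. 41.8 reported)
```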
## 4\. Preventative Strategy Our second experiment is motivated by the tragic Astroworld Festival accident in 2021 (The New York Times, 2021). The festival featured two stages: the main stage for the main performance and a secondary stage with performances by other artists throughout the day. During the concert night, after one performance at the secondary stage, the audience began moving towards the already crowded area near the main stage, resulting in a surge and crush near the stage, as depicted in Figure 2. Figure 2. The illustration of the 2021 Astroworld Festival accident. Our prevention strategy suggests alleviating congestion during stage performances by switching between stages at designated time intervals. This approach aims to decrease the likelihood of overcrowding and improve crowd management throughout the event. To determine the optimal switching points between stages, we propose three metrics: the Panic State and the Surge State per individual, and the Crowded State for each subarea. We explain how these metrics are determined in Section 4.1. We developed a tool using NetLogo, an agent-based programmable modeling environment (Tisue and Wilensky, 2004; Zia and Ferscha, 2020), to configure various user-definable attributes, such as position and walking speed. Figure 3. Our proposed simulator tool interface using NetLogo. The top-left and the bottom-right red rectangles in the simulation map represent two stages. The upper yellow dot and lower blue dot symbolize the restroom and bar, respectively. The blue area represents patches close to the left stage, while the green area represents patches close to the right stage. Figure 4. Three different stage setups: Map A, Map B, and Map C. ### 4.1. Modeling and Simulation We simulate a crowded scene with two stages in a two-dimensional square world composed of $51\times 51$ patches. Each patch is represented by an xy-coordinate point, with the origin $(0,0)$ located at the bottom left. The map is divided into multiple subareas using a $5\times 5$ grid of patches, with each subarea containing $25$ patches. Each human, depicted as a triangular shape, resides on these patches and utilizes information from the underlying patch and the neighboring patches and people to make decisions. The scene includes two stages, a bar, and a restroom. The stages are positioned at the far left and right sides of the scene, while the bar and restroom are located at the top and bottom sides of the scene. Figure 3 provides a visual representation of the map. To make the simulation scenes more realistic, the simulation considers four factors: the speed of humans; psychological factors, namely the comfort zone and the preferred distance from the stage performance; the frequency and duration of visits to the bar or restroom, which can be changed in the interface; and the hesitation time. The first factor is speed. We use NetLogo's random primitive to set the speed of half of the agents to 1 step per time-step and the speed of the other half to 2 steps per time-step. The second factor is the comfort zone and the preferred distance. We take into account that everyone has a different level of comfort when it comes to the distance from the stage. For example, some people like to watch performances right by the stage, while others are satisfied with watching from a slightly farther distance. Therefore, in the NetLogo code, we randomly set the comfort distance of each agent between 1 and 10 meters. The third factor is bar and restroom visits: people in the scene randomly go to the bar and the restroom. The total time spent and the frequency of going to the bar or restroom can be set in the NetLogo interface. Our current setting is that 40% of the agents go to the bar or restroom every 50 time-steps, and the duration of each trip is 50 time-steps.
Additionally, we have included adjustment bars in the NetLogo interface so that users can set the frequency and duration of bar or restroom visits according to their requirements. The last factor is hesitation time. We consider the variation in the time agents take to decide whether to switch stages after a performance has ended. Some agents may move to the other stage immediately, while others may stay around the original stage for some time before moving. To capture this, we assigned a random hesitation time between 1 and 20 time-steps to each agent in the NetLogo code. Our prevention strategy focuses on determining the best time to switch stages, considering various parameters. To achieve this, we have developed a simulation tool that can evaluate different scenarios and parameters, allowing us to estimate the most suitable moment for switching the performance to a new stage. We propose using a set of metrics to determine the status of individuals and subareas during the event (a minimal sketch of the resulting detection logic is given at the end of this subsection): * • Panic state: Once an individual is blocked and stuck on the way to the restroom or bar for a time longer than a particular threshold (the panic threshold (PT)), the individual enters the panic state. * • Surge state: Once an individual is blocked and stuck on the way to the stage for a time longer than a particular threshold (the surge threshold (ST)), the individual enters the surge state. * • Crowded state: A subarea enters the crowded state when over $70\%$ of its patches are occupied and at least one person there is in a panic or surge state. * • Switch index (SI): The switch index (SI) is a threshold on the time for which a subarea is continuously in the crowded state before the performance is switched to the other stage. When a subarea stays in the crowded state for longer than the switch index (SI) and its two neighboring subareas are also in the crowded state (not necessarily for longer than the SI), this indicates a critical surge situation. At this point, the currently performing stage receives an instruction to stop, and the performance switches to the other stage.
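The detection rule above reduces to two small predicates. The sketch below is illustrative Python rather than the tool's actual NetLogo source; all names follow the definitions just given, and the $70\%$ occupancy cutoff is the one stated for the crowded state.

```python
# Hypothetical predicates implementing the crowded-state and stage-switching
# rules defined in the list above (names are ours, not NetLogo identifiers).

def is_crowded(occupied_patches: int, total_patches: int,
               any_panic_or_surge: bool) -> bool:
    # A subarea is crowded when >70% of its patches are occupied and at least
    # one person in it is in the panic or surge state.
    return occupied_patches / total_patches > 0.70 and any_panic_or_surge

def should_switch_stage(crowded_ticks: int, switch_index: int,
                        left_neighbor_crowded: bool,
                        right_neighbor_crowded: bool) -> bool:
    # Switch the performance when a subarea has stayed crowded for more than
    # SI ticks and both neighboring subareas are currently crowded as well.
    return (crowded_ticks > switch_index
            and left_neighbor_crowded and right_neighbor_crowded)

# Example: crowded for 12 ticks with SI = 10 and both neighbors crowded -> switch.
assert should_switch_stage(12, 10, True, True)
```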
Figure 5. Frequency of switching stage (F) and average Panic/Surge (APS) for four different Switch Index (SI) values (10, 20, 30, 40) using different parameters. (a): PN500, BRF50, PT10, ST30; (b): PN750, BRF50, PT10, ST30; (c): PN500, BRF30, PT10, ST30; (d): PN500, BRF50, PT10, ST40; (e): PN500, BRF50, PT20, ST30. ### 4.2. Evaluation We first explored the correlation between stage positions and the likelihood of surge accidents. The Astroworld tragedy highlighted the significant impact of the stages' close proximity. Figure 4 illustrates three maps with different stage positions: Map A features stages placed directly facing each other, Map B has one stage in the bottom-right corner and another in the middle of the left side, while Map C positions the two stages at the top-left and bottom-right, respectively. In each map, the speed of half the people is double that of the other half. Each individual's comfort distance is randomly set between $1$ and $10$ units of patch size, and their hesitation time is randomly assigned between $1$ and $20$ time-steps. The Switch Index (SI) is set at $10$ time-steps. We let $40\%$ of the people go to the bar or restroom, and each trip lasts for $50$ time-steps. Additionally, we have four adjustable parameters: the total number of people (PN), the frequency of bar/restroom visits (BRF), the Panic Threshold (PT), and the Surge Threshold (ST). The default parameters are set as follows: PN $=500$, BRF $=50$, PT $=10$, and ST $=30$. We utilize several metrics to assess crowd behavior: the frequency of stage switching (F) and the Average Panic/Surge (APS). The APS metric reflects the average number of people in the panic/surge state per time-step. These metrics collectively help us understand and evaluate the effectiveness of our strategies and response measures. Our experiments showed that in Map A, F is $0.010$ and APS is $0.85$. In Map B, F is $0.009$ and APS is $1.05$. In Map C, F is $0.007$ and APS is $0.62$. On average, Map C reduced F by $26\%$ and APS by $34\%$. The results for Map C show that increasing the distance between the two stages can reduce both the likelihood of crowd surges and the stage-switching frequency. Therefore, the subsequent experiments use Map C to explore the effect of choosing different values of SI on F and APS under different parameters, including: * • Total number of people (PN): PN is $500$ in Figure 5(a) and increases to $750$ in Figure 5(b), showcasing a more crowded environment. * • Frequency of bar/restroom use (BRF): BRF is $50$ in Figure 5(a) and decreases to $30$ in Figure 5(c), indicating limited availability of restroom and bar facilities. * • Panic threshold (PT) and Surge threshold (ST): Different values of PT and ST represent varying audience compositions. Figure 5(a) has PT $=10$ and ST $=30$; ST increases to $40$ in Figure 5(d), and PT increases to $20$ in Figure 5(e). In all of these setups, we observed that the value of SI provides a tradeoff between the frequency of switching stages (F) and the average panic/surge (APS). Indeed, this tool can provide insights to event organizers to balance the frequency of switching the stages. #### Takeaways: Our experiments have provided insights into the correlation between stage positions and the crowd state. The analysis of SI under different parameters emphasizes the significance of adapting the SI according to the crowd state and the availability of facilities. In real situations, to estimate the individual state (panic/surge state), people's intention to move can be detected using wearable sensors, such as accelerometers, gyroscopes, and heart-rate monitors. These sensors enable the estimation of gait parameters, making them useful for determining individuals' states. ## 5\. Future Work & Conclusion Crowd evacuation and surge prevention represent critical research directions due to their significant implications for public safety, urban planning, and smart building systems. As urbanization and population density continue to rise, ensuring efficient and safe evacuation procedures during emergencies or crowded events becomes increasingly challenging. Our research focuses on addressing crowd management challenges by exploring both evacuation strategies and preventive methodologies. We emphasize the importance of balancing fairness and efficiency in evacuation plans while considering psychological factors influencing individual social-distancing behavior during evacuations. Our preliminary results showed that by utilizing simulation tools like Vadere, we can design evacuation strategies that consider vulnerable people. Additionally, we developed a NetLogo-based tool to simulate preventive strategies based on the current crowd state.
In the future, we aim to further optimize our preventive approach by integrating post-event analysis and privacy protection measures. These steps will ensure ethical data usage and enhance the overall effectiveness of crowd management. ###### Acknowledgements. This research was partially supported by NSF award # CNS-2105084. ## References * Aalami and Kattan (2020) Soheila Aalami and Lina Kattan. 2020. Fairness and efficiency in pedestrian emergency evacuation: Modeling and simulation. _Safety science_ 121 (2020), 373–384. * Abuarafah et al. (2012) Adnan Ghazi Abuarafah, Mohamed Osama Khozium, and Essam AbdRabou. 2012. Real-time crowd monitoring using infrared thermal video sequences. _Journal of American Science_ 8, 3 (2012), 133–140. * Elmalaki (2021) Salma Elmalaki. 2021. Fair-iot: Fairness-aware human-in-the-loop reinforcement learning for harnessing human variability in personalized iot. In _Proceedings of the International Conference on Internet-of-Things Design and Implementation_. 119–132. * Illiyas et al. (2013) Faisel T Illiyas, Shibu K Mani, AP Pradeepkumar, and Keshav Mohan. 2013. Human stampedes during religious festivals: A comparative review of mass gathering emergencies in India. _International Journal of Disaster Risk Reduction_ 5 (2013), 10–18. * Kleinmeier et al. (2019) Benedikt Kleinmeier, Benedikt Zönnchen, Marion Gödel, and Gerta Köster. 2019. Vadere: An open-source simulation framework to promote interdisciplinary understanding. _arXiv preprint arXiv:1907.09520_ (2019). * Lung et al. (2021) Chi-Wen Lung, Ben-Yi Liau, Joseph A Peters, Li He, Runnell Townsend, and Yih-Kuen Jan. 2021. Effects of various walking intensities on leg muscle fatigue and plantar pressure distributions. _BMC Musculoskeletal Disorders_ 22, 1 (2021), 1–9. * Martella et al. (2017) C Martella, J Li, C Conrado, and A Vermeeren. 2017. On current crowd management practices and the need for increased situation awareness, prediction, and intervention. _Safety science_ 91 (2017), 381–393. * Sharma et al. (2023) Avinash Sharma, Brian McCloskey, David S Hui, Aayushi Rambia, Adam Zumla, Tieble Traore, Shuja Shafi, Sherif A El-Kafrawy, Esam I Azhar, Alimuddin Zumla, et al. 2023. Global mass gathering events and deaths due to crowd surge, stampedes, crush and physical injuries-lessons from the Seoul Halloween and other disasters. _Travel medicine and infectious disease_ 52 (2023). * Sharma et al. (2018) Deepak Sharma, Amol P Bhondekar, AK Shukla, and C Ghanshyam. 2018. A review on technological advancements in crowd management. _Journal of Ambient Intelligence and Humanized Computing_ 9, 3 (2018), 485–495. * Taherisadr et al. (2023) Mojtaba Taherisadr, Stelios Andrew Stavroulakis, and Salma Elmalaki. 2023. adaPARL: Adaptive Privacy-Aware Reinforcement Learning for Sequential Decision Making Human-in-the-Loop Systems. In _Proceedings of the 8th ACM/IEEE Conference on Internet of Things Design and Implementation_. 262–274. * The New York Times (2021) The New York Times. 2021. _'No Way Out': A Sudden Life-and-Death Struggle at a Houston Concert_. https://www.nytimes.com/2021/11/06/us/travis-scott-crowd-surge.html Accessed: 2023-05-14. * Tisue and Wilensky (2004) Seth Tisue and Uri Wilensky. 2004. Netlogo: A simple environment for modeling complexity. In _International conference on complex systems_ , Vol. 21. Citeseer, 16–21. * Ulicny et al. (2013) Brian Ulicny, Jakub Moskal, and Mieczyslaw M Kokar. 2013. Situational Awareness from Social Media.. In _STIDS_. 87–93. * United Nations (2004) United Nations. 
2004. _United Nations Enable - Accessibility Design for All_. https://www.un.org/esa/socdev/enable/designm/AD1-04.htm Accessed: 2023-05-14. * Ushahidi ([n. d.]) Ushahidi. [n. d.]. _Ushahidi Support_. Ushahidi. https://www.ushahidi.com/support/overview/ * Weppner et al. (2016) Jens Weppner, Benjamin Bischke, and Paul Lukowicz. 2016. Monitoring crowd condition in public spaces by tracking mobile consumer devices with wifi interface. In _Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing_. 1363–1371. * Yaagoubi et al. (2023) Reda Yaagoubi, Yehia Miky, Kamil Faisal, and Ahmed Al Shouny. 2023. A combined agent-based modeling and GIS approach for HAJJ crowd simulation. _Journal of Engineering Research_ 11, 1 (2023), 100014. * Yuan et al. (2013) Weiwei Yuan, Donghai Guan, Eui-Nam Huh, and Sungyoung Lee. 2013. Harness human sensor networks for situational awareness in disaster reliefs: a survey. _IETE Technical Review_ 30, 3 (2013), 240–247. * Zhao (2016) Hantao Zhao. 2016. _Crowd Simulation and Virtual Reality Experiments for 2010 Love Parade Disaster_. Master’s thesis. Department of Computer Science, ETH Zurich. * Zhou et al. (2020) Yuren Zhou, Billy Pik Lik Lau, Zann Koh, Chau Yuen, and Benny Kai Kiat Ng. 2020. Understanding crowd behaviors in a social event by passive wifi sensing and data mining. _IEEE Internet of Things Journal_ 7, 5 (2020), 4442–4454. * Zia and Ferscha (2020) Kashif Zia and Alois Ferscha. 2020. An agent-based model of crowd evacuation: combining individual, social and technological aspects. In _Proceedings of the 2020 ACM SIGSIM conference on principles of advanced discrete simulation_. 129–140.
# Heuristics for $k$-domination models of facility location problems in street networks

Padraig Corcoran, School of Computer Science & Informatics, Cardiff University, Wales, UK. Andrei Gagarin, School of Mathematics, Cardiff University, Wales, UK.

###### Abstract

We present new greedy and beam search heuristic methods to find small-size $k$-dominating sets in graphs. The methods are inspired by a new problem formulation which explicitly highlights a certain structure of the problem. An empirical evaluation of the new methods is done with respect to two existing methods, using instances of graphs corresponding to street networks. The $k$-domination problem with respect to this class of graphs can be used to model real-world facility location problem scenarios. For the classic minimum dominating set ($1$-domination) problem, all methods except one perform similarly, which is due to their equivalence in this particular case. However, for the $k$-domination problem with $k>1$, the new methods outperform the benchmark methods, and the performance gain is more significant for larger values of $k$.

###### keywords: $k$-domination, heuristic methods, facility location. ††journal: Computers & Operations Research

## 1 Introduction

A graph is a mathematical (combinatorial) abstraction that is commonly used to represent many real-world problems. A _simple graph_ consists of a set of objects called _vertices_ and a set of pairwise relations between the objects called _edges_. For example, a visual scene can be modelled as a graph [1]. Similarly, a street network can be modelled as a graph where locations are modelled as vertices and street segments connecting locations are modelled as edges [2]. Many optimization problems are formulated on graphs. These include the shortest path problem, which concerns computing a minimum length path between two vertices in a graph, and the vertex cover problem, which concerns computing a smallest subset of vertices that includes at least one endpoint of every edge. Many real-world problems in turn can be modelled as instances of these graph-theoretic problems. For example, the problem of finding a shortest path in a street network can be modelled as the problem of computing the shortest path in a graph which models that street network.

We consider the minimum $k$-dominating set ($k$-domination) problem in graphs, which is one of the multiple domination problem types (e.g., see [3, 4]). Given a simple graph and a positive integer $k$, the minimum $k$-dominating set ($k$-domination) problem consists in finding a smallest possible (by cardinality) subset of graph vertices such that each vertex is an element of this subset or is adjacent to at least $k$ elements of this subset. Examples and a general description of this kind of modelling can be found in the Prolegomenon and Chapter 1 of the classic book on domination in graphs [5]. Thai et al. [6] modelled the problem of computing a virtual backbone in a wireless ad-hoc or sensor network as a $k$-domination problem. Gagarin et al. [7] modelled the problem of optimizing the placement of electric vehicle charging stations as a $k$-domination problem. Also, Khomami et al. [8] modelled the problem of maximizing influence in a social network as a $k$-domination problem. The $k$-domination problem has been proven to be $\mathcal{NP}$-hard [9], even, e.g., in split graphs [10].
As a consequence, unless the problem instance is reasonably small, one generally cannot use an exact method to compute an optimal solution in reasonable time (e.g., see the state-of-the-art deterministic algorithms and computational results in [11, 12]). Therefore, heuristic methods are normally used to find small-size $k$-dominating sets in reasonable time, accepting that the solution may be suboptimal.

In this article we propose a novel formulation of the $k$-domination problem. This new formulation makes explicit important structure in the problem which is not present in existing formulations. We subsequently propose two heuristic methods for solving this problem, which exploit this structure. The methods in question use greedy and beam search ideas. We empirically evaluate these two methods with respect to street network reachability graphs. The $k$-domination problem with respect to this class of graphs can be used to model facility location problems in street networks [7].

The remainder of this paper is structured as follows. In Section 2 we review existing solutions to the $k$-domination problem. In Section 3 we formally define the $k$-domination problem and the proposed novel problem formulation. In this section we also describe the proposed heuristic methods for solving this problem. In Section 4 we present an experimental evaluation of the proposed methods with respect to existing baseline methods on street network reachability graphs. Finally, in Section 5 we draw some conclusions from this work and discuss some possible directions for future research.

## 2 Related Works

In this section we review existing methods for computing solutions to the $k$-domination problem. We focus exclusively on the case where the graphs in question are unweighted and undirected. The methods described in this section do not naturally generalise to other types of graphs, for which specialized methods must be considered instead, e.g., see [13].

A number of authors have proposed methods for computing solutions to different variants of the $k$-domination problem. For example, Klasing and Laforest [4] considered the $k$-tuple domination problem, which is a more constrained variation of the $k$-domination problem. Shang et al. [14] proposed a method for computing a $k$-tuple dominating set which is also $m$-connected.

The $k$-domination problem is a classic optimization problem. As a consequence, a large number of methods for solving this problem have been proposed. These methods can broadly be distinguished with respect to the following five features. The first feature concerns whether the method in question is designed for the classic minimum dominating set problem, i.e. the $1$-domination problem in our more general context ($k=1$). The second feature concerns whether the method in question automatically generalizes to the cases where $k>1$. The final three features concern whether the method in question uses a greedy search heuristic, a metaheuristic or an exact method to determine a solution. Both greedy search heuristic and metaheuristic methods attempt to compute a useful solution in a reasonable amount of time, where this solution may not be optimal. On the other hand, exact methods attempt to compute an optimal solution. Table 1 presents a summary of existing methods for the $k$-domination problem with respect to these five features.
The distinction with respect to whether a method is applicable only to the case $k=1$ or automatically generalizes to the case $k>1$ is particularly important in the context of this work. Therefore, in Sections 2.1 and 2.2 we respectively review methods belonging to these two categories.

Method | $k=1$ | $k\geq 1$ | Greedy Search | Metaheuristic | Exact Method
---|---|---|---|---|---
Parekh [15] | ✓ | | ✓ | |
Sanchis [16] | ✓ | | ✓ | |
Eubank et al. [17] | ✓ | | ✓ | |
Chellali et al. [18] | ✓ | ✓ | ✓ | |
Hedar et al. [19] | ✓ | | | ✓ |
Hedar et al. [20] | ✓ | | | ✓ |
Ho et al. [21] | ✓ | | | ✓ |
Nehez et al. [22] | ✓ | | | | ✓
Bird [11] | ✓ | | | | ✓
Assadian [12] | ✓ | | | | ✓
Couture et al. [23] | | ✓ | ✓ | |
Gagarin et al. [3] | | ✓ | ✓ | |
Gagarin et al. [7] | | ✓ | ✓ | | ✓

Table 1: Methods for computing solutions to the $k$-domination problem, distinguished with respect to five features.

### 2.1 Searching for minimum dominating sets ($k=1$)

Existing solution methods for the classic minimum dominating set problem, i.e. the $k$-domination problem where $k=1$, can be broadly divided into greedy search heuristic, metaheuristic, and exact solution (deterministic) methods. We now review methods belonging to each of these categories in turn.

#### Greedy Search Heuristic Methods

The first and most commonly used standard greedy search heuristic for computing dominating sets ($k=1$) is described in Parekh [15]. The method initializes a set $D$ to be the empty set and iteratively adds vertices to $D$ until it forms a dominating set. The vertex added to $D$ at each iteration is determined by selecting a vertex from the set of vertices whose neighbourhood contains a maximum number of vertices currently not dominated. In this context, a vertex is not dominated if it is not an element of the set $D$ and not adjacent to any vertex in $D$.

Sanchis [16] evaluated four greedy search heuristic methods for computing small-size dominating sets. The first method is entitled Greedy. This method initializes a set $D$ to be the empty set and iteratively adds vertices to $D$ until it forms a dominating set. The vertex added to $D$ at each step is determined by selecting uniformly at random a vertex from the set of vertices whose neighbourhood contains a maximum number of vertices currently not dominated. This method is similar to that described by Parekh [15] but with the addition of randomization in vertex selection. The second method is entitled Greedy_Rev. This method initializes a set $D$ to equal the set of graph vertices in question and iteratively removes vertices from $D$ until no further vertex can be removed while still maintaining the property that $D$ is a dominating set. The vertex removed from $D$ at each step is determined by selecting uniformly at random a vertex from the set of vertices which are eligible to be removed and have the maximum degree. The third method is entitled Greedy_Ran. This method is similar to that entitled Greedy with the exception that the vertex added to $D$ at each step is determined by selecting a vertex with probability proportional to the number of adjacent vertices currently not dominated. The final method is entitled Greedy_Vote. This method initializes a set $D$ to be the empty set and iteratively adds vertices to $D$ until it forms a dominating set. The vertex added to $D$ at each step is determined by selecting a vertex with probability proportional to the number of neighbours of its neighbours currently not dominated.
The author evaluated the four above methods on two different classes of graphs and found the Greedy and Greedy_Vote methods to perform best.

Eubank et al. [17] evaluated five greedy search heuristic methods for finding dominating sets. The first method is called RegularGreedy and is the same as the standard greedy method [15]. The second method is named FastGreedy. This method initializes a set $D$ to the empty set. It then iterates over the vertices in the graph, considering vertices of higher degree first, and adds each vertex to the set $D$ until it forms a dominating set. The third method is entitled VRegularGreedy. This method initializes a set $D$ to be the set of all neighbours of vertices of degree $1$. It subsequently applies the standard greedy approach [15] to the graph induced by vertices currently not in $D$. The fourth and fifth methods are called FastGreedy-1 and FastGreedy-2. Both methods are slight variations of the FastGreedy method described above. The authors evaluated the above five methods on a number of real-world social networks and random graphs. They found that the methods RegularGreedy and VRegularGreedy performed best.

Chellali et al. [18] proposed a greedy search heuristic method which initializes a set $D$ to be the empty set and iteratively adds vertices to $D$ until it forms a dominating set. The vertex added to $D$ at each step is determined by selecting uniformly at random a vertex from the set of vertices currently not dominated. This method is implemented in the NetworkX software library, which is a highly popular Python software library for graph analysis [24].

It is important to note that many of the greedy search heuristic methods reviewed above are also randomized methods. This, combined with the generally low computational complexity of these methods, means that they can be applied to a given problem instance a large number of times, with the best solution obtained being returned.

#### Metaheuristic Methods

Hedar and Ismail [19] proposed a number of genetic algorithms for finding dominating sets and evaluated them on a set of random graphs. The same authors later proposed a simulated annealing method to search for dominating sets [20]. They experimentally tested this method with respect to a stochastic local search method, the genetic algorithm of [19], and the method entitled Greedy proposed by Sanchis [16] on a set of random graphs. Their experiments show the simulated annealing method and the genetic algorithm of [19] to perform best. Ho et al. [21] proposed a number of ant colony optimization methods for computing dominating sets. The authors evaluated these methods and a genetic algorithm on a set of random graphs. They found that an ant colony optimization method outperforms the genetic algorithm.

#### Exact Methods

Nehez et al. [22] proposed an integer linear programming (ILP) method for computing dominating sets. The authors evaluated this method against a randomized local search method and the standard greedy search heuristic [15] on a number of real-world graphs. The authors found that the ILP approach performed best but did not scale to large graphs. The same was shown by computational experiments in Gagarin and Corcoran [7], where an ILP formulation is described for a more general $k$-domination problem scenario. The state-of-the-art deterministic search methods for dominating sets in graphs have been recently developed and described by Bird [11] and Assadian [12].
The methods are based on backtracking, and the experimental results indicate that they are not likely to be practical for graphs with more than several hundred vertices.

### 2.2 Searching for small $k$-dominating sets ($k\geq 1$)

The more general $k$-domination problem, where $k\geq 1$, is less well studied than the classic minimum dominating set problem ($k=1$). In fact, only a few solution methods described in the previous section generalize to solve the $k$-domination problem for any $k\geq 1$. These solution methods can be broadly divided into greedy search heuristics and exact (deterministic) algorithms.

#### Greedy Search Heuristic Methods

A generalization of the standard greedy algorithm ([15]) for computing $k$-dominating sets is described in [7]. Specifically, this method initializes a set $D$ to be the empty set and iteratively adds vertices to $D$ until it forms a $k$-dominating set. The vertex added to $D$ at each step is determined by selecting uniformly at random a vertex from the set of vertices whose neighbourhood contains a maximum number of vertices currently not dominated enough. In a certain sense, this simple greedy algorithm is inspired by the greedy approach to find $k$-tuple dominating sets in Klasing and Laforest [4]. Couture et al. [23] proposed a method which first computes a dominating set ($k=1$) by finding a maximal independent set in a graph. Next, their algorithm computes a maximal independent set for the vertices that are currently not $2$-dominated and adds those vertices to the dominating set to form a $2$-dominating set. This procedure is repeated $k$ times until a $k$-dominating set is found. Gagarin et al. [3] proposed a randomized algorithm, with some greedy elements, which initializes a set $D$ as a random subset of graph vertices and then iteratively adds other vertices to $D$ if they are not dominated enough. The probability used to initialize the set $D$ randomly is shown to be optimal for general graphs. However, this probability had to be adjusted experimentally for graphs corresponding to real-world road networks in [7]. The latter paper also experimentally compares the randomized approach to the basic greedy heuristic.

#### Exact (deterministic) methods

An ILP problem formulation for computing $k$-dominating sets is described in [7]. The experiments show that this method clearly does not scale to the size of the two main graphs considered in the paper. Therefore a greedy search heuristic remains one of the main optimization tools in that research. Also, the computational results in [7] show that the ILP formulation solution approach scales less well for larger values of $k$.

## 3 New Heuristic Search Methods for $k$-Domination

In this section we formally define the $k$-domination problem and present a novel formulation of this problem. This formulation is in turn used to develop two novel methods for solving the problem which use greedy and beam search heuristic ideas.

We consider simple graphs $G=(V,E)$, where $V$ is a set of vertices and $E$ is a set of edges. Given a vertex $v\in V$, the open neighbourhood of $v$ is the set of all its neighbours in $G$, i.e. all vertices adjacent to $v$; it is denoted by $N(v)$. The closed neighbourhood of $v$ is $N(v)\cup\{v\}$; it is denoted by $N[v]$. For a given positive integer $k$, a _$k$-dominating set_ of $G$ is a set $D\subseteq V$ such that each $v\in V$ is either an element of $D$ or is adjacent to at least $k$ elements of $D$.
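This definition translates directly into a simple checker. The following is a minimal, illustrative sketch assuming a NetworkX-style graph interface; the function name `is_k_dominating` is ours and does not appear in the paper:

```python
import networkx as nx

def is_k_dominating(G: nx.Graph, D: set, k: int) -> bool:
    """Check the k-domination property: every vertex is either in D
    or has at least k neighbours in D."""
    return all(v in D or len(set(G[v]) & D) >= k for v in G)

# Example on the path a-b-c-d: {a, c} is 1-dominating but not 2-dominating,
# since d has only one neighbour (c) in the set.
G = nx.path_graph(["a", "b", "c", "d"])
print(is_k_dominating(G, {"a", "c"}, 1))  # True
print(is_k_dominating(G, {"a", "c"}, 2))  # False
```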
The _$k$-domination problem_ concerns finding a $k$-dominating set of $G$ which is as small as possible. This is formally defined as the following optimization problem:

$\displaystyle\operatorname*{arg\,min}_{D\subseteq V}\ |D|$ subject to $\forall\ v\in V\setminus D,\ |N(v)\cap D|\geq k$. (1)

In general, solving this optimization problem is $\mathcal{NP}$-hard [9, 10]. For any $D\subseteq V$ and $v\in V$, we define the parameter $C(D,v)$, which indicates the level of coverage of the vertex $v$ by its neighbours in the set $D$, as follows:

$C(D,v)=\min(k,|N(v)\cap D|)$ (2)

Then the optimization problem (1) can be reformulated as the optimization problem (3); we prove that this is an equivalent problem formulation in Theorem 1.

$\displaystyle\operatorname*{arg\,max}_{D\subseteq V}\ \sum_{v\in V\setminus D}C(D,v)$ subject to $\forall\ v\in V\setminus D,\ |N(v)\cap D|\geq k$. (3)

###### Theorem 1. Given a graph $G$, a solution $D$ to the optimization problem defined in (3) is a minimum size $k$-dominating set in $G$.

###### Proof. A set $D\subseteq V$ satisfying the constraints in (3) is a $k$-dominating set: each vertex $v\in V\setminus D$ is adjacent to at least $k$ elements in $D$. Therefore, $C(D,v)=\min(k,|N(v)\cap D|)=k$ for all $v\in V\setminus D$. In turn, the value of the objective function in (3) equals $k(|V|-|D|)$. Since $|V|$ is a constant, the objective function is maximized when $D$ is a minimum size $k$-dominating set. In other words, the optimization problem (3) is equivalent to maximizing $|V\setminus D|$ in $G$. ∎
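As a numeric illustration of Theorem 1 (again a hypothetical sketch, building on the `is_k_dominating` checker above), the unconstrained objective of (3) can be computed directly and compared against $k(|V|-|D|)$ for a $k$-dominating set $D$:

```python
import networkx as nx

def coverage(G: nx.Graph, D: set, v, k: int) -> int:
    """The coverage parameter C(D, v) = min(k, |N(v) ∩ D|)."""
    return min(k, len(set(G[v]) & D))

def objective(G: nx.Graph, D: set, k: int) -> int:
    """Unconstrained objective of formulation (3): sum of C(D, v) over v not in D."""
    return sum(coverage(G, D, v, k) for v in G if v not in D)

# On the 4-cycle a-b-c-d-a, D = {a, c} is 2-dominating, and the objective
# equals k * (|V| - |D|) = 2 * (4 - 2) = 4, as in the proof of Theorem 1.
G = nx.cycle_graph(["a", "b", "c", "d"])
D, k = {"a", "c"}, 2
print(objective(G, D, k), k * (len(G) - len(D)))  # 4 4
```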
Since the $k$-domination problem is $\mathcal{NP}$-hard, one normally must use heuristic methods to find a reasonably small $k$-dominating set ([3]), sacrificing solution quality for reasonable computational time ([7, 11, 12]). The problem formulation in (3) uses the parameter $C(D,v)$ from (2) to model the level to which vertices are dominated by a set $D$. This contrasts with the formulation (1), which simply requires satisfaction of the constraints. The additional information incorporated in formulation (3) can potentially be exploited by heuristic methods to make better locally optimal decisions. In the following subsections we describe two heuristic solution methods for the $k$-domination problem which use formulation (3). These methods are based on greedy and beam search heuristic ideas.

### 3.1 New Greedy Search Heuristic

The problem formulation in (3) motivates Algorithm 1, which is a greedy search heuristic. The algorithm takes as input a graph $G=(V,E)$ and a positive integer $k$, and computes a $k$-dominating set $D$ for $G$. The algorithm initializes $D$ to be the empty set (line 1). Next, it iteratively adds vertices to $D$ until it forms a $k$-dominating set. The vertex added at each step is determined by selecting uniformly at random a vertex from the set of vertices whose addition maximizes the unconstrained objective function in (3) (lines 3 to 5). Convergence of Algorithm 1 to a $k$-dominating set is a consequence of the fact that, unless a $k$-dominating set is formed earlier, the algorithm will converge to the case where $D=V$, which is trivially $k$-dominating in $G$. Algorithm 1 is also a randomized algorithm: in each iteration, if several vertices can increase the objective function value by the same maximum amount, one of these vertices is selected uniformly at random for addition to $D$.

Input: A graph $G=(V,E)$, a positive integer $k$.
Output: A $k$-dominating set $D$ of $G$.
1: Initialize $D=\{\}$
2: while $|\{v\in V\setminus D:|N(v)\cap D|<k\}|>0$ do
3:     Find $U=\operatorname*{arg\,max}_{u\in V\setminus D}\sum_{v\in V\setminus(D\cup\{u\})}C(D\cup\{u\},v)$
4:     Sample $u\in U$ using a uniform distribution
5:     Put $D=D\cup\{u\}$
6: end while
7: return $D$

Algorithm 1: Greedy Search Heuristic

To better explain the effectiveness and efficiency of the heuristic ideas used in Algorithm 1, we define the difference function $\Delta(D,u)$ in (4), which represents the change in the objective function of (3) when a new vertex $u$ is added to a given set $D\subset V$ ($u\notin D$):

$\Delta(D,u)=\sum_{v\in V\setminus(D\cup\{u\})}C(D\cup\{u\},v)-\sum_{v\in V\setminus D}C(D,v)=|\{v\in N(u):v\notin D,\ |N(v)\cap D|<k\}|-\min(k,|N(u)\cap D|)$ (4)

Now we have

$\max_{u\in V\setminus D}\Delta(D,u)=\max_{u\in V\setminus D}\left(\sum_{v\in V\setminus(D\cup\{u\})}C(D\cup\{u\},v)\right)-\sum_{v\in V\setminus D}C(D,v),$ (5)

where the sum $\sum_{v\in V\setminus D}C(D,v)$ is constant for a given set $D\subset V$. Therefore, in Algorithm 1, we have

$U=\operatorname*{arg\,max}_{u\in V\setminus D}\sum_{v\in V\setminus(D\cup\{u\})}C(D\cup\{u\},v)=\operatorname*{arg\,max}_{u\in V\setminus D}\Delta(D,u),$ (6)

which is used when implementing Algorithm 1. The computational complexity of Algorithm 1 can be analyzed as follows.

###### Theorem 2. Algorithm 1 finds a $k$-dominating set in $G$ in $\mathcal{O}(n^{3})$ time, where $n=|V|$.

###### Proof. In the worst case, the while loop will terminate after $n$ iterations when all vertices have been added to $D$. Each iteration of the while loop examines all vertices currently not in the set $D$. For each such vertex $u\in V\setminus D$, the change in the objective function following its addition is computed using the difference function $\Delta(D,u)$ from (4). Evaluating the difference function (4), in the worst case when $|N(u)|\in\Theta(n)$, can be done in $O(n)$ time. Therefore each iteration of the while loop takes $O(n^{2})$ time, and the overall running time is $O(n\cdot n^{2})=O(n^{3})$. ∎

To develop a better intuition of how Algorithm 1 works, notice that at each step the algorithm does not determine the vertex added to $D$ solely based on the number of vertices currently not dominated in its open or closed neighbourhood. In this context, a vertex is not dominated if it is not an element of $D$ and not adjacent to at least $k$ vertices in $D$. The algorithm also considers how much the vertices in question are already dominated. Specifically, all other things being equal, a vertex which is currently least dominated will be added to $D$ because it contributes least to the sum $\sum_{v\in V\setminus D}C(D,v)$ (note that the sum is over vertices currently not in the set $D$; see definition (4)). For example, if $C(D,v)<k$ for all $v\in V\setminus D$, we have $\sum_{v\in V\setminus(D\cup\{u\})}C(D\cup\{u\},v)=\sum_{v\in V\setminus D}C(D,v)+|N(u)\cap(V\setminus D)|-|N(u)\cap D|$ for each $u\in V\setminus D$.

To illustrate this concept, consider the graph displayed in Figure 1. Suppose we wish to compute a $2$-dominating set and are given $D=\{a\}$. If the next vertex added to $D$ is determined solely based on the number of vertices currently not dominated in its corresponding open neighbourhood, vertices $b$ and $c$ are equally likely to be added – each of the neighbourhoods contains exactly one vertex currently not dominated. This may give a suboptimal result because adding vertex $b$ will not result in a $2$-dominating set. The result is the same if we consider the closed neighbourhood instead of the open neighbourhood of the vertices. On the other hand, the proposed algorithm will add to $D$ the vertex $c$, which provides the optimal solution.

Figure 1: An illustrative graph.
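For concreteness, the following is a minimal Python sketch of Algorithm 1 using the incremental selection rule (6); the graph interface is NetworkX-style and the function names are ours. For clarity the sketch recomputes neighbourhood intersections from scratch, whereas the $O(n^{3})$ bound of Theorem 2 assumes the counters $|N(v)\cap D|$ are maintained incrementally:

```python
import random
import networkx as nx

def greedy_k_domination(G: nx.Graph, k: int, seed=None) -> set:
    """Algorithm 1: grow D until it is k-dominating, at each step adding a
    vertex maximizing the gain Delta(D, u) from (4), ties broken uniformly."""
    rng = random.Random(seed)
    D = set()

    def delta(u):
        # |{v in N(u) : v not in D, |N(v) ∩ D| < k}| - min(k, |N(u) ∩ D|)
        helped = sum(1 for v in G[u]
                     if v not in D and len(set(G[v]) & D) < k)
        return helped - min(k, len(set(G[u]) & D))

    while any(v not in D and len(set(G[v]) & D) < k for v in G):
        gains = {u: delta(u) for u in G if u not in D}
        best = max(gains.values())
        D.add(rng.choice([u for u, g in gains.items() if g == best]))
    return D
```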
### 3.2 Beam Search Heuristic

Algorithm 1 is a greedy algorithm, and consequently it may converge to a suboptimal solution. To illustrate this, consider again the graph displayed in Figure 1 and the problem of computing a $2$-dominating set. The optimal $2$-dominating set for this problem instance is $\{a,c\}$. However, applying Algorithm 1 to this problem will first add vertex $b$ to $D$, followed by vertices $a$ and $c$, to give a larger $2$-dominating set $\{a,b,c\}$. To overcome this limitation, we propose a generalization of Algorithm 1 which uses a beam search heuristic instead of a pure greedy approach. A beam search algorithm is an iterative search method which at each step maintains a set of best intermediate or partial solutions [25]. The maximum size of this set is a constant hyper-parameter called the _beam width_. The proposed beam search with a beam width of one is equivalent to the greedy search of Section 3.1.

The general beam search heuristic implemented in this work is described in Algorithm 2. This algorithm takes as input a graph $G=(V,E)$ and two positive integer parameters $k$ and $b$, and, using a beam of width $b$, finds a $k$-dominating set $D$ for $G$ by considering a collection $S$ of vertex subsets of $G$. The algorithm initializes $D$ to be the empty set (line 1) and $S$ to be a list containing a single partial solution corresponding to the empty set (line 2). In this context, a partial or intermediate solution is a subset of graph vertices. Next, the algorithm iteratively expands all subsets of vertices in $S$ (lines 4 to 8). Each partial solution $s\in S$, $s\subseteq V$, is expanded to form a set of partial solutions by adding one vertex currently not in $s$ in all possible ways, i.e. as in an exhaustive search. For example, consider the graph in Figure 1 and a partial solution $\{b\}$. Expanding this vertex subset in all possible ways gives two partial solutions $\{a,b\}$ and $\{b,c\}$. Similarly, expanding the empty set $\{\}$ in the context of the same graph gives three partial solutions $\{a\}$, $\{b\}$, and $\{c\}$. The algorithm next removes copies of partial solutions from the list of subsets $S$ (line 9). The list of intermediate solutions is then sorted in non-ascending order with respect to the objective function value in (3). If several partial solutions have the same objective function value, they are ordered randomly in $S$ (line 10). Then the top $b$ partial solutions are retained in $S$ (line 11). Finally, if there is a $k$-dominating set $s$ in $S$, i.e. a set $s$ satisfying the constraints in (3), we put $D=s$, and it is returned as a solution to the problem (lines 14 and 18 respectively).
Convergence of Algorithm 2 to a $k$-dominating set is a consequence of the increasing cardinality of the partial solutions at each iteration and the fact that, unless a $k$-dominating set is found earlier, the algorithm will converge to the case where $D=V$, which is trivially $k$-dominating. Algorithm 2 is also a randomized algorithm: as stated above, when sorting is performed, if several partial solutions have the same unconstrained objective function value, they are ordered randomly. The computational complexity of Algorithm 2 is stated in Theorem 3.

Input: A graph $G=(V,E)$, positive integers $k$ and $b$.
Output: A $k$-dominating set $D$ of $G$.
1: Initialize $D=\{\}$
2: Initialize $S=[\{\}]$
3: while $D=\{\}$ do
4:     $S^{\prime}=[\,]$
5:     for $s\in S$ do
6:         $S^{\prime}=S^{\prime}\cup\text{expand}(s)$
7:     end for
8:     $S=S^{\prime}$
9:     remove_duplicates($S$)
10:    sort_descending($S$)
11:    $S=S[1\dots b]$
12:    for $s\in S$ do
13:        if $s$ is $k$-dominating then
14:            $D=s$
15:        end if
16:    end for
17: end while
18: return $D$

Algorithm 2: Beam Search Heuristic

###### Theorem 3. Algorithm 2 finds a $k$-dominating set in $\mathcal{O}(b^{2}n^{3})$ time, where $n=|V|$ and $b$ is the beam width.

###### Proof. In the worst case, the while loop will terminate after $n$ iterations. Each iteration of the while loop performs an expansion of the intermediate solutions in $S$, which contains $O(b)$ subsets of vertices. Expanding each individual partial solution takes $O(n)$ steps. The list $S^{\prime}$ will then contain $O(nb)$ elements. Checking for copies of subsets (line 9) can be done in $O(b^{2}n^{2})$ time. Evaluating a single element in the resulting list with respect to the objective function can be done in $O(n)$ time. Therefore, evaluating and sorting all elements in this list with respect to the objective function takes $O(bn^{2}+bn\log(bn))$ steps. The overall computational time complexity is therefore $O(n\cdot bn+n\cdot b^{2}n^{2}+n\cdot bn^{2}+n\cdot bn\log(bn))=O(b^{2}n^{3})$. ∎

Selecting the beam width hyper-parameter for Algorithm 2 represents a trade-off between computational complexity and the quality of the obtained solution. That is, a larger beam width results in a better exploration of the partial solution space and the potential for finding a better solution. However, a larger beam width also results in higher computational complexity. To illustrate this, consider again the graph in Figure 1 and the problem of computing a $2$-dominating set. Applying Algorithm 2 to this graph with a beam width of one returns the $2$-dominating set $\{a,b,c\}$. On the other hand, applying this algorithm to the same graph with a beam width of three returns the smaller $2$-dominating set $\{a,c\}$.
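A corresponding sketch of Algorithm 2 is given below, reusing the hypothetical `objective` and `is_k_dominating` helpers from the earlier sketches; it is illustrative rather than optimized. Shuffling before the stable sort implements the random ordering of equal-valued partial solutions required on line 10:

```python
import random
import networkx as nx

def beam_search_k_domination(G: nx.Graph, k: int, b: int, seed=None) -> set:
    """Algorithm 2: maintain up to b partial solutions, expand each by one
    vertex in all possible ways, keep the b best under objective (3)."""
    rng = random.Random(seed)
    S = [set()]  # the beam: a list of partial solutions
    while True:
        # Expand every partial solution by every vertex not yet in it.
        expanded = [s | {u} for s in S for u in G if u not in s]
        # Remove duplicate partial solutions.
        unique = []
        for s in expanded:
            if s not in unique:
                unique.append(s)
        # Sort by objective value, breaking ties randomly, and keep the top b.
        rng.shuffle(unique)
        unique.sort(key=lambda s: objective(G, s, k), reverse=True)
        S = unique[:b]
        for s in S:
            if is_k_dominating(G, s, k):
                return s
```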
As mentioned earlier, Algorithm 2 with a beam width equal to one reduces to Algorithm 1. In Theorem 4, we establish a relationship between Algorithm 2 and the standard greedy approach for computing dominating sets ($k=1$) described in [15, 7]. Recall that the standard greedy algorithm initializes a set $D$ to be the empty set and iteratively adds vertices to $D$ until it forms a dominating set. The vertex added to $D$ at each step is determined by selecting uniformly at random a vertex from the set of vertices whose closed neighbourhood contains a maximum number of vertices currently not dominated.

###### Theorem 4. When computing a dominating set ($k=1$), Algorithm 2 with a beam width of one is equivalent to the standard greedy algorithm [15, 7].

###### Proof. For a beam width of one, Algorithm 2 behaves greedily and at each step selects the vertex which maximizes the change in the unconstrained objective function of (3). For the case $k=1$, the change in the objective function value obtained by adding a vertex $v$ equals the number of not dominated vertices in the closed neighbourhood of $v$ minus one. To see this, consider two possible mutually exclusive cases corresponding to $v$ being currently dominated or not. If $v$ is currently not dominated, the change in the objective function value obtained by adding $v$ equals the number of not dominated vertices in the open neighbourhood of $v$, which is the number of not dominated vertices in the closed neighbourhood of $v$ minus one. If $v$ is currently dominated, the change in the objective function value obtained by adding $v$ equals the number of not dominated vertices in the open neighbourhood of $v$ minus one, which is, in this case, the number of not dominated vertices in the closed neighbourhood of $v$ minus one. In other words, the selection criteria used by Algorithm 2 and the standard greedy algorithm to rank vertices differ only by the constant minus one, implying that the vertex rankings are the same. ∎

## 4 Computational Results and Analysis

In this section we present an empirical evaluation of the two proposed heuristic methods for computing $k$-dominating sets with respect to two baseline methods. We perform this evaluation using a set of graphs corresponding to street network reachability graphs. The $k$-domination problem with respect to this class of graphs can be used to model facility location problems in street networks [7]. The remainder of this section is structured as follows. In Section 4.1 we formally define the concept of a street network reachability graph and the corresponding facility location problem. In Section 4.2 we present details of the street networks used in this evaluation. Section 4.3 describes the baseline methods against which the proposed methods are evaluated. Finally, in Section 4.4 we present empirical results of our evaluation.

### 4.1 Reachability Graphs and Facility Location

A street network can be modelled as a weighted undirected graph $G^{s}=(V^{s},E^{s},w:E^{s}\rightarrow\mathbb{R})$, where the set of vertices $V^{s}$ corresponds to road intersections and dead-ends, while the set of edges $E^{s}$ corresponds to road segments connecting these vertices. The weight function $w$ assigns to each edge the length of the corresponding road segment measured in meters [26]. The street network of Cardiff city modelled as such a graph is displayed in Figure 2. Given a street network graph $G^{s}=(V^{s},E^{s},w:E^{s}\rightarrow\mathbb{R})$, we define its reachability graph $G^{r}_{t}=(V^{r},E^{r}_{t})$ as a simple unweighted graph with $V^{r}=V^{s}$ and $(u,v)\in E^{r}_{t}$ if and only if the length of a shortest path (distance) between the corresponding vertices $u$ and $v$ in $G^{s}$ is less than a specified reachability threshold of $t$ meters [7]. The reachability graph corresponding to the Cardiff city street network of Figure 2 for $t=500$ meters is illustrated in Figure 2. In this figure, for the vertex represented by a blue circle, all adjacent vertices in the corresponding reachability graph $G^{r}_{t}$, $t=500$, are represented by red circles.
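The reachability graph can be derived from the weighted street network graph with one bounded shortest-path search per vertex; the following is a minimal sketch, assuming a NetworkX graph whose edge attribute `length` stores segment lengths in meters:

```python
import networkx as nx

def reachability_graph(Gs: nx.Graph, t: float) -> nx.Graph:
    """Build G^r_t: same vertex set as Gs, with an edge (u, v) whenever the
    shortest-path distance between u and v in Gs is less than t meters."""
    Gr = nx.Graph()
    Gr.add_nodes_from(Gs.nodes)
    for u in Gs.nodes:
        # Distances from u to all vertices reachable within the cutoff t.
        dists = nx.single_source_dijkstra_path_length(
            Gs, u, cutoff=t, weight="length"
        )
        for v, d in dists.items():
            if v != u and d < t:
                Gr.add_edge(u, v)
    return Gr
```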
The $k$-domination problem with respect to a street network reachability graph is a useful model for facility location problems [7]. By placing the facility in question at the locations corresponding to a $k$-dominating set, we ensure that any agent wishing to use the facility in the street network has a guaranteed minimum level of access options. Furthermore, by minimizing the size of a $k$-dominating set, we minimize the cost of providing this facility.

Figure 2: (a) A graph modelling the street network of Cardiff city; (b) An illustration of the reachability graph for the Cardiff city street network: all red vertices are adjacent to the vertex represented by a blue circle.

Figure 3: A graph modelling the street network of Berlin city. All red vertices are adjacent to a given single vertex in the corresponding reachability graph.

### 4.2 Street Networks

To evaluate the proposed methods with respect to street network reachability graphs, we considered 20 medium-sized street networks corresponding to UK cities and 5 large-sized street networks corresponding to international cities. For each city we selected a location in the city centre and extracted the street network graph $G^{s}$ within a bounding box centred at this location. A 3 kilometer bounding box was used for each UK city, and a 15 kilometer bounding box was used for each international city. The street networks in question were obtained from OpenStreetMap, which is a crowdsourcing project for geographical data [27]. For each UK and international street network, the corresponding reachability graph $G^{r}_{t}$ was computed using a reachability threshold $t$ of 500 and 3000 meters respectively. The reachability graphs for the cities of Cardiff and Berlin computed using the above approach are illustrated in Figures 2 and 3 respectively. Tables 2 and 3 display the names of the UK and international cities respectively, the number of vertices and edges in the corresponding street network graphs $G^{s}$, and the number of vertices and edges in the corresponding reachability graphs $G^{r}_{t}$.

City Name | No. vertices $G^{s}$ ($G^{r}_{t}$) | No. edges $G^{s}$ | No. edges $G^{r}_{t}$
---|---|---|---
Bath | 910 | 1,147 | 18,560
Belfast | 1,700 | 2,169 | 62,617
Brighton | 976 | 1,342 | 35,012
Bristol | 1,569 | 2,048 | 47,522
Cardiff | 1,127 | 1,466 | 23,155
Coventry | 1,175 | 1,507 | 26,689
Exeter | 1,250 | 1,475 | 31,997
Glasgow | 1,137 | 1,546 | 24,323
Leeds | 1,647 | 2,197 | 56,511
Leicester | 1,531 | 2,027 | 48,219
Liverpool | 1,273 | 1,721 | 42,564
Manchester | 1,991 | 2,696 | 77,286
Newcastle | 1,109 | 1,402 | 26,614
Nottingham | 1,739 | 2,134 | 51,595
Oxford | 479 | 581 | 8,396
Plymouth | 1,122 | 1,463 | 35,070
Sheffield | 1,582 | 2,065 | 50,534
Southampton | 796 | 1,062 | 19,942
Sunderland | 1,346 | 1,783 | 42,013
York | 1,044 | 1,228 | 23,774

Table 2: The number of vertices and edges in the street network graph $G^{s}$ and the corresponding reachability graph $G^{r}_{t}$ for 20 UK cities.

City Name | No. vertices $G^{s}$ ($G^{r}_{t}$) | No. edges $G^{s}$ | No. edges $G^{r}_{t}$
---|---|---|---
Belgrade, Serbia | 22,218 | 28,465 | 9,092,430
Berlin, Germany | 31,413 | 46,948 | 10,356,466
Boston, USA | 34,713 | 50,190 | 23,379,262
Dublin, Ireland | 35,172 | 41,744 | 20,513,936
Minsk, Belarus | 11,388 | 16,217 | 1,387,938

Table 3: The number of vertices and edges in the street network graph $G^{s}$ and the corresponding reachability graph $G^{r}_{t}$ for 5 international cities.
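The extraction pipeline described above can be reproduced along the following lines with OSMnx [27]. The coordinates, distances, and the `reachability_graph` helper (from the sketch in Section 4.1) are illustrative assumptions, not the authors' exact code:

```python
import networkx as nx
import osmnx as ox

# Hypothetical city-centre coordinate (latitude, longitude) for Cardiff.
CENTRE = (51.4816, -3.1791)

# A 3 km bounding box corresponds to roughly 1500 m from the centre to each
# side; OSMnx stores edge lengths in meters in the "length" attribute.
G_multi = ox.graph_from_point(CENTRE, dist=1500, network_type="drive")

# Collapse the OSMnx multidigraph to the simple undirected graph G^s.
Gs = nx.Graph(G_multi.to_undirected())

# Reachability graph with threshold t = 500 meters (see Section 4.1 sketch).
Gr = reachability_graph(Gs, t=500)
```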
### 4.3 Baseline Methods

We considered the standard heuristic algorithm (“standard greedy”) [15, 7] and the algorithm by Couture et al. [23] as baseline solution methods. As discussed in the related works section of this paper, these are the current state-of-the-art heuristic solution methods for the $k$-domination problem. We now briefly review each of these methods in turn.

The standard heuristic algorithm (“standard greedy”) [15, 7] initializes a set $D$ to be the empty set and iteratively adds vertices to $D$ until it forms a $k$-dominating set. The vertex added to $D$ at each step is determined by selecting uniformly at random a vertex from the set of vertices whose closed neighbourhood currently contains a maximum number of not dominated enough vertices.

The method of Couture et al. [23] first computes a dominating set ($k=1$) by computing a maximal independent set. Next, it computes a maximal independent set for those vertices that are currently not $2$-dominated, and adds them to the dominating set to form a $2$-dominating set. This procedure is repeated $k$ times until a $k$-dominating set is found. In our implementation, a greedy randomized sequential algorithm was used to compute the maximal independent sets [28].

### 4.4 Empirical Results

This section presents an empirical evaluation of the proposed methods for computing $k$-dominating sets in the street network reachability graphs described in Section 4.2 with respect to the baseline methods described in Section 4.3. The beam search heuristic method of Algorithm 2 has a single hyper-parameter, the beam width. Recall that this method with a beam width of one reduces to the greedy search heuristic method of Algorithm 1. For the medium-sized UK city graphs, we present results with respect to the beam search heuristic method for the three beam widths of $1$, $2$, and $4$. For the large international city graphs, we present results with respect to the greedy search heuristic method. Due to the high computational complexity of the beam search heuristic method, it was not feasible to apply it to these large graphs.

Since all new and baseline methods have a randomized component, they may find dominating sets of different sizes when run with different random seeds. To understand the effect of this randomness, for a given method and graph, we applied the method to the graph using ten different random seeds and reported the minimum, mean and standard deviation statistics of the resulting dominating set sizes. The minimum is a very relevant statistic because, when using a randomized algorithm, one typically runs the algorithm multiple times and uses the best result achieved.
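This protocol is straightforward to script; a minimal sketch (the helper names are ours) collecting the reported statistics over ten seeded runs:

```python
import statistics

def evaluate(method, G, k, n_runs=10):
    """Run a randomized k-domination method with n_runs different seeds and
    report the minimum, mean, and standard deviation of the set sizes."""
    sizes = [len(method(G, k, seed=s)) for s in range(n_runs)]
    return min(sizes), statistics.mean(sizes), statistics.stdev(sizes)

# e.g. evaluate(greedy_k_domination, Gr, k=2)
```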
Table 4 displays the statistics of the computed dominating set sizes for $k=1$ for each UK city. The last row in this table displays the average of each statistic computed by each method across all cities. From these results, we see that the method of Couture et al. [23] performed significantly worse. Specifically, its average minimum and mean dominating set sizes are significantly greater than those of the other methods. Furthermore, its average standard deviation of the dominating set size is also significantly greater than that of the other methods. This demonstrates that the method is less stable and more dependent on the choice of random seed. The standard greedy method [15] performed equally well as the beam search heuristic method with a beam width of one. This result can be attributed to Theorem 4, which established an equivalence between these methods. For most cities, the smallest minimum and mean dominating set size was achieved when using the beam search heuristic method with a larger beam width. This is reflected in the corresponding average statistics. This demonstrates the usefulness of using a beam search as opposed to the standard greedy search heuristic. Finally, the average standard deviation of the dominating set size for the standard greedy and beam search heuristic methods is quite small. This demonstrates that both methods are quite stable and less dependent on the choice of random seed.

City Name | Beam Search $b=1$ | Beam Search $b=2$ | Beam Search $b=4$ | Standard Greedy [15, 7] | Couture et al. [23]
---|---|---|---|---|---
Bath | 44, 45.0, 0.8 | 43, 44.7, 1.0 | 43, 44.6, 0.9 | 44, 45.1, 0.9 | 58, 63.9, 3.1
Belfast | 48, 50.5, 1.8 | 48, 50.3, 1.6 | 48, 50.2, 1.5 | 48, 50.3, 1.5 | 74, 76.6, 2.7
Brighton | 28, 28.6, 0.9 | 28, 28.2, 0.6 | 28, 28.2, 0.6 | 28, 28.8, 1.0 | 33, 37.3, 2.2
Bristol | 47, 47.5, 1.0 | 46, 47.4, 1.0 | 47, 47.2, 0.9 | 47, 47.7, 0.8 | 69, 74.1, 5.6
Cardiff | 49, 51.0, 0.9 | 49, 50.8, 0.7 | 48, 50.6, 1.0 | 49, 50.8, 0.9 | 71, 76.8, 4.0
Coventry | 44, 45.1, 0.5 | 44, 44.9, 0.3 | 44, 44.8, 0.4 | 44, 45.2, 0.6 | 69, 73.8, 3.1
Exeter | 50, 50.8, 0.9 | 49, 50.6, 0.8 | 50, 50.6, 0.5 | 50, 51.0, 0.7 | 71, 76.7, 3.0
Glasgow | 58, 59.7, 1.2 | 58, 59.5, 1.1 | 58, 59.2, 0.7 | 58, 59.7, 1.2 | 79, 86.4, 3.7
Leeds | 51, 52.7, 0.8 | 51, 52.6, 0.8 | 51, 52.4, 0.8 | 51, 52.7, 0.8 | 73, 76.7, 3.6
Leicester | 51, 51.8, 0.4 | 51, 51.6, 0.5 | 51, 51.5, 0.5 | 51, 52.0, 0.4 | 75, 80.9, 3.6
Liverpool | 38, 38.4, 0.5 | 38, 38.5, 0.5 | 38, 38.4, 0.5 | 38, 38.6, 0.5 | 50, 56.6, 4.3
Manchester | 45, 46.2, 0.9 | 45, 46.0, 0.6 | 45, 45.9, 0.5 | 45, 46.0, 0.8 | 71, 75.2, 3.7
Newcastle | 52, 53.3, 1.0 | 51, 52.9, 1.1 | 51, 52.6, 1.1 | 52, 53.4, 0.6 | 73, 77.5, 2.4
Nottingham | 56, 57.1, 0.8 | 56, 56.9, 0.7 | 55, 56.6, 0.8 | 56, 56.9, 0.5 | 77, 81.8, 2.6
Oxford | 28, 28.2, 0.4 | 27, 28.0, 0.6 | 27, 27.9, 0.5 | 27, 28.1, 0.7 | 38, 40.9, 1.9
Plymouth | 40, 40.6, 0.6 | 40, 40.5, 0.7 | 39, 40.3, 0.8 | 40, 40.7, 0.6 | 54, 59.1, 3.4
Sheffield | 52, 53.3, 0.9 | 52, 53.0, 0.9 | 51, 52.5, 0.7 | 52, 53.1, 0.5 | 76, 81.4, 3.2
Southampton | 29, 29.9, 0.8 | 28, 29.8, 0.9 | 28, 29.6, 0.8 | 29, 29.6, 0.5 | 41, 46.0, 2.7
Sunderland | 46, 46.5, 0.5 | 46, 46.3, 0.4 | 46, 46.3, 0.4 | 46, 46.5, 0.5 | 56, 62.7, 3.7
York | 39, 39.3, 0.4 | 39, 39.2, 0.4 | 39, 39.1, 0.3 | 39, 39.5, 0.5 | 60, 68.5, 4.7
Average | 44.7, 45.7, 0.8 | 44.4, 45.5, 0.7 | 44.3, 45.4, 0.7 | 44.7, 45.7, 0.7 | 63.4, 68.6, 3.3

Table 4: The minimum, mean and standard deviation of the dominating set ($k=1$) sizes computed using different heuristic methods for 20 UK cities.

Tables 5 and 6 display the statistics of the computed dominating set sizes for $k$ equal to $2$ and $4$ respectively for each UK city. For both values of $k$, the method of Couture et al. [23] performed significantly worse, while the beam search heuristic method performed the best. Comparing the averages of the best found $2$-dominating set sizes, we see that on average the beam search method achieved a best $2$-dominating set size approximately $3$, $3.5$, and $4$ vertices smaller than that achieved by the standard greedy method when using the beam width of 1, 2, and 4, respectively. This approximately equals a 5% reduction in the size of best found $2$-dominating sets. Comparing the averages of the best found $4$-dominating set sizes, we see that on average it achieved a best $4$-dominating set size approximately $13.5$, $14$, and $14.5$ vertices smaller than that achieved by the standard greedy method when using the beam width of 1, 2, and 4, respectively.
This approximately equals a 9% reduction in the size of best found $4$-dominating sets. Similarly to the case $k=1$, the average standard deviation of the dominating set size for the standard greedy and beam search heuristic methods is quite small.

City Name | Beam Search $b=1$ | Beam Search $b=2$ | Beam Search $b=4$ | Standard Greedy [7] | Couture et al. [23]
---|---|---|---|---|---
Bath | 86, 89.0, 1.4 | 87, 89.0, 0.6 | 87, 88.0, 0.6 | 90, 91.2, 0.7 | 118, 122.5, 3.1
Belfast | 96, 98.9, 1.6 | 96, 98.8, 1.6 | 96, 97.6, 1.0 | 100, 104.5, 1.9 | 140, 147.9, 4.0
Brighton | 50, 50.6, 0.5 | 49, 50.0, 0.6 | 49, 49.4, 0.5 | 51, 52.3, 1.0 | 68, 73.2, 3.8
Bristol | 93, 95.2, 1.1 | 93, 94.8, 0.9 | 91, 94.0, 1.4 | 96, 98.2, 1.8 | 142, 145.1, 4.0
Cardiff | 95, 97.7, 1.4 | 95, 97.1, 1.1 | 92, 95.9, 1.6 | 97, 98.8, 0.9 | 141, 146.1, 3.7
Coventry | 85, 85.8, 0.7 | 84, 85.3, 0.6 | 84, 85.1, 0.7 | 88, 89.1, 1.3 | 138, 142.3, 3.8
Exeter | 94, 96.4, 1.1 | 94, 96.1, 0.9 | 94, 95.7, 1.0 | 97, 99.3, 1.6 | 141, 148.4, 4.5
Glasgow | 111, 112.5, 1.3 | 108, 111.6, 1.7 | 108, 110.6, 1.7 | 113, 116.3, 2.2 | 149, 157.0, 4.4
Leeds | 99, 100.3, 0.6 | 99, 100.0, 0.6 | 98, 99.6, 1.0 | 101, 102.6, 2.3 | 143, 150.2, 5.7
Leicester | 94, 94.8, 0.6 | 93, 94.4, 0.9 | 93, 94.1, 0.8 | 100, 101.0, 0.8 | 146, 150.3, 2.9
Liverpool | 71, 72.4, 0.8 | 71, 72.4, 0.8 | 71, 72.0, 0.8 | 73, 74.4, 0.6 | 102, 110.0, 4.6
Manchester | 92, 93.0, 0.8 | 90, 92.2, 0.9 | 90, 91.5, 0.9 | 92, 93.9, 1.9 | 143, 147.5, 3.3
Newcastle | 95, 97.2, 1.8 | 95, 96.4, 1.4 | 94, 95.4, 1.1 | 99, 101.5, 1.2 | 133, 142.3, 4.8
Nottingham | 102, 103.5, 0.8 | 102, 103.3, 0.8 | 102, 103.3, 0.8 | 107, 108.5, 0.8 | 156, 161.0, 4.3
Oxford | 55, 55.5, 0.7 | 54, 55.2, 0.6 | 54, 54.9, 0.7 | 58, 59.4, 1.0 | 74, 76.8, 2.2
Plymouth | 74, 76.3, 1.2 | 74, 75.8, 1.0 | 73, 75.0, 1.1 | 77, 78.5, 0.7 | 105, 111.2, 4.9
Sheffield | 98, 100.0, 1.4 | 97, 99.5, 1.5 | 97, 98.9, 1.3 | 105, 106.7, 1.7 | 148, 155.1, 5.4
Southampton | 61, 61.7, 0.6 | 61, 61.6, 0.5 | 60, 61.1, 0.7 | 62, 64.2, 1.0 | 78, 87.2, 5.5
Sunderland | 88, 90.4, 1.4 | 88, 89.9, 1.1 | 87, 89.1, 1.1 | 91, 92.2, 0.6 | 120, 121.8, 2.6
York | 77, 77.9, 0.7 | 77, 78.0, 0.8 | 77, 77.6, 0.6 | 78, 78.8, 0.7 | 127, 131.0, 2.9
Average | 85.8, 87.4, 1.0 | 85.3, 87.0, 0.9 | 84.8, 86.4, 0.9 | 88.7, 90.5, 1.2 | 125.6, 131.3, 4.0

Table 5: The minimum, mean and standard deviation of the dominating set ($k=2$) sizes computed using different heuristic methods for 20 UK cities.
City Name | Beam Search $b=1$ | Beam Search $b=2$ | Beam Search $b=4$ | Standard Greedy [7] | Couture et al. [23]
---|---|---|---|---|---
Bath | 160, 162.8, 1.6 | 159, 161.3, 1.2 | 159, 160.0, 1.1 | 178, 180.0, 1.3 | 210, 220.4, 4.3
Belfast | 178, 181.0, 1.7 | 177, 180.2, 2.0 | 177, 179.6, 2.0 | 194, 196.0, 1.4 | 257, 263.0, 4.4
Brighton | 93, 95.7, 1.4 | 93, 94.4, 0.9 | 92, 94.8, 1.9 | 101, 103.5, 2.1 | 123, 135.4, 5.5
Bristol | 176, 177.7, 1.1 | 175, 176.8, 0.9 | 175, 176.4, 0.8 | 187, 188.3, 0.9 | 253, 263.2, 5.4
Cardiff | 181, 185.0, 1.7 | 181, 183.6, 1.7 | 181, 183.2, 1.4 | 196, 199.6, 2.0 | 238, 252.5, 7.3
Coventry | 171, 175.1, 1.7 | 171, 174.1, 1.5 | 170, 172.6, 1.4 | 182, 183.4, 1.5 | 247, 255.2, 5.2
Exeter | 182, 183.1, 0.9 | 182, 182.8, 0.6 | 181, 182.3, 0.6 | 196, 199.4, 1.9 | 259, 263.9, 3.2
Glasgow | 198, 201.6, 1.9 | 198, 200.5, 1.6 | 197, 199.8, 1.6 | 221, 226.2, 2.3 | 256, 264.3, 4.3
Leeds | 186, 188.5, 1.2 | 187, 188.0, 0.8 | 186, 187.1, 0.7 | 198, 201.5, 2.4 | 264, 268.6, 3.8
Leicester | 176, 179.6, 1.4 | 176, 179.3, 1.6 | 175, 177.7, 1.8 | 199, 202.0, 2.3 | 267, 274.1, 4.5
Liverpool | 133, 134.5, 1.4 | 132, 133.7, 1.1 | 132, 133.0, 0.8 | 143, 145.4, 1.5 | 194, 201.9, 4.9
Manchester | 177, 179.6, 1.2 | 177, 179.1, 1.2 | 177, 178.5, 1.0 | 185, 188.5, 1.3 | 251, 266.6, 7.6
Newcastle | 170, 172.4, 1.0 | 169, 171.5, 1.2 | 170, 171.2, 0.7 | 189, 192.9, 2.3 | 242, 246.6, 4.9
Nottingham | 194, 196.5, 1.1 | 194, 195.3, 1.0 | 193, 195.2, 1.2 | 205, 208.4, 2.4 | 288, 295.1, 4.6
Oxford | 100, 101.7, 1.2 | 99, 100.8, 0.9 | 100, 100.8, 0.8 | 108, 114.8, 3.1 | 129, 130.9, 1.2
Plymouth | 137, 138.8, 1.3 | 136, 137.9, 1.1 | 135, 137.0, 1.2 | 153, 155.1, 1.4 | 195, 200.4, 4.0
Sheffield | 182, 184.3, 0.9 | 182, 183.2, 1.1 | 180, 182.2, 1.2 | 202, 204.3, 1.5 | 272, 278.3, 4.3
Southampton | 113, 114.7, 1.6 | 113, 114.2, 1.3 | 112, 113.2, 1.4 | 124, 125.2, 1.2 | 150, 156.0, 3.4
Sunderland | 164, 164.8, 0.7 | 163, 164.1, 0.7 | 162, 163.6, 1.0 | 176, 180.7, 3.1 | 218, 223.8, 4.2
York | 146, 147.1, 1.0 | 145, 146.4, 1.2 | 144, 145.8, 1.2 | 153, 157.2, 2.2 | 223, 231.0, 6.0
Average | 160.8, 163.2, 1.3 | 160.4, 162.3, 1.1 | 159.9, 161.7, 1.1 | 174.5, 177.6, 1.9 | 226.8, 234.5, 4.6

Table 6: The minimum, mean and standard deviation of the dominating set ($k=4$) sizes computed using different heuristic methods for 20 UK cities.

Tables 7, 8, and 9 display the statistics of the computed dominating set sizes for $k$ equal to $1$, $2$, and $4$ respectively for each international city. For all values of $k$, the method of Couture et al. [23] performed significantly worse. For $k$ equal to $2$ and $4$, the proposed greedy heuristic method performed the best. In fact, comparing the averages of the best found dominating set sizes, we see that on average it achieved a best $2$- and $4$-dominating set size approximately $6$ and $23$ vertices respectively smaller than that achieved by the standard greedy method. This approximately equals a 3% and 6% reduction respectively in the size of the best found $k$-dominating sets.
---|---|---|--- | | [15, 7] | [23] Belgrade, Serbia | 99, 100.3, 0.9 | 99, 100.9, 0.9 | 128, 138.8, 5.0 Berlin, Germany | 146, 147.2, 1.1 | 146, 147.0, 0.7 | 174, 181.7, 6.0 Boston, USA | 71, 72.8, 1.2 | 71, 72.6, 1.4 | 85, 86.8, 2.3 Dublin, Ireland | 88, 90.2, 1.8 | 89, 89.8, 1.2 | 129, 134.1, 3.5 Minsk, Belarus | 139, 139.8, 0.9 | 138, 139.3, 1.1 | 183, 188.9, 4.6 Average | 108.6, 109.9, 1.1 | 108.6, 109.9, 1.0 | 139.8, 146.0, 4.2 Table 7: The minimum, mean and standard deviation of the dominating set ($k=1$) sizes computed using different heuristic methods for 5 international cities. City Name | Proposed Greedy | Standard Greedy | Couture et al. ---|---|---|--- | | [15, 7] | [23] Belgrade, Serbia | 197, 197.5, 0.9 | 199, 200.1, 0.7 | 271, 281.2, 6.7 Berlin, Germany | 268, 270.4, 1.4 | 277, 278.0, 1.6 | 352, 361.5, 7.4 Boston, USA | 133, 135.0, 1.1 | 137, 138.0, 0.6 | 167, 175.7, 6.1 Dublin, Ireland | 166, 166.8, 0.4 | 175, 175.8, 0.7 | 256, 269.0, 7.6 Minsk, Belarus | 265, 266.4, 1.3 | 271, 272.7, 1.3 | 360, 365.4, 5.4 Average | 205.8, 207.2, 1.0 | 211.8, 212.9, 0.9 | 281.2, 290.5, 6.6 Table 8: The minimum, mean and standard deviation of the dominating set ($k=2$) sizes computed using different heuristic methods for 5 international cities. City Name | Proposed Greedy | Standard Greedy | Couture et al. ---|---|---|--- | | [15, 7] | [23] Belgrade, Serbia | 379, 381.4, 2.0 | 397, 400.1, 1.3 | 537, 550.5, 8.5 Berlin, Germany | 501, 503.4, 1.9 | 531, 537.4, 4.4 | 685, 697.7, 13.4 Boston, USA | 255, 255.4, 0.5 | 265, 266.8, 2.0 | 336, 349.4, 7.7 Dublin, Ireland | 317, 318.5, 1.5 | 342, 344.1, 1.6 | 514, 530.5, 10.4 Minsk, Belarus | 506, 509.4, 2.9 | 539, 542.3, 2.4 | 684, 701.5, 9.6 Average | 391.6, 393.62, 1.76 | 414.8, 418.14, 2.34 | 551.2, 565.92, 9.92 Table 9: The minimum, mean and standard deviation of the dominating set ($k=4$) sizes computed using different heuristic methods for 5 international cities. The proposed greedy and beam search heuristic methods perform better than the standard greedy approach for larger values of $k$. This can be attributed to the fact that, as the value of $k$ increases, the number of levels to which a vertex can be dominated increases. For example, in the case $k=1$, a vertex can only be dominated or not dominated. On the other hand, when $k=4$, a vertex can be dominated to five different levels, corresponding to the values of coverage parameter $C(D,v)$ in problem formulation (3). The proposed greedy and beam search heuristic methods exploit this information to make better decisions, while the standard greedy approach does not. In summary, for the $k$-domination problem with $k>1$, the proposed new greedy and beam search heuristic methods outperform the baseline methods, and the performance gain is greater for larger values of $k$. Table 10 reports running times measured in seconds required by the proposed and baseline methods to compute $2$-dominating sets for five UK cities and five international cities. All algorithms were implemented in the Python programming language and executed on a desktop computer with an Intel Core i7-8700 CPU. The proposed and standard greedy algorithms run very quickly on the medium sized UK city networks. Both algorithms run reasonably fast on the large sized international city networks considering the size of the networks in question. The general beam search heuristic method runs much slower than either of the greedy algorithms. 
This can be attributed to its higher computational complexity plus the challenge of transforming this method into an efficient implementation. In particular, although the beam search heuristic with a beam width equal to $1$ is equivalent to the proposed greedy algorithm, its overhead makes it much slower and less efficient in comparison to the pure greedy version. The method of [23] runs very quickly on the medium-sized UK city networks as well as the large-sized international city networks.

City Name | Proposed Greedy | Beam Search $b=1$ | Beam Search $b=2$ | Beam Search $b=4$ | Standard Greedy [7] | Couture et al. [23]
---|---|---|---|---|---|---
Bath | 1, 0 | 112, 1 | 464, 15 | 1736, 41 | 1, 0 | 1, 0
Belfast | 3, 0 | 582, 14 | 2182, 99 | 8834, 177 | 2, 0 | 1, 0
Brighton | 1, 0 | 78, 2 | 309, 4 | 1257, 31 | 1, 0 | 1, 0
Bristol | 2, 0 | 447, 8 | 1752, 30 | 7156, 161 | 2, 0 | 1, 0
Cardiff | 1, 0 | 210, 2 | 835, 21 | 3327, 100 | 1, 0 | 1, 0
Belgrade | 907, 11 | - | - | - | 800, 17 | 14, 1
Berlin | 1435, 40 | - | - | - | 1199, 31 | 21, 1
Boston | 1761, 21 | - | - | - | 1437, 21 | 23, 2
Dublin | 1828, 9 | - | - | - | 1593, 18 | 23, 1
Minsk | 188, 1 | - | - | - | 165, 3 | 7, 0

Table 10: The mean and standard deviation of running times (in seconds) for computing dominating sets ($k=2$) using different heuristic methods.

## 5 Conclusion

In this work, we proposed novel greedy and beam search heuristic methods for the $k$-domination problem. These methods are inspired by a novel formulation (3) of the problem. The methods were evaluated with respect to two baseline methods on a set of street network reachability graphs. Our evaluation found that, for the classic domination problem ($k=1$), the proposed methods perform equally well as one of the existing methods. This result is attributed to an equivalence between the methods in this particular case. On the other hand, for the $k$-domination problem with $k>1$, the proposed methods outperform the baseline methods, and the performance gain is greater for larger values of $k$.

A useful characteristic of the proposed methods is their simplicity. The proposed beam search heuristic method of Algorithm 2 with a beam width of one reduces to the greedy search heuristic method of Algorithm 1. The latter algorithm is efficient and simple to implement. This contrasts with metaheuristic or machine learning based methods for combinatorial optimization problems, which can be very challenging to implement [29].

Possible directions for future research in this area include the following. The evaluation presented in this work was purely empirical. In future work, it would be interesting to provide a deeper analysis of the performance of the proposed methods. Such analysis could take the form of proving some bounds on the size of the $k$-dominating sets found by the algorithms. The related works section of this article highlighted that there currently exist no metaheuristic methods for the $k$-domination problem where $k>1$. Given the good performance of such methods on the classic domination problem ($k=1$), this presents an interesting direction for research as well.

## References

* [1] Danfei Xu, Yuke Zhu, Christopher B. Choy, and Li Fei-Fei. Scene graph generation by iterative message passing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5410–5419, 2017.
* [2] Padraig Corcoran, Musfira Jilani, Peter Mooney, and Michela Bertolotto. Inferring semantics from geometry: the case of street networks. In Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems, pages 1–10, 2015.
In Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems, pages 1–10, 2015.
* [3] Andrei Gagarin, Anush Poghosyan, and Vadim Zverovich. Randomized algorithms and upper bounds for multiple domination in graphs and networks. Discrete Applied Mathematics, 161(4-5):604–611, 2013.
* [4] Ralf Klasing and Christian Laforest. Hardness results and approximation algorithms of $k$-tuple domination in graphs. Information Processing Letters, 89(2):75–83, 2004.
* [5] T.W. Haynes, S.T. Hedetniemi, and P.J. Slater. Fundamentals of Domination in Graphs. Marcel Dekker, New York, 1998.
* [6] My T Thai, Ning Zhang, Ravi Tiwari, and Xiaochun Xu. On approximation algorithms of k-connected m-dominating sets in disk graphs. Theoretical Computer Science, 385(1-3):49–59, 2007.
* [7] Andrei Gagarin and Padraig Corcoran. Multiple domination models for placement of electric vehicle charging stations in road networks. Computers & Operations Research, 96:69–79, 2018.
* [8] Mohammad Mehdi Daliri Khomami, Alireza Rezvanian, Negin Bagherpour, and Mohammad Reza Meybodi. Minimum positive influence dominating set and its application in influence maximization: a learning automata approach. Applied Intelligence, 48(3):570–593, 2018.
* [9] M.S. Jacobson and K. Peters. Complexity questions for $n$-domination and related parameters. Congressus Numerantium, 68:7–22, 1989.
* [10] J.K. Lan and G.J. Chang. Algorithmic aspects of the $k$-domination problem in graphs. Discrete Applied Mathematics, 161:1513–1520, 2013.
* [11] W.H. Bird. Computational Methods for Domination Problems. PhD thesis, Department of Computer Science, University of Victoria, BC, Canada, 2017.
* [12] N. Assadian. Dominating sets of the Cartesian products of cycles. Master’s thesis, Department of Computer Science, University of Victoria, BC, Canada, 2019.
* [13] Yiyuan Wang, Shaowei Cai, Jiejiang Chen, and Minghao Yin. A fast local search algorithm for minimum weight dominating set problem on massive graphs. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 1514–1522. AAAI Press, 2018.
* [14] Weiping Shang, Pengjun Wan, Frances Yao, and Xiaodong Hu. Algorithms for minimum m-connected k-tuple dominating set problem. Theoretical Computer Science, 381(1-3):241–247, 2007.
* [15] Abhay K Parekh. Analysis of a greedy heuristic for finding small dominating sets in graphs. Information Processing Letters, 39(5):237–240, 1991.
* [16] Laura A Sanchis. Experimental analysis of heuristic algorithms for the dominating set problem. Algorithmica, 33(1):3–18, 2002.
* [17] Stephen Eubank, V.S. Anil Kumar, Madhav V Marathe, Aravind Srinivasan, and Nan Wang. Structural and algorithmic aspects of massive social networks. In Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 718–727, 2004.
* [18] Mustapha Chellali, Odile Favaron, Adriana Hansberg, and Lutz Volkmann. k-domination and k-independence in graphs: A survey. Graphs and Combinatorics, 28(1):1–55, 2012.
* [19] Abdel-Rahman Hedar and Rashad Ismail. Hybrid genetic algorithm for minimum dominating set problem. In International Conference on Computational Science and Its Applications, pages 457–467. Springer, 2010.
* [20] Abdel-Rahman Hedar and Rashad Ismail. Simulated annealing with stochastic local search for minimum dominating set problem. International Journal of Machine Learning and Cybernetics, 3(2):97–109, 2012.
* [21] Chin Kuan Ho, Yashwant Prasad Singh, and Hong Tat Ewe.
An enhanced ant colony optimization metaheuristic for the minimum dominating set problem. Applied Artificial Intelligence, 20(10):881–903, 2006.
* [22] Martin Nehéz, Dušan Bernát, and Martin Klaučo. Comparison of algorithms for near-optimal dominating sets computation in real-world networks. In Proceedings of the 16th International Conference on Computer Systems and Technologies, pages 199–206, 2015.
* [23] Mathieu Couture, Michel Barbeau, Prosenjit Bose, and Evangelos Kranakis. Incremental construction of k-dominating sets in wireless sensor networks. Ad Hoc & Sensor Wireless Networks, 5, 2008.
* [24] Aric Hagberg, Pieter Swart, and Daniel S Chult. Exploring network structure, dynamics, and function using NetworkX. Technical report, Los Alamos National Lab. (LANL), Los Alamos, NM (United States), 2008.
* [25] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall Press, USA, 3rd edition, 2009.
* [26] Padraig Corcoran and Peter Mooney. Characterising the metric and topological evolution of OpenStreetMap network representations. The European Physical Journal Special Topics, 215(1):109–122, 2013.
* [27] Geoff Boeing. OSMnx: New methods for acquiring, constructing, analyzing, and visualizing complex street networks. Computers, Environment and Urban Systems, 65:126–139, 2017.
* [28] Guy E Blelloch, Jeremy T Fineman, and Julian Shun. Greedy sequential maximal independent set and matching are parallel on average. In Proceedings of the Twenty-Fourth Annual ACM Symposium on Parallelism in Algorithms and Architectures, pages 308–317, 2012.
* [29] Elias Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. In Advances in Neural Information Processing Systems, pages 6348–6358, 2017.
# LAION-5B: An open large-scale dataset for training next generation image-text models

Christoph Schuhmann1 §§°° Romain Beaumont1 §§°° Richard Vencu1,3,8 §§°° Cade Gordon2 §§°° Ross Wightman1 §§ Mehdi Cherti1,10 §§ Theo Coombes1 Aarush Katta1 Clayton Mullis1 Mitchell Wortsman6 Patrick Schramowski1,4,5 Srivatsa Kundurthy1 Katherine Crowson1,8,9 Ludwig Schmidt6 °° Robert Kaczmarczyk1,7 °° Jenia Jitsev1,10 °°

LAION1, UC Berkeley2, Gentec Data3, TU Darmstadt4, Hessian.AI5, University of Washington, Seattle6, Technical University of Munich7, Stability AI8, EleutherAI9, Juelich Supercomputing Center (JSC), Research Center Juelich (FZJ)10

<EMAIL_ADDRESS>

§§ Equal first contributions, °° Equal senior contributions

###### Abstract

Groundbreaking language-vision architectures like CLIP and DALL-E proved the utility of training on large amounts of noisy image-text data, without relying on the expensive accurate labels used in standard vision unimodal supervised learning. The resulting models showed capabilities of strong text-guided image generation and transfer to downstream tasks, while performing remarkably at zero-shot classification with noteworthy out-of-distribution robustness. Since then, large-scale language-vision models like ALIGN, BASIC, GLIDE, Flamingo, and Imagen have made further improvements. Studying the training and capabilities of such models requires datasets containing billions of image-text pairs. Until now, no datasets of this size have been made openly available for the broader research community. To address this problem and democratize research on large-scale multi-modal models, we present LAION-5B, a dataset consisting of 5.85 billion CLIP-filtered image-text pairs, of which 2.32B contain English language. We show successful replication and fine-tuning of foundational models like CLIP, GLIDE and Stable Diffusion using the dataset, and discuss further experiments enabled by an openly available dataset of this scale. Additionally, we provide several nearest neighbor indices, an improved web interface for dataset exploration and subset generation, and detection scores for watermark, NSFW, and toxic content. (Project page: https://laion.ai/laion-5b-a-new-era-of-open-large-scale-multi-modal-datasets/)

## 1 Introduction

Learning from multimodal data such as text, images, and audio is a longstanding research challenge in machine learning [51, 56, 83, 31, 86]. Recently, contrastive loss functions combined with large neural networks have led to breakthroughs in the generalization capabilities of vision and language models [58, 59, 66]. For instance, OpenAI’s CLIP models [58] achieved large gains in zero-shot classification on ImageNet [65], improving from the prior top-1 accuracy of 11.5% [41] to 76.2%. In addition, CLIP achieved unprecedented performance gains on multiple challenging distribution shifts [78, 61, 23, 82, 3, 70]. Inspired by CLIP’s performance, numerous groups have further improved image-text models by increasing the amount of computation and the training set size [28, 54, 94, 89]. Another recent success of multimodal learning is in image generation, where DALL-E [59] and later models [60, 66, 52, 90, 64] demonstrated the potential of text-guided image generation by producing high-quality images specific to the provided text. A critical ingredient in this new generation of image-text models is the pre-training dataset.
All of the aforementioned advances rely on large datasets containing hundreds of millions or even billions of image-text pairs, e.g., 400 million for CLIP [58] and 6.6 billion for BASIC [54]. However, _none of these datasets are publicly available_. While OpenAI still released the CLIP models publicly [58], later papers made neither the pre-training dataset nor the resulting models available to the wider research community [28, 54, 2, 90, 66, 52, 89]. As a result, research in this area has concentrated in a small number of industrial research labs, limiting transparency and impeding research progress.

In this work, we address this challenge and make multimodal training more accessible by assembling a public dataset that is suitable for training large image-text models. Specifically, we introduce LAION-5B, the largest public image-text dataset, containing over 5.8 billion examples (see Figure 2 for a comparison). By starting from Common Crawl [1] and filtering this data source with an existing CLIP model, we derive a dataset consisting of three parts: 2.32 billion English image-text examples, 2.26 billion multilingual examples, and 1.27 billion examples that are not specific to a particular language (e.g., places, products, etc.). Beyond assembling the dataset, we also explore its ethical implications and the flaws that emerge with large-scale data collection. By releasing LAION-5B publicly, we offer the first opportunity for the community to audit and refine a dataset of this magnitude.

Figure 1: Zero-Shot Accuracy. CLIP models trained on LAION-400M (ours) [69], a previously released subset of LAION-5B, show competitive zero-shot accuracy compared to CLIP models trained on OpenAI’s original training set WIT when evaluated on ImageNet-1k.

Dataset | # English Img-Txt Pairs
---|---
_Public Datasets_ |
MS-COCO | 330K
CC3M | 3M
Visual Genome | 5.4M
WIT | 5.5M
CC12M | 12M
RedCaps | 12M
YFCC100M | 100M*
LAION-5B (Ours) | 2.3B
_Private Datasets_ |
CLIP WIT (OpenAI) | 400M
ALIGN | 1.8B
BASIC | 6.6B

*Although YFCC100M contains 100M image-text pairs, it is unclear how well the text matches the image for an average example from the dataset. Radford et al. [57]’s curation procedure reduced YFCC100M to 15M samples.

Figure 2: Dataset Size. LAION-5B is more than 20 times larger than other public English image-text datasets. We extend the analysis from Desai et al. [14] and compare the sizes of public and private image-text datasets.

To validate that LAION-5B is indeed suitable for training large image-text models, we conduct multiple experiments. We focus on matching the performance of OpenAI’s CLIP models because they are the largest publicly released image-text models. OpenAI’s CLIP models were trained on 400 million image-text pairs, and hence we also train CLIP models on a subset of LAION-5B containing the same number of examples (“LAION-400M”). Across a diverse range of problem settings including ImageNet (zero-shot), distribution shifts, VTAB, retrieval, and fine-tuning, our models trained on LAION-400M match or come close to the performance of OpenAI’s CLIP models. Our ViT-L/14 models trained with OpenCLIP are the first open-source reproductions of the largest CLIP models released by OpenAI. Despite these validation results, LAION-5B is _not_ a finished data product. Due to the immense size of current image-text pre-training datasets, curating LAION-5B for widespread use goes beyond the scope of a single research paper.
Hence, we release not only our dataset but also the software stack we built for assembling LAION-5B. We view our initial data release and this paper as a first step on the way towards a widely applicable pre-training dataset for multimodal models. As a result, we strongly recommend that LAION-5B only be used for academic research purposes in its current form. We advise against any applications in deployed systems without carefully investigating the behavior and possible biases of models trained on LAION-5B.

The remainder of the paper proceeds as follows. After reviewing related work, we present our data collection process for LAION-5B in Section 3. Section 4 then describes LAION-5B’s composition, including its various subsets. To validate LAION-5B, we reproduce and evaluate different image-text models in Section 5. Before concluding, we discuss the technical limitations of LAION-5B in Section 6 and safety and ethics concerns in Section 7.

## 2 Related Work

Vision-Language Models. Radford et al. [58] made a large step forward in multimodal learning for image-text data with their CLIP (Contrastive Language–Image Pre-training) model. The authors proposed a contrastive learning scheme to embed both images and text into a shared representation space, which enabled unparalleled performance in zero-shot image classification. Moreover, CLIP made large progress on multiple challenging distribution shifts [78, 84]. After CLIP’s initial success, ALIGN and BASIC improved contrastive multimodal learning by increasing the training set size and the batch size used for training [28, 54]. LiT also increased training scale and experimented with a combination of pre-trained image representations and contrastive fine-tuning to connect frozen image representations to text [94]. Flamingo introduced the first large vision-language model with in-context learning [2]. Other papers have combined contrastive losses with image captioning to further improve performance [89, 43]. Beyond image classification and retrieval, the community later adapted CLIP to additional vision tasks such as object navigation and visual question answering [50, 32, 72, 17].

Another direction that has recently seen large progress in multimodal learning is text-guided image generation [47, 62, 95]. Specifically, DALL-E demonstrated diverse image generation capabilities for text prompts combining multiple concepts [59]. GLIDE, DALL-E 2, Imagen, Parti, and Stable Diffusion then improved visual fidelity and text-prompt correspondence [52, 66, 60, 90, 64].

Image-Text Datasets. Earlier dataset creation efforts such as MS-COCO and Visual Genome curated image and region labels through human annotation [44, 36]. While this resulted in high-quality labels, it also limited the scale of the datasets to only 330K and 5M examples, respectively. The web-harvested YFCC-100M dataset is substantially larger, with about 99 million images and one million videos from Flickr, but it only contains user-generated metadata without additional annotations collected specifically for training computer vision models [79]. As a result, the text associated with an image sometimes has little to no correspondence with the actual image content. To address this shortcoming of web-harvested image-text data, the Conceptual Captions dataset (CC3M) started with images and alt-text collected from the web, but then performed additional data cleaning procedures [71].
To increase the size of the dataset, researchers later relaxed the filtering protocol to arrive at the subsequent CC12M dataset [11]. Building datasets from alt-text continued with ALT200M [26] and ALIGN [28], which increased the dataset size up to 1.8 billion image-text pairs. In contrast to relying on alt-text, RedCaps used the captions provided by Reddit users to collect higher-quality captions [14]. Datasets with non-English image-text pairs are less common. As a result, researchers have translated English captioning datasets into other languages such as Farsi, Korean, and Japanese [74, 73, 67]. To the best of our knowledge, the largest multilingual dataset before LAION-5B has around 36 million samples, from Wikipedia Image Text [75]. With the release of LAION-5B, researchers now have access to roughly two orders of magnitude more multilingual samples, which provides new opportunities for research on low-resource languages and multilingual models.

Scaling Behavior. Improving model performance by increasing data scale has been a theme in machine learning since at least the ImageNet dataset [13]. In the following decade, computer vision benefited from growth in model, data, and compute scale, in addition to advances in both convolutional and transformer architectures [33, 15, 81, 92]. Industrial research labs assembled large internal datasets such as Instagram-1B, JFT-300M, and JFT-3B to support image pre-training [46, 77, 93]. Natural language processing (NLP) demonstrated the beneficial effect of model, data, and compute scale on generalization through large language models such as GPT-3 [8] and associated experiments on scaling behavior [30]. Community efforts like The Pile [18] and BigScience ROOTS [40] made large text datasets more accessible.

## 3 Collection Methodology

We constructed LAION-5B starting from Common Crawl, a public web archive [1]. The Common Crawl organization has crawled the web since 2008 and publishes the results in snapshots approximately every month. Recent snapshots each contain about 300 TiB of data for around 3 billion web pages. In the following, we introduce our pipeline for assembling and filtering a vision-language dataset from images in Common Crawl and their associated HTML alt-text.

### 3.1 Dataset Assembly Pipeline

Our dataset assembly pipeline follows the flowchart of Figure 3. At a high level, the pipeline consists of three main components: (i) distributed filtering of the Common Crawl web pages, (ii) distributed downloading of image-text pairs, and (iii) content filtering. The code used for the dataset pipeline may be found on GitHub (https://github.com/rvencu/crawlingathome-gpu-hcloud). We now describe each component in more detail.

Figure 3: Overview of the acquisition pipeline: Files are downloaded, tracked, and undergo distributed inference to determine inclusion. Those above the specified CLIP threshold are saved.

Web page filtering. To extract image-text pairs from Common Crawl, we parse the HTML IMG (image) tags from Common Crawl’s WAT metadata files (see https://commoncrawl.org/the-data/get-started/ for details of the metadata format). Specifically, we focus on images with an _alt-text_ so we can create image-text pairs. The alt-text is an HTML attribute of IMG tags containing alternative text for situations where the corresponding image cannot be rendered.
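As an illustration, extracting candidate image-text pairs from a single page might look like the sketch below. Note that the production pipeline parses Common Crawl's pre-extracted WAT metadata rather than raw HTML; BeautifulSoup is assumed here purely for brevity.

```python
from bs4 import BeautifulSoup  # assumed here; the real pipeline reads WAT metadata

def extract_candidate_pairs(html: str):
    """Yield (image URL, alt-text) candidates from one page's HTML."""
    soup = BeautifulSoup(html, "html.parser")
    for img in soup.find_all("img"):
        url, alt = img.get("src"), img.get("alt")
        # Mirror the later post-processing rule that drops texts with
        # fewer than 5 characters.
        if url and alt and len(alt.strip()) >= 5:
            yield url, alt.strip()
```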
In practice, screen reader software for a visually impaired person may read this alt-text in place of an image, and a search engine may use the alt-text to better index a web page without analyzing the actual image content. After extracting the alt-text, we perform language detection using CLD3 [53] with three possible outputs: English, another language, or no detected language (i.e., all detections are below a confidence threshold [69]). Based on a manual inspection of a random sample, the “no language” set contains language-agnostic short-form text such as the names of products and places. We stored the resulting data in a PostgreSQL server for processing in the next stages of the pipeline. We maintained about 500M image URLs in the server at all times.

Downloading Image-Text Pairs. In order to maximize resource utilization, we downloaded the raw images from the parsed URLs with asynchronous requests using the Trio and Asks Python libraries. To limit costs, we chose a small cloud node with 2 vCPUs, 1 GB of RAM, and 10 Mbps download bandwidth as a worker instance. Such a worker can process 10,000 links in about 10–15 minutes. We utilized roughly 300 workers in parallel and batched the workload into chunks of 10,000 links taken from the aforementioned PostgreSQL server.

Post-Processing. After downloading the WAT files from Common Crawl, we removed data with fewer than 5 characters of text, less than 5 KB of image data, and potentially malicious, large, or redundant images. To conclude the pipeline, we filtered image-text pairs based on their content. Specifically, we computed cosine similarities between the image and text encodings with OpenAI’s ViT-B/32 CLIP model. For languages other than English, we utilized the multilingual CLIP ViT-B/32 from Carlsson et al. [10]. While OpenAI later released larger CLIP models, these models were not available when we began to assemble LAION-5B. For consistency, we therefore relied on ViT-B/32 CLIP models for the entire dataset. We removed all English image-text pairs with cosine similarity below 0.28, and all other pairs with similarity below 0.26. This step removed around 90% of the original 50 billion images, leaving just short of 6 billion examples.

### 3.2 Safety During Collection

Current automated filtering techniques are far from perfect: harmful images are likely to pass, and others are likely to be falsely removed. We make a best effort to identify, document, and tag such content. In the case of illegal content, we computed CLIP embeddings to filter out such samples. Furthermore, these images and texts could amplify the social bias of machine learning models, especially ones trained with no or weak supervision [76]. It is important to note that the above-mentioned classifiers are not perfect, especially given the complexity of these tasks and the diversity of opinions across cultures. Therefore, we advocate using these tags responsibly rather than relying on them to create a truly safe, “production-ready” subset by removing all potentially problematic samples. For a detailed discussion in this regard, we refer to Sec. 7. To encourage research in fields such as dataset curation, we refrain from removing potentially offensive samples and tag them instead. The user can decide whether to include content depending on their task. To this end, we also encourage model developers to state, e.g., in their model card [49], which subsets and tagged images are used.
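The content filter just described reduces to a cosine-similarity threshold on CLIP image and text embeddings, and the safety taggers discussed next operate on the same embeddings. A minimal single-pair sketch, assuming the open_clip package (the production pipeline ran this as distributed batch inference over OpenAI's ViT-B/32 CLIP):

```python
import torch
import open_clip
from PIL import Image

# Thresholds follow the text above: 0.28 for English pairs, 0.26 otherwise
# (with the multilingual CLIP ViT-B/32 [10] used for non-English pairs).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

def keep_pair(image_path: str, caption: str, threshold: float = 0.28) -> bool:
    """Return True if the image-caption pair passes the similarity filter."""
    image = preprocess(Image.open(image_path)).unsqueeze(0)
    tokens = tokenizer([caption])
    with torch.no_grad():
        img_emb = model.encode_image(image)
        txt_emb = model.encode_text(tokens)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb @ txt_emb.T).item() >= threshold
```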
We apply Q16 [68] and our own specialized pornographic and sexualized content classifier (referred to here as NSFW) to identify and document a broad range of inappropriate concepts covering not only persons but also objects, symbols, and text; cf. [68] and Appendix Sec. C.5 and Sec. C.6 for details. Both classifiers are based on CLIP embeddings. Following our main intention of a publicly available dataset, these two approaches, like all other implementations related to LAION-5B, are open-sourced. We separate pornographic content from otherwise inappropriate content (e.g., harm, exploitation, and degradation). Both can be enabled and disabled in the publicly available dataset exploration UI (https://knn5.laion.ai/). With both the UI and the openly accessible code, we encourage users to explore and, subsequently, report further not-yet-detected content, and thus contribute to the improvement of our and other existing approaches.

## 4 Dataset Composition

Figure 4: LAION-5B examples. Sample images from a nearest neighbor search in LAION-5B using CLIP embeddings. The image and caption (C) are the first results for the query (Q).

We release LAION-5B as the following three subsets:

* 2.32 billion English image-text pairs. We refer to this subset as LAION-2B-en, or LAION-2B if the language is clear from context.
* 2.26 billion image-text pairs from over 100 other languages. In the multilingual subset, the top-5 most frequent languages are Russian (10.6%), French (7.4%), German (6.6%), Spanish (6.6%), and Chinese (6.3%).
* 1.27 billion samples where a language could not be clearly detected. Based on visually inspecting a random subset of these low-confidence language samples, the corresponding images often depict products or places. The captions contain language with clear semantics, but might also include noise such as keywords for search engine optimization or product tags.

We provide metadata files in the Apache Parquet format that consist of the following attributes for each image-text pair:

* A 64-bit integer identifier.
* The URL of the image.
* The text string.
* Height and width of the image.
* Cosine similarity between the text and image embeddings.
* The output from our NSFW and watermark detectors (one score between 0 and 1 each). 3% of images were detected as NSFW; these can be filtered out by a user with the NSFW tag.

## 5 Experiments Validating LAION-5B

In this section, we showcase prior work using the LAION-400M [69] and other subsets, as well as our CLIP reproduction studies, to give quantitative and qualitative evidence of the dataset’s utility for training SOTA large-scale language-vision models.

### 5.1 Usage Examples

Subdataset Generation. LAION-5B’s scale enables novel dataset curation for computer vision related tasks. Recently, researchers have utilized both LAION-5B and a subset, LAION-400M, as a data source in vision-related tasks such as facial representation learning [96] and invasive species mitigation [38]. Within LAION, we have compiled from LAION-5B both LAION-High-Resolution (https://huggingface.co/datasets/laion/laion-high-resolution), a 170M subset for super-resolution models, and LAION-Aesthetic (https://github.com/LAION-AI/laion-datasets/blob/main/laion-aesthetic.md), a 120M subset of aesthetic images, as determined by a linear estimator on top of CLIP.

CLIP Reproduction and Improvements. Gao et al.
[19] trained an enhanced CLIP architecture on the LAION-400M subset, outperforming OpenAI’s CLIP on ImageNet zero-shot classification top-1 accuracy. See Sec. 5.2 for our CLIP reproduction experiments using models of different scales. Training on a LAION-5B subset, Li et al. [42] developed BLIP, which unifies understanding and generation for vision-language tasks via a novel Vision-Language Pretraining (VLP) framework; BLIP matched or outperformed comparable models on the CIDEr, SPICE, and BLEU@4 metrics. Eichenberg et al. [16] used a LAION subset for MAGMA, a model generating text “answers” for image-question pairs, achieving state-of-the-art results on OKVQA metrics and outperforming Frozen [80].

Image Generation. Rombach et al. [63] utilized a subset of LAION-5B in training latent diffusion models (LDM) that achieved state-of-the-art results on image inpainting and class-conditional image synthesis. The work was further extended in the Stable Diffusion project, which used subsets of LAION-5B (LAION-2B-en, laion-high-resolution, and laion-aesthetics; see https://github.com/CompVis/stable-diffusion for more details) to train a publicly available SOTA text-to-image generative model (see Appendix Sec. F.2). Furthermore, Gu et al. [21] used LAION-400M to train VQ diffusion text-to-image generation models, which have been shown to be more efficient and able to generate higher-quality images. Moreover, Saharia et al. [66] showed an improved diffusion model architecture, trained on a subset of LAION-400M, that outperforms OpenAI’s recent DALL-E 2 and achieves a new state-of-the-art COCO FID of 7.27.

### 5.2 Experiments on CLIP Reproduction

In an effort to reproduce the results of CLIP [58], and to validate the data collection pipeline we describe in Sec. 3, we trained several models on LAION-400M [69] and a model on LAION-2B-en, both subsets of LAION-5B. As training such models requires large compute due to the dataset and model sizes considered in the experiments, supercomputers and large compute clusters are necessary to train the models efficiently. We used OpenCLIP [27], an open-source software package for training CLIP-like models. After adapting OpenCLIP for distributed training and execution on the JUWELS Booster supercomputer [29], we reproduced CLIP models of different sizes on the LAION-400M subset. We trained ViT-B/32, ViT-B/16, and ViT-L/14 following CLIP [58], and an additional model that we call ViT-B/16+, a slightly larger version of ViT-B/16. We followed the same hyper-parameter choices as the original CLIP models. We used between 128 and 400 NVIDIA A100 GPUs to train the models. All trained models may be found in the OpenCLIP repository (https://github.com/mlfoundations/open_clip). For more information about hyper-parameters and training details, see Appendix Sec. E.1.

#### 5.2.1 Zero-Shot Classification and Robustness Performance

Following CLIP [58] and subsequent works, we evaluate the models on zero-shot classification. For each downstream dataset, we use a set of pre-defined prompts for each class, which we collected from prior works [58, 94]. We compute the embedding of each class by averaging over the embeddings of its prompts, computed using the text encoder. For each image and each class, we compute the cosine similarity between their embeddings, and classify each image as the class that has the largest cosine similarity with the image embedding.
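For concreteness, the protocol just described might be sketched as follows, assuming open_clip and illustrative class names and prompt templates (the actual prompt sets were collected from prior work [58, 94]):

```python
import torch
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion400m_e32")  # preprocess prepares PIL images
tokenizer = open_clip.get_tokenizer("ViT-B-32")

classnames = ["cat", "dog", "car"]                     # illustrative
templates = ["a photo of a {}.", "a picture of a {}."]  # illustrative

with torch.no_grad():
    class_embs = []
    for name in classnames:
        tokens = tokenizer([t.format(name) for t in templates])
        emb = model.encode_text(tokens)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        emb = emb.mean(dim=0)                # average the prompt embeddings
        class_embs.append(emb / emb.norm())  # re-normalize the class embedding
    class_embs = torch.stack(class_embs)     # [num_classes, dim]

def zero_shot_predict(images: torch.Tensor) -> torch.Tensor:
    """images: preprocessed batch [B, 3, H, W]; returns top-1 class indices."""
    with torch.no_grad():
        img = model.encode_image(images)
        img = img / img.norm(dim=-1, keepdim=True)
        # Cosine similarity of normalized embeddings, then argmax per image.
        return (img @ class_embs.T).argmax(dim=-1)
```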
We evaluate the models using top-1 accuracy. In Tab. 1, we show a comparison between models trained on LAION (400M, 2B) and the original CLIP models from [58]. We follow [94] and evaluate robustness performance on ImageNet distribution shift datasets [61, 23, 82, 25, 3]. Additionally, we construct a benchmark we call VTAB+, a superset of VTAB [91], on which we compute the average top-1 accuracy over 35 tasks. ([91] showed that different aggregation strategies have a high rank correlation (Kendall score) with the simple top-1 average accuracy over datasets, so we follow the same strategy; we also computed the rank of each model on each task and averaged the ranks, finding a ranking similar to averaging top-1 accuracy.) We can see that on ImageNet-1k (denoted "INet" in the table), the performance of the LAION-400M models and the original CLIP models (trained on a 400M private dataset) is well matched. On the four ImageNet distribution shift datasets, we observe some larger differences, notably on ObjNet (CLIP WIT is better) and INet-S (LAION is better), which leads us to conclude that, overall, CLIP models trained on LAION match the robustness of the original CLIP models. For both ViT-B/32 and ViT-L/14, training on the larger LAION-2B-en improves over the LAION-400M model across the board. To gauge how zero-shot performance improves with scale, we show the relationship between total compute and accuracy on VTAB+ for models trained on LAION (400M, 2B-en). In Figure 5, we see that accuracy on VTAB+ improves with compute (log-log plot). It would be interesting to study in future work whether the relationship between compute and accuracy continues to show the same trend or whether we start to see saturation, as was observed in [93]. Here, we can report that increasing either model or data scale for CLIP pre-training improves zero-shot classification performance on various downstream transfer targets. For a full overview of zero-shot classification and retrieval results, see Sec. E.3 of the Appendix.

To show that larger dataset scale matters for the performance of pre-trained models, we perform additional experiments using ViT-B/32 and ViT-L/14 on different LAION-5B and LAION-400M subsets, while varying the amount of training compute (samples seen). Our findings confirm that the effect of dataset scale is significant, given sufficient training compute. For instance, for the same amount of compute (34B images seen), training ViT-L/14 on LAION-2B-en (75.4%) outperforms LAION-400M (73.9%) on ImageNet-1k zero-shot classification. The same effect is observed for the smaller ViT-B/32 model. For more detailed results, see Fig. 13 and Tab. 5 in the Appendix.

Figure 5: The relationship between total compute (giga multiply-accumulates, GMACS) and zero-shot top-1 classification accuracy (%) of models trained on LAION (400M, 2B-en). The dashed line in each figure is a linear fit in log-log space. Each point corresponds to a model trained on either the 400M or 2B-en LAION subsets. We show results on ImageNet-1k (left) and VTAB+ (right), where we average the accuracy over 35 tasks (see Appendix E.3 for details). A clear effect of model, data, and compute scale on zero-shot performance is evident, with accuracy increasing following a power law.
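The dashed trend lines in Figure 5 are ordinary least-squares fits in log-log space, i.e., power laws. A minimal sketch of such a fit, with placeholder compute/accuracy values rather than the paper's measurements:

```python
import numpy as np

# The compute/accuracy pairs below are placeholders, not values from the paper.
gmacs = np.array([1e9, 4e9, 1.6e10, 6.4e10])  # total training compute (assumed)
acc = np.array([40.0, 44.5, 49.0, 53.5])      # zero-shot top-1 accuracy (assumed)

# Linear fit in log-log space: log(acc) = slope * log(gmacs) + intercept,
# i.e., acc ~ exp(intercept) * gmacs**slope (a power law in compute).
slope, intercept = np.polyfit(np.log(gmacs), np.log(acc), deg=1)
print(f"fitted exponent: {slope:.3f}")
```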
Model | Pre-training | INet | INet-v2 | INet-R | INet-S | ObjNet | VTAB+
---|---|---|---|---|---|---|---
B/32 | CLIP WIT | 63.3 | 56.0 | 69.4 | 42.3 | 44.2 | 45.4
B/32 | LAION-400M | 62.9 (-0.4) | 55.1 (-0.9) | 73.4 (+4.0) | 49.4 (+7.1) | 43.9 (-0.3) | 45.6 (+0.2)
B/32 | LAION-2B-en | 65.7 (+2.4) | 57.4 (+1.4) | 75.9 (+6.5) | 52.9 (+10.6) | 48.7 (+4.5) | 47.9 (+2.5)
B/16 | CLIP WIT | 68.3 | 61.9 | 77.7 | 48.2 | 55.3 | 47.5
B/16 | LAION-400M | 67.0 (-1.3) | 59.6 (-2.3) | 77.9 (+0.2) | 52.4 (+4.2) | 51.5 (-3.8) | 48.3 (+0.8)
B/16+ | LAION-400M | 69.2 | 61.5 | 80.5 | 54.4 | 53.9 | 49.2
L/14 | CLIP WIT | 75.6 | 69.8 | 87.9 | 59.6 | 69.0 | 55.7
L/14 | LAION-400M | 72.8 (-2.8) | 65.4 (-4.4) | 84.7 (-3.2) | 59.6 | 59.9 (-9.1) | 51.8 (-3.9)
L/14 | LAION-2B-en | 75.2 (-0.3) | 67.7 (-2.0) | 87.4 (-0.5) | 63.3 (+3.7) | 65.5 (-3.6) | 54.6 (-1.2)

Table 1: Comparison between CLIP models trained on LAION (400M, 2B) and the original CLIP models [58] trained on OpenAI’s WebImageText (WIT) dataset. We show zero-shot top-1 classification accuracy (%) on various datasets, including ImageNet, four ImageNet distribution shift datasets, and a benchmark we call VTAB+, where we average performance over 35 tasks. Differences to the corresponding CLIP WIT model are shown in parentheses. See Appendix E.3 for more details about the evaluation datasets and the results.

### 5.3 Experiments with Generative Models

To validate LAION-5B as a dataset for training strong text-to-image generation models, we fine-tuned OpenAI’s GLIDE [52] on LAION-5B data. The results comparing generated samples from the original OpenAI GLIDE and from our reproduction (LAIONIDE) are compiled into an interactive web demo (https://wandb.ai/afiaka87/glide_compare/reports/laionide-v3-benchmark--VmlldzoxNTg3MTkz). See Appendix Sec. F for more technical details on the experiments with GLIDE (F.1) and Stable Diffusion (F.2).

## 6 Technical Limitations

The large scale of current image-text datasets makes it infeasible to thoroughly investigate all aspects of a dataset in a single publication. Hence, we now outline some potential technical limitations specifically affecting LAION-5B. These potential limitations are starting points for future work on analyzing and improving image-text datasets.

Data Overlap. Our experiments in Section 5.2 show that models trained on LAION-5B achieve good performance on a variety of downstream tasks.
However, the LAION-5B training set may overlap with some of the downstream test sets if these test sets are also included in Common Crawl. If overlap is present, it may lead to inflated test set accuracies that overstate the true generalization capabilities of models trained on LAION-5B. Overall, we do not consider potential test set overlap to be a serious threat to the validity of results obtained with LAION-5B. OpenAI encountered the same question in the context of their pre-training dataset for CLIP and found only a few examples of substantial performance differences due to data overlap on downstream target datasets [58]. Some datasets, such as ObjectNet [3], are likely not contained in Common Crawl because ObjectNet was not assembled from web images; instead, the authors of ObjectNet tasked MTurk workers with taking new pictures in their own homes. Nevertheless, measuring the degree of overlap between LAION-5B and popular computer vision benchmarks is an important question for future work, which will include further de-duplication efforts.

Other text sources. Birhane et al. [6] described the shortcomings of alt-text and noted that alt-text is not necessarily a good description of the corresponding image. For instance, the alt-text may be search engine optimization (SEO) spam, an incoherent list of keywords, or otherwise corrupted. In such cases, the language in the text annotations may become less informative or entirely useless for training. For ImageNet zero-shot classification, BASIC [54] demonstrated strong results after turning 5 billion of the 6.6 billion captions into the form CLASS_1 and CLASS_2 and ... and CLASS_K, using an internal multi-label classification dataset (JFT-3B). Thus, image captions formed by simply concatenating class names may also serve as a meaningful alternative to otherwise corrupted text. This finding suggests the possibility of employing generated captions alongside existing natural-language captions for training contrastive image-language models with strong zero-shot performance.

Filtering with CLIP. CLIP allows the curation and collection of this dataset to be low-cost and scalable. Such an automated process dramatically reduces the need for human curation, which would otherwise be intractable for a collection of this scale. However, by curating with CLIP, we also inherit its flaws and model biases. For additional discussion of CLIP filtering related to safety and ethics, see Appendix Sec. G.2. Filtering with the small-scale CLIP ViT-B/32 may leave more image-text pairs with weak or no semantic connection in the dataset, and accidentally remove more high-quality image-text pairs, than filtering with stronger, larger-scale models, which were not available at the time of our experiments. The larger CLIP ViT-L/14 model may produce a less noisy version of the LAION datasets than was possible with the smaller-scale CLIP ViT-B/32. We hypothesize that filtering Common Crawl with a CLIP ViT-L model will further increase the quality of our dataset. Creating CLIP ViT-L/14-filtered versions of LAION-400M and LAION-5B, and testing how this affects model training and downstream transfer performance, is a subject of our future work.
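In the meantime, users can already impose stricter filtering themselves on the released Parquet metadata described in Section 4. The sketch below assumes pandas and uses illustrative file and column names; the exact schema should be taken from the metadata release.

```python
import pandas as pd

# Sketch of user-side re-filtering of the released Parquet metadata
# (Section 4). File name and column names are illustrative assumptions.
df = pd.read_parquet("laion2B-en-metadata-part-00000.parquet")

strict = df[
    (df["similarity"] >= 0.30)       # stricter than the 0.28 collection threshold
    & (df["nsfw_score"] < 0.5)       # drop samples the NSFW detector flags
    & (df["watermark_score"] < 0.5)  # drop likely watermarked images
]
strict[["url", "text"]].to_parquet("laion-strict-subset.parquet")
```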
## 7 Safety and Ethical Discussion

Recent developments in large-scale models, such as GPT-3 [9], CLIP [57], ALIGN [28], GLIDE [52], and DALL-E 2 [60], have the potential for far-reaching impact on society, both positive and negative, when deployed in applications such as image classification and generation, recommendation systems, or search engines. Besides model parameter scaling, the advances made so far also rely on the underlying large-scale datasets. Recent research [4, 5] described many potential negative societal implications that may arise from careless use of vision-language models; e.g., the models may perform worse for certain groups of users or reproduce discriminatory behavior. Unfortunately, only a minority of these models are publicly released; most of them are accessible only through an “input to output” interface. Importantly, the underlying large-scale datasets are also often not publicly available. While open-source efforts exist to re-implement model architectures and training, the closed nature of the large-scale datasets used for model training makes any proper systematic investigation of model training and model behavior very hard or even impossible. Studying full training runs, comparing different model architectures, and making progress in large-scale multi-modal learning thus becomes restricted to those institutions that were able to obtain such closed large-scale datasets. It also creates safety issues around building and using such models, as the broader research community cannot test either the models or the datasets used for their training for the causes underlying undesired behaviours.

LAION-5B, as an open large-scale dataset, provides not only a chance to make progress in careful studies of the trained models’ capabilities and replication, but also to investigate how uncurated large-scale datasets impact various model biases and under which circumstances their usage may result in undesired safety issues. Such research can help to design automated ways to curate and create datasets from uncurated ones that alleviate bias and safety issues. To this end, LAION also created a number of tools to aid researchers and other users in large-scale data handling and exploration. One such tool uses pre-computed image embeddings to enable search of images guided either by text or image input via an easily and publicly accessible web interface (the CLIP retrieval tool, https://knn5.laion.ai; see Appendix Sec. C.4). LAION has also made the source code for the tool, and the routines necessary to build one’s own version of it, publicly available (https://github.com/rom1504/clip-retrieval; see Appendix Sec. C, C.2, C.3 for more details). After the release of LAION-400M, several groups (e.g., [6]) already used such tools and investigated potential problems arising from an unfiltered dataset. Motivated by these findings, with LAION-5B we introduced improved inappropriate-content tagging (cf. Sec. 3.2) as well as a watermark filter, which can improve the safety and quality of the text-to-image models trained on the dataset. This development indicates that the dataset acts as a starting point, not a final endpoint, for creating further improved datasets to train models for various tasks. In our opinion, this process should not be a non-transparent, closed-door avenue. It should be approached by the broad research community, resulting in open and transparent datasets and procedures for model training.
Towards meeting this challenge, the large-scale public image-text dataset of over 5.8 billion pairs, together with the further annotations introduced here, provides diversity that can serve as a starting point for ensuring balance and for selecting safe, curated subsets for corresponding target applications. We encourage everybody to participate in this exciting and important journey. In its current form, we consider this dataset a research artefact; we strongly advocate academic use only and advise careful investigation of downstream model biases (Appendix Sec. G.2). Additionally, we encourage users to use the described tools and to transparently explore and, subsequently, report further not-yet-detected content and model behaviour to our dataset repository (https://github.com/laion-ai/laion5b-bias), helping to further advance existing approaches for data curation using the real-world large-scale dataset introduced here.

Privacy. We comment on privacy issues arising from Common Crawl as the source of links in LAION-5B, and on the measures undertaken to handle them, in Appendix Sec. G.1.

## 8 Conclusion

By releasing LAION-5B, a larger, updated version of an openly available dataset that contains over 5 billion image-text pairs, we have further pushed the scale of open datasets for training and studying state-of-the-art language-vision models. This scale yields strong gains in zero-shot transfer and robustness. To validate the utility of LAION-5B, we demonstrated that a subset of our dataset can be used to train SOTA CLIP models of various scales that match the strong zero-shot and robustness performance of the original models trained on closed curated data, or to fine-tune generative models like GLIDE, producing samples of good quality. The dataset thus extends opportunities in multilingual large-scale training and research of language-vision models, previously restricted to those having access to proprietary large datasets, to the broader research community. Finally, thanks to its large scale, even rather strict subset filtering (driven by various criteria such as NSFW scores, watermark presence, or resolution) yields high-quality datasets that are still large enough to train or fine-tune strong specialized language-vision models.

## Acknowledgments

We thank Phil Wang, the creator of the DALLE-pytorch GitHub repository (https://github.com/lucidrains/DALLE-pytorch), who inspired us and helped create our open community. We also want to thank Aran Komatsuzaki, Andreas Köpf, Bokai Yu, John David Pressman, Natalie Parde, Gabriel Ilharco, Fredde Frallan (see also the Appendix), and all the members of the LAION Discord server (https://discord.gg/xBPBXfcFHd) for helping crawl image-text pairs and run inference on their private computers. We want to thank Hugging Face and Stability AI for their continuous financial support and for providing hosting space for open datasets and models. We would also like to thank OpenAI for making their pre-trained CLIP models publicly available, which allowed us to filter the LAION datasets. We would like to express gratitude to all the people who are working on making code, models, and data publicly available, advancing community-based research and making research more reproducible. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V.
(https://gauss-centre.eu) for funding this work by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster [29] at Jülich Supercomputing Centre (JSC). We also acknowledge storage resources on JUST [20] granted and operated by JSC. Patrick Schramowski acknowledges the support of the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK) cluster project “The Third Wave of AI”.

## References

* [1] URL https://commoncrawl.org/. * Alayrac et al. [2022] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. _arXiv preprint arXiv:2204.14198_ , 2022. * Barbu et al. [2019] Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. In _Advances in Neural Information Processing Systems (NeurIPS)_ , 2019. URL https://proceedings.neurips.cc/paper/2019/file/97af07a14cacba681feacf3012730892-Paper.pdf. * Bender et al. [2021] Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In _Proceedings of ACM Conference on Fairness, Accountability, and Transparency (FAccT)_ , pages 610–623, 2021. * Birhane and Prabhu [2021] Abeba Birhane and Vinay Uday Prabhu. Large image datasets: A pyrrhic win for computer vision? In _Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV)_ , pages 1536–1546. IEEE, 2021. * Birhane et al. [2021] Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe. Multimodal datasets: misogyny, pornography, and malignant stereotypes. October 2021. * Bossard et al. [2014] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101–mining discriminative components with random forests. In _European Conference on Computer Vision (ECCV)_ , 2014. https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/. * Brown et al. [2020a] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901, 2020a. * Brown et al. [2020b] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020b. URL https://arxiv.org/abs/2005.14165. * Carlsson et al. [2022] Fredrik Carlsson, Philipp Eisen, Faton Rekathati, and Magnus Sahlgren. Cross-lingual and multilingual clip. In _Proceedings of the Language Resources and Evaluation Conference_ , pages 6848–6854, Marseille, France, June 2022. European Language Resources Association. URL https://aclanthology.org/2022.lrec-1.739. * Changpinyo et al. [2021] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut.
Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 3558–3568, 2021. * Cimpoi et al. [2014] Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In _Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2014. https://arxiv.org/abs/1311.3618. * Deng et al. [2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_ , pages 248–255. Ieee, 2009. * Desai et al. [2021] Karan Desai, Gaurav Kaul, Zubin Aysola, and Justin Johnson. Redcaps: Web-curated image-text data created by the people, for the people. _arXiv preprint arXiv:2111.11431_ , 2021. * Dosovitskiy et al. [2020] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_ , 2020. * Eichenberg et al. [2021] Constantin Eichenberg, Sidney Black, Samuel Weinbach, Letitia Parcalabescu, and Anette Frank. MAGMA - multimodal augmentation of generative models through adapter-based finetuning. _CoRR_ , abs/2112.05253, 2021. URL https://arxiv.org/abs/2112.05253. * Gadre et al. [2022] Samir Yitzhak Gadre, Mitchell Wortsman, Gabriel Ilharco, Ludwig Schmidt, and Shuran Song. CLIP on wheels: Zero-shot object navigation as object localization and exploration, 2022. * Gao et al. [2020] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. _arXiv preprint arXiv:2101.00027_ , 2020. * Gao et al. [2022] Yuting Gao, Jinfeng Liu, Zihan Xu, Jun Zhang, Ke Li, and Chunhua Shen. Pyramidclip: Hierarchical feature alignment for vision-language model pretraining, 2022. URL https://arxiv.org/abs/2204.14095. * Graf and Mextorf [2021] Stephan Graf and Olaf Mextorf. Just: Large-scale multi-tier storage infrastructure at the jülich supercomputing centre. _Journal of large-scale research facilities JLSRF_ , 7:180, 2021. * Gu et al. [2021] Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, and Baining Guo. Vector quantized diffusion model for text-to-image synthesis. _CoRR_ , abs/2111.14822, 2021. URL https://arxiv.org/abs/2111.14822. * Gulshan et al. [2016] Varun Gulshan, Lily Peng, Marc Coram, Martin C. Stumpe, Derek Wu, Arunachalam Narayanaswamy, Subhashini Venugopalan, Kasumi Widner, Tom Madams, Jorge Cuadros, Ramasamy Kim, Rajiv Raman, Philip C. Nelson, Jessica L. Mega, and Dale R. Webster. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. _JAMA_ , 316(22):2402–2410, 12 2016. ISSN 0098-7484. doi: 10.1001/jama.2016.17216. URL https://doi.org/10.1001/jama.2016.17216. * Hendrycks et al. [2021a] Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, Dawn Song, Jacob Steinhardt, and Justin Gilmer. The many faces of robustness: A critical analysis of out-of-distribution generalization. _International Conference on Computer Vision (ICCV)_ , 2021a. 
https://arxiv.org/abs/2006.16241. * Hendrycks et al. [2021b] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 15262–15271, 2021b. * Hendrycks et al. [2021c] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. _Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2021c. https://arxiv.org/abs/1907.07174. * Hu et al. [2021] Xiaowei Hu, Zhe Gan, Jianfeng Wang, Zhengyuan Yang, Zicheng Liu, Yumao Lu, and Lijuan Wang. Scaling up vision-language pre-training for image captioning. _arXiv preprint arXiv:2111.12233_ , 2021. * Ilharco et al. [2021] Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt. Openclip, July 2021. URL https://doi.org/10.5281/zenodo.5143773. * Jia et al. [2021] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. _CoRR_ , abs/2102.05918, 2021. URL https://arxiv.org/abs/2102.05918. * Juelich Supercomputing Center [2020] Juelich Supercomputing Center. JUWELS Booster Supercomputer, 2020. https://apps.fz-juelich.de/jsc/hps/juwels/configuration.html#hardware-configuration-of-the-system-name-booster-module. * Kaplan et al. [2020] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. _arXiv preprint arXiv:2001.08361_ , 2020. * Karpathy and Fei-Fei [2015] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 3128–3137, 2015. * Khandelwal et al. [2021] Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, and Aniruddha Kembhavi. Simple but effective: Clip embeddings for embodied ai. _arXiv preprint arXiv:2111.09888_ , 2021. * Kolesnikov et al. [2020] Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Big transfer (bit): General visual representation learning. In _European conference on computer vision_ , pages 491–507. Springer, 2020. * Kornblith et al. [2019] Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? In _Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2019. https://arxiv.org/abs/1805.08974. * Krause et al. [2013] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In _International Conference on Computer Vision (ICCV) Workshops_ , 2013. https://ieeexplore.ieee.org/document/6755945. * Krishna et al. [2017] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. _International journal of computer vision_ , 123(1):32–73, 2017. * Krizhevsky et al. [2009] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images, 2009. https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf. 
* Kundurthy [2022] Srivatsa Kundurthy. Lantern-rd: Enabling deep learning for mitigation of the invasive spotted lanternfly, 2022. URL https://arxiv.org/abs/2205.06397. * Kuznetsova et al. [2020] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, Tom Duerig, and Vittorio Ferrari. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. _IJCV_ , 2020. * [40] Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, et al. The bigscience roots corpus: A 1.6 tb composite multilingual dataset. In _Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track_. * Li et al. [2017] Ang Li, Allan Jabri, Armand Joulin, and Laurens Van Der Maaten. Learning visual n-grams from web data. In _Proceedings of the IEEE International Conference on Computer Vision_ , pages 4183–4192, 2017. * Li et al. [2022a] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation, 2022a. URL https://arxiv.org/abs/2201.12086. * Li et al. [2022b] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. _arXiv preprint arXiv:2201.12086_ , 2022b. * Lin et al. [2014] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In _European conference on computer vision_ , pages 740–755. Springer, 2014. * Liu et al. [2022] Luping Liu, Yi Ren, Zhijie Lin, and Zhou Zhao. Pseudo numerical methods for diffusion models on manifolds. _arXiv preprint arXiv:2202.09778_ , 2022. * Mahajan et al. [2018] Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens Van Der Maaten. Exploring the limits of weakly supervised pretraining. In _Proceedings of the European conference on computer vision (ECCV)_ , pages 181–196, 2018. * Mansimov et al. [2015] Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Generating images from captions with attention. _arXiv preprint arXiv:1511.02793_ , 2015. * Maximov et al. [2020] Maxim Maximov, Ismail Elezi, and Laura Leal-Taixé. Ciagan: Conditional identity anonymization generative adversarial networks. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 5447–5456, 2020. * Mitchell et al. [2019] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting. In _Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT)_. ACM, 2019. * Mokady et al. [2021] Ron Mokady, Amir Hertz, and Amit H Bermano. Clipcap: Clip prefix for image captioning. _arXiv preprint arXiv:2111.09734_ , 2021. * Mori et al. [1999] Yasuhide Mori, Hironobu Takahashi, and Ryuichi Oka. Image-to-word transformation based on dividing and vector quantizing images with words. In _First international workshop on multimedia intelligent storage and retrieval management_ , pages 1–9. Citeseer, 1999. * Nichol et al. 
[2021] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models, 2021. URL https://arxiv.org/abs/2112.10741. * Ooms [2022] Jeroen Ooms. _cld3: Google’s Compact Language Detector 3_ , 2022. https://docs.ropensci.org/cld3/, https://github.com/ropensci/cld3 (devel), https://github.com/google/cld3 (upstream). * Pham et al. [2021] Hieu Pham, Zihang Dai, Golnaz Ghiasi, Hanxiao Liu, Adams Wei Yu, Minh-Thang Luong, Mingxing Tan, and Quoc V Le. Combined scaling for zero-shot transfer learning. _arXiv preprint arXiv:2111.10050_ , 2021. * Pont-Tuset et al. [2020] Jordi Pont-Tuset, Jasper Uijlings, Soravit Changpinyo, Radu Soricut, and Vittorio Ferrari. Connecting vision and language with localized narratives. In _ECCV_ , 2020. * Quattoni et al. [2007] Ariadna Quattoni, Michael Collins, and Trevor Darrell. Learning visual representations using images with captions. In _2007 IEEE Conference on Computer Vision and Pattern Recognition_ , pages 1–8. IEEE, 2007. * Radford et al. [2021a] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision, 2021a. URL https://arxiv.org/abs/2103.00020. * Radford et al. [2021b] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International Conference on Machine Learning_ , pages 8748–8763. PMLR, 2021b. * Ramesh et al. [2021] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. _CoRR_ , abs/2102.12092, 2021. URL https://arxiv.org/abs/2102.12092. * Ramesh et al. [2022] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents, 2022. URL https://arxiv.org/abs/2204.06125. * Recht et al. [2019] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do ImageNet classifiers generalize to ImageNet? In _International Conference on Machine Learning (ICML)_ , 2019. https://arxiv.org/abs/1902.10811. * Reed et al. [2016] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In _International conference on machine learning_ , pages 1060–1069. PMLR, 2016. * Rombach et al. [2021a] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. _CoRR_ , abs/2112.10752, 2021a. URL https://arxiv.org/abs/2112.10752. * Rombach et al. [2021b] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models, 2021b. * Russakovsky et al. [2015] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. _International Journal of Computer Vision (IJCV)_ , 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y. * Saharia et al.
[2022] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding, 2022. URL https://arxiv.org/abs/2205.11487. * Sajjad Ayoubi [2021] Navid Kanaani Sajjad Ayoubi. Clipfa: Connecting Farsi text and images. https://github.com/SajjjadAyobi/CLIPfa, 2021. * Schramowski et al. [2022] Patrick Schramowski, Christopher Tauchmann, and Kristian Kersting. Can machines help us answering question 16 in datasheets, and in turn reflecting on inappropriate content? In _Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT)_. ACM, 2022. * Schuhmann et al. [2021] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. _arXiv preprint arXiv:2111.02114_ , 2021. * Shankar et al. [2019] Vaishaal Shankar, Achal Dave, Rebecca Roelofs, Deva Ramanan, Benjamin Recht, and Ludwig Schmidt. Do image classifiers generalize across time?, 2019. https://arxiv.org/abs/1906.02168. * Sharma et al. [2018] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 2556–2565, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1238. URL https://aclanthology.org/P18-1238. * Shen et al. [2021] Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, and Kurt Keutzer. How much can clip benefit vision-and-language tasks? _arXiv preprint arXiv:2107.06383_ , 2021. * Shing [2022] Makoto Shing. Japanese clip. https://github.com/rinnakk/japanese-clip, May 2022. * Son et al. [2021] Guijin Son, Hansol Park, Jake Tae, and Trent Oh. Koclip. https://github.com/jaketae/koclip, 2021. * Srinivasan et al. [2021] Krishna Srinivasan, Karthik Raman, Jiecao Chen, Michael Bendersky, and Marc Najork. Wit: Wikipedia-based image text dataset for multimodal multilingual machine learning. In _Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval_ , pages 2443–2449, 2021. * Steed and Caliskan [2021] Ryan Steed and Aylin Caliskan. Image representations learned with unsupervised pre-training contain human-like biases. In _Proceedings of ACM Conference on Fairness, Accountability, and Transparency (FAccT)_ , pages 701–713, 2021. * Sun et al. [2017] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In _Proceedings of the IEEE international conference on computer vision_ , pages 843–852, 2017. * Taori et al. [2020] Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig Schmidt. Measuring robustness to natural distribution shifts in image classification. _Advances in Neural Information Processing Systems_ , 33:18583–18599, 2020. * Thomee et al. [2016] Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. Yfcc100m: The new data in multimedia research.
_Communications of the ACM_ , 59(2):64–73, 2016. * Tsimpoukelli et al. [2021] Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. Multimodal few-shot learning with frozen language models. _Advances in Neural Information Processing Systems_ , 34:200–212, 2021. * Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in neural information processing systems_ , 30, 2017. * Wang et al. [2019] Haohan Wang, Songwei Ge, Zachary Lipton, and Eric P Xing. Learning robust global representations by penalizing local predictive power. In _Advances in Neural Information Processing Systems (NeurIPS)_ , 2019. https://arxiv.org/abs/1905.13549. * Weston et al. [2010] Jason Weston, Samy Bengio, and Nicolas Usunier. Large scale image annotation: learning to rank with joint word-image embeddings. _Machine learning_ , 81(1):21–35, 2010. * Wortsman et al. [2021] Mitchell Wortsman, Gabriel Ilharco, Mike Li, Jong Wook Kim, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, and Ludwig Schmidt. Robust fine-tuning of zero-shot models. _arXiv preprint arXiv:2109.01903_ , 2021. * Xiao et al. [2016] Jianxiong Xiao, Krista A Ehinger, James Hays, Antonio Torralba, and Aude Oliva. Sun database: Exploring a large collection of scene categories. _International Journal of Computer Vision_ , 2016. https://link.springer.com/article/10.1007/s11263-014-0748-y. * Xu et al. [2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In _International conference on machine learning_ , pages 2048–2057. PMLR, 2015. * Yang et al. [2022] Kaiyu Yang, Jacqueline H Yau, Li Fei-Fei, Jia Deng, and Olga Russakovsky. A study of face obfuscation in imagenet. In _International Conference on Machine Learning_ , pages 25313–25330. PMLR, 2022. * Young et al. [2014] Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. _Transactions of the Association for Computational Linguistics_ , 2:67–78, 2014. doi: 10.1162/tacl_a_00166. URL https://aclanthology.org/Q14-1006. * Yu et al. [2022a] Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. Coca: Contrastive captioners are image-text foundation models. _arXiv preprint arXiv:2205.01917_ , 2022a. * Yu et al. [2022b] Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. _arXiv preprint arXiv:2206.10789_ , 2022b. * Zhai et al. [2019] Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, et al. A large-scale study of representation learning with the visual task adaptation benchmark. _arXiv preprint arXiv:1910.04867_ , 2019. * Zhai et al. [2021a] Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers. 2021a. doi: 10.48550/ARXIV.2106.04560. URL https://arxiv.org/abs/2106.04560. * Zhai et al. [2021b] Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers.
_arXiv preprint arXiv:2106.04560_ , 2021b. * Zhai et al. [2021c] Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. Lit: Zero-shot transfer with locked-image text tuning. _arXiv preprint arXiv:2111.07991_ , 2021c. * Zhang et al. [2017] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In _Proceedings of the IEEE international conference on computer vision_ , pages 5907–5915, 2017. * Zheng et al. [2021] Yinglin Zheng, Hao Yang, Ting Zhang, Jianmin Bao, Dongdong Chen, Yangyu Huang, Lu Yuan, Dong Chen, Ming Zeng, and Fang Wen. General facial representation learning in a visual-linguistic manner. _CoRR_ , abs/2112.03109, 2021. URL https://arxiv.org/abs/2112.03109.

## Appendix A Datasheet for LAION-5B dataset

### A.1 Motivation

1. Q1 For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description.
* LAION-5B was created as an open solution for training very large multimodal models such as CLIP or DALL-E. Before the curation of this dataset, the closest in size was YFCC, with 100 million images/videos and associated metadata. OpenAI previously used a 15 million sample subset of it to train a publicly comparable CLIP model, but that pales in comparison to the private 400 million sample dataset they used to train their high-performing CLIP models. At the time of writing, the ImageNet-1k zero-shot top-1 state of the art, Google's BASIC, used a dataset of 6.6 billion image-text pairs. With the release of LAION-5B, researchers no longer have to be part of a few select institutions to study these problems.

2. Q2 Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?
* This dataset is presented by LAION (Large-scale Artificial Intelligence Open Network), a non-profit research organization aiming to democratize access to large-scale open datasets and powerful machine learning models through the research and development of open-source resources. The communication and organization of this project took place on the open LAION Discord server (https://discord.gg/xBPBXfcFHd).

3. Q3 Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number.
* This work was sponsored by Hugging Face and Stability AI.

4. Q4 Any other comments?
* No.

### A.2 Composition

5. Q5 What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.
* We provide 5.8 billion image-text pairs. Each pair consists of the following: an image file URL; a text caption; width; height; the caption's language; a cosine similarity (CLIP ViT-B/32 for English and MCLIP for multiple and unknown languages); the probability of the image containing a watermark; and the probability of a sample being NSFW. We made our models openly available on the LAION github page (https://github.com/LAION-AI/LAION-5B-WatermarkDetection, https://github.com/LAION-AI/CLIP-based-NSFW-Detector).
6. Q6 How many instances are there in total (of each type, if appropriate)?
* LAION-5B contains 2.3 billion English samples, 2.2 billion multilingual samples, and 1.2 billion unknown-language samples. A further overview of the statistics may be seen in the announcement blog post.

7. Q7 Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable).
* Common Crawl is a public repository of crawled web pages. From this collection of web pages we filter the images and alt-text to derive LAION-5B. Of the 50+ billion images available in Common Crawl, we provide image URL and alt-text pairings for only 5.8 billion images.

8. Q8 What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description.
* We provide raw URLs and their associated alt-text.

9. Q9 Is there a label or target associated with each instance? If so, please provide a description.
* There is no hard class label, but researchers will often formulate a mapping from text to image or vice versa.

10. Q10 Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text.
* No.

11. Q11 Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit.
* No.

12. Q12 Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them.
* No.

13. Q13 Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description.
* There exist near-duplicate images, which can map many images to one embedding in certain scenarios. CLIP embeddings may be used to remove more or fewer of them.

14. Q14 Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a future user? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate.
* This dataset is reliant on links to the World Wide Web. As such, we are unable to offer any guarantees of the existence of these samples. Due to the dataset's size, we are also unable to offer archives of its current state. In order to rapidly and efficiently download images from URLs, we provide img2dataset.
Depending on bandwidth, it is feasible to download the entire LAION-5B dataset in 7 days using 10 nodes.

15. Q15 Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor–patient confidentiality, data that includes the content of individuals’ non-public communications)? If so, please provide a description.
* This dataset was collected using openly available parts of the internet, with the assumption that any data found was intended to be shared freely. However, it is possible that the parties crawled by Common Crawl may have publicly hosted confidential data.

16. Q16 Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why.
* Since the dataset is scraped from Common Crawl, it is known to have instances of sexually explicit, racist, abusive, or otherwise discomforting or disturbing content. We choose to include these samples for use by safety researchers and for further dataset curation surrounding these sensitive topics.
* To address the existence of distressing content, we provide safety tags. Details on tagging potentially inappropriate content can be found in Sec. 3.2 in the main text and Appendix Sec. C.5 and Sec. C.6. During down-stream training tasks, users may check a sample's boolean flags to determine whether or not the sample should be used. However, as we described in the main text, it is important to note that the safety tags are not perfect, especially keeping the complexity of these tasks and the diverse opinions of different cultures in mind. Therefore, we advocate using these tags responsibly, not relying on them to create a truly safe, “production-ready” subset after removing all potentially problematic samples.

17. Q17 Does the dataset relate to people? If not, you may skip the remaining questions in this section.
* People may be present in the images or textual descriptions, but people are not the sole focus of the dataset.

18. Q18 Does the dataset identify any subpopulations (e.g., by age, gender)?
* We do not provide any markers of subpopulation as attributes of the image-text pairs, but it may be possible to deduce this in some cases from the image and language pairing.

19. Q19 Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how.
* Yes, it may be possible to identify people using face recognition. We do not provide any such means nor make attempts, but institutions owning large amounts of face identifiers may identify specific people in the dataset. Similarly, people may be identified through the associated text.

20. Q20 Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)? If so, please provide a description.
* Yes, the dataset contains sensitive content. Although the dataset was not created with the intention of obtaining samples fitting these criteria, it is possible that individuals might have hosted such items on a website that had been crawled by Common Crawl.

21. Q21 Any other comments?
* We caution discretion on behalf of the user and call for responsible usage of the dataset for research purposes only.

### A.3 Collection Process

22. Q22 How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how.
* From the aforementioned Common Crawl, we filter images and their associated alt-text. Inclusion is determined by the cosine similarity between the alt-text and the image, computed with OpenAI's CLIP ViT-B/32 for English samples and MCLIP for all other samples. We include English samples with a cosine similarity score above 0.28, and we select all multilingual and unknown-language samples with a cosine similarity score of 0.26 or greater.

23. Q23 What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated?
* We ran a preprocessing script in Python over hundreds of small CPU nodes and a few GPU nodes. The procedures were validated by manual inspection of the results and by post-processing them: computing statistics on widths, heights, caption lengths, CLIP embeddings, and indices.

24. Q24 If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)?
* The dataset was obtained by filtering Common Crawl links with OpenAI's CLIP ViT-B/32, using the cosine similarity between each image and the text the links were referring to.

25. Q25 Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?
* No crowdworkers were used in the curation of the dataset. Open-source researchers and developers enabled its creation for no payment.

26. Q26 Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created.
* The data was filtered from September 2021 to January 2022, but those who created the sites might have included content from before then. It is impossible to know for certain how far back the data stretches.

27. Q27 Were any ethical review processes conducted (e.g., by an institutional review board)? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation.
* We corresponded with the University of Washington's Human Subject Division, and since we do not intervene with the people depicted in the data and the data is public, they stated that the work did not require IRB review. Furthermore, the NeurIPS ethics review determined that the work has no serious ethical issues.

28. Q28 Does the dataset relate to people? If not, you may skip the remaining questions in this section.
* People may appear in the images and descriptions, although they are not the exclusive focus of the dataset.
29. Q29 Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)?
* We retrieve the data from Common Crawl, which contains almost all websites.

30. Q30 Were the individuals in question notified about the data collection? If so, please describe (or show with screenshots or other information) how notice was provided, and provide a link or other access point to, or otherwise reproduce, the exact language of the notification itself.
* Individuals were not notified about the data collection.

31. Q31 Did the individuals in question consent to the collection and use of their data? If so, please describe (or show with screenshots or other information) how consent was requested and provided, and provide a link or other access point to, or otherwise reproduce, the exact language to which the individuals consented.
* We follow Common Crawl's practice of crawling the web and respect each site's robots.txt file; thus site owners consent to their sites being crawled. However, those depicted in a photograph might not have given their consent to its upload.

32. Q32 If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? If so, please provide a description, as well as a link or other access point to the mechanism (if appropriate).
* Users can check for the presence of links in our dataset leading to their data on the public internet by using the search tool provided by LAION, accessible at https://knn5.laion.ai. If users wish to revoke their consent after finding sensitive data, they can contact the hosting party and request that the content be deleted from the underlying website; it will then automatically be removed from LAION-5B, since we distribute image-text pairs as URLs. Moreover, we provide a contact email<EMAIL_ADDRESS>and a contact form https://laion.ai/dataset-requests/ to request removal of the links from the dataset. The actual content behind the links is out of our reach and will in that case remain accessible on the public internet for other crawlers.

33. Q33 Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation.
* Birhane, Prabhu, and Kahembwe opened the discussion on the limitations and inherent biases that come with the creation of a weakly curated dataset using CLIP. CLIP and its use of cosine similarity offer a useful but imperfect heuristic for dataset inclusion that inherits various biases contained in the image-text pairs crawled from the web. In addition, the biases already existent within CLIP and the World Wide Web may become amplified when distilling original raw data into a filtered dataset. Using a model trained on this dataset without any further curation in production has the potential to reinforce harmful simplistic stereotypes against already marginalized communities.
* However, the authors also note that this dataset is currently the only openly available option for studying multimodal models of this scale and for examining their potential benefits and harms.
Combining the aforementioned limitations and opportunities that this dataset provides, we agree with the authors and authorize the dataset for purely academic endeavors, and we strongly advise against any usage in end products.

34. Q34 Any other comments?
* No.

### A.4 Preprocessing, Cleaning, and/or Labeling

35. Q35 Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section.
* No preprocessing or labeling is done. Certain images were removed on the basis of safety, and others are tagged for the presence of NSFW content or a watermark.

36. Q36 Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data.
* We do not save the raw data.

37. Q37 Is the software used to preprocess/clean/label the instances available? If so, please provide a link or other access point.
* To preprocess the data we used:
  - https://github.com/rvencu/crawlingathome-gpu-hcloud, which processes Common Crawl into a LAION-5B-like dataset;
  - http://github.com/rom1504/img2dataset, a tool to easily turn large sets of image URLs into an image dataset; it can download, resize, and package 100M URLs in 20h on one machine;
  - https://github.com/rom1504/clip-retrieval, a tool to easily compute CLIP embeddings and build a CLIP retrieval system with them.
* For individuals to preprocess the data for training, we provide:
  - https://github.com/rom1504/laion-prepro

38. Q38 Any other comments?
* No.

### A.5 Uses

39. Q39 Has the dataset been used for any tasks already? If so, please provide a description.
* LAION-5B (and the associated LAION-400M) has already been used for a number of tasks, such as CLIP reproduction, BLIP training, GLIDE training, CLOOB training, and sub-dataset generation. For example, Gu et al. used LAION-400M to train VQ-diffusion text-to-image generation models. Additionally, Rombach et al. applied a subset of LAION-400M in training Latent Diffusion Models that achieved state-of-the-art results on image inpainting and class-conditional image synthesis. The team behind open_CLIP demonstrated the capabilities of the 400M subset for CLIP reproduction, achieving performance on par with that of OpenAI. On the matter of subset generation and CLIP reproduction, Zheng et al. utilized LAION for facial representation learning. It should be noted that this example demonstrates the potential for users to misuse this dataset for the purpose of identification. Li et al. applied a subset of LAION for the purpose of image captioning. Finally, Eichenberg et al. used a LAION subset for MAGMA, a model generating text “answers” for image-question pairs.

40. Q40 Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point.
* Yes, scientific publications and systems that use LAION datasets can be found on the LAION github page.

41. Q41 What (other) tasks could the dataset be used for?
* We encourage future researchers to curate LAION-5B for several tasks. In particular, we see applications of the dataset in image and text representation learning, image-to-text generation, image captioning, and other common multimodal tasks.
Due to the breadth of the data, it also offers a unique opportunity for safety and low-resource-language researchers. We hope for LAION-5B to serve under-represented projects as well.

42. Q42 Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks)? If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms?
* As this data stems from the greater internet, it mirrors the broader biases of society in the period of its collection. Biases in subpopulation depiction (e.g., correlations between gender and jobs), violence, and nudity (for which we provide safety tags) might create harmful outcomes for those a model might be applied to. For this reason, this dataset should not be used to make decisions about people.

43. Q43 Are there tasks for which the dataset should not be used? If so, please provide a description.
* Due to the known biases of the dataset, under no circumstances should any models be put into production using the dataset as is. It is neither safe nor responsible. As it stands, the dataset should be used solely for research purposes in its uncurated state.
* Likewise, this dataset should not be used to aid in military or surveillance tasks.

44. Q44 Any other comments?
* No.

### A.6 Distribution

45. Q45 Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description.
* Yes, the dataset will be open-source.

46. Q46 How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)?
* The data will be available through Huggingface datasets.

47. Q47 When will the dataset be distributed?
* 31/03/2022 and onward.

48. Q48 Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions.
* CC-BY-4.0.

49. Q49 Have any third parties imposed IP-based or other restrictions on the data associated with the instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions.
* LAION owns the metadata and releases it as CC-BY-4.0.
* We do not own the copyright of the images or text.

50. Q50 Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation.
* No.

51. Q51 Any other comments?
* No.

### A.7 Maintenance

52. Q52 Who will be supporting/hosting/maintaining the dataset?
* Huggingface will support hosting of the metadata.
* The Eye supports hosting of the embeddings and backups of the rest.
* LAION will maintain the samples distributed.
53. Q53 How can the owner/curator/manager of the dataset be contacted (e.g., email address)?
* https://laion.ai/dataset-requests/

54. Q54 Is there an erratum? If so, please provide a link or other access point.
* There is no erratum for our initial release. Errata will be documented as future releases on the dataset website.

55. Q55 Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)?
* LAION-5B will not be updated. However, a future LAION streamed from Common Crawl may exist for updates. Specific samples can be removed on request.

56. Q56 If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced.
* People may contact us at the LAION website to add specific samples to a blacklist.

57. Q57 Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users.
* We will continue to support LAION-400M.

58. Q58 If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to other users? If so, please provide a description.
* Unless there are grounds for significant alteration to certain indexes, extension of the dataset will be carried out on an individual basis.

59. Q59 Any other comments?
* No.

## Appendix B Dataset Setup Procedure

After processing and filtering Common Crawl, 5.8B image URL/text samples are available. Here we provide an overview of all the steps necessary to assemble the full dataset:

1. Downloading the data as webdataset with distributed img2dataset
2. Computing ViT-L/14 embeddings with distributed clip-inference
3. Computing a KNN index from these embeddings using autofaiss
4. Computing additional tags (NSFW and watermark) using CLIP embeddings

## Appendix C Dataset Preparation and Curation Details

### C.1 Distributed img2dataset

We developed the img2dataset library (https://github.com/rom1504/img2dataset) to easily download, resize, and store images and captions in the webdataset format. It allows downloading 100 million images from our list of URLs in 20 hours with a single node (1 Gbps connection speed, 32GB of RAM, an i7 CPU with 16 cores), allowing anyone to obtain the whole dataset or a smaller subset. For LAION-5B we introduced a distributed mode for this tool, allowing the 5B samples to be downloaded in a week using 10 nodes; see https://github.com/rom1504/img2dataset/blob/main/dataset_examples/laion5B.md and https://github.com/rom1504/img2dataset/blob/main/examples/distributed_img2dataset_tutorial.md.
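For illustration, a minimal single-node img2dataset invocation in the spirit of the setup above might look like the following sketch; the metadata file name, column names, and sizing parameters are assumptions for illustration, not the exact configuration used for LAION-5B:

```python
from img2dataset import download

# Sketch: download one LAION metadata shard (parquet with URL/TEXT columns)
# into the webdataset format used in this work. The file name and the
# parameter values below are illustrative assumptions.
download(
    url_list="laion5b_shard.parquet",  # hypothetical local metadata shard
    input_format="parquet",
    url_col="URL",
    caption_col="TEXT",
    output_format="webdataset",
    output_folder="laion-data",
    image_size=256,        # resize images while downloading
    processes_count=16,    # CPU parallelism on the node
    thread_count=64,       # download threads per process
)
```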
### C.2 Distributed CLIP inference

From these images, the clip-retrieval inference tool (https://github.com/rom1504/clip-retrieval) was used to compute ViT-L/14 embeddings, enabling a deeper analysis of the data. In particular, a distributed mode (https://github.com/rom1504/clip-retrieval/blob/main/docs/distributed_clip_inference.md) made it possible to compute these embeddings in a week using 32 NVIDIA A100s: the larger CLIP model can only be run at a speed of 312 samples/s per GPU, compared to 1800 samples/s for ViT-B/32. The resulting embeddings are available for everyone to use for clustering, indexing, and linear inference.

### C.3 Distributed indexing

We then used these 9TB of image embeddings to build a large PQ128 KNN index using the autofaiss tool (https://github.com/criteo/autofaiss). To make this run faster, a distributed mode is available (https://github.com/criteo/autofaiss/blob/master/docs/distributed/distributed_autofaiss.md).
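A minimal sketch of such an index build with autofaiss is shown below; the paths and memory budgets are assumptions (chosen to match the ~800GB index size mentioned in C.4), and the distributed variant linked above follows a similar interface:

```python
from autofaiss import build_index

# Sketch: build an approximate-KNN index over a folder of .npy embedding
# files. autofaiss selects an index type (e.g. a product-quantized index
# such as PQ128) that fits the stated memory constraint. All paths and
# budgets below are illustrative assumptions.
build_index(
    embeddings="clip_vit_l14_embeddings/",      # hypothetical embeddings folder
    index_path="laion5b_image.index",
    index_infos_path="laion5b_index_infos.json",
    max_index_memory_usage="800G",
    current_memory_available="900G",
)
```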
### C.4 Integration in the search UI

In order to demonstrate the value of this data, we integrated the index into the UI at https://knn5.laion.ai. It is powered by the clip-back code at https://github.com/rom1504/clip-retrieval. The KNN index is 800GB, and so is the metadata (URLs and captions), so memory mapping is used for both; no RAM is needed, only an SSD drive of that capacity.

### C.5 Specialized NSFW image content tagging

We applied various tags to the content of LAION-5B. Among other things, we tagged images with pornographic or sexualized content (referred to as NSFW). To ensure all implementations related to LAION-5B are open-source, we refrained from using existing commercial solutions. We first trained an EfficientNetV2-based classifier, but then moved to a simple MLP based on OpenAI's CLIP ViT-L/14. To this end, we created a training dataset by retrieving images from the previous LAION-400M dataset that are close in the CLIP embedding space to various keywords related to the five categories "neutral", "drawing", "porn", "hentai", and "sexy". Additionally, we added SFW images from the Wikiart (https://www.wikiart.org) and Danbooru (https://www.gwern.net/Danbooru2021) datasets to the "drawing" category and NSFW images from Danbooru to the "hentai" category. Following this procedure, we obtained over 682K images from the five classes "drawing" (39,026), "hentai" (28,134), "neutral" (369,507), "porn" (207,969), and "sexy" (37,914). Using this data, we trained a detector for these five categories by finetuning an ImageNet-1k pretrained EfficientNet-V2-B02 model (code at https://github.com/LAION-AI/LAION-SAFETY). To use this image classifier as a binary SFW-NSFW classifier, we consider images from the classes "drawing" and "neutral" as SFW and "hentai", "porn", and "sexy" as NSFW. To measure the performance of this model, we created a test dataset with 1000 images from each category and manually inspected it to make sure all test images were correctly annotated. Our EfficientNet-V2-B02 image classifier predicted 96.45% of the true NSFW images correctly as NSFW and incorrectly discarded 7.96% of the SFW images as NSFW.

### C.6 Further inappropriate content tagging

Further, we used the Q16 documentation pipeline [68] to document the broad range of identified potentially inappropriate concepts contained; cf. Sec. 3.2 for details. Fig. 6 shows the most frequently identified concepts following this procedure. One can see that in a lot of cases these images show humans (cf. the concepts human, people, man, woman). Further, one main concept is pornographic content (e.g. porn, bondage, kinky, bdsm). Additionally, among the most frequent concepts are weapons, violence, terror, murder, slavery, racism and hate. Note that content surrounding Halloween (costume, halloween, zombie) and art or media such as movies, games and comics may also be tagged, depending on the displayed content. Further filtering depends highly on the use case and users' opinions.

Figure 6: Word cloud based on [68] documenting the potentially inappropriate image content of the LAION-5B subset containing English text. The provided alternative text is used as the text description of the images. Word size is proportional to the word counts and rank in the descriptions corresponding to the inappropriate image set.

### C.7 Watermark and safety inference

Finally, we wanted to give users the ability to remove unsafe examples and watermarked examples. To do that, we collected training and test sets. The training set was augmented with examples retrieved from the KNN index, while the test set samples were selected to represent the dataset distribution well and were all manually annotated.

Figure 7: Watermark test set annotation examples. Criteria for LAION-5B sample annotation for watermark (top row) and non-watermark (bottom row) images.

The inference is done using the embedding-reader module (https://github.com/rom1504/embedding-reader). These tags were then integrated in the UI, allowing everyone to observe that the safety tags indeed filter out almost all the unsafe results, and giving confidence that training a generative model on this data will not result in unexpectedly unsafe images.

## Appendix D Dataset Samples and Statistics

Here, we present samples from the dataset and some distribution statistics to aid in understanding the dataset. In Figure 8, we randomly select 4 samples from each of the 3 LAION-5B subsets. As can be seen, the language classifier seems to have low confidence with names, identifying numbers, and short-form text. An important future line of work will be to improve the language classifier.

Figure 8: LAION-5B random examples from all subsets. We take the first 4 SFW samples from each of the 3 randomly shuffled LAION-5B subsets. We present the image and its associated caption.

Figure 9: Caption Character Length. Each of the LAION-5B subsets contains similar frequencies and exhibits a right skew.

Figure 10: Multilingual Language Frequency. The 10 most frequent languages seem to be largely of European and East Asian origin.

To comprehend the dataset beyond visual examples, we may look at statistics collected about the distribution. Figure 9 gives an overview of the caption length amongst all subsets. Additionally, Figure 10 describes the frequency of languages within the multilingual subset. The 10 most frequent languages compose 56% of the multilingual dataset.

## Appendix E Further Experimental Details and Results on CLIP reproduction

We provide details about the experiments that were done to reproduce CLIP [58] using LAION (400M, 2B-en) subsets. In addition, we document all experimental results on both zero-shot classification using the VTAB+ suite and retrieval.

### E.1 Training Details

We used distributed data-parallel training (using PyTorch DDP) to train models on multiple NVIDIA A100 GPUs. Training was done using the InfoNCE loss as in [58]. We used Adam with decoupled weight regularization (i.e., AdamW) as the optimizer, with $\beta_{1}=0.9$ and $\beta_{2}=0.98$ for all models. We used a linear warmup followed by a cosine decay schedule. For regularization, we used the same weight decay of $0.2$ for all models. Details about the different architectures used are provided in Tab. 2; training hyper-parameters and resources are provided in Tab. 3.
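As a concrete sketch of this optimizer and schedule setup (betas and weight decay as stated above; base learning rate and warmup length per Tab. 3; the model and total step count here are stand-ins, not values from the paper):

```python
import math
import torch

model = torch.nn.Linear(512, 512)  # placeholder for the actual CLIP model
optimizer = torch.optim.AdamW(
    model.parameters(), lr=5e-4, betas=(0.9, 0.98), weight_decay=0.2)

def lr_lambda(step, warmup=2000, total=100_000):
    # Linear warmup to the base LR, then cosine decay towards zero.
    if step < warmup:
        return step / max(1, warmup)
    progress = (step - warmup) / max(1, total - warmup)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
# Per training step: optimizer.step() followed by scheduler.step().
```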
### E.2 Distributed Training and InfoNCE Loss

To properly handle the global batch for the contrastive InfoNCE loss in a distributed setting, we need additional communication between GPU workers to compute the loss and its gradients correctly for all positive and negative sample pairs. In each worker, we gather all image and text embeddings from the other workers and use them as negative examples for each image-text pair in the mini-batch. A naive implementation of InfoNCE involves materializing a very large $N\times N$ matrix, $N$ being the global batch size. For $N=32768$, the matrix occupies a hefty 8 GB in float32. To remedy this, we use a formulation of the loss like OpenAI's [58], where redundant operations are sharded to local devices while maintaining correct global gradients. This overcomes a significant scaling issue and achieves a memory complexity that scales linearly with the global batch size by only materializing 2 matrices of size $n\times N$, $n$ being the local batch size per GPU. By turning the memory complexity from $\mathcal{O}(N^{2})$ into $\mathcal{O}(nN)$, we slash the memory overhead due to scaling from GBs down to MBs.
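A simplified sketch of this sharded computation is given below, patterned after the open_clip implementation. It assumes an initialized torch.distributed process group; the fixed temperature stands in for CLIP's learnable logit scale, and bookkeeping details such as gradient flow through the gathered tensors on other workers are omitted:

```python
import torch
import torch.distributed as dist
import torch.nn.functional as F

def sharded_infonce(img, txt, temperature=0.07):
    """img, txt: L2-normalized local embeddings of shape (n, d).

    Each worker only materializes two (n x N) logit matrices,
    where N = n * world_size is the global batch size.
    """
    world, rank = dist.get_world_size(), dist.get_rank()
    all_img = [torch.zeros_like(img) for _ in range(world)]
    all_txt = [torch.zeros_like(txt) for _ in range(world)]
    dist.all_gather(all_img, img)
    dist.all_gather(all_txt, txt)
    # Re-insert the local tensors so gradients flow into the local batch
    # (all_gather outputs do not carry gradients back to their inputs).
    all_img[rank], all_txt[rank] = img, txt
    all_img, all_txt = torch.cat(all_img), torch.cat(all_txt)

    logits_i = img @ all_txt.t() / temperature  # (n, N): image -> text
    logits_t = txt @ all_img.t() / temperature  # (n, N): text -> image
    n = img.shape[0]
    labels = torch.arange(n, device=img.device) + rank * n
    return 0.5 * (F.cross_entropy(logits_i, labels)
                  + F.cross_entropy(logits_t, labels))
```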
Name | Width | Embed Dim | Depth | Res. | Acts. | Params
---|---|---|---|---|---|---
ViT-B/32 | 768 / 512 | 512 | 12 / 12 | 224x224 | 10 M | 151 M
ViT-B/16 | 768 / 512 | 512 | 12 / 12 | 224x224 | 29 M | 150 M
ViT-B/16+ | 896 / 640 | 640 | 12 / 12 | 240x240 | 40 M | 208 M
ViT-L/14 | 1024 / 768 | 768 | 24 / 12 | 224x224 | 97 M | 428 M

Table 2: Hyper-parameters of the different architectures we used for reproducing CLIP models. Acts refers to the number of activations in millions; Params refers to the number of parameters in millions. All entries of the form A / B denote image and text tower values, respectively.

Model (data size) | BS. (global) | #GPUs | LR. | Warm. | Ep. | Time (hrs.)
---|---|---|---|---|---|---
B/32 (400M) | 256 (32768) | 128 | 5e-4 | 2K | 32 | 36
B/32 (2B) | 416 (46592) | 112 | 5.5e-4 | 10K | 16 | 210
B/16 (400M) | 192 (33792) | 176 | 5e-4 | 2K | 32 | 61
B/16+ (400M) | 160 (35840) | 224 | 7e-4 | 5K | 32 | 61
L/14 (400M) | 96 (38400) | 400 | 6e-4 | 5K | 32 | 88

Table 3: Training hyper-parameters and resources used to reproduce CLIP [58] models on LAION 400M and 2B subsets. BS refers to the batch size per GPU worker (with the corresponding global batch size in parentheses), LR to the base learning rate, Warm to the total number of warmup steps, Ep to the total number of training epochs, and Time to the total training time in hours.

### E.3 Detailed Results & Further Analysis

In this section we present all zero-shot classification results on VTAB+ as well as retrieval results. In Tab. 4, we describe the datasets that are used in VTAB+. For zero-shot classification, we collected prompts and class names from prior works [58, 94] and made them available in our benchmark repository (https://github.com/LAION-AI/CLIP_benchmark); a minimal sketch of this zero-shot protocol is given at the end of this subsection. In Tab. 6, we show zero-shot top-1 classification accuracy (%) on VTAB+ datasets. Tables 7 and 8 depict retrieval results on Flickr30K [88] and MSCOCO [44].

Dataset | Abbr. (Tab. 1, 6) | Test size | #Classes
---|---|---|---
ImageNet-1k | INet | 50,000 | 1,000
ImageNet-v2 | INet-v2 | 10,000 | 1,000
ImageNet-R | INet-R | 30,000 | 200
ImageNet Sketch | INet-S | 50,889 | 1,000
ObjectNet | ObjNet | 18,574 | 113
ImageNet-A | INet-A | 7,500 | 200
CIFAR-10 | - | 10,000 | 10
CIFAR-100 | - | 10,000 | 100
MNIST | - | 10,000 | 10
Oxford Flowers 102 | Flowers102 | 6,149 | 102
Stanford Cars | Cars | 8,041 | 196
SVHN | - | 26,032 | 10
Facial Emotion Recognition 2013 | FER2013 | 7,178 | 7
RenderedSST2 | - | 1,821 | 2
Oxford-IIIT Pets | Pets | 3,669 | 37
Caltech-101 | - | 6,085 | 102
Pascal VOC 2007 Classification | VOC2007-Cl | 14,976 | 20
SUN397 | - | 108,754 | 397
FGVC Aircraft | - | 3,333 | 100
Country211 | - | 21,100 | 211
Describable Textures | DTD | 1,880 | 47
GTSRB | - | 12,630 | 43
STL10 | - | 8,000 | 10
Diabetic Retinopathy | Retino | 42,670 | 5
EuroSAT | - | 5,400 | 10
RESISC45 | - | 6,300 | 45
PatchCamelyon | PCAM | 32,768 | 2
CLEVR Counts | - | 15,000 | 8
CLEVR Object Distance | CLEVR Dist | 15,000 | 6
DSPRITES Orientation | DSPRITES Orient | 73,728 | 40
DSPRITES Position | DSPRITES pos | 73,728 | 32
SmallNORB Elevation | SmallNORB Elv | 12,150 | 9
SmallNORB Azimuth | SmallNORB Azim | 12,150 | 18
DMLAB | - | 22,735 | 6
KITTI closest vehicle distance | KITTI Dist | 711 | 4

Table 4: Datasets used for zero-shot classification evaluation (VTAB+).

##### Effect of data scale.

We observe similar or better results on most datasets when using the larger LAION-2B-en instead of LAION-400M. Exceptions are on some datasets with specialized domains (e.g., Diabetic Retinopathy, PatchCamelyon) or in structured tasks (see the corresponding paragraph below). To demonstrate the importance of data scale for the quality of the pre-trained models, we conduct a series of experiments where we vary both data scale (LAION-80M, LAION-400M and LAION-2B) and the amount of training compute measured in samples seen (3B, 13B and 34B). We observe that, when investing enough into training compute, seeing the same number of samples at a larger data scale consistently leads to better zero-shot transfer performance measured on ImageNet-1k. This holds for both the smaller B/32 and the larger L/14 model scales. For instance, models pre-trained on LAION-2B significantly outperform models pre-trained on LAION-400M when using the same large training budget of 34B samples seen (see Fig. 13 and Tab. 5). We conclude from these findings that extending the dataset scale all the way up to LAION-2B is indeed important for obtaining stronger zero-shot transfer performance, given sufficiently large training compute.

##### Few-shot transfer: comparison to CLIP and effect of scale.

To examine the quality of the learned representations, we evaluate few-shot linear probe performance on seven datasets commonly used to benchmark transfer performance. The results are presented in Figures 11 and 12. Figure 11 displays few-shot performance on ImageNet [13], while Figure 12 displays few-shot performance on Food101 [7], Cars [35], CIFAR-10 & 100 [37], DTD [12] and SUN397 [85]. In addition to evaluating models trained on various LAION subsets, we also compare with the CLIP models of Radford et al. [58]. Overall we observe that the models trained on LAION achieve similar transfer performance to those trained by OpenAI. Moreover, we observe that performance increases with more data (i.e., B/32 2B outperforms B/32 400M) and larger models.
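Returning to the zero-shot protocol used throughout this subsection, the sketch below builds a prompt-ensembled zero-shot classifier from class names and templates, in the style of the benchmark repository mentioned above; the pretrained tag and the tiny class/template lists are illustrative assumptions:

```python
import torch
import open_clip
from PIL import Image

# Sketch: zero-shot classification with prompt ensembling.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

classnames = ["cat", "dog"]                        # placeholder classes
templates = ["a photo of a {}.", "a blurry photo of a {}."]

with torch.no_grad():
    weights = []
    for name in classnames:
        tokens = tokenizer([t.format(name) for t in templates])
        emb = model.encode_text(tokens)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        w = emb.mean(dim=0)                        # ensemble over prompts
        weights.append(w / w.norm())
    classifier = torch.stack(weights, dim=1)       # (d, num_classes)

    image = preprocess(Image.new("RGB", (224, 224))).unsqueeze(0)  # dummy image
    feat = model.encode_image(image)
    feat = feat / feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * feat @ classifier).softmax(dim=-1)
```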
##### ImageNet-A

In ImageNet-A [24] (noted INet-A), we observe large differences between CLIP WIT and LAION models, e.g. a difference of 24.3% for ViT-L/14. We note that INet-A's design and data collection are quite different from other ImageNet distribution-shift datasets, as the images were specifically selected to be adversarial for a ResNet-50 pre-trained on ImageNet-1k. Although we do not yet have an explanation for the observed discrepancies and it would be interesting to understand why LAION models are worse than CLIP WIT, it is not clear whether improvements on INet-A are generalizable, as the dataset is based on adversarial images specific to a pre-trained model (ResNet-50).

Figure 11: Evaluating few-shot linear probe performance on ImageNet. We evaluate i) models trained on various LAION subsets and ii) the original CLIP models. Models trained on LAION show similar transfer performance to those trained by OpenAI. Also evident is a clear effect of model and data scale on transfer across few-shot conditions.

Figure 12: Evaluating few-shot linear probe performance on 6 datasets commonly used to benchmark transfer [34]. We evaluate i) models trained on various LAION subsets and ii) the original CLIP models. We evaluate performance on Food101 [7], Cars [35], CIFAR-10 & 100 [37], DTD [12] and SUN397 [85].

Figure 13: ViT-B/32 and ViT-L/14 additional experiments where we vary the amount of compute (3B, 13B, and 34B images seen) and LAION subset size (80M, 400M, 2B). We evaluate the models on zero-shot ImageNet-1k classification. Seeing the same number of samples at a larger data scale consistently leads to better zero-shot transfer performance, when investing enough into training compute.

Model | Samples seen | LAION-80M | LAION-400M | LAION-2B-en
---|---|---|---|---
ViT-B/32 | 3B | 51.93 | 58.73 | 57.60
ViT-B/32 | 13B | 56.46 | 62.90 | 62.56
ViT-B/32 | 34B | - | 64.07 | 65.50
ViT-L/14 | 13B | - | 72.98 | 73.12
ViT-L/14 | 34B | - | 73.90 | 75.40

Table 5: ViT-B/32 and ViT-L/14 additional experiments where we vary the amount of compute (3B, 13B, and 34B images seen) and LAION subset size (80M, 400M, 2B). We evaluate the models on zero-shot ImageNet-1k classification. When investing enough into training compute, seeing the same number of samples at a larger data scale consistently leads to better zero-shot transfer performance measured on ImageNet-1k.
Values in parentheses give the difference of the LAION-trained model to the corresponding CLIP WIT model.

Dataset | B/32 CLIP WIT | B/32 LAION-400M | B/32 LAION-2B | B/16 CLIP WIT | B/16 LAION-400M | B/16+ LAION-400M | L/14 CLIP WIT | L/14 LAION-400M
---|---|---|---|---|---|---|---|---
INet | 63.3 | 62.9 (-0.4) | 65.7 (+2.4) | 68.3 | 67.0 (-1.3) | 69.2 | 75.6 | 72.8 (-2.8)
INet-v2 | 56.0 | 55.1 (-0.9) | 57.4 (+1.4) | 61.9 | 59.6 (-2.3) | 61.5 | 69.8 | 65.4 (-4.4)
INet-R | 69.4 | 73.4 (+4.0) | 75.9 (+6.5) | 77.7 | 77.9 (+0.2) | 80.5 | 87.9 | 84.7 (-3.2)
INet-S | 42.3 | 49.4 (+7.1) | 52.9 (+10.6) | 48.2 | 52.4 (+4.2) | 54.4 | 59.6 | 59.6
ObjNet | 44.2 | 43.9 (-0.3) | 48.7 (+4.5) | 55.3 | 51.5 (-3.8) | 53.9 | 69.0 | 59.9 (-9.1)
INet-A | 31.6 | 21.7 (-9.9) | 26.1 (-5.5) | 49.9 | 33.2 (-16.7) | 36.9 | 70.8 | 46.5 (-24.3)
CIFAR-10 | 89.8 | 90.7 (+0.9) | 94.0 (+4.2) | 90.8 | 91.7 (+0.9) | 92.7 | 95.6 | 94.6 (-1.0)
CIFAR-100 | 64.2 | 70.3 (+6.1) | 75.4 (+11.2) | 66.9 | 71.2 (+4.3) | 73.8 | 75.9 | 77.4 (+1.5)
MNIST | 48.2 | 37.4 (-10.8) | 63.4 (+15.2) | 51.8 | 66.3 (+14.5) | 57.0 | 76.4 | 76.0 (-0.4)
Flowers102 | 66.5 | 68.1 (+1.6) | 69.0 (+2.5) | 71.2 | 69.3 (-1.9) | 71.1 | 79.2 | 75.6 (-3.6)
Cars | 59.6 | 79.3 (+19.7) | 84.4 (+24.8) | 64.7 | 83.7 (+19.0) | 84.5 | 77.9 | 89.6 (+11.7)
SVHN | 13.4 | 27.7 (+14.3) | 38.8 (+25.4) | 31.3 | 38.5 (+7.2) | 36.2 | 57.0 | 38.0 (-19.0)
FER2013 | 41.4 | 43.0 (+1.6) | 48.1 (+6.7) | 46.3 | 43.2 (-3.1) | 44.5 | 50.1 | 50.3 (+0.2)
RenderedSST2 | 58.6 | 52.3 (-6.3) | 54.3 (-4.3) | 60.5 | 54.4 (-6.1) | 57.9 | 68.9 | 56.0 (-12.9)
Pets | 87.3 | 86.9 (-0.4) | 89.2 (+1.9) | 89.0 | 89.2 (+0.2) | 90.3 | 93.3 | 91.9 (-1.4)
Caltech-101 | 81.6 | 83.2 (+1.6) | 83.1 (+1.5) | 82.2 | 83.6 (+1.4) | 83.2 | 83.3 | 84.0 (+0.7)
VOC2007-Cl | 76.4 | 75.8 (-0.6) | 78.8 (+2.4) | 78.3 | 76.8 (-1.5) | 76.4 | 78.3 | 75.6 (-2.7)
SUN397 | 62.5 | 67.0 (+4.5) | 68.5 (+6.0) | 64.4 | 69.6 (+5.2) | 69.8 | 67.6 | 72.6 (+5.0)
FGVC Aircraft | 19.6 | 16.7 (-2.9) | 23.1 (+3.5) | 24.3 | 17.7 (-6.6) | 18.5 | 31.8 | 25.0 (-6.8)
Country211 | 17.2 | 14.8 (-2.4) | 16.5 (-0.7) | 22.8 | 18.1 (-4.7) | 18.9 | 31.9 | 23.0 (-8.9)
DTD | 44.3 | 54.6 (+10.3) | 53.9 (+9.6) | 44.9 | 51.3 (+6.4) | 55.5 | 55.3 | 60.5 (+5.2)
GTSRB | 32.6 | 42.0 (+9.4) | 36.5 (+3.9) | 43.3 | 43.5 (+0.2) | 49.4 | 50.6 | 49.9 (-0.7)
STL10 | 97.1 | 95.6 (-1.5) | 96.5 (-0.6) | 98.2 | 97.0 (-1.2) | 97.0 | 99.4 | 98.1 (-1.3)
Retino | 45.5 | 24.2 (-21.3) | 19.1 (-26.4) | 3.3 | 7.4 (+4.1) | 9.2 | 73.3 | 6.0 (-67.3)
EuroSAT | 50.4 | 51.5 (+1.1) | 50.3 (-0.1) | 55.9 | 50.3 (-5.6) | 58.2 | 62.6 | 62.3 (-0.3)
RESISC45 | 53.6 | 54.5 (+0.9) | 61.9 (+8.3) | 58.2 | 58.5 (+0.3) | 61.4 | 63.4 | 67.4 (+4.0)
PCAM | 62.3 | 55.9 (-6.4) | 50.7 (-11.6) | 50.7 | 59.6 (+8.9) | 55.2 | 52.0 | 49.6 (-2.4)
CLEVR Counts | 23.2 | 16.2 (-7.0) | 19.2 (-4.0) | 21.2 | 28.7 (+7.5) | 23.9 | 19.4 | 24.2 (+4.8)
CLEVR Dist | 16.3 | 15.9 (-0.4) | 16.8 (+0.5) | 15.8 | 24.5 (+8.7) | 15.9 | 16.1 | 14.9 (-1.2)
DSPRITES Orient | 2.4 | 1.9 (-0.5) | 2.3 (-0.1) | 2.3 | 2.9 (+0.6) | 2.7 | 2.3 | 2.6 (+0.3)
DSPRITES pos | 3.6 | 2.8 (-0.8) | 3.1 (-0.5) | 3.0 | 3.2 (+0.2) | 4.3 | 3.2 | 3.0 (-0.2)
SmallNORB Elv | 12.7 | 9.9 (-2.8) | 11.0 (-1.7) | 12.2 | 10.0 (-2.2) | 11.0 | 11.5 | 11.0 (-0.5)
SmallNORB Azim | 6.1 | 4.5 (-1.6) | 5.2 (-0.9) | 5.2 |
${\color[rgb]{0,0,0}{6.0}}^{\tiny\color[rgb]{0,0.88,0}\textbf{+0.8}}$ | 5.5 | 4.5 | ${\color[rgb]{0,0,0}{5.3}}^{\tiny\color[rgb]{0,0.88,0}\textbf{+0.8}}$ DMLAB | 19.3 | ${\color[rgb]{0,0,0}{17.3}}^{\tiny\color[rgb]{1,0,0}\textbf{-2.0}}$ | ${\color[rgb]{0,0,0}{18.9}}^{\tiny\color[rgb]{1,0,0}\textbf{-0.4}}$ | 15.5 | ${\color[rgb]{0,0,0}{15.1}}^{\tiny\color[rgb]{1,0,0}\textbf{-0.4}}$ | 14.8 | 16.3 | ${\color[rgb]{0,0,0}{18.7}}^{\tiny\color[rgb]{0,0.88,0}\textbf{+2.4}}$ KITTI Dist | 27.4 | ${\color[rgb]{0,0,0}{28.8}}^{\tiny\color[rgb]{0,0.88,0}\textbf{+1.4}}$ | ${\color[rgb]{0,0,0}{17.6}}^{\tiny\color[rgb]{1,0,0}\textbf{-9.8}}$ | 26.4 | ${\color[rgb]{0,0,0}{18.1}}^{\tiny\color[rgb]{1,0,0}\textbf{-8.3}}$ | 28.1 | 21.8 | ${\color[rgb]{0,0,0}{20.1}}^{\tiny\color[rgb]{1,0,0}\textbf{-1.7}}$ VTAB+(Avg.) | 45.4 | ${\color[rgb]{0,0,0}{45.6}}^{\tiny\color[rgb]{0,0.88,0}\textbf{+0.2}}$ | ${\color[rgb]{0,0,0}{47.9}}^{\tiny\color[rgb]{0,0.88,0}\textbf{+2.5}}$ | 47.5 | ${\color[rgb]{0,0,0}{48.3}}^{\tiny\color[rgb]{0,0.88,0}\textbf{+0.8}}$ | 49.2 | 55.7 | ${\color[rgb]{0,0,0}{51.8}}^{\tiny\color[rgb]{1,0,0}\textbf{-3.9}}$ Table 6: Comparison between CLIP models trained on LAION (400M, 2B) and the original CLIP models [58] trained on OpenAI’s WebImageText (WIT) dataset. We show zero-shot top-1 classification accuracy (%) on the 35 datasets that are part of VTAB+. We highlight the difference (+/-) between LAION models and original CLIP WIT models for each model size (except B/16+, for which there is no CLIP WIT checkpoint). | | Flickr30K (1K test set) ---|---|--- Model | Pre-training | Image → Text | Text → Image | | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 ViT-B/32 | CLIP WIT | 77.5 | 94.7 | 98.2 | 58.8 | 83.3 | 89.7 | LAION-400M | 78.9 | 94.0 | 97.1 | 61.7 | 85.5 | 90.9 | LAION-2B-en | 84.3 | 96.3 | 98.4 | 66.3 | 88.2 | 93.2 ViT-B/16 | CLIP WIT | 81.9 | 96.2 | 98.8 | 81.9 | 96.2 | 98.8 | LAION-400M | 83.3 | 96.8 | 98.5 | 65.5 | 88.3 | 93.0 ViT-B/16+ | LAION-400M | 86.5 | 97.1 | 98.8 | 68.0 | 88.9 | 94.0 ViT-L/14 | CLIP WIT | 85.1 | 97.3 | 99.0 | 65.2 | 87.3 | 92.0 | LAION-400M | 87.6 | 97.7 | 99.5 | 70.3 | 90.9 | 94.6 | | | | | | | Table 7: CLIP Zero-Shot retrieval results on the Flickr30K test set. We show retrieval performance at 1, 5, and 10 samples for both image to text and text to image. | | MSCOCO (5K test set) ---|---|--- Model | Pre-training | Image → Text | Text → Image | | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 ViT-B/32 | CLIP WIT | 50.0 | 75.0 | 83.3 | 30.4 | 54.8 | 66.1 | LAION-400M | 53.5 | 77.2 | 85.4 | 34.9 | 60.3 | 71.1 | LAION-2B-en | 56.4 | 79.6 | 87.4 | 38.7 | 64.1 | 74.4 ViT-B/16 | CLIP WIT | 51.7 | 76.8 | 84.3 | 32.7 | 57.8 | 68.2 | LAION-400M | 56.5 | 80.4 | 87.3 | 37.9 | 63.2 | 73.3 ViT-B/16+ | LAION-400M | 58.6 | 81.6 | 88.4 | 40.0 | 65.5 | 75.1 ViT-L/14 | CLIP WIT | 56.0 | 79.5 | 86.9 | 35.3 | 60.0 | 70.2 | LAION-400M | 59.3 | 81.9 | 89.0 | 42.0 | 67.2 | 76.6 | | | | | | | Table 8: CLIP Zero-Shot retrieval results on the MSCOCO test set. We show retrieval performance at 1, 5, and 10 samples for both image to text and text to image. ##### Diabetic Retinopathy We observe a large variation of performance on Diabetic Retinopathy [22] (noted Retino). Accuracy goes from 3% to 73.3% for CLIP WIT models, and from 7.4% to 24.2% for LAION models. Additionally, the difference between CLIP WIT and LAION models goes up to 67.3% (on L/14). 
After investigating, we found that for the low-accuracy models, performance on the majority class is very low (e.g., for the ViT-B/16 LAION model, recall on the majority class was 3.4%); since the dataset is highly imbalanced (the majority class constitutes 74% of the samples), overall accuracy suffers heavily. A possible reason for the low performance is the choice of prompts, so tuning the prompts could alleviate the problem. We re-evaluated the models using mean per-class recall, and found that the performances are less disparate, with a maximum difference of 2.1% between CLIP WIT models and LAION models. Overall, the results remain quite low: the best mean per-class recall was 25.4%, obtained with ViT-B/32 trained on LAION-400M.

##### Structured tasks

Similarly to [94], we observe low accuracy on VTAB's structured tasks [91] (CLEVR, DSPRITES, SmallNORB, DMLAB, KITTI), which involve counting, depth prediction, or position/angle prediction. Finding ways to improve accuracy on those tasks is an open research question [94] that would be interesting to investigate in future work.

##### Retrieval

We observe consistent improvements of LAION models over CLIP WIT models on the MSCOCO 5K test set (Tab. 8) across all metrics and model sizes. On Flickr30K (Tab. 7), we observe similar or better results with LAION models, with the exception of image retrieval on ViT-B/16, where the CLIP WIT model is better. It would be interesting to investigate why LAION models have an advantage, and whether the advantage is general or specific to the datasets considered in this work. Overall, we obtain better results than the best reported results in [58]: e.g., on MSCOCO text retrieval we obtain 59.3% vs. 58.4% for CLIP WIT, and on image retrieval we obtain 42.0% vs. 37.8% for CLIP WIT, both evaluated using the R@1 metric.

## Appendix F Overview of Experiments and Results on Generative Models

Here we provide an overview of training experiments performed with generative models, GLIDE and Stable Diffusion, using subsets of LAION-5B.

### F.1 GLIDE

OpenAI released checkpoints for the GLIDE [52] architecture to the public, but only checkpoints trained on a filtered dataset with hate symbols and humans removed. These models are broadly capable, but cannot generate imagery of humans. To evaluate the LAION dataset and its generalization capabilities, we aim to re-introduce the ability to generate imagery of humans into these checkpoints by finetuning them on LAION-5B. We finetune the released GLIDE 64-pixel base (filtered) checkpoint from OpenAI on LAION-5B. For upscaling from 64x64 images to 256x256 images, we use the unmodified weights from OpenAI GLIDE-upsample-filtered. During training, captions were randomly replaced with the unconditional token 20% of the time.
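The caption-dropout step above is simple to implement. The following is a minimal sketch, assuming a plain-string placeholder for the unconditional token; the actual token representation depends on GLIDE's tokenizer and conditioning setup and is an illustrative assumption here.

```python
import random

# Assumption: the unconditional token is represented as an empty caption;
# the real value depends on GLIDE's tokenizer/conditioning setup.
UNCOND = ""

def drop_captions(captions, p_uncond=0.2, rng=random.Random(0)):
    # Replace each caption with the unconditional token with probability
    # p_uncond (20% in the finetuning recipe above), so the model learns
    # both conditional and unconditional generation.
    return [UNCOND if rng.random() < p_uncond else c for c in captions]

batch = ["a dog on a beach", "a red car", "two people hiking", "a bowl of soup"]
print(drop_captions(batch))  # roughly one in five captions becomes unconditional
```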
# Tucker Tensor Regression and Neuroimaging Analysis

Xiaoshan Li, Hua Zhou and Lexin Li

North Carolina State University

###### Abstract

Large-scale neuroimaging studies have been collecting brain images of study individuals, which take the form of two-dimensional, three-dimensional, or higher-dimensional arrays, also known as tensors. Addressing scientific questions arising from such data demands new regression models that take multidimensional arrays as covariates. Simply turning an image array into a long vector causes extremely high dimensionality that compromises classical regression methods and, more seriously, destroys the inherent spatial structure of array data that possesses a wealth of information. In this article, we propose a family of generalized linear tensor regression models based upon the Tucker decomposition of regression coefficient arrays. Effectively exploiting the low-rank structure of tensor covariates brings the ultrahigh dimensionality down to a manageable level, which leads to efficient estimation. We demonstrate numerically that the new model can provide a sound recovery of even high-rank signals, and asymptotically that it consistently estimates the best Tucker-structure approximation to the full array model in the sense of Kullback–Leibler distance. The new model is also compared to a recently proposed tensor regression model that relies upon an alternative CANDECOMP/PARAFAC (CP) decomposition.

Address for correspondence: Lexin Li, Department of Statistics, North Carolina State University, Box 8203, Raleigh, NC 27695-8203. Email<EMAIL_ADDRESS>

Key Words: CP decomposition; magnetic resonance image; tensor; Tucker decomposition.

## 1 Introduction

Advancing technologies are constantly producing large-scale scientific data with complex structures. An important class arises from medical imaging, where the data take the form of a multidimensional array, also known as a _tensor_. Notable examples include electroencephalography (EEG, 2D matrix), anatomical magnetic resonance images (MRI, 3D array), and functional magnetic resonance images (fMRI, 4D array), among other imaging modalities. In medical imaging data analysis, a primary goal is to better understand associations between the brain and clinical outcomes. Applications include using brain images to diagnose neurodegenerative disorders, to predict the onset of neuropsychiatric diseases, and to identify disease-relevant brain regions or activity patterns. This family of problems can collectively be formulated as a regression with the clinical outcome as the response and the image, or tensor, as the predictor. However, the sheer size and complex structure of the image covariate pose unusual challenges, which motivate us to develop a new class of regression models with image covariates.

Most classical regression models take a vector as the covariate. Naively turning an image array into a vector is evidently unsatisfactory. For instance, a typical MRI image of size 128-by-128-by-128 implicitly requires $128^{3}=2,097,152$ regression parameters. Both the computability and the theoretical guarantees of classical regression models are severely compromised by this ultrahigh dimensionality. More seriously, vectorizing an array destroys the inherent spatial structure of the image array, which usually possesses abundant information.
A typical solution in the literature first employs subject knowledge to extract a vector of features from the images, and then feeds the feature vector into a classical regression model (Mckeown et al., 1998; Blankertz et al., 2001; Haxby et al., 2001; Kontos et al., 2003; Mitchell et al., 2004; LaConte et al., 2005; Shinkareva et al., 2006). Alternatively, one first applies unsupervised dimension reduction, often some variant of principal components analysis, to the image array, and then fits a regression model in the reduced-dimensional vector space (Caffo et al., 2010). Both solutions are intuitive and popular, and have enjoyed varying degrees of success. At heart, both transform the problem into a classical vector-covariate regression. However, there is no consensus on what choice best summarizes a brain image, even for a single modality, whereas unsupervised dimension reduction such as principal components could result in information loss in a regression setup. In contrast to constructing an image feature vector, the functional approach views an image as a function and then employs functional regression models (Ramsay and Silverman, 2005). Reiss and Ogden (2010) notably applied this idea to regression with a 2D image predictor. Extending their method to 3D and higher-dimensional images, however, is far from trivial and requires substantial research, due to the large number of parameters and the multi-collinearity among imaging measures. In a recent work, Zhou et al. (2013) proposed a class of generalized linear _tensor_ regression models. Specifically, for a response variable $Y$, a vector predictor ${\bm{Z}}\in\mathrm{I\\!R}\mathit{{}^{p_{0}}}$ and a $D$-dimensional tensor predictor ${\bm{X}}\in\mathrm{I\\!R}\mathit{{}^{p_{1}\times\ldots\times p_{D}}}$, the response is assumed to belong to an exponential family where the linear systematic part is of the form,

$\displaystyle g(\mu)=\mbox{\boldmath$\gamma$}^{\mbox{\tiny{\sf T}}}{\bm{Z}}+\langle{\bm{B}},{\bm{X}}\rangle.$ (1)

Here $g(\cdot)$ is a strictly increasing link function, $\mu=E(Y|{\bm{X}},{\bm{Z}})$, $\mbox{\boldmath$\gamma$}\in\mathrm{I\\!R}\mathit{{}^{p_{0}}}$ is the regular regression coefficient vector, ${\bm{B}}\in\mathrm{I\\!R}\mathit{{}^{p_{1}\times\cdots\times p_{D}}}$ is the coefficient array that captures the effects of the tensor covariate ${\bm{X}}$, and the inner product between two arrays is defined as $\langle{\bm{B}},{\bm{X}}\rangle=\langle\mathrm{vec}{\bm{B}},\mathrm{vec}{\bm{X}}\rangle=\sum_{i_{1},\ldots,i_{D}}\beta_{i_{1}\ldots i_{D}}x_{i_{1}\ldots i_{D}}$. This model, without further simplification, is prohibitive given its gigantic dimensionality: $p_{0}+\prod_{d=1}^{D}p_{d}$. Motivated by a commonly used tensor decomposition, Zhou et al. (2013) introduced a low-rank structure on the coefficient array ${\bm{B}}$. That is, ${\bm{B}}$ is assumed to follow a rank-$R$ CANDECOMP/PARAFAC (CP) decomposition (Kolda and Bader, 2009),

$\displaystyle{\bm{B}}=\sum_{r=1}^{R}\mbox{\boldmath$\beta$}_{1}^{(r)}\circ\cdots\circ\mbox{\boldmath$\beta$}_{D}^{(r)},$ (2)

where $\mbox{\boldmath$\beta$}_{d}^{(r)}\in\mathrm{I\\!R}\mathit{{}^{p_{d}}}$ are all column vectors, $d=1,\ldots,D$, $r=1,\ldots,R$, and $\circ$ denotes the outer product among vectors.
Here the outer product ${\bm{b}}_{1}\circ{\bm{b}}_{2}\circ\cdots\circ{\bm{b}}_{D}$ of $D$ vectors ${\bm{b}}_{d}\in\mathrm{I\\!R}\mathit{{}^{p_{d}}}$, $d=1,\ldots,D$, is defined as the $p_{1}\times\cdots\times p_{D}$ array with entries $({\bm{b}}_{1}\circ{\bm{b}}_{2}\circ\cdots\circ{\bm{b}}_{D})_{i_{1}\cdots i_{D}}=\prod_{d=1}^{D}b_{di_{d}}$. For convenience, this CP decomposition is often represented by the shorthand ${\bm{B}}=\llbracket{\bm{B}}_{1},\ldots,{\bm{B}}_{D}\rrbracket$, where ${\bm{B}}_{d}=[\mbox{\boldmath$\beta$}_{d}^{(1)},\ldots,\mbox{\boldmath$\beta$}_{d}^{(R)}]\in\mathrm{I\\!R}\mathit{{}^{p_{d}\times R}}$, $d=1,\ldots,D$. Combining (1) and (2) yields the generalized linear tensor regression models of Zhou et al. (2013), where the dimensionality decreases to the scale of $p_{0}+R\times\sum_{d=1}^{D}p_{d}$. Under this setup, the ultrahigh dimensionality of (1) is reduced to a manageable level, which in turn results in efficient estimation and prediction. For instance, for a regression with a 128-by-128-by-128 MRI image and 5 usual covariates, the dimensionality is reduced from the order of $2,097,157=5+128^{3}$ to $389=5+128\times 3$ for a rank-1 model, and to $1,157=5+3\times 128\times 3$ for a rank-3 model. Zhou et al. (2013) showed that this low-rank tensor model can provide a sound recovery of even high-rank signals. In the tensor literature, there has been an important development parallel to the CP decomposition, called the Tucker decomposition, or higher-order singular value decomposition (HOSVD) (Kolda and Bader, 2009). In this article, we propose a class of _Tucker tensor regression models_. To differentiate, we call the models of Zhou et al. (2013) _CP tensor regression models_. Specifically, we continue to adopt the model (1), but assume that the coefficient array ${\bm{B}}$ follows a Tucker decomposition,

$\displaystyle{\bm{B}}=\sum_{r_{1}=1}^{R_{1}}\cdots\sum_{r_{D}=1}^{R_{D}}g_{r_{1},\ldots,r_{D}}\mbox{\boldmath$\beta$}_{1}^{(r_{1})}\circ\cdots\circ\mbox{\boldmath$\beta$}_{D}^{(r_{D})},$ (3)

where $\mbox{\boldmath$\beta$}_{d}^{(r_{d})}\in\mathrm{I\\!R}\mathit{{}^{p_{d}}}$ are all column vectors, $d=1,\ldots,D$, $r_{d}=1,\ldots,R_{d}$, and the $g_{r_{1},\ldots,r_{D}}$ are constants. It is often abbreviated as ${\bm{B}}=\llbracket{\bm{G}};{\bm{B}}_{1},\ldots,{\bm{B}}_{D}\rrbracket$, where ${\bm{G}}\in\mathrm{I\\!R}\mathit{{}^{R_{1}\times\cdots\times R_{D}}}$ is a $D$-dimensional _core tensor_ with entries $({\bm{G}})_{r_{1}\ldots r_{D}}=g_{r_{1},\ldots,r_{D}}$, and ${\bm{B}}_{d}\in\mathrm{I\\!R}\mathit{{}^{p_{d}\times R_{d}}}$ are the factor matrices. The ${\bm{B}}_{d}$'s are usually orthogonal and can be thought of as the _principal components_ in each dimension (hence the name HOSVD). The number of parameters of a Tucker tensor model is on the order of $p_{0}+\sum_{d=1}^{D}R_{d}\times p_{d}$. Comparing the two decompositions (2) and (3), the key difference is that CP fixes the number of basis vectors $R$ along each dimension of ${\bm{B}}$, so that all ${\bm{B}}_{d}$'s have the _same_ number of columns (ranks). In contrast, Tucker allows the number $R_{d}$ to differ along different dimensions, and the ${\bm{B}}_{d}$'s can have _different_ ranks. This difference between the two decompositions seems minor; however, in the context of tensor regression modeling and neuroimaging analysis, it has profound implications, and such implications motivate this article.
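To make the two decompositions concrete, the following is a minimal numpy sketch (with illustrative dimensions) that builds a coefficient array ${\bm{B}}$ from the CP decomposition (2) and from the Tucker decomposition (3); it also checks that CP is the Tucker special case with equal orders and a superdiagonal core, as noted later in Section 2.2.

```python
import numpy as np

rng = np.random.default_rng(0)
p1, p2, p3 = 16, 16, 16      # illustrative tensor dimensions (D = 3)
R = 3                        # CP rank
R1, R2, R3 = 2, 2, 5         # Tucker orders, allowed to differ per mode

# CP (2): B = sum_r beta_1^(r) o beta_2^(r) o beta_3^(r)
B1, B2, B3 = (rng.standard_normal((p, R)) for p in (p1, p2, p3))
B_cp = np.einsum('ir,jr,kr->ijk', B1, B2, B3)

# Tucker (3): B = sum_{r1,r2,r3} g_{r1 r2 r3} beta_1^(r1) o beta_2^(r2) o beta_3^(r3)
G = rng.standard_normal((R1, R2, R3))   # core tensor
U1 = rng.standard_normal((p1, R1))
U2 = rng.standard_normal((p2, R2))
U3 = rng.standard_normal((p3, R3))
B_tucker = np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3)

# CP as a Tucker special case: equal orders and a superdiagonal core.
G_diag = np.zeros((R, R, R))
for r in range(R):
    G_diag[r, r, r] = 1.0
assert np.allclose(np.einsum('abc,ia,jb,kc->ijk', G_diag, B1, B2, B3), B_cp)
```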
On one hand, the Tucker tensor regression model shares the advantages of the CP tensor regression model, in that it effectively exploits the special structure of the tensor data, substantially reduces the dimensionality to enable efficient model estimation, and provides a sound low-rank approximation to a potentially high-rank signal. On the other hand, Tucker tensor regression offers a much more _flexible_ modeling framework than CP regression, as it allows distinct orders along each dimension. When the orders are all identical, it includes the CP model as a special case. This flexibility leads to several improvements that are particularly useful for neuroimaging analysis. First, a Tucker model can be more parsimonious than a CP model thanks to the flexibility of different orders. For instance, suppose a 3D signal ${\bm{B}}\in\mathrm{I\\!R}\mathit{{}^{16\times 16\times 16}}$ admits a Tucker decomposition (3) with $R_{1}=R_{2}=2$ and $R_{3}=5$. It can only be recovered by a CP decomposition with $R=5$, costing 230 parameters. In contrast, the Tucker model is more parsimonious, with only 131 parameters. This reduction of free parameters is valuable for medical imaging studies, as the number of subjects is often limited. Second, the freedom in the choice of different orders is useful when the tensor data is skewed in dimensions, which is common in neuroimaging data. For instance, in EEG, the two dimensions consist of electrodes (channels) and time, and the number of sampling time points usually far exceeds the number of channels. Third, even when all tensor modes have comparable sizes, the Tucker formulation explicitly models the interactions between the factor matrices ${\bm{B}}_{d}$'s, and as such allows a finer grid search within a larger model space, which in turn may explain more trait variance. Finally, as we will show in Section 2.3, there exists a duality regarding the Tucker tensor model. Thanks to this duality, a Tucker tensor decomposition naturally lends itself to a principled way of downsizing imaging data, which, given the often limited sample size, again plays a practically useful role in neuroimaging analysis. For these reasons, we feel it important to develop a complete methodology of Tucker tensor regression and its associated theory. The resulting Tucker tensor model carries a number of useful features. It performs dimension reduction through a low-rank tensor decomposition, but in a supervised fashion, and as such avoids potential information loss in regression. It works for general array-valued image modalities and/or any combination of them, and for various types of responses, including continuous, binary, and count data. Besides, an efficient and highly scalable algorithm has been developed for the associated maximum likelihood estimation. This scalability is important considering the massive scale of imaging data. In addition, regularization has been studied in conjunction with the proposed model, yielding a collection of regularized Tucker tensor models, particularly one that encourages sparsity of the core tensor to facilitate model selection within the defined Tucker model space.

Recently, there has been increasing interest in matrix/tensor decompositions and their applications in brain imaging studies (Crainiceanu et al., 2011; Allen et al., 2011; Hoff, 2011; Aston and Kirch, 2012). Nevertheless, this article is distinct in that we concentrate on a regression framework with a scalar response and tensor-valued covariates.
In contrast, Crainiceanu et al. (2011) and Allen et al. (2011) studied unsupervised decomposition, Hoff (2011) considered model-based decomposition, whereas Aston and Kirch (2012) focused on change-point distribution estimation. The work most closely related to this article is Zhou et al. (2013); however, we feel our work is _not_ a simple extension of theirs. First of all, considering the complex nature of tensors, the development of the Tucker model estimation, as well as its asymptotics, is far from a trivial extension of the CP model of Zhou et al. (2013). Moreover, we offer a detailed comparison, both analytically (in Section 2.4) and numerically (in Sections 6.3 and 6.4), of the CP and Tucker decompositions in the context of regression with imaging/tensor covariates. We believe this comparison is crucial for an adequate comprehension of tensor regression models and supervised tensor decomposition in general.

The rest of the article is organized as follows. Section 2 begins with a brief review of some preliminaries on tensors, and then presents the Tucker tensor regression model. Section 3 develops an efficient algorithm for maximum likelihood estimation. Section 4 derives inferential tools such as the score, Fisher information, identifiability, consistency, and asymptotic normality. Section 5 investigates regularization methods for Tucker regression. Section 6 presents extensive numerical results. Section 7 concludes with some discussion and points to future extensions. All technical proofs are delegated to the Appendix.

## 2 Model

### 2.1 Preliminaries

We start with a brief review of some matrix/array operations and results. Extensive references can be found in the survey paper by Kolda and Bader (2009). A _tensor_ is a multidimensional array. _Fibers_ of a tensor are the higher-order analogues of matrix rows and columns. A fiber is defined by fixing every index but one. A matrix column is a mode-1 fiber and a matrix row is a mode-2 fiber. Third-order tensors have column, row, and tube fibers, respectively. We next review some important operators that transform a tensor into a vector/matrix. The _vec operator_ stacks the entries of a $D$-dimensional tensor ${\bm{B}}\in\mathrm{I\\!R}\mathit{{}^{p_{1}\times\cdots\times p_{D}}}$ into a column vector. Specifically, an entry $b_{i_{1}\ldots i_{D}}$ maps to the $j$-th entry of $\mathrm{vec}\,{\bm{B}}$, where $j=1+\sum_{d=1}^{D}(i_{d}-1)\prod_{d^{\prime}=1}^{d-1}p_{d^{\prime}}$. For instance, when $D=2$, the matrix entry at cell $(i_{1},i_{2})$ maps to position $j=1+i_{1}-1+(i_{2}-1)p_{1}=i_{1}+(i_{2}-1)p_{1}$, which is consistent with the more familiar $\mathrm{vec}$ operator on a matrix. The _mode-$d$ matricization_, ${\bm{B}}_{(d)}$, maps a tensor ${\bm{B}}$ into a $p_{d}\times\prod_{d^{\prime}\neq d}p_{d^{\prime}}$ matrix such that the $(i_{1},\ldots,i_{D})$ element of the array ${\bm{B}}$ maps to the $(i_{d},j)$ element of the matrix ${\bm{B}}_{(d)}$, where $j=1+\sum_{d^{\prime}\neq d}(i_{d^{\prime}}-1)\prod_{d^{\prime\prime}<d^{\prime},d^{\prime\prime}\neq d}p_{d^{\prime\prime}}$. We observe that $\mathrm{vec}\,{\bm{B}}$ is the same as vectorizing the mode-1 matricization ${\bm{B}}_{(1)}$. The _mode-$(d,d^{\prime})$ matricization_ ${\bm{B}}_{(dd^{\prime})}\in\mathrm{I\\!R}\mathit{{}^{p_{d}p_{d^{\prime}}\times\prod_{d^{\prime\prime}\neq d,d^{\prime}}p_{d^{\prime\prime}}}}$ is defined in a similar fashion.
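These two operators amount to column-major reshapes; the following is a minimal numpy sketch (0-based indexing, illustrative shapes).

```python
import numpy as np

def vec(B):
    # Stack entries in column-major order, matching the index map
    # j = 1 + sum_d (i_d - 1) prod_{d' < d} p_{d'} (0-based here).
    return B.flatten(order='F')

def unfold(B, d):
    # Mode-d matricization B_(d): bring mode d to the front, then lay the
    # remaining modes out column-major along the columns.
    return np.reshape(np.moveaxis(B, d, 0), (B.shape[d], -1), order='F')

B = np.arange(24).reshape(2, 3, 4, order='F')   # small 2x3x4 tensor
print(vec(B)[:6])                                # [0 1 2 3 4 5]
print(unfold(B, 1).shape)                        # (3, 8)
# vec(B) equals vectorizing the mode-1 matricization, as noted above.
assert np.array_equal(vec(B), unfold(B, 0).flatten(order='F'))
```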
We then define the _mode-$d$ multiplication_ of the tensor ${\bm{B}}$ with a matrix ${\bm{U}}\in\mathrm{I\\!R}\mathit{{}^{q\times p_{d}}}$, denoted by ${\bm{B}}\times_{d}{\bm{U}}\in\mathrm{I\\!R}\mathit{{}^{p_{1}\times\cdots\times q\times\cdots\times p_{D}}}$, as the multiplication of the mode-$d$ fibers of ${\bm{B}}$ by ${\bm{U}}$. In other words, the mode-$d$ matricization of ${\bm{B}}\times_{d}{\bm{U}}$ is ${\bm{U}}{\bm{B}}_{(d)}$. We also review two properties of a tensor ${\bm{B}}$ that admits a Tucker decomposition (3). The mode-$d$ matricization of ${\bm{B}}$ can be expressed as

$\displaystyle{\bm{B}}_{(d)}={\bm{B}}_{d}{\bm{G}}_{(d)}({\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{d+1}\otimes{\bm{B}}_{d-1}\otimes\cdots\otimes{\bm{B}}_{1})^{\mbox{\tiny{\sf T}}},$

where $\otimes$ denotes the Kronecker product of matrices. Applying the $\mathrm{vec}$ operator to ${\bm{B}}$ gives

$\displaystyle\mathrm{vec}{\bm{B}}=\mathrm{vec}{\bm{B}}_{(1)}=\mathrm{vec}({\bm{B}}_{1}{\bm{G}}_{(1)}({\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{2})^{\mbox{\tiny{\sf T}}})=({\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{1})\mathrm{vec}{\bm{G}}.$

These two properties are useful for our subsequent Tucker regression development.

### 2.2 Tucker Regression Model

We elaborate on the Tucker tensor regression model introduced in Section 1. We assume that $Y$ belongs to an exponential family with probability mass function or density (McCullagh and Nelder, 1983),

$\displaystyle p(y_{i}|\theta_{i},\phi)=\exp\left\\{\frac{y_{i}\theta_{i}-b(\theta_{i})}{a(\phi)}+c(y_{i},\phi)\right\\}$

with the first two moments $E(Y_{i})=\mu_{i}=b^{\prime}(\theta_{i})$ and $\mathrm{Var}(Y_{i})=\sigma_{i}^{2}=b^{\prime\prime}(\theta_{i})a_{i}(\phi)$. Here $\theta$ and $\phi>0$ are called the natural and dispersion parameters, respectively. We assume the systematic part of the GLM is of the form

$\displaystyle g(\mu)=\eta=\mbox{\boldmath$\gamma$}^{\mbox{\tiny{\sf T}}}{\bm{Z}}+\langle\sum_{r_{1}=1}^{R_{1}}\cdots\sum_{r_{D}=1}^{R_{D}}g_{r_{1},\ldots,r_{D}}\mbox{\boldmath$\beta$}_{1}^{(r_{1})}\circ\cdots\circ\mbox{\boldmath$\beta$}_{D}^{(r_{D})},{\bm{X}}\rangle.$ (4)

That is, we impose a Tucker structure on the array coefficient ${\bm{B}}$. We make a few remarks. First, in this article, we consider the problem of estimating the core tensor ${\bm{G}}$ and the factor matrices ${\bm{B}}_{d}$ simultaneously, given the response $Y$ and covariates ${\bm{X}}$ and ${\bm{Z}}$. This can be viewed as a _supervised_ version of the classical unsupervised Tucker decomposition. It is also a supervised version of principal components analysis for higher-order multidimensional arrays. Unlike a two-stage solution that first performs principal components analysis and then fits a regression model, the bases (principal components) ${\bm{B}}_{d}$ in our model are estimated under the guidance (supervision) of the response variable. Second, the CP model of Zhou et al. (2013) corresponds to a special case of the Tucker model (4) with $g_{r_{1},\ldots,r_{D}}=1_{\\{r_{1}=\cdots=r_{D}\\}}$ and $R_{1}=\ldots=R_{D}=R$. In other words, the CP model is a specific Tucker model with a super-diagonal core tensor ${\bm{G}}$. The CP model has rank at most $R$, while the general Tucker model can have a rank as high as $R^{D}$.

### 2.3 Duality and Tensor Basis Pursuit

Next we investigate a duality regarding the inner product between a general tensor and a tensor that admits a Tucker decomposition.

###### Lemma 1 (Duality).
Suppose a tensor ${\bm{B}}\in\mathrm{I\\!R}\mathit{{}^{p_{1}\times\cdots\times p_{D}}}$ admits the Tucker decomposition ${\bm{B}}=\llbracket{\bm{G}};{\bm{B}}_{1},\ldots,{\bm{B}}_{D}\rrbracket$. Then, for any tensor ${\bm{X}}\in\mathrm{I\\!R}\mathit{{}^{p_{1}\times\cdots\times p_{D}}}$, $\langle{\bm{B}},{\bm{X}}\rangle=\langle{\bm{G}},\tilde{\bm{X}}\rangle$, where $\tilde{\bm{X}}$ admits a Tucker decomposition $\tilde{\bm{X}}=\llbracket{\bm{X}};{\bm{B}}_{1}^{\mbox{\tiny{\sf T}}},\ldots,{\bm{B}}_{D}^{\mbox{\tiny{\sf T}}}\rrbracket$.

This duality gives some important insights into the Tucker tensor regression model. First, if we consider ${\bm{B}}_{d}\in\mathrm{I\\!R}\mathit{{}^{p_{d}\times R_{d}}}$ as fixed and known basis matrices, then Lemma 1 says that fitting the Tucker tensor regression model (4) is equivalent to fitting a tensor regression model in ${\bm{G}}$ with the _transformed_ data $\tilde{\bm{X}}=\llbracket{\bm{X}};{\bm{B}}_{1}^{\mbox{\tiny{\sf T}}},\ldots,{\bm{B}}_{D}^{\mbox{\tiny{\sf T}}}\rrbracket\in\mathrm{I\\!R}\mathit{{}^{R_{1}\times\cdots\times R_{D}}}$. When $R_{d}\ll p_{d}$, the transformed data $\tilde{\bm{X}}$ effectively _downsize_ the original data. We will further illustrate this downsizing feature in the real data analysis in Section 6.4. Second, in applications where the numbers of basis vectors $R_{d}$ are unknown, we can utilize possibly over-complete basis matrices ${\bm{B}}_{d}$ such that $R_{d}\geq p_{d}$, and then estimate ${\bm{G}}$ with sparsity regularization. This leads to a tensor version of the classical basis pursuit problem (Chen et al., 2001). Take fMRI data as an example. We can adopt the wavelet basis for the three image dimensions and the Fourier basis for the time dimension. Regularization on ${\bm{G}}$ can be achieved either by imposing a low-rank decomposition (CP or Tucker) on ${\bm{G}}$ (hard thresholding) or by penalized regression (soft thresholding). We will investigate Tucker regression regularization in detail in Section 5.

### 2.4 Model Size: Tucker vs CP

In this section we investigate the size of the Tucker tensor model. Comparison with the size of the CP tensor model helps gain a better understanding of both models. In addition, it provides a basis for data-adaptive selection of appropriate orders in a Tucker model. First we quickly review the number of free parameters $p_{\text{C}}$ of a CP model ${\bm{B}}=\llbracket{\bm{B}}_{1},\ldots,{\bm{B}}_{D}\rrbracket$, with ${\bm{B}}_{d}\in\mathrm{I\\!R}\mathit{{}^{p_{d}\times R}}$. For $D=2$, $p_{\text{C}}=R(p_{1}+p_{2})-R^{2}$, and for $D>2$, $p_{\text{C}}=R(\sum_{d=1}^{D}p_{d}-D+1)$. For $D=2$, the term $-R^{2}$ adjusts for the nonsingular transformation indeterminacy for model identifiability; for $D>2$, the term $R(-D+1)$ adjusts for the scaling indeterminacy in the CP decomposition. See Zhou et al. (2013) for more details. Following similar arguments, we obtain that the number of free parameters $p_{\text{T}}$ of a Tucker model ${\bm{B}}=\llbracket{\bm{G}};{\bm{B}}_{1},\ldots,{\bm{B}}_{D}\rrbracket$, with ${\bm{G}}\in\mathrm{I\\!R}\mathit{{}^{R_{1}\times\cdots\times R_{D}}}$ and ${\bm{B}}_{d}\in\mathrm{I\\!R}\mathit{{}^{p_{d}\times R_{d}}}$, is

$\displaystyle p_{\text{T}}=\sum_{d=1}^{D}p_{d}R_{d}+\prod_{d=1}^{D}R_{d}-\sum_{d=1}^{D}R_{d}^{2},$

for any $D$. Here the term $-\sum_{d=1}^{D}R_{d}^{2}$ adjusts for the nonsingular transformation indeterminacy in the Tucker decomposition. We summarize these results in Table 1.
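As a quick sanity check of these counts, a small sketch of the two formulas follows; it reproduces the example from Section 1, where the Tucker model with orders $(2,2,5)$ on a $16\times 16\times 16$ signal costs 131 free parameters versus 230 for the rank-5 CP model.

```python
from math import prod

def p_cp(p, R):
    # CP free parameters: R(p1 + p2) - R^2 for D = 2; R(sum_d p_d - D + 1) for D > 2.
    D = len(p)
    return R * (p[0] + p[1]) - R**2 if D == 2 else R * (sum(p) - D + 1)

def p_tucker(p, R):
    # Tucker free parameters: sum_d p_d R_d + prod_d R_d - sum_d R_d^2.
    return sum(pd * rd for pd, rd in zip(p, R)) + prod(R) - sum(rd**2 for rd in R)

print(p_tucker((16, 16, 16), (2, 2, 5)))  # 131
print(p_cp((16, 16, 16), 5))              # 230
```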
Next we compare the two model sizes (degrees of freedom) under the additional assumption that $R_{1}=\cdots=R_{D}=R$. The difference becomes:

$\displaystyle p_{\text{T}}-p_{\text{C}}=\begin{cases}0&\text{when }D=2,\\ R(R-1)(R-2)&\text{when }D=3,\\ R(R^{3}-4R+3)&\text{when }D=4,\\ R(R^{D-1}-DR+D-1)&\text{when }D>4.\end{cases}$

Based on this formula, when $D=2$, the Tucker model is essentially the same as the CP model. When $D=3$, Tucker has the same number of parameters as CP for $R=1$ or $R=2$, but costs $R(R-1)(R-2)$ more parameters for $R>2$. When $D>3$, Tucker and CP are the same for $R=1$, but Tucker costs more parameters than CP for $R\geq 2$, substantially so as $R$ grows. For instance, when $D=4$ and $R=3$, the Tucker model takes 54 more parameters than the CP model. However, one should bear in mind that the above discussion assumes $R_{1}=\cdots=R_{D}=R$. In reality, Tucker may require _fewer_ free parameters than CP, as shown in the illustrative example given in Section 1, since Tucker is more flexible and allows a different order $R_{d}$ along each dimension.

Table 1: Number of free parameters in the Tucker and CP models.

| | CP | Tucker |
|---|---|---|
| $D=2$ | $R(p_{1}+p_{2})-R^{2}$ | $p_{1}R_{1}+p_{2}R_{2}+R_{1}R_{2}-R_{1}^{2}-R_{2}^{2}$ |
| $D>2$ | $R(\sum_{d}p_{d}-D+1)$ | $\sum_{d}p_{d}R_{d}+\prod_{d}R_{d}-\sum_{d}R_{d}^{2}$ |

Figure 1 shows an example with $D=3$ dimensional array covariates. Half of the true signal (brain activity map) ${\bm{B}}$ is displayed in the left panel, which is by no means a low-rank signal. Suppose 3D images ${\bm{X}}_{i}$ are taken on $n=1,000$ subjects. We simulate image traits ${\bm{X}}_{i}$ from independent standard normals and quantitative traits $Y_{i}$ from independent normals with mean $\langle{\bm{X}}_{i},{\bm{B}}\rangle$ and unit variance. Given the limited sample size, the hope is to infer a reasonable low-rank approximation to the activity map from the 3D image covariates. The right panel displays the model deviance versus the degrees of freedom for a series of CP and Tucker model estimates. The CP model is estimated at ranks $R=1,\ldots,5$. The Tucker model is fitted at orders $(R_{1},R_{2},R_{3})=(1,1,1)$, $(2,2,2)$, $(3,3,3)$, $(4,4,3)$, $(4,4,4)$, $(5,4,4)$, $(5,5,4)$, and $(5,5,5)$. We see from the plot that, with the same number of free parameters, the Tucker model generally achieves a better model fit with a smaller deviance. (Note that the deviance is on the log scale, so a small discrepancy between the two lines translates into a large difference in deviance.)

Figure 1: Left: half of the true signal array ${\bm{B}}$. Right: deviances of CP regression estimates at $R=1,\ldots,5$ and Tucker regression estimates at orders $(R_{1},R_{2},R_{3})=(1,1,1)$, $(2,2,2)$, $(3,3,3)$, $(4,4,3)$, $(4,4,4)$, $(5,4,4)$, $(5,5,4)$, and $(5,5,5)$. The sample size is $n=1000$.

The explicit model size formula of the Tucker model is also useful for choosing appropriate orders $R_{d}$ along each direction given data. This can be treated as a model selection problem, and we can employ a typical model selection criterion, e.g., the Bayesian information criterion (BIC). It is of the form $-2\log\ell+\log(n)p_{e}$, where $\ell$ is the log-likelihood and $p_{e}=p_{\text{T}}$ is the effective number of parameters of the Tucker model as given in Table 1.
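For illustration, a minimal sketch of BIC-based order selection follows; the fitted log-likelihood values are hypothetical, and the effective parameter counts use the $p_{\text{T}}$ formula from Table 1 with $p_{1}=p_{2}=p_{3}=16$.

```python
import math

def bic(loglik, n, p_eff):
    # BIC = -2 * log-likelihood + log(n) * effective number of parameters,
    # with p_eff = p_T from Table 1 for a Tucker model.
    return -2.0 * loglik + math.log(n) * p_eff

n = 1000
# Hypothetical fitted log-likelihoods for candidate orders on a 16x16x16
# covariate; the p_T values are 46, 92, and 144, respectively.
candidates = {(1, 1, 1): (-1450.0, 46),
              (2, 2, 2): (-1250.0, 92),
              (3, 3, 3): (-1240.0, 144)}
best = min(candidates, key=lambda r: bic(candidates[r][0], n, candidates[r][1]))
print(best)  # (2, 2, 2) minimizes BIC for these illustrative values
```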
We will illustrate this BIC criterion in the numerical study of Section 6.1, and will discuss some heuristic guidelines for selecting orders in Section 6.4.

## 3 Estimation

We pursue maximum likelihood estimation (MLE) for the Tucker tensor regression model and develop a scalable estimation algorithm in this section. The key observation is that, although the systematic part (4) is not linear in ${\bm{G}}$ and the ${\bm{B}}_{d}$ _jointly_, it is linear in them _separately_. This naturally suggests a block relaxation algorithm, which updates each factor matrix ${\bm{B}}_{d}$ and the core tensor ${\bm{G}}$ _alternately_. The algorithm consists of two core steps. First, when updating ${\bm{B}}_{d}\in\mathrm{I\\!R}\mathit{{}^{p_{d}\times R_{d}}}$ with the remaining ${\bm{B}}_{d^{\prime}}$'s and ${\bm{G}}$ fixed, we rewrite the array inner product in (4) as

$\displaystyle\langle{\bm{B}},{\bm{X}}\rangle=\langle{\bm{B}}_{(d)},{\bm{X}}_{(d)}\rangle=\langle{\bm{B}}_{d}{\bm{G}}_{(d)}({\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{d+1}\otimes{\bm{B}}_{d-1}\otimes\cdots\otimes{\bm{B}}_{1})^{\mbox{\tiny{\sf T}}},{\bm{X}}_{(d)}\rangle=\langle{\bm{B}}_{d},{\bm{X}}_{(d)}({\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{d+1}\otimes{\bm{B}}_{d-1}\otimes\cdots\otimes{\bm{B}}_{1}){\bm{G}}_{(d)}^{\mbox{\tiny{\sf T}}}\rangle.$

Then the problem turns into a GLM regression with ${\bm{B}}_{d}$ as the “parameter” and the term ${\bm{X}}_{(d)}({\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{d+1}\otimes{\bm{B}}_{d-1}\otimes\cdots\otimes{\bm{B}}_{1}){\bm{G}}_{(d)}^{\mbox{\tiny{\sf T}}}$ as the “predictor”. It is a low-dimensional GLM with only $p_{d}R_{d}$ parameters and thus is easy to solve. Second, when updating ${\bm{G}}\in\mathrm{I\\!R}\mathit{{}^{R_{1}\times\cdots\times R_{D}}}$ with all ${\bm{B}}_{d}$'s fixed,

$\displaystyle\langle{\bm{B}},{\bm{X}}\rangle=\langle\mathrm{vec}{\bm{B}},\mathrm{vec}{\bm{X}}\rangle=\langle({\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{1})\mathrm{vec}{\bm{G}},\mathrm{vec}{\bm{X}}\rangle=\langle\mathrm{vec}{\bm{G}},({\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{1})^{\mbox{\tiny{\sf T}}}\mathrm{vec}{\bm{X}}\rangle.$

This implies a GLM regression with $\mathrm{vec}{\bm{G}}$ as the “parameter” and the term $({\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{1})^{\mbox{\tiny{\sf T}}}\mathrm{vec}{\bm{X}}$ as the “predictor”. Again, this is a low-dimensional regression problem with $\prod_{d}R_{d}$ parameters. For completeness, we summarize the above alternating estimation procedure in Algorithm 1. The orthogonality between the columns of the factor matrices ${\bm{B}}_{d}$ is not enforced as in unsupervised HOSVD, because our primary goal is approximating the tensor signal rather than finding the principal components along each mode.

Initialize: $\mbox{\boldmath$\gamma$}^{(0)}=\mbox{argmax}_{\mbox{\boldmath$\gamma$}}\,\ell(\mbox{\boldmath$\gamma$},{\bf 0},\ldots,{\bf 0})$; ${\bm{B}}_{d}^{(0)}\in\mathrm{I\\!R}\mathit{{}^{p_{d}\times R_{d}}}$ a random matrix for $d=1,\ldots,D$; and ${\bm{G}}^{(0)}\in\mathrm{I\\!R}\mathit{{}^{R_{1}\times\cdots\times R_{D}}}$ a random array.
repeat
for $d=1,\ldots,D$ do
${\bm{B}}_{d}^{(t+1)}=\mbox{argmax}_{{\bm{B}}_{d}}\,\ell(\mbox{\boldmath$\gamma$}^{(t)},{\bm{B}}_{1}^{(t+1)},\ldots,{\bm{B}}_{d-1}^{(t+1)},{\bm{B}}_{d},{\bm{B}}_{d+1}^{(t)},\ldots,{\bm{B}}_{D}^{(t)},{\bm{G}}^{(t)})$
end for
${\bm{G}}^{(t+1)}=\mbox{argmax}_{{\bm{G}}}\,\ell(\mbox{\boldmath$\gamma$}^{(t)},{\bm{B}}_{1}^{(t+1)},\ldots,{\bm{B}}_{D}^{(t+1)},{\bm{G}})$
$\mbox{\boldmath$\gamma$}^{(t+1)}=\mbox{argmax}_{\mbox{\boldmath$\gamma$}}\,\ell(\mbox{\boldmath$\gamma$},{\bm{B}}_{1}^{(t+1)},\ldots,{\bm{B}}_{D}^{(t+1)},{\bm{G}}^{(t+1)})$
until $\ell(\mbox{\boldmath$\theta$}^{(t+1)})-\ell(\mbox{\boldmath$\theta$}^{(t)})<\epsilon$

Algorithm 1: Block relaxation algorithm for fitting the Tucker tensor regression.

Next we study the convergence properties of the proposed algorithm. As the block relaxation algorithm monotonically increases the objective value, the stopping criterion is well defined, and the convergence properties of the iterates follow from the standard theory for monotone algorithms (de Leeuw, 1994; Lange, 2010). The proof of the next result is given in the Appendix.

###### Proposition 1.

Assume (i) the log-likelihood function $\ell$ is continuous, coercive, i.e., the set $\\{\mbox{\boldmath$\theta$}:\ell(\mbox{\boldmath$\theta$})\geq\ell(\mbox{\boldmath$\theta$}^{(0)})\\}$ is compact, and bounded above; (ii) the objective function in each block update of Algorithm 1 is strictly concave; and (iii) the stationary points (modulo nonsingular transformation indeterminacy) of $\ell(\mbox{\boldmath$\gamma$},{\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})$ are isolated. We have the following results.

1. (Global Convergence) The sequence $\mbox{\boldmath$\theta$}^{(t)}=(\mbox{\boldmath$\gamma$}^{(t)},{\bm{G}}^{(t)},{\bm{B}}_{1}^{(t)},\ldots,{\bm{B}}_{D}^{(t)})$ generated by Algorithm 1 converges to a stationary point of $\ell(\mbox{\boldmath$\gamma$},{\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})$.
2. (Local Convergence) Let $\mbox{\boldmath$\theta$}^{(\infty)}=(\mbox{\boldmath$\gamma$}^{(\infty)},{\bm{G}}^{(\infty)},{\bm{B}}_{1}^{(\infty)},\ldots,{\bm{B}}_{D}^{(\infty)})$ be a strict local maximum of $\ell$. The iterates generated by Algorithm 1 are locally attracted to $\mbox{\boldmath$\theta$}^{(\infty)}$ for $\mbox{\boldmath$\theta$}^{(0)}$ sufficiently close to $\mbox{\boldmath$\theta$}^{(\infty)}$.

## 4 Statistical Theory

In this section we study the usual large-$n$ asymptotics of the proposed Tucker tensor regression. Regularization is treated in the next section for the small or moderate $n$ cases. For simplicity, we drop the classical covariate ${\bm{Z}}$ in this section, but all the results can be straightforwardly extended to include ${\bm{Z}}$. We also remark that, although the usually limited sample size of neuroimaging studies makes the large-$n$ asymptotics seem irrelevant, we still believe such an asymptotic investigation is important, for several reasons. First, when the sample size $n$ is considerably larger than the effective number of parameters $p_{\text{T}}$, the asymptotic study tells us that the model is consistently estimating the best Tucker-structure approximation to the full array model in the sense of Kullback–Leibler distance. Second, the explicit formulas for the score and information are not only useful for asymptotic theory but also for computation, while the identifiability issue has to be properly dealt with for the given model.
Finally, the regular asymptotics can be of practical relevance; for instance, they can be useful in a likelihood-ratio-type test in a replication study.

### 4.1 Score and Information

We first derive the score and information for the tensor regression model, which are essential for statistical estimation and inference. The following standard calculus notations are used. For a scalar function $f$, $\nabla f$ is the (column) gradient vector, $df=[\nabla f]^{\mbox{\tiny{\sf T}}}$ is the differential, and $d^{2}f$ is the Hessian matrix. For a multivariate function $g:\mathrm{I\\!R}\mathit{{}^{p}}\mapsto\mathrm{I\\!R}\mathit{{}^{q}}$, $Dg\in\mathrm{I\\!R}\mathit{{}^{p\times q}}$ denotes the Jacobian matrix holding the partial derivatives $\frac{\partial g_{j}}{\partial x_{i}}$. We start from the Jacobian and Hessian of the systematic part $\eta\equiv g(\mu)$ in (4).

###### Lemma 2.

1. The gradient $\nabla\eta({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})\in\mathrm{I\\!R}\mathit{{}^{\prod_{d}R_{d}+\sum_{d=1}^{D}p_{d}R_{d}}}$ is

$\displaystyle\nabla\eta({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})=[{\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{1}\,\,{\bm{J}}_{1}\,\,{\bm{J}}_{2}\,\,\cdots\,\,{\bm{J}}_{D}]^{\mbox{\tiny{\sf T}}}(\mathrm{vec}{\bm{X}}),$

where ${\bm{J}}_{d}\in\mathrm{I\\!R}\mathit{{}^{\prod_{d=1}^{D}p_{d}\times p_{d}R_{d}}}$ is the Jacobian

$\displaystyle{\bm{J}}_{d}=D{\bm{B}}({\bm{B}}_{d})=\mbox{\boldmath$\Pi$}_{d}\\{[({\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{d+1}\otimes{\bm{B}}_{d-1}\otimes\cdots\otimes{\bm{B}}_{1}){\bm{G}}_{(d)}^{\mbox{\tiny{\sf T}}}]\otimes{\bm{I}}_{p_{d}}\\}$ (5)

and $\mbox{\boldmath$\Pi$}_{d}$ is the $(\prod_{d=1}^{D}p_{d})$-by-$(\prod_{d=1}^{D}p_{d})$ permutation matrix that reorders $\mathrm{vec}{\bm{B}}_{(d)}$ to obtain $\mathrm{vec}{\bm{B}}$, i.e., $\mathrm{vec}{\bm{B}}=\mbox{\boldmath$\Pi$}_{d}\,\mathrm{vec}{\bm{B}}_{(d)}$.

2. Let the Hessian $d^{2}\eta({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})\in\mathrm{I\\!R}\mathit{{}^{(\prod_{d}R_{d}+\sum_{d}p_{d}R_{d})\times(\prod_{d}R_{d}+\sum_{d}p_{d}R_{d})}}$ be partitioned into four blocks ${\bm{H}}_{{\bm{G}},{\bm{G}}}\in\mathrm{I\\!R}\mathit{{}^{\prod_{d}R_{d}\times\prod_{d}R_{d}}}$, ${\bm{H}}_{{\bm{G}},{\bm{B}}}={\bm{H}}_{{\bm{B}},{\bm{G}}}^{\mbox{\tiny{\sf T}}}\in\mathrm{I\\!R}\mathit{{}^{\prod_{d}R_{d}\times\sum_{d}p_{d}R_{d}}}$ and ${\bm{H}}_{{\bm{B}},{\bm{B}}}\in\mathrm{I\\!R}\mathit{{}^{\sum_{d}p_{d}R_{d}\times\sum_{d}p_{d}R_{d}}}$.
Then ${\bm{H}}_{{\bm{G}},{\bm{G}}}={\bf 0}$, ${\bm{H}}_{{\bm{G}},{\bm{B}}}$ has entries

$\displaystyle h_{(r_{1},\ldots,r_{D}),(i_{d},s_{d})}=1_{\\{r_{d}=s_{d}\\}}\sum_{j_{d}=i_{d}}x_{j_{1},\ldots,j_{D}}\prod_{d^{\prime}\neq d}\beta_{j_{d^{\prime}}}^{(r_{d^{\prime}})},$

and ${\bm{H}}_{{\bm{B}},{\bm{B}}}$ has entries

$\displaystyle h_{(i_{d},r_{d}),(i_{d^{\prime}},r_{d^{\prime}})}=1_{\\{d\neq d^{\prime}\\}}\sum_{j_{d}=i_{d},j_{d^{\prime}}=i_{d^{\prime}}}x_{j_{1},\ldots,j_{D}}\sum_{s_{d}=r_{d},s_{d^{\prime}}=r_{d^{\prime}}}g_{s_{1},\ldots,s_{D}}\prod_{d^{\prime\prime}\neq d,d^{\prime}}\beta_{j_{d^{\prime\prime}}}^{(s_{d^{\prime\prime}})}.$

Furthermore, ${\bm{H}}_{{\bm{B}},{\bm{B}}}$ can be partitioned into $D^{2}$ sub-blocks as

$\displaystyle\left(\begin{array}{cccc}{\bf 0}&*&*&*\\ {\bm{H}}_{21}&{\bf 0}&*&*\\ \vdots&\vdots&\ddots&*\\ {\bm{H}}_{D1}&{\bm{H}}_{D2}&\cdots&{\bf 0}\end{array}\right).$

The elements of the sub-block ${\bm{H}}_{dd^{\prime}}\in\mathrm{I\\!R}\mathit{{}^{p_{d}R_{d}\times p_{d^{\prime}}R_{d^{\prime}}}}$ can be retrieved from the matrix ${\bm{X}}_{(dd^{\prime})}({\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{d+1}\otimes{\bm{B}}_{d-1}\otimes\cdots\otimes{\bm{B}}_{d^{\prime}+1}\otimes{\bm{B}}_{d^{\prime}-1}\otimes\cdots\otimes{\bm{B}}_{1}){\bm{G}}_{(dd^{\prime})}^{\mbox{\tiny{\sf T}}}$. ${\bm{H}}_{{\bm{G}},{\bm{B}}}$ can be partitioned into $D$ sub-blocks as $({\bm{H}}_{1},\ldots,{\bm{H}}_{D})$. The sub-block ${\bm{H}}_{d}\in\mathrm{I\\!R}\mathit{{}^{\prod_{d}R_{d}\times p_{d}R_{d}}}$ has at most $p_{d}\prod_{d}R_{d}$ nonzero entries, which can be retrieved from the matrix

$\displaystyle{\bm{X}}_{(d)}({\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{d+1}\otimes{\bm{B}}_{d-1}\otimes\cdots\otimes{\bm{B}}_{1}).$

Let $\ell({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D}|y,{\bm{x}})=\ln p(y|{\bm{x}},{\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})$ be the log-density of the GLM. The next result derives the score function, Hessian, and Fisher information of the Tucker tensor regression model.

###### Proposition 2.

Consider the tensor regression model defined by (2.2) and (4).

1. The score function (or score vector) is

$\displaystyle\nabla\ell({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})=\frac{(y-\mu)\mu^{\prime}(\eta)}{\sigma^{2}}\nabla\eta({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})$ (7)

with $\nabla\eta({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})$ given in Lemma 2.

2. The Hessian of the log-density $\ell$ is

$\displaystyle H({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})=-\left[\frac{[\mu^{\prime}(\eta)]^{2}}{\sigma^{2}}-\frac{(y-\mu)\theta^{\prime\prime}(\eta)}{\sigma^{2}}\right]\nabla\eta({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})\,d\eta({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})+\frac{(y-\mu)\theta^{\prime}(\eta)}{\sigma^{2}}\,d^{2}\eta({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D}),$ (8)

with $d^{2}\eta$ defined in Lemma 2.

3.
The Fisher information matrix is

$\displaystyle{\bm{I}}({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})=E[-H({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})]=E[\nabla\ell({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})\,d\ell({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})]=\frac{[\mu^{\prime}(\eta)]^{2}}{\sigma^{2}}[{\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{1}\,\,{\bm{J}}_{1}\ldots{\bm{J}}_{D}]^{\mbox{\tiny{\sf T}}}(\mathrm{vec}{\bm{X}})(\mathrm{vec}{\bm{X}})^{\mbox{\tiny{\sf T}}}[{\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{1}\,\,{\bm{J}}_{1}\ldots{\bm{J}}_{D}].$ (9)

Remark 2.1: For the canonical link, $\theta=\eta$, $\theta^{\prime}(\eta)=1$, $\theta^{\prime\prime}(\eta)=0$, and the second term of the Hessian vanishes. For the classical GLM with a linear systematic part ($D=1$), $d^{2}\eta({\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})$ is zero and thus the third term of the Hessian vanishes. For the classical GLM ($D=1$) with a canonical link, both the second and third terms of the Hessian vanish, and thus the Hessian is non-stochastic, coinciding with the information matrix.

### 4.2 Identifiability

The Tucker decomposition (3) is unidentifiable due to the nonsingular transformation indeterminacy. That is,

$\displaystyle\llbracket{\bm{G}};{\bm{B}}_{1},\ldots,{\bm{B}}_{D}\rrbracket=\llbracket{\bm{G}}\times_{1}{\bm{O}}_{1}^{-1}\times\cdots\times_{D}{\bm{O}}_{D}^{-1};{\bm{B}}_{1}{\bm{O}}_{1},\ldots,{\bm{B}}_{D}{\bm{O}}_{D}\rrbracket$

for any nonsingular matrices ${\bm{O}}_{d}\in\mathrm{I\\!R}\mathit{{}^{R_{d}\times R_{d}}}$. This implies that the number of free parameters of a Tucker model is $\sum_{d}p_{d}R_{d}+\prod_{d}R_{d}-\sum_{d}R_{d}^{2}$, with the last term adjusting for the nonsingular indeterminacy. Therefore the Tucker model is identifiable only in terms of equivalence classes. For asymptotic consistency and normality, it is necessary to adopt a specific constrained parameterization. It is common to impose the orthonormality constraint on the factor matrices, ${\bm{B}}_{d}^{\mbox{\tiny{\sf T}}}{\bm{B}}_{d}={\bm{I}}_{R_{d}}$, $d=1,\ldots,D$. However, the resulting parameter space is a manifold and much harder to deal with. We adopt an alternative parameterization that fixes the entries of the first $R_{d}$ rows of ${\bm{B}}_{d}$ to be ones,

$\displaystyle{\cal{\bm{B}}}=\\{\llbracket{\bm{G}};{\bm{B}}_{1},\ldots,{\bm{B}}_{D}\rrbracket:\beta_{i_{d}}^{(r)}=1,i_{d}=1,\ldots,R_{d},d=1,\ldots,D\\}.$

The formulas for the score, Hessian, and information in Proposition 2 change accordingly: the entries in the first $R_{d}$ rows of ${\bm{B}}_{d}$ are fixed at one, and the corresponding entries, rows, and columns of the score, Hessian, and information need to be deleted. The choice of the restricted space $\mathcal{{\bm{B}}}$ is obviously arbitrary, and it excludes arrays with any entries in the first rows of ${\bm{B}}_{d}$ equal to zero. However, the set of such exceptional arrays has Lebesgue measure zero. In specific applications, subject knowledge may suggest alternative restrictions on the parameters. Given a finite sample size, conditions for global identifiability of the parameters are in general hard to obtain except in the linear case ($D=1$).
Local identifiability essentially requires linear independence between the “collapsed” vectors $[{\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{1}\,\,{\bm{J}}_{1}\ldots{\bm{J}}_{D}]^{\mbox{\tiny{\sf T}}}\mathrm{vec}{\bm{x}}_{i}\in\mathrm{I\\!R}\mathit{{}^{\sum_{d}p_{d}R_{d}+\prod_{d}R_{d}-\sum_{d}R_{d}^{2}}}$.

###### Proposition 3 (Identifiability).

Given iid data points $\\{(y_{i},{\bm{x}}_{i}),i=1,\ldots,n\\}$ from the Tucker tensor regression model, let ${\bm{B}}_{0}\in\mathcal{{\bm{B}}}$ be a parameter point and assume there exists an open neighborhood of ${\bm{B}}_{0}$ in which the information matrix has a constant rank. Then ${\bm{B}}_{0}$ is locally identifiable if and only if

$\displaystyle I({\bm{B}}_{0})=[{\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{1}\,\,{\bm{J}}_{1}\ldots{\bm{J}}_{D}]^{\mbox{\tiny{\sf T}}}\left[\sum_{i=1}^{n}\frac{\mu^{\prime}(\eta_{i})^{2}}{\sigma_{i}^{2}}(\mathrm{vec}\,{\bm{x}}_{i})(\mathrm{vec}\,{\bm{x}}_{i})^{\mbox{\tiny{\sf T}}}\right][{\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{1}\,\,{\bm{J}}_{1}\ldots{\bm{J}}_{D}]$

is nonsingular.

### 4.3 Asymptotics

The asymptotics for tensor regression follow from those for the MLE or M-estimation. The key observation is that the nonlinear part of the tensor model (4) is a degree-$D$ polynomial in the parameters, and the collection of polynomials $\\{\langle{\bm{B}},{\bm{X}}\rangle,{\bm{B}}\in\mathcal{{\bm{B}}}\\}$ forms a Vapnik–Červonenkis (VC) class. The classical uniform convergence theory then applies (van der Vaart, 1998). For asymptotic normality, we need to establish that the log-likelihood function of the tensor regression model is quadratic mean differentiable (Lehmann and Romano, 2005). A sketch of the proof is given in the Appendix.

###### Theorem 1.

Assume ${\bm{B}}_{0}\in\mathcal{{\bm{B}}}$ is (globally) identifiable up to permutation and the array covariates ${\bm{X}}_{i}$ are iid from a bounded underlying distribution.

1. (Consistency) The MLE is consistent, i.e., $\hat{\bm{B}}_{n}$ converges to ${\bm{B}}_{0}$ in probability, in the following models: (1) normal tensor regression with a compact parameter space $\mathcal{{\bm{B}}}_{0}\subset\mathcal{{\bm{B}}}$; (2) binary tensor regression; (3) Poisson tensor regression with a compact parameter space $\mathcal{{\bm{B}}}_{0}\subset\mathcal{{\bm{B}}}$.
2. (Asymptotic Normality) For an interior point ${\bm{B}}_{0}\in\mathcal{{\bm{B}}}$ with nonsingular information matrix ${\bm{I}}({\bm{B}}_{0})$ in (9), if $\hat{\bm{B}}_{n}$ is consistent, then $\sqrt{n}(\mathrm{vec}\hat{\bm{B}}_{n}-\mathrm{vec}{\bm{B}}_{0})$ converges in distribution to a normal with mean zero and covariance matrix ${\bm{I}}^{-1}({\bm{B}}_{0})$.

In practice it is rare that the true regression coefficient ${\bm{B}}_{\text{true}}\in\mathrm{I\\!R}\mathit{{}^{p_{1}\times\cdots\times p_{D}}}$ is exactly a low-rank tensor. However, the MLE of the rank-$R$ tensor model converges to the maximizer of the function $M({\bm{B}})=\mathbb{P}_{{\bm{B}}_{\text{true}}}\ln p_{{\bm{B}}}$, or equivalently $\mathbb{P}_{{\bm{B}}_{\text{true}}}\ln(p_{{\bm{B}}}/p_{{\bm{B}}_{\text{true}}})$. In other words, the MLE consistently estimates the best approximation (among models in ${\cal{\bm{B}}}$) of ${\bm{B}}_{\text{true}}$ in the sense of Kullback–Leibler distance.

## 5 Regularized Estimation

Regularization plays a crucial role in neuroimaging analysis for several reasons. First, even after substantial dimension reduction by imposing a Tucker structure, the number of parameters $p_{\text{T}}$ can still exceed the number of observations $n$.
Second, even when $n>p_{\text{T}}$, regularization can be useful for stabilizing the estimates and improving their risk properties. Finally, regularization is an effective way to incorporate prior scientific knowledge about brain structures. For instance, it may sometimes be reasonable to impose symmetry on the parameters along the coronal plane for MRI images.

In our context of regularized Tucker regression, there are two possible types of regularization: one on the core tensor ${\bm{G}}$ _only_, and the other on both ${\bm{G}}$ and the ${\bm{B}}_{d}$ _simultaneously_. Which regularization to use depends on the practical purpose of a scientific study. In this section, we illustrate regularization on the core tensor, which simultaneously achieves sparsity in the number of outer products in the Tucker decomposition (3) and shrinkage. Toward that purpose, we propose to maximize the regularized log-likelihood

$\displaystyle\ell(\mbox{\boldmath$\gamma$},{\bm{G}},{\bm{B}}_{1},\ldots,{\bm{B}}_{D})-\sum_{r_{1},\ldots,r_{D}}P_{\eta}(|g_{r_{1},\ldots,r_{D}}|,\lambda),$

where $P_{\eta}(|x|,\lambda)$ is a scalar penalty function, $\lambda$ is the penalty tuning parameter, and $\eta$ is an index for the penalty family. Note that the penalty term above only involves elements of the core tensor, and thus regularizes ${\bm{G}}$ only. This formulation includes a large class of penalty functions: the power family (Frank and Friedman, 1993), where $P_{\eta}(|x|,\lambda)=\lambda|x|^{\eta}$, $\eta\in(0,2]$, and in particular the lasso (Tibshirani, 1996) ($\eta=1$) and ridge ($\eta=2$); the elastic net (Zou and Hastie, 2005), where $P_{\eta}(|x|,\lambda)=\lambda[(\eta-1)x^{2}/2+(2-\eta)|x|]$, $\eta\in[1,2]$; SCAD (Fan and Li, 2001), where $\partial P_{\eta}(|x|,\lambda)/\partial|x|=\lambda\left\{1_{\{|x|\leq\lambda\}}+\frac{(\eta\lambda-|x|)_{+}}{(\eta-1)\lambda}1_{\{|x|>\lambda\}}\right\}$, $\eta>2$; and the MC+ penalty (Zhang, 2010), where $P_{\eta}(|x|,\lambda)=\left\{\lambda|x|-\frac{x^{2}}{2\eta}\right\}1_{\{|x|<\eta\lambda\}}+0.5\lambda^{2}\eta\,1_{\{|x|\geq\eta\lambda\}}$, among many others.

Two aspects of the proposed regularized Tucker regression, parameter estimation and tuning, deserve some discussion. Regularized estimation incurs only slight changes in Algorithm 1: when updating ${\bm{G}}$, we simply fit a penalized GLM regression problem,

$\displaystyle{\bm{G}}^{(t+1)}=\mbox{argmax}_{{\bm{G}}}\,\ell(\mbox{\boldmath$\gamma$}^{(t)},{\bm{B}}_{1}^{(t+1)},\ldots,{\bm{B}}_{D}^{(t+1)},{\bm{G}})-\sum_{r_{1},\ldots,r_{D}}P_{\eta}(|g_{r_{1},\ldots,r_{D}}|,\lambda),$

for which many software packages exist. Our implementation utilizes an efficient Matlab toolbox for sparse regression (Zhou et al., 2011); a minimal sketch of this core update is given below. The other steps of Algorithm 1 remain unchanged. For the regularization to remain legitimate, we constrain the column norms of the ${\bm{B}}_{d}$ to be one when updating the factor matrices. For parameter tuning, one can either use the general cross-validation approach or employ the Bayesian information criterion to tune the penalty parameter $\lambda$.

## 6 Numerical Study

We have carried out extensive numerical experiments to study the finite-sample performance of the Tucker regression.
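Before turning to the simulation results, the penalized core update promised above can be made concrete. With the factor matrices held fixed, $\langle\llbracket{\bm{G}};{\bm{B}}_{1},\ldots,{\bm{B}}_{D}\rrbracket,{\bm{x}}_{i}\rangle=(\mathrm{vec}\,{\bm{G}})^{\mbox{\tiny{\sf T}}}[{\bm{B}}_{D}\otimes\cdots\otimes{\bm{B}}_{1}]^{\mbox{\tiny{\sf T}}}\mathrm{vec}\,{\bm{x}}_{i}$, so the ${\bm{G}}$-update is an ordinary penalized regression. The sketch below is ours, assuming a Gaussian model with identity link; scikit-learn's lasso stands in for the Matlab sparse-regression toolbox used in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def core_design(Bs, Xs):
    """Row i is [B_D kron ... kron B_1]^T vec(x_i) (column-major vec), so
    with the factor matrices fixed, the linear predictor contribution is
    vec(G)^T row_i and updating G is a penalized linear regression."""
    K = Bs[0]
    for Bd in Bs[1:]:
        K = np.kron(Bd, K)                      # builds B_D kron ... kron B_1
    return np.stack([K.T @ X.ravel(order="F") for X in Xs])

def update_core(Bs, Xs, resid, lam):
    """One lasso-penalized G-update for a Gaussian model; 'resid' holds
    y_i minus the gamma' z_i part of the linear predictor."""
    U = core_design(Bs, Xs)
    fit = Lasso(alpha=lam, fit_intercept=False).fit(U, resid)
    shape = tuple(Bd.shape[1] for Bd in Bs)
    return fit.coef_.reshape(shape, order="F")  # back to core-tensor layout
```

For a non-Gaussian link, one would substitute a penalized GLM solver for the lasso call; the factor-matrix updates, with columns kept at unit norm (e.g., by rescaling), proceed as in Algorithm 1.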
Our simulations focus on three aspects: first, we demonstrate the capacity of the Tucker regression in identifying various shapes of signals; second, we study the consistency property of the method by gradually increasing the sample size; third, we compare the performance of the Tucker regression with the CP regression of Zhou et al. (2013). We also examine a real MRI imaging data set to illustrate the Tucker downsizing and to further compare the two tensor models.

### 6.1 Identification of Various Shapes of Signals

In our first example, we demonstrate that the proposed Tucker regression model, though entailing a substantial reduction in dimension, manages to identify a range of two-dimensional signal shapes with varying ranks. In Figure 2, we list the 2D signals ${\bm{B}}\in\mathbb{R}^{64\times 64}$ in the first row, along with the estimates by Tucker tensor models in the second to fourth rows, with orders $(1,1)$, $(2,2)$ and $(3,3)$, respectively. Note that, since the orders along both dimensions are made equal, the Tucker model performs essentially the same as a CP model in this example, and the results are presented here for completeness. We will examine differences between the two models in later examples. The regular covariate vector ${\bm{Z}}\in\mathbb{R}^{5}$ and the image covariate ${\bm{X}}\in\mathbb{R}^{64\times 64}$ are randomly generated with all elements being independent standard normals. The response $Y$ is generated from a normal model with mean $\mu=\mbox{\boldmath$\gamma$}^{\mbox{\tiny{\sf T}}}{\bm{Z}}+\langle{\bm{B}},{\bm{X}}\rangle$ and variance $\textrm{var}(\mu)/10$. The vector coefficient is $\mbox{\boldmath$\gamma$}={\bf 1}_{5}$, and the coefficient array ${\bm{B}}$ is binary, with the signal region equal to one and the rest zero. Note that this problem differs from the usual edge detection or object recognition in image processing (Qiu, 2005, 2007). In our setup, all elements of the image ${\bm{X}}$ follow the same distribution; the signal region is defined through the coefficient matrix ${\bm{B}}$ and needs to be inferred from the relation between $Y$ and ${\bm{X}}$ after adjusting for ${\bm{Z}}$. It is clearly seen in Figure 2 that the Tucker model yields a sound recovery of the true signals, even for those of high rank or natural shape, e.g., “disk” and “butterfly”. We also illustrate in the plot the BIC criterion of Section 2.4.

Figure 2: True and recovered image signals by Tucker regression. The matrix variate has size 64 by 64 with entries generated as independent standard normals. The regression coefficient for each entry is either 0 (white) or 1 (black). The sample size is 1000. TR$(r)$ means the estimate from the Tucker regression with an $r$-by-$r$ core tensor.

### 6.2 Performance with Increasing Sample Size

In our second example, we continue to employ a model similar to that of Figure 2, but with a three-dimensional image covariate. The dimension of ${\bm{X}}$ is set as $p_{1}\times p_{2}\times p_{3}$, with $p_{1}=p_{2}=p_{3}=16$ and $32$, respectively. The signal array ${\bm{B}}$ is generated from a Tucker structure, with the elements of the core tensor ${\bm{G}}$ and the factor matrices ${\bm{B}}_{d}$ all coming from independent standard normals. The dimension of the core tensor ${\bm{G}}$ is set as $R_{1}\times R_{2}\times R_{3}$, with $R_{1}=R_{2}=R_{3}=2,5$, and $8$, respectively. We gradually increase the sample size, starting with an $n$ that is in the hundreds and no smaller than the degrees of freedom of the generating model; a minimal sketch of this generating scheme follows.
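The sketch below (our own illustration; the seed and sample size are arbitrary) generates one replication of this scheme for the $p_{1}=p_{2}=p_{3}=16$, $R_{1}=R_{2}=R_{3}=5$ configuration, following the normal model of Section 6.1 with mean $\mbox{\boldmath$\gamma$}^{\mbox{\tiny{\sf T}}}{\bm{Z}}+\langle{\bm{B}},{\bm{X}}\rangle$ and variance $\textrm{var}(\mu)/10$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, R, q = 1000, 16, 5, 5

# Tucker-structured 3D signal: core G and factors B_d with iid N(0,1) entries.
G = rng.standard_normal((R, R, R))
B1, B2, B3 = (rng.standard_normal((p, R)) for _ in range(3))
B = np.einsum("abc,ia,jb,kc->ijk", G, B1, B2, B3)   # [[G; B_1, B_2, B_3]]

gamma = np.ones(q)
Z = rng.standard_normal((n, q))                      # regular covariates
X = rng.standard_normal((n, p, p, p))                # 3D image covariates
mu = Z @ gamma + np.einsum("nijk,ijk->n", X, B)      # gamma' Z_i + <B, X_i>
y = mu + rng.standard_normal(n) * np.sqrt(mu.var() / 10)
```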
We aim to achieve two purposes with this example: first, we verify the consistency property of the proposed estimator, and second, we gain some practical knowledge about the estimation accuracy at different values of the sample size. Figure 3 summarizes the results. It is clearly seen that the estimation improves with increasing sample size. Meanwhile, we observe that, unless the core tensor dimension is small, one requires a relatively large sample size to achieve good estimation accuracy. This is not surprising, though, considering the number of parameters of the model and the fact that regularization is not employed here. The proposed tensor regression approach has been primarily designed for imaging studies with a reasonably large number of subjects. Recently, a number of such large-scale brain imaging studies have been emerging. For instance, the Attention Deficit Hyperactivity Disorder Sample Initiative (ADHD, 2013) consists of over 900 participants from eight imaging centers with both MRI and fMRI images, as well as their clinical information. Another example is the Alzheimer’s Disease Neuroimaging Initiative (ADNI, 2013) database, which accumulates over 3,000 participants with MRI, fMRI and genomics data. In addition, the regularization discussed in Section 5 and the Tucker downsizing in Section 2.3 can both help improve estimation given a limited sample size.

[Figure 3 comprises two columns of panels: $p_{1}=p_{2}=p_{3}=16$ (left) and $p_{1}=p_{2}=p_{3}=32$ (right).]

Figure 3: Root mean squared error (RMSE) of the tensor parameter estimate versus the sample size. Reported are the average and standard deviation of RMSE based on 100 data replications. Top: $R_{1}=R_{2}=R_{3}=2$; Middle: $R_{1}=R_{2}=R_{3}=5$; Bottom: $R_{1}=R_{2}=R_{3}=8$.

### 6.3 Comparison of the Tucker and CP Models

In our third example, we focus on the comparison between the Tucker tensor model and the CP tensor model of Zhou et al. (2013). We generate a normal response and a 3D signal array ${\bm{B}}$ with dimensions $p_{1},p_{2},p_{3}$ and $d$-ranks $r_{1},r_{2},r_{3}$. Here, the $d$-rank is defined as the column rank of the mode-$d$ matricization ${\bm{B}}_{(d)}$ of ${\bm{B}}$. We set $p_{1}=p_{2}=p_{3}=16$ and $32$, and $(r_{1},r_{2},r_{3})=(5,3,3),(8,4,4)$ and $(10,5,5)$, respectively. The sample size is 2000. We fit a Tucker model with $R_{d}=r_{d}$, $d=1,2,3$, and a CP model with $R=\max_{d}r_{d}$. We report in Table 2 the degrees of freedom of the two models under the different setups, as well as the root mean squared error (RMSE) over 100 data replications. It is seen that the Tucker model requires a smaller number of free parameters, while achieving more accurate estimation than the CP model. Such advantages come from the flexibility of the Tucker decomposition, which permits different orders $R_{d}$ along different directions.

Table 2: Comparison of the Tucker and CP models. Reported are the average and standard deviation (in parentheses) of the root mean squared error, all based on 100 data replications.
Dimension | Criterion | Model | $(5,3,3)$ | $(8,4,4)$ | $(10,5,5)$
---|---|---|---|---|---
$16\times 16\times 16$ | Df | Tucker | 178 | 288 | 420
 | | CP | 230 | 368 | 460
 | RMSE | Tucker | 0.202 (0.013) | 0.379 (0.017) | 0.728 (0.030)
 | | CP | 0.287 (0.033) | 1.030 (0.081) | 2.858 (0.133)
$32\times 32\times 32$ | Df | Tucker | 354 | 544 | 740
 | | CP | 470 | 752 | 940
 | RMSE | Tucker | 0.288 (0.013) | 0.570 (0.023) | 1.236 (0.045)
 | | CP | 0.392 (0.046) | 1.927 (0.172) | 16.238 (3.867)

### 6.4 Attention Deficit Hyperactivity Disorder Data Analysis

We analyze the attention deficit hyperactivity disorder (ADHD) data from the ADHD-200 Sample Initiative (ADHD, 2013) to illustrate our proposed method as well as the Tucker downsizing. ADHD is a common childhood disorder that can continue through adolescence and adulthood. Symptoms include difficulty in staying focused and paying attention, difficulty in controlling behavior, and over-activity. The data set that we analyzed is part of the ADHD-200 Global Competition data sets. It was pre-partitioned into a training data set of 770 subjects and a testing data set of 197 subjects. We removed those subjects with missing observations or poor image quality, resulting in 762 training subjects and 169 testing subjects. In the training set, there were 280 combined ADHD subjects and 482 normal controls, a case-control ratio of about 3:5. In the testing set, there were 76 combined ADHD subjects and 93 normal controls, a case-control ratio of about 4:5. T1-weighted images were acquired for each subject and were preprocessed by standard steps. The data we used were obtained from the Neuro Bureau after preprocessing (the Burner data, http://neurobureau.projects.nitrc.org/ADHD200/Data.html). In addition to the MRI image predictor, we also include the subjects’ age and handedness as regular covariates. The response is the binary diagnosis status.

The original image size was $p_{1}\times p_{2}\times p_{3}=121\times 145\times 121$. We employ the Tucker downsizing of Section 2.3. More specifically, we first choose a wavelet basis for ${\bm{B}}_{d}\in\mathbb{R}^{p_{d}\times\tilde{p}_{d}}$, then transform the image predictor from ${\bm{X}}$ to $\tilde{\bm{X}}=\llbracket{\bm{X}};{\bm{B}}_{1}^{\mbox{\tiny{\sf T}}},\ldots,{\bm{B}}_{D}^{\mbox{\tiny{\sf T}}}\rrbracket$. We pre-specify values of the $\tilde{p}_{d}$ that are about a tenth of the original dimensions $p_{d}$; equivalently, we fit a Tucker tensor regression with the image predictor dimension downsized to $\tilde{p}_{1}\times\tilde{p}_{2}\times\tilde{p}_{3}$. In our example, we have experimented with a set of values of the $\tilde{p}_{d}$, and the results are qualitatively similar. We report two sets: $\tilde{p}_{1}=12$, $\tilde{p}_{2}=14$, $\tilde{p}_{3}=12$, and $\tilde{p}_{1}=10$, $\tilde{p}_{2}=12$, $\tilde{p}_{3}=10$. We have also experimented with the Haar wavelet basis (Daubechies D2) and the Daubechies D4 wavelet basis, which again show similar qualitative patterns.

For $\tilde{p}_{1}=12,\tilde{p}_{2}=14,\tilde{p}_{3}=12$, we fit a Tucker tensor model with $R_{1}=R_{2}=R_{3}=3$, resulting in 114 free parameters, and a CP tensor model with $R=4$, resulting in 144 free parameters. For $\tilde{p}_{1}=10,\tilde{p}_{2}=12,\tilde{p}_{3}=10$, we fit a Tucker tensor model with $R_{1}=R_{2}=2$ and $R_{3}=3$, resulting in 71 free parameters, and a CP tensor model with $R=4$, resulting in 120 free parameters. We have chosen these orders based on the following considerations.
First, the numbers of free parameters of the Tucker and CP models are comparable. Second, at each step of the GLM model fit, we ensure that the ratio between the sample size $n$ and the number of parameters under estimation in that step, $\tilde{p}_{d}\times R_{d}$, satisfies a heuristic rule of being greater than two in normal models and greater than five in logistic models. In the Tucker model, we also ensure that the ratio between $n$ and the number of parameters in the core tensor estimation, $\prod_{d}R_{d}$, satisfies this rule. We note that this selection of Tucker orders is heuristic; however, it seems to be a useful guideline, especially when the data are noisy. We also fit a regularized Tucker model and a regularized CP model with the same orders, with the penalty parameter tuned by 5-fold cross validation on the training data.

We evaluate each model by comparing the misclassification error rate on the independent testing set. The results are shown in Table 3. We see from the table that the regularized Tucker model performs the best, which echoes the findings in our simulations above. We also remark that, considering that the case-control ratio is about 4:5 in the testing data, the misclassification rate of 0.32 to 0.36 achieved by the regularized Tucker model indicates fairly sound classification accuracy. On the other hand, we note that a key advantage of our proposed approach is its capability of suggesting a useful model, rather than the classification accuracy per se. This is different from black-box-type machine-learning-based imaging classifiers.

Table 3: ADHD testing data misclassification error.

Basis | Reduced dimension | Reg-Tucker | Reg-CP | Tucker | CP
---|---|---|---|---|---
Haar (D2) | $12\times 14\times 12$ | 0.361 | 0.367 | 0.379 | 0.438
 | $10\times 12\times 10$ | 0.343 | 0.390 | 0.379 | 0.408
Daubechies (D4) | $12\times 14\times 12$ | 0.337 | 0.385 | 0.385 | 0.414
 | $10\times 12\times 10$ | 0.320 | 0.396 | 0.367 | 0.373

It is also of interest to compare the run times of the two tensor model fittings. We record the run times of fitting the Tucker and CP models on the ADHD training data in Table 4. They are comparable.

Table 4: ADHD model fitting run time (in seconds).

Basis | Reduced dimension | Reg-Tucker | Reg-CP | Tucker | CP
---|---|---|---|---|---
Haar (D2) | $12\times 14\times 12$ | 3.68 | 4.39 | 31.25 | 22.43
 | $10\times 12\times 10$ | 1.36 | 2.79 | 9.08 | 25.10
Daubechies (D4) | $12\times 14\times 12$ | 3.30 | 2.18 | 16.87 | 26.34
 | $10\times 12\times 10$ | 1.92 | 1.90 | 9.96 | 17.10

## 7 Discussion

We have proposed a tensor regression model based on the Tucker decomposition. Including the CP tensor regression (Zhou et al., 2013) as a special case, the Tucker model provides a more flexible framework for regression with imaging covariates. We develop a fast estimation algorithm, a general regularization procedure, and the associated asymptotic properties. In addition, we provide a detailed comparison, both analytical and numerical, of the Tucker and CP tensor models. In real imaging analysis, the signal rarely has an exact low rank. On the other hand, given a limited sample size, a low-rank estimate often provides a reasonable approximation to the true signal. This is why low-rank models such as Tucker and CP can offer a sound recovery of even a complex signal.
The tensor regression framework established in this article is general enough to encompass a large number of potential extensions, including but not limited to imaging multi-modality analysis, imaging classification, and longitudinal imaging analysis. These extensions constitute our future research.

## References

* ADHD, (2013) ADHD (2013). The ADHD-200 sample. http://fcon_1000.projects.nitrc.org/indi/adhd200/. [Online; accessed 03-2013].
* ADNI, (2013) ADNI (2013). Alzheimer’s disease neuroimaging initiative. http://adni.loni.ucla.edu. [Online; accessed 03-2013].
* Allen et al., (2011) Allen, G., Grosenick, L., and Taylor, J. (2011). A generalized least squares matrix decomposition. Rice University Technical Report No. TR2011-03, arXiv:1102.3074.
* Aston and Kirch, (2012) Aston, J. A. and Kirch, C. (2012). Estimation of the distribution of change-points with application to fMRI data. Annals of Applied Statistics, 6:1906–1948.
* Blankertz et al., (2001) Blankertz, B., Curio, G., and Müller, K.-R. (2001). Classifying single trial EEG: Towards brain computer interfacing. In NIPS, pages 157–164.
* Caffo et al., (2010) Caffo, B., Crainiceanu, C., Verduzco, G., Joel, S., Mostofsky, S. H., Bassett, S., and Pekar, J. (2010). Two-stage decompositions for the analysis of functional connectivity for fMRI with application to Alzheimer’s disease risk. Neuroimage, 51(3):1140–1149.
* Chen et al., (2001) Chen, S. S., Donoho, D. L., and Saunders, M. A. (2001). Atomic decomposition by basis pursuit. SIAM Rev., 43(1):129–159.
* Crainiceanu et al., (2011) Crainiceanu, C. M., Caffo, B. S., Luo, S., Zipunnikov, V. M., and Punjabi, N. M. (2011). Population value decomposition, a framework for the analysis of image populations. J. Amer. Statist. Assoc., 106(495):775–790.
* de Leeuw, (1994) de Leeuw, J. (1994). Block-relaxation algorithms in statistics. In Information Systems and Data Analysis, pages 308–325. Springer, Berlin.
* Fan and Li, (2001) Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc., 96(456):1348–1360.
* Frank and Friedman, (1993) Frank, I. E. and Friedman, J. H. (1993). A statistical view of some chemometrics regression tools. Technometrics, 35(2):109–135.
* Haxby et al., (2001) Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., and Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539):2425–2430.
* Hoff, (2011) Hoff, P. (2011). Hierarchical multilinear models for multiway data. Computational Statistics and Data Analysis, 55:530–543.
* Kolda and Bader, (2009) Kolda, T. G. and Bader, B. W. (2009). Tensor decompositions and applications. SIAM Rev., 51(3):455–500.
* Kontos et al., (2003) Kontos, D., Megalooikonomou, V., Ghubade, N., and Faloutsos, C. (2003). Detecting discriminative functional MRI activation patterns using space filling curves. In Proc. of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 963–967. Springer-Verlag.
* LaConte et al., (2005) LaConte, S., Strother, S., Cherkassky, V., Anderson, J., and Hu, X. (2005). Support vector machines for temporal classification of block design fMRI data. Neuroimage, 26:317–329.
* Lange, (2010) Lange, K. (2010). Numerical Analysis for Statisticians. Statistics and Computing. Springer, New York, second edition.
* Lehmann and Romano, (2005) Lehmann, E. L. and Romano, J. P. (2005). Testing Statistical Hypotheses. Springer Texts in Statistics. Springer, New York, third edition.
* McCullagh and Nelder, (1983) McCullagh, P. and Nelder, J. A. (1983). Generalized Linear Models. Monographs on Statistics and Applied Probability. Chapman & Hall, London.
* McKeown et al., (1998) McKeown, M. J., Makeig, S., Brown, G. G., Jung, T.-P., Kindermann, S. S., Kindermann, R. S., Bell, A. J., and Sejnowski, T. J. (1998). Analysis of fMRI data by blind separation into independent spatial components. Human Brain Mapping, 6:160–188.
* Mitchell et al., (2004) Mitchell, T. M., Hutchinson, R., Niculescu, R. S., Pereira, F., Wang, X., Just, M., and Newman, S. (2004). Learning to decode cognitive states from brain images. Machine Learning, 57:145–175.
* Qiu, (2005) Qiu, P. (2005). Image Processing and Jump Regression Analysis. Wiley Series in Probability and Statistics. John Wiley.
* Qiu, (2007) Qiu, P. (2007). Jump surface estimation, edge detection, and image restoration. Journal of the American Statistical Association, 102:745–756.
* Ramsay and Silverman, (2005) Ramsay, J. O. and Silverman, B. W. (2005). Functional Data Analysis. Springer-Verlag, New York.
* Reiss and Ogden, (2010) Reiss, P. and Ogden, R. (2010). Functional generalized linear models with images as predictors. Biometrics, 66:61–69.
* Shinkareva et al., (2006) Shinkareva, S. V., Ombao, H. C., Sutton, B. P., Mohanty, A., and Miller, G. A. (2006). Classification of functional brain images with a spatio-temporal dissimilarity map. NeuroImage, 33(1):63–71.
* Tibshirani, (1996) Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B, 58(1):267–288.
* van der Vaart, (1998) van der Vaart, A. W. (1998). Asymptotic Statistics, volume 3 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge.
* Zhang, (2010) Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. Ann. Statist., 38(2):894–942.
* Zhou et al., (2011) Zhou, H., Armagan, A., and Dunson, D. (2011). Path following and empirical Bayes model selection for sparse regressions. arXiv:1201.3528.
* Zhou et al., (2013) Zhou, H., Li, L., and Zhu, H. (2013). Tensor regression with applications in neuroimaging data analysis. Journal of the American Statistical Association, in press (arXiv:1203.3209).
* Zou and Hastie, (2005) Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B Stat. Methodol., 67(2):301–320.
# Fiat: Fusing Learning Paradigms with Instruction-Accelerated Tuning

Xinyi Wang, John Wieting, Jonathan H. Clark

Google DeepMind <EMAIL_ADDRESS>

###### Abstract

Learning paradigms for large language models (LLMs) currently tend to fall within either in-context learning (ICL) or full fine-tuning. Each of these comes with its own trade-offs based on available data, model size, compute cost, ease of use, and final quality, with neither solution performing well across the board. In this article, we first describe the ICL and fine-tuning paradigms in a way that highlights their natural connections. Based on these connections, we propose a new learning paradigm called Fiat (we derive the name Fiat from Fusing learning paradigms with Instruction-Accelerated Tuning; Fiat fuses not only the learning paradigms but the models themselves) that fuses the best of these paradigms together, enabling prompt-engineered instructions and chain-of-thought reasoning with the very largest models while also using similar methods to perform parameter updates on a modestly-sized LLM with parameter-efficient tuning. We evaluate Fiat’s effectiveness on a variety of naturally low-data multilingual tasks (no additional data is available for such languages and it is non-trivial to obtain more; we contrast this with artificially low-data scenarios where large data exists but is ignored) and observe that Fiat performs better than both ICL and fine-tuning at scales ranging from 100 to 10,000 training examples. We hope that Fiat provides a practical way of harnessing the full potential of LLMs without needing to make a hard choice between learning paradigms.

## 1 Introduction

Large language models (LLMs) show impressive generalization ability on new tasks and languages. Some of their most exciting capabilities, such as producing logical reasoning to solve a problem, are found to emerge only when the model size is over a certain threshold, often hundreds of billions of parameters (Wei et al., 2022b; a). The impressive capability of these models to produce high-quality responses without any task-specific tuning, along with the very high cost of further tuning such models, has led much recent work to focus on the paradigm of in-context learning (ICL): placing a few task-specific examples and instructions into the model’s input (Brown et al., 2020; Chowdhery et al., 2022; Google et al., 2023; OpenAI, 2023). Although prior work has seen that fine-tuning a model on task data can often lead to superior performance on the downstream task compared to ICL (Scao & Rush, 2021; Schick & Schütze, 2020a; b; Asai et al., 2023), there are significantly fewer recent efforts on fine-tuning models for tasks with limited data, perhaps because the time and compute costs associated with tuning a very large model drive practitioners toward smaller models, abandoning the ability to take advantage of emergent model capabilities.

ICL and model fine-tuning each come with their own trade-offs. ICL does not incur any training cost, and it allows one to utilize the most capable LLMs (Schick & Schütze, 2020b; OpenAI, 2023). However, while ICL can achieve competitive performance on many tasks with a handful of annotated exemplars, it often requires very large models to work well, and it cannot take advantage of additional training examples if they do not fit into the context window. For many tasks, this leads to ignoring a substantial number of potentially useful training examples.
Fine-tuning, on the other hand, is not constrained by the need to fit training examples into the model’s input, and it can be quite effective even with smaller language models. These trade-offs tend to lead practitioners to arbitrarily pick a paradigm or to run costly experiments on these disparate methods in order to choose the best approach. We instead take the view that these two learning paradigms are in fact complementary. To this end, we propose Fusing Learning Paradigms with Instruction-Accelerated Tuning (Fiat), which utilizes both ICL on very large models and parameter tuning on a moderately-sized LLM while fusing the common techniques associated with each paradigm. Fiat uses hand-engineered instruction prompts that elicit chain-of-thought reasoning from a very large model, while also using the generated reasoning and instruction prompts to tune a moderately-sized LLM with parameter-efficient tuning. Figure 1 shows the workflow of Fiat and how it compares to ICL and fine-tuning.

In the remainder of this article, we formally describe the connections between ICL and fine-tuning, along with the various techniques that have developed within each paradigm (§2); we propose Fiat, which fuses the best of these together and avoids many of the pitfalls of each individual paradigm (§2.3); and we present experiments demonstrating how Fiat improves over both learning paradigms in data scenarios ranging from 100 to 10,000 examples, along with ablations detailing where these gains come from (§3).

Figure 1: Overall flow of Fiat and how it compares to ICL and fine-tuning. The colored components are updated while building and learning a task-specific instance of Fiat, while other components are fixed. $\theta_{\beta}$ are the parameters of the larger LLM and $I_{\beta}$ are the instructions used to induce reasoning; $\theta_{\tau}$ are the parameters of a moderately-sized LLM to be tuned and $I_{\tau}$ are its instructions, which help the model predict the correct final answer.

## 2 Learning Paradigms for LLMs

In this section, we review two popular learning paradigms for LLMs (ICL in §2.1 and parameter tuning in §2.2) while considering their strengths and weaknesses, which directly lead to Fiat (§2.3).

### 2.1 In-Context Learning

#### Instructed ICL

keeps the parameters of the LLM fixed, but instead selects an instruction prompt (often through manual optimization) to improve the accuracy of the downstream task. Formally, a model prediction is made by sampling a very large pre-trained LLM parameterized by fixed $\theta$ and a textual instruction $I$ (typically, the sampling is a simple argmax with temperature 0, though this isn’t always the case, as in techniques such as majority voting):

$\displaystyle P(y|x;\theta,I)$ (1)

While the instructions $I$ are prefixed onto the model input $x$ in practice, we intentionally notate them as an argument of the model, which we argue better reflects how they are conceptualized; we will build on this later.

#### Chain-of-thought reasoning

pushes instructed ICL a step further by crafting $I$ to induce step-by-step reasoning in the output of the model that improves the model’s ability to arrive at a correct prediction (Wei et al., 2022b).
This allows auto-regressive inference to output observations about the input or to solve sub-problems of the overall task that future decoding steps can leverage when predicting the final answer; it may also elicit textual patterns that the model saw during pre-training that would otherwise be difficult to access in the model’s latent feature space (e.g., via fine-tuning).

#### Few-shot ICL

differs from instructed ICL in that its instructions $I$ are composed of a small number of exemplars selected from the training examples $\mathcal{D}$ and formatted as a textual input to the model via instructions.

#### Instruction-tuned Base Models

Instruction-tuned models such as FLAN and T0 (Sanh et al., 2021; Chung et al., 2022; Longpre et al., 2023) often provide significant improvements in ICL compared to using a base pre-trained model. This is because instruction tuning is essentially a second-stage pretraining on a set of multitask data whose distribution is closer to that of the downstream task.

The ICL paradigm achieves competitive results on various tasks with no or only a handful of annotated examples. While it does not incur any additional model tuning cost, ICL often has a high inference cost because it requires LLMs over a certain size to work well, especially when using techniques such as chain-of-thought. It also cannot take advantage of additional task data beyond what fits into the context window of the model.

### 2.2 Parameter Tuning

#### Full-Parameter Fine-tuning

Given pre-trained parameters $\theta$ of an LLM to tune (in practice, $|\theta|$ tends to be much smaller for fine-tuning than for ICL), standard fine-tuning simply optimizes all parameters of the model on task-specific supervised training data $\mathcal{D}$ according to:

$\displaystyle P(y|x;\theta)$ (2)

The optimization of $\theta$ is similar in purpose to the process of human prompt engineering of $I$ in ICL. Since model fine-tuning does not have to fit training data into the context window of the model, it is more effective when there are somewhat more training examples available. Fine-tuning also works well on smaller language models given enough training examples, leading to faster inference. However, fine-tuning incurs additional training cost and requires access to model parameters, while some of the most capable LLMs are available for inference-only API access. The model could also easily overfit to the training examples due to catastrophic forgetting (Goodfellow et al., 2013), especially for tasks with limited data.

#### Parameter-efficient Fine-Tuning (PEFT)

improves the tuning procedure by using a learned parameterization $\theta^{\text{PEFT}}$ where $|\theta^{\text{PEFT}}|\ll|\theta|$. Besides reducing the danger of overfitting, this technique also avoids forgetting features that may be useful for generalization beyond the training set. Similarly, ICL avoids catastrophic forgetting by only modifying the input to the model while keeping the parameters fixed.

 | ICL | Fine-tuning
---|---|---
Strengths | |
Works well with small model | No | Yes
Supports large training data | No | Yes
Supports chain-of-thought reasoning | Yes | No
Usage of instruction prompts | Yes | No
Challenges | |
No parameter updates | Yes | No
Avoids catastrophic forgetting | Yes | No

Table 1: Comparison of the ICL and fine-tuning learning paradigms, according to common usage patterns.
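To make the $|\theta^{\text{PEFT}}|\ll|\theta|$ idea concrete, here is a minimal LoRA-style adapter sketch in PyTorch (our own illustration, not the authors' implementation; LoRA (Hu et al., 2021) is the specific PEFT method Fiat adopts, as discussed in §2.3):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: y = W x + (alpha / r) * B(A(x)); only A and B
    are trained.  For a (d_out x d_in) weight, LoRA adds r*(d_in + d_out)
    parameters, so the trainable share is tiny for small rank r."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze pre-trained weights
            p.requires_grad_(False)
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.B.weight)      # adapter starts as an exact no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.B(self.A(x))

layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 65,536 adapter weights vs. ~16.8M frozen base weights
```

Only the low-rank matrices receive gradient updates; the frozen base weights preserve the pre-trained features, which is exactly the protection against catastrophic forgetting discussed above.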
### 2.3 Fusing learning paradigms with Fiat

In this section, we construct Fiat, motivating the purpose of each design choice in terms of modeling capabilities. ICL and fine-tuning each have compelling strengths along with pitfalls, which we summarize in Table 1. At a high level, we observe that these properties are largely complementary.

Reflecting on these abilities of ICL and fine-tuning, we seek an approach that is capable of:

* Instruction following: follows human-engineered instructions to achieve high-quality predictions;
* Chain-of-thought reasoning: produces intermediate text that helps the model toward correct predictions;
* Parameter tuning: refines its internal representation to align with a moderate to large number of supervised training examples; and
* Data scaling: provides high-quality models at data scales from 100 to 1000’s of examples.

#### Model stacking via CoT-augmented Tuning

We begin with the observation that chain-of-thought prompting is typically not supervised, but rather induced via carefully-written instructions. Motivated by this, we fuse two models for learning and inference: a big model $\beta$ with all the most powerful emergent capabilities of LLMs, and a tunable model $\tau$ whose size can be flexibly chosen depending on the capacity needs of the task of interest. We assign the responsibility of chain-of-thought inference to $\beta$ and then provide its textual predictions $\hat{y}_{\beta}$ to the tunable model; the latter can then learn how best to use these inputs (e.g., chain-of-thought explanations) based on how useful they are for predicting the supervised outputs. The parameters $\theta_{\beta}$ remain fixed, as we neither have nor require any directly supervised data for its sub-task.

#### Instruction-augmented Tuning

Crafting a good instruction prompt is known to be essential to high-quality ICL performance, and so we naturally include instructions $I_{\beta}$ to generate reasoning and explanations as a first step. Although instructions are typically not used when tuning smaller models, we observe that instructions have the potential to benefit tuning as well. We speculate that instructions help better align a task’s inputs with the distribution seen during pre-training, allowing the model not only to converge faster but also to make fewer parameter updates. This, in turn, avoids the risk of catastrophic forgetting associated with excessive parameter updates. Therefore, Fiat also provides separate instructions $I_{\tau}$ for the tunable model. (In Fiat, instructions can be viewed as serving a purpose analogous to a Bayesian prior in earlier statistical learning methods: they allow encoding human knowledge into the learning procedure alongside the supervised data that empirically estimates parameters. However, textual instructions are a far more natural way of doing this than the hyperparameters of a Dirichlet.)

#### Pervasive Instruction-tuned Models

Instruction-tuned models have already become the standard for ICL; we use such models as $\theta_{\beta}$ in all of our experiments. However, given Fiat’s use of Instruction-augmented Tuning, we also depart from the common practice of fine-tuning starting from models pre-trained primarily on span-corruption objectives and instead initialize with an instruction-tuned checkpoint (Longpre et al., 2023). This makes optimization easier, since the model is already expecting instructions; this can be especially beneficial in limited-training-data scenarios.
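The pieces introduced so far can be sketched end to end as follows (a minimal sketch: all helper names, the prompt layout, and the `generate`/`loss_and_update` callables are hypothetical stand-ins for model-serving and PEFT-training calls; the paper does not specify these details):

```python
from typing import Callable

def build_tunable_input(I_tau: str, x: str, y_beta: str) -> str:
    """Instruction- and CoT-augmented input for the tunable model: task
    instructions I_tau, the raw input x, and the large model's generated
    reasoning y_beta, concatenated as text."""
    return (f"{I_tau}\n\nInput: {x}\n\n"
            f"Reasoning (from large model): {y_beta}\n\nAnswer:")

def fiat_training_pass(
    data: list[tuple[str, str]],           # (x, y) supervised pairs
    I_beta: str, I_tau: str,
    generate: Callable[[str, str], str],   # (instructions, x) -> reasoning
    loss_and_update: Callable[[str, str], float],  # (input, target) -> loss
) -> None:
    for x, y in data:
        y_beta = generate(I_beta, x)       # frozen theta_beta produces CoT
        inp = build_tunable_input(I_tau, x, y_beta)
        loss_and_update(inp, y)            # PEFT update of theta_tau only
```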
#### Parameter-efficient Tuning

So far, we have added chain-of-thought reasoning, instruction following in tuning, and instruction-tuned initialization to Fiat’s design, all of which move the pre-tuning model and the task definition toward each other in terms of increasing the probability of the desired output. We hypothesize that parameter-efficient tuning is a particularly good fit for optimizing $\theta_{\tau}$ in Fiat over the training data, because large changes to the model parameters $\theta_{\tau}$ should not be necessary given a good initialization. (We use LoRA (Hu et al., 2021) to parameterize the tuning procedure because it does not induce additional inference cost; future work should consider other methods such as soft prompt tuning (Lester et al., 2021).)

Formalizing all the above modifications, we arrive at the final formulation of Fiat used for fine-tuning and inference in Alg. 1 and Alg. 2.

Algorithm 1: Model building with Fiat

Input: $\theta_{\beta}$, $\theta_{\tau}$, $\mathcal{D}$
Output: $\theta^{\prime}_{\tau}$, $I_{\beta}$, $I_{\tau}$
// Write reasoning instructions & select exemplars.
$I_{\beta}=\textsc{PromptEngineering}(\mathcal{D},\theta_{\beta})$
// Write tuning instructions, based on the large model.
$I_{\tau}=\textsc{PromptEngineering}(\mathcal{D},\theta_{\beta})$
// Initialize parameter-efficient tuning.
$\theta^{\text{PEFT}}_{\tau}\leftarrow\textsc{Init}(\theta_{\tau})$
// Iterate over examples or batches of data.
for $x,y\in\mathcal{D}$ do
  // Generate expansions, explanations, reasoning.
  $\hat{y}_{\beta}=\operatorname*{arg\,max}_{y}P(y|x;\theta_{\beta},I_{\beta})$
  // Optimize using a parameter-efficient update.
  $g_{\tau}=\nabla_{\text{PEFT}}P(y|x,\hat{y}_{\beta};\theta_{\tau},\theta_{\tau}^{\text{PEFT}},I_{\tau})$
  $\theta^{\text{PEFT}}_{\tau}\leftarrow\textsc{Update}(\theta^{\text{PEFT}}_{\tau},g_{\tau})$
end for
// Apply PEFT updates to the final tuned model.
$\theta^{\prime}_{\tau}\leftarrow\theta_{\tau}\oplus\theta_{\tau}^{\text{PEFT}}$

Algorithm 2: Inference with Fiat

Input: $x$, $I_{\beta}$, $I_{\tau}$, $\theta_{\beta}$, $\theta^{\prime}_{\tau}$
Output: $y$
// Generate expansions, explanations, reasoning.
$\hat{y}_{\beta}=\operatorname*{arg\,max}_{y}P(y|x;\theta_{\beta},I_{\beta})$
// Infer the final output using the tuned model.
$y=\operatorname*{arg\,max}_{y}P(y|x,\hat{y}_{\beta};\theta^{\prime}_{\tau},I_{\tau})$

Figure 2: Model building and inference with Fiat. Left: Model building with Fiat begins with interactive prompt engineering of the instructions $I$. $I_{\beta}$ specifies how to perform reasoning using few-shot exemplars on $\theta_{\beta}$, i.e., behaviors for which we have no large-scale annotations, while $I_{\tau}$ specifies guidance to the tuned model $\theta_{\tau}$ for using the generated reasoning and input to produce a final output. Both $\theta_{\beta}$ and $\theta_{\tau}$ are instruction-tuned models, and only $\theta_{\tau}$ is updated during training, via parameter-efficient tuning. Right: Inference with Fiat is very simple, requiring only: (1) a call to the large generative model using the fixed pre-trained parameters $\theta_{\beta}$ and the reasoning instructions $I_{\beta}$; and (2) a call to the tuned model $\theta_{\tau}$ along with the associated task instructions $I_{\tau}$.

## 3 Experiments

#### Datasets

One of our primary objectives is to select datasets that naturally cover a broad variety of training data sizes.
We consider tasks ranging from classification to exercising a model’s ability to generate short answers, and we include a large number and variety of languages to evaluate the generality of the method.

First, we use Xor-AttriQA (Muller et al., 2023), a classification task where the model is asked to predict whether the provided answer to the question is supported by the given passage context; it includes 5 languages with 262 examples total. We refer to this as the $\mathcal{O}(100)$ data scenario.

We also study Fiat’s behavior on the cross-lingual QA task of Xtreme-Up (Ruder et al., 2023). This data is an expansion of the XOR QA dataset (Asai et al., 2020), a cross-lingual variant of the TyDi QA (Clark et al., 2020) dataset. (XOR QA stands for cross-lingual open-retrieval question answering; note the difference between XOR QA and Xor-AttriQA.) This task asks a model to predict the correct English answer span given a non-English question and an English answer passage; the task also includes the possibility that the passage does not contain a correct answer, making it more challenging. Cross-lingual QA is a particularly important task for languages that have very little answer content, as it enables providing answers to questions that would otherwise be unanswerable using only in-language content. We provide results on two focus sets. First, we use the subset of 20 Indic languages in Xtreme-Up Cross-lingual QA, where each language has about 300 examples, to study a scenario with moderate data; we refer to this as the $\mathcal{O}(1000)$ data scenario. We also study the full Xtreme-Up Cross-lingual QA task, which has 22,500 examples across 27 languages, where the 5 high-resource languages have more than 2500 examples each; we refer to this as the $\mathcal{O}$(10,000) data scenario. (We report the average result on the under-represented languages, following the recommendations of the Xtreme-Up benchmark.) Together, these tasks allow us to test our methods on three different data size scenarios, from the low hundreds to over 20,000 training examples. Details of the languages and the dataset sizes can be found in App. A.1.

#### Models

We use PaLM-2 (Google et al., 2023) as our base model, and we experiment with instruction-tuned models using the FLAN mixture (Chung et al., 2022). We use PaLM-2 L as $\mathcal{M}_{\beta}$, and we use PaLM-2 XS and S for $\mathcal{M}_{\tau}$.

#### Baselines

We compare to both ICL and fine-tuning baselines. For ICL, we use PaLM-2 L with chain-of-thought reasoning (Wei et al., 2022b). We include 4 few-shot exemplars with hand-written chain-of-thought explanations in English for each of the 5 languages in the Xor-AttriQA Attribution task, for a total of 20 exemplars. (During manual prompt engineering, we used Google Translate to assist with explanation annotation.) However, for Xtreme-Up Cross-lingual QA, it was not feasible to hand-engineer prompts for each of the 27 languages. Therefore, we hand-write 4 chain-of-thought explanations based on Bengali exemplars and use the same ICL examples for all 20 languages. (While the exemplars have Bengali questions, we instruct the model to carry out its reasoning in English.)
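For concreteness, the kind of few-shot chain-of-thought prompt assembly used by this ICL baseline can be sketched as follows (the exact formatting used in the paper is not specified; this layout is illustrative):

```python
def few_shot_cot_prompt(instructions: str,
                        exemplars: list[tuple[str, str, str]],
                        x: str) -> str:
    """Assemble a few-shot chain-of-thought prompt: task instructions,
    then (question, hand-written explanation, answer) exemplars, then
    the query, leaving the model to produce reasoning plus an answer."""
    parts = [instructions]
    for q, explanation, a in exemplars:
        parts.append(f"Q: {q}\nReasoning: {explanation}\nA: {a}")
    parts.append(f"Q: {x}\nReasoning:")
    return "\n\n".join(parts)
```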
### 3.1 Results

$\theta_{\tau}$ | $\theta_{\beta}$ | Method | Xor-AttriQA $\mathcal{O}$(100) Acc / AUC-PR | Xtreme-Up Cross-lingual QA (Indic) $\mathcal{O}$(1000) F1 | Xtreme-Up Cross-lingual QA (Full) $\mathcal{O}$(10000) F1
---|---|---|---|---|---
—– | L | ICL | 78.6 / —–† | 68.9 | 69.2
XS | —– | Fine-tune | 90.5 / 52.1 | 63.5 | 75.5
XS | L | Fiat | 94.0 / 78.1 | 73.6 | 77.8
S | —– | Fine-tune | 90.6 / 54.5 | 67.1 | 77.8
S | L | Fiat | 93.9 / 77.5 | 77.3 | 79.3
Gain over best baseline | | | +3.5 / +26.0 (vs S fine-tune) | +8.4 (vs ICL) | +1.5 (vs S fine-tune)

Table 2: Overall results of Fiat and typical baselines. While we provide improvements with regard to the best baseline, we also point out that the best baseline often differs between ICL and fine-tuning, especially at smaller model sizes; this leaves practitioners to determine the best course of action empirically. †AUC-PR is not computed for ICL because its outputs are text-only.

We present the performance of the baselines (ICL and fine-tuning) and our Fiat framework for all three data settings in Table 2. We show the average scores across all languages in each dataset for simplicity, and we provide the result for each language in App. A.2. Looking at the baselines, we find that few-shot ICL using the PaLM-2 L model is quite competitive without any additional model tuning, but it still lags behind PaLM-2 S fine-tuned on a relatively small amount of task data. However, we find that the best baseline differs between ICL and fine-tuned PaLM-2 XS across tasks and data size settings. If one were choosing between just ICL or fine-tuning, this inconsistency would make it difficult to determine the best course of action without empirical comparisons. On the other hand, Fiat offers the best performance by combining the strengths of both ICL and fine-tuning.

## 4 Ablations and Analysis

$\theta_{\tau}$ | $\theta_{\beta}$ | Method | Xor-AttriQA $\mathcal{O}$(100) Acc / AUC-PR | Xtreme-Up Cross-lingual QA (Indic) $\mathcal{O}$(1000) F1 | Xtreme-Up Cross-lingual QA (Full) $\mathcal{O}$(10000) F1
---|---|---|---|---|---
—– | L | Few-shot ICL | 78.6 / —– | 68.9 | 69.2
XS | L | Fiat | 94.0 / 78.1 | 73.6 | 77.8
 | | w/o CoT-augmented tuning | 94.0 / 80.3 | 70.7 | 76.0
 | | w/o Instruction-augmented tuning | 93.5 / 72.4 | 69.8 | 76.4
 | | w/o Parameter-efficient tuning | 93.7 / 69.8 | 67.8 | 75.8
 | | w/o Instruction-tuned base model | 90.5 / 52.1 | 63.5 | 75.5
S | L | Fiat | 93.9 / 77.5 | 77.3 | 79.3
 | | w/o CoT-augmented tuning | 94.7 / 80.7 | 76.7 | 79.8
 | | w/o Instruction-augmented tuning | 94.1 / 71.6 | 75.3 | 79.1
 | | w/o Parameter-efficient tuning | 94.7 / 76.2 | 72.3 | 78.5
 | | w/o Instruction-tuned base model | 90.6 / 54.5 | 67.1 | 77.8

Table 3: Ablations showing the contribution of each modification within the Fiat recipe; each removal is cumulative with the one above. We observe that each modification tends to make a substantial positive impact on at least one scenario. The bottom line in each block is equivalent to traditional fine-tuning.

In this section, we study the effect of individual design decisions within Fiat, present the results in Table 3, and draw conclusions from them below. In the end, we find that while certain design choices tend to have a larger effect in some settings than others, each tends to make a substantial contribution in some area, and together the overall modeling recipe is very effective as a whole.

#### Instruction-tuned base models improve the final quality of fine-tuned models.
The instruction-tuned Flan XS model improves over the base model on all datasets, especially on Xor-AttriQA and Xtreme-Up Cross-lingual QA Indic, where the total amount of task data is around $O(100)$ to $O(1000)$. This indicates that instruction-tuned models are not only beneficial for ICL, but can also be beneficial for fine-tuning on limited data (Longpre et al., 2023). However, the advantage of the instruction-tuned model on Xtreme-Up Cross-lingual QA decreases from the Indic setting ($O(1000)$ training examples) to the Full setting ($O(10000)$ training examples), indicating that an instruction-tuned model is less helpful when the fine-tuning dataset is large.

#### Instruction-augmented Tuning generally leads to significant improvements.

Adding an appropriate prompted format to the task data is generally beneficial for all tasks. This result indicates that prompt engineering is not only helpful for direct few-shot ICL, but also has a positive impact on model fine-tuning. Prompted tuning is especially helpful for Xor-AttriQA and Xtreme-Up Cross-lingual QA Indic, where the amount of task data is very limited. This is because the prompt format aligns the distribution of the downstream task more closely with the model's pretraining distribution, which allows the pretrained model to generalize to the downstream task from a small number of task examples.

#### CoT-augmented Tuning is helpful for most tasks.

Our CoT-augmented Tuning can lead to large improvements on the Xtreme-Up Cross-lingual QA Indic task. Surprisingly, it does not help Xor-AttriQA, which contradicts findings from prior work showing that explanations can be especially helpful for classification tasks (Hsieh et al., 2023; Zhou et al., 2023). We hypothesize that this is because the model already performs quite well on Xor-AttriQA without access to the explanations (over 90 percent accuracy), and this task may be reaching its saturation point.

#### CoT-augmented Tuning is even more helpful for tasks and languages with lower performance.

We analyze the relationship between the gains brought by CoT-augmented Tuning and baseline performance on the Xtreme-Up Cross-lingual QA tasks. Figure 3 shows the improvement in F1 score for different languages versus the F1 score of a baseline model that lacks CoT-augmented Tuning. We can see that there is an inverse relationship between the benefit of CoT-augmented Tuning and the baseline model score, indicating that CoT is more beneficial for harder tasks or languages where the model could not perform well without the help of the CoT augmentation. This means that while we see meaningful gains in aggregate, for individual languages (or, more generally, individual tasks and use cases), CoT can have an out-sized impact on quality.

Figure 3: Gains in F1 on Xtreme-Up Cross-lingual QA with CoT-augmented Tuning. The lower-performing languages tend to benefit more from CoT augmentation.

Method | F1 | Gains
---|---|---
Baseline | 70.7 | —–
Distilled CoT (Hsieh et al., 2023) | 72.5 | +1.8
Our CoT-augmented Tuning | 73.6 | +2.9

Figure 4: Performance on Xtreme-Up Cross-lingual QA Indic compared to the baseline without CoT. Our CoT-augmented Tuning method significantly outperforms previous methods for distilling CoT.

Figure 5: The validation F1 score throughout training on Xtreme-Up Cross-lingual QA for methods with and without Instruction-augmented Tuning. Instruction-augmented Tuning outperforms the baseline, and it has much better performance at step 0, before any model optimization.
Figure 6: Improvement with Instruction-augmented Tuning for models with and without instruction tuning. Instruction-augmented Tuning is generally helpful for both types of models, and it tends to be more beneficial for instruction-tuned models.

#### CoT-augmented Tuning leads to better quality than CoT distillation.

Recent work proposed distilled CoT, which uses the explanation as a multitask output target, so that the model does not need to generate additional explanations at test time (Hsieh et al., 2023). Here we compare the performance of these two different ways of using the CoT explanations and list the performance on the cross-lingual QA tasks in Figure 4. Despite incurring higher inference cost, our CoT augmentation method outperforms the distilled CoT by a large margin on the harder Xtreme-Up Cross-lingual QA Indic task. In general, we view distillation as an orthogonal technique to Fiat, one that is aimed at efficiency over quality.

#### Adding instructions to tuning helps from beginning to end.

In Figure 5, we plot the training curves of the Flan PaLM-2 S model with and without Instruction-augmented Tuning. We can see that adding instructions to tuning leads to much better performance at step 0, before any model optimization. This indicates that adding instructions to the task data during fine-tuning can significantly improve the zero-shot performance of the model, probably because it makes the task data more similar to the data used in the instruction-tuning stage. (Note that we use the term Instruction-augmented Tuning to differentiate it from the separate concepts of instruction-tuned base models, which create base models that are better able to follow instructions for specific tasks later, and prompt tuning, which learns soft prompt embeddings.) Importantly, this also implies that the model parameters do not need to move as far away from their starting point to achieve the same level of quality, reducing the risk of catastrophic forgetting. However, the model not only reaches the same level of quality in fewer steps, but also manages to exceed the quality of a model tuned without instructions.

#### Instruction-augmented Tuning helps more with an instruction-tuned base model.

We compare the effect of prompted tuning on models with and without instruction tuning. Figure 6 shows that prompted tuning generally brings improvements both for the base model without instruction tuning and for the Flan model with instruction tuning, while the gains on the instruction-tuned Flan model tend to be slightly larger and more consistent. This is likely because the data format we used for prompted tuning (task instructions followed by the input) is more similar to the Flan data mixture used for instruction tuning.

## 5 Related Work

#### Instruction Tuning

Instruction-tuned models (Wei et al., 2021; Longpre et al., 2023) often have better performance on few-shot ICL tasks than base language models, since they are already primed to follow instructions by being fine-tuned on a diverse set of tasks. Using instruction-tuned models is a key component of Fiat.

#### In-Context Learning

In in-context learning, the parameters of the LLM remain fixed, and a prompt containing a few examples along with reasoning steps is used to prime the model for solving similar tasks (Nye et al., 2021; Wei et al., 2022b). In-context learning works best for large language models. Fiat uses this capability of large language models, along with fine-tuning, to power small language models in the low-data regime.
#### Knowledge Transfer from Larger to Smaller LLMs

A popular method for transferring knowledge from large models to smaller ones is model distillation (Hinton et al., 2015), where the outputs of a larger model are used as a training signal for a smaller one. Other approaches include using the larger language model to generate data and then using this data to train smaller models. More recently, the latter approach has been extended to generate reasoning steps, which are provided as fine-tuning data for the smaller language model (Magister et al., 2022; Huang et al., 2022; Li et al., 2022; Ho et al., 2023; Hsieh et al., 2023; Fu et al., 2023; Zhu et al., 2023; Li et al., 2023).

#### Under-represented Languages

Most work that trains large language models and uses them for downstream tasks focuses on English or the collection of 100 or so languages for which there are large, easily available corpora (ImaniGooghari et al., 2023). Tail languages have often been ignored by language technologies due to a lack of available corpora (Nayak & Joshi, 2022). Recent work has focused on tail languages outside of these head languages (Bapna et al., 2022; Ruder et al., 2023). In this work, we make the low-data regime the focus of our efforts, which is especially useful for tail languages.

#### Fine-tuning smaller LLMs

While fine-tuning with prompts has been studied for encoders pre-trained with masked language modeling objectives (Scao & Rush, 2021), we show that it is also important for fine-tuning generative language models. For example, some works show that fine-tuning a smaller language model is a more competitive and efficient method for practical low-data learning problems than few-shot ICL (Asai et al., 2023; Ruder et al., 2023). Agrawal et al. (2022) propose to use synthetic QA data generated from a very large LLM to improve the performance of a smaller model.

## 6 Conclusion

We have presented Fiat, a method that fuses the ICL and fine-tuning learning paradigms and leads to improved model predictions across a variety of data scenarios, ranging from 100 to 10,000 training examples. We hope Fiat provides a practical way of harnessing the full potential of LLMs without needing to make a hard choice between learning paradigms.

## References

* Agrawal et al. (2022) Priyanka Agrawal, Chris Alberti, Fantine Huot, Joshua Maynez, Ji Ma, Sebastian Ruder, Kuzman Ganchev, Dipanjan Das, and Mirella Lapata. Qameleon: Multilingual QA with only 5 examples. _arXiv preprint arXiv:2211.08264_, 2022.
* Asai et al. (2020) Akari Asai, Jungo Kasai, Jonathan H Clark, Kenton Lee, Eunsol Choi, and Hannaneh Hajishirzi. XOR QA: Cross-lingual open-retrieval question answering. _arXiv preprint arXiv:2010.11856_, 2020.
* Asai et al. (2023) Akari Asai, Sneha Kudugunta, Xinyan Velocity Yu, Terra Blevins, Hila Gonen, Machel Reid, Yulia Tsvetkov, Sebastian Ruder, and Hannaneh Hajishirzi. Buffet: Benchmarking large language models for few-shot cross-lingual transfer. _arXiv preprint arXiv:2305.14857_, 2023.
* Bapna et al. (2022) Ankur Bapna, Isaac Caswell, Julia Kreutzer, Orhan Firat, Daan van Esch, Aditya Siddhant, Mengmeng Niu, Pallavi Baljekar, Xavier Garcia, Wolfgang Macherey, et al. Building machine translation systems for the next thousand languages. _arXiv preprint arXiv:2205.03983_, 2022.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in Neural Information Processing Systems_, 33:1877–1901, 2020.
_Advances in neural information processing systems_ , 33:1877–1901, 2020.
* Chowdhery et al. (2022) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. _arXiv preprint arXiv:2204.02311_ , 2022.
* Chung et al. (2022) Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. _arXiv preprint arXiv:2210.11416_ , 2022.
* Clark et al. (2020) Jonathan H Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. Tydi qa: A benchmark for information-seeking question answering in typologically diverse languages. _Transactions of the Association for Computational Linguistics_ , 8:454–470, 2020.
* Fu et al. (2023) Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. Specializing smaller language models towards multi-step reasoning. _arXiv preprint arXiv:2301.12726_ , 2023.
* Goodfellow et al. (2013) Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. _arXiv preprint arXiv:1312.6211_ , 2013.
* Google et al. (2023) Google, Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. _arXiv preprint arXiv:2305.10403_ , 2023.
* Hinton et al. (2015) Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. _arXiv preprint arXiv:1503.02531_ , 2015.
* Ho et al. (2023) Namgyu Ho, Laura Schmid, and Se-Young Yun. Large language models are reasoning teachers. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pp. 14852–14882, Toronto, Canada, July 2023. Association for Computational Linguistics.
* Hsieh et al. (2023) Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes. _arXiv preprint arXiv:2305.02301_ , 2023.
* Hu et al. (2021) Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. _arXiv preprint arXiv:2106.09685_ , 2021.
* Huang et al. (2022) Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. Large language models can self-improve. _arXiv preprint arXiv:2210.11610_ , 2022.
* ImaniGooghari et al. (2023) Ayyoob ImaniGooghari, Peiqin Lin, Amir Hossein Kargaran, Silvia Severini, Masoud Jalili Sabet, Nora Kassner, Chunlan Ma, Helmut Schmid, André FT Martins, François Yvon, et al. Glot500: Scaling multilingual corpora and language models to 500 languages. _arXiv preprint arXiv:2305.12182_ , 2023.
* Lester et al. (2021) Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. _arXiv preprint arXiv:2104.08691_ , 2021.
* Li et al. (2023) Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang, and Yejin Choi. Symbolic chain-of-thought distillation: Small models can also "think" step-by-step. _arXiv preprint arXiv:2306.14050_ , 2023.
* Li et al. (2022) Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen, Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian, Baolin Peng, Yi Mao, et al. Explanations from large language models make small reasoners better. _arXiv preprint arXiv:2210.06726_ , 2022. * Longpre et al. (2023) Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective instruction tuning. _arXiv preprint arXiv:2301.13688_ , 2023. * Magister et al. (2022) Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, and Aliaksei Severyn. Teaching small language models to reason. _arXiv preprint arXiv:2212.08410_ , 2022. * Muller et al. (2023) Benjamin Muller, John Wieting, Jonathan H Clark, Tom Kwiatkowski, Sebastian Ruder, Livio Baldini Soares, Roee Aharoni, Jonathan Herzig, and Xinyi Wang. Evaluating and modeling attribution for cross-lingual question answering. _arXiv preprint arXiv:2305.14332_ , 2023. * Nayak & Joshi (2022) Ravindra Nayak and Raviraj Joshi. L3Cube-HingCorpus and HingBERT: A code mixed Hindi-English dataset and BERT language models. In _Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference_ , pp. 7–12, Marseille, France, June 2022. European Language Resources Association. * Nye et al. (2021) Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. Show your work: Scratchpads for intermediate computation with language models. _arXiv preprint arXiv:2112.00114_ , 2021. * OpenAI (2023) OpenAI. Gpt-4 technical report, 2023. * Ruder et al. (2023) Sebastian Ruder, Jonathan H Clark, Alexander Gutkin, Mihir Kale, Min Ma, Massimo Nicosia, Shruti Rijhwani, Parker Riley, Jean-Michel A Sarr, Xinyi Wang, et al. Xtreme-up: A user-centric scarce-data benchmark for under-represented languages. _arXiv preprint arXiv:2305.11938_ , 2023. * Sanh et al. (2021) Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot task generalization. _arXiv preprint arXiv:2110.08207_ , 2021. * Scao & Rush (2021) Teven Le Scao and Alexander M Rush. How many data points is a prompt worth? _NAACL_ , 2021. * Schick & Schütze (2020a) Timo Schick and Hinrich Schütze. Exploiting cloze questions for few shot text classification and natural language inference. _arXiv preprint arXiv:2001.07676_ , 2020a. * Schick & Schütze (2020b) Timo Schick and Hinrich Schütze. It’s not just size that matters: Small language models are also few-shot learners. _arXiv preprint arXiv:2009.07118_ , 2020b. * Wei et al. (2021) Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. _arXiv preprint arXiv:2109.01652_ , 2021. * Wei et al. (2022a) Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. _arXiv preprint arXiv:2206.07682_ , 2022a. * Wei et al. (2022b) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. 
_Advances in Neural Information Processing Systems_ , 35:24824–24837, 2022b. * Zhou et al. (2023) Yangqiaoyu Zhou, Yiming Zhang, and Chenhao Tan. Flame: Few-shot learning from natural language explanations. _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics_ , 2023. * Zhu et al. (2023) Xuekai Zhu, Biqing Qi, Kaiyan Zhang, Xingwei Long, and Bowen Zhou. Pad: Program-aided distillation specializes large models in reasoning. _arXiv preprint arXiv:2305.13888_ , 2023. ## Appendix A Appendix ### A.1 List of Languages for Each Task We provide the number of training, validation, and test examples for each task in Table 4 and Table 5. Split | bn | fi | ja | ru | te ---|---|---|---|---|--- Train | 40 | 66 | 20 | 84 | 52 Validation | 218 | 150 | 578 | 136 | 174 Test | 2822 | 1318 | 1908 | 1268 | 2146 Table 4: Dataset size for Xor-AttriQA. Split | as | bho | brx | gbm | gom | gu | hi | hne | kn | mai | ml | mni | mr | mwr | or | pa | ps | sa | ta | ur | ar | bn | fi | ja | ko | ru | te ---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--- Train | 323 | 326 | 326 | 326 | 326 | 326 | 326 | 326 | 326 | 326 | 326 | 326 | 326 | 326 | 326 | 326 | 326 | 326 | 326 | 326 | 3159 | 377 | 2467 | 2926 | 3327 | 2560 | 373 Validation | 356 | 358 | 357 | 365 | 365 | 371 | 519 | 372 | 373 | 369 | 373 | 380 | 385 | 386 | 386 | 385 | 384 | 385 | 384 | 387 | 941 | 618 | 978 | 727 | 861 | 731 | 468 Test | 633 | 631 | 633 | 634 | 629 | 630 | 1049 | 629 | 631 | 635 | 629 | 628 | 633 | 632 | 632 | 624 | 633 | 630 | 630 | 634 | 582 | 397 | 606 | 471 | 548 | 448 | 333 Table 5: Dataset size for Xtreme-Up Cross-lingual QA. ### A.2 Language-wise Breakdown of the Results We provide the performance for each language in Table 6, Table 7, and Table 8. | bn | fi | ja | ru | te ---|---|---|---|---|--- $\mathcal{M}_{\tau}$ | $\mathcal{M}_{\beta}$ | Method | Acc / AUC-PR —- | L | Few-shot ICL | 85.9 / —- | 78.5 / —- | 85.4 / —- | 84.5 / —- | 58.9 / —- XS | L | Fiat | 92.6 / 81.1 | 91.0 / 85.3 | 96.3 / 66.5 | 94.8 / 84.9 | 95.3 / 72.5 —- | w/o CoT-Augmented Tuning | 92.5 / 84.7 | 91.8 / 85.8 | 96.2 / 70.3 | 94.6 / 84.1 | 95.0 / 76.6 —- | w/o Instruction-Augmented Tuning | 91.7 / 74.1 | 91.2 / 81.4 | 95.9 / 53.5 | 93.8 / 77.4 | 94.8 / 75.4 —- | w/o Parameter-efficient Tuning | 92.6 / 73.9 | 92.0 / 76.7 | 95.0 / 55.8 | 94.2 / 74.1 | 94.7 / 68.6 —- | w/o Instruction-tuned base model | 89.4 / 65.6 | 88.9 / 65.9 | 94.3 / 42.1 | 90.1 / 58.6 | 89.7 / 28.2 S | L | Fiat | 92.3 / 81.3 | 92.1 / 84.0 | 96.2 / 62.4 | 94.6 / 84.9 | 94.0 / 93.9 —- | w/o CoT-Augmented Tuning | 93.0 / 84.3 | 94.4 / 81.2 | 95.5 / 58.8 | 98.8 / 87.4 | 95.3 / 78.4 —- | w/o Instruction-Augmented Tuning | 93.1 / 75.6 | 92.7 / 82.9 | 95.0 / 51.3 | 94.6 / 78.1 | 95.2 / 70.1 —- | w/o Parameter-efficient Tuning | 92.7 / 76.2 | 93.2 / 83.6 | 96.3 / 59.0 | 95.1 / 83.3 | 96.5 / 78.8 —- | w/o Instruction-tuned base model | 90.9 / 66.3 | 88.6 / 67.7 | 93.2 / 41.0 | 89.7 / 57.5 | 90.3 / 40.2 Table 6: Results on each language for Xor-AttriQA. 
| as | bho | brx | gbm | gom | gu | hi | hne | kn | mai | ml | mni | mr | mwr | or | pa | ps | sa | ta | ur ---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--- $\mathcal{M}_{\tau}$ | $\mathcal{M}_{\beta}$ | Method | F1 —- | L | Few-shot ICL | 72.5 | 61.8 | 43.0 | 60.3 | 72.3 | 70.6 | 61.5 | 70.8 | 72.9 | 73.3 | 72.2 | 57.1 | 71.5 | 69.5 | 71.4 | 73.7 | 70.6 | 72.6 | 71.5 | 69.4 XS | L | Fiat | 75.9 | 73.9 | 47.2 | 72.7 | 76.1 | 76.1 | 79.3 | 76.2 | 76.6 | 75.5 | 76.3 | 61.1 | 75.4 | 73.3 | 76.0 | 75.6 | 76.6 | 77.4 | 75.4 | 73.3 —- | w/o CoT-Augmented Tuning | 73.2 | 73.0 | 40.7 | 68.8 | 71.3 | 76.1 | 79.0 | 72.3 | 74.0 | 71.4 | 76.7 | 48.8 | 73.3 | 72.3 | 71.6 | 74.6 | 72.2 | 74.9 | 75.0 | 74.7 —- | w/o Instruction-Augmented Tuning | 73.2 | 71.5 | 39.1 | 67.8 | 71.7 | 73.7 | 78.5 | 70.3 | 74.0 | 71.2 | 74.7 | 50.1 | 73.9 | 71.4 | 70.9 | 72.2 | 72.8 | 71.8 | 74.5 | 72.48 —- | w/o Parameter-efficient Tuning | 70.7 | 69.5 | 49.2 | 65.7 | 70.7 | 80.5 | 67.4 | 69.9 | 69.7 | 70.9 | 51.6 | 70.0 | 67.8 | 66.8 | 69.5 | 69.7 | 68.7 | 70.9 | 69.8 | 67.8 —- | w/o Instruction-tuned base model | 65.6 | 64.7 | 49.3 | 60.3 | 62.6 | 65.7 | 76.9 | 63.2 | 65.2 | 63.7 | 65.4 | 52.8 | 64.2 | 63.5 | 63.8 | 65.8 | 64.3 | 63.7 | 65.4 | 64.4 S | L | Fiat | 80.2 | 77.8 | 52.2 | 77.2 | 78.3 | 80.6 | 82.2 | 79.5 | 79.7 | 78.8 | 79.8 | 64.5 | 79.4 | 77.4 | 79.4 | 80.7 | 80.0 | 80.4 | 79.8 | 78.0 —- | w/o CoT-augmented Tuning | 79.1 | 78.4 | 50.3 | 75.6 | 78.7 | 79.9 | 84.6 | 77.8 | 79.2 | 78.3 | 79.2 | 62.4 | 77.8 | 77.7 | 79.6 | 79.2 | 78.8 | 79.9 | 80.1 | 78.0 —- | w/o Instruction-Augmented Tuning | 78.8 | 77.6 | 47.7 | 75.1 | 76.1 | 79.1 | 82.8 | 76.3 | 78.4 | 78.0 | 78.4 | 58.0 | 78.1 | 76.0 | 79.3 | 78.1 | 77.0 | 78.2 | 78.0 | 77.2 —- | w/o Parameter-efficient Tuning | 74.3 | 71.2 | 50.6 | 71.7 | 72.7 | 74.6 | 81.8 | 72.7 | 75.1 | 74.1 | 74.9 | 61.9 | 73.9 | 72.1 | 75.8 | 75.5 | 73.5 | 72.6 | 73.6 | 73.5 —- | w/o Instruction-tuned base model | 68.8 | 68.2 | 46.1 | 66.5 | 67.5 | 69.0 | 79.4 | 68.8 | 69.4 | 68.3 | 69.4 | 53.5 | 68.4 | 67.1 | 69.2 | 68.4 | 69.4 | 67.3 | 70.0 | 68.0 Table 7: Results on each language for Xtreme-Up Cross-lingual QA Indic. 
| as | bho | brx | gbm | gom | gu | hi | hne | kn | mai | ml | mni | mr | mwr | or | pa | ps | sa | ta | ur | ar | bn | fi | ja | ko | ru | te ---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--- $\mathcal{M}_{\tau}$ | $\mathcal{M}_{\beta}$ | Method | F1 | | | | | | | —- | L | Few-shot ICL | 72.5 | 61.8 | 43.0 | 60.3 | 72.3 | 70.6 | 61.5 | 70.8 | 72.9 | 73.3 | 72.2 | 57.1 | 71.5 | 69.5 | 71.4 | 73.7 | 70.6 | 72.6 | 71.5 | 69.4 | 66.0 | 75.2 | 65.5 | 60.3 | 61.2 | 66.9 | 68.7 XS | L | Fiat | 80.1 | 80.4 | 52.6 | 77.0 | 78.9 | 80.7 | 85.2 | 80.5 | 80.8 | 79.0 | 79.6 | 65.6 | 79.6 | 78.7 | 79.8 | 79.1 | 80.1 | 79.5 | 79.8 | 78.3 | 83.7 | 84.6 | 82.8 | 83.7 | 86.3 | 81.6 | 82.4 —- | w/o CoT-augmented Tuning | 79.8 | 76.8 | 49.1 | 71.9 | 76.5 | 78.1 | 84.2 | 77.5 | 79.0 | 75.4 | 79.0 | 55.2 | 77.8 | 75.9 | 75.8 | 78.7 | 78.1 | 78.3 | 80.5 | 78.1 | 83.5 | 85.0 | 82.1 | 82.3 | 85.9 | 80.8 | 81.1 —- | w/o Instruction-augmented Tuning | 78.8 | 77.8 | 49.2 | 72.8 | 77.0 | 78.7 | 83.9 | 76.8 | 80.1 | 76.1 | 80.4 | 58.3 | 78.7 | 76.2 | 77.1 | 78.6 | 76.8 | 79.1 | 79.4 | 79.4 | 84.5 | 84.6 | 81.5 | 82.6 | 87.0 | 81.7 | 80.8 —- | w/o Parameter-efficient Tuning | 78.3 | 75.6 | 55.4 | 74.7 | 75.0 | 78.0 | 84.9 | 76.5 | 78.9 | 77.3 | 78.8 | 61.9 | 77.8 | 77.3 | 75.9 | 78.4 | 76.9 | 76.6 | 79.8 | 77.8 | 84.3 | 83.5 | 81.9 | 83.2 | 88.1 | 82.0 | 81.3 —- | w/o Instruction-tuned base model | 76.9 | 76.4 | 56.6 | 73.1 | 74.2 | 76.8 | 84.7 | 75.4 | 77.9 | 75.5 | 78.1 | 62.8 | 77.5 | 74.3 | 74.7 | 77.5 | 76.5 | 75.3 | 77.5 | 75.8 | 82.4 | 84.2 | 81.2 | 82.8 | 88.1 | 80.4 | 80.3 S | L | Fiat | 81.6 | 80.5 | 51.9 | 78.3 | 80.2 | 82.3 | 85.8 | 81.2 | 82.4 | 82.1 | 81.5 | 67.0 | 82.1 | 80.2 | 81.6 | 80.9 | 81.5 | 82.2 | 82.3 | 79.5 | 82.5 | 86.2 | 82.0 | 83.7 | 87.1 | 83.3 | 86.2 —- | w/o CoT-augmented Tuning | 82.8 | 80.5 | 49.9 | 78.0 | 80.0 | 83.4 | 85.9 | 80.4 | 82.7 | 80.5 | 83.7 | 64.9 | 81.5 | 80.2 | 82.0 | 82.0 | 83.0 | 82.4 | 80.0 | 84.2 | 86.6 | 81.9 | 82.4 | 87.0 | 83.9 | 84.3 | 80.6 —- | w/o Instruction-augmented Tuning | 81.3 | 80.0 | 51.2 | 78.3 | 78.4 | 82.0 | 85.7 | 80.5 | 81.2 | 80.3 | 81.8 | 64.8 | 81.0 | 79.7 | 81.2 | 80.5 | 80.7 | 80.5 | 81.6 | 79.4 | 82.8 | 85.7 | 83.3 | 83.8 | 86.4 | 84.1 | 84.0 —- | w/o Parameter-efficient Tuning | 79.5 | 77.5 | 61.5 | 77.3 | 78.3 | 80.1 | 85.3 | 79.0 | 79.9 | 79.0 | 80.5 | 68.9 | 79.0 | 78.4 | 79.8 | 78.8 | 78.7 | 78.9 | 80.5 | 78.3 | 83.3 | 85.1 | 84.1 | 84.9 | 89.2 | 85.7 | 82.4 —- | w/o Instruction-tuned base model | 79.5 | 77.4 | 55.4 | 75.6 | 79.1 | 79.9 | 85.5 | 77.5 | 80.7 | 78.5 | 80.3 | 63.4 | 79.5 | 77.8 | 78.8 | 78.6 | 78.7 | 78.8 | 80.7 | 77.7 | 81.9 | 85.8 | 84.0 | 85.0 | 88.8 | 91.9 | 82.1 Table 8: Results on each language for Xtreme-Up Cross-lingual QA All.
# Multifractal Analysis of the Sinkhorn Algorithm: Unveiling the Intricate Structure of Optimal Transport Maps Jose Rafael Espinosa Mena University of Southern California <EMAIL_ADDRESS> ###### Abstract The Sinkhorn algorithm has emerged as a powerful tool for solving optimal transport problems, finding applications in various domains such as machine learning, image processing, and computational biology. Despite its widespread use, the intricate structure and scaling properties of the coupling matrices generated by the Sinkhorn algorithm remain largely unexplored. In this paper, we delve into the multifractal properties of these coupling matrices, aiming to unravel their complex behavior and shed light on the underlying dynamics of the Sinkhorn algorithm. We prove the existence of the multifractal spectrum and the singularity spectrum for the Sinkhorn coupling matrices. Furthermore, we derive bounds on the generalized dimensions, providing a comprehensive characterization of their scaling properties. Our findings not only deepen our understanding of the Sinkhorn algorithm but also pave the way for novel applications and algorithmic improvements in the realm of optimal transport. ## 1 Introduction Optimal transport has become a cornerstone of modern data analysis, enabling the comparison and manipulation of probability distributions in a geometrically meaningful way [7]. At the heart of many optimal transport algorithms lies the Sinkhorn algorithm, an iterative procedure that efficiently computes the optimal coupling between two probability measures [4]. The Sinkhorn algorithm has found wide-ranging applications, from domain adaptation in machine learning [1] to applications in computer vision [3]. Despite its practical success, the theoretical understanding of the Sinkhorn algorithm’s behavior remains incomplete. In particular, the multiscale structure and scaling properties of the coupling matrices generated by the Sinkhorn algorithm have not been fully explored. Multifractal analysis offers a powerful framework to study such complex systems, providing insights into their local regularity and global scaling behavior [5]. By unraveling the multifractal properties of the Sinkhorn coupling matrices, we aim to gain a deeper understanding of the algorithm’s dynamics and unveil the intricate interplay between the optimal transport problem and its solution. In this paper, we investigate the multifractal properties of the Sinkhorn algorithm. Our main contributions are twofold: * • We prove the existence of the multifractal spectrum and the singularity spectrum for the coupling matrices generated by the Sinkhorn algorithm. These spectra provide a comprehensive characterization of the local scaling behavior and the distribution of singularities in the coupling matrices. * • We derive bounds on the generalized dimensions of the Sinkhorn coupling matrices, shedding light on their global scaling properties and the interplay between the problem size and the regularity of the optimal transport plan. Our work builds upon and extends the existing literature on the mathematical analysis of the Sinkhorn algorithm and the multifractal formalism. By bridging these two domains, we provide a novel perspective on the intricate structure of optimal transport and contribute to the theoretical foundation of this rapidly evolving field. The rest of the paper is organized as follows. In Section 2, we introduce the necessary background on the Sinkhorn algorithm and multifractal analysis. 
Section 3 presents our main results, including the existence proofs for the multifractal and singularity spectra, as well as the bounds on the generalized dimensions. Finally, we conclude the paper in Section 4.

## 2 Background

### 2.1 The Sinkhorn Algorithm

The Sinkhorn algorithm is an iterative procedure for solving the optimal transport problem between two probability measures [4]. Given a cost matrix $C\in\mathbb{R}^{n\times n}$ and two probability vectors $\mathbf{r},\mathbf{c}\in\mathbb{R}^{n}$, the algorithm seeks to find a coupling matrix $P$ that minimizes the total transportation cost while satisfying the marginal constraints:

$\min_{P\in U(\mathbf{r},\mathbf{c})}\langle P,C\rangle,$

where $U(\mathbf{r},\mathbf{c})$ denotes the set of coupling matrices with marginals $\mathbf{r}$ and $\mathbf{c}$, and $\langle\cdot,\cdot\rangle$ is the Frobenius inner product. The Sinkhorn algorithm solves this problem by alternating between row and column normalization steps. Starting from an initial matrix $K=\exp(-C/\varepsilon)$, where $\varepsilon>0$ is a regularization parameter, the algorithm iteratively updates the scaling vectors $\mathbf{u}$ and $\mathbf{v}$ as follows:

$\mathbf{u}\leftarrow\mathbf{r}\oslash(K\mathbf{v}),\quad\mathbf{v}\leftarrow\mathbf{c}\oslash(K^{\top}\mathbf{u}),$

where $\oslash$ denotes element-wise division. The coupling matrix $P$ is then obtained as:

$P=\text{diag}(\mathbf{u})K\text{diag}(\mathbf{v}).$

The Sinkhorn algorithm has several attractive properties, such as fast convergence, stability, and the ability to handle large-scale problems [2]. Moreover, it has been shown that the Sinkhorn divergence, defined as the entropy-regularized optimal transport cost, possesses desirable geometric properties and can be used as a meaningful distance between probability measures [6].

### 2.2 Multifractal Analysis

Multifractal analysis is a powerful framework for studying the local regularity and scaling properties of complex systems [5]. It goes beyond traditional fractal analysis by considering the distribution of local scaling exponents and the interplay between different scales. The central objects of interest in multifractal analysis are the multifractal spectrum and the singularity spectrum. The multifractal spectrum $D(q)$ is defined as:

$D(q)=\lim_{\varepsilon\to 0}\frac{1}{q-1}\frac{\log\sum_{i}\mu(B_{i}(\varepsilon))^{q}}{\log\varepsilon},$

where $q\in\mathbb{R}$ is a moment order, $\mu$ is a measure on a metric space, and $B_{i}(\varepsilon)$ are disjoint balls of radius $\varepsilon$ covering the support of $\mu$. The multifractal spectrum captures the scaling behavior of the measure $\mu$ at different moment orders, providing a global characterization of its complexity. On the other hand, the singularity spectrum $f(\alpha)$ describes the distribution of local scaling exponents $\alpha$. It is defined as:

$f(\alpha)=\dim_{H}\\{x:\alpha(x)=\alpha\\},$

where $\alpha(x)$ is the local scaling exponent at a point $x$, and $\dim_{H}$ denotes the Hausdorff dimension. The singularity spectrum quantifies the "size" of the set of points with a given scaling exponent, revealing the local regularity of the measure. The multifractal and singularity spectra are related through the Legendre transform:

$f(\alpha)=\inf_{q}(q\alpha-D(q)).$

This relationship highlights the deep connection between the global scaling properties captured by the multifractal spectrum and the local regularity described by the singularity spectrum.
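For concreteness, the iteration above can be written in a few lines of code. The following is a minimal NumPy sketch, not taken from the original paper; the cost matrix, marginals, and parameter values are arbitrary illustrative choices.

```python
import numpy as np

def sinkhorn(C, r, c, eps=0.1, n_iter=500):
    """Minimal sketch of the Sinkhorn iteration described above.

    C is the cost matrix, r and c are the marginal probability vectors,
    and eps is the regularization parameter. Returns the coupling
    matrix P = diag(u) K diag(v).
    """
    K = np.exp(-C / eps)       # Gibbs kernel
    u = np.ones_like(r)
    v = np.ones_like(c)
    for _ in range(n_iter):
        u = r / (K @ v)        # row-normalization step
        v = c / (K.T @ u)      # column-normalization step
    return u[:, None] * K * v[None, :]

# Illustrative inputs: uniform marginals, squared-distance cost on a grid.
n = 64
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2
r = np.full(n, 1.0 / n)
c = np.full(n, 1.0 / n)
P = sinkhorn(C, r, c)
print(np.abs(P.sum(axis=1) - r).max(),   # marginal residuals
      np.abs(P.sum(axis=0) - c).max())
```

For small $\varepsilon$, log-domain implementations of the same updates are generally preferred, since the entries of $K=\exp(-C/\varepsilon)$ can underflow.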
In the context of the Sinkhorn algorithm, multifractal analysis offers a novel perspective on the structure of the coupling matrices and the underlying optimal transport problem. By studying the multifractal properties of these matrices, we aim to gain a deeper understanding of the algorithm’s dynamics and its potential for further improvement. ## 3 Main Results In this section, we present our main results on the multifractal properties of the Sinkhorn algorithm. We start by proving the existence of the multifractal spectrum and the singularity spectrum for the coupling matrices generated by the algorithm. Then, we derive bounds on the generalized dimensions, providing insights into their scaling behavior. ### 3.1 Existence of the Multifractal Spectrum Our first main result establishes the existence of the multifractal spectrum for the Sinkhorn coupling matrices. ###### Theorem 1 (Existence of the Multifractal Spectrum). Let $P$ be the coupling matrix generated by the Sinkhorn algorithm for a given cost matrix $C\in\mathbb{R}^{n\times n}$ and marginals $\mathbf{r},\mathbf{c}\in\mathbb{R}^{n}$. Assume that the cost matrix $C$ has non-negative entries and satisfies the triangle inequality. Then, the multifractal spectrum $D(q)$ of the measure $\mu$ induced by $P$ exists for all $q\in\mathbb{R}$. ###### Proof. The proof relies on the properties of the Sinkhorn algorithm and the regularity of the cost matrix $C$. We first show that the coupling matrix $P$ generated by the algorithm has positive entries and satisfies the marginal constraints: $\displaystyle P\mathbf{1}$ $\displaystyle=\mathbf{r}$ $\displaystyle P^{\top}\mathbf{1}$ $\displaystyle=\mathbf{c}$ This implies that the measure $\mu$ induced by $P$ is a probability measure on the support of $P$. Next, we exploit the regularization property of the Sinkhorn algorithm. The algorithm starts from an initial matrix $K=\exp(-C/\varepsilon)$, where $\varepsilon>0$ is a regularization parameter. This exponential transformation ensures that the entries of $K$ decay rapidly with increasing cost, introducing a smoothing effect on the optimal transport plan. Using the triangle inequality assumption on the cost matrix $C$, we can show that the entries of the coupling matrix $P$ decay exponentially with respect to the geodesic distance on the support of $\mu$. Specifically, there exist constants $\alpha,\beta>0$ such that: $P_{ij}\leq\alpha\exp(-\beta d(x_{i},x_{j})),$ where $d(\cdot,\cdot)$ is the geodesic distance induced by the cost matrix. This exponential decay property allows us to control the local regularity of the measure $\mu$ and establish the existence of the partition function $Z(q,\varepsilon)$: $Z(q,\varepsilon)=\sum_{i}\mu(B_{i}(\varepsilon))^{q},$ where $B_{i}(\varepsilon)$ are disjoint balls of radius $\varepsilon$ covering the support of $\mu$. Using the decay property of $P$ and the regularity of the cost matrix, we can derive upper and lower bounds on $Z(q,\varepsilon)$ of the form: $c_{1}\varepsilon^{-\tau(q)}\leq Z(q,\varepsilon)\leq c_{2}\varepsilon^{-\tau(q)},$ where $c_{1},c_{2}>0$ are constants, and $\tau(q)$ is a real-valued function. Finally, by taking the limit as $\varepsilon\to 0$ and using the definition of the multifractal spectrum: $D(q)=\lim_{\varepsilon\to 0}\frac{1}{q-1}\frac{\log Z(q,\varepsilon)}{\log\varepsilon},$ we obtain that $D(q)$ exists and is equal to $\tau(q)$ for all $q\in\mathbb{R}$. 
The technical details of the proof involve careful estimations of the partition function and the use of measure-theoretic arguments to ensure the existence of the limit. The regularity assumptions on the cost matrix $C$ play a crucial role in controlling the local behavior of the measure $\mu$ and establishing the required bounds. ∎ The existence of the multifractal spectrum has important implications for understanding the scaling properties of the Sinkhorn coupling matrices. It shows that these matrices exhibit a rich multiscale structure, with different regions characterized by distinct scaling exponents. The multifractal spectrum provides a global description of this structure, capturing the interplay between the regularization parameter, the cost matrix, and the resulting optimal transport plan. Moreover, the proof highlights the regularizing effect of the Sinkhorn algorithm. The exponential transformation of the cost matrix introduces a smoothing of the optimal transport plan, ensuring a certain degree of regularity in the coupling matrix. This regularity is essential for the existence of the multifractal spectrum and the well-behaved scaling properties of the algorithm. ### 3.2 Existence of the Singularity Spectrum Our second main result concerns the existence of the singularity spectrum for the Sinkhorn coupling matrices. ###### Theorem 2 (Existence of the Singularity Spectrum). Under the same assumptions as in Theorem 1, the singularity spectrum $f(\alpha)$ of the measure $\mu$ induced by the Sinkhorn coupling matrix $P$ exists for all $\alpha$ in the range of local scaling exponents. ###### Proof. The proof builds upon the existence of the multifractal spectrum and the properties of the Legendre transform. We start by defining the local scaling exponents $\alpha(x)$ as: $\alpha(x)=\lim_{\varepsilon\to 0}\frac{\log\mu(B(x,\varepsilon))}{\log\varepsilon},$ where $B(x,\varepsilon)$ is a ball of radius $\varepsilon$ centered at $x$. The existence of this limit for almost all $x$ follows from the regularity properties of the measure $\mu$ established in the proof of Theorem 1. Next, we define the singularity spectrum $f(\alpha)$ as the Hausdorff dimension of the set of points with a given local scaling exponent: $f(\alpha)=\dim_{H}\\{x:\alpha(x)=\alpha\\},$ To prove the existence of $f(\alpha)$, we use the Legendre transform relationship between the multifractal spectrum $D(q)$ and the singularity spectrum: $f(\alpha)=\inf_{q}(q\alpha-D(q)).$ The existence of $D(q)$, as proven in Theorem 1, ensures that the infimum in the Legendre transform is attained for each $\alpha$. This guarantees the existence of $f(\alpha)$ for all $\alpha$ in the range of local scaling exponents. The proof also relies on the properties of the Legendre transform, such as its convexity and the fact that it relates the global scaling behavior captured by $D(q)$ to the local regularity described by $f(\alpha)$. The singularity spectrum provides a detailed characterization of the distribution of local scaling exponents, revealing the fine-grained structure of the measure $\mu$. ∎ The existence of the singularity spectrum complements the multifractal spectrum in describing the scaling properties of the Sinkhorn coupling matrices. While the multifractal spectrum captures the global behavior, the singularity spectrum focuses on the local regularity and the distribution of singularities in the measure. 
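Both spectra can also be estimated numerically from a computed coupling matrix. The sketch below is an illustration added here, not part of the original analysis: it coarse-grains $P$ into boxes (which play the role of the covering balls) to estimate $D(q)$ from a log-log fit, then evaluates the Legendre transform $f(\alpha)=\inf_{q}(q\alpha-D(q))$ on a finite grid of moments, following the conventions of this paper; the box sizes and grids are arbitrary choices.

```python
import numpy as np

def D_spectrum(P, qs, sizes=(2, 4, 8, 16)):
    """Box-counting estimate of D(q) for the measure induced by P (sketch).

    Coarse-grains the n x n matrix P into s-by-s boxes, forms the partition
    function Z(q, eps) = sum_i mu_i^q, and fits log Z against log eps,
    where eps = s/n is the relative box side. Moments q close to 1 are
    assumed to be excluded, to avoid the division by (q - 1).
    """
    n = P.shape[0]
    log_eps, log_Z = [], []
    for s in sizes:
        m = n // s
        mu = P[:m * s, :m * s].reshape(m, s, m, s).sum(axis=(1, 3)).ravel()
        mu = mu[mu > 0]                       # keep non-empty boxes only
        log_eps.append(np.log(s / n))
        log_Z.append([np.log(np.sum(mu ** q)) for q in qs])
    slopes = np.polyfit(log_eps, np.array(log_Z), 1)[0]   # one fit per q
    return slopes / (np.asarray(qs) - 1.0)

def f_spectrum(alphas, qs, D):
    """Legendre transform f(alpha) = inf_q (q * alpha - D(q)) on a q-grid."""
    qs, D = np.asarray(qs), np.asarray(D)
    return np.array([np.min(qs * a - D) for a in alphas])

# Illustrative usage with a coupling matrix P, e.g., from the earlier sketch:
# qs = np.array([-2.0, -1.0, 0.0, 0.5, 2.0, 3.0])
# D = D_spectrum(P, qs)
# f = f_spectrum(np.linspace(0.5, 3.0, 11), qs, D)
```

The finite-grid infimum is only an approximation of the Legendre transform, but it suffices to visualize how local regularity is distributed across the coupling matrix.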
The singularity spectrum allows us to identify regions of the coupling matrix with different scaling behavior and to quantify the "size" of these regions in terms of their Hausdorff dimension. This information is valuable for understanding the local structure of the optimal transport plan and its relationship to the underlying cost matrix and marginal constraints. Furthermore, the Legendre transform relationship between the multifractal and singularity spectra highlights the deep connection between the global and local scaling properties of the Sinkhorn coupling matrices. It shows that the singularity spectrum can be recovered from the multifractal spectrum through a variational principle, providing a unified framework for studying the multiscale structure of these matrices.

### 3.3 Bounds on Generalized Dimensions

Our third main result provides bounds on the generalized dimensions of the Sinkhorn coupling matrices, shedding light on their scaling behavior and the influence of the problem size.

###### Theorem 3 (Bounds on Generalized Dimensions).

Let $P$ be the coupling matrix generated by the Sinkhorn algorithm for a cost matrix $C\in\mathbb{R}^{n\times n}$ and marginals $\mathbf{r},\mathbf{c}\in\mathbb{R}^{n}$. Then, the generalized dimensions $D(q)$ of the measure $\mu$ induced by $P$ satisfy the following bounds:

$\displaystyle D(0)$ $\displaystyle\leq\min\left(\log n,-\log\min_{i,j}C_{ij}\right)\cdot\frac{1}{\log n}$

$\displaystyle D(1)$ $\displaystyle\leq\min\left(\log n,-\log\min_{i,j}P_{ij}\right)\cdot\frac{1}{\log n}$

$\displaystyle D(2)$ $\displaystyle\leq\min\left(\log n,-2\log\min_{i,j}P_{ij}\right)\cdot\frac{1}{\log n}.$

###### Proof.

The proof relies on the properties of the Sinkhorn algorithm and the generalized dimensions. We start by recalling the definition of the generalized dimensions:

$D(q)=\lim_{\varepsilon\to 0}\frac{1}{q-1}\frac{\log\sum_{i}\mu(B_{i}(\varepsilon))^{q}}{\log\varepsilon},$

where $B_{i}(\varepsilon)$ are disjoint balls of radius $\varepsilon$ covering the support of $\mu$. For $q=0$, the generalized dimension $D(0)$ coincides with the box-counting dimension, which measures the growth rate of the number of non-empty balls $N(\varepsilon)$ as $\varepsilon\to 0$:

$D(0)=\lim_{\varepsilon\to 0}\frac{\log N(\varepsilon)}{\log 1/\varepsilon}.$

To bound $D(0)$, we use the fact that the coupling matrix $P$ has size $n\times n$ and that its entries are non-negative. This implies that the number of non-empty balls satisfies:

$N(\varepsilon)\leq\min(n^{2},\varepsilon^{-d}),$

where $d$ is the dimension of the ambient space. The term $n^{2}$ comes from the fact that the coupling matrix has at most $n^{2}$ non-zero entries, while the term $\varepsilon^{-d}$ represents the maximum number of balls of radius $\varepsilon$ needed to cover the support of $\mu$. By taking the logarithm and the limit as $\varepsilon\to 0$, we obtain the bound:

$D(0)\leq\min\left(\log n,-\log\min_{i,j}C_{ij}\right)\cdot\frac{1}{\log n},$

where we used the fact that the minimum entry of the cost matrix $C$ provides a lower bound on the size of the balls. For $q=1$ and $q=2$, the generalized dimensions $D(1)$ and $D(2)$ are related to the information and correlation dimensions, respectively. To bound these dimensions, we use the fact that the entries of the coupling matrix $P$ are non-negative and sum up to 1.
This allows us to derive upper bounds on the sums involved in the definition of $D(1)$ and $D(2)$: $\sum_{i}\mu(B_{i}(\varepsilon))\leq 1,\quad\sum_{i}\mu(B_{i}(\varepsilon))^{2}\leq\max_{i}\mu(B_{i}(\varepsilon)).$ Using these bounds and the properties of the Sinkhorn algorithm, we obtain the desired bounds on $D(1)$ and $D(2)$ in terms of the minimum entry of the coupling matrix $P$. ∎ The bounds on the generalized dimensions provide insights into the scaling behavior of the Sinkhorn coupling matrices and their dependence on the problem size. The bound on $D(0)$ shows that the box-counting dimension is limited by the logarithm of the problem size $n$ and the minimum entry of the cost matrix $C$. This reflects the fact that the coupling matrix has a finite size and that the cost matrix influences the spatial distribution of the optimal transport plan. The bounds on $D(1)$ and $D(2)$ reveal the influence of the minimum entry of the coupling matrix $P$ on the information and correlation dimensions. These dimensions capture the scaling behavior of the measure $\mu$ at different levels of singularity, with $D(1)$ focusing on the entropy and $D(2)$ on the correlation structure. The bounds suggest that the singularity of the coupling matrix, as measured by its minimum entry, plays a crucial role in determining the scaling properties of the optimal transport plan. Furthermore, the bounds highlight the interplay between the problem size $n$ and the regularity of the coupling matrix. As the problem size increases, the bounds on the generalized dimensions become tighter, indicating that the scaling behavior of the coupling matrix becomes more regular and predictable. This is consistent with the regularizing effect of the Sinkhorn algorithm, which ensures a certain degree of smoothness in the optimal transport plan. Overall, the bounds on the generalized dimensions provide a quantitative characterization of the scaling properties of the Sinkhorn coupling matrices, shedding light on their dependence on the problem size, the cost matrix, and the regularity of the optimal transport plan. These bounds can be used to assess the complexity of the optimal transport problem and to guide the design of efficient algorithms for its solution. ## 4 Conclusion In this paper, we have studied the multifractal properties of the Sinkhorn algorithm, a widely used method for solving optimal transport problems. Through a mathematical analysis, we have proven the existence of the multifractal and singularity spectra for the coupling matrices generated by the algorithm, providing a comprehensive characterization of their local regularity and scaling behavior. Moreover, we have derived bounds on the generalized dimensions of these matrices, shedding light on their dependence on the problem size and the regularity of the cost matrix. Our results have important implications for the understanding and applications of the Sinkhorn algorithm in various domains, from image processing and machine learning to computational biology and physics. By uncovering the multiscale structure of the optimal transport plans, our work opens up new possibilities for the development of efficient and adaptive numerical methods that exploit this structure, as well as for the analysis and comparison of complex data sets using multifractal techniques. Furthermore, our work highlights the fruitful interplay between optimal transport and multifractal analysis, two active and rapidly evolving fields of applied mathematics. 
We have shown that multifractal analysis can provide valuable insights into the behavior of optimal transport algorithms, while the Sinkhorn algorithm offers a natural and computationally tractable setting for the application of multifractal techniques. This interplay opens up new avenues for research at the intersection of these two fields, with potential applications ranging from the theoretical study of the geometry of optimal transport to the practical development of efficient multiscale methods for data analysis and simulation. In conclusion, our work presents a novel perspective on the Sinkhorn algorithm and its multifractal properties, providing a deeper understanding of the structure and behavior of optimal transport plans. We believe that our results will stimulate further research in this area and contribute to the development of more efficient and robust methods for the solution of optimal transport problems in various scientific and engineering applications.

## References

* [1] Mokhtar Z. Alaya, Maxime Bérar, Gilles Gasso, and Alain Rakotomamonjy. Screening sinkhorn algorithm for regularized optimal transport. In Advances in Neural Information Processing Systems (NeurIPS), volume 32, 2019.
* [2] Jason Altschuler, Jonathan Weed, and Philippe Rigollet. Near-linear time approximation algorithms for optimal transport via sinkhorn iteration. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pages 1961–1971, Red Hook, NY, USA, 2017. Curran Associates Inc.
* [3] Nicolas Bonneel and Julie Digne. A survey of optimal transport for computer graphics and computer vision. Computer Graphics Forum, 42:439–460, 2023.
* [4] Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transportation distances. Advances in Neural Information Processing Systems, 26, 2013.
* [5] Kenneth Falconer. Fractal Geometry: Mathematical Foundations and Applications. Wiley, Chichester, 2003.
* [6] Aude Genevay. Entropy-Regularized Optimal Transport for Machine Learning. PhD thesis, 2019.
* [7] Gabriel Peyré and Marco Cuturi. Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning, 11:355–607, 2019.
# Superconductivity by Berry connection from many-body wave functions: revisit to Andreev$-$Saint-James reflection and Josephson effect

Hiroyasu Koizumi Division of Quantum Condensed Matter Physics, Center for Computational Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8577, Japan

###### Abstract

Although the standard theory of superconductivity based on the BCS theory is a successful one, several experimental results indicate the necessity for a fundamental revision. We argue that the revision is on the origin of the phase variable for superconductivity; this phase appears as a consequence of the electron-pairing in the standard theory, but its origin is a Berry connection arising from many-body wave functions. When this Berry connection is non-trivial, it gives rise to a collective mode that generates supercurrent; this collective mode creates number-changing operators for particles participating in this mode, and these number-changing operators stabilize the superconducting state by exploiting the Cooper instability. In the new theory, the role of the electron-pairing is to stabilize the nontrivial Berry connection; it is not the cause of superconductivity. In BCS superconductors, however, the nontrivial Berry connection and the electron-pairing appear simultaneously. Therefore, the electron-pairing amplitude can be used as an order parameter for the superconducting state. We revisit the Andreev$-$Saint-James reflection and the Josephson effect. They are explained as consequences of the presence of the Berry connection. Bogoliubov quasiparticles are replaced by the particle-number conserving Bogoliubov excitations that describe the transfer of electrons between the collective mode and the single-particle mode. There are two distinct cases for the Josephson effect; one of them has common Bogoliubov excitations for the two superconductors in the junction, and the other has different Bogoliubov excitations for the different superconductors. The latter case is the one considered in the standard theory; in this case, the Cooper pairs tunnel through without Bogoliubov excitations, creating the impression that the supercurrent is a flow of Cooper pairs; however, it does not explain the observed ac Josephson effect under the experimental boundary condition. On the other hand, the former case explains the ac Josephson effect under the experimental boundary condition. In this case, it is clearly shown that the supercurrent is a flow of electrons brought about by the non-trivial Berry connection, which provides an additional $U(1)$ gauge field.

## I Introduction

The current standard theory of superconductivity is the one based on the BCS theory [1]. In this theory, the order parameter of the superconducting state is the electron-pairing amplitude or the pair potential. This gives rise to an energy gap for single-particle excitations and provides rigidity of the superconducting state against perturbations. The standard theory has been successfully applied to many superconducting materials; a notable point is that it successfully explains the superconducting transition temperatures for superconductors whose normal states are simple metals. In 1986, high temperature superconductivity was found in ceramics [2]. Since then, extensive efforts have been made to elucidate its mechanism. In spite of all the efforts, a widely-accepted theory has not been obtained yet.
A notable point of the cuprate superconductivity is that the superconducting transition temperature for the optimally-doped sample is not the pairing energy gap formation temperature, but the stabilization temperature for loop currents of the superconducting coherence length size [3]. Efforts to elucidate the cuprate superconductivity have led some people to reexamine the theory of superconductivity at a fundamental level [4, 5]. It has then been noticed that there are two solid experimental facts that point to the need for fundamental revisions of the standard theory [4, 5]. One of them is the reversible phase transitions between normal and superconducting phases in the $H$-$T$ plane (for Type I superconductors) [6, 7, 8, 9]. A series of works [10, 11, 12, 13] indicates that the superconducting-normal state transition in the presence of a magnetic field occurs without energy dissipation, and state-of-the-art calorimetry indicates that 99.99% of the supercurrent stops without current carriers undergoing irreversible collisions (see Appendix B of Ref. [6]). However, such a transition is impossible in the standard theory; according to the standard theory, paired electrons flow without dissipation but single electrons flow with dissipation; thus, the supercurrent generated by the flow of electron pairs in the magnetic field inevitably produces Joule heat during the superconducting-to-normal phase transition, due to the existence of a significant number of broken pairs that flow with dissipation. The other is the mass of the electron in the London moment [14, 15]. Inside a rotating superconductor, a magnetic field is created by the supercurrent produced in the surface region. The London moment is the magnetic moment produced by this supercurrent [16]. The London moment has been measured many times using different materials, ranging from conventional superconductors [17, 18, 19, 20, 21] to the high Tc cuprates [22, 23] and heavy fermion superconductors [24]. The results always indicate that the mass $m$ is the free electron mass $m_{e}$ if the electron charge $q=-e$ is employed. However, the standard theory predicts it to be an effective mass $m^{\ast}$, contradicting the experimental results. The resolution of the above two discrepancies is provided by a new theory of superfluidity that attributes the superfluidity to the appearance of a nontrivial Berry connection from many-body wave functions [25, 15]. In this theory, the supercurrent is explained as a topologically protected current generated by the collective mode created by the nontrivial Berry connection. A salient feature of the new theory is that it is formulated in a particle-number conserving way. In the standard theory, a particle-number non-conserving state vector is used, and its use gives rise to the phase variable that explains the Meissner effect and supercurrent generation; Bogoliubov quasiparticles are superpositions of electrons and holes, which can only be meaningful in the particle-number non-conserving formalism. On the other hand, the particle number is conserved in the new theory; the Bogoliubov quasiparticles are replaced by excitations that describe the transfer of electrons between the collective mode and the single-particle mode while keeping the particle number fixed. Since the Andreev$-$Saint-James (ASJ) reflection [26, 27] and the Josephson effect are caused by the superconducting phase variable, it is important to show how they can be handled in the new theory.
A purpose of the present work is to provide explanations for them. In the new theory, a Berry connection arises from interactions through wave functions. Electrons whose coordinates are arguments of the same wave function actually interact through the gauge field created by the wave function they share [25]. This gauge field is calculated as a Berry connection. It is a $U(1)$ gauge field, just like the $U(1)$ gauge field of electromagnetism. Thus, there are two $U(1)$ gauge fields in the system. The presence of the Berry connection modifies Maxwell's equations and the Lorentz interaction between charged particles and the electromagnetic field, as will be discussed in the present work. Actually, the Lorentz interaction gives rise to Aharonov-Bohm type effects [28] that cannot be described by the Lorentz force. This modification of the Lorentz interaction affects the magnetic energy part of the phenomenological Ginzburg-Landau theory [29] in such a way that Abrikosov's vortices [30] appear naturally. Then, the superconducting coherence length is regarded as the core size of the loop currents generated by the Berry connection that exist even without an applied magnetic field. The organization of the present work is as follows: in Section II, the modification of Maxwell's equations in the presence of the Berry connection from many-body wave functions is explained. In Section III, the modification of the magnetic energy part of the Ginzburg-Landau theory and its consequences are discussed. Supercurrent flow in the new and standard theories is compared in Section IV. In Section V, the Andreev$-$Saint-James reflection is revisited. In Section VI, the Josephson effect is revisited. Lastly, we conclude the present work in Section VII.

## II Modification of Maxwell's equations in the presence of the Berry connection from many-body wave functions

From the viewpoint of the Feynman path integral formalism of quantum mechanics [31], the wave function is the sum of contributions from all paths, each contributing an exponential whose phase is the classical action for the path in question divided by $\hbar$. For the system of charged particles and the electromagnetic field, the classical action $S$ is composed of the following three terms,

$\displaystyle S=S_{1}+S_{2}+S_{3}$ (1)

where

$\displaystyle S_{1}=\sum_{i}{m\over 2}\int dt\ \dot{\bf r}_{i}^{2}$ (2)

is the action for the particles,

$\displaystyle S_{2}=-\int d^{3}rdt\ \left[\rho\phi^{\rm em}({\bf r},t)-{1\over c}{\bf j}\cdot{\bf A}^{\rm em}({\bf r},t)\right]=-q\sum_{i}\int dt\ \left[\phi^{\rm em}({\bf r}_{i},t)-{1\over c}\dot{\bf r}_{i}\cdot{\bf A}^{\rm em}({\bf r}_{i},t)\right]$ (3)

is the action for the interaction between the field and particles, and

$\displaystyle S_{3}={1\over{8\pi}}\int d^{3}rdt\ \left[({\bf E}^{\rm em})^{2}-({\bf B}^{\rm em})^{2}\right]={1\over{8\pi}}\int d^{3}rdt\ \left[\left(-\nabla\phi^{\rm em}-{1\over c}{{\partial{\bf A}^{\rm em}}\over{\partial t}}\right)^{2}-\left(\nabla\times{\bf A}^{\rm em}\right)^{2}\right]$ (4)

is the action for the field. Here $\phi^{\rm em}$ and ${\bf A}^{\rm em}$ are the scalar and vector potentials for the electromagnetic field, respectively; $\rho$ and ${\bf j}$ are the electric charge and current densities, respectively; $q$ and $m$ are the charge and mass of the particle, respectively. In the following we consider the case where the electric field ${\bf E}^{\rm em}$ is absent and only the magnetic field ${\bf B}^{\rm em}$ is present.
We will consider the case where ${\bf E}^{\rm em}$ is present later, when we deal with the ac Josephson effect. In our previous work [25, 32, 15], it was shown that the Berry connection arising from many-body wave functions modifies the momentum operator $-i\hbar\nabla$ in the Schrödinger representation of quantum mechanics as follows

$\displaystyle-i\hbar\nabla\longrightarrow-i\hbar\nabla+\hbar{\bf A}_{\Phi}^{\rm MB}$ (5)

where ${\bf A}_{\Phi}^{\rm MB}$ is the Berry connection defined by

$\displaystyle{\bf A}^{\rm MB}_{\Phi}({\bf r},t)=-i\langle n_{\Phi}({\bf r},t)|\nabla|n_{\Phi}({\bf r},t)\rangle$ (6)

and $|n_{\Phi}({\bf r})\rangle$ is the parameterized wave function with the parameter ${\bf r}$ and integration coordinates ${\bf r}_{2},\cdots,{\bf r}_{N}$ given by

$\displaystyle\langle{\bf r}_{2},\cdots,{\bf r}_{N}|n_{\Phi}({\bf r},t)\rangle={{\Phi({\bf r},{\bf r}_{2},\cdots,{\bf r}_{N},t)}\over{|C_{\Phi}({\bf r},t)|^{{1\over 2}}}}$ (7)

with $|C_{\Phi}({\bf r},t)|$ being the normalization constant given by

$\displaystyle|C_{\Phi}({\bf r},t)|=\int d{\bf r}_{2}\cdots d{\bf r}_{N}\Phi({\bf r},{\bf r}_{2},\cdots)\Phi^{\ast}({\bf r},{\bf r}_{2},\cdots)$ (8)

Inclusion of $\hbar{\bf A}_{\Phi}^{\rm MB}$ means the inclusion of the gauge field that describes the interaction between particles through the wave function they share. As a consequence, the effective vector potential in the system becomes

$\displaystyle{\bf A}^{\rm eff}={\bf A}^{\rm em}+{\bf A}^{\rm fic},\quad{\bf A}^{\rm fic}=\hbar{\bf A}_{\Phi}^{\rm MB}$ (9)

due to the presence of the "fictitious" vector potential ${\bf A}^{\rm fic}$. Actually, ${\bf A}^{\rm fic}$ is given by

$\displaystyle{\bf A}^{\rm fic}={{\hbar c}\over{2e}}\nabla\chi$ (10)

where $\chi$ is an angular variable with period $2\pi$ [33]. This appears through the spin-twisting itinerant motion of electrons, and the spin-twisting is caused by the Rashba spin-orbit interaction. Although the energy gain by the spin-twisting is very small, it gives rise to the non-trivial Berry connection; the Berry connection creates the number changing operators that make it possible to gain energy by exploiting the Cooper instability. By including the Berry connection, $S_{2}$ becomes

$\displaystyle S_{2}^{\prime}={1\over c}\int d^{3}rdt\ {\bf j}\cdot{\bf A}^{\rm eff}({\bf r},t)$ (11)

where we retained only the term with the vector potential, assuming that the electric field is absent. This gives rise to the "Lorentz force". The Lorentz force from ${\bf A}^{\rm fic}$ is zero; however, ${\bf A}^{\rm fic}$ may affect the dynamics of charged particles through the Aharonov-Bohm effect [28]. Thus, we call this term the Lorentz interaction term instead of the Lorentz force term. Since the electromagnetic field energy is the energy stored in the space through the Lorentz interaction derived from $S_{2}^{\prime}$, $S_{3}$ should be modified as

$\displaystyle S_{3}^{\prime}=-{1\over{8\pi}}\int d^{3}rdt\ ({\bf B}^{\rm eff})^{2}$ (12)

where

$\displaystyle{\bf B}^{\rm eff}={\bf B}^{\rm em}+{\bf B}^{\rm fic},\quad{\bf B}^{\rm fic}=\nabla\times{\bf A}^{\rm fic}={{\hbar c}\over{2e}}\nabla\times\nabla\chi$ (13)

${\bf B}^{\rm fic}$ may not be zero due to the fact that $\chi$ may be multi-valued.
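As a simple worked example of such a multi-valued $\chi$ (an illustration added here, using only the definitions above): let $\varphi$ be the azimuthal angle around a singular line taken along the $z$-axis, and set $\chi=n\varphi$ with an integer winding number $n$. Then $\int_{C}d{\bf r}\cdot\nabla\chi=2\pi n$ for any loop $C$ encircling the line, and

$\displaystyle{\bf B}^{\rm fic}={{\hbar c}\over{2e}}\nabla\times\nabla\chi={{hc}\over{2e}}\ n\ \delta^{(2)}({\bf r}){\bf e}_{z}$

where $\delta^{(2)}({\bf r})$ is the two-dimensional delta function in the plane normal to the line. In other words, a multi-valued $\chi$ produces a flux line carrying an integer multiple of the flux quantum ${{hc}\over{2e}}$; this is precisely the circulation that enters the monopole argument below.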
Using $S_{2}^{\prime}$ and $S_{3}^{\prime}$, two of Maxwell's equations are modified as

$\displaystyle\nabla\cdot{\bf B}^{\rm eff}$ $\displaystyle=$ $\displaystyle 0$ (14)

$\displaystyle\nabla\times{\bf B}^{\rm eff}$ $\displaystyle=$ $\displaystyle{{4\pi}\over c}{\bf j}$ (15)

The first one gives rise to a Dirac monopole as shown below. It is written as

$\displaystyle\nabla\cdot{\bf B}^{\rm em}=-\nabla\cdot(\nabla\times{\bf A}^{\rm fic})$ (16)

When both sides of the above equation are integrated over a closed region with surface ${\rm Sf}$, we have

$\displaystyle\int_{\rm Sf}d{\bf S}\cdot{\bf B}^{\rm em}=-\int_{\rm Sf}d{\bf S}\cdot(\nabla\times{\bf A}^{\rm fic})$ (17)

We split ${\rm Sf}$ into two surfaces ${\rm Sf}_{1}$ and ${\rm Sf}_{2}$ with a common boundary loop, $C=\partial({\rm Sf}_{1})=-\partial({\rm Sf}_{2})$. Then, we have

$\displaystyle\int_{{\rm Sf}_{1}}d{\bf S}\cdot(\nabla\times{\bf A}^{\rm fic})+\int_{{\rm Sf}_{2}}d{\bf S}\cdot(\nabla\times{\bf A}^{\rm fic})=\int_{\partial({\rm Sf}_{1})}d{\bf r}\cdot{\bf A}^{\rm fic}+\int_{\partial({\rm Sf}_{2})}d{\bf r}\cdot{\bf A}^{\rm fic}$ (18)

We examine the case in which singularities exist in ${\bf A}^{\rm fic}$. Let us consider a surface $S$ with boundary $C=\partial S$, such that ${\bf A}^{\rm fic}$ has a singularity in $S$. Then, we have

$\displaystyle\int_{C}d{\bf r}\cdot{{\hbar c}\over{2e}}\nabla\chi={{hc}\over{2e}}n$ (19)

where $n$ is an integer. If we have $n=0$ for the $\partial({\rm Sf}_{1})$ term and $n=1$ for the $\partial({\rm Sf}_{2})$ term, we have

$\displaystyle\int_{\rm Sf}d{\bf S}\cdot{\bf B}^{\rm em}={{hc}\over{2e}}$ (20)

This shows that a monopole with magnetic charge ${{hc}\over{2e}}$ exists in the region enclosed by ${\rm Sf}$. This corresponds to the monopole considered by Dirac [34]. The second one in Eq. (15) is equal to

$\displaystyle\nabla\times{\bf B}^{\rm em}$ $\displaystyle=$ $\displaystyle{{4\pi}\over c}{\bf j}$ (21)

since $\nabla\times{\bf B}^{\rm fic}=0$ is satisfied, as shown below. It is well-known that $\nabla\chi$ in ${\bf A}^{\rm fic}$ can be decomposed as

$\displaystyle\nabla\chi=\nabla\chi_{0}+\nabla f,\quad\nabla^{2}\chi_{0}=0$ (22)

where $f$ is single-valued and $\chi_{0}$ may be multi-valued. Thus, we have

$\displaystyle\nabla\times{\bf B}^{\rm fic}=\nabla\times(\nabla\times\nabla\chi_{0})=\nabla(\nabla^{2}\chi_{0})-\nabla^{2}\nabla\chi_{0}=0$ (23)

As a consequence, Eq. (15) is reduced to the original one in Eq. (21).

## III Modification of the magnetic field energy in the Ginzburg-Landau theory due to the presence of the Berry connection from many-body wave functions

The Ginzburg-Landau theory [29] is based on the London theory [16]. In the London theory, the velocity field for electrons in superconductors is given by

$\displaystyle{\bf v}=-{q\over{mc}}\left({\bf A}^{\rm em}-{{c\hbar}\over q}\nabla\chi^{\rm super}\right)$ (24)

where $\chi^{\rm super}$ is the superpotential assumed to exist in superconductors. The Ginzburg-Landau theory uses a free energy that consists of the material part and the magnetic field part. It assumes the presence of the effective wave function of superconducting electrons $\Psi_{\rm GL}$ in the superconducting phase.
Using $\Psi_{\rm GL}$, the material part of the free energy for a superconductor is given by

$\displaystyle F_{\rm mat}=F_{\rm normal}+\int d^{3}r{1\over{2m}}\left|\left({\hbar\over i}\nabla-{q\over c}{\bf A}^{\rm em}\right)\Psi_{\rm GL}\right|^{2}+\int d^{3}r\left(\alpha|\Psi_{\rm GL}|^{2}+{\beta\over 2}|\Psi_{\rm GL}|^{4}\right)$ (25)

where $\alpha<0$, $\beta>0$ are parameters. We can express $\Psi_{\rm GL}$ using the supercurrent carrier density $n_{s}$ and the superpotential $\chi^{\rm super}$ as

$\displaystyle\Psi_{\rm GL}=n_{s}^{1/2}e^{i\chi^{\rm super}}$ (26)

Then, the kinetic term becomes

$\displaystyle\int d^{3}r{1\over{2m}}\left|\left({\hbar\over i}\nabla-{q\over c}{\bf A}^{\rm em}\right)\Psi_{\rm GL}\right|^{2}=F_{k}+\int d^{3}r{{\hbar^{2}(\nabla n_{s})^{2}}\over{8m\ n_{s}}}$ (27)

where the supercurrent kinetic energy is given by

$\displaystyle F_{k}=\int d^{3}r{m\over{2}}n_{s}{\bf v}^{2}=\int d^{3}r{{q^{2}n_{s}}\over{2mc^{2}}}\left({\bf A}^{\rm eff}\right)^{2}$ (28)

Here, ${\bf A}^{\rm eff}$ in Eq. (9) is used by identifying

$\displaystyle{\bf A}^{\rm fic}=-{{c\hbar}\over q}\nabla\chi^{\rm super}={{c\hbar}\over{2e}}\nabla\chi,$ (29)

assuming that $\chi^{\rm super}$ arises from the Berry connection. In the standard theory, the Ginzburg-Landau theory is derived from the BCS theory, yielding $q=-2e$ [35]. In this case the mass of the charge carriers becomes $m=2m^{\ast}$; thus, ${m\over q}$ in the London moment becomes $-{m^{\ast}\over e}$, which disagrees with the experimental value $-{m_{e}\over e}$. Thus, the accepted derivation of the Ginzburg-Landau theory from the standard theory is incorrect. We will use $q=-e$ here, and identify

$\displaystyle\nabla\chi^{\rm super}=\nabla\chi$ (30)

with $m=m_{e}$. This $m=m_{e}$ can be explained in the new theory [15]. Using $F_{k}$, the current density ${\bf j}$ is calculated as

$\displaystyle{\bf j}=-c{{\delta F_{k}}\over{\delta{{\bf A}^{\rm em}}}}=-{{e^{2}n_{s}}\over{m_{e}c}}{\bf A}^{\rm eff}$ (31)

Since

$\displaystyle{\bf j}=-en_{s}{\bf v}$ (32)

Eq. (31) is equivalent to the London equation in Eq. (24). Actually, the above relation should be regarded as the definition of $n_{s}$ through ${\bf j}$ and ${\bf v}$. Using $q=-e$ and $m=m_{e}$, the velocity field ${\bf v}$ is given by

$\displaystyle{\bf v}={e\over{m_{e}c}}\left({\bf A}^{\rm em}+{{c\hbar}\over e}\nabla\chi^{\rm super}\right)={e\over{m_{e}c}}\left({\bf A}^{\rm em}+{{c\hbar}\over e}\nabla\chi\right)={e\over{m_{e}c}}{\bf A}^{\rm eff}$ (33)

Now, let us consider the magnetic field part. It is given by

$\displaystyle F_{m}=\int d^{3}r{1\over{8\pi}}\left({\bf B}^{\rm eff}\right)^{2}$ (34)

using ${\bf B}^{\rm eff}$ in place of ${\bf B}^{\rm em}$. This is different from the one employed in the original GL work. The stationary condition of $F_{k}+F_{m}$ with respect to the variation of ${\bf A}^{\rm fic}$ yields

$\displaystyle-{1\over c}{\bf j}+{1\over{4\pi}}\nabla\times{\bf B}^{\rm eff}=-{1\over c}{\bf j}+{1\over{4\pi}}\nabla\times{\bf B}^{\rm em}=0$ (35)

where Eq. (23) is used. This is one of the Maxwell's equations. Using Eq. (31) and neglecting the spatial variation of $n_{s}$, we have

$\displaystyle\nabla\times{\bf j}=-{{n_{s}e^{2}}\over{m_{e}}}\left[\nabla\times{\bf A}^{\rm em}+\nabla\times{\bf A}^{\rm fic}\right]=-{{n_{s}e^{2}}\over{m_{e}}}\left[{\bf B}^{\rm em}+\nabla\times{\bf A}^{\rm fic}\right]$ (36)

From Eq.
From Eq. (35), the following relation is obtained: $\displaystyle\nabla\times{\bf j}={{c}\over{4\pi}}\nabla\times(\nabla\times{\bf B}^{\rm em})=-{{c}\over{4\pi}}\nabla^{2}{\bf B}^{\rm em}$ (37) Here, $\nabla\cdot{\bf B}^{\rm em}=0$ is assumed. Combining Eqs. (36) and (37), the following is obtained: $\displaystyle\nabla^{2}{\bf B}^{\rm em}-{{1}\over{\lambda^{2}}}{\bf B}^{\rm em}={{1}\over{\lambda^{2}}}\nabla\times{\bf A}^{\rm fic}={{1}\over{\lambda^{2}}}{\bf B}^{\rm fic}$ (38) where $\lambda$ is the London penetration depth $\displaystyle\lambda=\sqrt{{m_{e}c^{2}}\over{4\pi n_{s}e^{2}}}$ (39) Now we consider the loop current formation by following Abrikosov [30]. The characteristic length scale for the spatial variation of $n_{s}$ in the Ginzburg-Landau theory is $\displaystyle\xi_{\rm GL}=\sqrt{\hbar^{2}\over{2m_{e}|\alpha|}}$ (40) Abrikosov argued that if $\lambda\gg\xi_{\rm GL}$ is satisfied and the singularity of $\nabla\chi$ is along the $z$-axis, Eq. (38) can be approximated as $\displaystyle\nabla^{2}{\bf B}^{\rm em}-{{1}\over{\lambda^{2}}}{\bf B}^{\rm em}={{1}\over{\lambda^{2}}}\Phi^{\rm fic}{\bf e}_{z}\delta^{(2)}({\bf r})$ (41) in the region away from the core, where $\Phi^{\rm fic}$ is given by $\displaystyle\Phi^{\rm fic}=\int_{S}{\bf B}^{\rm fic}\cdot d{\bf S}=\int_{C}{\bf A}^{\rm fic}\cdot d{\bf r}={{c\hbar}\over{2e}}\int_{C}\nabla\chi\cdot d{\bf r}$ (42) and $\delta^{(2)}({\bf r})$ is the two-dimensional delta function with singularities along the $z$-axis. The solution is known to be ${\bf B}^{\rm em}={B}^{\rm em}(\rho){\bf e}_{z}$, where ${B}^{\rm em}(\rho)$ is given by $\displaystyle{B}^{\rm em}(\rho)=-{{\Phi^{\rm fic}}\over{2\pi\lambda^{2}}}K_{0}(\rho/\lambda)$ (43) Here $K_{0}$ is the modified Bessel function of the second kind, and $\rho$ is the distance from the $z$-axis. In the BCS theory, a different coherence length, $\displaystyle\xi_{\rm BCS}={{\hbar v_{\rm Fermi}}\over{\pi\Delta}}$ (44) is defined, where $v_{\rm Fermi}$ is the velocity of the electron at the Fermi energy. It is known that $\xi_{\rm GL}$ and $\xi_{\rm BCS}$ are similar in size at very low temperatures for BCS superconductors. However, $\xi_{\rm BCS}$ is regarded as the size of the Cooper pair, in contrast to $\xi_{\rm GL}$, which is the size of the core of the loop current. In our previous work, it has been argued that the Cooper pair formation is accompanied by the loop current formation that encircles a section of the Fermi surface. This loop current is stabilized by a Rashba spin-orbit interaction, and gives rise to ${\bf A}^{\rm fic}$ given in Eq. (10) [33, 32]. We can relate $\xi_{\rm BCS}$ to the core size of this loop current. First, we associate $\xi_{\rm BCS}$ with the wave number $q_{c}$ whose excitation energy is equal to the gap energy $\Delta$, $\displaystyle\Delta=\hbar q_{c}v_{\rm Fermi}$ (45) If we identify $\displaystyle\xi_{\rm BCS}={1\over{\pi q_{c}}}$ (46) we obtain Eq. (44). Thus, $\xi_{\rm BCS}$ is an estimate of the size of the loop current whose excitation energy is equal to the gap energy.
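For a rough sense of the scales involved, the following sketch (ours, with illustrative parameter values that are not taken from the paper) evaluates the SI form of the penetration depth in Eq. (39), the vortex profile of Eq. (43), and $\xi_{\rm BCS}$ of Eq. (44):

```python
import numpy as np
from scipy.constants import m_e, e, mu_0, hbar
from scipy.special import k0

# London penetration depth: the SI equivalent of Eq. (39) is
# lambda = sqrt(m_e / (mu_0 n_s e^2)). n_s is an assumed carrier density.
n_s = 1e28  # m^-3, illustrative
lam = np.sqrt(m_e / (mu_0 * n_s * e**2))
print(f"lambda ~ {lam * 1e9:.0f} nm")  # ~53 nm

# Vortex field profile of Eq. (43): B(rho) proportional to K0(rho/lambda),
# up to the prefactor -Phi_fic / (2 pi lambda^2).
rho = np.linspace(0.1, 5.0, 50) * lam
profile = k0(rho / lam)

# BCS coherence length of Eq. (44), for assumed v_Fermi and gap Delta.
v_F = 1.0e6          # m/s, illustrative
Delta = 1.0e-3 * e   # 1 meV in joules, illustrative
xi_BCS = hbar * v_F / (np.pi * Delta)
print(f"xi_BCS ~ {xi_BCS * 1e9:.0f} nm")  # ~210 nm
```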
The presence of the loop current due to ${\bf A}^{\rm fic}$ is plausible from another experimental fact. It explains the reversible phase transitions in a magnetic field [9]; the reversible energy transfer between the kinetic energy of the supercurrent and the magnetic field energy is explained as due to the abrupt change of loop currents associated with the change of the winding numbers for $\chi$ in ${\bf A}^{\rm fic}$. Let us see this point below. By taking into account only the change of ${\bf A}^{\rm fic}$ during the phase transition in the time interval $\Delta t$, the change of the kinetic energy is given by $\displaystyle\Delta F_{k}$ $\displaystyle=$ $\displaystyle\int d^{3}r{{e^{2}n_{s}}\over{m_{e}c^{2}}}{\bf A}^{\rm eff}\cdot\int_{t}^{t+\Delta t}\partial_{t}{\bf A}^{\rm fic}dt$ (47) $\displaystyle=$ $\displaystyle-{1\over c}\int d^{3}r{\bf j}\cdot\int_{t}^{t+\Delta t}\partial_{t}{\bf A}^{\rm fic}dt$ and the change of the magnetic field energy is given by $\displaystyle\Delta F_{m}$ $\displaystyle=$ $\displaystyle\int d^{3}r{1\over{4\pi}}{\bf B}^{\rm eff}\cdot\int_{t}^{t+\Delta t}\partial_{t}{\bf B}^{\rm fic}dt$ (48) $\displaystyle=$ $\displaystyle\int d^{3}r{1\over{4\pi}}\nabla\times{\bf B}^{\rm eff}\cdot\int_{t}^{t+\Delta t}\partial_{t}{\bf A}^{\rm fic}dt$ $\displaystyle=$ $\displaystyle{1\over c}\int d^{3}r{\bf j}\cdot\int_{t}^{t+\Delta t}\partial_{t}{\bf A}^{\rm fic}dt$ Thus, the energy conservation, $\Delta F_{m}+\Delta F_{k}=0$, is satisfied [9]. The situation considered above may be smoothly connected to the ${\bf B}^{\rm em}=0$ case. If this is the case, similar loop currents from ${\bf A}^{\rm fic}$, arranged so that the macroscopic current is zero, should exist even for the ${\bf B}^{\rm em}=0$ case. Then, the phase transition between the superconducting and normal phases is accompanied by the creation and annihilation of such loop currents, as in the ${\bf B}^{\rm em}\neq 0$ case. Let us consider the energy balance at the transition point for the ${\bf B}^{\rm em}=0$ case mentioned above. In this case ${\bf A}^{\rm em}$ can be taken as a pure gauge, and we assume that it cancels ${\bf A}^{\rm fic}$ except in the core region of size $\xi$. For simplicity, we assume that singularities of $\chi$ form vortices along the $z$-direction. Let us pick one of them, and take the $z$-axis along it. Then, we have $\displaystyle n_{s}=n_{0}e^{-{{2\rho}\over{\xi}}}$ (49) near the vortex, where $n_{0}$ is a constant. The sum of the energies from the spatial variation of $n_{s}$ and the term linear in $n_{s}$ in Eqs. (25) and (27), given by $\displaystyle\int d^{3}r{{\hbar^{2}(\nabla n_{s})^{2}}\over{8m_{e}\ n_{s}}}+\int d^{3}r\alpha|\Psi_{\rm GL}|^{2}=\int d^{3}r{{\hbar^{2}(\nabla n_{s})^{2}}\over{8m_{e}\ n_{s}}}+\int d^{3}r\alpha n_{s}$ (50) becomes zero when $\xi$ satisfies the relation $\displaystyle\xi=\xi_{\rm GL}$ (51) If $\xi>\xi_{\rm GL}$, the above sum becomes negative, indicating that vortex formation may be possible if the energy gain from it is more than the energy deficit due to the core formation. In other words, if the energy deficit from the creation of vortex cores is compensated by the energy gain obtained by the generation of the non-trivial Berry connection, the loop current generation by the Berry connection will be realized. In the new theory, the energy gain by the electron-pair formation requires the presence of the number changing operators that arise from the non-trivial Berry connection ${\bf A}^{\rm fic}$. Thus, $\xi_{\rm BCS}$ given in Eq. (44) can be regarded as an estimate for the minimum size of the vortex core of the loop current generated by ${\bf A}^{\rm fic}$ that exists even for the ${\bf B}^{\rm em}=0$ case, which corresponds to $\xi_{\rm GL}$.
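The condition (51) can be verified symbolically. The following sketch (ours, not from the paper) shows that the integrand of Eq. (50) vanishes pointwise at $\xi=\xi_{\rm GL}$, writing $a=|\alpha|$ since $\alpha<0$:

```python
import sympy as sp

rho, xi, n0, hb, me, a = sp.symbols('rho xi n_0 hbar m_e a', positive=True)
n_s = n0 * sp.exp(-2 * rho / xi)  # Eq. (49)

# Integrand of Eq. (50), with alpha = -a (a = |alpha| > 0):
density = hb**2 * sp.diff(n_s, rho)**2 / (8 * me * n_s) - a * n_s

print(sp.simplify(density))              # -> n_s * (hbar**2/(2*m_e*xi**2) - a)
print(sp.solve(sp.Eq(density, 0), xi))   # xi = hbar/sqrt(2*m_e*a), i.e. Eq. (40)
```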
## IV Supercurrent flow

Let us compare the supercurrent in the new theory and that in the standard theory. In the standard theory, the supercurrent is a flow of Cooper pairs. In the original BCS theory, the normal metallic state is assumed to be well described by free electrons with the effective mass $m^{\ast}$. Then, the electron field operators are given by $\displaystyle\hat{\Psi}_{\sigma}({\bf r})={1\over\sqrt{V}}\sum_{\bf k}e^{i{\bf k}\cdot{\bf r}}c_{{\bf k}\sigma}$ (52) The electron-pairing amplitude is expressed as $\displaystyle\langle\hat{\Psi}_{\downarrow}({\bf r}_{2})\hat{\Psi}_{\uparrow}({\bf r}_{1})\rangle=\sum_{\bf q}e^{i{\bf q}\cdot{\bf R}}\Delta_{\bf q}({\bf r})$ (53) where $\displaystyle\Delta_{\bf q}({\bf r})=\sum_{{\bf p}}^{\prime}e^{i{\bf p}\cdot{\bf r}}\langle c_{-{\bf p}+{{\bf q}\over 2}\downarrow}c_{{\bf p}+{{\bf q}\over 2}\uparrow}\rangle$ (54) is the electron-pairing amplitude with momentum ${\bf q}$, ${\bf r}={\bf r}_{1}-{\bf r}_{2}$ is the relative position vector between the pairing electrons, ${\bf R}={1\over 2}({\bf r}_{1}+{\bf r}_{2})$ is the center-of-mass position vector of the pairing electrons, and the sum is over ${\bf p}$ near the Fermi level where the attractive interaction between electrons exists. Using $\Delta_{\bf q}({\bf r})$, the macroscopic wave function in the standard theory is given by $\displaystyle\Psi_{\rm GL}({\bf R})=gC\sum_{{\bf q}}e^{i{\bf q}\cdot{\bf R}}\Delta_{\bf q}(0)$ (55) where $g$ is the parameter for the attractive electron-electron interaction, and $C$ is a constant. Considering the change of the phase by the gauge transformation for $\langle c_{-{\bf p}+{{\bf q}\over 2}\downarrow}c_{{\bf p}+{{\bf q}\over 2}\uparrow}\rangle$, $\nabla_{\bf R}\Psi_{\rm GL}({\bf R})$ should be modified in the presence of the magnetic field as $\displaystyle\nabla_{\bf R}\Psi_{\rm GL}({\bf R})\rightarrow\left[\nabla_{\bf R}+{{2ei}\over{\hbar c}}{\bf A}^{\rm em}\right]\Psi_{\rm GL}({\bf R})$ (56) This yields the kinetic energy term in Eq. (27). Since the kinetic mass for the electrons in the Cooper pair is the effective mass $m^{\ast}$, $m$ in it will be $m=2m^{\ast}$, where the factor $2$ comes from the electron pair. As mentioned before, this $m=2m^{\ast}$ contradicts the experimental result, which indicates $m=2m_{e}$ when $q=-2e$ is adopted, where $m_{e}$ is the free electron mass. Now we consider the same problem using the new theory. This problem was dealt with in our recent publication [15], but for the convenience of the later discussion, we reproduce it succinctly in the following. We allow coordinate-dependent functions that are different from plane waves as the basis for the electron field operators in Eq. (52). We use the label $n$ in place of the wave number ${\bf k}$. Then, the field operators become $\displaystyle\hat{\Psi}_{\uparrow}({\bf r})$ $\displaystyle=$ $\displaystyle\sum_{n}e^{{i\over 2}\hat{\chi}({\bf r})}\left(\gamma_{{n}\uparrow}u_{n}({\bf r})-\gamma^{\dagger}_{{n}\downarrow}v^{\ast}_{n}({\bf r})\right)$ $\displaystyle\hat{\Psi}_{\downarrow}({\bf r})$ $\displaystyle=$ $\displaystyle\sum_{n}e^{{i\over 2}\hat{\chi}({\bf r})}\left(\gamma_{{n}\downarrow}u_{n}({\bf r})+\gamma^{\dagger}_{{n}\uparrow}v^{\ast}_{n}({\bf r})\right)$ (57) The particle number conserving Bogoliubov operators satisfy $\displaystyle\gamma_{n\sigma}|{\rm Gnd}(N)\rangle=0$ (58) where $N$ is the total number of particles.
Under the operation of $e^{{i\over 2}\hat{\chi}({\bf r})}$, the ground state satisfies $\displaystyle e^{{i\over 2}\hat{\chi}({\bf r})}|{\rm Gnd}(N)\rangle=e^{{i\over 2}{\chi}({\bf r})}|{\rm Gnd}(N-1)\rangle$ (59) The operator $e^{{i\over 2}\hat{\chi}({\bf r})}$ is the number changing operator that removes one electron from the collective mode at the position ${\bf r}$. This also means that the Bogoliubov operators in Eq. (57) conserve the particle number. We call them “the particle-number conserving Bogoliubov operators”. They describe the transfer of electrons between the collective mode described by $\chi$ and the single-particle mode (this point may be seen in Eq. (73) in the next section). The phase factor $e^{{i\over 2}{\chi}({\bf r})}$ arises due to the presence of the Berry connection. The geometric connection is embedded in the superconducting state as $\displaystyle\langle e^{{i\over 2}\hat{\chi}({\bf r}_{2})}e^{-{i\over 2}\hat{\chi}({\bf r}_{1})}\rangle=e^{{i\over 2}\int_{{\bf r}_{1}}^{{\bf r}_{2}}\nabla\chi({\bf r})\cdot d{\bf r}}$ (60) which describes the phase factor appearing when an electron is added at ${\bf r}_{1}$ to the collective mode and removed at ${\bf r}_{2}$ from the collective mode. Now, let us consider the electronic Hamiltonian expressed as $\displaystyle H=\sum_{\sigma}\int d^{3}r\hat{\Psi}^{\dagger}_{\sigma}({\bf r})h({\bf r})\hat{\Psi}_{\sigma}({\bf r})-{1\over 2}\sum_{\sigma,\sigma^{\prime}}\int d^{3}rd^{3}r^{\prime}V_{\rm eff}({\bf r},{\bf r}^{\prime})\hat{\Psi}^{\dagger}_{\sigma}({\bf r})\hat{\Psi}^{\dagger}_{\sigma^{\prime}}({\bf r}^{\prime})\hat{\Psi}_{\sigma^{\prime}}({\bf r}^{\prime})\hat{\Psi}_{\sigma}({\bf r})$ (61) where $-V_{\rm eff}$ is the effective interaction between electrons, and $h({\bf r})$ is the single-particle Hamiltonian given by $\displaystyle h({\bf r})={1\over{2m_{e}}}\left({\hbar\over i}\nabla+{e\over c}{\bf A}^{\rm em}\right)^{2}+U({\bf r})-\mu$ (62) with $U({\bf r})$ being a potential energy and $\mu$ the chemical potential. We perform the mean field approximation $\displaystyle H^{\rm MF}$ $\displaystyle=$ $\displaystyle\sum_{\sigma}\int d^{3}r\hat{\Psi}^{\dagger}_{\sigma}({\bf r})h({\bf r})\hat{\Psi}_{\sigma}({\bf r})+\int d^{3}rd^{3}r^{\prime}\left[\Delta({\bf r},{\bf r}^{\prime})\hat{\Psi}^{\dagger}_{\uparrow}({\bf r})\hat{\Psi}^{\dagger}_{\downarrow}({\bf r}^{\prime})e^{{i\over 2}(\hat{\chi}({\bf r})+\hat{\chi}({\bf r}^{\prime}))}+{\rm H.c.}\right]$ $\displaystyle+$ $\displaystyle\int d^{3}rd^{3}r^{\prime}{{|\Delta({\bf r},{\bf r}^{\prime})|^{2}}\over{V_{\rm eff}({\bf r},{\bf r}^{\prime})}}$ (63) where the gap function $\Delta({\bf r},{\bf r}^{\prime})$ is defined as $\displaystyle\Delta({\bf r},{\bf r}^{\prime})=V_{\rm eff}({\bf r},{\bf r}^{\prime})\langle e^{-{i\over 2}(\hat{\chi}({\bf r})+\hat{\chi}({\bf r}^{\prime}))}\hat{\Psi}_{\uparrow}({\bf r})\hat{\Psi}_{\downarrow}({\bf r^{\prime}})\rangle$ (64) Due to the factor $e^{-{i\over 2}(\hat{\chi}({\bf r})+\hat{\chi}({\bf r}^{\prime}))}$ that increases the number of electrons by two, the expectation value is calculated using a particle-number-fixed state, in contrast to the standard theory.
Using the relation in Eq. (59) and requiring $H^{\rm MF}$ to become $\displaystyle H^{\rm MF}=\sum_{n,\sigma}\epsilon_{n}\gamma_{n\sigma}^{\dagger}\gamma_{n\sigma}+E_{\rm const}$ (65) where $E_{\rm const}$ is a constant, the above is cast into the following system of equations: $\displaystyle\epsilon_{n}u_{n}({\bf r})$ $\displaystyle=$ $\displaystyle\bar{h}({\bf r})u_{n}({\bf r})+\int d^{3}r^{\prime}\Delta({\bf r},{\bf r}^{\prime})v_{n}({\bf r}^{\prime})$ $\displaystyle\epsilon_{n}v_{n}({\bf r})$ $\displaystyle=$ $\displaystyle-\bar{h}^{\ast}({\bf r})v_{n}({\bf r})+\int d^{3}r^{\prime}\Delta^{\ast}({\bf r},{\bf r}^{\prime})u_{n}({\bf r}^{\prime})$ (66) where $\displaystyle\bar{h}({\bf r})={1\over{2m_{e}}}\left({\hbar\over i}\nabla+{e\over c}{\bf A}^{\rm em}+{\hbar\over 2}\nabla\chi\right)^{2}+U({\bf r})-\mu$ (67) and $\displaystyle\Delta({\bf r},{\bf r}^{\prime})=V_{\rm eff}({\bf r},{\bf r}^{\prime})\sum_{n}\left[u_{n}({\bf r})v^{\ast}_{n}({\bf r}^{\prime})(1-f(\epsilon_{n}))-u_{n}({\bf r}^{\prime})v^{\ast}_{n}({\bf r})f(\epsilon_{n})\right]$ (68) with $f(\epsilon_{n})$ being the Fermi function at temperature $T$ ($k_{B}$ is the Boltzmann constant), given by $\displaystyle f(\epsilon_{n})=(e^{{\epsilon_{n}}\over{k_{B}T}}+1)^{-1}$ (69) These are the Bogoliubov-de Gennes equations [36] using the particle number conserving Bogoliubov operators [25]. Note that the gauge potential in the single-particle Hamiltonian $\bar{h}({\bf r})$ is the effective one, $\displaystyle{\bf A}^{\rm eff}={\bf A}^{\rm em}+{{\hbar c}\over{2e}}\nabla\chi$ (70) If we solve the system of equations composed of Eqs. (66), (67), and (68) with the condition ${\bf A}^{\rm em}+{{\hbar c}\over{2e}}\nabla\chi=0$, we obtain the currentless solutions for $u_{n},v_{n}$, which we denote as $\tilde{u}_{n},\tilde{v}_{n}$. We may construct the solution using $\tilde{u}_{n},\tilde{v}_{n}$ as $\displaystyle u_{n}({\bf r})=\tilde{u}_{n}({\bf r})e^{-{i\over 2}{\chi}({\bf r})},\quad v_{n}({\bf r})=\tilde{v}_{n}({\bf r})e^{{i\over 2}{\chi}({\bf r})}$ (71) with suitably chosen $\nabla\chi$. Actually, $\nabla\chi$ is obtained from the requirement of the conservation of local charge and the single-valuedness of the wave function with respect to electron coordinates [32]. When the solutions are obtained by the above method, we have the velocity field given in Eq. (33), since the velocity field from $\tilde{u}_{n},\tilde{v}_{n}$ is zero. The supercurrent is due to this velocity field that contains non-trivial $\nabla\chi$. In this case, $m$ is equal to $m_{e}$ when $q=-e$ is employed, which agrees with the experiment.
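As a minimal numerical illustration of the structure of Eqs. (66)-(68) (a sketch of ours, not from the paper), one can diagonalize the Bogoliubov-de Gennes matrix for a one-dimensional tight-binding chain with a uniform gap in the currentless case ${\bf A}^{\rm eff}=0$; all parameter values below are assumptions:

```python
import numpy as np

# Discrete analogue of the currentless BdG problem: a 1D open chain with
# hopping t, chemical potential mu, and a uniform s-wave gap Delta.
N, t, mu, Delta = 100, 1.0, 0.5, 0.2

# Single-particle Hamiltonian h (real symmetric here)
h = -t * (np.eye(N, k=1) + np.eye(N, k=-1)) - mu * np.eye(N)

# BdG matrix: [[h, Delta], [Delta^*, -h^*]]
D = Delta * np.eye(N)
HBdG = np.block([[h, D], [D.conj().T, -h.conj()]])

eps, W = np.linalg.eigh(HBdG)
u = W[:N, :]  # electron components u_n
v = W[N:, :]  # hole components v_n

# For mu inside the band, the spectrum is gapped by ~Delta
# (up to finite-size effects).
print("smallest positive eigenvalue:", eps[eps > 0].min())
```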
## V Andreev–Saint-James reflection

In the Andreev–Saint-James reflection, the phase factor of the pair potential plays the central role. We examine how this phase factor arises from the Berry connection of the new theory. Let us consider the hopping Hamiltonian between the normal state and the superconducting state equipped with the Berry connection, $\displaystyle H_{\rm Super-Metal}=-t\sum_{{\bf k},\sigma}\left(c^{\dagger}_{i\sigma}M_{{\bf k}\sigma}+M^{\dagger}_{{\bf k}\sigma}c_{i\sigma}\right)$ (72) where $M_{{\bf k}\sigma}^{\dagger}$ and $M_{{\bf k}\sigma}$ are creation and annihilation operators for the electrons in the normal metal part, respectively; $c_{i\sigma}^{\dagger}$ and $c_{i\sigma}$ are creation and annihilation operators for the electron in the superconductor part at the interface site $i$, respectively. Using the number-changing operators and the Bogoliubov operators, the annihilation and creation operators for the electrons in the superconducting state are given by $\displaystyle c_{i\sigma}$ $\displaystyle=$ $\displaystyle\sum_{n}[u^{n}_{i}\gamma_{n\sigma}-\sigma(v^{n}_{i})^{\ast}\gamma_{n-\sigma}^{\dagger}]e^{{i\over 2}\hat{\chi}_{i}}$ $\displaystyle c^{\dagger}_{i\sigma}$ $\displaystyle=$ $\displaystyle\sum_{n}[(u^{n}_{i})^{\ast}\gamma^{\dagger}_{n\sigma}-\sigma v^{n}_{i}\gamma_{n-\sigma}]e^{-{i\over 2}\hat{\chi}_{i}}$ (73) where $\sigma=1$ for the up-spin state and $\sigma=-1$ for the down-spin state. Cooper-pair annihilation and creation operators are given by $\displaystyle c_{i\uparrow}c_{i\downarrow}$ $\displaystyle=$ $\displaystyle\sum_{n,n^{\prime}}[u^{n}_{i}\gamma_{n\uparrow}-(v^{n}_{i})^{\ast}\gamma_{n\downarrow}^{\dagger}]e^{{i\over 2}\hat{\chi}_{i}}[u^{n^{\prime}}_{i}\gamma_{n^{\prime}\downarrow}+(v^{n^{\prime}}_{i})^{\ast}\gamma_{n^{\prime}\uparrow}^{\dagger}]e^{{i\over 2}\hat{\chi}_{i}}$ $\displaystyle\approx$ $\displaystyle\sum_{n}u^{n}_{i}(v^{n}_{i})^{\ast}\left[\langle\gamma_{n\uparrow}\gamma_{n\uparrow}^{\dagger}\rangle-\langle\gamma_{n\downarrow}^{\dagger}\gamma_{n\downarrow}\rangle\right]e^{{i}\hat{\chi}_{i}}$ $\displaystyle=$ $\displaystyle e^{{i}\hat{\chi}_{i}}\sum_{n}u^{n}_{i}(v^{n}_{i})^{\ast}[1-2f(\epsilon_{n})]$ $\displaystyle c^{\dagger}_{i\downarrow}c^{\dagger}_{i\uparrow}$ $\displaystyle\approx$ $\displaystyle e^{-{i}\hat{\chi}_{i}}\sum_{n}(u^{n}_{i})^{\ast}v^{n}_{i}[1-2f(\epsilon_{n})]$ (74) where the products of Bogoliubov operators are replaced by their expectation values. The above shows that Cooper pairs can move without Bogoliubov excitations. This feature gives the impression that the supercurrent is a flow of Cooper pairs in the standard theory. Now, let us derive the term for the Andreev reflection by applying second order perturbation theory using $H_{\rm Super-Metal}$, $\displaystyle\langle H_{\rm Super-Metal}{1\over{E_{0}-H_{0}}}H_{\rm Super-Metal}\rangle_{\rm Bog}$ (75) $\displaystyle\approx$ $\displaystyle t^{2}\left\langle\sum_{{\bf k},{\bf k}^{\prime},\sigma,\sigma^{\prime}}\left[c^{\dagger}_{i\sigma}M_{{\bf k}\sigma}{1\over{E_{0}-H_{0}}}c^{\dagger}_{i\sigma^{\prime}}M_{{\bf k}^{\prime}\sigma^{\prime}}+M^{\dagger}_{{\bf k}\sigma}c_{i\sigma}{1\over{E_{0}-H_{0}}}M^{\dagger}_{{\bf k}^{\prime},\sigma^{\prime}}c_{i\sigma^{\prime}}\right]\right\rangle_{\rm Bog}$ $\displaystyle\approx$ $\displaystyle-\sum_{{\bf k},{\bf k}^{\prime},n}{{2t^{2}}\over{\epsilon_{n}}}\left[(u^{n}_{i})^{\ast}v^{n}_{i}M_{{\bf k}\uparrow}M_{{\bf k}^{\prime}\downarrow}e^{-{i}\hat{\chi}_{i}}+(v^{n}_{i})^{\ast}u^{n}_{i}M^{\dagger}_{{\bf k}^{\prime}\downarrow}M^{\dagger}_{{\bf k}\uparrow}e^{{i}\hat{\chi}_{i}}\right]$ where $\langle\cdots\rangle_{\rm Bog}$ denotes that the expectation value is calculated for the products of Bogoliubov operators. The above effective Hamiltonian indicates that when an electron in the normal state is reflected back as a hole, the phase factor $e^{-{i}{\chi}_{i}}$ is attached, and when a hole in the normal state is reflected back as an electron, the phase factor $e^{{i}{\chi}_{i}}$ is attached; these are salient features of the Andreev reflection. In the following, we outline how the Andreev equations are derived from the particle-number conserving Bogoliubov-de Gennes method, following Ref. [37].
From Eq. (66), using Eq. (71), the particle-number conserving Bogoliubov-de Gennes equations are expressed as $\displaystyle\epsilon_{n}\tilde{u}_{n}({\bf r})$ $\displaystyle=$ $\displaystyle{h}({\bf r})\tilde{u}_{n}({\bf r})+\int d^{3}r^{\prime}e^{{i\over 2}(\chi({\bf r})+\chi({\bf r}^{\prime}))}\Delta({\bf r},{\bf r}^{\prime})\tilde{v}_{n}({\bf r}^{\prime})$ $\displaystyle\epsilon_{n}\tilde{v}_{n}({\bf r})$ $\displaystyle=$ $\displaystyle-{h}^{\ast}({\bf r})\tilde{v}_{n}({\bf r})+\int d^{3}r^{\prime}e^{-{i\over 2}(\chi({\bf r})+\chi({\bf r}^{\prime}))}\Delta^{\ast}({\bf r},{\bf r}^{\prime})\tilde{u}_{n}({\bf r}^{\prime})$ (76) with $h({\bf r})$ as the single-particle Hamiltonian instead of $\bar{h}({\bf r})$. This single-particle Hamiltonian can be used throughout the system. It is important to have the same single-particle Hamiltonian in both the normal metallic and superconducting parts. Then, we may regard $\displaystyle\bar{\Delta}\left({{{\bf r}+{\bf r}^{\prime}}\over 2},{\bf r}-{\bf r}^{\prime}\right)=e^{{i\over 2}(\chi({\bf r})+\chi({\bf r}^{\prime}))}\Delta({\bf r},{\bf r}^{\prime})$ (77) as the pair potential for the Andreev reflection. Due to the phase factor $e^{{i\over 2}(\chi({\bf r})+\chi({\bf r}^{\prime}))}$ in $\bar{\Delta}$, it describes the Andreev reflection in which an electron is reflected back as a hole with the phase factor $e^{-{i}{\chi}_{i}}$, and a hole is reflected back as an electron with the phase factor $e^{{i}{\chi}_{i}}$, where $i$ is the position of the reflection. By taking the Fourier transformation with respect to the relative coordinate ${\bf s}={\bf r}-{\bf r}^{\prime}$, and denoting the center-of-mass coordinate as ${\bf r}$, we have $\displaystyle\bar{\Delta}({\bf r},{\bf k})=\int d^{3}s\bar{\Delta}({\bf r},{\bf s})e^{-i{\bf k}\cdot{\bf s}}$ (78) In the weak coupling case, only the wave vectors very close to the Fermi surface are important. Then, the ${\bf k}$ dependence of $\bar{\Delta}({\bf r},{\bf k})$ comes only from the direction of the Fermi wave vector ${\bf k}_{F}$, or the unit vector on the Fermi surface ${\bf e}_{{\bf k}_{F}}$. Thus, we may write $\displaystyle\bar{\Delta}_{{\bf e}_{{\bf k}_{F}}}({\bf r})\approx\bar{\Delta}({\bf r},{\bf k}_{F})$ (79) We also separate the fast Fermi wave vector oscillation from $\tilde{u}_{n}({\bf r}),\tilde{v}_{n}({\bf r})$ as $\displaystyle\bar{u}_{n}({\bf r})=e^{-i{\bf k}_{F}\cdot{\bf r}}\tilde{u}_{n}({\bf r}),\quad\bar{v}_{n}({\bf r})=e^{-i{\bf k}_{F}\cdot{\bf r}}\tilde{v}_{n}({\bf r})$ (80) for convenience, since we consider the case where $\xi_{\rm BCS}\gg k_{F}^{-1}$ is satisfied. As a result, Eq. (76) becomes the following Andreev equations: $\displaystyle\epsilon_{n}\bar{u}_{n}({\bf r})$ $\displaystyle=$ $\displaystyle-i{\hbar^{2}\over m_{e}}{\bf k}_{F}\cdot\nabla\bar{u}_{n}({\bf r})+\bar{\Delta}_{{\bf e}_{{\bf k}_{F}}}({\bf r})\bar{v}_{n}({\bf r})$ $\displaystyle\epsilon_{n}\bar{v}_{n}({\bf r})$ $\displaystyle=$ $\displaystyle i{\hbar^{2}\over m_{e}}{\bf k}_{F}\cdot\nabla\bar{v}_{n}({\bf r})+\bar{\Delta}^{\ast}_{{\bf e}_{{\bf k}_{F}}}({\bf r})\bar{u}_{n}({\bf r})$ (81) The Andreev reflection is calculated using the above equations. The effect of the phase factor $e^{{i}{\chi}_{i}}$ is included in the phase of $\bar{\Delta}_{{\bf e}_{{\bf k}_{F}}}({\bf r})$ at the reflection point.

## VI Josephson effect

In the standard theory, the Josephson effect also arises from the phase of the pair potential [38].
We have previously derived the Josephson relation within the new theory [5, 39, 33, 32]; however, we reproduce it here for the completeness of the discussion. Note that there are two distinct cases: one where the Bogoliubov excitations are common to the two superconductors, and the other where two separate Bogoliubov excitations exist. In the standard theory, only the latter case is considered; however, for the explanation of the ac Josephson effect, the former case is the relevant one, as shown below. We would like to emphasize that in the new theory, the supercurrent is a flow of electrons by a collective mode arising from the Berry connection, not a flow of Cooper pairs. Let us consider a superconductor-insulator-superconductor (SIS) junction. The hopping Hamiltonian between the two superconductors (we use the labels $L$ for the left and $R$ for the right superconductor) across the insulator is given by $\displaystyle H_{LR}=-\sum_{\sigma}T_{LR}\left(c^{\dagger}_{L\sigma}c_{R\sigma}+c^{\dagger}_{R\sigma}c_{L\sigma}\right)$ (82) Let us first consider the case where the insulator part is very thin and the Bogoliubov excitations in the two superconductors are the same, denoted by $\gamma^{\dagger}_{n\sigma},\gamma_{n\sigma}$. In this case $H_{LR}$ is re-expressed using the number-changing operators and the Bogoliubov operators as $\displaystyle H_{LR}$ $\displaystyle=$ $\displaystyle-T_{LR}e^{-{i\over 2}(\hat{\chi}_{L}-\hat{\chi}_{R})}e^{-i{e\over{\hbar c}}\int_{R}^{L}d{\bf r}\cdot{\bf A}^{\rm em}}$ (83) $\displaystyle\times\sum_{n,m}\Big{[}((u^{n}_{L})^{\ast}\gamma^{\dagger}_{n\downarrow}+v^{n}_{L}\gamma_{n\uparrow})(u^{m}_{R}\gamma_{m\downarrow}+(v^{m}_{R})^{\ast}\gamma_{m\uparrow}^{\dagger})$ $\displaystyle+((u^{n}_{L})^{\ast}\gamma^{\dagger}_{n\uparrow}-v^{n}_{L}\gamma_{n\downarrow})(u^{m}_{R}\gamma_{m\uparrow}-(v^{m}_{R})^{\ast}\gamma_{m\downarrow}^{\dagger})\Big{]}+\mbox{h.c.}$ Then, the effective hopping Hamiltonian that does not cause Bogoliubov excitations is given by $\displaystyle H_{J}^{e}=C\cos\left[{{e\over{\hbar c}}\int_{R}^{L}d{\bf r}\cdot\left({\bf A}^{\rm em}+{{\hbar c}\over{2e}}\nabla\chi\right)}+\alpha\right]$ (84) where $C$ and $\alpha$ are parameters given through the following relation: $\displaystyle{1\over 2}Ce^{i\alpha}=-2T_{LR}\sum_{n}v^{n}_{L}(v^{n}_{R})^{\ast}+\mbox{h.c.}$ (85) Now we consider the second case, where the Bogoliubov excitations in the two superconductors are different. We denote them $\gamma^{\dagger}_{Ln\sigma},\gamma_{Ln\sigma}$ for the left superconductor and $\gamma^{\dagger}_{Rn\sigma},\gamma_{Rn\sigma}$ for the right superconductor. In this case $H_{LR}$ is re-expressed as $\displaystyle H_{LR}$ $\displaystyle=$ $\displaystyle-T_{LR}e^{-{i\over 2}(\hat{\chi}_{L}-\hat{\chi}_{R})}e^{-i{e\over{\hbar c}}\int_{R}^{L}d{\bf r}\cdot{\bf A}^{\rm em}}$ (86) $\displaystyle\times\sum_{n,m}\Big{[}((u^{n}_{L})^{\ast}\gamma^{\dagger}_{Ln\downarrow}+v^{n}_{L}\gamma_{Ln\uparrow})(u^{m}_{R}\gamma_{Rm\downarrow}+(v^{m}_{R})^{\ast}\gamma_{Rm\uparrow}^{\dagger})$ $\displaystyle+((u^{n}_{L})^{\ast}\gamma^{\dagger}_{Ln\uparrow}-v^{n}_{L}\gamma_{Ln\downarrow})(u^{m}_{R}\gamma_{Rm\uparrow}-(v^{m}_{R})^{\ast}\gamma_{Rm\downarrow}^{\dagger})\Big{]}+\mbox{h.c.}$ In this case, current flow without Bogoliubov excitations requires second order perturbation. As a consequence, the current flow is a flow of electron pairs.
The standard theory only considers this case; this is one of the reasons that the supercurrent is identified as the flow of electron pairs in the standard theory. The second order effective Hamiltonian, obtained by taking the average over the Bogoliubov excitations, is given by $\displaystyle\left\langle H_{LR}{1\over{E_{0}-H_{0}}}H_{LR}\right\rangle_{\rm Bog}\approx-\Big{\langle}\sum_{m,n,m^{\prime},n^{\prime}}T_{LR}^{2}\left[e^{-{i\over 2}(\hat{\chi}_{L}-\hat{\chi}_{R})}e^{-i{e\over{\hbar c}}\int_{R}^{L}d{\bf r}\cdot{\bf A}^{\rm em}}v_{L}^{n}u_{R}^{m}(\gamma_{Ln\uparrow}\gamma_{Rm\downarrow}-\gamma_{Ln\downarrow}\gamma_{Rm\uparrow})+(L\leftrightarrow R)\right]$ $\displaystyle\times$ $\displaystyle{1\over{\epsilon_{m}^{R}+\epsilon_{n}^{L}}}\left[e^{-{i\over 2}(\hat{\chi}_{L}-\hat{\chi}_{R})}e^{-i{e\over{\hbar c}}\int_{R}^{L}d{\bf r}\cdot{\bf A}^{\rm em}}(u_{L}^{n^{\prime}}v_{R}^{m^{\prime}})^{\ast}(\gamma^{\dagger}_{Ln^{\prime}\downarrow}\gamma^{\dagger}_{Rm^{\prime}\uparrow}-\gamma^{\dagger}_{Ln^{\prime}\uparrow}\gamma^{\dagger}_{Rm^{\prime}\downarrow})+(L\leftrightarrow R)\right]\Big{\rangle}_{\rm Bog}$ $\displaystyle\approx$ $\displaystyle-\sum_{m,n}{{2T_{LR}^{2}}\over{\epsilon_{m}^{R}+\epsilon_{n}^{L}}}\left[v_{L}^{n}u_{R}^{m}(u_{L}^{n}v_{R}^{m})^{\ast}e^{-{i}(\hat{\chi}_{L}-\hat{\chi}_{R})}e^{-i{{2e}\over{\hbar c}}\int_{R}^{L}d{\bf r}\cdot{\bf A}^{\rm em}}+(v_{L}^{n}u_{R}^{m})^{\ast}u_{L}^{n}v_{R}^{m}e^{{i}(\hat{\chi}_{L}-\hat{\chi}_{R})}e^{i{{2e}\over{\hbar c}}\int_{R}^{L}d{\bf r}\cdot{\bf A}^{\rm em}}+|u_{L}^{n}v_{R}^{m}|^{2}+|v_{L}^{n}u_{R}^{m}|^{2}\right]$ (87) Thus, the effective hopping Hamiltonian for this case is $\displaystyle H_{J}^{2e}=C^{\prime}\cos\left({{2e}\over{\hbar c}}\int_{R}^{L}d{\bf r}\cdot\left[{\bf A}^{\rm em}+{{\hbar c}\over{2e}}\nabla\chi\right]+\alpha^{\prime}\right)$ (88) where $C^{\prime}$ and $\alpha^{\prime}$ are parameters given through the following relation: $\displaystyle{1\over 2}C^{\prime}e^{i\alpha^{\prime}}=-\sum_{m,n}{{2T_{LR}^{2}}\over{\epsilon_{m}^{R}+\epsilon_{n}^{L}}}(v_{L}^{n}u_{R}^{m})^{\ast}u_{L}^{n}v_{R}^{m}$ (89) This is the well-known result in the standard theory. It gives rise to the Ambegaokar-Baratoff relation [40] for the dc Josephson effect [38]. $H_{J}^{2e}$ is used to explain the ac Josephson effect in the standard theory; however, the new theory indicates that $H_{J}^{e}$ is the right one to use, as explained below. Let us write the current through the junction as $\displaystyle J=J_{c}\sin\phi$ (90) where $\displaystyle J_{c}=C{e^{2}\over{\hbar}}$ (91) and $\displaystyle\phi={{e\over{\hbar c}}\int_{R}^{L}d{\bf r}\cdot\left({\bf A}^{\rm em}+{{\hbar c}\over{2e}}\nabla\chi\right)}+\alpha$ (92) Then, the time derivative of $\phi$ is calculated as $\displaystyle\dot{\phi}={e\over{\hbar c}}\int_{R}^{L}d{\bf r}\cdot\left(\partial_{t}{\bf A}^{\rm em}+{{\hbar c}\over{2e}}\nabla\partial_{t}{\chi}\right)=-{e\over\hbar}\int_{R}^{L}d{\bf r}\cdot{\bf E}^{\rm em}+\left.{e\over\hbar}\left(-\varphi^{\rm em}+{\hbar\over{2e}}\partial_{t}{\chi}\right)\right|^{L}_{R}$ (93) where $\displaystyle{\bf E}^{\rm em}=-{1\over c}\partial_{t}{\bf A}^{\rm em}-\nabla\varphi^{\rm em}$ (94) is used. There are two contributions to $\dot{\phi}$. The first one is $\displaystyle-{e\over\hbar}\int_{R}^{L}d{\bf r}\cdot{\bf E}^{\rm em}={{eV}\over\hbar}$ (95) where $\displaystyle V=-\int_{R}^{L}d{\bf r}\cdot{\bf E}^{\rm em}$ (96) is the voltage across the junction.
The second contribution includes the following: $\displaystyle\varphi^{\rm eff}=\varphi^{\rm em}-{\hbar\over{2e}}\partial_{t}{\chi}$ (97) It is actually the time component of the four-vector whose spatial components are $\displaystyle{\bf A}^{\rm eff}={\bf A}^{\rm em}+{{\hbar c}\over{2e}}\nabla{\chi}$ (98) It is gauge invariant since ${\bf A}^{\rm eff}$ is gauge invariant. Actually, the term $\displaystyle\int d^{3}r\varphi^{\rm eff}\rho$ (99) appears in the Hamiltonian; thus, $\varphi^{\rm eff}$ should be related to the chemical potential $\mu$ as $\displaystyle\mu=e\varphi^{\rm eff}$ (100) Then, Eq. (93) becomes $\displaystyle\dot{\phi}={e\over\hbar}V+{1\over\hbar}\left(\mu_{R}-\mu_{L}\right)$ (101) The balance between the voltage and the chemical potential difference yields $\displaystyle eV=\mu_{R}-\mu_{L}$ (102) Thus, the following Josephson relation is obtained: $\displaystyle\dot{\phi}={{2eV}\over\hbar}$ (103) In the standard theory, $H_{J}^{2e}$ is used to obtain the Josephson relation without including the contribution from the chemical potential difference. However, the chemical potential difference term exists in the real experimental situation, since the two superconductors in the junction are connected to different leads with different chemical potentials. Thus, the omission of this term in the standard theory is not valid. If this term is included, the derivation using $H_{J}^{2e}$ gives $\dot{\phi}={{4eV}\over\hbar}$, instead of Eq. (103), which disagrees with experiments.
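As a quick numerical consequence of Eq. (103) (a check of ours, not part of the derivation), the phase evolves at the Josephson frequency $f=2eV/h$:

```python
from scipy.constants import e, h

# ac Josephson relation: phi_dot = 2 e V / hbar, so f = 2 e V / h.
V = 1e-6  # 1 microvolt across the junction
f = 2 * e * V / h
print(f"f = {f / 1e6:.1f} MHz per microvolt")  # ~483.6 MHz
```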
## VII Concluding remarks

In the standard theory of superconductivity, the origin of superconductivity is the electron pairing. In this theory, the current induced by a magnetic field is calculated from the linear response to the vector potential, and the supercurrent is identified as the dissipationless flow of the paired electrons, while single electrons flow with dissipation. The above supercurrent description suffers from the following serious problems: 1) it contradicts the reversible superconducting-normal phase transition in a magnetic field observed in type I superconductors; 2) the gauge invariance of the supercurrent induced by a magnetic field requires the breakdown of the global $U(1)$ gauge invariance, or the non-conservation of the particle number; 3) the explanation of the ac Josephson effect is based on a boundary condition that is different from the real experimental one; 4) the measured London moment indicates that the mass of the superconducting carrier is the free electron mass $m_{e}$ if the electron charge $q=-e$ is used, although the standard theory predicts it to be the effective mass $m^{\ast}$. The standard theory relies on the non-zero value of Eq. (53), and actually, the cause of the above problems is partly due to the belief that the non-zero value of Eq. (53) is a physical consequence of the Cooper instability. In the new theory, Eq. (53), which is calculated using the particle number non-conserving state, is replaced by Eq. (64), which is calculated using the particle number conserving state. This is possible due to the presence of the Berry connection that generates the number-changing operators. It appears that the standard theory takes into account the presence of the number-changing operators brought about by the Berry connection by using the particle-number non-conserving approximation [41]. This approximation works well for some purposes. However, it also causes serious contradictions. The new theory is a significant departure from the standard one. We hope that the elucidation of cuprate superconductivity will be achieved with it.

## References

* Bardeen _et al._ [1957] J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Theory of superconductivity, Phys. Rev. 108, 1175 (1957).
* Bednorz and Müller [1986] J. G. Bednorz and K. A. Müller, Possible high T$_{c}$ superconductivity in the Ba-La-Cu-O system, Z. Phys. B 64, 189 (1986).
* Emery and Kivelson [1995] V. J. Emery and S. A. Kivelson, Importance of phase fluctuation in superconductors with small superfluid density, Nature 374, 434 (1995).
* Hirsch [2009] J. Hirsch, BCS theory of superconductivity: it is time to question its validity, Physica Scripta 80, 035702 (2009).
* Koizumi [2011] H. Koizumi, Spin-vortex superconductivity, J. Supercond. Nov. Magn. 24, 1997 (2011).
* Hirsch [2017] J. E. Hirsch, Momentum of superconducting electrons and the explanation of the Meissner effect, Phys. Rev. B 95, 014503 (2017).
* Hirsch [2018] J. E. Hirsch, Entropy generation and momentum transfer in the superconductor-normal and normal-superconductor phase transitions and the consistency of the conventional theory of superconductivity, International Journal of Modern Physics B 32, 1850158 (2018).
* Hirsch [2020] J. E. Hirsch, Inconsistency of the conventional theory of superconductivity, EPL 130, 17006 (2020).
* Koizumi [2020a] H. Koizumi, Reversible superconducting-normal phase transition in a magnetic field and the existence of topologically-protected loop currents that appear and disappear without Joule heating, EPL 131, 37001 (2020a).
* Keesom and Kok [1934] W. Keesom and J. Kok, Measurements of the latent heat of thallium connected with the transition, in a constant external magnetic field, from the supraconductive to the non-supraconductive state, Physica 1, 503 (1934).
* Keesom and Van Laer [1936] W. Keesom and P. Van Laer, Measurements of the latent heat of tin in passing from the supraconductive to the non-supraconductive state, Physica 3, 371 (1936).
* Keesom and van Laer [1937] W. Keesom and P. van Laer, Measurements of the latent heat of tin while passing from the superconductive to the non-superconductive state at constant temperature, Physica 4, 487 (1937).
* van Laer and Keesom [1938] P. H. van Laer and W. H. Keesom, On the reversibility of the transition process between the superconductive and the normal state, Physica 5, 993 (1938).
* Hirsch [2013] J. E. Hirsch, The London moment: what a rotating superconductor reveals about superconductivity, Physica Scripta 89, 015806 (2013).
* Koizumi [2020b] H. Koizumi, London moment, London's superpotential, Nambu-Goldstone mode, and Berry connection from many-body wave functions, J. Supercond. Nov. Magn. (2020b), DOI: 10.1007/s10948-021-05827-9, arXiv:2011.10701 [cond-mat.supr-con].
* London [1950] F. London, _Superfluids_, Vol. 1 (Wiley, New York, 1950).
* Hildebrandt [1964] A. F. Hildebrandt, Magnetic field of a rotating superconductor, Phys. Rev. Lett. 12, 190 (1964).
* Zimmerman and Mercereau [1965] J. E. Zimmerman and J. E. Mercereau, Compton wavelength of superconducting electrons, Phys. Rev. Lett. 14, 887 (1965).
* Brickman [1969] N. F. Brickman, Rotating superconductors, Phys. Rev. 184, 460 (1969).
* Tate _et al._ [1989] J. Tate, B. Cabrera, S. B. Felch, and J. T. Anderson, Precise determination of the Cooper-pair mass, Phys. Rev. Lett. 62, 845 (1989).
* Tate _et al._ [1990] J. Tate, S. B. Felch, and B. Cabrera, Determination of the Cooper-pair mass in niobium, Phys. Rev. B 42, 7885 (1990).
* Verheijen _et al._ [1990a] A. Verheijen, J. M. van Ruitenbeek, R. de Bruyn Ouboter, and L. de Jongh, The London moment for high temperature superconductors, Physica B: Condensed Matter 165-166, 1181 (1990a), LT-19.
* Verheijen _et al._ [1990b] A. A. Verheijen, J. M. van Ruitenbeek, R. de Bruyn Ouboter, and L. J. de Jongh, Measurement of the London moment in two high-temperature superconductors, Nature 345, 418 (1990b).
* Sanzari _et al._ [1996] M. A. Sanzari, H. L. Cui, and F. Karwacki, London moment for heavy-fermion superconductors, Applied Physics Letters 68, 3802 (1996).
* Koizumi [2020c] H. Koizumi, Explanation of superfluidity using the Berry connection for many-body wave functions, J. Supercond. Nov. Magn. 33, 1697 (2020c).
* Andreev [1964] A. F. Andreev, Thermal conductivity of the intermediate state of superconductors, Sov. Phys. JETP 19, 1228 (1964).
* Saint-James [1964] D. Saint-James, Excitations élémentaires au voisinage de la surface de séparation d'un métal normal et d'un métal superconducteur, J. Phys. France 25, 899 (1964).
* Aharonov and Bohm [1959] Y. Aharonov and D. Bohm, Significance of electromagnetic potentials in the quantum theory, Phys. Rev. 115, 167 (1959).
* Ginzburg and Landau [1950] V. L. Ginzburg and L. D. Landau, On the theory of superconductivity, Zh. Exsp. Teor. Fiz. 20, 1064 (1950).
* Abrikosov [1957] A. A. Abrikosov, On the magnetic properties of superconductors of the second group, Sov. Phys. JETP 5, 1174 (1957).
* Feynman [1965] R. P. Feynman, _Quantum Mechanics and Path Integrals_ (McGraw-Hill, 1965).
* Koizumi and Ishikawa [2020] H. Koizumi and A. Ishikawa, Theory of supercurrent in superconductors, International Journal of Modern Physics B 34, 2030001 (2020), https://doi.org/10.1142/S0217979220300017.
* Koizumi [2020d] H. Koizumi, Possible occurrence of superconductivity by the $\pi$-flux Dirac string formation due to spin-twisting itinerant motion of electrons, Symmetry 12, 776 (2020d).
* Dirac [1931] P. Dirac, Quantised singularities in the electromagnetic field, Proc. Roy. Soc. London 133, 60 (1931).
* Gor'kov [1959] L. P. Gor'kov, Microscopic derivation of the Ginzburg-Landau equations in the theory of superconductivity, Sov. Phys. JETP 36, 1364 (1959).
* de Gennes [1966] P. G. de Gennes, _Superconductivity of Metals and Alloys_ (W. A. Benjamin, Inc., 1966).
* Zhu [2016] J.-X. Zhu, _Bogoliubov-de Gennes Method and Its Applications_ (Springer, 2016).
* Josephson [1962] B. D. Josephson, Possible new effects in superconductive tunneling, Phys. Lett. 1, 251 (1962).
* Koizumi and Tachiki [2015] H. Koizumi and M. Tachiki, Supercurrent generation by spin-twisting itinerant motion of electrons: re-derivation of the ac Josephson effect including the current flow through the leads connected to the Josephson junction, J. Supercond. Nov. Magn. 28, 61 (2015).
* Ambegaokar and Baratoff [1963] V. Ambegaokar and A. Baratoff, Tunneling between superconductors, Phys. Rev. Lett. 10, 486 (1963).
* Peierls [1991] R. Peierls, Spontaneously broken symmetries, J. Phys. A 24, 5273 (1991).
# Trinity of Pixel Enhancement: a Joint Solution for Demosaicking, Denoising and Super-Resolution

Guocheng Qian1∗, Jinjin Gu12∗, Jimmy S. Ren1, Chao Dong3, Furong Zhao1, Juan Lin1 1SenseTime Research 2The Chinese University of Hong Kong, Shenzhen 3Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences {gujinjin, qianguocheng, rensijie, zhaofurong<EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract

∗ J. Gu and G. Qian contributed equally to this work. This work was done when they were interns at SenseTime.

Demosaicing, denoising and super-resolution (SR) are of practical importance in digital image processing and have been studied independently in the past decades. Despite the recent improvement of learning-based image processing methods in image quality, there is a lack of analysis of their interactions and characteristics under the realistic setting of the mixture problem of demosaicing, denoising and SR. In existing solutions, these tasks are simply combined to obtain a high-resolution image from a low-resolution raw mosaic image, resulting in a performance drop in the final image quality. In this paper, we first rethink the mixture problem from a holistic perspective and then propose the Trinity Enhancement Network (TENet), a specially designed learning-based method for the mixture problem, which adopts a novel image processing pipeline order and a joint learning strategy. In order to obtain the correct color sampling for training, we also contribute a new dataset, PixelShift200, which consists of high-quality full color sampled real-world images captured using the advanced pixel shift technique. Experiments demonstrate that our TENet is superior to existing solutions from both quantitative and qualitative perspectives. Our experiments also show the necessity of the proposed PixelShift200 dataset.

Figure 1: Our model TENet achieves better results on the mixture problem of demosaicing, denoising and SR on a real raw sensor test image captured by an iPhone X. We conduct comparisons with the most popular commercial software (Camera Raw), the state-of-the-art demosaicing method [13] and SR method [3]. Our output is artifact-free and preserves detail even in challenging regions. Here, ∗CARN is fine-tuned from CARN [3] using pixel averaging downsampling for a fair comparison.

## 1 Introduction

In computational photography, obtaining high-quality, high-resolution or even super-resolution images has attracted increasing attention in the research community and commercial industry. However, obtaining such images is of practical difficulty under limited hardware conditions (small prime lenses, compact sensors, etc.), especially for mobile devices. The limitations mainly come from three aspects. First, most digital cameras contain sensor arrays covered by color filter arrays (CFAs, e.g., the Bayer pattern), resulting in incomplete color sampling of images and loss in resolution. Second, the images captured directly by the image sensor are usually noisy. Especially when the pixel density of the sensor becomes larger, the noise becomes more obvious. Third, most of the lenses used in mobile devices have a fixed, short focal length, which not only causes difficulties for the imaging of distant objects, but also limits the resolution of the images. In order to break through the above hardware limitations, some post-processing methods are introduced to enhance the images. Demosaicing, denoising and super-resolution (SR) are three such fundamental processing tasks.
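To make the CFA limitation concrete, the following minimal NumPy sketch (ours, not from the paper) simulates how an RGGB Bayer sensor records only one color per pixel and how read-out noise corrupts the mosaic; the pattern layout and the Gaussian noise model are simplifying assumptions:

```python
import numpy as np

def bayer_sample(rgb, sigma=0.02):
    """Simulate RGGB Bayer sampling of an HxWx3 image in [0, 1], plus noise."""
    H, W, _ = rgb.shape
    mosaic = np.zeros((H, W), dtype=np.float32)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows / even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows / odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows / even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows / odd cols
    noise = np.random.normal(0.0, sigma, (H, W))  # assumed Gaussian read noise
    return np.clip(mosaic + noise, 0.0, 1.0)
```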
In the past decades, these three tasks have been well studied separately, and all have made breakthrough progress recently with the help of deep learning. However, the problems encountered in practical applications are more complicated than any single problem – it is usually a mixture problem of noise and resolution limitation (color mosaic and insufficient resolution). Although these methods perform well when applied separately to a single problem, simply combining them to solve the mixture problem brings in new problems (e.g., unexpected artifacts and blur), which are caused by the interactions between tasks. Such a mixture problem has received less attention in the research field. In this paper, we rethink the mixture problem from a holistic perspective. By thoroughly analyzing the characteristics of each task and the behaviors of their interactions, we propose a new method, namely the Trinity Enhancement Network (TENet), to solve the mixture problem. Experiments demonstrate the superiority of the proposed TENet under realistic settings. The motivation behind this work is three-fold. Firstly, we adjust the order of demosaicing and SR in the image processing pipeline. Although formulated differently, both demosaicing and SR are meant to overcome the sampling limitation of imaging. In the existing solutions, the image is first demosaiced to obtain a full color image. Then SR is performed to further enhance the resolution. However, demosaicing will introduce artifacts when the resolution is limited (such as color aliases, zippering and moiré artifacts), and these artifacts will be magnified by the subsequent SR process. To address this problem, we propose to super-resolve the raw mosaic image before demosaicing. In the new pipeline, not only are the demosaicing artifacts reduced, but SR also helps demosaicing break the resolution limit. Secondly, simply combining two or more tasks usually causes a severe performance drop, e.g., new artifacts and blur. An important reason for this drop is that there is no appropriate model or algorithm that can perfectly handle the _middle state_, which refers to the intermediate result after one or two steps of processing [46]. These middle states usually involve task-related complicated defects. With the advent of deep learning based methods, we are able to address complicated multi-task image processing problems in an end-to-end manner, which is also known as a “joint solution”. When jointly performed, if one task produces a result that is difficult to process directly, the subsequent task will compensate for the middle state and provide better final results. Thus, we propose to perform demosaicing, denoising and SR in such a joint scheme for the mixture problem. Thirdly, we contribute a real-world dataset, PixelShift200, captured with the advanced pixel shift technique for this mixture problem. By further diving into the training data, we find that the existing datasets have limitations for training demosaicing-related tasks. As the images in those datasets are demosaiced from raw mosaic images, they contain potential artifacts. The proposed PixelShift200 consists of 200 high-quality 4k-resolution full color sampled real-world images. By training with this dataset, our TENet can reconstruct high-quality high-resolution images with fewer artifacts.
We summarize our contributions as follows: (1) We are the first to analyze the mixture problem of demosaicing, denoising and SR, and we propose the Trinity Enhancement Network (TENet) to solve it. (2) We propose to super-resolve the mosaic image before demosaicing. We show the superior performance of the proposed pipeline with experiments. (3) We contribute a new real-world dataset, PixelShift200, for demosaicing and SR, built with the novel pixel shift technique. Experiments show the necessity of the proposed dataset for training demosaicing-related tasks.

## 2 Related Work

We aim to solve the mixture problem of demosaicing, denoising and SR for a single Bayer image. All of the above tasks are well studied separately. Since we are the first to address the mixture problem with a joint solution, in this section we first briefly present the previous work and existing problems for the above tasks, and then review the literature on joint solutions.

### 2.1 Demosaicking

Figure 2: The interactions between different tasks. As shown in the first row, the image demosaiced from an LR image contains severe color distortion, while the image demosaiced from an HR image provides a better result. The second row indicates that the denoising task tends to smooth high-frequency details. The last row shows the serious artifacts of super-resolving a noisy input.

Image demosaicing is an ill-posed problem of interpolating full-resolution color images from color mosaic images (e.g., Bayer mosaic images), and it is usually performed at the beginning of the image processing pipeline. Existing approaches can be mainly classified into two categories: model-based and learning-based methods. Model-based approaches [31, 50, 19, 37, 42, 17] focus on the construction of mathematical models and image priors in the spatial-spectral domain that facilitate the recovery of missing data. Learning-based approaches [17, 38] build the process mapping by learning from abundant training data. Recently, deep learning has also been used successfully for image demosaicing and has achieved competitive performance [21, 14, 13, 39]. Michaël et al. [13] train a deep convolutional neural network (CNN) on millions of carefully selected image patches and achieve state-of-the-art demosaicking performance. In general, demosaicing algorithms perform well in flat regions of the image. However, they lead to conspicuous artifacts in high-frequency texture regions and around strong edges. Due to the input resolution limitation, serious artifacts such as zippering, color moiré and loss of detail are prone to occur in these areas. This kind of problem is related to the resolution limitation of the input Bayer image [54], and is alleviated when the image resolution is increased, as shown in the first row of Figure 2. When the input low-resolution raw mosaic image contains noise, demosaicking becomes even more difficult. This leads to unpleasant artifacts, as the estimation of edge orientation is less reliable.
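As a concrete illustration of the interpolation view of demosaicing described above, the following sketch (ours; not one of the cited methods) implements classic bilinear demosaicing for the RGGB pattern. Averaging across edges is exactly what produces the zippering and color artifacts discussed here:

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(bayer):
    """Bilinear demosaicing of an HxW RGGB Bayer mosaic to HxWx3 RGB."""
    H, W = bayer.shape
    r_mask = np.zeros((H, W)); r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((H, W)); b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask

    # Interpolation kernels: 4-neighbor average for G, 8-neighbor for R/B.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    # Convolving the sparsely sampled planes fills in the missing pixels
    # (zero boundary handling at the borders is a simplification).
    R = convolve2d(bayer * r_mask, k_rb, mode="same")
    G = convolve2d(bayer * g_mask, k_g, mode="same")
    B = convolve2d(bayer * b_mask, k_rb, mode="same")
    return np.stack([R, G, B], axis=-1)
```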
### 2.2 Denoising

Image noise is inevitable during imaging and may heavily degrade the visual quality. In the past decades, plenty of methods have been proposed for denoising not only color images but also mosaic images. Early methods such as anisotropic diffusion [33], total variation denoising [34] and wavelet coring [36] use hand-crafted features and algorithms to recover a clean signal from a noisy input. However, these parametric methods have limited capacity and expressiveness. Advanced methods usually exploit effective image priors such as self-similarity [8, 16, 4] and sparse representation [2]. With the increasing interest in learning-based methods, in recent years most successful denoising algorithms are entirely data-driven, consisting of CNNs trained to recover noise-free images from noisy inputs [5, 47, 48, 35, 45, 13]. As with demosaicing, denoising algorithms work well in flat regions of the image: they eliminate high-frequency noise to make images smooth and clean. Unfortunately, most denoising algorithms not only eliminate noise, but also smooth the high-frequency detail and texture in the image. If we further post-process the denoised image, e.g., with SR, the blur will be magnified and degrade the image quality, as shown in the second row of Figure 2. Note that when the noise of the input image is complicated, denoising algorithms can hardly remove this kind of defect. Thus, denoising algorithms have limited performance when removing artifacts left by other algorithms.

### 2.3 Super-Resolution

SR aims to recover the high-resolution (HR) image from its low-resolution (LR) version. Since the seminal work of employing a CNN for SR [10], various deep learning based methods with different network architectures [11, 23, 53, 3, 24, 30, 15] and training strategies [29, 44] have been proposed to continuously improve the SR performance. However, problems occur when applying such algorithms in real-world applications. When SR algorithms enhance the image details and texture, unexpected noise, blur and artifacts are also magnified. If the input image is noisy or blurry, problems that were previously not serious will be magnified, especially artifacts and noise caused by previous processing. This may lead to unsatisfactory results when applying SR separately after demosaicking or denoising. An example is shown in Figure 2.

### 2.4 Mixture Problem of Image Processing

In practical applications, in addition to the above well-defined problems, more common is the mixture problem of multiple image defects. For example, the mixture problem of SR and denoising [49], demosaicing and denoising [6, 22, 26], and SR and demosaicing [12, 43, 54]. For the mixture problem of multiple tasks, the difficulty is greatly increased. Yu et al. [46] study the execution order of tasks in the mixture problem and use reinforcement learning to learn the task execution order. More relevant to this work, Michaël et al. [13] train a CNN to jointly perform these tasks and achieve state-of-the-art performance. Zhang et al. [49] propose an SR network to jointly perform SR and denoising, as the denoising pre-processing step tends to lose detail information and would deteriorate the subsequent SR performance. Zhou et al. [54] introduce a deep residual network for joint demosaicking and super-resolution. However, to the best of our knowledge, the mixture problem of demosaicking, denoising and SR has not been addressed with a joint strategy.

Figure 3: Our proposed Trinity Enhancement Network (TENet).

## 3 Method

Our main aim is to improve the overall image quality for the mixture problem of demosaicing, denoising and SR. In this section, we first discuss the improvements from the proposed new pipeline. We then describe the proposed joint objective function. Finally, we present the network design.

### 3.1 Pipeline

As mentioned above, different tasks will interact with each other.
When multiple processing tasks are executed in sequence, the defects generated by a previous task will affect the subsequent tasks and cause a performance drop. In our approach, we carefully adjust the execution order of denoising, SR and demosaicing to minimize the effects caused by task interaction. Firstly, we suggest denoising before other tasks to minimize the effects of noise. If the denoising operation is performed after SR or demosaicing, the noise will impact the processing of SR and demosaicing and cause severe artifacts that are difficult to remove. Secondly, different from the previous popular image processing pipeline, which first demosaics the raw image into a full color image and then performs SR, we propose to super-resolve the raw image to a higher resolution and then perform demosaicing to get the SR color image. There are at least two advantages: (1) The artifacts caused by super-resolving the defects of the demosaiced images can be avoided. (2) SR can help the demosaicing task break the limitation of resolution. Demosaicing a higher-resolution raw image yields better results. In our pipeline, for a given noisy LR raw mosaic image $M_{n}^{LR}$, its corresponding HR color image $I^{HR}$ can be written as a composite function: $I^{HR}=\mathcal{C}(\mathcal{S}_{M}(\mathcal{D}_{M}(M_{n}^{LR}))),$ (1) where $\mathcal{C}$ is the demosaicing mapping, $\mathcal{S}_{M}$ is the SR mapping for mosaic images (the subscript $M$ stands for ‘mosaic’) and $\mathcal{D}_{M}$ denotes the denoising mapping for mosaic images. The denoising is first performed to obtain the noise-free mosaic LR image $M^{LR}=\mathcal{D}_{M}(M_{n}^{LR})$. We then use an SR mapping to super-resolve the LR mosaic image in order to obtain the HR mosaic image $M^{HR}=\mathcal{S}_{M}(M^{LR})$. At last, we perform demosaicing to convert the HR mosaic image into a full color image $I^{HR}=\mathcal{C}(M^{HR})$. In our approach, we employ deep convolutional neural networks to implement the above mappings.

Figure 4: The pixel shift technique used to build the PixelShift200 dataset, and samples of qualitative comparison among dcraw, Camera Raw and pixel shift results.

### 3.2 Joint Objective Function

With a carefully designed image processing pipeline, we can, to a certain extent, avoid the serious performance drop caused by the interaction between different tasks. However, we still cannot totally solve the problem caused by the middle state. In the proposed pipeline, although denoising is performed first to eliminate serious artifacts, it loses high-frequency textures and image details, which still causes difficulties for subsequent tasks – no SR or demosaicing method is designed to compensate for the lost high-frequency details. The distribution of the super-resolved mosaic image is also different from that of a real-world mosaic image. Directly applying an existing demosaicing method cannot achieve satisfactory results. To address this problem, we propose to jointly perform denoising, SR and demosaicing in an end-to-end manner. In our approach, we calculate the $l_{2}$-norm loss on the final result $I^{HR}$ directly: $\mathcal{L}_{joint}=\|\mathcal{C}(\mathcal{S}_{M}(\mathcal{D}_{M}(M_{n}^{LR})))-I^{HR}_{gt}\|^{2}_{2},$ (2) where $I^{HR}_{gt}$ represents the ground-truth HR color image of $M_{n}^{LR}$.
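A schematic PyTorch sketch of the pipeline in Eq. (1) and the training objective is given below (ours, for illustration only). The sub-network names `net_FM` and `net_CM`, the packing of the Bayer image into four color planes via `F.pixel_unshuffle` (detailed in Sec. 3.3), and the noise-map shape are our assumptions; the RRDB internals are omitted. The loss combines Eq. (2) with the SR loss term introduced next (Eqs. (3) and (4)):

```python
import torch
import torch.nn.functional as F

def tenet_forward(bayer_noisy, noise_map, net_FM, net_CM):
    """bayer_noisy: (B,1,H,W) Bayer mosaic; noise_map: (B,1,H/2,W/2)."""
    x = F.pixel_unshuffle(bayer_noisy, 2)   # (B,1,H,W) -> (B,4,H/2,W/2) planes
    x = torch.cat([x, noise_map], dim=1)    # append noise-level map (5 channels)
    mosaic_sr = net_FM(x)                   # joint denoising + SR, 4 channels
    rgb_sr = net_CM(mosaic_sr)              # demosaicing to 3 channels
    return mosaic_sr, rgb_sr

def tenet_loss(mosaic_sr, rgb_sr, mosaic_gt, rgb_gt, lam=1.0):
    # L_joint (Eq. 2) on the final RGB plus lam * L_SR (Eq. 3) on the mosaic.
    return F.mse_loss(rgb_sr, rgb_gt) + lam * F.mse_loss(mosaic_sr, mosaic_gt)
```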
In order to provide more information to the network during training, we also calculate an SR loss on the raw image: $\mathcal{L}_{SR}=\|\mathcal{S}_{M}(\mathcal{D}_{M}(M_{n}^{LR}))-M^{HR}_{gt}\|^{2}_{2},$ (3) where $M^{HR}_{gt}$ represents the corresponding HR noise-free mosaic image of $M_{n}^{LR}$. The SR loss term makes the first half of the network focus on denoising and SR, while the second half of the network focuses on demosaicing the super-resolved mosaic image. Although the joint strategy mainly focuses on the final result, providing supervision on the intermediate state also improves network performance during joint processing. The final objective function in our approach is $\mathcal{L}=\mathcal{L}_{joint}+\lambda\mathcal{L}_{SR},$ (4) where $\lambda$ is a trade-off parameter. ### 3.3 Network Design As mentioned above, our approach can be divided into two parts. The first part is the mapping for joint denoising and SR, denoted $\mathcal{F}_{M}$, which jointly implements $\mathcal{D}_{M}$ and $\mathcal{S}_{M}$. The second part is the mapping $\mathcal{C}_{M}$, which converts the SR mosaic image into a full color image. The mappings $\mathcal{F}_{M}$ and $\mathcal{C}_{M}$ can be trained and applied jointly. First, the Bayer mosaic image $M_{n}^{LR}$ is rearranged into four color maps $M_{n}^{LR\diamond}$, so that the spatial information of each color is more easily extracted by convolution operations. The mapping $\mathcal{F}_{M}$ maps these four color maps to SR noise-free mosaic color maps $M^{SR\diamond}$, which are then mapped to an SR three-channel color image by the mapping $\mathcal{C}_{M}$. We employ the deep network of ESRGAN [44] to implement these two mappings, which uses a specially designed Residual in Residual Dense Block (RRDB) to increase the stability of training. The network structure is illustrated in Figure 3. For the network $\mathcal{F}_{M}$, the input has five channels (the four color maps plus a noise map, filled with the noise level to indicate the sigma of the Gaussian noise) and the output has four channels. For the network $\mathcal{C}_{M}$, the input is a four-channel SR mosaic image and the output is a three-channel RGB image. To balance the number of parameters and running time, the number of RRDBs in both $\mathcal{F}_{M}$ and $\mathcal{C}_{M}$ is set to $6$.
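As a concrete illustration of the four-color-map rearrangement and the five-channel input described above, the sketch below packs a Bayer mosaic by space-to-depth and builds the constant noise-level map; the RGGB layout and the function names are assumptions for illustration, not part of any released code.

```python
import torch

def pack_bayer(mosaic: torch.Tensor) -> torch.Tensor:
    """Rearrange an (N, 1, H, W) Bayer mosaic into (N, 4, H/2, W/2) color
    maps, one map per CFA position (RGGB layout assumed)."""
    r  = mosaic[:, :, 0::2, 0::2]  # R at even rows, even cols
    g1 = mosaic[:, :, 0::2, 1::2]  # G at even rows, odd  cols
    g2 = mosaic[:, :, 1::2, 0::2]  # G at odd  rows, even cols
    b  = mosaic[:, :, 1::2, 1::2]  # B at odd  rows, odd  cols
    return torch.cat([r, g1, g2, b], dim=1)

def noise_map(packed: torch.Tensor, sigma: float) -> torch.Tensor:
    # The fifth input channel of F_M: a constant map holding the Gaussian
    # noise level, concatenated with the four packed color maps.
    n, _, h, w = packed.shape
    return torch.full((n, 1, h, w), sigma,
                      dtype=packed.dtype, device=packed.device)
```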
Table 1: Quantitative comparison of the performance of different approaches on the demosaicing and SR mixture problem using the Kodak, McM [51], BSD100 [32] and Urban100 [20] datasets. The SR factor is 2. Note that the DemosaicNet [13] used in this comparison is the noise-free version. Method | Kodak | McMaster [51] | BSD100 [32] | Urban100 [20] ---|---|---|---|--- PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM Malvar et al. [31] $+$ ∗CARN [3] | 28.40 | 0.8421 | 28.51 | 0.8442 | 27.00 | 0.8201 | 24.49 | 0.8143 NLM [52] $+$ ∗CARN [3] | 29.29 | 0.8477 | 29.70 | 0.8731 | 27.72 | 0.8230 | 25.95 | 0.8440 NAT [52] $+$ ∗CARN [3] | 29.20 | 0.8551 | 29.64 | 0.8733 | 27.60 | 0.8293 | 25.89 | 0.8451 DemosaicNet [13] $+$ ∗CARN [3] | 30.82 | 0.8864 | 31.60 | 0.9052 | 28.99 | 0.8644 | 28.14 | 0.8886 †DemosaicNet [13] $+$ ∗CARN [3] | 30.29 | 0.8886 | 31.75 | 0.9073 | 29.22 | 0.8675 | 28.44 | 0.8942 TENet (noise-free) | 31.39 | 0.8965 | 32.40 | 0.9163 | 29.39 | 0.8736 | 29.37 | 0.9061 Table 2: Quantitative comparison of different approaches on the mixture problem of demosaicing, denoising and SR using the Kodak, McM [51], BSD100 [32] and Urban100 [20] datasets. The SR factor is 2 and the noise level is set to 10, 20 and 50. DemosaicNet [13] does not provide models for noise levels above 20. Method | Noise | Kodak | McMaster [51] | BSD100 [32] | Urban100 [20] ---|---|---|---|---|--- level | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM ADMM [40] $+$ ∗CARN [3] | 10 | 26.71 | 0.7310 | 27.53 | 0.7793 | 25.80 | 0.6992 | 24.10 | 0.7414 Condat [7] $+$ ∗CARN [3] | | 27.21 | 0.7654 | 26.43 | 0.7717 | 25.90 | 0.7382 | 24.64 | 0.7823 FlexISP [18] $+$ ∗CARN [3] | | 25.29 | 0.6362 | 25.26 | 0.6601 | 24.30 | 0.6153 | 23.50 | 0.6784 DemosaicNet [13] $+$ ∗CARN [3] | | 27.82 | 0.7830 | 28.75 | 0.8153 | 26.82 | 0.7601 | 25.58 | 0.7960 †DemosaicNet [13] $+$ ∗CARN [3] | | 27.96 | 0.7874 | 28.87 | 0.8202 | 26.92 | 0.7640 | 25.87 | 0.8055 TENet (ours) | | 28.60 | 0.8067 | 29.56 | 0.8423 | 27.32 | 0.7783 | 27.18 | 0.8470 ADMM [40] $+$ ∗CARN [3] | 20 | 25.76 | 0.6893 | 26.03 | 0.7101 | 24.72 | 0.6480 | 23.45 | 0.7029 Condat [7] $+$ ∗CARN [3] | | 25.74 | 0.6920 | 24.81 | 0.6893 | 24.40 | 0.6462 | 23.54 | 0.7211 FlexISP [18] $+$ ∗CARN [3] | | 23.03 | 0.4573 | 22.77 | 0.4822 | 22.30 | 0.4613 | 21.30 | 0.5176 DemosaicNet [13] $+$ ∗CARN [3] | | 26.15 | 0.6989 | 26.53 | 0.7239 | 25.09 | 0.6644 | 23.89 | 0.7142 †DemosaicNet [13] $+$ ∗CARN [3] | | 26.22 | 0.7029 | 26.61 | 0.7308 | 25.15 | 0.6677 | 24.04 | 0.7218 TENet (ours) | | 26.99 | 0.7388 | 27.51 | 0.7799 | 25.68 | 0.6973 | 25.50 | 0.7932 ADMM [40] $+$ ∗CARN [3] | 50 | 23.06 | 0.5629 | 22.24 | 0.5461 | 22.02 | 0.5152 | 20.85 | 0.5723 Condat [7] $+$ ∗CARN [3] | | 22.92 | 0.5743 | 21.36 | 0.5301 | 21.66 | 0.5003 | 20.52 | 0.5683 FlexISP [18] $+$ ∗CARN [3] | | 18.47 | 0.2071 | 18.17 | 0.2292 | 18.07 | 0.2257 | 17.27 | 0.2789 TENet (ours) | | 23.93 | 0.5867 | 23.79 | 0.6010 | 22.81 | 0.5483 | 22.16 | 0.6513 ## 4 Data Collection Although many high-resolution image datasets are available, we find that existing datasets hardly meet the requirements of training demosaicing-related tasks. For demosaicing, we need ground-truth images with full color sampling. However, since most high-quality images are themselves obtained by demosaicing raw mosaic images, training on such data would introduce the artifacts generated by existing demosaicing algorithms. In previous work, high-quality images were first preprocessed to eliminate the effect of demosaicing artifacts. Zhou et al. [54] apply bicubic downsampling to the original high-resolution images to eliminate artifacts potentially introduced by the demosaicing algorithm as well as by other factors in the camera processing pipeline (such as sensor noise). Gharbi et al. [13] use the ImageNet [9] dataset and propose a novel training data selection method that mines 'hard cases' for training; note that ImageNet images can also be viewed as downsampled images. Although downsampled images no longer contain obvious artifacts, their distribution differs somewhat from that of natural images. We need real-world high-resolution images with full color sampling to train demosaicing-related tasks. In this paper, we contribute a novel dataset, PixelShift200, and a new test set, PixelShiftTest. We employ advanced pixel shift technology to perform full color sampling of the image. Pixel shift technology takes four samples of the same image, physically moving the camera sensor by one pixel horizontally or vertically between samples, so as to capture all color information at each pixel (see Figure 4).
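To fix ideas, the following sketch shows how four one-pixel-shifted Bayer frames combine into a fully color-sampled image; the RGGB layout and the particular shift order are assumptions for illustration, not a specification of the camera's behavior.

```python
import numpy as np

OFFSETS = [(0, 0), (0, 1), (1, 0), (1, 1)]  # assumed one-pixel shift order

def merge_pixel_shift(frames):
    """Combine four one-pixel-shifted Bayer frames (each (H, W)) into a
    fully color-sampled (H, W, 3) image. Under the four shifts, every pixel
    is sampled once by R, twice by G, and once by B (RGGB assumed); the two
    green samples are averaged."""
    h, w = frames[0].shape
    rgb = np.zeros((h, w, 3), dtype=np.float64)
    count = np.zeros((h, w, 3), dtype=np.float64)
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    for frame, (dy, dx) in zip(frames, OFFSETS):
        # Channel recorded at each pixel of this frame under the shifted CFA.
        chan = np.where((rows + dy) % 2 == 0,
                        np.where((cols + dx) % 2 == 0, 0, 1),   # R or G
                        np.where((cols + dx) % 2 == 0, 1, 2))   # G or B
        for c in range(3):
            mask = chan == c
            rgb[..., c][mask] += frame[mask]
            count[..., c][mask] += 1
    return rgb / np.maximum(count, 1)
```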
Pixel shift technology ensures that the sampled images follow the distribution of natural images captured by the camera, and the full color information is completely obtained. In this way, the collected images are artifact-free, which leads to better training results for demosaicing-related tasks. During data collection, we used a Sony A7R3 digital camera with pixel shift technology. To control the quality of the data, most of the images were taken of real scenes and finely printed pictures in a darkroom, with predefined and fixed light intensity and color temperature. To avoid motion parallax when moving the sensor, we kept the depth of field of the scene within a small range. We used a lens with fixed focal length and aperture, and a low sensitivity (ISO 100 or less) to avoid severe noise. We divided the data into 10 test images, namely PixelShiftTest, and 200 4K-resolution training images, namely PixelShift200. Some examples are shown in Figure 4; one can see that the pixel shift results are artifact-free and thus provide better ground truth for training. Figure 5: Comparison of our approach with ADMM [40], Condat [6], FlexISP [18] and the fine-tuned DemosaicNet [13] on noisy synthetic test images (panels, left to right: ADMM [40] $+$ ∗CARN [3], Condat [6] $+$ ∗CARN [3], FlexISP [18] $+$ ∗CARN [3], †DemosaicNet $+$ ∗CARN [3], TENet (ours), Ground Truth). The Gaussian noise level is 10 and the SR factor is 2. ## 5 Experiments ### 5.1 Data Preprocessing and Network Training Since the downsampling operation is performed on the raw mosaic image, we propose to employ pixel averaging as the downsampling method. In previous work, camera hardware binning was used to implement the downsampling operation directly on a monochromatic sensor [28, 27]. However, due to the presence of the CFA on a color sensor, it is difficult to adopt such a hardware binning technique on color mosaic images. In our experiments, we simulate hardware-binning downsampling by performing pixel-averaging downsampling on the fully color-sampled images obtained by the pixel shift technique. We employ white Gaussian noise for the noisy input synthesis. We conduct comparisons on both existing high-quality image datasets and our real-world dataset. For the high-quality data, we use the DIV2K dataset [1], which contains 800 2K-resolution images for image restoration tasks. Beyond the DIV2K training set, we further use the Flickr2K dataset [41], consisting of 2650 2K-resolution images, to enrich our training set. For the real-world data, we use the proposed PixelShift200, which contains 200 4K-resolution images, as the training set. For the optimization of network parameters, we use Adam [25] with $\beta_{1}=0.9,\beta_{2}=0.999$ and a learning rate of $1\times 10^{-4}$. The mini-batch size is set to 16. The spatial size of the cropped HR color patches is $256\times 256$. We implement our models with the PyTorch framework and train them using NVIDIA Titan Xp GPUs. The entire training process takes about two days.
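A minimal sketch of this preprocessing, assuming images normalized to $[0,1]$, an RGGB layout, and noise levels quoted on the 0–255 scale (all assumptions for illustration), is:

```python
import torch
import torch.nn.functional as F

def average_downsample(img: torch.Tensor, factor: int = 2) -> torch.Tensor:
    # Simulate hardware binning by pixel averaging on the full-color
    # (pixel-shift) image; img has shape (N, 3, H, W).
    return F.avg_pool2d(img, kernel_size=factor)

def mosaic_rggb(img: torch.Tensor) -> torch.Tensor:
    # Sample an RGGB Bayer mosaic from a full-color (N, 3, H, W) image.
    n, _, h, w = img.shape
    out = torch.zeros(n, 1, h, w, dtype=img.dtype, device=img.device)
    out[:, 0, 0::2, 0::2] = img[:, 0, 0::2, 0::2]  # R
    out[:, 0, 0::2, 1::2] = img[:, 1, 0::2, 1::2]  # G
    out[:, 0, 1::2, 0::2] = img[:, 1, 1::2, 0::2]  # G
    out[:, 0, 1::2, 1::2] = img[:, 2, 1::2, 1::2]  # B
    return out

def add_gaussian_noise(mosaic: torch.Tensor, sigma: float) -> torch.Tensor:
    # White Gaussian noise; sigma on the 0-255 scale, image in [0, 1].
    return mosaic + torch.randn_like(mosaic) * (sigma / 255.0)
```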
Table 3: Quantitative comparison of different pipelines on the demosaicing, denoising and SR mixture problem. In this experiment, the tasks are performed step by step in different orders, with a fixed network for each task. Method | Kodak | Urban100 [20] ---|---|--- PSNR | SSIM | PSNR | SSIM DM $\to$ SR $\to$ DN | 26.40 | 0.6495 | 24.98 | 0.7029 SR $\to$ DM $\to$ DN | 26.86 | 0.6796 | 25.42 | 0.7311 SR $\to$ DN $\to$ DM | 27.28 | 0.7089 | 25.86 | 0.7589 DM $\to$ DN $\to$ SR | 26.97 | 0.6991 | 25.38 | 0.7491 DN $\to$ DM $\to$ SR | 28.40 | 0.8028 | 26.55 | 0.8355 DN $\to$ SR $\to$ DM | 28.45 | 0.8038 | 26.75 | 0.8395 Table 4: Quantitative comparison of different joint solutions on the demosaicing, denoising and SR mixture problem. In this experiment, the tasks are jointly or partially jointly performed (denoted by $+$) in different orders. Method | Kodak | Urban100 [20] ---|---|--- PSNR | SSIM | PSNR | SSIM SR $\to$ DN $+$ DM | 27.27 | 0.7062 | 25.89 | 0.7579 DN $\to$ DM $+$ SR | 28.47 | 0.8041 | 26.76 | 0.8396 DM $\to$ DN $+$ SR | 27.07 | 0.6869 | 25.86 | 0.7468 DM $+$ DN $\to$ SR | 28.43 | 0.8039 | 26.59 | 0.8365 DM $+$ SR $\to$ DN | 26.67 | 0.6618 | 25.26 | 0.7149 DN $+$ SR $\to$ DM | 28.54 | 0.8048 | 26.96 | 0.8437 SR $+$ DN $+$ DM | 28.56 | 0.8050 | 27.10 | 0.8451 SR $+$ DN $+$ DM, w/$\mathcal{L}_{SR}$ | 28.60 | 0.8051 | 27.14 | 0.8458 ### 5.2 Experiments on Synthetic Test Images We compare our method on several public benchmark datasets under both noise-free and noisy settings. Since the mixture problem of demosaicing and SR has received little dedicated study, we implement the comparison by combining demosaicing methods with the state-of-the-art SR method CARN [3]. For a fair comparison, the CARN model used is fine-tuned with pixel-averaging downsampling, denoted ∗CARN. We also provide a comparison with the jointly trained DemosaicNet and ∗CARN (denoted †DemosaicNet $+$ ∗CARN). For the noise-free setting, we compare our noise-free model with combinations of ∗CARN and state-of-the-art demosaicing methods. The quantitative comparison is shown in Table 1. One can see that the jointly fine-tuned †DemosaicNet and ∗CARN achieves better results than the original models, which demonstrates the effectiveness of the joint strategy. Moreover, our TENet outperforms all existing solutions on the mixture problem of SR and demosaicing. For the noisy input setting, we compare our final model with combinations of ∗CARN and state-of-the-art demosaicing and denoising methods. Table 2 shows the quantitative comparison; our TENet again outperforms all existing solutions on this mixture problem. Some examples are shown in Figure 5, and more comparison results are shown in the supplementary material. As can be seen, for ADMM and Condat, the demosaicing is affected by the noise, resulting in over-smoothed results and color-aliasing artifacts; the subsequent SR task further magnifies this distortion. For FlexISP, the demosaiced image contains serious noise-induced artifacts that damage the final visual quality. †DemosaicNet produces artifacts and also fails to recover high-frequency details. The proposed TENet is able to produce clean images with rich and accurate details.
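For reference, the PSNR values reported in Tables 1–4 follow the standard definition; a minimal sketch is below (evaluation conventions such as color space and border cropping vary across works and are omitted here):

```python
import torch

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> float:
    # Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    mse = torch.mean((pred - target) ** 2)
    return float(10.0 * torch.log10(max_val ** 2 / mse))
```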
Figure 6: Comparison of our approach with dcraw, Camera Raw, DemosaicNet, and two TENet models trained on the 2K image dataset and on the proposed PixelShift200, using real-world test raw images (panels, left to right: dcraw $+$ ∗CARN [3], Camera Raw $+$ ∗CARN [3], †DemosaicNet $+$ ∗CARN [3], TENet w/ 2K training set, TENet w/ PixelShift200). ### 5.3 Experiments on Real-World Test Images We test the proposed TENet on a randomly selected raw image shot with an iPhone X mobile phone. We compare our method with the jointly fine-tuned DemosaicNet and CARN, dcraw, and the popular commercial photography software Camera Raw. Figure 1 shows the visual comparison. As one can see, the proposed TENet produces clean results with rich details. Figure 6 shows results on the proposed PixelShiftTest test set. Our proposed method successfully reconstructs high-frequency texture without generating artifacts or color aliasing. ## 6 Ablation Study In order to study the effect of each component of the proposed method, we gradually modify the baseline model and compare the differences. In the ablation study, we denote demosaicing by DM, denoising by DN and super-resolution by SR. We use the same network architecture with 6 RRDBs to implement the SR, denoising and demosaicing tasks, respectively. Each model is trained separately, using the DIV2K and Flickr2K datasets. For the denoising and SR tasks, we prepare models for both mosaic images and color images. ### 6.1 Comparison of Pipelines In this section, we study the performance of different pipeline orders. Based on the above models, we implement different image processing pipelines and test with an SR factor of 2 and a noise level of 10. The quantitative comparison results are shown in Table 3. As one can see, when DN is not performed first, the numerical performance declines sharply due to artifacts that are difficult to remove. When DN is fixed as the first task, exchanging DM and SR improves the performance. The proposed pipeline order outperforms the others. ### 6.2 Effects of Joint Solution In this section, we study the performance of different joint solutions. We apply the joint strategy to two or more of the above tasks, and then perform the remaining task in different orders. The quantitative comparison results are shown in Table 4. As can be seen, for similar pipeline orders, the performance of a joint solution is generally better than that of the corresponding solution without the joint strategy. As in the previous experiment, when DN is not performed first, the numerical performance declines sharply. According to rows 4 and 6 of Table 4, solutions that jointly perform DN together with another task first achieve relatively good performance. The solution DN $+$ SR $\to$ DM outperforms the traditional solution DM $+$ DN $\to$ SR, which again demonstrates the effectiveness of the proposed pipeline order. As the last two rows reveal, jointly performing DN, SR and DM with a single large network outperforms all partially joint solutions. In particular, we are able to further improve the performance by employing the additional SR loss $\mathcal{L}_{SR}$. ### 6.3 Comparison of Different Datasets In this section, we compare TENet trained on different training datasets using real-world images. Figure 6 shows a qualitative comparison of several popular or state-of-the-art solutions and TENet. We observe that TENet trained on 2K images performs well in most regions but fails under extreme conditions. One reason is that the training set lacks good sampling of such difficult cases. Our results demonstrate the effectiveness of the proposed PixelShift200 dataset. ## 7 Conclusion In this paper, we conduct a thorough analysis of the interactions among denoising, demosaicing and SR.
We propose TENet to jointly solve these three tasks in a specific pipeline order. Our quantitative and qualitative experimental results demonstrate the effectiveness of the proposed pipeline order and joint strategy. We also contribute a fully color-sampled dataset, PixelShift200, for training demosaicking-related tasks. Qualitative results on real raw mosaic images show that our model trained on PixelShift200 outperforms combinations of state-of-the-art demosaicking and SR methods. Our work shows the potential of performing complex processing directly on raw images. ## References * [1] Eirikur Agustsson and Radu Timofte. Ntire 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 126–135, 2017. * [2] Michal Aharon, Michael Elad, and Alfred Bruckstein. K-svd: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311–4322, 2006. * [3] Namhyuk Ahn, Byungkon Kang, and Kyung-Ah Sohn. Fast, accurate, and lightweight super-resolution with cascading residual network. In Proceedings of the European Conference on Computer Vision (ECCV), pages 252–268, 2018. * [4] Antoni Buades, Bartomeu Coll, and J-M Morel. A non-local algorithm for image denoising. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 2, pages 60–65. IEEE, 2005. * [5] Harold C Burger, Christian J Schuler, and Stefan Harmeling. Image denoising: Can plain neural networks compete with bm3d? In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 2392–2399. IEEE, 2012. * [6] Laurent Condat and Saleh Mosaddegh. Joint demosaicking and denoising by total variation minimization. In 2012 19th IEEE International Conference on Image Processing, pages 2781–2784. IEEE, 2012. * [7] Laurent Condat and Saleh Mosaddegh. Joint demosaicking and denoising by total variation minimization. In 2012 19th IEEE International Conference on Image Processing, pages 2781–2784. IEEE, 2012. * [8] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080–2095, 2007. * [9] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009. * [10] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2):295–307, 2016. * [11] Chao Dong, Chen Change Loy, and Xiaoou Tang. Accelerating the super-resolution convolutional neural network. In European Conference on Computer Vision, pages 391–407. Springer, 2016. * [12] Sina Farsiu, Michael Elad, and Peyman Milanfar. Multiframe demosaicing and super-resolution from undersampled color images. In Computational Imaging II, volume 5299, pages 222–234. International Society for Optics and Photonics, 2004. * [13] Michaël Gharbi, Gaurav Chaurasia, Sylvain Paris, and Frédo Durand. Deep joint demosaicking and denoising. ACM Transactions on Graphics (TOG), 35(6):191, 2016. * [14] Jinwook Go, Kwanghoon Sohn, and Chulhee Lee. Interpolation using neural networks for digital still cameras. IEEE Transactions on Consumer Electronics, 46(3):610–616, 2000. * [15] Jinjin Gu, Hannan Lu, Wangmeng Zuo, and Chao Dong.
Blind super-resolution with iterative kernel correction. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2019. * [16] Shuhang Gu, Lei Zhang, Wangmeng Zuo, and Xiangchu Feng. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2862–2869, 2014. * [17] Fang-Lin He, Yu-Chiang Frank Wang, and Kai-Lung Hua. Self-learning approach to color demosaicking via support vector regression. In 2012 19th IEEE International Conference on Image Processing, pages 2765–2768. IEEE, 2012. * [18] Felix Heide, Markus Steinberger, Yun-Ta Tsai, Mushfiqur Rouf, Dawid Pajak, Dikpal Reddy, Orazio Gallo, Jing Liu, Wolfgang Heidrich, Karen Egiazarian, et al. Flexisp: A flexible camera image processing framework. ACM Transactions on Graphics (TOG), 33(6):231, 2014. * [19] Keigo Hirakawa and Thomas W Parks. Adaptive homogeneity-directed demosaicing algorithm. IEEE Transactions on Image Processing, 14(3):360–369, 2005. * [20] Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from transformed self-exemplars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5197–5206, 2015. * [21] Oren Kapah and Hagit Zabrodsky Hel-Or. Demosaicking using artificial neural networks. In Applications of Artificial Neural Networks in Image Processing V, volume 3962, pages 112–121. International Society for Optics and Photonics, 2000. * [22] Daniel Khashabi, Sebastian Nowozin, Jeremy Jancsary, and Andrew W Fitzgibbon. Joint demosaicing and denoising via learned nonparametric random fields. IEEE Transactions on Image Processing, 23(12):4968–4981, 2014. * [23] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1646–1654, 2016. * [24] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Deeply-recursive convolutional network for image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1637–1645, 2016. * [25] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. * [26] Teresa Klatzer, Kerstin Hammernik, Patrick Knobelreiter, and Thomas Pock. Learning joint demosaicing and denoising based on sequential energy minimization. In 2016 IEEE International Conference on Computational Photography (ICCP), pages 1–11. IEEE, 2016. * [27] Thomas Köhler, Michel Bätz, Farzad Naderi, André Kaup, Andreas Maier, and Christian Riess. Bridging the simulated-to-real gap: Benchmarking super-resolution on real data. arXiv preprint arXiv:1809.06420, 2018. * [28] Thomas Köhler, Michel Bätz, Farzad Naderi, André Kaup, Andreas K Maier, and Christian Riess. Benchmarking super-resolution algorithms on real data. arXiv preprint arXiv:1709.04881, 2017. * [29] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew P Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, volume 2, page 4, 2017. * [30] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 136–144, 2017. 
* [31] Henrique S Malvar, Li-wei He, and Ross Cutler. High-quality linear interpolation for demosaicing of bayer-patterned color images. In Acoustics, Speech, and Signal Processing, 2004. Proceedings.(ICASSP’04). IEEE International Conference on, volume 3, pages iii–485. IEEE, 2004. * [32] David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, volume 2, pages 416–423. IEEE, 2001. * [33] Pietro Perona and Jitendra Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on pattern analysis and machine intelligence, 12(7):629–639, 1990. * [34] Leonid I Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena, 60(1-4):259–268, 1992. * [35] Uwe Schmidt and Stefan Roth. Shrinkage fields for effective image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2774–2781, 2014. * [36] Eero P Simoncelli and Edward H Adelson. Noise removal via bayesian wavelet coring. In Proceedings of 3rd IEEE International Conference on Image Processing, volume 1, pages 379–382. IEEE, 1996. * [37] Chung-Yen Su. Highly effective iterative demosaicing using weighted-edge and color-difference interpolations. IEEE Transactions on Consumer Electronics, 52(2):639–645, 2006\. * [38] Jian Sun and Marshall F Tappen. Separable markov random field model and its applications in low level vision. IEEE transactions on image processing, 22(1):402–407, 2013. * [39] Nai-Sheng Syu, Yu-Sheng Chen, and Yung-Yu Chuang. Learning deep convolutional networks for demosaicing. arXiv preprint arXiv:1802.03769, 2018. * [40] Hanlin Tan, Xiangrong Zeng, Shiming Lai, Yu Liu, and Maojun Zhang. Joint demosaicing and denoising of noisy bayer images with admm. In 2017 IEEE International Conference on Image Processing (ICIP), pages 2951–2955. IEEE, 2017. * [41] Radu Timofte, Eirikur Agustsson, Luc Van Gool, Ming-Hsuan Yang, and Lei Zhang. Ntire 2017 challenge on single image super-resolution: Methods and results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 114–125, 2017. * [42] Chi-Yi Tsai and Kai-Tai Song. A new edge-adaptive demosaicing algorithm for color filter arrays. Image and Vision Computing, 25(9):1495–1508, 2007. * [43] Patrick Vandewalle, Karim Krichane, David Alleysson, and Sabine Süsstrunk. Joint demosaicing and super-resolution imaging from a set of unregistered aliased images. In Digital Photography III, volume 6502, page 65020A. International Society for Optics and Photonics, 2007. * [44] Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. Esrgan: Enhanced super-resolution generative adversarial networks. In European Conference on Computer Vision, pages 63–79. Springer, 2018. * [45] Jun Xu, Lei Zhang, and David Zhang. A trilateral weighted sparse coding scheme for real-world image denoising. In Proceedings of the European Conference on Computer Vision (ECCV), pages 20–36, 2018. * [46] Ke Yu, Chao Dong, Liang Lin, and Chen Change Loy. Crafting a toolchain for image restoration by deep reinforcement learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2443–2452, 2018. 
* [47] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing, 26(7):3142–3155, 2017. * [48] Kai Zhang, Wangmeng Zuo, and Lei Zhang. Ffdnet: Toward a fast and flexible solution for cnn-based image denoising. IEEE Transactions on Image Processing, 27(9):4608–4622, 2018. * [49] Kai Zhang, Wangmeng Zuo, and Lei Zhang. Learning a single convolutional super-resolution network for multiple degradations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3262–3271, 2018. * [50] Lei Zhang and Xiaolin Wu. Color demosaicking via directional linear minimum mean square-error estimation. IEEE Transactions on Image Processing, 14(12):2167–2178, 2005. * [51] Lei Zhang, Xiaolin Wu, Antoni Buades, and Xin Li. Color demosaicking by local directional interpolation and nonlocal adaptive thresholding. Journal of Electronic imaging, 20(2):023016, 2011. * [52] Lei Zhang, Xiaolin Wu, Antoni Buades, and Xin Li. Color demosaicking by local directional interpolation and nonlocal adaptive thresholding. Journal of Electronic imaging, 20(2):023016, 2011. * [53] Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2472–2481, 2018. * [54] Ruofan Zhou, Radhakrishna Achanta, and Sabine Süsstrunk. Deep residual network for joint demosaicing and super-resolution. In Color and Imaging Conference, volume 2018, pages 75–80. Society for Imaging Science and Technology, 2018.
$A^{\vee}[\nu]\simeq R^{1}p_{*}\mu_{\nu}$, since the polarization has degree prime to $\nu$. Then the Weil pairing gives $A[\nu]$ the structure of a symplectically self-dual sheaf on $U$. Further, with notation as in 5.1.4, $A[\nu]^{n}_{B}$ defines a sheaf on $\mathscr{U}^{n}_{B}$. An important example of a Selmer sheaf for us will be ${\mathcal{S}e\ell}_{A[\nu]^{n}_{B}}=R^{1}\lambda_{*}\left(j_{*}A[\nu]^{n}_{B}\right)$. ###### Example 5.1.9. A slightly more general setup than 5.1.8 is the following. Suppose we are in the setting of 5.1.4, and $b\in B$ is a closed point. Suppose we are given a symplectically self-dual sheaf $\mathscr{F}$ over $C$ so that the fiber $\mathscr{F}_{b}$ over $U_{b}$ defines a sheaf of the form $A_{b}[\nu]$, for $p:A\to U_{b}$ a polarized abelian scheme with polarization degree prime to $\nu$. Then we obtain a Selmer sheaf $\mathscr{F}^{n}_{B}$ over $\mathscr{U}^{n}_{B}$ so that $\mathscr{F}^{n}_{b}\simeq{\mathcal{S}e\ell}_{A[\nu]^{n}}$. The difference between this and 5.1.8 is that we may not have any abelian scheme over $U$ restricting to $A$ over $U_{b}$. ###### Remark 5.1.10. In fact, 5.1.8 will be the setting we work in to prove our main result, Theorem 1.1.2, because it is relatively easy to lift symplectically self-dual sheaves from the closed point of a DVR to the whole DVR, as we explain in 10.2.2, but we are unsure whether it is possible to lift abelian schemes in our setting. We conclude with some notation recording the data associated to a quadratic twist, which we will use throughout the paper. ###### Notation 5.1.11. With notation as in 5.1.4, for $x\in\operatorname{QTwist}^{n}_{U/B}$ a point or geometric point, let $y$ denote the image of $x$ under the map $\operatorname{QTwist}^{n}_{U/B}\to\operatorname{Conf}^{n}_{U/B}$. We use $C_{x}$ to denote the fiber of $\xi:\mathscr{C}^{n}_{B}\to\operatorname{Conf}^{n}_{U/B}$ over $y$, $U_{x}$ to denote the fiber of $\xi\circ j$ over $y$, and $\mathscr{F}_{x}$ to denote the fiber of $\mathscr{F}^{n}_{B}$ over the point $x$. Suppose now that we are further in the setup of 5.1.8, or else in the setup of 5.1.9 with $x\in\operatorname{QTwist}^{n}_{U_{b}/b}$. We then use $A_{x}$ to denote the fiber of the abelian scheme $t_{*}t^{*}h^{*}A/h^{*}A$ over $x$, where $t^{*}$ and $h^{*}$ denote pullback along $t$ and $h$, and $t_{*}$ denotes the Weil restriction along $t$. Note that $A_{x}$ is an abelian scheme over $U_{x}$. We use $\mathscr{A}_{x}$ to denote the Néron model over $C_{x}$ of $A_{x}\to U_{x}$. We let $D_{x}\subset C_{x}-U_{x}$ denote the divisor associated to $y$, the image of $x$ under the projection $\operatorname{QTwist}^{n}_{U/B}\to\operatorname{Conf}^{n}_{U/B}$. ### 5.2. Basic properties of the Selmer stack We next develop some basic properties of the Selmer stack. The next lemma shows the Selmer sheaf commutes with base change. The proof is similar to [FLR23, Lemma 2.6], though some additional technical difficulties arise from working over the space of quadratic twists instead of the universal family. ###### Lemma 5.2.1. With notation as in 5.1.4 (in particular, $\mathscr{F}$ is a tame symplectically self-dual sheaf), suppose $2\nu$ is invertible on $B$. Then the sheaf ${\mathcal{S}e\ell}_{\mathscr{F}^{n}_{B}}$ is locally constant constructible and its formation commutes with base change on $\operatorname{QTwist}^{n}_{U/B}$.
Further, for $\overline{\lambda}:=\lambda\circ j$, both $R^{i}\overline{\lambda}_{*}\left(\mathscr{F}^{n}_{B}\right)$ and $R^{i}\overline{\lambda}_{!}\left(\mathscr{F}^{n}_{B}\right)$ are locally constant constructible for all $i\geq 0$ and their formation commutes with base change on $\operatorname{QTwist}^{n}_{U/B}$. ###### Proof. In order to prove the result, we first set some notation. We have a natural map $\phi:R^{1}\overline{\lambda}_{!}\mathscr{F}^{n}_{B}\to{\mathcal{S}e\ell}_{\mathscr{F}^{n}_{B}}$ obtained from the map $j_{!}\mathscr{F}^{n}_{B}\to j_{*}\mathscr{F}^{n}_{B}$ and the identification $R^{1}\overline{\lambda}_{!}(\mathscr{F}^{n}_{B})=R^{1}\lambda_{*}\left(j_{!}\mathscr{F}^{n}_{B}\right)$. Similarly, we have a map $\psi:{\mathcal{S}e\ell}_{\mathscr{F}^{n}_{B}}\to R^{1}\overline{\lambda}_{*}\mathscr{F}^{n}_{B}$ obtained from the composition of functors spectral sequence for $\lambda\circ j$. Our first goal is to show ${\mathcal{S}e\ell}_{\mathscr{F}^{n}_{B}}$ is the image of $\psi\circ\phi$. Note that $\psi$ is injective by the Leray spectral sequence. Therefore, to show ${\mathcal{S}e\ell}_{\mathscr{F}^{n}_{B}}$ is the image of $\psi\circ\phi$, it only remains to show $\phi$ is surjective. Because $\chi:j_{!}\mathscr{F}^{n}_{B}\to j_{*}\mathscr{F}^{n}_{B}$ is an isomorphism over $\mathscr{U}^{n}_{B}$, $\operatorname{coker}\chi$ is supported on $\mathscr{D}^{n}_{B}$, which is finite over $\operatorname{Conf}^{n}_{U/B}$, and hence $R^{1}\lambda_{*}(\operatorname{coker}\chi)=0$. This implies $\phi$ is surjective, and so ${\mathcal{S}e\ell}_{\mathscr{F}^{n}_{B}}$ is a constructible sheaf. We conclude by showing that $R^{1}\overline{\lambda}_{!}\mathscr{F}^{n}_{B}$ and $R^{1}\overline{\lambda}_{*}\left(\mathscr{F}^{n}_{B}\right)$ are both locally constant constructible, and that their formation commutes with base change. This will imply $\operatorname{Sel}_{\mathscr{F}^{n}_{B}}$ is locally constant constructible and its formation commutes with base change, as it is the image of the map $R^{1}\overline{\lambda}_{!}\mathscr{F}^{n}_{B}\to R^{1}\overline{\lambda}_{*}\left(\mathscr{F}^{n}_{B}\right)$. We first show $R^{i}\overline{\lambda}_{!}\mathscr{F}^{n}_{B}$ is locally constant constructible for all $i\geq 0$ in the case that $\nu$ is prime; note that its formation commutes with base change by proper base change, for any $\nu$. Using [Lau81, Corollaire 2.1.2 and Remarque 2.1.3], it is enough to show the Swan conductor of $\mathscr{F}^{n}_{B}$ is constant. As in [Lau81, Remarque 2.1.3], the Swan conductor over a point $[D]\in\operatorname{Conf}^{n}_{U/B}$ is a sum of local contributions, one for each geometric point of $D$ and one for each geometric point of $Z$ over the image of $D$ in $B$. At each geometric point of $D$, because we are taking a quadratic twist along $D$, the ramification index is $2$, and hence the ramification is tame, since $2$ is invertible on $B$. We are also assuming the ramification along points of $Z$ is tame for $\mathscr{F}$. This is identified with the corresponding ramification for $\mathscr{F}^{n}_{B}$ along points of $Z$, and hence this is tame as well. Therefore, the Swan conductor vanishes identically. Next, we show $R^{i}\overline{\lambda}_{!}\mathscr{F}^{n}_{B}$ is locally constant constructible for every positive integer $\nu$ as in the statement of the lemma, using the case of prime $\nu$ settled above.
As an initial step, we may reduce to the case that $\nu=\ell^{t}$ is a prime power by observing that if $\nu$ has prime factorization $\nu=\prod_{\ell}\ell^{t_{\ell}}$ then $\mu_{\nu}\simeq\oplus_{\ell}\mu_{\ell^{t_{\ell}}}$. Now suppose $\nu=\ell^{t}$ is a prime power, and inductively assume we have proven $R^{i}\overline{\lambda}_{!}\mathscr{F}^{n}_{B}[\ell^{t-1}]$ is locally constant constructible for all $i$. Since $\nu=\ell^{t}$, we have an exact sequence $0\to\mathscr{F}^{n}_{B}[\ell^{t-1}]\to\mathscr{F}^{n}_{B}\to\mathscr{F}^{n}_{B}[\ell]\to 0.$ Applying $R\overline{\lambda}_{!}$ to the above sequence, we get a long exact sequence on cohomology $R^{i-1}\overline{\lambda}_{!}\mathscr{F}^{n}_{B}[\ell]\to R^{i}\overline{\lambda}_{!}\mathscr{F}^{n}_{B}[\ell^{t-1}]\to R^{i}\overline{\lambda}_{!}\mathscr{F}^{n}_{B}\to R^{i}\overline{\lambda}_{!}\mathscr{F}^{n}_{B}[\ell]\to R^{i+1}\overline{\lambda}_{!}\mathscr{F}^{n}_{B}[\ell^{t-1}].$ Since all but the middle term are locally constant constructible by our inductive assumption, it follows that $R^{i}\overline{\lambda}_{!}(\mathscr{F}^{n}_{B}[\ell^{t}])$ is also locally constant constructible by [Sta, Tag 093U]. We conclude by showing $R^{1}\overline{\lambda}_{*}\left(\mathscr{F}^{n}_{B}\right)$ is locally constant constructible and its formation commutes with base change. Since $R^{i}\overline{\lambda}_{!}\mathscr{F}^{n}_{B}$ is locally constant constructible, it follows from Poincaré duality [Ver67, Theorem 4.8] and the isomorphism coming from the polarization of degree prime to $\nu$ that $\mathscr{H}om\left(R^{-i}\overline{\lambda}_{!}\left(\mathscr{F}^{n}_{B}\right),\mu_{\nu}\right)\xleftarrow{\simeq}R^{i+2}\overline{\lambda}_{*}R\mathscr{H}om(\mathscr{F}^{n}_{B},\mu_{\nu})\simeq R^{i+2}\overline{\lambda}_{*}(\mathscr{F}^{n}_{B}).$ Taking $i=-2+s$ gives $(R^{2-s}\overline{\lambda}_{!}(\mathscr{F}^{n}_{B}))^{\vee}(1)\simeq R^{s}\overline{\lambda}_{*}\left(\mathscr{F}^{n}_{B}\right)$. Since we have seen $(R^{2-s}\overline{\lambda}_{!}(\mathscr{F}^{n}_{B}))^{\vee}$ is locally constant constructible and its formation commutes with base change, the same holds for $R^{s}\overline{\lambda}_{*}\left(\mathscr{F}^{n}_{B}\right)$. ∎ ###### Notation 5.2.2. Let $k$ be a field and let $C$ be a smooth proper geometrically connected curve over $k$ of genus $g$, with $U^{\prime}\subset C$ an open subscheme. Let $A^{\prime}$ be an abelian scheme over $U^{\prime}$ with Néron model $\mathscr{A}^{\prime}\to C$. Let $\Phi_{A^{\prime}}:=\left(\mathscr{A}^{\prime}/\mathscr{A}^{\prime 0}\right)(k)$ denote the component group of the Néron model of $A^{\prime}$. We use $\Phi_{A^{\prime}_{\overline{k}}}:=\left(\mathscr{A}^{\prime}_{\overline{k}}/\mathscr{A}^{\prime 0}_{\overline{k}}\right)(\overline{k})=\left(\mathscr{A}^{\prime}/\mathscr{A}^{\prime 0}\right)(\overline{k})$ to denote the geometric component group. The following proof is quite similar to [Lan21, Lemma 3.21]. We thank Tony Feng for suggesting the idea that appeared there for bootstrapping from the prime case to the general case, which we reuse here. In the next lemma, note that since we are working over an algebraically closed field, the component group is the same as the geometric component group. ###### Lemma 5.2.3. Let $k$ be an algebraically closed field and let $C$ be a smooth proper geometrically connected curve over $k$ of genus $g$. Let $\mathscr{F}^{\prime}$ be a symplectically self-dual sheaf on an open $j:U^{\prime}\subset C$. Suppose that
(1) for each prime $\ell\mid\nu$ with $\ell^{w}\mid\nu$, and each $t\leq w$, the multiplication by $\ell^{t}$ map $j_{*}\mathscr{F}^{\prime}[\ell^{w}]\to j_{*}\mathscr{F}^{\prime}[\ell^{w-t}]$ is surjective; (2) $j_{*}\mathscr{F}^{\prime}(C)=0$. Then $H^{1}(C,j_{*}\mathscr{F}^{\prime}[\nu])$ is a free $\mathbb{Z}/\nu\mathbb{Z}$ module. In the case that $j_{*}\mathscr{F}^{\prime}$ is of the form $A^{\prime}[\nu]$, for $A^{\prime}\to U^{\prime}$ an abelian scheme, hypothesis $(1)$ above is satisfied if the geometric component group $\Phi_{A^{\prime}}$ has order prime to $\nu$. ###### Proof. Using the Chinese remainder theorem, we can reduce to the case that $\nu=\ell^{w}$ is a prime power. Suppose $H^{1}(C,j_{*}\mathscr{F}^{\prime}[\ell])\simeq(\mathbb{Z}/\ell\mathbb{Z})^{r}$. We will show by induction on $w$ that $H^{1}(C,j_{*}\mathscr{F}^{\prime}[\ell^{w}])\simeq(\mathbb{Z}/\ell^{w}\mathbb{Z})^{r}$. For $0\leq t\leq w$ we claim there is an exact sequence (5.1) $0\to j_{*}\mathscr{F}^{\prime}[\ell^{t}]\to j_{*}\mathscr{F}^{\prime}[\ell^{w}]\to j_{*}\mathscr{F}^{\prime}[\ell^{w-t}]\to 0.$ This sequence is left exact because the analogous sequence for $\mathscr{F}^{\prime}$ in place of $j_{*}\mathscr{F}^{\prime}$ is left exact, and right exact by assumption (1) of the lemma. We now prove the final claim of the lemma: in the case $\mathscr{F}^{\prime}\simeq A^{\prime}[\nu]$, the cokernel of the map $j_{*}\mathscr{F}^{\prime}[\ell^{w}]\to j_{*}\mathscr{F}^{\prime}[\ell^{w-t}]$ is identified with $\Phi_{A^{\prime}}/\ell^{t}\Phi_{A^{\prime}}$. Since $\ell^{t}\mid\nu$ and $\Phi_{A^{\prime}}$ has order prime to $\nu$, this cokernel is trivial, so $(1)$ holds in this case. We next claim $H^{0}(C,j_{*}\mathscr{F}^{\prime}[\ell^{t}])=H^{2}(C,j_{*}\mathscr{F}^{\prime}[\ell^{t}])=0$. The former holds by assumption $(2)$. By [Mil80, V Proposition 2.2(b)] and the polarization $(\mathscr{F}^{\prime}[\ell^{t}])^{\vee}(1)\simeq\mathscr{F}^{\prime}[\ell^{t}]$, we find $H^{2}(C,j_{*}\mathscr{F}^{\prime}[\ell^{t}])\simeq H^{0}\left(C,j_{*}\left(\left(\mathscr{F}^{\prime}[\ell^{t}]\right)^{\vee}(1)\right)\right)^{\vee}\simeq H^{0}(C,j_{*}\mathscr{F}^{\prime}[\ell^{t}])^{\vee}\simeq H^{0}(C,\mathscr{F}^{\prime}[\ell^{t}])^{\vee}=0.$ The long exact sequence associated to (5.1) and the vanishing of the $0$th and $2$nd cohomology above yield an exact sequence (5.2) $0\to H^{1}(C,j_{*}\mathscr{F}^{\prime}[\ell^{t}])\xrightarrow{\alpha^{t}}H^{1}(C,j_{*}\mathscr{F}^{\prime}[\ell^{w}])\xrightarrow{\beta^{t}}H^{1}(C,j_{*}\mathscr{F}^{\prime}[\ell^{w-t}])\to 0.$ Induction on $w$ implies $\#H^{1}(C,j_{*}\mathscr{F}^{\prime}[\ell^{w}])=\ell^{wr}$, and we wish to show $H^{1}(C,j_{*}\mathscr{F}^{\prime}[\ell^{w}])$ is free of rank $r$. By the structure theorem for finite abelian groups, it suffices to show the kernel of multiplication by $\ell^{w-1}$ on $H^{1}(C,j_{*}\mathscr{F}^{\prime}[\ell^{w}])$ has order $\ell^{(w-1)r}$. The multiplication by $\ell^{w-1}$ map factors as $H^{1}(C,j_{*}\mathscr{F}^{\prime}[\ell^{w}])\xrightarrow{\beta^{w-1}}H^{1}(C,j_{*}\mathscr{F}^{\prime}[\ell])\xrightarrow{\alpha^{1}}H^{1}(C,j_{*}\mathscr{F}^{\prime}[\ell^{w}])$. We know from (5.1) that $\alpha^{1}$ is injective, so $\ker(\times\ell^{w-1})=\ker(\alpha^{1}\circ\beta^{w-1})=\ker\beta^{w-1}=H^{1}(C,j_{*}\mathscr{F}^{\prime}[\ell^{w-1}]),$ which has order $\ell^{(w-1)r}$, as we wished to show.
∎ We next aim to compute a formula for the rank of the Selmer sheaf, in favorable situations, in 5.2.6. First, we introduce the notation needed to state that formula. ###### Definition 5.2.4. Suppose $\nu$ is a prime number. Given a locally constant constructible sheaf $\mathscr{F}$ of free $\mathbb{Z}/\nu\mathbb{Z}$ modules on an open $U^{\prime}\subset C$ of a curve $C$, for any point $x\in C-U^{\prime}$, there is an associated action of the inertia group $I_{x}$ at $x$ on the geometric generic fiber $\mathscr{F}_{\overline{\eta}}$, which is well defined up to conjugacy. We use $\mathrm{Drop}_{x}(\mathscr{F})$ to denote the corank of the invariants of $I_{x}$, i.e., $\mathrm{Drop}_{x}(\mathscr{F}):=\operatorname{rk}\mathscr{F}-\operatorname{rk}\mathscr{F}_{x}^{I_{x}}$. In general, if $\nu$ is not necessarily prime, for each prime $\ell\mid\nu$ we set $\mathrm{Drop}_{x,\ell}(\mathscr{F}):=\mathrm{Drop}_{x}(\mathscr{F}[\ell])$, and if $\mathrm{Drop}_{x,\ell}(\mathscr{F})$ is independent of $\ell$, we denote this common value simply by $\mathrm{Drop}_{x}(\mathscr{F})$. Whenever we use the notation $\mathrm{Drop}_{x}(\mathscr{F})$ in the case $\nu$ has multiple prime divisors, we are implicitly claiming it is independent of the prime divisor. ###### Example 5.2.5. If $\nu$ is prime and $\mathscr{F}\simeq A[\nu]$, then for any $x\in C-U$, $\mathrm{Drop}_{x}(\mathscr{F})=0$ if and only if inertia acts trivially at $x$, i.e., $A[\nu]$ extends over the point $x$. If $A$ is a relative elliptic curve and the order of the geometric component group of the Néron model of $A$ at $x$ is prime to $\nu$, then $\mathrm{Drop}_{x}(\mathscr{F})=1$ whenever $A$ has multiplicative reduction at $x$ and $\mathrm{Drop}_{x}(\mathscr{F})=2$ whenever $A$ has additive reduction at $x$. ###### Proposition 5.2.6. Maintain notation as in 5.1.4; in particular, $\mathscr{F}$ is a tame symplectically self-dual sheaf. Suppose $\nu$ is odd and $n>0$. Let ${\overline{b}}$ be a geometric point of $B$. Assume that (1) for each prime $\ell\mid\nu$ with $\ell^{w}\mid\nu$, and each $t\leq w$, the multiplication by $\ell^{t}$ map $j_{*}\mathscr{F}_{\overline{b}}[\ell^{w}]\to j_{*}\mathscr{F}_{\overline{b}}[\ell^{w-t}]$ is surjective; (3) the sheaf $\mathscr{F}[\ell]$ is irreducible for each prime $\ell\mid\nu$. Assume $2\nu$ is invertible on $B$. For each $x\in\operatorname{QTwist}^{n}_{U/B}$, consider the following three properties: (1') for each prime $\ell\mid\nu$ with $\ell^{w}\mid\nu$, and each $t\leq w$, the multiplication by $\ell^{t}$ map $j_{*}\mathscr{F}_{x}[\ell^{w}]\to j_{*}\mathscr{F}_{x}[\ell^{w-t}]$ is surjective; (2') $j_{*}\mathscr{F}_{x}(C_{x})=0$; (3') the sheaf $\mathscr{F}_{x}[\ell]$ is irreducible for each prime $\ell\mid\nu$. Then $(2^{\prime})$ always holds, $(1^{\prime})$ holds if $(1)$ holds, and $(3^{\prime})$ holds if $(3)$ holds. Moreover, assuming $(1)$ and $(3)$, the map $\pi:\operatorname{Sel}_{\mathscr{F}^{n}_{B}}\to\operatorname{QTwist}^{n}_{U/B}$ is finite étale, representing a locally constant constructible sheaf of free $\mathbb{Z}/\nu\mathbb{Z}$ modules of rank $(2g-2+n)\cdot 2r+\sum_{x\in Z({\overline{b}})}\mathrm{Drop}_{x}(\mathscr{F})$, whose formation commutes with base change. ###### Proof. First, observe that by 5.2.1, $\pi:\operatorname{Sel}_{\mathscr{F}^{n}_{B}}\to\operatorname{QTwist}^{n}_{U/B}$ is finite étale, corresponding to a locally constant sheaf of $\mathbb{Z}/\nu\mathbb{Z}$ modules, and its formation commutes with base change on $\operatorname{QTwist}^{n}_{U/B}$.
We now verify that condition $(1^{\prime})$ holds for quadratic twists $\mathscr{F}_{x}$ of $\mathscr{F}_{\overline{b}}$ ramified over a divisor $D_{x}$ disjoint from $Z_{x}$, using condition $(1)$. If $\mathscr{F}_{\overline{b}}$ corresponds to a representation of $\pi_{1}(U_{x}-D_{x})$, the quadratic twist corresponds to tensoring this representation with an order $2$ character whose local inertia at any point outside of $D_{x}$ is trivial. Surjectivity of the map from $(1^{\prime})$ can only fail at points $p\in D_{x}\cup Z_{x}$. If $p\in Z_{x}$, since surjectivity can be verified locally, surjectivity for $j_{*}\mathscr{F}_{x}$ at $p$ follows from the corresponding surjectivity for $j_{*}\mathscr{F}_{\overline{b}}$ at $p$. If $p\in D_{x}$, the stalk of $j_{*}\mathscr{F}_{x}[\ell^{w-t}]$ is trivial, as it is identified with the invariants of multiplication by $-1$, which are trivial, and so surjectivity at such points is automatic. Next, we check $(2^{\prime})$ holds, using only $n>0$. We wish to show $H^{0}(C_{x},\mathscr{F}_{x})=0$. Thinking of $\mathscr{F}_{x}$ as a representation of $\pi_{1}(U_{x}-D_{x})$, a section corresponds to an invariant vector. However, since $n>0$, local inertia at a point of $D_{x}$ acts by $-1$, and so there are no invariant vectors. Third, we show $(3^{\prime})$ holds for $\mathscr{F}_{x}$, assuming $(3)$ holds for $\mathscr{F}_{\overline{b}}$. Note that the quadratic twist of the sheaf $\mathscr{F}_{\overline{b}}$ is obtained by tensoring the corresponding representation of $\pi_{1}(U_{\overline{b}})$ with a character, which preserves irreducibility. We next show that $\pi$ corresponds to a sheaf of free $\mathbb{Z}/\nu\mathbb{Z}$ modules. We may check this at any point of $\operatorname{QTwist}^{n}_{U/B}$, since the formation of $\operatorname{Sel}_{\mathscr{F}^{n}_{B}}$ commutes with base change on $\operatorname{QTwist}^{n}_{U/B}$ by 5.2.1. Over a geometric point of $\operatorname{QTwist}^{n}_{U/B}$, hypotheses $(1)$ and $(2)$ of 5.2.3, which follow from $(1^{\prime})$ and $(2^{\prime})$ of this proposition, are satisfied for any quadratic twist of $\mathscr{F}$. Therefore, $\operatorname{Sel}_{\mathscr{F}^{n}_{B}}$ corresponds to a sheaf of free $\mathbb{Z}/\nu\mathbb{Z}$ modules by 5.2.3. Finally, we compute the rank of this sheaf. Since we have shown $\mathscr{F}$ is an irreducible $\mathbb{Z}/\nu\mathbb{Z}$ locally constant constructible sheaf on $\mathscr{U}^{n}_{B}$, we can compute the formula for its rank after reduction modulo any prime $\ell\mid\nu$, and hence assume that $\nu$ is prime. The formula for the rank is given in [Kat02, Lemma 5.1.3]. Technically, the argument there is given for lisse $\overline{\mathbb{Q}}_{\ell}$ sheaves, but the same computation applies to $\mathbb{Z}/\ell\mathbb{Z}$ sheaves. In particular, with the above assumptions, if $B=\operatorname{Spec}k$ for $k$ an algebraically closed field, ${\mathcal{S}e\ell}_{\mathscr{F}^{n}_{B}}$ has rank $(2g-2+n)\cdot 2r+\sum_{x\in Z}\mathrm{Drop}_{x}(\mathscr{F})$. ∎ ### 5.3. Connecting points of the Selmer stack and Selmer groups The next two lemmas connect the Selmer stack to the sizes of Selmer groups; their proofs are quite similar to [Lan21, Proposition 3.23] and [Lan21, Corollary 3.24], respectively. ###### Lemma 5.3.1. Retaining notation from 5.1.4 and 5.1.11, suppose $n>0$ and $2\nu$ is invertible on $B$, and let $\pi:\operatorname{Sel}_{\mathscr{F}^{n}_{B}}\to\operatorname{QTwist}^{n}_{U/B}$ denote the structure map.
Suppose $\mathscr{F}[\ell]$ is irreducible for each prime $\ell\mid\nu$. Then for $x\in\operatorname{QTwist}^{n}_{U/B}(\mathbb{F}_{q})$, $H^{1}(C_{x},\mathscr{F}_{x})\simeq\left(\pi^{-1}(x)\right)(\mathbb{F}_{q}).$ Note that the right hand side $\left(\pi^{-1}(x)\right)(\mathbb{F}_{q})$ acquires the structure of an abelian group as the points of a locally constant constructible sheaf. ###### Proof. Using 5.2.1, we know the formation of the Selmer sheaf commutes with base change, and hence for $\overline{x}$ a geometric point over $x$, the geometric fiber of $\operatorname{Sel}_{\mathscr{F}^{n}_{B}}$ over $\overline{x}$ is identified with $R^{1}\lambda_{*}(j_{*}\mathscr{F}_{\overline{x}})\simeq H^{1}(C_{\overline{x}},j_{*}\mathscr{F}_{\overline{x}}).$ To distinguish between étale and group cohomology, we use $H^{i}_{\operatorname{grp}}$ to denote group cohomology and $H^{i}_{\operatorname{\acute{e}t}}$ to denote étale cohomology. Let $G_{x}:=\operatorname{Aut}(C_{\overline{x}}/C_{x})$. The $\mathbb{F}_{q}$ points of $\pi^{-1}(x)$ are the $G_{x}$ invariants of $H^{1}_{\operatorname{\acute{e}t}}(C_{\overline{x}},j_{*}\mathscr{F}_{\overline{x}})$. That is, $\pi^{-1}(x)(\mathbb{F}_{q})=H_{\operatorname{grp}}^{0}(G_{x},H^{1}_{\operatorname{\acute{e}t}}(C_{\overline{x}},j_{*}\mathscr{F}_{\overline{x}}))$. We relate this group to $H^{1}(C_{x},j_{*}\mathscr{F}_{x})$ using the Leray spectral sequence (5.3) $0\to H^{1}_{\operatorname{grp}}(G_{x},H^{0}_{\operatorname{\acute{e}t}}(C_{\overline{x}},j_{*}\mathscr{F}_{\overline{x}}))\to H^{1}_{\operatorname{\acute{e}t}}(C_{x},j_{*}\mathscr{F}_{x})\xrightarrow{\theta}H^{0}_{\operatorname{grp}}(G_{x},H^{1}_{\operatorname{\acute{e}t}}(C_{\overline{x}},j_{*}\mathscr{F}_{\overline{x}}))\to H^{2}_{\operatorname{grp}}(G_{x},H^{0}_{\operatorname{\acute{e}t}}(C_{\overline{x}},j_{*}\mathscr{F}_{\overline{x}})).$ When $n>0$, we want to show $\theta$ is an isomorphism, so it suffices to show $H^{0}_{\operatorname{\acute{e}t}}(C_{\overline{x}},j_{*}\mathscr{F}_{\overline{x}})=0$. This holds using 5.2.6(3'). ∎ ###### Lemma 5.3.2. With the same assumptions as in 5.3.1, let $x\in\operatorname{QTwist}^{n}_{C/B}(\mathbb{F}_{q})$, and use $\operatorname{Sel}_{\nu}(A_{x})$ to denote the $\nu$ Selmer group of the generic fiber of $A_{x}$ over $U_{x}$. We have $\operatorname{Sel}_{\nu}(A_{x})\simeq\pi^{-1}(x)(\mathbb{F}_{q}).$ ###### Proof. Using 5.2.1, we know the geometric component group $\Phi_{A_{\overline{x}}}$ has order prime to $\nu$. As we are also assuming $q$ is prime to $\nu$, it follows from [Ces16, Proposition 5.4(c)] that $\operatorname{Sel}_{\nu}(A_{x})\simeq H^{1}_{\operatorname{fppf}}(C_{x},\mathscr{A}_{x}[\nu])$. Identifying fppf cohomology with étale cohomology [Gro68, Théorème 11.7 $1^{\circ}$] and combining this with 5.3.1, we obtain the result. ∎ ## 6\. Identifying Selmer elements via Hurwitz stacks Throughout this section, we will work over the complex numbers, $B=\operatorname{Spec}\mathbb{C}$. One of the main new ideas in this article is that Selmer elements can actually be parameterized by a Hurwitz stack. The reason for doing this is that the topological methods of the first part of the paper can, as in [EVW16], be used to control the number of $\mathbb{F}_{q}$-points on certain Hurwitz stacks. Using the identification between Selmer stacks and Hurwitz stacks, we will thus be able to count $\mathbb{F}_{q}$-points on Selmer stacks. These counts underlie our main theorems.
We produce an isomorphism over the complex numbers between the Selmer stack and a certain Hurwitz stack parameterizing $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ covers of our base curve $C$ over $\mathbb{C}$. This is shown in 6.4.5. Before jumping into the details, we describe the idea of this isomorphism in § 6.1. Continuing to the proof, we give a monodromy theoretic description of torsion sheaves in § 6.2 and a monodromy theoretic description of torsors for torsion sheaves in § 6.3. Finally, we identify the Selmer stack with certain Hurwitz stacks in § 6.4. ### 6.1. Idea of the isomorphism We now describe the idea of the proof in the context of torsion in abelian varieties, though below the proof is carried out in the more general context of symplectically self-dual sheaves. The basic idea is that $\nu$ Selmer elements for an abelian variety $A^{\prime}$ over $U^{\prime}$ of relative dimension $r$, with Néron model $j_{*}A^{\prime}$ over $C$, correspond to torsors for $j_{*}A^{\prime}[\nu]$. We can identify $j_{*}A^{\prime}[\nu]$ with a $\operatorname{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ Galois cover of $C$ via its Galois representation. We can then identify torsors for $j_{*}A^{\prime}[\nu]$ with $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ covers of $C$; see 6.3.2. This roughly corresponds to the fact that a torsor for $j_{*}A^{\prime}[\nu]$ can translate the monodromy of $j_{*}A^{\prime}[\nu]$ by an element of a geometric fiber of $j_{*}A^{\prime}[\nu]$, which can be identified with $(\mathbb{Z}/\nu\mathbb{Z})^{2r}=\ker\left(\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})\to\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})\right)$. The bulk of this section amounts to working out the precise conditions on the monodromy of these Hurwitz stacks. ### 6.2. Symplectically self-dual sheaves in terms of monodromy Recall that throughout this section we are working over $B=\operatorname{Spec}\mathbb{C}$. As in 2.4.1, we begin with a smooth projective connected curve $C$ over $\operatorname{Spec}\mathbb{C}$ and a nonempty open subscheme $U\subset C$. For $D\subset U$ a divisor, we work with a symplectically self-dual sheaf $\mathscr{F}^{\prime}$ over $U-D$ of rank $2r$. A useful example to keep in mind is when we are in the setting of 5.1.9, there is an abelian scheme $A^{\prime}\to U-D$, and $\mathscr{F}^{\prime}=A^{\prime}[\nu]$. The main application will occur when $\mathscr{F}^{\prime}$ is a quadratic twist of a sheaf $\mathscr{F}$, ramified over $D$. We now describe $\mathscr{F}^{\prime}$ in terms of its monodromy. Fix a basepoint $p\in U-D$ and choose an identification $\mathscr{F}^{\prime}|_{p}\simeq\left(\mathbb{Z}/\nu\mathbb{Z}\right)^{2r}$. Because the fundamental group $\pi_{1}^{\mathrm{top}}(U-D,p)$ acts linearly on $\mathscr{F}^{\prime}|_{p}$, we obtain a map $\pi_{1}^{\mathrm{top}}(U-D,p)\rightarrow\operatorname{GL}(\mathscr{F}^{\prime}|_{p})$. Because the sheaf is symplectically self-dual, and we are working over $\mathbb{C}$ where the cyclotomic character acts trivially, this representation factors through $\operatorname{Sp}(\mathscr{F}^{\prime}|_{p})$. In other words, we obtain a monodromy representation (6.1) $\rho_{\mathscr{F}^{\prime}}:\pi_{1}^{\mathrm{top}}(U-D,p)\rightarrow\operatorname{Sp}(\mathscr{F}^{\prime}|_{p})\simeq\operatorname{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z}).$ For convenience of notation, label the points of $Z$ by $s_{1},\ldots,s_{f+1}$.
As in Figure 5, we can draw oriented loops $\alpha_{1},\ldots,\alpha_{g},\beta_{1},\ldots,\beta_{g},\gamma_{1},\ldots,\gamma_{n},\delta_{1},\ldots,\delta_{f+1}$ based at $p$ which pairwise intersect only at $p$ so that 1. (1) $\alpha_{1},\ldots,\alpha_{g},\beta_{1},\ldots,\beta_{g}$ forms a basis for $H_{1}(C,\mathbb{Z})$, 2. (2) $\gamma_{i}$ is a loop winding once around $p_{i}$ corresponding to the local inertia at $p_{i}$, where $p_{1},\ldots,p_{n}$ are the $n$ points in $D$, and 3. (3) $\delta_{i}$ is a loop winding once around $s_{i}$ corresponding to the local inertia at $s_{i}$. The above loops form generators of $\pi_{1}^{\mathrm{top}}(U-D,p)$ and satisfy the single relation $\displaystyle(\alpha_{1}\beta_{1}\alpha_{1}^{-1}\beta_{1}^{-1})\cdots(\alpha_{g}\beta_{g}\alpha_{g}^{-1}\beta_{g}^{-1})\gamma_{1}\cdots\gamma_{n}\delta_{1}\cdots\delta_{f+1}=\operatorname{\mathrm{id}}.$ Since $\mathscr{F}^{\prime}$ is a $\mathbb{Z}/\nu\mathbb{Z}$ local system on $U-D$, the monodromy representation $\rho_{\mathscr{F}^{\prime}}$ determines $\mathscr{F}^{\prime}$. Figure 5. This picture depicts a genus $2$ surface $X$ with $3$ punctures and a $2$-point configuration. That is, it corresponds to a point in $\operatorname{Conf}^{2}_{X}$. It includes the moving points $p_{1},p_{2}$, surrounded by loops $\gamma_{1}$ and $\gamma_{2}$, the fixed punctures $s_{1},s_{2},s_{3}$ surrounded by loops $\delta_{1},\delta_{2}$, and $\delta_{3}$, and the standard generators $\alpha_{1},\beta_{1},\alpha_{2},\beta_{2}$ for the homology of the compact surface. ### 6.3. Torsors for symplectically self-dual sheaves in terms of monodromy We next set out to give a monodromy theoretic description of torsors. The main result we are aiming toward is 6.3.7, which gives a monodromy theoretic description of $j_{*}\mathscr{F}^{\prime}$ torsors. We retain notation from § 6.2. For $D\subset U$ a divisor, we use $j:U-D\to C$ to denote the inclusion. As a first observation, we show that any torsor for $j_{*}\mathscr{F}^{\prime}$ over $C$ is determined by its restriction to $U-D$. ###### Lemma 6.3.1. The restriction map $H^{1}(C,j_{*}\mathscr{F}^{\prime})\to H^{1}(U-D,\mathscr{F}^{\prime})$ is injective. Its image consists of those torsors $[\mathscr{S}]\in H^{1}(U-D,\mathscr{F}^{\prime})$ such that for each $q\in D\cup Z$, there is some sufficiently small complex analytic open neighborhood $C\supset W\ni q$ such that $\mathscr{S}|_{W-q}$ is the restriction of a $j_{*}\mathscr{F}^{\prime}|_{W}$ torsor to $W-q$. ###### Proof. In the étale topology, the spectral sequence associated to the composition $U-D\to C\to\operatorname{Spec}\mathbb{C}$ yields the injection $H^{1}(C,j_{*}\mathscr{F}^{\prime})\hookrightarrow H^{1}(U-D,\mathscr{F}^{\prime})$. Using the comparison between étale and complex analytic sheaf cohomology [SGA72, Exposé XI, Théorème 4.4(iii)] we may describe elements of $H^{1}(U-D,\mathscr{F}^{\prime})$ as torsors in the complex analytic topology for $\mathscr{F}^{\prime}$. The condition that a torsor $[\mathscr{S}]\in H^{1}(U-D,\mathscr{F}^{\prime})$ lies in the image of $H^{1}(C,j_{*}\mathscr{F}^{\prime})\to H^{1}(U-D,\mathscr{F}^{\prime})$ is precisely the condition that it extends to a $j_{*}\mathscr{F}^{\prime}$ torsor over a sufficiently small neighborhood of each point $q\in D\cup Z$. ∎ Recall our goal is to give a monodromy theoretic description of $j_{*}\mathscr{F}^{\prime}$ torsors.
Using 6.3.1, we can describe $j_{*}\mathscr{F}^{\prime}$ torsors as $\mathscr{F}^{\prime}$ torsors which extend over a small neighborhood of each $p_{i}$. We next describe $\mathscr{F}^{\prime}$ torsors, and then, in 6.3.6, give the condition that such a torsor extends over $D$. First, we introduce notation used to describe the monodromy representation parameterizing $\mathscr{F}^{\prime}$ torsors. ###### Definition 6.3.2. The affine symplectic group is $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z}):=\left(\mathbb{Z}/\nu\mathbb{Z}\right)^{2r}\rtimes\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z}),$ where the action of $\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ on $\left(\mathbb{Z}/\nu\mathbb{Z}\right)^{2r}$ is via the standard action of matrices on the underlying free $\mathbb{Z}/\nu\mathbb{Z}$ module of rank $2r$. ###### Remark 6.3.3. By definition, $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ sits in an exact sequence (6.2) $\displaystyle 0\to\left(\mathbb{Z}/\nu\mathbb{Z}\right)^{2r}\xrightarrow{\iota}\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})\xrightarrow{\Pi}\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})\to 0$ with inclusion map $\iota$ and quotient map $\Pi$. With this presentation, $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ can be explicitly described as those matrices of the form (6.3) $\displaystyle\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})\simeq\left\{\begin{pmatrix}M&v\\ 0&1\end{pmatrix}\in\operatorname{GL}_{2r+1}(\mathbb{Z}/\nu\mathbb{Z}):M\in\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z}),v\in\left(\mathbb{Z}/\nu\mathbb{Z}\right)^{2r}\right\}.$ ###### Notation 6.3.4. More generally, for $\mu\mid\nu$, define (6.5) $\displaystyle\operatorname{A}^{\mu}\operatorname{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})\simeq\left\{(M,v):M\in\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z}),v\in\left(\mathbb{Z}/\mu\mathbb{Z}\right)^{2r}\right\},$ which has a group structure obtained from $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ via reducing the vector $v$ in (6.3) $\bmod\mu$. Even more generally, in order to understand moments of the $\nu$ Selmer group, suppose $H$ is a finite $\mathbb{Z}/\nu\mathbb{Z}$ module of the form $H\simeq\prod_{i=1}^{m}\mathbb{Z}/\nu_{i}\mathbb{Z}$. We will be interested in the group $\displaystyle\operatorname{\mathrm{A}^{\operatorname{H}}\mathrm{Sp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z}):=\operatorname{A}^{\nu_{1}}\operatorname{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})\times_{\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})}\cdots\times_{\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})}\operatorname{A}^{\nu_{m}}\operatorname{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z}),$ which sits in an exact sequence (6.7) $\displaystyle 0\to\prod_{i=1}^{m}\left(\mathbb{Z}/\nu_{i}\mathbb{Z}\right)^{2r}\xrightarrow{\iota}\operatorname{\mathrm{A}^{\operatorname{H}}\mathrm{Sp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})\xrightarrow{\Pi}\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})\to 0.$ We next describe the condition for a torsor for $\mathscr{F}^{\prime}$ to extend over a puncture, in terms of monodromy. By § 6.2, $\mathscr{F}^{\prime}$ can be described in terms of $\rho_{\mathscr{F}^{\prime}}$, which has target $\operatorname{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$. A torsor $\mathscr{S}$ for $\mathscr{F}^{\prime}$ can be described in terms of $\mathscr{F}^{\prime}$ together with the additional data of transition functions lying in $\left(\mathbb{Z}/\nu\mathbb{Z}\right)^{2r}$.
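Concretely (a routine check under the identification (6.3)), the group law of $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ is given by block matrix multiplication: $\displaystyle\begin{pmatrix}M&v\\ 0&1\end{pmatrix}\begin{pmatrix}M^{\prime}&v^{\prime}\\ 0&1\end{pmatrix}=\begin{pmatrix}MM^{\prime}&Mv^{\prime}+v\\ 0&1\end{pmatrix},$ that is, $(M,v)\cdot(M^{\prime},v^{\prime})=(MM^{\prime},Mv^{\prime}+v)$, which is exactly the semidirect product structure of 6.3.2.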
In total, $\mathscr{S}$ can be described in terms of a monodromy representation $\displaystyle\rho_{\mathscr{S}}:\pi_{1}^{\mathrm{top}}(U-D,p)\to\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z}).$ A composition of loops in $\pi_{1}^{\mathrm{top}}(U-D,p)$ maps under $\rho_{\mathscr{S}}$ to the product of their corresponding matrices, viewed as elements of $\operatorname{GL}_{2r+1}(\mathbb{Z}/\nu\mathbb{Z})$ via (6.3). ###### Remark 6.3.5. By construction, for $\Pi$ as defined in (6.2), $\Pi\circ\rho_{\mathscr{S}}=\rho_{\mathscr{F}^{\prime}}$. We now describe the condition that an $\mathscr{F}^{\prime}$ torsor extends to a $j_{*}\mathscr{F}^{\prime}$ torsor. We note, first of all, that by 6.3.1, we know that this condition only depends on the restriction of $\rho_{\mathscr{S}}$ to local inertia groups. Since these inertia groups are procyclic, this amounts to specifying some subset of $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$, necessarily closed under conjugacy, in which the local monodromy groups are constrained to lie. In the following proposition, we work out what these constraints look like in explicit matrix form. ###### Lemma 6.3.6. With notation as in § 6.2, let $j:U-D\to C$ denote the inclusion. Suppose $q\in Z\cup D$ with $\eta$ a small loop around $q$ whose image under $\rho_{\mathscr{F}^{\prime}}$ corresponds to the local inertia at $q$. Let $d:=\mathrm{Drop}_{q}(\mathscr{F}^{\prime})$ so that, after choosing a suitable basis $\mathscr{F}^{\prime}_{p}\simeq(\mathbb{Z}/\nu\mathbb{Z})^{2r}$, we may write $\rho_{\mathscr{F}^{\prime}}(\eta)$ in the form $\displaystyle\begin{pmatrix}M_{1}&M_{2}\\ 0&\operatorname{\mathrm{id}}_{2r-d}\end{pmatrix}.$ Under the identification of $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ as in (6.3), we can extend an $\mathscr{F}^{\prime}$ torsor $\mathscr{S}$ to a $j_{*}\mathscr{F}^{\prime}$ torsor in some complex analytic neighborhood $W$ of $q$ if and only if (6.8) $\displaystyle\rho_{\mathscr{S}}(\eta)=\begin{pmatrix}M_{1}&M_{2}&*\\ 0&\operatorname{\mathrm{id}}_{2r-d}&0\\ 0&0&1\end{pmatrix}$ for some vector $*\in(\mathbb{Z}/\nu\mathbb{Z})^{d}$. Stated more intrinsically, we can extend $\mathscr{S}$ to a $j_{*}\mathscr{F}^{\prime}$ torsor if and only if the vector $v$ in (6.3) lies in $\operatorname{im}(1-\rho_{\mathscr{F}^{\prime}}(\eta))$. ###### Proof. First, 6.3.5 accounts for all entries of the matrix in (6.8) other than the first $2r$ entries of the last column, namely the $*$ and the $0$ below it; it remains to show these entries take the stated form if and only if $\mathscr{S}$ extends. Choose a simply connected neighborhood $W$ of $q$ and fix a basepoint $p\in W$. To conclude the proof, we will show the claimed entries in the last column of (6.8) from rows $d+1$ to $2r$ are $0$ if and only if $\mathscr{S}|_{W-q}$ extends to a $j_{*}\mathscr{F}^{\prime}$ torsor over $W$. Note that we can identify $(\mathbb{Z}/\nu\mathbb{Z})^{2r-d}|_{W}\subset j_{*}\mathscr{F}^{\prime}|_{W}$ as a $\mathbb{Z}/\nu\mathbb{Z}$ subsheaf which restricts to $\operatorname{Span}(e_{d+1},\ldots,e_{2r})\subset\left(\mathbb{Z}/\nu\mathbb{Z}\right)^{2r}\simeq\mathscr{F}^{\prime}|_{p}$ as the inertia invariants. Therefore, any $j_{*}\mathscr{F}^{\prime}|_{W}$ torsor $\mathscr{T}$ has a distinguished $(\mathbb{Z}/\nu\mathbb{Z})^{2r-d}$ subtorsor, corresponding to $\ker(1-\rho_{\mathscr{F}^{\prime}}(\eta))$.
Since $W$ is simply connected, this $(\mathbb{Z}/\nu\mathbb{Z})^{2r-d}$ torsor is trivial, which implies that the local inertia at $q$ acts trivially on $\operatorname{Span}(e_{d+1},\ldots,e_{2r})\subset\left(\mathbb{Z}/\nu\mathbb{Z}\right)^{2r}\simeq j_{*}\mathscr{F}^{\prime}|_{q}$, and hence there is a $0$ in (6.8) as claimed. Conversely, if there is a $0$ in the second (block) row of the third (block) column of (6.8), we obtain a section of $\mathscr{S}$ over $W-q$ corresponding to each element of $(\mathbb{Z}/\nu\mathbb{Z})^{2r-d}$ and hence a subsheaf $(\mathbb{Z}/\nu\mathbb{Z})^{2r-d}|_{W-q}\subset\mathscr{S}|_{W-q}$. By gluing $(\mathbb{Z}/\nu\mathbb{Z})^{2r-d}|_{W}$ to $\mathscr{S}|_{W-q}$ along $(\mathbb{Z}/\nu\mathbb{Z})^{2r-d}|_{W-q}$, we obtain a $j_{*}\mathscr{F}^{\prime}$ torsor $\mathscr{T}$ which is the desired extension of $\mathscr{S}$. ∎ We can now describe $j_{*}\mathscr{F}^{\prime}$ torsors in terms of monodromy data. ###### Lemma 6.3.7. With notation as in § 6.2, let $\mathscr{F}$ be an irreducible symplectically self-dual local system on $U$. Suppose $n>0$. Fix some quadratic twist $\mathscr{F}^{\prime}$ of $\mathscr{F}$, ramified along a degree $n$ divisor $D$, in the sense that $\mathscr{F}^{\prime}$ is some fiber of $\mathscr{F}^{n}_{B}$, so that we obtain a corresponding monodromy representation $\rho_{\mathscr{F}^{\prime}}$. Suppose $\rho_{\mathscr{F}^{\prime}}$ satisfies the hypotheses $(1)$ and $(3)$ of 5.2.6. There are precisely $\nu^{(2g-2+n)\cdot 2r+\sum_{x\in Z}\mathrm{Drop}_{x}(\mathscr{F})}$ isomorphism classes of torsors for $j_{*}\mathscr{F}^{\prime}$, which can be described in terms of monodromy data by specifying a representation $\rho_{\mathscr{S}}:\pi_{1}(U-D,p)\to\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ up to $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ conjugacy, satisfying the following conditions: 1. (1) The image of $\gamma_{i}$ under $\rho_{\mathscr{S}}$ is of the form (6.3) with $M=-\operatorname{\mathrm{id}}$, 2. (2) If $\mathrm{Drop}_{s_{i}}(\mathscr{F}^{\prime})=d_{i}$, the image of $\delta_{i}$ under $\rho_{\mathscr{S}}$ is conjugate to a matrix of the form (6.8), where we take $(q,d)$ there to be $(s_{i},d_{i})$ here, 3. (3) We have $\Pi\circ\rho_{\mathscr{S}}=\rho_{\mathscr{F}^{\prime}}$. Let $j:U-D\to C$ denote the inclusion. As mentioned above, we consider two torsors $\mathscr{T}$ and $\mathscr{T}^{\prime}$ equivalent if there is some $v\in\left(\mathbb{Z}/\nu\mathbb{Z}\right)^{2r}$ so that $\rho_{j^{*}\mathscr{T}}(\nabla)=\iota(v)\left(\rho_{j^{*}\mathscr{T}^{\prime}}(\nabla)\right)\iota(v)^{-1}$ for every $\nabla\in\{\alpha_{1},\ldots,\alpha_{g},\beta_{1},\ldots,\beta_{g},\gamma_{1},\ldots,\gamma_{n},\delta_{1},\ldots,\delta_{f+1}\}$, with $\iota$ as in (6.2). ###### Proof. Using 6.3.1, we can describe torsors for $j_{*}\mathscr{F}^{\prime}$ as torsors for $\mathscr{F}^{\prime}$ which extend over a neighborhood of each $p_{i}\in D$. By 6.3.5, condition $(3)$ precisely corresponds to the condition that the $\operatorname{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ local system associated to $\mathscr{S}$ on $U-D$ is that associated to $\mathscr{F}^{\prime}$, and hence $\mathscr{S}|_{U-D}$ is an $\mathscr{F}^{\prime}$ torsor. By 6.3.6, an $\mathscr{F}^{\prime}$ torsor extends to a $j_{*}\mathscr{F}^{\prime}$ torsor over $p_{1},\ldots,p_{n}$ if and only if condition $(1)$ holds, and extends over $s_{1},\ldots,s_{f+1}$ if and only if condition $(2)$ holds.
We consider the representations up to conjugacy, as this corresponds to a change of the identification $\mathscr{F}^{\prime}|_{p}\simeq\left(\mathbb{Z}/\nu\mathbb{Z}\right)^{2r}$, and expresses the usual condition for two torsors to be equivalent. To conclude, we wish to see that there are $\nu^{(2g-2+n)\cdot 2r+\sum_{x\in Z}\mathrm{Drop}_{x}(\mathscr{F})}$ isomorphism classes of torsors specified by the above data. Indeed, we see there are $\nu^{2r}$ possible values $\rho_{\mathscr{S}}$ can take on each of the loops $\alpha_{1},\ldots,\alpha_{g},\beta_{1},\ldots,\beta_{g}$ in order to satisfy $(3)$. For each $\gamma_{i}$, there are $\nu^{\mathrm{Drop}_{p_{i}}(\mathscr{F}^{\prime})}=\nu^{2r}$ possible values of $\rho_{\mathscr{S}}$, because $\Pi(\rho_{\mathscr{S}}(\gamma_{i}))=-\operatorname{\mathrm{id}}_{2r}$. For each $\delta_{i}$, there are $\nu^{\mathrm{Drop}_{s_{i}}(\mathscr{F}^{\prime})}=\nu^{\mathrm{Drop}_{s_{i}}(\mathscr{F})}$ possible values of $\rho_{\mathscr{S}}$. We additionally must impose the condition that $\prod_{i=1}^{g}[\alpha_{i},\beta_{i}]\prod_{i=1}^{n}\gamma_{i}\prod_{i=1}^{f+1}\delta_{i}=\operatorname{\mathrm{id}}$, from the relation defining the fundamental group, and that we consider these torsors up to conjugacy. Before imposing these two conditions, there are $\nu^{(2g+n)\cdot 2r+\sum_{x\in Z}\mathrm{Drop}_{x}(\mathscr{F})}$ possible tuples of matrices. The first condition cuts the number of tuples down by a factor of $\nu^{2r}$. Further, the conjugation action always identifies orbits of $\nu^{2r}$ elements, since the representation is center-free, using that it is irreducible and that $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})\subset\operatorname{GL}_{2r+1}(\mathbb{Z}/\nu\mathbb{Z})$ contains no scalars, other than $\operatorname{\mathrm{id}}$. Altogether, this yields $\nu^{(2g-2+n)\cdot 2r+\sum_{x\in Z}\mathrm{Drop}_{x}(\mathscr{F})}$ such torsors. ∎ ### 6.4. Identifying Selmer stacks with Hurwitz stacks We will use the above description of torsors to identify the Selmer stack with a certain Hurwitz stack in 6.4.5. We next define that Hurwitz stack. ###### Notation 6.4.1. Let $B=\operatorname{Spec}\mathbb{C}$. Given a symplectically self-dual sheaf $\mathscr{F}$ over $U$ as in 5.1.4, and fixing values of $\nu$ and $n$, we now use the notation $\operatorname{Hur}^{H}_{\mathscr{F}^{n}_{B}}$ to indicate the stack $\operatorname{Hur}^{G,n,Z,\mathcal{S}}_{C/B}$ as in 2.4.2, for $n,Z,C,B$ as in 5.1.4 and $G,\mathcal{S}$ as we define next. Let $\nu_{1},\ldots,\nu_{m}\mid\nu$ and write $H\simeq\prod_{i=1}^{m}\mathbb{Z}/\nu_{i}\mathbb{Z}$. Take $G:=\operatorname{\mathrm{A}^{\operatorname{H}}\mathrm{Sp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$. Take $\mathcal{S}$ to be the orbit under the conjugation action of $G$ of the following subset of $\phi\in\mathrm{Hom}(\pi_{1}(\Sigma_{g,f+1}),G)$. Any such $\phi$ sends a half-twist (moving point $i$ counterclockwise toward point $i+1$ and point $i+1$ counterclockwise toward point $i$) to an element $g\in G$ so that $\Pi(g)=-\operatorname{\mathrm{id}}$, for $\Pi$ as defined in (6.7). If $\alpha_{1},\ldots,\alpha_{g},\beta_{1},\ldots,\beta_{g}\subset\Sigma_{g,f+1}\subset\Sigma_{g}$ are a fixed set of simple closed curves forming a standard generating set for the first homology of $\Sigma_{g}$, we require that $\Pi(\phi(\alpha_{i}))\in\{\pm a_{i}\}$, $\Pi(\phi(\beta_{j}))\in\{\pm b_{j}\}$, where $a_{i}=\rho_{\mathscr{F}}(\alpha_{i})$ and $b_{j}=\rho_{\mathscr{F}}(\beta_{j})$.
The local inertia around $s_{i}$, the $i$th puncture among the $f+1$ punctures, maps to $(M_{i},v_{i})$ where $M_{i}$ is the given local inertia for $\mathscr{F}$ and $v_{i}\in\operatorname{im}(M_{i}-\operatorname{\mathrm{id}})$. ###### Remark 6.4.2. The condition in 6.4.1 that the $\alpha_{i}$ and $\beta_{j}$ map to $\pm a_{i}$ and $\pm b_{j}$ under $\Pi\circ\phi$ may seem to depend on choices of the $\alpha_{i}$ and $\beta_{j}$, but it can be expressed independently of these choices as follows: if $\zeta:\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})\to\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})/\{\pm 1\}$ is the quotient map, $\zeta\circ\Pi\circ\phi=\zeta\circ\rho_{\mathscr{F}}$. In order to show the construction in 6.4.1 gives a Hurwitz stack as in 2.4.2, we need to show the set $\mathcal{S}$ is invariant under the action of $\pi_{1}(\operatorname{Conf}^{n}_{U/B})$. We now verify this. ###### Lemma 6.4.3. The set $\mathcal{S}$ from 6.4.1 is a subset of $\mathrm{Hom}(\pi_{1}(\Sigma_{g,f+1}),G)$ which is invariant under the action of $\pi_{1}(\operatorname{Conf}^{n}_{U/B})$. ###### Proof. Recall we use $\gamma_{i}$ for the loop giving inertia around $p_{i}$ for $1\leq i\leq n$ and $\delta_{i}$ for the loop giving inertia around $s_{i}$, $1\leq i\leq f+1$. First, to show the images of the $\gamma_{i}$ are preserved by the $\pi_{1}(\operatorname{Conf}^{n}_{U/B})$ action, note that $-\operatorname{\mathrm{id}}$ is preserved by this action. Therefore, the condition that $\Pi(g)=-\operatorname{\mathrm{id}}$ is preserved by the action as well. Hence, the condition that $\gamma_{i}$ has monodromy $g$ with $\Pi(g)=-\operatorname{\mathrm{id}}$ is preserved by the action of $\pi_{1}(\operatorname{Conf}^{n}_{U/B})$. The condition on the $\alpha_{i}$ and $\beta_{j}$ is invariant as passing one of the $n$ points across $\alpha_{i}$ or $\beta_{j}$ has the effect of negating $\Pi(\phi(\alpha_{i}))$ or $\Pi(\phi(\beta_{j}))$, since $\Pi(\gamma_{t})=-\operatorname{\mathrm{id}}$. As for the loops $\delta_{i}$, since the loops $\gamma_{i}$ have inertia $g$ with $\Pi(g)=-\operatorname{\mathrm{id}}$, and $-\operatorname{\mathrm{id}}$ lies in the center of $\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$, the matrices $M_{i}$ defined in 6.4.1 are preserved by conjugation under $-\operatorname{\mathrm{id}}$. Therefore, the $1$-eigenspace $\ker(1-M_{i})$ is preserved by conjugation under $-\operatorname{\mathrm{id}}$, and so the same holds for $\operatorname{im}(1-M_{i})$. Thus, the set of such homomorphisms to $G$ is indeed preserved by the action of $\pi_{1}(\operatorname{Conf}^{n}_{U/B})$. ∎ ###### Hypotheses 6.4.4. Suppose $n>0$, $B=\operatorname{Spec}\mathbb{C}$, and $\mathscr{F}$ is an irreducible symplectically self-dual local system which satisfies the hypotheses $(1)$ and $(3)$ of 5.2.6. There is a map $\displaystyle\theta:\operatorname{Sel}_{\mathscr{F}^{n}_{B}}\to\operatorname{Hur}^{\mathbb{Z}/\nu\mathbb{Z}}_{\mathscr{F}^{n}_{B}}$ obtained via the bijection of 6.3.7 which sends a torsor to the corresponding $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ cover for some quadratic twist $\mathscr{F}^{\prime}$ of $\mathscr{F}$. ###### Proposition 6.4.5. With hypotheses as in 6.4.4, for $n>0$, the map $\theta$, defined over $B=\operatorname{Spec}\mathbb{C}$, is an isomorphism. ###### Proof.
Note that the projection $\operatorname{Hur}^{\mathbb{Z}/\nu\mathbb{Z}}_{\mathscr{F}^{n}_{B}}\to\operatorname{QTwist}^{n}_{U/B}$ sends a point of $\operatorname{Hur}^{\mathbb{Z}/\nu\mathbb{Z}}_{\mathscr{F}^{n}_{B}}$, thought of as an $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ cover, to the corresponding $\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ cover. The projection $\operatorname{Sel}_{\mathscr{F}^{n}_{B}}\to\operatorname{QTwist}^{n}_{U/B}$ sends a torsor $\mathscr{T}$ for some quadratic twist $\mathscr{F}^{\prime}$ to the corresponding $\mathscr{F}^{\prime}$. Both $\operatorname{Hur}^{\mathbb{Z}/\nu\mathbb{Z}}_{\mathscr{F}^{n}_{B}}$ and $\operatorname{Sel}_{\mathscr{F}^{n}_{B}}$ are finite étale covers of $\operatorname{QTwist}^{n}_{U/B}$, and by 6.3.7, $\theta$ defines a bijection between geometric points over points of $\operatorname{QTwist}^{n}_{U/B}$, corresponding to a chosen degree $n$ quadratic twist $\mathscr{F}^{\prime}$ of $\mathscr{F}$. In order to show $\theta$ is an isomorphism, it is enough to show the bijection between two finite étale covers of $\operatorname{QTwist}^{n}_{U/B}$ defines a homeomorphism. Indeed, we may verify this claim locally on $\operatorname{QTwist}^{n}_{U/B}$, in which case it is enough to verify it on sufficiently small open covers of $\operatorname{QTwist}^{n}_{U/B}$. We can choose a small open neighborhood of some geometric point $[\mathscr{F}^{\prime}]\in\operatorname{QTwist}^{n}_{U/B}$, corresponding to varying the points $p_{i}$, along with the corresponding double cover, in small, pairwise disjoint open analytic discs of $C$. Since the bijection of 6.3.7 is compatible with such variation in the points $p_{i}$, we obtain the desired isomorphism. ∎ ###### Warning 6.4.6. The Selmer stack $\operatorname{Sel}_{\mathscr{F}^{n}_{\operatorname{Spec}\mathbb{F}_{q}}}$ over $\mathbb{F}_{q}$ will not in general be isomorphic to the Hurwitz stack of $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ covers we are considering. Rather, they will be twists of each other, and they only become isomorphic over $\overline{\mathbb{F}}_{q}$. The reason for this is that the monodromy representation associated to $\mathscr{F}$ may fail to be contained in $\operatorname{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$, and in general it will only be contained in $\operatorname{GSp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$, the general symplectic group. However, once one ensures all roots of unity lie in the base field, this issue goes away. ###### Remark 6.4.7. The issue brought up in 6.4.6 is not a concern for the main results of the present paper, for the following reason. Our plan is to estimate the number of $\mathbb{F}_{q}$-points on a Selmer stack, which we compute using the Grothendieck-Lefschetz trace formula applied to the action of Frobenius on the Selmer stack. We need two inputs: $(1)$ a precise description of the action of Frobenius on the top degree cohomology of the Selmer stack and $(2)$ a bound on the dimensions of low codimension cohomology groups. For $(2)$, the dimensions of these low codimension cohomology groups are invariant under base change, and hence can be computed over $\overline{\mathbb{F}}_{q}$. Then, via a comparison theorem, these dimensions can be computed over $\mathbb{C}$. Hence, for the bound in $(2)$, we only need to compare Selmer stacks to Hurwitz stacks over $\mathbb{C}$, and hence 6.4.6 does not play a role.
Input $(1)$, which is about the term coming from the top degree cohomology, by contrast, is sensitive to which $\mathbb{F}_{q}$-rational form we have in mind. We compute this trace directly for the Selmer stack, using a monodromy computation in § 7. Computing the average size of a Selmer group in a quadratic twist family will come down to counting $\mathbb{F}_{q}$-rational points on a Selmer stack. But we will want to compute not only averages, but higher moments. This will require counting points on fiber products of Selmer stacks. But, as the following corollary shows, these fiber products are again isomorphic to Hurwitz stacks, making them amenable to the methods of this paper. ###### Corollary 6.4.8. With hypotheses as in 6.4.4, let $H\simeq\prod_{i=1}^{m}\mathbb{Z}/\nu_{i}\mathbb{Z}$. The map $\theta$, defined over $B=\operatorname{Spec}\mathbb{C}$, induces an isomorphism $\displaystyle\theta^{m}:\operatorname{Sel}_{\mathscr{F}^{n}_{B}[\nu_{1}]}\times_{\operatorname{QTwist}^{n}_{U/B}}\cdots\times_{\operatorname{QTwist}^{n}_{U/B}}\operatorname{Sel}_{\mathscr{F}^{n}_{B}[\nu_{m}]}\to\operatorname{Hur}^{H}_{\mathscr{F}^{n}_{B}}.$ ###### Proof. It follows from the definition of $\operatorname{Hur}^{H}_{\mathscr{F}^{n}_{B}}$ as in 6.4.1 that $\displaystyle\operatorname{Hur}^{H}_{\mathscr{F}^{n}_{B}}\simeq\operatorname{Hur}^{\mathbb{Z}/\nu\mathbb{Z}}_{\mathscr{F}^{n}_{B}[\nu_{1}]}\times_{\operatorname{QTwist}^{n}_{U/B}}\cdots\times_{\operatorname{QTwist}^{n}_{U/B}}\operatorname{Hur}^{\mathbb{Z}/\nu\mathbb{Z}}_{\mathscr{F}^{n}_{B}[\nu_{m}]}.$ The map $\theta$ from 6.4.5 also induces isomorphisms $\operatorname{Hur}^{\mathbb{Z}/\nu_{i}\mathbb{Z}}_{\mathscr{F}^{n}_{B}[\nu_{i}]}\to\operatorname{Sel}_{\mathscr{F}^{n}_{B}[\nu_{i}]}$. For $\nu_{i}\mid\nu$, we also have $\operatorname{Hur}^{\mathbb{Z}/\nu_{i}\mathbb{Z}}_{\mathscr{F}^{n}_{B}[\nu_{i}]}\simeq\operatorname{Hur}^{\mathbb{Z}/\nu\mathbb{Z}}_{\mathscr{F}^{n}_{B}[\nu_{i}]}$ from the definition. The result follows from 6.4.5 by taking appropriate fiber products of isomorphisms over $\operatorname{QTwist}^{n}_{U/B}$. ∎ ## 7\. Computing the monodromy of Hurwitz stacks In this section, we compute the image of the monodromy representation related to Selmer stacks. This will be used later to determine their connected components. We first control the monodromy when $\nu$ is prime in § 7.1. We then control the monodromy for prime power $\nu$ in § 7.2 and for composite $\nu$ in § 7.3. The above shows that the monodromy is sufficiently large, but does not determine it exactly. We will, however, precisely describe the image of the Dickson invariant map in § 7.4. ### 7.1. Computing the monodromy when $\nu$ is a prime We first consider the case $\nu=\ell$ is prime. The main result in this case is Theorem 7.1.1, which is a generalization of [Hal08, Theorem 6.3] from the case that we have an elliptic curve over a genus $0$ base to the case of a general symplectically self-dual sheaf over a base curve of genus $g$. We begin with a definition of the monodromy representation for general odd $\nu$. ###### Definition 7.1.1. With notation as in 5.1.4, suppose $B$ is integral, $\nu$ is odd, and $2\nu$ is invertible on $B$. Choose a basepoint $x\in\operatorname{QTwist}^{n}_{U/B}$. Let $V_{\mathscr{F}^{n}_{B}}:=R^{1}\lambda_{*}\left(j_{*}\mathscr{F}^{n}_{B}\right)_{x}$.
The Selmer sheaf is a finite étale cover of $\operatorname{QTwist}^{n}_{U/B}$ by 5.2.1 and so induces a monodromy representation $\rho_{\mathscr{F}^{n}_{B}}:\pi_{1}(\operatorname{QTwist}^{n}_{U/B})\to\operatorname{Aut}(V_{\mathscr{F}^{n}_{B}})$. For any geometric point ${\overline{b}}\to B$, we also obtain a geometric monodromy representation $\rho_{\mathscr{F}_{\overline{b}}^{n}}:\pi_{1}(\operatorname{QTwist}^{n}_{U_{\overline{b}}/\overline{b}})\to\operatorname{Aut}(V_{\mathscr{F}^{n}_{\overline{b}}})$. ###### Warning 7.1.2. Note that $\rho_{\mathscr{F}^{n}_{B}}$ is a representation of the fundamental group of configuration space, while we use $\rho_{\mathscr{F}^{\prime}}$ very differently in (6.1) for a representation of the fundamental group of the curve $U-D$ itself. ###### Remark 7.1.3. Using that $\gcd(\nu,2)=1$, there is a nondegenerate pairing on $V_{\mathscr{F}^{n}_{B}}$. The pairing is obtained as the composition $\displaystyle H^{1}(C,j_{*}(\mathscr{F}^{n}_{B})_{x})\times H^{1}(C,j_{*}(\mathscr{F}^{n}_{B})_{x})\to H^{2}(C,\wedge^{2}(j_{*}\mathscr{F}^{n}_{B})_{x})\to H^{2}(C,j_{*}(\wedge^{2}\mathscr{F}^{n}_{B})_{x})\to H^{2}(C,j_{*}\mu_{\nu})\to\mathbb{Z}/\nu\mathbb{Z}$ using Poincaré duality [Mil80, V Proposition 2.2(b)], which is preserved by this monodromy representation. The pairing above is symmetric because Poincaré duality on curves is antisymmetric and the pairing on $j_{*}(\mathscr{F}^{n}_{B})_{x}$ is antisymmetric, coming from the assumption that $\mathscr{F}$ is symplectically self-dual. Let $Q_{\mathscr{F}^{n}_{B}}$ denote the associated quadratic form. Hence, $\rho_{\mathscr{F}_{B}^{n}}$ factors through the orthogonal group $\operatorname{O}(Q_{\mathscr{F}^{n}_{B}})$ associated to the above symmetric bilinear pairing. We now set some assumptions, which will serve as our hypotheses going forward. ###### Hypotheses 7.1.4. Suppose $\nu$ is an odd integer and $r\in\mathbb{Z}_{>0}$ so that every prime $\ell\mid\nu$ satisfies $\ell>2r+1$. Suppose we have a rank $2r$, tame, symplectically self-dual sheaf of free $\mathbb{Z}/\nu\mathbb{Z}$ modules, $\mathscr{F}$ over $U\subset C$, a nonempty proper open subscheme of a smooth proper curve $C$ with geometrically connected fibers over an integral affine base $B$. Suppose $Z:=C-U$ is nonempty and finite étale over $B$. Assume further $2\nu$ is invertible on $B$. Fix a geometric point ${\overline{b}}\to B$. We assume there is some point $x\in C_{\overline{b}}$ at which $\mathrm{Drop}_{x}(\mathscr{F}_{\overline{b}}[\ell])=1$ for every prime $\ell\mid\nu$. Also suppose $\mathscr{F}_{\overline{b}}[\ell]$ is irreducible for each $\ell\mid\nu$, and that the map $j_{*}\mathscr{F}_{\overline{b}}[\ell^{w}]\to j_{*}\mathscr{F}_{\overline{b}}[\ell^{w-t}]$ is surjective for each prime $\ell\mid\nu$ such that $\ell^{w}\mid\nu$, and $w\geq t$, as in hypotheses $(1)$ and $(3)$ of 5.2.6. Let $f+1:=\deg(C-U)$ and let $n$ be a positive even integer. Note that if we are in the situation of 5.1.9, hypothesis 5.2.6(1) in the case $\mathscr{F}_{\overline{b}}=A[\nu]$ is satisfied whenever the geometric component group $\Phi_{A_{\overline{b}}}$ has order prime to $\nu$, by 5.2.3. If we additionally assume $A_{\overline{b}}$ has multiplicative reduction at some point of $U_{\overline{b}}$, with toric part of dimension $1$, then $\mathrm{Drop}_{x}(\mathscr{F}_{\overline{b}}[\ell])=1$ for every prime $\ell\mid\nu$. ###### Theorem 7.1.1 (Generalization of [Hal08, Theorem 6.3]).
Suppose $\nu=\ell>2r+1$ is prime. Choose a geometric basepoint $x\in\operatorname{QTwist}^{n}_{U/B}$ over a geometric point ${\overline{b}}\to B$. We next recall our assumptions from 7.1.4: we assume $2\nu$ is invertible on the integral affine base $B$ and $\mathscr{F}_{\overline{b}}$ is a rank $2r$ irreducible symplectically self-dual sheaf. We assume there is some point $y\in C_{\overline{b}}$ at which $\mathrm{Drop}_{y}(\mathscr{F}_{\overline{b}})=1$, and $\mathscr{F}_{\overline{b}}$ satisfies hypotheses 5.2.6(1) and (3). For $n$ an even integer satisfying $\displaystyle n>\max\left(2g,\frac{2(2r+1)(f+1)-\sum_{y\in Z({\overline{b}})}\mathrm{Drop}_{y}(\mathscr{F})}{2r}-(2g-2)\right),$ the geometric monodromy representation $\rho_{\mathscr{F}_{\overline{b}}^{n}}:\pi_{1}(\operatorname{QTwist}^{n}_{U_{\overline{b}}/\overline{b}})\to\operatorname{Aut}(V_{\mathscr{F}^{n}_{\overline{b}}})$ has $\operatorname{im}(\rho_{\mathscr{F}^{n}_{\overline{b}}})$ of index at most $2$ in $\operatorname{O}(Q_{\mathscr{F}^{n}_{\overline{b}}})$, for $Q_{\mathscr{F}^{n}_{\overline{b}}}$ as in 7.1.3, and, moreover, $\operatorname{im}(\rho_{\mathscr{F}^{n}_{\overline{b}}})\neq\operatorname{SO}(Q_{\mathscr{F}^{n}_{\overline{b}}})$. ###### Proof Sketch. A fair portion of this proof is essentially explained in [Hal08, Theorem 6.3]; see also [Zyw14, Theorem 3.4] for an explicit version and [Hal08, §6.6] for the generalization to $r>1$. We now briefly outline the details needed in the generalization. For the purposes of the proof, we may assume that $B={\overline{b}}$. Since $n>2g$, by [Kat02, Theorem 2.2.6], there is a map $h:C_{x}\to\mathbb{P}^{1}$ of degree $n$ which is simply branched, the branch locus of $h$ is disjoint from $h((Z\cup D)_{x})$, $h$ separates points of $(Z\cup D)_{x}$, and precisely one point $\delta\in D_{x}$ maps to $\infty\in\mathbb{P}^{1}$. Let $\operatorname{br}(h)$ denote the branch locus of $h$. Take $W\subset\mathbb{P}^{1}$ to be the complement of $\operatorname{br}(h)\cup h(Z\cup D)$. Note that $\infty\notin W$ by assumption. Then, one can show as in [Kat02, Theorem 5.4.1] that there is a map $\phi:W\to\operatorname{QTwist}^{n}_{U/\overline{b}}$ which we now describe. In order to specify a double cover of $C\times_{\overline{b}}W$, it is equivalent to specify a rank $1$ locally constant constructible $\mathbb{Z}/\ell\mathbb{Z}$ sheaf on an open subscheme whose monodromy is trivialized by that double cover. Let $\mathscr{F}^{\prime}$ denote the quadratic twist of $\mathscr{F}$ corresponding to our chosen geometric basepoint $x\in\operatorname{QTwist}^{n}_{U/B}$. Then, $\mathscr{F}^{\prime}=\mathscr{F}\otimes\mathbb{V}$, for $\mathbb{V}$ a rank $1$ locally constant constructible sheaf on $U-D$ given by $t_{*}(\mathbb{Z}/\ell\mathbb{Z})/(\mathbb{Z}/\ell\mathbb{Z})$, for $t:X\to U$ the finite étale double cover associated to $x$. We will now find a family of locally constant constructible sheaves (corresponding to quadratic twists) over $W$ whose fiber over $0\in W$ is $\mathbb{V}$. To this end, let $\chi$ denote the rank $1$ locally constant constructible sheaf on $\mathbb{G}_{m}:=\mathbb{A}^{1}-\{0\}$ corresponding to the double cover $\mathbb{G}_{m}\to\mathbb{G}_{m}$ given by squaring, i.e., multiplication by $2$ in the group $\mathbb{G}_{m}$. There is a map $\alpha^{\prime}:\mathbb{A}^{1}\times\mathbb{A}^{1}-\Delta\to\mathbb{G}_{m}$ given by $(x,y)\mapsto x-y$. Consider the map $(h,\operatorname{\mathrm{id}}):C\times\mathbb{P}^{1}\to\mathbb{P}^{1}\times\mathbb{P}^{1}$ and let $Y:=(h,\operatorname{\mathrm{id}})^{-1}(W\times W-\Delta)$.
Let $\alpha$ denote the composition $Y\xrightarrow{(h,\operatorname{\mathrm{id}})}\mathbb{A}^{1}\times\mathbb{A}^{1}-\Delta\xrightarrow{\alpha^{\prime}}\mathbb{G}_{m}$ and let $\mathbb{W}:=\alpha^{*}\chi$. Let $\pi_{2}:Y\to\mathbb{A}^{1}$ denote the second projection. Take $\mathbb{V}^{\prime}:=\mathbb{W}|_{h^{-1}(W-0)\times 0}\otimes\mathbb{V}|_{h^{-1}(W-0)}$, viewed as a sheaf on $h^{-1}(W-0)\subset C$. Then $(\mathbb{V}^{\prime}\otimes\mathbb{W}^{\vee})|_{h^{-1}(W-0)}$ recovers $\mathbb{V}|_{h^{-1}(W-0)}$. Now, $\pi_{2}^{*}\mathbb{V}^{\prime}\otimes\mathbb{W}^{\vee}$ is a locally constant constructible sheaf on $Y$. The above identifies the fiber of this over the point $0$ with a restriction of $\mathbb{V}$. Since both $\mathbb{V}^{\prime}$ and $\mathbb{W}$ correspond to representations with image $\mathbb{Z}/2\mathbb{Z}$, the same is true of $\pi_{2}^{*}\mathbb{V}^{\prime}\otimes\mathbb{W}^{\vee}$, and hence this sheaf corresponds to a finite étale double cover of $Y$. Overall, this gives a double cover of $C\times\mathbb{A}^{1}$, ramified along a degree $n$ divisor. This divisor is étale and disjoint from $Z$ over $C\times W$, and hence yields a map $\phi:W\to\operatorname{QTwist}^{n}_{U/\overline{b}}$, by the universal property of $\operatorname{QTwist}^{n}_{U/\overline{b}}$ as a moduli stack of double covers branched over a divisor disjoint from $Z$. The sheaf $\phi^{*}{\mathcal{S}e\ell}_{\mathscr{F}^{n}_{\overline{b}}}$ may also be viewed as the middle convolution $\operatorname{MC}_{\chi}((h_{*}\mathscr{F}^{\prime})|_{W})$. (See [Kat02, Proposition 5.3.7] for an analogous statement in the $\ell$-adic setting.) Since $\phi^{*}{\mathcal{S}e\ell}_{\mathscr{F}^{n}_{\overline{b}}}$ is the middle convolution $\operatorname{MC}_{\chi}((h_{*}\mathscr{F}^{\prime})|_{W})$ of the irreducible sheaf $(h_{*}\mathscr{F}^{\prime})|_{W}$, we obtain that $\phi^{*}{\mathcal{S}e\ell}_{\mathscr{F}^{n}_{\overline{b}}}$ is irreducible. Here we are using that the middle convolution of an irreducible sheaf is irreducible. This holds because middle convolution is invertible, and hence sends irreducible objects to irreducible objects. A proof is given in [Kat96, Theorem 3.3.3(2d)] for $\overline{\mathbb{Q}}_{\ell}$ sheaves, but the same proof works for sheaves of $\mathbb{Z}/\ell\mathbb{Z}$ modules. (See also [Det08, Corollary 1.6.4] for a proof in the characteristic $0$ setting.) We may moreover compute the monodromy of $\phi^{*}{\mathcal{S}e\ell}_{\mathscr{F}^{n}_{\overline{b}}}$ at the geometric points of $\mathbb{A}^{1}-W$. At branch points of $h$, the monodromy is unipotent via the calculation done in [Kat02, Proposition 5.3.6]. At the other geometric points of $\mathbb{A}^{1}-W$ the calculation is the same as in the proof of [Hal08, Theorem 6.3 and Lemma 6.5]. In particular, at each of the geometric points of $h(D)$, the monodromy is also unipotent. This is explained in [Kat02, Proposition 5.3.6], where it is also shown that $\mathrm{Drop}_{y}(\phi^{*}{\mathcal{S}e\ell}_{\mathscr{F}^{n}_{\overline{b}}})\leq 2r$ at all such geometric points $y\in\mathbb{A}^{1}-W$. We conclude by verifying the three hypotheses of [Hal08, Theorem 3.1], whose conclusion implies the statement of the theorem we are proving. In particular, the sheaf $\phi^{*}{\mathcal{S}e\ell}_{\mathscr{F}^{n}_{\overline{b}}}$ is generated by the inertia around $\operatorname{br}(h),h(Z_{x}),$ and $h(D_{x}-\delta)$.
We need to verify hypotheses $(i),(ii),$ and $(iii)$ of [Hal08, Theorem 3.1], as well as show the image of monodromy contains a reflection and an isotropic shear, in the language of [Hal08, p. 185]. We claim the local monodromy around a point of $h(Z_{x})\subset W$ over which $A_{x}$ has toric part of dimension $1$ acts as a reflection, while the local monodromy around a point of $h(D_{x})$ acts as an isotropic shear. These claims are proven in the case of elliptic curves in [Hal08, Lemma 6.5] and the proof for higher dimensional abelian varieties is analogous. In order to verify $(i)$, take the value labeled $r$ in [Hal08, Theorem 3.1] to be what we are calling $2r=2(\dim A-\dim U_{\overline{b}})$. Maintaining our notation, we have seen above that the images of inertia around the above mentioned geometric points $y\in S:=\mathbb{A}^{1}-W$ generate an irreducible representation, and satisfy $\mathrm{Drop}_{y}(\phi^{*}{\mathcal{S}e\ell}_{\mathscr{F}^{n}_{\overline{b}}})\leq 2(\dim A-\dim U_{\overline{b}})$. This verifies [Hal08, Theorem 3.1(i)]. Taking $S_{0}\subset S$ to be the subset of the $f+1$ geometric points over $h(Z)$, we find $2(2r+1)(\#S_{0}({\overline{b}}))\leq\dim V$ by rearranging the assumption that $\displaystyle n>\frac{2(2r+1)(f+1)-\sum_{y\in Z({\overline{b}})}\mathrm{Drop}_{y}(\mathscr{F})}{2r}-(2g-2),$ using our computation for the dimension of $V$ from 5.2.6. This verifies [Hal08, Theorem 3.1(ii)]. Finally, every $\gamma\in S-S_{0}$ has unipotent monodromy, as we showed above. Hence, the image of every $\gamma\in S-S_{0}$ has order a power of $\ell$, so has order prime to $(2r+1)!$ whenever $\ell>2r+1$. This verifies [Hal08, Theorem 3.1(iii)]. Applying [Hal08, Theorem 3.1] gives the result. ∎ ### 7.2. Computing the monodromy for prime-power $\nu$ Our next goal is to generalize Theorem 7.1.1 to prime power $\nu$, and then to general composite $\nu$. Our short-term aim is to prove 7.2.2, which will imply that if we have big monodromy $\bmod\ell$, we also have big monodromy $\bmod\ell^{j}$ for any integer $j>0$. ###### Definition 7.2.1. Suppose $Q$ is a quadratic form over $\mathbb{Z}/\ell^{k}\mathbb{Z}$. The Lie algebra $\mathfrak{so}(Q)(\mathbb{F}_{\ell})$ is by definition $\ker(\operatorname{SO}(Q)(\mathbb{Z}/\ell^{2}\mathbb{Z})\to\operatorname{SO}(Q)(\mathbb{Z}/\ell\mathbb{Z}))$. We thank Eric Rains for help with the following proof. ###### Proposition 7.2.2. Let $s\geq 3$ and let $\ell\geq 5$ be a prime. Let $(V,Q)$ be a non-degenerate quadratic space of rank $s$ over $\mathbb{Z}/\ell\mathbb{Z}$. Suppose $G\subset\Omega(Q)(\mathbb{Z}/\ell^{j}\mathbb{Z})$ is a subgroup so that the composition $G\to\Omega(Q)(\mathbb{Z}/\ell^{j}\mathbb{Z})\to\Omega(Q)(\mathbb{Z}/\ell\mathbb{Z})$ is surjective. Then, $G=\Omega(Q)(\mathbb{Z}/\ell^{j}\mathbb{Z})$. ###### Proof. This is a special case of [Vas03, Theorem 1.3(a)]. Since there are a few mistakes in other parts of that theorem statement (though not in the part relevant to the proposition we’re proving), we spell out a few more details here. The argument proceeds as indicated in the second to last paragraph of [Vas03, p. 327]. First, as in [Vas03, Lemma 4.1.2] we can reduce to the case $j=2$. To deal with the case $j=2$, it is enough to show $G$ meets the Lie algebra $\mathfrak{so}(Q)(\mathbb{F}_{\ell})$ nontrivially, as argued in [Vas03, 4.4.1]. Finally, in [Vas03, Theorem 4.5] it is shown that $G$ meets the Lie algebra nontrivially. ∎ ### 7.3.
Bootstrapping to general composite $\nu$ We next collect a few lemmas to bootstrap from showing there is big monodromy modulo prime powers, to showing there is big monodromy modulo composite integers. The main result is 7.3.3. The general strategy will be to apply Goursat’s lemma. A key input in Goursat’s lemma is to understand which simple groups appear as subquotients of orthogonal groups. As a first step, using 7.2.2, we can prove $\Omega(Q)(\mathbb{Z}/\nu\mathbb{Z})$ is perfect. ###### Lemma 7.3.1. For $s\geq 3$, $\nu$ a positive integer, and $(V,Q)$ a non-degenerate quadratic space of rank $s$ over $\mathbb{Z}/\nu\mathbb{Z}$, $\Omega(Q)(\mathbb{Z}/\nu\mathbb{Z})$ is perfect. That is, $\Omega(Q)(\mathbb{Z}/\nu\mathbb{Z})$ is its own commutator subgroup. ###### Proof. Write $\nu=\prod_{i=1}^{t}\ell_{i}^{a_{i}}$ for $\ell_{i}$ pairwise distinct primes. Note $\Omega(Q)(\mathbb{Z}/\ell_{i}\mathbb{Z})$ is perfect as shown in [Wil09, p. 73, lines 2-7]. Then, since the commutator subgroup $\displaystyle\left[\Omega(Q)(\mathbb{Z}/\ell_{i}^{a_{i}}\mathbb{Z}),\Omega(Q)(\mathbb{Z}/\ell_{i}^{a_{i}}\mathbb{Z})\right]\subset\Omega(Q)(\mathbb{Z}/\ell_{i}^{a_{i}}\mathbb{Z})$ is a subgroup of $\Omega(Q)(\mathbb{Z}/\ell_{i}^{a_{i}}\mathbb{Z})$ surjecting onto $\Omega(Q)(\mathbb{Z}/\ell_{i}\mathbb{Z})$, it must be all of $\Omega(Q)(\mathbb{Z}/\ell_{i}^{a_{i}}\mathbb{Z})$ by 7.2.2. Finally, as the formation of commutator subgroups commutes with products, and $\Omega(Q)(\mathbb{Z}/\nu\mathbb{Z})=\prod_{i=1}^{t}\Omega(Q)(\mathbb{Z}/\ell_{i}^{a_{i}}\mathbb{Z})$, it follows that $\Omega(Q)(\mathbb{Z}/\nu\mathbb{Z})$ is its own commutator subgroup. ∎ The next result lets us relate monodromy for prime power $\nu$ to the monodromy for general composite $\nu$. ###### Proposition 7.3.2. Let $s\geq 5$. Let $(V,Q)$ be a non-degenerate quadratic space of rank $s$ over $\mathbb{Z}/\nu\mathbb{Z}$. Suppose $G\subset\Omega(Q)(\mathbb{Z}/\nu\mathbb{Z})$ is a subgroup so that for each prime $\ell\mid\nu$, the composition $G\to\Omega(Q)(\mathbb{Z}/\nu\mathbb{Z})\to\Omega(Q)(\mathbb{Z}/\ell\mathbb{Z})$ is surjective. Then, $G=\Omega(Q)(\mathbb{Z}/\nu\mathbb{Z})$. ###### Proof. We have already proven this in the case $\nu$ is a prime power in 7.2.2. It now remains to deal with general composite $\nu$. To this end, write $\nu=\prod_{i=1}^{t}\ell_{i}^{a_{i}}$, for $\ell_{i}$ pairwise distinct primes. The proposition follows from an application of Goursat’s lemma, as we now explain. We will show that the groups $\Omega(Q)(\mathbb{Z}/\ell_{i}^{a_{i}}\mathbb{Z})$ for $1\leq i\leq t$ satisfy the following two properties: $(1)$ they have trivial abelianization and $(2)$ they have no finite non-abelian simple quotients in common. These two facts verify the hypotheses of Goursat’s lemma as stated in [Gre10, Proposition 2.5], which implies that $G=\prod_{i=1}^{t}\Omega(Q)\left(\mathbb{Z}/\ell_{i}^{a_{i}}\mathbb{Z}\right)=\Omega(Q)(\mathbb{Z}/\nu\mathbb{Z})$. It remains to verify $(1)$ and $(2)$. Observe that $(1)$ follows from 7.3.1. To conclude our proof, we only need to check $(2)$: that the groups $\Omega(Q)(\mathbb{Z}/\ell_{i}^{a_{i}}\mathbb{Z})$ for $1\leq i\leq t$ have no finite non-abelian simple quotients in common. For $G^{\prime}$ a group, let $\operatorname{Quo}(G^{\prime})$ denote the set of finite simple non-abelian quotients of $G^{\prime}$.
To prove $(2)$, it suffices to show $\operatorname{Quo}(\Omega(Q)(\mathbb{Z}/\ell_{i}^{a_{i}}\mathbb{Z}))=\left\{\mathbb{P}\Omega(Q)(\mathbb{Z}/\ell_{i}\mathbb{Z})\right\}.$ Note that the latter group is indeed simple by [Wil09, 3.7.3 and 3.8.2], using that $s\geq 5$. So, we now check $\operatorname{Quo}(\Omega(Q)(\mathbb{Z}/\ell_{i}^{a_{i}}\mathbb{Z}))=\left\{\mathbb{P}\Omega(Q)(\mathbb{Z}/\ell_{i}\mathbb{Z})\right\}.$ Since every finite simple quotient appears as some Jordan–Hölder factor, it suffices to check that all simple Jordan–Hölder factors of $\Omega(Q)(\mathbb{Z}/\ell_{i}^{a_{i}}\mathbb{Z})$ are contained in $\{\mathbb{P}\Omega(Q)(\mathbb{Z}/\ell_{i}\mathbb{Z}),\mathbb{Z}/\ell_{i}\mathbb{Z},\mathbb{Z}/2\mathbb{Z}\}.$ To see this, consider the surjections $\Omega(Q)(\mathbb{Z}/\ell_{i}^{a_{i}}\mathbb{Z})\to\Omega(Q)(\mathbb{Z}/\ell_{i}^{a_{i}-1}\mathbb{Z})\to\cdots\to\Omega(Q)(\mathbb{Z}/\ell_{i}^{2}\mathbb{Z})\to\Omega(Q)(\mathbb{Z}/\ell_{i}\mathbb{Z})\to\left\{\operatorname{\mathrm{id}}\right\}$. From these surjections, we obtain an associated filtration. The Jordan–Hölder factors associated to any refinement of this filtration will all lie in $\{\mathbb{P}\Omega(Q)(\mathbb{Z}/\ell_{i}\mathbb{Z}),\mathbb{Z}/\ell_{i}\mathbb{Z},\mathbb{Z}/2\mathbb{Z}\}$ since the kernels of all maps but the last are products of $\mathbb{Z}/\ell_{i}\mathbb{Z}$. ∎ ###### Proposition 7.3.3. Keep assumptions as in 7.1.4. Suppose ${\overline{b}}\to B$ is a geometric point. If (7.1) $\displaystyle n>\max\left(2,2g,\frac{2(2r+1)(f+1)-\sum_{y\in Z({\overline{b}})}\mathrm{Drop}_{y}(\mathscr{F})}{2r}-(2g-2)\right),$ then the geometric monodromy representation $\rho_{\mathscr{F}_{\overline{b}}^{n}}:\pi_{1}(\operatorname{QTwist}^{n}_{U_{\overline{b}}/\overline{b}})\to\operatorname{Aut}(V_{\mathscr{F}_{\overline{b}}^{n}})$ satisfies $\Omega(Q_{\mathscr{F}_{\overline{b}}^{n}})\subset\operatorname{im}(\rho_{\mathscr{F}_{\overline{b}}^{n}})\subset\operatorname{O}(Q_{\mathscr{F}_{\overline{b}}^{n}})$ and $\operatorname{im}(\rho_{\mathscr{F}^{n}_{\overline{b}}})\not\subset\operatorname{SO}(Q_{\mathscr{F}^{n}_{\overline{b}}})$. ###### Proof. We have seen in 7.1.3 that $\operatorname{im}(\rho_{\mathscr{F}_{\overline{b}}^{n}})\subset\operatorname{O}(Q_{\mathscr{F}_{\overline{b}}^{n}})$ holds. By Theorem 7.1.1, we know $\Omega(Q_{\mathscr{F}[\ell]^{n}})\subset\operatorname{im}(\rho_{\mathscr{F}_{\overline{b}}[\ell]^{n}})$ for each prime $\ell\mid\nu$. It follows from 7.3.2 that $\Omega(Q_{\mathscr{F}^{n}_{\overline{b}}})\subset\operatorname{im}(\rho_{\mathscr{F}_{\overline{b}}^{n}})$. Note that since $n>2$, the formula for the rank of $V_{\mathscr{F}^{n}_{\overline{b}}}$ from 5.2.6 shows it is at least $5$, so the hypotheses of 7.3.2 are satisfied. From Theorem 7.1.1, we also find that $\operatorname{im}(\rho_{\mathscr{F}^{n}_{\overline{b}}})\not\subset\operatorname{SO}(Q_{\mathscr{F}^{n}_{\overline{b}}})$. ∎ ### 7.4. Understanding the image of the Dickson invariant map Having shown that the image of monodromy is close to the orthogonal group, and in particular contains $\Omega(Q_{\mathscr{F}^{n}_{B}})$, we can understand its failure to equal the orthogonal group in terms of the spinor norm and the Dickson invariant. The spinor norm will not have much effect on the distribution of Selmer elements, but the Dickson invariant will have a huge effect, and is closely connected to the parity of the rank of $A$ in the case $\mathscr{F}_{b}\simeq A[\nu],$ for $A\to U$ an abelian scheme as in 5.1.9.
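To orient the reader (standard facts, recalled here only for convenience): since $\nu$ is odd, for each $\ell\mid\nu$ the $\ell$-component of the Dickson invariant may be computed from the determinant of the reduction mod $\ell$, via $(-1)^{D_{Q}(g)}=\det(g)$, so it vanishes exactly on the special orthogonal group. Its relevance to ranks comes through the parity relation of 2.1.3: for $\nu=\ell$ prime, $\displaystyle\dim\ker(g-\operatorname{\mathrm{id}})\equiv\operatorname{rk}V-D_{Q}(g)\pmod{2},$ which will let us read off the parity of Selmer ranks from Dickson invariants of Frobenius elements below.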
In the remainder of this section, specifically 7.4.6, we precisely determine the image of the Dickson invariant under the arithmetic monodromy representation $\rho_{\mathscr{F}^{n}_{b}}$. Our strategy for determining the arithmetic monodromy will be to use equidistribution of Frobenius elements, and compute images of Frobenius elements by relating them to Selmer groups. The following notation for the distribution of Selmer groups will make it convenient to express the types of Selmer groups which appear. ###### Definition 7.4.1. Keep assumptions as in 5.1.4 and 5.1.9, and assume that $B$ is a local scheme so that $b\in B$ is the unique closed point and has residue field contained in $\mathbb{F}_{q}$. In particular, $\mathscr{F}_{b}\simeq A[\nu]$ for $A\to U_{b}$ a polarized abelian scheme with polarization degree prime to $\nu$. Let $\mathcal{N}$ denote the set of isomorphism classes of finite $\mathbb{Z}/\nu\mathbb{Z}$ modules. Let $X_{A[\nu]^{n}_{\mathbb{F}_{q}}}$ denote the probability distribution on $\mathcal{N}$ defined by $\displaystyle\operatorname{Prob}\left(X_{A[\nu]^{n}_{\mathbb{F}_{q}}}=H\right)=\frac{\#\{x\in\operatorname{QTwist}^{n}_{U_{b}/b}(\mathbb{F}_{q}):\operatorname{Sel}_{\nu}(A_{x})\simeq H\}}{\#\operatorname{QTwist}^{n}_{U_{b}/b}(\mathbb{F}_{q})}.$ Here, as usual, point counts of stacks are weighted by the inverse of the order of the isotropy group at each point. For $i\in\{0,1\}$, let $\mathcal{N}^{i}\subset\mathcal{N}$ denote the subset of those $H$ so that there exists some $\mathbb{Z}/\nu\mathbb{Z}$ module $G$ such that $H\simeq(\mathbb{Z}/\nu\mathbb{Z})^{i}\times G^{2}$. Given $H\in\mathcal{N}^{i}$, define $\displaystyle\operatorname{Prob}\left(X^{i}_{A[\nu]^{n}_{\mathbb{F}_{q}}}=H\right)=\frac{\#\{x\in\operatorname{QTwist}^{n}_{U_{b}/b}(\mathbb{F}_{q}):\operatorname{Sel}_{\nu}(A_{x})\simeq H\}}{\#\{x\in\operatorname{QTwist}^{n}_{U_{b}/b}(\mathbb{F}_{q}):\operatorname{Sel}_{\nu}(A_{x})\in\mathcal{N}^{i}\}}.$ The next two lemmas give the key constraint on Tate-Shafarevich groups and Selmer groups we will use to determine the image of the Dickson invariant. It is one of the few places in this paper that the arithmetic of abelian varieties comes crucially into play. ###### Lemma 7.4.2. Let $\nu$ be an odd positive integer. Let $K$ be the function field of a curve over a finite field, and let $A$ be an abelian variety over $K$ with a polarization of degree prime to $\nu$. Then, there is a finite $\mathbb{Z}/\nu\mathbb{Z}$ module $G$ so that either $\Sha(A)[\nu]\simeq G^{2}$ or $\Sha(A)[\nu]\simeq G^{2}\oplus\mathbb{Z}/\nu\mathbb{Z}$. ###### Remark 7.4.3. If we assume the BSD conjecture, $\Sha(A)$ will be finite and then the assumptions that the polarization has degree prime to $\nu$ and $\nu$ is odd will imply $\Sha(A)[\nu]$ has square order. ###### Remark 7.4.4. The condition that the polarization has degree prime to $\nu$ is important here: In general, even when the Tate-Shafarevich group is known to be finite, it can fail to be a square or twice a square, see [CLQR04, p. 278, Theorem 1.4]. ###### Proof. To approach this, we first review some general facts about the structure of the Tate-Shafarevich group. We can write $\Sha(A)[\ell^{\infty}]\simeq(\mathbb{Q}_{\ell}/\mathbb{Z}_{\ell})^{r_{\ell}}\oplus K_{\ell}$, where $K_{\ell}$ is a finite group and $r_{\ell}$ is the corank of $\Sha(A)[\ell^{\infty}]$. Note that the BSD conjecture would imply $r_{\ell}=0$, but we will not use this.
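Unwinding the $\nu$-torsion in this decomposition (a routine check): writing $\nu=\prod_{\ell\mid\nu}\ell^{a_{\ell}}$, we have $(\mathbb{Q}_{\ell}/\mathbb{Z}_{\ell})[\ell^{a_{\ell}}]\simeq\mathbb{Z}/\ell^{a_{\ell}}\mathbb{Z}$, and hence $\displaystyle\Sha(A)[\nu]\simeq\bigoplus_{\ell\mid\nu}\left(\left(\mathbb{Z}/\ell^{a_{\ell}}\mathbb{Z}\right)^{r_{\ell}}\oplus K_{\ell}[\ell^{a_{\ell}}]\right).$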
We next claim that $\oplus_{\ell\mid\nu}K_{\ell}[\ell^{a_{\ell}}]\simeq G_{\operatorname{nd}}^{2}$, for some finite $\mathbb{Z}/\nu\mathbb{Z}$ module $G_{\operatorname{nd}}$. Indeed, let $\Sha(A)[\nu]_{\operatorname{nd}}$ denote the non-divisible part of $\Sha(A)[\nu]$. Then, $\Sha(A)[\nu]_{\operatorname{nd}}$ has a nondegenerate pairing, by [Tat63, Theorem 3.2], which is antisymmetric by [Fla90, Theorem 1]. Since $\nu$ is odd, any finite $\mathbb{Z}/\nu\mathbb{Z}$ module with a nondegenerate antisymmetric pairing is a square, so there is some $\mathbb{Z}/\nu\mathbb{Z}$ module $G_{\operatorname{nd}}$ with $\Sha(A)[\nu]_{\operatorname{nd}}\simeq G_{\operatorname{nd}}^{2}$. We now conclude the proof. By [TY14, Corollary 1.0.3], $r_{\ell}$ has parity independent of $\ell$. Write $\nu=\prod_{\ell\mid\nu}\ell^{a_{\ell}}$, and take $G=G_{\operatorname{nd}}\oplus\bigoplus_{\ell\mid\nu}(\mathbb{Z}/\ell^{a_{\ell}}\mathbb{Z})^{\lfloor\frac{r_{\ell}}{2}\rfloor}$. We get $\Sha(A)[\nu]\simeq G^{2}$ if $r_{\ell}$ is even for all $\ell\mid\nu$. Similarly, we get $\Sha(A)[\nu]\simeq G^{2}\oplus\mathbb{Z}/\nu\mathbb{Z}$ if $r_{\ell}$ is odd for all $\ell\mid\nu$. ∎ ###### Lemma 7.4.5. Maintain hypotheses from 5.1.4 and notation from 7.4.1. Assume $\nu$ is odd, $n>0$, and $B$ is an integral affine scheme with $2\nu$ invertible on $B$. Let $b\in B$ be a closed point over which $\mathscr{F}_{b}\simeq A[\nu]$, for $A\to U_{b}$ an abelian scheme, as in 5.1.9. The distributions $X_{A[\nu]^{n}_{\mathbb{F}_{q}}}$ are supported on $\mathcal{N}^{0}\coprod\mathcal{N}^{1}$. Hence, (7.2) $X_{A[\nu]^{n}_{\mathbb{F}_{q}}}=\operatorname{Prob}(X_{A[\nu]^{n}_{\mathbb{F}_{q}}}\in\mathcal{N}^{0})\cdot X^{0}_{A[\nu]^{n}_{\mathbb{F}_{q}}}+\operatorname{Prob}(X_{A[\nu]^{n}_{\mathbb{F}_{q}}}\in\mathcal{N}^{1})\cdot X^{1}_{A[\nu]^{n}_{\mathbb{F}_{q}}}.$ ###### Proof. The claim (7.2) follows from the first claim about the support of $X_{A[\nu]^{n}_{\mathbb{F}_{q}}}$ by the law of total probability. We now verify $X_{A[\nu]^{n}_{\mathbb{F}_{q}}}$ are supported on $\mathcal{N}^{0}\coprod\mathcal{N}^{1}$. Using notation as in 5.1.11, it is enough to show the Selmer group of any quadratic twist $A_{x}$ of $A$ lies in $\mathcal{N}^{0}$ or $\mathcal{N}^{1}$. In general, there is an exact sequence (7.3) $\displaystyle 0\to A_{x}(U_{x})/\nu A_{x}(U_{x})\to\operatorname{Sel}_{\nu}(A_{x})\to\Sha(A_{x})[\nu]\to 0.$ By 7.4.2, $\Sha(A_{x})[\nu]$ lies in $\mathcal{N}^{0}\coprod\mathcal{N}^{1}$. By 5.2.6(2’), $A_{x}(U_{x})[\nu]=0$, which implies that $A_{x}(U_{x})/\nu A_{x}(U_{x})$ is a free $\mathbb{Z}/\nu\mathbb{Z}$ module. Hence, since $\mathbb{Z}/\nu\mathbb{Z}$ is injective as a $\mathbb{Z}/\nu\mathbb{Z}$ module, the exact sequence (7.3) splits and we obtain $\operatorname{Sel}_{\nu}(A_{x})\simeq A_{x}(U_{x})/\nu A_{x}(U_{x})\oplus\Sha(A_{x})[\nu]$. Now, we see that since $\Sha(A_{x})[\nu]\in\mathcal{N}^{0}\coprod\mathcal{N}^{1}$ and $A_{x}(U_{x})/\nu A_{x}(U_{x})$ is a free $\mathbb{Z}/\nu\mathbb{Z}$ module, $\operatorname{Sel}_{\nu}(A_{x})\in\mathcal{N}^{0}\coprod\mathcal{N}^{1}$. ∎ Finally, we are prepared to compute the image of the Dickson invariant map. ###### Lemma 7.4.6. Assume $\nu$ is odd, $n>0$, and $B$ is an integral affine scheme with $2\nu$ invertible on $B$. Suppose $b\in B$ is a closed point with finite residue field, and keep hypotheses as in 5.1.4, 7.1.4. Assume there is an abelian scheme $A\to U_{b}$ so that $\mathscr{F}_{b}\simeq A[\nu]$, as in 5.1.9.
The Dickson invariant map $D_{Q_{\mathscr{F}^{n}_{b}}}:\operatorname{O}(Q_{\mathscr{F}^{n}_{b}})\to\prod_{\ell\mid\nu}\mathbb{Z}/2\mathbb{Z}$ sends the arithmetic monodromy group $\operatorname{im}(\rho_{\mathscr{F}^{n}_{b}})$ surjectively onto the diagonal copy of $\mathbb{Z}/2\mathbb{Z}$, i.e., the image of the diagonal embedding $\Delta_{\mathbb{Z}/2\mathbb{Z}}:\mathbb{Z}/2\mathbb{Z}\hookrightarrow\prod_{\ell\mid\nu}\mathbb{Z}/2\mathbb{Z}$. The same holds for the geometric monodromy group at a geometric point $\overline{b}$ over $b$. ###### Proof. First, we argue it suffices to show the Dickson invariant of the arithmetic monodromy group satisfies $\operatorname{im}(D_{Q_{\mathscr{F}^{n}_{b}}}\circ\rho_{\mathscr{F}^{n}_{b}})\subset\operatorname{im}\Delta_{\mathbb{Z}/2\mathbb{Z}}.$ Indeed, for $\overline{b}$ a geometric point over $b$, the image of the arithmetic monodromy group under $D_{Q_{\mathscr{F}^{n}_{b}}}$ contains the image of the geometric monodromy group under $D_{Q_{\mathscr{F}^{n}_{\overline{b}}}}$. Assuming we have shown the arithmetic monodromy has image contained in the diagonal $\mathbb{Z}/2\mathbb{Z}$ under the Dickson invariant map, to show they are equal, it is enough to show the geometric monodromy has nontrivial image under the Dickson invariant map. Equivalently, we wish to show the geometric monodromy is not contained in the special orthogonal group, which follows from Theorem 7.1.1. We now verify the arithmetic monodromy group has Dickson invariant contained in $\operatorname{im}\Delta_{\mathbb{Z}/2\mathbb{Z}}.$ The strategy will be to use 7.4.5 to determine the arithmetic monodromy by relating the Dickson invariant map to the parity of the rank of Selmer groups modulo different primes, using equidistribution of Frobenius. Choose $x\in\operatorname{QTwist}^{n}_{U_{b}/b}(\mathbb{F}_{q})$. As a first step, we identify $\operatorname{Sel}_{\nu}(A_{x})$ with the $1$-eigenspace of $\rho_{\mathscr{F}^{n}_{b}}(\operatorname{Frob}_{x})$, for $\operatorname{Frob}_{x}$ the geometric Frobenius at $x$. With notation as in 5.3.2, we can identify $\pi^{-1}(x)(\mathbb{F}_{q})\simeq\operatorname{Sel}_{\nu}(A_{x})$. Since $\pi^{-1}(x)(\mathbb{F}_{q})$ can be identified with the $\operatorname{Frob}_{x}$ invariants of $\pi^{-1}(x)(\overline{\mathbb{F}}_{q})$, if $g_{x}:=\rho_{\mathscr{F}^{n}_{b}}(\operatorname{Frob}_{x})$, we also have $\pi^{-1}(x)(\mathbb{F}_{q})\simeq\ker(g_{x}-\operatorname{\mathrm{id}})$. Combining these two isomorphisms, we obtain $\ker(g_{x}-\operatorname{\mathrm{id}})\simeq\operatorname{Sel}_{\nu}(A_{x})$. For $\ell\mid\nu$, we use $g_{x,\ell}$ to denote the image of $g_{x}$ under the map ${\rm{O}}(Q_{\mathscr{F}^{n}_{b}})\to{\rm{O}}(Q_{\mathscr{F}^{n}_{b}[\ell]})$. We similarly obtain $\ker(g_{x,\ell}-\operatorname{\mathrm{id}})\simeq\operatorname{Sel}_{\ell}(A_{x})$. We next constrain the image of the Dickson invariant map applied to $\rho_{\mathscr{F}^{n}_{b}}(\operatorname{Frob}_{x})$. From 7.4.5, we have seen that $\ker(g_{x}-\operatorname{\mathrm{id}})\simeq\operatorname{Sel}_{\nu}(A_{x})\in\mathcal{N}^{0}\coprod\mathcal{N}^{1}$, for $\mathcal{N}^{i}$ defined in 7.4.1. Since the parity of the rank of $H/\ell H$ for any group $H$ in $\mathcal{N}^{0}\coprod\mathcal{N}^{1}$ is independent of the prime $\ell\mid\nu$, it follows that $\dim\ker(g_{x,\ell}-\operatorname{\mathrm{id}})$ has parity independent of $\ell$, for $\ell\mid\nu$.
By 2.1.3, we find

$\displaystyle\dim\ker(g_{x,\ell}-\operatorname{\mathrm{id}})\equiv\operatorname{rk}V_{\mathscr{F}^{n}_{b}[\ell]}-D_{Q_{\mathscr{F}^{n}_{b}}}(g_{x,\ell})\bmod 2.$

Since $\operatorname{rk}V_{\mathscr{F}^{n}_{b}[\ell]}$ is independent of $\ell\mid\nu$, as $V_{\mathscr{F}^{n}_{b}}$ is a free $\mathbb{Z}/\nu\mathbb{Z}$ module, we also obtain that $D_{Q_{\mathscr{F}^{n}_{b}}}(g_{x,\ell})$ is independent of $\ell\mid\nu$. In other words, the Dickson invariant map factors through the diagonal copy of $\mathbb{Z}/2\mathbb{Z}$ for each Frobenius element associated to $x\in\operatorname{QTwist}^{n}_{U_{b}/b}(\mathbb{F}_{q})$. The lemma will now follow from equidistribution of Frobenius elements in the arithmetic fundamental group, as we next explain.

At this point, we employ a result on equidistribution of Frobenius, whose precise form we could not find directly in the literature. The result is essentially [Cha97, Theorem 4.1] (see also [Kow06, Theorem 1] and [FLR23, Theorem 3.9]), except that we need a slightly more general statement which also applies to Deligne-Mumford stacks in place of only schemes. The only part of the proof of [Cha97, Theorem 4.1] which does not directly apply to stacks is its use of the Grothendieck-Lefschetz trace formula, but this has been generalized to hold in the context of stacks, see [Sun12, Theorem 4.2]. Using this, we can find a sufficiently large $q$ and $x\in\operatorname{QTwist}^{n}_{U_{b}/b}(\mathbb{F}_{q})$ with the following property: the generator $\operatorname{Frob}_{x}$ of $\pi_{1}(x)$ is sent to any particular element of $\operatorname{im}(D_{Q_{\mathscr{F}^{n}_{b}}}\circ\rho_{\mathscr{F}^{n}_{b}})$ under the composition $\pi_{1}(x)\to\pi_{1}(\operatorname{QTwist}^{n}_{U_{b}/b})\xrightarrow{D_{Q_{\mathscr{F}^{n}_{b}}}\circ\rho_{\mathscr{F}^{n}_{b}}}\prod_{\ell\mid\nu}\mathbb{Z}/2\mathbb{Z}$. For our choice of $q$ above, note that we may need to take $q$ suitably large, and if $q=p^{j}$ for $p=\operatorname{char}\mathbb{F}_{q}$, we may need to impose a congruence condition on $j$. Therefore, since every $\operatorname{Frob}_{x}$ has image contained in the diagonal $\mathbb{Z}/2\mathbb{Z}$, the same must be true of $\operatorname{im}(D_{Q_{\mathscr{F}^{n}_{b}}}\circ\rho_{\mathscr{F}^{n}_{b}})$. ∎

## 8\. The rank double cover

Perhaps surprisingly, the distribution of Selmer groups of abelian varieties is not determined by its moments. As mentioned in the introduction, if one fixes the parity of the rank of $\operatorname{Sel}_{\ell}$, this does not change the distribution of Selmer groups. Even more surprisingly, once one does condition on the parity of the rank of $\operatorname{Sel}_{\ell}$, the BKLPR distribution is determined by its moments. In this section, we investigate the geometry associated to a certain double cover of $\operatorname{QTwist}^{n}_{U/B}$, which we define in § 8.1. In § 8.2, we will use our homological stability machinery to bound the dimensions of the cohomology of this double cover. In § 8.3, we relate this double cover to the parity of the dimension of $\operatorname{Sel}_{\ell}$ of an abelian variety. Specifically, suppose we are given a symplectically self-dual sheaf $\mathscr{F}$ on $U$, and a point $b\in B$ with $\mathscr{F}_{b}\simeq A[\nu]$, for $A\to U_{b}$ an abelian scheme.
We will define a particular double cover $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}$ of $\operatorname{QTwist}^{n}_{U/B}$ so that the image of $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}_{b}}(\mathbb{F}_{q})\to\operatorname{QTwist}^{n}_{U_{b}/b}(\mathbb{F}_{q})$ corresponds precisely to abelian varieties whose rank has parity equal to $\operatorname{rk}V_{\mathscr{F}^{n}_{B}}\bmod 2$.

### 8.1. The rank double cover and its coefficient system

We now define the rank double cover, and subsequently proceed to show the sequence of rank double covers forms a coefficient system.

###### Definition 8.1.1.

With notation as in 7.1.1, let $\operatorname{pr}_{1}:\prod_{\ell\mid\nu}\mathbb{Z}/2\mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}$ denote the projection onto the first factor. We define $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}\to\operatorname{QTwist}^{n}_{U/B}$ as the finite étale double cover corresponding to the map $\operatorname{pr}_{1}\circ D_{Q_{\mathscr{F}^{n}_{B}}}\circ\rho_{\mathscr{F}^{n}_{B}}:\pi_{1}(\operatorname{QTwist}^{n}_{U/B})\to\mathbb{Z}/2\mathbb{Z}$.

In order to describe the coefficient system associated to the rank double cover, we first describe the coefficient system associated to Selmer spaces, and their $H$-moments.

###### Example 8.1.2.

Let $B=\operatorname{Spec}\mathbb{C}$ and let $\mathscr{F}$ be a symplectically self-dual sheaf over $U$ as in 5.1.4. Fix a nontrivial finite $\mathbb{Z}/\nu\mathbb{Z}$ module $H$. With notation as in 3.1.9, consider the coefficient system $H_{S_{\mathscr{F},H,g,f}}$ whose $n$th part is the free vector space generated by $S_{\mathscr{F}^{n}_{B},H,g,f}$, as we now define. Take $G_{H}:=\operatorname{\mathrm{A}^{\operatorname{H}}\mathrm{Sp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$, as in (6.7), and, with notation as in (6.7), take $c_{H}:=\Pi^{-1}(-\operatorname{\mathrm{id}})$. Take $S_{\mathscr{F}^{n}_{B},H,g,f}\subset\mathrm{Hom}(\pi_{1}(X^{\oplus n}\oplus A_{g,f}-x^{\oplus n},p_{g,f}),G_{H})$ to be the same subset $\mathcal{S}$ described in 6.4.1. (So, in the notation of 3.1.9, we are calling $S_{\mathscr{F}^{n}_{B},H,g,f}$ what we called $T^{n}_{G_{H},c_{H},g,f}$ in 3.1.9.) More precisely, $S_{\mathscr{F}^{n}_{B},H,g,f}\subset\mathrm{Hom}(\pi_{1}(X^{\oplus n}\oplus A_{g,f}-x^{\oplus n},p_{g,f}),G_{H})$ is the subset where the loops around the $n$ punctures lie in $c_{H}$, the image of the local inertia around the $f+1$ punctures is fixed under composition with $G_{H}\to\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$, and the image of any $\phi\in S_{\mathscr{F}^{n}_{B},H,g,f}$ under composition with $G_{H}\to\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})/\{\pm 1\}$ is independent of $\phi$. So long as we choose the basepoint $p_{g,f}$ to lie on the boundary of $A_{g,f}$, we can also restrict any homomorphism in $\mathrm{Hom}(\pi_{1}(X^{\oplus n}\oplus A_{g,f}-x^{\oplus n},p_{g,f}),G_{H})$ to a homomorphism in $\mathrm{Hom}(\pi_{1}(X^{\oplus n}-x^{\oplus n},p_{g,f}),G_{H})$. We denote by $S_{\mathscr{F}^{n}_{B},H,0,0}\subset\mathrm{Hom}(\pi_{1}(X^{\oplus n}-x^{\oplus n},p_{g,f}),G_{H})$ the restriction of $S_{\mathscr{F}^{n}_{B},H,g,f}$ to $\mathrm{Hom}(\pi_{1}(X^{\oplus n}-x^{\oplus n},p_{g,f}),G_{H})$. Define $H_{S_{\mathscr{F},H,0,0}}$ to be the associated coefficient system, whose $n$th piece is $H_{S_{\mathscr{F}^{n}_{B},H,0,0}}$, the free vector space generated by $S_{\mathscr{F}^{n}_{B},H,0,0}$. Take $V:=H_{S_{\mathscr{F},H,0,0}}$ and take $F:=H_{S_{\mathscr{F},H,g,f}}$.
We claim that $V$ forms a coefficient system for $\Sigma^{1}_{0,0}$ and $F$ forms a coefficient system for $\Sigma^{1}_{g,f}$ over $V$. Indeed, these sets $S_{\mathscr{F}^{n}_{B},H,g,f}$ are fixed under the action of $B^{n}_{g,f}$ by 6.4.3. Hence, they form coefficient systems by 3.1.9. We use the notation $\operatorname{Hur}_{S_{\mathscr{F}^{n}_{B},H,g,f}}$ to denote the finite unramified covering space over $\operatorname{Conf}^{n}_{X^{\oplus n}\oplus A_{g,f}}$ corresponding to the kernel of the finite image representation $\pi_{1}(\operatorname{Conf}^{n}_{X^{\oplus n}\oplus A_{g,f}},x^{\oplus n})\to\operatorname{Aut}(F_{n})$.

We next aim to define the coefficient system associated to the rank double cover. In order to define it and show the rank double cover is indeed a coefficient system, we will need some different ways of thinking about the rank double cover. As a first step to describing it explicitly, the rank double cover is a $\mathbb{Z}/2\mathbb{Z}$ gerbe over its coarse space, and the next two lemmas allow us to give some description of what this gerbe looks like.

###### Lemma 8.1.3.

Let $X$ be a finite type connected scheme over a finite type base $B$ on which $2$ is invertible. Let $\mathscr{X}$ denote a $\mu_{2}$ gerbe over $X$. Suppose we are given a finite étale double cover $\mathscr{Y}\to\mathscr{X}$. Then there is some $Y\to X$ so that $\mathscr{Y}\simeq Y\times_{X}\mathscr{X}$ if and only if $\mathscr{Y}\not\simeq X$. Moreover, if $\mathscr{Y}\simeq X$, the fiber of $\mathscr{Y}$ over the residual $B\mu_{2}$ gerbe at a geometric point of $\mathscr{X}$ is the spectrum of the residue field, while otherwise, the fiber over a residual gerbe at a geometric point is two copies of $B\mu_{2}$. In the case that $\mathscr{Y}$ is pulled back from $X$, $Y$ is the coarse space of $\mathscr{Y}$.

###### Remark 8.1.4.

In the statement of 8.1.3 that $\mathscr{Y}\simeq Y\times_{X}\mathscr{X}$, the implicit map $\mathscr{X}\to X$ is the map realizing $X$ as the coarse space of $\mathscr{X}$.

###### Proof.

First, suppose $\mathscr{Y}\simeq X$. Suppose, for the sake of contradiction, that some $Y$ exists so that $\mathscr{Y}$ is the pullback of $Y$. Since the composition $X\to\mathscr{X}\to X$ is the identity, we obtain a map $X\to Y$, which would force $Y=X\coprod X$. But the pullback $\mathscr{X}\times_{X}(X\coprod X)$ is not $X$, but rather $\mathscr{X}\coprod\mathscr{X}$. In this case, the section $X\to\mathscr{X}$ shows the gerbe is trivial, so $\mathscr{X}\simeq X\times_{B}B\mu_{2}$.

For the other case, suppose $\mathscr{Y}\to\mathscr{X}$ is a finite étale double cover, not isomorphic to $X$. Let $Y$ be the coarse space of $\mathscr{Y}$. If $\mathscr{Y}$ were a scheme, since it has a degree $2$ map to $\mathscr{X}$, it would have a degree $1$ map to $X$, forcing $\mathscr{Y}\simeq X$. Since $\mathscr{Y}$ is not isomorphic to $X$, it cannot be a scheme, and so must be a $\mu_{2}$ gerbe over its coarse space $Y$. This implies $Y\to X$ is a finite étale double cover. Then, $\mathscr{Y}=Y\times_{X}{\mathscr{X}}$, as may be verified on an étale cover of $X$ trivializing the gerbe $\mathscr{X}$. In this case, the fiber over the residual gerbe at a geometric point is identified with a gerbe over the fiber of $Y\to X$, and so is two copies of $B\mu_{2}$. ∎

###### Lemma 8.1.5.

In the setting of 8.1.1, suppose $B=\operatorname{Spec}k$, for $k$ a field of characteristic not $2$. If $\dim V_{\mathscr{F}^{n}_{B}}$ is odd, $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}$ is the coarse space of $\operatorname{QTwist}^{n}_{U/B}$.
If $\dim V_{\mathscr{F}^{n}_{B}}$ is even, this cover is pulled back from the coarse space of $\operatorname{QTwist}^{n}_{U/B}$. That is, letting $\operatorname{QTwist}^{\operatorname{coarse},n}_{U/B}$ denote the coarse space of $\operatorname{QTwist}^{n}_{U/B}$ and $\operatorname{QTwist}^{\operatorname{coarse},\operatorname{rk},n}_{\mathscr{F}}$ denote the coarse space of $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}$, there exists a finite étale double cover $\operatorname{QTwist}^{\operatorname{coarse},\operatorname{rk},n}_{\mathscr{F}}\to\operatorname{QTwist}^{\operatorname{coarse},n}_{U/B}$ so that

(8.1) $\begin{array}{ccc}\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}&\longrightarrow&\operatorname{QTwist}^{\operatorname{coarse},\operatorname{rk},n}_{\mathscr{F}}\\ \downarrow&&\downarrow\\ \operatorname{QTwist}^{n}_{U/B}&\longrightarrow&\operatorname{QTwist}^{\operatorname{coarse},n}_{U/B}\end{array}$

is a fiber square.

###### Proof.

First, to understand the relevance of the parity of the dimension of $V_{\mathscr{F}^{n}_{B}}$, we consider the action of the nontrivial element of the isotropy group at a geometric point $x\in\operatorname{QTwist}^{n}_{U/B}$. If this element of the isotropy group acts nontrivially on the double cover, the fiber of the double cover is a copy of the residue field, while if it acts trivially, the fiber is two copies of $B\mu_{2}$. The point $x$ corresponds to a double cover $X_{x}\to C_{x}$. The element of the isotropy group corresponds to the nontrivial automorphism of $X_{x}$ over $C_{x}$, which acts by $-1$ on $A_{x}$ from its definition as a quadratic twist 5.1.4. Hence, this automorphism also acts by $-1$ on $(V_{\mathscr{F}^{n}_{B}})_{x}=H^{1}(C_{x},(j_{x})_{*}A_{x})$, which is the fiber of $\operatorname{Sel}_{\mathscr{F}^{n}_{B}}$ over $x$. The induced action on the double cover corresponding to the Dickson invariant is therefore obtained from the determinant of multiplication by $-1$, which is $(-1)^{\dim V_{\mathscr{F}^{n}_{B}}}$. Hence, the Dickson invariant is trivial if $\dim V_{\mathscr{F}^{n}_{B}}$ is even, and nontrivial if $\dim V_{\mathscr{F}^{n}_{B}}$ is odd. By 8.1.3, when $\dim V_{\mathscr{F}^{n}_{B}}$ is odd, $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}$ is $\operatorname{QTwist}^{\operatorname{coarse},n}_{U/B}$. If instead $\dim V_{\mathscr{F}^{n}_{B}}$ is even, it follows from 8.1.3 that

$\displaystyle\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}\simeq\operatorname{QTwist}^{n}_{U/B}\times_{\operatorname{QTwist}^{\operatorname{coarse},n}_{U/B}}\operatorname{QTwist}^{\operatorname{coarse},\operatorname{rk},n}_{\mathscr{F}},$

as we wished to show. ∎

Next, it will be useful to have a description of how certain generators act on the rank double cover, in order to show it is a coefficient system over the trivial coefficient system for $\Sigma^{1}_{0,0}$. Using 8.1.5, when $\operatorname{rk}V_{\mathscr{F}^{n}_{B}}$ is odd, it is not too difficult to see that the rank double cover will correspond to a coefficient system. The trickier case to analyze is when $\operatorname{rk}V_{\mathscr{F}^{n}_{B}}$ is even, and the following lemma will help us with this.

###### Lemma 8.1.6.

We use notation for $s_{i}$ and $\gamma_{i}$ as in § 6.2.
For $n$ even, $B=\operatorname{Spec}\mathbb{C}$, and $\operatorname{rk}V_{\mathscr{F}^{n}_{B}}$ even, generators of $\pi_{1}(\operatorname{QTwist}^{\operatorname{coarse},\operatorname{rk},n}_{\mathscr{F}})$ mapping to the following generators of $\pi_{1}(\operatorname{Conf}^{n}_{\Sigma^{1}_{g,f}})$ act on $V_{\mathscr{F}^{n}_{B}}$ with the following Dickson invariants:

1. (1) There is a partition $\Delta_{0}\coprod\Delta_{1}=\{s_{1},\ldots,s_{f+1}\}$, not depending on $n$ or $j$, such that moving any of the $p_{j}$, for $1\leq j\leq n$, around any $s\in\Delta_{0}$ acts with trivial Dickson invariant, and moving it around any $s\in\Delta_{1}$ acts with nontrivial Dickson invariant.

2. (2) Moving $\gamma_{i}$ in a half-twist about $\gamma_{i+1}$ acts with trivial Dickson invariant.

We note that 8.1.6 can also be deduced directly from the explicit formula for the action of $\pi_{1}(\operatorname{Conf}^{n}_{\Sigma^{1}_{g,f}})$ on $\pi_{1}(\Sigma^{1}_{g,f})$. This action can be obtained from the presentation for $\pi_{1}(\operatorname{Conf}^{n}_{\Sigma^{1}_{g,f}})$ [Bel04, Theorem 1.1]. One can use this in conjunction with the description of $V_{\mathscr{F}^{n}_{B}}$ in 6.3.7 to verify 8.1.6 computationally. However, it seems the argument we give here is a bit simpler.

###### Proof.

First, we verify $(1)$. Fixing $n$, we explain independence of $j$. The double cover is described in terms of a surjection from a finite index subgroup of $B^{n}_{g,f}$ to $\mathbb{Z}/2\mathbb{Z}$. Whether the monodromy is trivial or nontrivial is only a function of the conjugacy class of the element. Since the loops sending $p_{j}$ around a fixed $s_{i}$ are all conjugate in $B^{n}_{g,f}$, as well as in the finite index subgroup $\pi_{1}(\operatorname{QTwist}^{\operatorname{coarse},\operatorname{rk},n}_{\mathscr{F}})$, we obtain independence of $j$. We can further obtain that this description is independent of $n$ by using that the coarse spaces of the covers $\operatorname{Sel}_{\mathscr{F}^{n}_{B}}$ correspond to the coefficient system as in 8.1.2, and hence restrict compatibly to subsurfaces for smaller values of $n$.

We now turn to part $(2)$. Fixing a point $[\mathscr{F}^{\prime}]\in\operatorname{QTwist}^{n}_{U/B}$, we use the description of torsors for $\mathscr{F}^{\prime}$ given in 6.3.7. Without resorting to the formulas present in [Bel04, Theorem 1.1], we know there must be some formula expressing the result of passing $\gamma_{i}$ in a half-twist about $\gamma_{i+1}$ as a product of matrix entries appearing in 6.3.7. Viewing each monodromy matrix associated to $\gamma_{i}$ as lying in $\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ and reducing to $\pm\operatorname{\mathrm{id}}$ in $\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$, we find that there is no interaction between the different basis vectors for the vector $v$ in the presentation (6.3). Therefore, the action can be viewed as a block diagonal matrix with $2r$ identical blocks, corresponding to the $2r$ entries of $v$. Since all of these blocks are the same, the determinant of the resulting matrix is the $2r$th power of the determinant of a single block, and therefore must be a square. Hence the Dickson invariant is trivial, as claimed. ∎

Building on 8.1.2, we next describe the coefficient system corresponding to the rank double cover.

###### Example 8.1.7.

For simplicity, we work over the complex numbers in this example.
With notation as in 3.1.9 and 8.1.2, define the coefficient system $H^{\operatorname{rk}}_{g,f}$ for $\Sigma_{g,f}^{1}$ over the trivial coefficient system $V$ from 3.1.11 as follows: let $H^{\operatorname{rk}}_{g,f}$ denote the coefficient system corresponding to the coarse space of the cover $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}\to\operatorname{Conf}^{n}_{U/B}$. In particular, $(H^{\operatorname{rk}}_{g,f})_{0}$ has dimension either $2^{2g}$ or $2^{2g+1}$, depending on whether $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}$ is the coarse space of $\operatorname{QTwist}^{n}_{U/B}$ or not, as in 8.1.5.

We next check this is indeed a coefficient system. Note that the coarse space of $\operatorname{Sel}_{\mathscr{F}^{n}_{B}}$ is identified with the finite étale cover of $\operatorname{Conf}^{n}_{U/B}$ corresponding to the coefficient system $H_{S_{\mathscr{F},H,g,f}}$ (after quotienting by the $G_{H}$ conjugation action, and taking the cover associated to the kernel of the $B^{n}_{g,f}$ representation) by 6.4.5. Moreover, depending on the parity of the rank of $V_{\mathscr{F}^{n}_{B}}$ (which is independent of $n$), it follows from 8.1.5 that $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}$ is either the coarse space of $\operatorname{QTwist}^{n}_{U/B}$, when $\dim V_{\mathscr{F}^{n}_{B}}$ is odd, or a double cover of the coarse space of $\operatorname{QTwist}^{n}_{U/B}$, when $\dim V_{\mathscr{F}^{n}_{B}}$ is even. The coarse space itself is expressible as a coefficient system for $\Sigma^{1}_{g,f}$ over the trivial coefficient system for $\Sigma^{1}_{0,0}$, so we now focus on the other case, that $\dim V_{\mathscr{F}^{n}_{B}}$ is even.

In the case that $\dim V_{\mathscr{F}^{n}_{B}}$ is even, so the coarse space of $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}$ is a double cover of the coarse space of $\operatorname{QTwist}^{n}_{U/B}$, we claim this is also a sequence of covers associated to a coefficient system for $\Sigma^{1}_{g,f}$ over the trivial coefficient system for $\Sigma^{1}_{0,0}$. Indeed, this holds because $H_{S_{\mathscr{F},H,g,f}}$ forms a coefficient system for $\Sigma^{1}_{g,f}$ and taking determinants is compatible with restricting to subsurfaces. Note that the Dickson invariant is trivial upon the restriction of $H_{S_{\mathscr{F},H,g,f}}$ along $X^{\oplus n}\to X^{\oplus n}\oplus A_{g,f}$, by 8.1.6, and so $H^{\operatorname{rk}}_{g,f}$ defines a coefficient system for $\Sigma^{1}_{g,f}$ over the trivial coefficient system for $\Sigma^{1}_{0,0}$.

Taking the tensor product of covers associated to $H$ moments and the rank double cover, we finally obtain coefficient systems associated to their fiber product.

###### Example 8.1.8.

Continuing with notation as in 8.1.7, define the tensor product of coefficient systems $H^{\operatorname{rk}}_{S_{\mathscr{F},H,g,f}}:=H^{\operatorname{rk}}_{g,f}\otimes H_{S_{\mathscr{F},H,g,f}}$, as in 3.1.12. Take $V:=H_{S_{\mathscr{F},H,0,0}}$ and $F:=H^{\operatorname{rk}}_{S_{\mathscr{F},H,g,f}}$. Then, $F$ is a coefficient system over $V$ by 3.1.12, since $H^{\operatorname{rk}}_{g,f}$ is a coefficient system over the trivial coefficient system, as in 3.1.11. Let $\operatorname{Hur}^{\operatorname{rk}}_{S_{\mathscr{F},H,g,f}}$ denote the finite covering space of $\operatorname{Conf}^{n}_{U/B}$ corresponding to the finite monodromy local system $H^{\operatorname{rk}}_{S_{\mathscr{F},H,g,f}}$, see 8.2.1.
When $n$ is even, since tensor products of coefficient systems correspond to fiber products of covers, after taking the topological space quotient of $\operatorname{Hur}^{\operatorname{rk}}_{S_{\mathscr{F},H,g,f}}$ by the conjugation action of $G_{H}$, we obtain the analytification of the finite étale cover over $\operatorname{Conf}^{n}_{U/B}$ given by $\operatorname{Hur}^{H}_{\mathscr{F}^{n}_{B}}\times_{\operatorname{Conf}^{n}_{U/B}}\operatorname{QTwist}^{\operatorname{coarse},\operatorname{rk},n}_{\mathscr{F}}$. Therefore, $\operatorname{Hur}^{\operatorname{rk}}_{S_{\mathscr{F},H,g,f}}$ is a finite covering space of $\operatorname{Hur}^{H}_{\mathscr{F}^{n}_{B}}\times_{\operatorname{QTwist}^{n}_{U/B}}\operatorname{QTwist}^{\operatorname{coarse},\operatorname{rk},n}_{\mathscr{F}}$.

### 8.2. Homological stability of the rank double cover

We next set out to prove the main homological stability properties for the spaces related to Selmer groups we are interested in. Namely, in 8.2.3 we will prove these results for the Selmer stacks, the rank double cover, and moments associated to both of these.

###### Notation 8.2.1.

Let $H$ be a finite $\mathbb{Z}/\nu\mathbb{Z}$ module of the form $H\simeq\prod_{i=1}^{m}\mathbb{Z}/\nu_{i}\mathbb{Z}$. For $\mathscr{F}$ a symplectically self-dual sheaf of $\mathbb{Z}/\nu\mathbb{Z}$ modules, and hypotheses as in 5.1.4 and 5.1.6, define

$\displaystyle\operatorname{Sel}_{\mathscr{F}^{n}_{B}}^{H}:=\operatorname{Sel}_{\mathscr{F}[\nu_{1}]^{n}_{B}}\times_{\operatorname{QTwist}^{n}_{U/B}}\operatorname{Sel}_{\mathscr{F}[\nu_{2}]^{n}_{B}}\times_{\operatorname{QTwist}^{n}_{U/B}}\cdots\times_{\operatorname{QTwist}^{n}_{U/B}}\operatorname{Sel}_{\mathscr{F}[\nu_{m}]^{n}_{B}}.$

Also define $\operatorname{Sel}^{H,\operatorname{rk}}_{\mathscr{F}^{n}_{B}}:=\operatorname{Sel}_{\mathscr{F}^{n}_{B}}^{H}\times_{\operatorname{QTwist}^{n}_{U/B}}\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}$ and define $\operatorname{Hur}^{H,\operatorname{rk}}_{\mathscr{F}^{n}_{B}}:=\operatorname{Hur}^{H}_{\mathscr{F}^{n}_{B}}\times_{\operatorname{QTwist}^{n}_{U/B}}\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}$. We use the notation $\operatorname{Hur}^{\operatorname{rk}}_{S_{\mathscr{F}^{n}_{B},H,g,f}}$ to denote the finite unramified covering space over $\operatorname{Conf}^{n}_{X^{\oplus n}\oplus A_{g,f}}$ corresponding to the kernel of the finite image representation $\pi_{1}(\operatorname{Conf}^{n}_{X^{\oplus n}\oplus A_{g,f}},x^{\oplus n})\to\operatorname{Aut}(H^{\operatorname{rk}}_{S_{\mathscr{F}^{n}_{B},H,g,f}})$.

###### Lemma 8.2.2.

The hypotheses of 4.3.4 are satisfied if $V=H_{S_{\mathscr{F},H,0,0}}$ and $F$ is either $H_{S_{\mathscr{F},H,g,f}}$ or $H^{\operatorname{rk}}_{S_{\mathscr{F},H,g,f}}$.

###### Proof.

We consider two cases:

1. (1) $V=H_{S_{\mathscr{F},H,0,0}}$ and $F=H_{S_{\mathscr{F},H,g,f}}$,

2. (2) $V=H_{S_{\mathscr{F},H,0,0}}$ and $F=H^{\operatorname{rk}}_{S_{\mathscr{F},H,g,f}}$.

Note that by 8.1.2 and 8.1.7, $V$ and $F$ are indeed coefficient systems. We will first consider case $(1)$ and show the existence of a homogeneous central $U$ in $R^{V}$ of positive degree with kernel and cokernel of finite degree. Note that $c_{H}$ typically does not generate $G_{H}=\operatorname{\mathrm{A}^{\operatorname{H}}\mathrm{Sp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ but instead generates the preimage of $\{\pm 1\}\subset\mathrm{Sp}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$ in $G_{H}$. Let $S_{H}\subset G_{H}$ denote the subgroup generated by $c_{H}$. Note that $S_{H}$ has order congruent to $2\bmod 4$ because $\nu$ is odd.
Then, $(S_{H},c_{H})$ is non-splitting in the sense of [EVW16, Definition 3.1] by [EVW16, Lemma 3.2]. It then follows from [EVW16, Lemma 3.5] that there is a homogeneous central $U$ of positive degree with finite degree kernel and cokernel.

We can deduce case (2) from case (1). Namely, taking the same operator $U$ as in part $(1)$, we can view $F=H^{\operatorname{rk}}_{S_{\mathscr{F},H,g,f}}$ as finitely many copies of $H_{S_{\mathscr{F},H,g,f}}$ (either $2^{2g}$ or $2^{2g+1}$, depending on whether $\operatorname{rk}V_{\mathscr{F}^{n}_{B}}$ is odd or even, by 8.1.5). Since we have already shown in the first case that the action of $U$ on $H_{S_{\mathscr{F},H,g,f}}$ has kernel and cokernel of finite degree, the same holds for the action of $U$ on $F=H^{\operatorname{rk}}_{S_{\mathscr{F},H,g,f}}$. ∎

###### Lemma 8.2.3.

Let $H$ be a finite $\mathbb{Z}/\nu\mathbb{Z}$ module and $B=\operatorname{Spec}\mathbb{C}$. Working over the field $\mathbb{Z}/\ell^{\prime}\mathbb{Z}$, with $\ell^{\prime}$ relatively prime to $2$, $q$, and $\#\operatorname{\mathrm{ASp}}_{2r}(\mathbb{Z}/\nu\mathbb{Z})$, there is a constant $K$ depending on $H$ but not on $n$, for $n$ even, so that

(8.2) $\displaystyle\dim H^{i}(\pi_{1}(\operatorname{Conf}^{n}_{X^{\oplus n}\oplus A_{g,f}},x^{\oplus n}),H_{S_{\mathscr{F}^{n}_{B},H,g,f}})$ $\displaystyle<K^{i+1}\text{ and}$ $\displaystyle\dim H^{i}(\pi_{1}(\operatorname{Conf}^{n}_{X^{\oplus n}\oplus A_{g,f}},x^{\oplus n}),H^{\operatorname{rk}}_{S_{\mathscr{F}^{n}_{B},H,g,f}})$ $\displaystyle<K^{i+1}.$

Suppose $\mathscr{F}$ is as in 6.4.4. Then,

(8.3) $\displaystyle\dim H^{i}(\operatorname{Sel}_{\mathscr{F}^{n}_{\mathbb{C}}}^{H},\mathbb{Z}/\ell^{\prime}\mathbb{Z})$ $\displaystyle<K^{i+1}\text{ and }$ $\displaystyle\dim H^{i}(\operatorname{Sel}^{H,\operatorname{rk}}_{\mathscr{F}^{n}_{\mathbb{C}}},\mathbb{Z}/\ell^{\prime}\mathbb{Z})$ $\displaystyle<K^{i+1}.$

###### Proof.

First, the bound (8.2) follows from 4.3.4, whose hypotheses are verified by 8.2.2. For (8.3), note that in order to bound the homology of $\operatorname{Sel}_{\mathscr{F}^{n}_{\mathbb{C}}}^{H}$, by transfer and the assumption that $\ell^{\prime}\neq 2$, it suffices to bound the homology of its finite étale double cover $\operatorname{Sel}^{H,\operatorname{rk}}_{\mathscr{F}^{n}_{\mathbb{C}}}$. (This uses that components of $\operatorname{Sel}^{H,\operatorname{rk}}_{\mathscr{F}^{n}_{\mathbb{C}}}$ are either a scheme or a $\mathbb{Z}/2\mathbb{Z}$ gerbe over a scheme, and the cohomology of such a gerbe is isomorphic to the cohomology of its coarse space.)
We use the notation $\operatorname{Hur}_{S_{\mathscr{F}^{n}_{B},H,g,f}}$ and $\operatorname{Hur}^{\operatorname{rk}}_{S_{\mathscr{F}^{n}_{B},H,g,f}}$ for the finite unramified covering spaces over $\operatorname{Conf}^{n}_{X^{\oplus n}\oplus A_{g,f}}$ corresponding to the kernels of the finite image representations $\pi_{1}(\operatorname{Conf}^{n}_{X^{\oplus n}\oplus A_{g,f}},x^{\oplus n})\to\operatorname{Aut}(H_{S_{\mathscr{F}^{n}_{B},H,g,f}})$ and $\pi_{1}(\operatorname{Conf}^{n}_{X^{\oplus n}\oplus A_{g,f}},x^{\oplus n})\to\operatorname{Aut}(H^{\operatorname{rk}}_{S_{\mathscr{F}^{n}_{B},H,g,f}})$. From the definition, we have

$\displaystyle H^{i}(\operatorname{Hur}^{\operatorname{rk}}_{S_{\mathscr{F}^{n}_{B},H,g,f}},\mathbb{Z}/\ell^{\prime}\mathbb{Z})\simeq H^{i}(\pi_{1}(\operatorname{Conf}^{n}_{X^{\oplus n}\oplus A_{g,f}},x^{\oplus n}),H^{\operatorname{rk}}_{S_{\mathscr{F}^{n}_{B},H,g,f}}).$

To conclude the final statement bounding the homology of $\operatorname{Sel}^{H,\operatorname{rk}}_{\mathscr{F}^{n}_{\mathbb{C}}}$, by transfer, it suffices to show $\operatorname{Hur}^{\operatorname{rk}}_{S_{\mathscr{F}^{n}_{B},H,g,f}}$ defines a finite étale cover of the coarse space of $\operatorname{Sel}^{H,\operatorname{rk}}_{\mathscr{F}^{n}_{\mathbb{C}}}$. We next use the isomorphism $\operatorname{Sel}_{\mathscr{F}^{n}_{\mathbb{C}}}^{H}\to\operatorname{Hur}^{H}_{\mathscr{F}^{n}_{\mathbb{C}}}$ from 6.4.8 over $\operatorname{QTwist}^{n}_{U/B}$, which also yields the identification $\operatorname{Sel}^{H,\operatorname{rk}}_{\mathscr{F}^{n}_{\mathbb{C}}}\simeq\operatorname{Hur}^{H,\operatorname{rk}}_{\mathscr{F}^{n}_{\mathbb{C}}}$. It therefore suffices to show $\operatorname{Hur}^{H,\operatorname{rk}}_{\mathscr{F}^{n}_{\mathbb{C}}}$ has the same homology as a space which admits a finite covering space by $\operatorname{Hur}^{\operatorname{rk}}_{S_{\mathscr{F}^{n}_{B},H,g,f}}$. This was shown in 8.1.8. ∎

###### Remark 8.2.4.

The Hurwitz stacks and Selmer stacks whose cohomology we analyze in 8.2.3 have (up to finite index issues) an action of $\operatorname{Mod}_{g,f}$, the mapping class group of a genus $g$, $f$-punctured surface. Hence, their stable cohomology groups are $\operatorname{Mod}_{g,f}$ representations. It would be extremely interesting to determine which representations these are. A precursor to doing so would be to compute the dimension of these representations. We also cannot rule out the possibility that these dimensions are $0$, and so the representations are not particularly interesting. See also 9.2.5.

### 8.3. Relation between the rank double cover and parity of rank

Our main reason for introducing the rank double cover is that it tells us about the parity of the rank of $\operatorname{Sel}_{\ell}$, as we next explain. For the next statement, recall the definition of $\mathcal{N}^{i}$ from 7.4.1.

###### Lemma 8.3.1.

Assume $\nu$ is odd, $n>0$ is even, and $B$ is an integral affine scheme with $2\nu$ invertible on $B$. Let $b\in B$ be a closed point with residue field $\mathbb{F}_{q_{0}}$. Let $\mathbb{F}_{q}$ be a finite extension of $\mathbb{F}_{q_{0}}$. Use hypotheses as in 5.1.4, 7.1.4, and 5.1.9, so $\mathscr{F}_{b}\simeq A[\nu]$. Let $\ell\mid\nu$ and $i:=\operatorname{rk}V_{\mathscr{F}^{n}_{b}}\bmod 2\in\{0,1\}$.
Then, for $x\in\operatorname{QTwist}^{n}_{U_{b}/b}(\mathbb{F}_{q})$, we have $\operatorname{Sel}_{\nu}(A_{x})\in\mathcal{N}^{i}$ if and only if $x$ lies in the image of $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}_{b}}(\mathbb{F}_{q})\to\operatorname{QTwist}^{n}_{U_{b}/b}(\mathbb{F}_{q})$.

###### Proof.

Let $g_{x}:=\rho_{\mathscr{F}^{n}_{b}}(\operatorname{Frob}_{x})$, and for $\ell\mid\nu$, we use $g_{x,\ell}$ to denote the image of $g_{x}$ under the map ${\rm{O}}(Q_{\mathscr{F}^{n}_{b}})\to{\rm{O}}(Q_{\mathscr{F}^{n}_{b}[\ell]})$. First, (2.1) yields

$\displaystyle\dim\ker(g_{x,\ell}-\operatorname{\mathrm{id}})\equiv\operatorname{rk}V_{\mathscr{F}^{n}_{b}[\ell]}-D_{Q_{\mathscr{F}^{n}_{b}}}(g_{x,\ell})\bmod 2.$

Next, 5.3.2 gives $\ker(g_{x}-\operatorname{\mathrm{id}})\simeq\operatorname{Sel}_{\nu}(A_{x})$, and hence $\ker(g_{x,\ell}-\operatorname{\mathrm{id}})\simeq\operatorname{Sel}_{\ell}(A_{x})$. Combining these, we find

$\displaystyle D_{Q_{\mathscr{F}^{n}_{b}}}(g_{x,\ell})\equiv\operatorname{rk}V_{\mathscr{F}^{n}_{b}[\ell]}-\dim\ker(g_{x,\ell}-\operatorname{\mathrm{id}})\equiv\operatorname{rk}V_{\mathscr{F}^{n}_{b}[\ell]}-\dim\operatorname{Sel}_{\ell}(A_{x})\bmod 2.$

Since this holds for every $\ell\mid\nu$, we find that $D_{Q_{\mathscr{F}^{n}_{b}}}(g_{x,\ell})$ takes the value $0$ if and only if $\operatorname{rk}V_{\mathscr{F}^{n}_{b}[\ell]}\equiv\dim\operatorname{Sel}_{\ell}(A_{x})\bmod 2$. Since the finite étale double cover $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}\to\operatorname{QTwist}^{n}_{U_{b}/b}$ is trivial over each $\mathbb{F}_{q}$ point with trivial Dickson invariant, $D_{Q_{\mathscr{F}^{n}_{b}}}(g_{x,\ell})$ takes the value $0$ if and only if $x\in\operatorname{QTwist}^{n}_{U_{b}/b}(\mathbb{F}_{q})$ is in the image of $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}}(\mathbb{F}_{q})$. We conclude the result because $\operatorname{rk}V_{\mathscr{F}^{n}_{b}[\ell]}\equiv\dim\operatorname{Sel}_{\ell}(A_{x})\bmod 2$ for all $\ell\mid\nu$ can be restated as $\operatorname{Sel}_{\nu}(A_{x})\in\mathcal{N}^{i}$, with $i=\operatorname{rk}V_{\mathscr{F}^{n}_{b}}\bmod 2$. ∎

We now use the previous lemma to show that the distribution of Selmer elements on the double cover controlling the parity of the rank agrees with the distribution over the locus of points on the base where the rank of $\operatorname{Sel}_{\ell}$ has the specified parity. This is a fairly simple observation, but it allows us to connect moments of the rank double cover to moments of the space of quadratic twists with specified parity of rank of $\operatorname{Sel}_{\ell}$. This plays a key role in proving our main theorem, Theorem 1.1.2. For this, recall the definition of $X^{i}_{A[\nu]^{n}_{\mathbb{F}_{q}}}$ from 7.4.1.

###### Lemma 8.3.2.

With assumptions and notation as in 8.3.1, so in particular $i:=\operatorname{rk}V_{\mathscr{F}^{n}_{b}}\bmod 2\in\{0,1\}$, we have

(8.4) $\displaystyle\frac{\#\operatorname{Sel}^{H,\operatorname{rk}}_{\mathscr{F}^{n}_{b}}(\mathbb{F}_{q})}{\#\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}_{b}}(\mathbb{F}_{q})}=\mathbb{E}(\#\mathrm{Hom}(X^{i}_{A[\nu]^{n}_{\mathbb{F}_{q}}},H)).$

###### Proof.

Using 8.3.1, the distribution $X^{i}_{A[\nu]^{n}_{\mathbb{F}_{q}}}$ agrees with the distribution of Selmer groups at points $x\in\operatorname{QTwist}^{n}_{U_{b}/b}(\mathbb{F}_{q})$ in the image of $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}_{b}}(\mathbb{F}_{q})\to\operatorname{QTwist}^{n}_{U_{b}/b}(\mathbb{F}_{q})$.
Since $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}_{b}}\to\operatorname{QTwist}^{n}_{U_{b}/b}$ is a finite étale double cover, each $\mathbb{F}_{q}$ point of $\operatorname{QTwist}^{n}_{U_{b}/b}$ in the image of an $\mathbb{F}_{q}$ point of $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}_{b}}$ has exactly two $\mathbb{F}_{q}$ points in its preimage. This means that, for $y$ varying over points of $\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}_{b}}(\mathbb{F}_{q})$ and $K\in\mathcal{N}$ a finite $\mathbb{Z}/\nu\mathbb{Z}$ module,

$\displaystyle\operatorname{Prob}(X^{i}_{A[\nu]^{n}_{\mathbb{F}_{q}}}\simeq K)$ $\displaystyle=\operatorname{Prob}\left(\operatorname{Sel}_{\nu}(A_{x})\simeq K\,|\,x\in\operatorname{im}(\operatorname{QTwist}^{\operatorname{rk},n}_{\mathscr{F}_{b}}(\mathbb{F}_{q})\to\operatorname{QTwist}^{n}_{U_{b}/b}(\mathbb{F}_{q}))\right)$ $\displaystyle=\operatorname{Prob}(\operatorname{Sel}_{\nu}(A_{y})\simeq K).$

Taking the expectation of the number of maps to $H$, which is the same as the number of maps from $H$, it is enough to show the left hand side of (8.4) is the expected number of maps from $H$ to $\operatorname{Sel}_{\nu}(A_{y})$. This follows from 5.3.2 and the definition of $\operatorname{Sel}^{H,\operatorname{rk}}_{\mathscr{F}^{n}_{b}}$ as a fiber product. ∎

## 9\. Computing the moments

The purpose of this section is to combine our homological stability results with our big monodromy results to determine the moments of Selmer groups in quadratic twist families. The analogous problem of determining the moments in the context of Cohen-Lenstra was approached in [EVW16], where the problem was much easier, as the relevant big monodromy result was already available in the literature. In § 9.1, we compute various statistics associated to kernels of random elements of orthogonal groups. Via equidistribution of Frobenius elements, we then relate this to components of Selmer stacks in § 9.2.

### 9.1. Moments related to random elements of orthogonal groups

We next compute statistics associated to random elements of orthogonal groups. In 9.1.5, we compute the distributions of $1$-eigenspaces of random elements of the orthogonal group, and show that these limit to the BKLPR distribution as the size of the matrix grows. Moreover, we show this in a strong enough sense that the limit of the moments is the moment of the limit.

Our next computation is quite analogous to that of [FLR23, Proposition 4.13], except that here we work over $\mathbb{Z}/\nu\mathbb{Z}$ for general $\nu$, instead of the case that $\nu$ is prime covered in [FLR23]. For what follows, we use the notation of [FLR23, §4.2.1]. In the case $\nu$ is prime, we let $A,B,C$ be the three nontrivial cosets of $\Omega(Q)$ in ${\rm{O}}(Q)$, so that $\operatorname{sp}^{-}_{Q}$ is nontrivial on $A$ and $C$, while $D_{Q}$ is nontrivial on $B$ and $C$. For $Z$ a random variable valued in finite dimensional $\mathbb{Z}/\ell\mathbb{Z}$ vector spaces, we let $G_{Z}(t)=\sum_{i\in\mathbb{N}}\operatorname{Prob}(\dim Z=i)t^{i}$. As in [FLR23, §4.2.1], for $\bullet\in\{\Omega,A,B,C\}$, we use $\operatorname{RSel}_{V}^{\bullet}$ to denote the random variable given as $\ker(g-\operatorname{\mathrm{id}})$ for $g$ a uniform random element of the coset $\bullet$.

###### Lemma 9.1.1.

Let $(Q,V)$ be a quadratic space over $\mathbb{Z}/\ell\mathbb{Z}$, with $\ell$ an odd prime.
When $\dim V=2s$ is even,

$\displaystyle G_{\operatorname{RSel}^{B}_{V}}$ $\displaystyle=G_{\operatorname{RSel}^{C}_{V}},$ $\displaystyle G_{\operatorname{RSel}^{\Omega}_{V}}$ $\displaystyle=G_{\operatorname{RSel}^{A}_{V}}+\frac{1}{\#\Omega(Q)}\prod_{i=0}^{s-1}(t^{2}-\ell^{2i}).$

For $a\in\mathbb{F}_{\ell}^{\times}$, let $\operatorname{sgn}(a)$ denote $1$ if $a$ is a square $\bmod\ell$ and $-1$ otherwise. When $\dim V=2s+1$ is odd,

$\displaystyle G_{\operatorname{RSel}^{B}_{V}}$ $\displaystyle=G_{\operatorname{RSel}^{C}_{V}}+\frac{2\operatorname{sgn}(-1)\ell^{s}}{\#\Omega(Q)}\prod_{i=1}^{s-1}(t^{2}-\ell^{2i}),$ $\displaystyle G_{\operatorname{RSel}^{\Omega}_{V}}$ $\displaystyle=G_{\operatorname{RSel}^{A}_{V}}+\frac{t}{\#\Omega(Q)}\prod_{i=0}^{s-1}(t^{2}-\ell^{2i}).$

###### Proof.

For the proof when $\dim V=2s$, note that [FLR23, Lemma 4.7] easily generalizes to show that for any coset $H$ of $\Omega(Q)$ in ${\rm{O}}(Q)$, $G_{\operatorname{RSel}^{H}_{V}}(\ell^{i})=G_{\operatorname{RSel}^{\Omega}_{V}}(\ell^{i})$ whenever $2i+2\leq\dim V$. When $\dim V$ is even, the proof proceeds mutatis mutandis as in [FLR23, Theorem 4.4].

Therefore, it remains to prove the case that $\dim V=2s+1$ is odd. We again proceed following the proof strategy of [FLR23, Theorem 4.4]. By 2.1.3, only even powers of $t$ can appear in $G_{\operatorname{RSel}^{B}_{V}}(t)$ and $G_{\operatorname{RSel}^{C}_{V}}(t)$. These are therefore even polynomials of degree at most $\dim V$ and agree at the $\dim V-1$ values $\pm 1,\pm\ell,\ldots,\pm\ell^{\frac{\dim V-3}{2}}$ by [FLR23, Lemma 4.5]. Since $\dim V$ is odd and the polynomials are even, the polynomials in fact have degree at most $\dim V-1$, and hence are determined up to a scalar. That is, $G_{\operatorname{RSel}^{B}_{V}}(t)-G_{\operatorname{RSel}^{C}_{V}}(t)$ is a scalar multiple of $\prod_{i=1}^{\frac{\dim V-3}{2}}(t^{2}-\ell^{2i})$. To pin that scalar multiple down, we can examine the coefficient of $t^{\dim V-1}$ in $G_{\operatorname{RSel}^{\bullet}_{V}}(t)$, for $\bullet\in\{B,C\}$. This coefficient is $\frac{\#R_{\bullet}(Q)}{\#\Omega(Q)}$, where $R_{\bullet}(Q)$ is the set of reflections in $\bullet$, since any non-identity element of the orthogonal group fixing a codimension $1$ plane is a reflection. Since there are $\ell^{2s}+\ell^{s}$ reflections with value $\alpha$ for any square $\alpha\in\mathbb{F}_{\ell}^{\times}$, and $\ell^{2s}-\ell^{s}$ reflections with value $\beta$ for any nonsquare $\beta\in\mathbb{F}_{\ell}^{\times}$, the definition of $\operatorname{sp}^{-}_{Q}$ yields that

$\displaystyle G_{\operatorname{RSel}^{B}_{V}}-G_{\operatorname{RSel}^{C}_{V}}=\frac{2\operatorname{sgn}(-1)\ell^{s}}{\#\Omega(Q)}\prod_{i=1}^{\frac{\dim V-3}{2}}(t^{2}-\ell^{2i})=\frac{2\operatorname{sgn}(-1)\ell^{s}}{\#\Omega(Q)}\prod_{i=1}^{s-1}(t^{2}-\ell^{2i}).$

Finally, the remaining two cosets satisfy the relation $G_{\operatorname{RSel}^{\Omega}_{V}}=G_{\operatorname{RSel}^{A}_{V}}+\frac{t}{\#\Omega(Q)}\prod_{i=0}^{s-1}(t^{2}-\ell^{2i})$ by an argument analogous to the last paragraph of the proof of [FLR23, Theorem 4.4]. Indeed, $G_{\operatorname{RSel}^{\Omega}_{V}}(t)$ and $G_{\operatorname{RSel}^{A}_{V}}(t)$ are two odd polynomials of degree at most $\dim V$ agreeing on the $\dim V$ values $0,\pm 1,\pm\ell,\ldots,\pm\ell^{\frac{\dim V-3}{2}}$, so their difference is divisible by $t\prod_{i=0}^{\frac{\dim V-3}{2}}(t^{2}-\ell^{2i})$, and the constant of proportionality can be determined using that the identity is the only element with a $\dim V$ dimensional fixed space.
∎

We next define a notion of $m$-total variation distance, which will be useful for proving that moments of two distributions converge, see 9.1.4.

###### Definition 9.1.2.

Let $\mathcal{N}$ denote the set of isomorphism classes of finite $\mathbb{Z}/\nu\mathbb{Z}$ modules. Let $X,Y$ be two $\mathcal{N}$ valued random variables. For $m\in\mathbb{Z}_{\geq 0}$, we define the $m$-total variation distance $d^{m}_{\operatorname{TV}}(X,Y)$ by

$\displaystyle d^{m}_{\operatorname{TV}}(X,Y):=\sum_{H\in\mathcal{N}}(\#H)^{m}\left|\operatorname{Prob}(X=H)-\operatorname{Prob}(Y=H)\right|.$

###### Remark 9.1.3.

When $m=0$, and the random variable is real valued instead of valued in $\mathcal{N}$, this is twice the usual notion of total variation distance, see [LPW09, §4.1 and Proposition 4.2]. We claim that a sequence of random
# M(otion)-mode Based Prediction of Ejection Fraction using Echocardiograms

Ece Ozkan* (✉)¹ 0000-0002-9889-6348, Thomas M. Sutter* (✉)² 0000-0001-7503-4473, Yurong Hu³ 0009-0008-8997-0543, Sebastian Balzer⁴, Julia E. Vogt² 0000-0002-6004-7770

¹ Department of Brain and Cognitive Sciences, MIT, USA
² Department of Computer Science, ETH Zurich, Switzerland
³ Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland
⁴ Department of Biosystems Science and Engineering, ETH Zurich, Switzerland

✉ Email:<EMAIL_ADDRESS><EMAIL_ADDRESS>* Shared first authorship.

###### Abstract

Early detection of cardiac dysfunction through routine screening is vital for diagnosing cardiovascular diseases. An important metric of cardiac function is the left ventricular ejection fraction (EF), where lower EF is associated with cardiomyopathy. Echocardiography is a popular diagnostic tool in cardiology, with ultrasound being a low-cost, real-time, and non-ionizing technology. However, human assessment of echocardiograms for calculating EF is time-consuming and expertise-demanding, raising the need for an automated approach. In this work, we propose using the M(otion)-mode of echocardiograms for estimating the EF and classifying cardiomyopathy. We generate multiple artificial M-mode images from a single echocardiogram and combine them using off-the-shelf model architectures. Additionally, we extend contrastive learning (CL) to cardiac imaging to learn meaningful representations by exploiting structure in unlabeled data, allowing the model to achieve high accuracy even with limited annotations. Our experiments show that the supervised setting converges with only ten modes and is comparable to the baseline method while bypassing its cumbersome training process and being computationally much more efficient. Furthermore, CL using M-mode images is helpful in limited data scenarios, such as having labels for only 200 patients, which is common in medical applications.

###### Keywords:

Echocardiography · M-mode Ultrasound · Ejection Fraction · Computer Assisted Diagnosis (CAD)

## 1 Introduction

Cardiovascular diseases (CVD) are the leading cause of death worldwide, responsible for nearly one-third of global deaths [29]. Early assessment of cardiac dysfunction through routine screening is essential, as clinical management and behavioral changes can prevent hospitalizations and premature deaths. An important metric for assessing cardiac (dys)function is the left ventricular (LV) ejection fraction (EF), which relates the LV end-systolic and end-diastolic volumes [3, 21].

Echocardiography is the most common and readily available diagnostic tool to assess cardiac function, ultrasound (US) imaging being a low-cost, non-ionizing, and rapid technology. However, the manual evaluation of echocardiograms is time-consuming, operator-dependent, and expertise-demanding. Thus, there is a clear need for an automated method to assist clinicians in estimating EF.

M(otion)-mode is a form of US in which a single scan line is emitted and received at a high frame rate through time, making it well suited to evaluating the dynamics relevant to assessing different diseases [23]. M-mode is often utilized in clinical practice, e.g., in lung ultrasonography [1, 25] or echocardiography [6, 7, 26, 10]. Since cardiac function assessment relies on heart dynamics, M-mode images can be an excellent alternative to B(rightness)-mode image- or video-based methods.
However, little effort has been directed toward exploiting M-mode images in an automated manner. Moreover, data collection and annotation are expensive for most applications, so learning from limited labeled data is critical in data-limited settings such as healthcare. To overcome this data bottleneck, self-supervised learning (SSL) methods have recently been proposed to learn meaningful high-level representations from unlabeled data [16, 24].

##### Related Work

A few existing works [14, 18] reconstruct M-mode images from B-mode videos to detect pneumothorax using CNNs. Furthermore, the authors in [27] propose an automatic landmark localization method in M-mode images. A closely related method using M-mode images in an automated manner to estimate EF is [22], which uses single M-mode images in the parasternal long-axis view to measure chamber dimensions for calculating EF. For automated EF prediction, some previous works exploit either still images [17, 31, 8] or spatio-temporal convolutions on B(rightness)-mode echocardiography videos [21]. However, still-image-based methods have a high variability [20], and video-based methods rely on a complex pipeline with larger models. Furthermore, [19] uses vision transformers and CNNs to tackle the problem of estimating the LV EF, and [15] uses geometric features of the LV derived from echocardiogram video frames to estimate EF. The authors in [28] evaluate ML-based methods in a multi-cohort setting using different imaging modalities. In the SSL setting, [5] proposes a contrastive learning framework for deep image regression, consisting of a feature learning branch via a novel adaptive-margin contrastive loss and a regression prediction branch, using echocardiography frames as input.

##### Our Contribution

We propose to extract images from readily available B-mode echocardiogram videos, each mimicking an M-mode image from a different scan line of the heart. We combine the different artificial M-mode images using off-the-shelf model architectures and estimate their EF to diagnose cardiomyopathy in a supervised regime. Using M-mode images allows the model to naturally observe the motion and sample the heart from different angles while bypassing cumbersome 3D models. Secondly, we propose an alternative scheme for predicting EF using generated M-mode images in a self-supervised fashion, extending contrastive learning. We design a problem-specific contrastive loss for M-mode images to learn representations with structure and patient awareness. We evaluate both regimes on the publicly available EchoNet-Dynamic dataset [20] and demonstrate both models' effectiveness. To the best of our knowledge, this is the first image-based cardiac function prediction method that incorporates temporal information to estimate EF. Furthermore, our method can easily be applied to other problems where cardiac dynamics play an essential role in the diagnosis. To ensure reproducibility, we make the code available: https://github.com/thomassutter/mmodeecho.

## 2 Methods

This work aims to create a pipeline with as little intervention as possible; thus, our method consists of two parts, as shown in Figure 1. The first part extracts M-mode images from readily available B-mode videos. The second part learns representations from the generated M-mode images, i.e., lower-level features that preserve information of the input image, and uses them to predict EF in two schemes: supervised and self-supervised learning.

Figure 1: Overview of our proposed method.
(a) Generate M-mode images from B-mode echocardiography videos at different scan lines. (b) Learn representations from the generated M-mode images using supervised and self-supervised learning schemes. (c) Evaluate EF prediction to diagnose cardiomyopathy.

### 2.1 From B-mode Videos to M-mode Images

Assume our dataset contains $N$ patients. For each patient $i\in\{1,2,\cdots,N\}$, the label $y_{i}$ indicates its EF. Furthermore, the B-mode echocardiogram video of each patient $i$ is given of size $h\times w\times t$, with $h$ being the height, $w$ the width, and $t$ the number of frames of the video. The $m$-th M-mode image of patient $i$ is given as $\boldsymbol{x}_{i}^{m}$ with $m\in\{1,2,\cdots,M\}$. It is a single line of pixels through the center of the image at an angle $\theta_{m}$, traced over the frames, assuming the LV is around the center throughout the video, as in Figure 1(a). This image, corresponding to $\theta_{m}$, is then of size $s_{m}\times t$, with $s_{m}$ the length of the scan line. For simplicity, we set $s_{m}=h$ for all $m$, independent of the angle $\theta_{m}$. For generating multiple M-mode images, a set of $M$ angles $\boldsymbol{\theta}=[\theta_{1},\ldots,\theta_{M}]$ is used to generate $M$ M-mode images, where the angles $\boldsymbol{\theta}$ are equally spaced between $0^{\circ}$ and $180^{\circ}$ (see the code sketch below).

While the proposed approach for generating M-mode images is intuitive and works well (see Section 3.3), other approaches are also feasible. For instance, the center of rotation, placed in the middle of the image in our M-mode generation process, could be changed. Like that, we could mimic the behavior of the data collection process, as every generated M-mode image would resemble a scan line of the US probe. However, the main goal of this work is to highlight the potential of M-mode images for the analysis of US videos. Given our convincing results, we leave the exploration of different M-mode generation mechanisms for future work.

### 2.2 Learning Representations from M-mode Images

#### 2.2.1 Supervised Learning for EF Prediction

We aim to learn supervised representations using off-the-shelf model architectures to estimate EF. Instead of using a single M-mode image, one can aggregate the information of M-mode images from the same patient to increase robustness. We evaluate two fusion methods for aggregating information among the $M$ M-mode images: early fusion and late fusion [2]. With early fusion, we construct an $M\times s\times t$ image with the $M$ M-mode images being the $M$ channels of the newly created image. For late fusion, we evaluate three different methods. In all of the late-fusion schemes, we first infer an abstract representation $\boldsymbol{z}_{i}^{m}$ for every M-mode image $\boldsymbol{x}_{i}^{m}$. The representations $\boldsymbol{z}_{i}^{m}$ are then aggregated to a joint representation $\boldsymbol{\tilde{z}}_{i}$ using an LSTM cell [11], averaging, or concatenating. We utilize a standard ResNet architecture [9] with 2D-convolutional layers independent of the fusion principle. With 2D-convolutions, we treat a single M-mode image as a 2D gray-scale image with the two axes $s$ and $t$.

#### 2.2.2 Self-Supervised Learning for EF Prediction

This part aims to learn meaningful representations from unlabeled data to estimate EF using echocardiograms.
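As referenced in § 2.1, the generation step can be written in a few lines of numpy. The following is an illustrative sketch under our own assumptions (clips stored as $(t, h, w)$ arrays, nearest-neighbour sampling, and a half-open angle grid so that $0^{\circ}$ and $180^{\circ}$ do not duplicate a scan line); it is not the released implementation:

```python
import numpy as np

def extract_mmode_images(video: np.ndarray, num_modes: int = 10) -> np.ndarray:
    """Sample `num_modes` artificial M-mode images from one B-mode clip.

    video: array of shape (t, h, w), the frames of one echocardiogram.
    Returns an array of shape (num_modes, h, t): for each angle, the same
    scan line through the image center is gathered from every frame.
    """
    t, h, w = video.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # M angles equally spaced in [0, 180) degrees, as in Section 2.1.
    angles = np.deg2rad(np.linspace(0.0, 180.0, num_modes, endpoint=False))
    # Signed offsets along the scan line; its length is s_m = h for all m.
    offsets = np.linspace(-(h - 1) / 2.0, (h - 1) / 2.0, h)
    mmodes = np.empty((num_modes, h, t), dtype=video.dtype)
    for m, theta in enumerate(angles):
        # Nearest-neighbour pixel coordinates of the rotated scan line.
        ys = np.clip(np.round(cy + offsets * np.sin(theta)), 0, h - 1).astype(int)
        xs = np.clip(np.round(cx + offsets * np.cos(theta)), 0, w - 1).astype(int)
        mmodes[m] = video[:, ys, xs].T  # gather per frame: (t, h) -> (h, t)
    return mmodes
```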
For the SSL scheme itself, we build on contrastive learning: M-mode images from the same patient can naturally serve as positive pairs, since they share labels for many downstream tasks. As discussed by [30], bio-signal data is inherently highly heterogeneous; thus, when applying learning-based methods to patient data, we need to consider both the similarity and the difference between samples originating from the same patient. Thus, we propose a problem-specific contrastive loss with patient and structure awareness, as shown in Figure 2.

Figure 2: Overview of our proposed SSL method. The contrastive loss includes (a) patient awareness, to attract similarity between data from the same patient while discouraging it between different patients, and (b) structure awareness, to take the (possible) dissimilarity within the same patient into account.

##### Contrastive Learning Framework

The framework contains training and evaluation stages; an overview is illustrated in Figure 3. In the training stage, we optimize the model with the contrastive loss, leveraging the information from underlying structures of the unlabeled images. In the evaluation stage, a multilayer perceptron (MLP) head is trained on top of the learned representations in a supervised manner.

For each generated M-mode image $\boldsymbol{x}_{i}^{m}$, we generate its augmented view $\boldsymbol{x}_{i}^{v(m)}$ using the $Aug(\cdot)$ module, so the augmented dataset is represented as $\{(\boldsymbol{x}_{i}^{m},\ \boldsymbol{x}_{i}^{v(m)},\ y_{i})\}$. The encoder network $Enc(\cdot)$ maps each image $\boldsymbol{x}_{i}^{m}$ to a feature vector $\boldsymbol{z}_{i}^{m}$. We utilize a standard ResNet architecture [9]. In the training stage, $\boldsymbol{z}_{i}^{m}$ is normalized to the unit hyper-sphere before being passed to the projection network. Following [4], we introduce a learnable non-linear projection network between the representation and the contrastive loss. The projection network $Proj(\cdot)$ takes the normalized lower-level representation $\boldsymbol{z}_{i}^{m}$ as input and outputs the higher-level representation $\boldsymbol{p}_{i}^{m}$. We use a two-layer MLP with ReLU activation as $Proj(\cdot)$ in this work.

In the evaluation stage, we initialize the parameters of the encoder network $Enc(\cdot)$ with the model obtained from contrastive learning and add an MLP head $Head(\cdot)$ on top. For each patient $i$, we have $M$ feature vectors $\boldsymbol{z}_{i}^{m}\in\mathbb{R}^{K}$. The $M$ vectors are then fused to get the joint representation $\boldsymbol{\tilde{z}}_{i}\in\mathbb{R}^{K\times M}$ and passed to $Head(\cdot)$. One can use different fusion methods for aggregating information among the $M$ vectors, e.g., an LSTM cell [11], averaging, or concatenating.

Figure 3: Schema of the contrastive learning framework with training and evaluation stages. The training stage exploits the contrastive loss to learn a representation leveraging the unlabeled images. The evaluation stage exploits these learned representations in a supervised manner to predict EF.

##### Contrastive Loss for M-mode Images

To account for (dis)similarities, we design two loss functions for learning both patient- and structure-awareness. (a) Patient-aware loss: The goal is to attract the representations from the same patient to be similar while pushing apart representations from different patients (see Figure 2 (a)).
This enforces two M-mode images to be considered similar if they are from the same patient and dissimilar if they are from different patients. The patient-aware loss is given as:

(1) $L^{PA}=-\frac{1}{M-1}\sum_{i=1}^{N}\sum_{m=1}^{M}\sum_{l\neq m}\log\frac{\exp(\boldsymbol{p}_{i}^{m}\cdot\boldsymbol{p}_{i}^{l}/\tau)}{\sum_{j,k}\exp(\boldsymbol{p}_{i}^{m}\cdot\boldsymbol{p}_{j}^{k}/\tau)-\exp(\boldsymbol{p}_{i}^{m}\cdot\boldsymbol{p}_{i}^{m}/\tau)}$

where $N$ is the number of patients in one batch, $M$ is the number of original M-mode images used for each patient, and $\tau$ is the temperature scaling parameter. The term $\boldsymbol{p}_{i}^{m}$ represents the output of $Proj(\cdot)$. Inspired by [30], we tried defining a neighborhood function to limit the similarity of M-mode images from the same patient. However, incorporating neighborhood into patient-awareness did not further improve the results; thus, we used all M-mode images per patient to define the patient-aware loss.

(b) Structure-aware loss: If we only use the patient-aware loss $L^{PA}$, there exists a risk that all images from the same patient collapse to a single point [30]. So we propose the structure-aware loss to introduce some diversity (see Figure 2 (b)). To incorporate this into the learned representations, we construct positive pairs from each M-mode image with its augmentation and consider other combinations as negative pairs. It is then defined as:

(2) $L^{SA}=-\sum_{i=1}^{N}\sum_{m=1}^{2M}\log\frac{\exp(\boldsymbol{p}_{i}^{m}\cdot\boldsymbol{p}_{i}^{v(m)}/\tau)}{\sum_{l\neq m}\exp(\boldsymbol{p}_{i}^{m}\cdot\boldsymbol{p}_{i}^{l}/\tau)}$

If image $m$ is an original image, then $v(m)$ represents its augmented view; if image $m$ is an augmented image, then $v(m)$ represents the original image. Minimizing $L^{SA}$ drives the representation pairs from the augmented images in the numerator close while pushing the representations in the denominator far away, where the denominator contains M-mode images from the same patient.

Finally, we combine the two losses to get the structure- and patient-aware contrastive loss for M-mode images, using the hyperparameter $\alpha$ to control the trade-off between the awareness terms:

(3) $L^{CL}=\alpha L^{PA}+(1-\alpha)L^{SA}.$

(A code sketch of this combined loss is given at the end of § 3.2.)

## 3 Experiments and Results

### 3.1 Dataset

We use the publicly available EchoNet-Dynamic dataset [20]. It contains $10^{\prime}030$ apical-4-chamber echocardiography videos from individuals who underwent imaging between 2016-2018 as part of routine clinical care at Stanford University Hospital. Each B-mode video was cropped and masked to remove information outside the scanning sector and downsampled into standardized $112\times 112$ pixel videos. For simplicity, we use videos with at least 112 frames. We use the official splits with $7465$ training, $1289$ validation, and $1282$ test set samples.

### 3.2 Experimental Setup

We evaluate the models' performance using classification accuracy for five random seeds and report the mean performance and standard deviation. During training, all supervised models optimize the estimation of EF as a regression task. For testing, we use a constant threshold $\tau$ on the predicted EF for classifying cardiomyopathy. In all experiments, we set $\tau=0.5$. Hence, an EF estimate below $0.5$ results in classifying a sample as cardiomyopathic.
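Returning to the losses of § 2.2.2, as referenced after (3), the following is a minimal PyTorch-style sketch of the combined loss. It is our own illustration rather than the released implementation: we assume the projections are already L2-normalized and arrive as two $(N, M, D)$ tensors (originals and augmented views), and we reuse the defaults $\tau=0.01$ and $\alpha=0.8$ from Table 1:

```python
import torch

def mmode_contrastive_loss(p_orig, p_aug, tau=0.01, alpha=0.8):
    """Minimal sketch of L^CL = alpha * L^PA + (1 - alpha) * L^SA (Eq. 3).

    p_orig, p_aug: (N, M, D) projections of the M original M-mode images per
    patient and of their augmented views, assumed L2-normalized.
    """
    N, M, D = p_orig.shape
    # ---- Patient-aware loss (Eq. 1), computed on the originals only. ----
    flat = p_orig.reshape(N * M, D)
    sim = flat @ flat.T / tau                         # pairwise similarities
    exp_sim = sim.exp()
    # Denominator: sum over all patients and images minus the self term.
    denom = exp_sim.sum(dim=1) - exp_sim.diagonal()
    patient = torch.arange(N).repeat_interleave(M)
    positives = (patient[:, None] == patient[None, :]) \
        & ~torch.eye(N * M, dtype=torch.bool)         # same patient, l != m
    log_ratio = sim - denom.log()[:, None]
    loss_pa = -log_ratio[positives].sum() / (M - 1)
    # ---- Structure-aware loss (Eq. 2), over all 2M images per patient. ----
    both = torch.cat([p_orig, p_aug], dim=1)          # (N, 2M, D)
    sim2 = torch.einsum('nid,njd->nij', both, both) / tau
    exp2 = sim2.exp()
    denom2 = exp2.sum(dim=2) - exp2.diagonal(dim1=1, dim2=2)  # exclude l = m
    # Numerator pairs image m with its view v(m), i.e. index m shifted by M.
    pos = sim2.diagonal(offset=M, dim1=1, dim2=2)     # (N, M): m vs. v(m)
    pos = torch.cat([pos, pos], dim=1)                # symmetric for views
    loss_sa = -(pos - denom2.log()).sum()
    return alpha * loss_pa + (1 - alpha) * loss_sa
```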
We evaluate all models using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) with respect to whether a patient is correctly classified as healthy or cardiomyopathic. Additionally, we report the mean absolute error (MAE) and the root mean squared error (RMSE) of the predicted EF with respect to the true EF in the Supplementary Material. We report the mean performance, including standard deviations over five random seeds, for all results. We use the training set from EchoNet for pre-training (SSL) and apply a linear learning rate scheduler during the first 30 epochs as warm-up. For the supervised fine-tuning, we select different proportions of the training set in the limited labeled data scenario. All M-mode models are trained for $100$ epochs using the Adam optimizer [12] with an initial learning rate of $0.001$ and a batch size of $64$. For image augmentation, we apply random horizontal flips and Gaussian noise. For fusing the M-mode representations, we use concatenation. For the EchoNet model, we use the same model and parameters as in [21]. The model is trained for $45$ epochs with a learning rate of $0.0001$ and a batch size of $20$. We do not use test-time augmentation for any of the models. We report the full set of hyperparameters used in our experiments in Table 1.

Table 1: Hyperparameters used in our experiments. We use the same hyperparameters for the E2E setup and for the fine-tuning stage of the SSL setup (suffix "_sup"); the suffix "_cl" denotes the hyperparameters used in the SSL pre-training stage.

Parameter | Value | Description
---|---|---
lr_sup | $0.001$ | learning rate for supervised training
lr_cl | $1.0$ | learning rate for SSL training
opt | Adam | optimizer for SSL and supervised training
bsz_sup | $64$ | batch size for supervised training
bsz_cl | $256$ | batch size for SSL training
epoch_sup | $100$ | epochs for supervised training
epoch_cl | $300$ | epochs for SSL training
epoch_warm | $30$ | warm-up epochs for SSL training
$\alpha$ | $0.8$ | loss trade-off
$\tau$ | $0.01$ | temperature scaling
Dim_e | $512$ | $Enc(\cdot)$ output dimension
Dim_ph | $2048$ | $Proj(\cdot)$ hidden layer dimension
Dim_po | $128$ | $Proj(\cdot)$ output dimension
Dim_lstm | $256$ | LSTM output dimension

### 3.3 Results and Discussion

#### 3.3.1 Evaluating M-mode Images in the Supervised Setting

We train and evaluate models with different numbers of M-modes, $M\in\{1,2,5,10,20,50\}$. We use the complete training set, including labels, as we are interested in the performance of the models depending on the number of available M-modes. Figure 4 shows the results for different numbers of M-modes. We see that late-fusion models benefit from an increasing number of modes, whereas the early-fusion method overfits quickly and never achieves comparable performance.

Figure 4: Performance for different numbers of M-mode images using early- and late-fusion methods. We evaluate the classification performance with respect to AUPRC in (a) and AUROC in (b), and the regression performance with respect to RMSE in (c), MAE in (d), and the $R^{2}$-score in (e).

#### 3.3.2 Evaluating the Limited Data Regime

We evaluate the accuracy of the different models introduced in Section 2 for different amounts of labeled training samples. As most medical datasets do not have the size of EchoNet-Dynamic [13], methods for medical machine learning should also perform well in the limited labeled data regime. We use _E2E_ for the supervised and _CL_ for the self-supervised setting. Additionally, we introduce _E2E+_ and _CL+_, which, inspired by EchoNet [21], use random short clips for each training epoch. Both models use M-mode images of 32 frames with a sampling period of 2. We train and evaluate models using $p\%$ of the full training set for $p\in\{1,2,3,5,10,20,30,50,75,100\}$. All M-mode methods are trained with $M=10$. Figure 5 shows the limited labeled data experiment results. Although we are not able to reach the performance of the EchoNet model for any number of modes (see Figure 4(b)) when the number of labeled training samples is high (see Figure 5(a)), both supervised and self-supervised learning methods using M-mode instead of B-mode can outperform the EchoNet model in the low labeled data regime ($p<5\%$, Figure 5(b)). Also, we observe that using shorter clips is useful for the self-supervised learning methods, with _CL+_ achieving an AUROC above $0.85$ with only around $200$ labeled samples.

Figure 5: Results for different training set sizes using the proposed end-to-end supervised (E2E) and contrastive learning (CL) approaches. In (a), we train and evaluate the models on 10%-100% of the labeled training samples, in (b) only on 1%-10% of the samples. E2E and CL models are trained using a fixed long clip of length 112; E2E+ and CL+ are trained using random short clips of length 32. CL freeze and CL+ freeze are fine-tuned with the encoder parameters frozen.

#### 3.3.3 Computational Cost

Furthermore, we compare the number of parameters and the computational costs of the different models in Table 2, using a multi-GPU setup with four NVIDIA GeForce RTX 2080 Ti GPUs. We report the computation time in seconds per batch (sec/B) and milliseconds per sample (msec/sample), and the memory requirements in gigabytes per batch (GB/B). Our proposed M-mode image-based models require around six times less time and ten times less memory per sample to train and run inference. Given the used memory per batch, we could increase the batch size for the M-mode methods, lowering the computation time per sample even further, whereas the baseline model is already at the limit due to its architecture.

Table 2: Computational costs. We evaluate EchoNet and the proposed M-mode methods with respect to the number of parameters, the computation time, and the memory requirements. All M-mode models are evaluated using $M=10$. E2E denotes the end-to-end supervised and CL the contrastive learning approach.

| | | Time (sec/B) | Time (msec/sample) | Memory (GB/B)
Model | BS | #Params (Mio.) | Train | Test | Train | Test | Train | Test
---|---|---|---|---|---|---|---|---
EchoNet | 20 | 31.5 | 2.898 | 2.474 | 144.9 | 123.7 | 5.294 | 1.187
E2E & CL | 64 | 11.7 | 1.568 | 1.330 | 24.5 | 21.1 | 1.013 | 0.120

## 4 Discussion and Conclusion

In this work, we propose to generate M-mode images from readily available B-mode echocardiography videos and fuse them to estimate EF and, thus, cardiac dysfunction. Our results show that M-mode-based prediction methods are comparable to the baseline method while avoiding its complex training routine and reducing the computational cost and the need for expensive expert input.
Conventional M-mode images have a very high sampling rate, which results in a high temporal resolution so that even very rapid motion can be recorded. The generated M-mode images have significantly less temporal resolution than the conventional M-mode images from US machines. However, our results indicate that exploiting generated M-mode images does not limit the performance for EF estimation. As we do not use the M-mode images collected directly from the US machines, there is no need for an additional data collection step. Additionally, we show the potential of pre-trained methods. In scenarios where expensive expert labels are not readily available, pre-training using unlabeled M-mode images outperforms more complicated pipelines highlighting the potential of M-Mode based pipelines for clinical use cases. In our future work, we want to investigate the use cases for M-mode on different diseases and further improve the performance of the proposed pre-training pipeline. ## 5 Acknowledgements EO was supported by the SNSF grant P500PT-206746 and TS by the grant 2021-911 of the Strategic Focal Area “Personalized Health and Related Technologies (PHRT)” of the ETH Domain (Swiss Federal Institutes of Technology). ## References * [1] Avila, J., Smith, B., Mead, T., Jurma, D., Dawson, M., Mallin, M., Dugan, A.: Does the Addition of M-Mode to B-Mode Ultrasound Increase the Accuracy of Identification of Lung Sliding in Traumatic Pneumothoraces? Journal of Ultrasound in Medicine 37(11), 2681–2687 (2018) * [2] Baltrušaitis, T., Ahuja, C., Morency, L.P.: Multimodal machine learning: A survey and taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence 41(2), 423–443 (2018) * [3] Bamira, D., Picard, M.H.: Imaging: Echocardiology–Assessment of Cardiac Structure and Function. In: Encyclopedia of Cardiovascular Research and Medicine, pp. 35–54. Elsevier (2018) * [4] Chen, T., Kornblith, S., Norouzi, M., Hinton, G.: A simple framework for contrastive learning of visual representations. In: International conference on machine learning. pp. 1597–1607 (2020) * [5] Dai, W., Li, X., Chiu, W.H.K., Kuo, M.D., Cheng, K.T.: Adaptive Contrast for Image Regression in Computer-Aided Disease Assessment. IEEE Transactions on Medical Imaging 41(5), 1255–1268 (5 2022) * [6] Devereux, R.B., Lutas, E.M., Casale, P.N., Kligfield, P., Eisenberg, R.R., et al.: Standardization of M-mode echocardiographic left ventricular anatomic measurements. J Am Coll Cardiol. 4(6), 1222–1230 (1984) * [7] Gaspar, H.A., Morhy, S.S., Lianza, A.C., de Carvalho, W.B., Andrade, J.L., et al.: Focused cardiac ultrasound: a training course for pediatric intensivists and emergency physicians. BMC Medical Education 14(1) (2014) * [8] Ghorbani, A., Ouyang, D., Abid, A., He, B., Chen, J.H., Harrington, R.A., et al.: Deep learning interpretation of echocardiograms. npj Dig Med (2020) * [9] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770–778 (2016) * [10] Hensel, K.O., Roskopf, M., Wilke, L., Heusch, A.: Intraobserver and interobserver reproducibility of M-mode and B-mode acquired mitral annular plane systolic excursion (MAPSE) and its dependency on echocardiographic image quality in children. PLOS ONE 13(5), e0196614 (2018) * [11] Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735–1780 (1997) * [12] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. 
arXiv preprint arXiv:1412.6980 (2014) * [13] Kiryati, N., Landau, Y.: Dataset growth in medical image analysis research. Journal of imaging 7(8), 155 (2021) * [14] Kulhare, S., Zheng, X., Mehanian, C., Gregory, C., Zhu, M., Gregory, K., et al.: Ultrasound-Based Detection of Lung Abnormalities Using Single Shot Detection Convolutional Neural Networks. In: Simulation, Image Processing, and Ultrasound Systems for Assisted Diagnosis and Navigation (2018) * [15] Lagopoulos, A., Hristu-Varsakelis, D.: Measuring the Left Ventricular Ejection Fraction using Geometric Features. In: IEEE International Symposium on Computer-Based Medical Systems. pp. 1–6. IEEE (7 2022) * [16] LeCun, Y., Misra, I.: Self-supervised learning: The dark matter of intelligence. Meta AI 23 (2021) * [17] Madani, A., Ong, J.R., Tibrewal, A., Mofrad, M.R.K.: Deep echocardiography: data-efficient supervised and semi-supervised deep learning towards automated diagnosis of cardiac disease. npj Digital Medicine 1(1) (2018) * [18] Mehanian, C., Kulhare, S., Millin, R., Zheng, X., Gregory, C., Zhu, M., et al.: Deep Learning-Based Pneumothorax Detection in Ultrasound Videos. pp. 74–82 (2019) * [19] Muhtaseb, R., Yaqub, M.: EchoCoTr: Estimation of the LV Ejection Fraction from Spatiotemporal Echocardiography. pp. 370–379 (2022) * [20] Ouyang, D., He, B., Ghorbani, A., Lungren, M.P., Ashley, E.A., et al.: Echonet-dynamic: a large new cardiac motion video data resource for medical machine learning. In: NeurIPS ML4H Workshop (2019) * [21] Ouyang, D., He, B., Ghorbani, A., Yuan, N., Ebinger, J., Langlotz, C.P., et al.: Video-based AI for beat-to-beat assessment of cardiac function. Nature 580(7802), 252–256 (2020) * [22] Sarkar, P.G., Chandra, V.: A Novel Approach for Detecting Abnormality in Ejection Fraction Using Transthoracic Echocardiography with Deep Learning. Int J of Online and Biomed Eng 16(13), 99 (2020) * [23] Saul, T., Siadecki, S.D., Berkowitz, R., Rose, G., Matilsky, D., Sauler, A.: M-Mode Ultrasound Applications for the Emergency Medicine Physician. The Journal of Emergency Medicine 49(5), 686–692 (2015) * [24] Shurrab, S., Duwairi, R.: Self-supervised learning methods and applications in medical imaging analysis: A survey. PeerJ Comp Sci 8, e1045 (2022) * [25] Singh, A.K., Mayo, P.H., Koenig, S., Talwar, A., Narasimhan, M.: The Use of M-Mode Ultrasonography to Differentiate the Causes of B Lines. Chest 153(3), 689–696 (2018) * [26] Skinner, H., Kamaruddin, H., Mathew, T.: Tricuspid Annular Plane Systolic Excursion: Comparing Transthoracic to Transesophageal Echocardiography. Journal of Cardiothoracic and Vascular Anesthesia 31(2), 590–594 (2017) * [27] Tian, Y., Xu, S., Guo, L., Cong, F.: A Periodic Frame Learning Approach for Accurate Landmark Localization in M-Mode Echocardiography. In: IEEE International Conference on Acoustics, Speech and Signal Processing (2021) * [28] Tromp, J., Seekings, P.J., Hung, C.L., Iversen, M.B., Frost, M.J., et al.: Automated interpretation of systolic and diastolic function on the echocardiogram: a multicohort study. The Lancet Digital Health 4(1) (1 2022) * [29] WHO: Cardiovascular diseases (CVDs). https://www.who.int/news-room/fact-sheets/detail/cardiovascular-diseases-(cvds) (2022) * [30] Yèche, H., Dresdner, G., Locatello, F., Hüser, M., Rätsch, G.: Neighborhood contrastive learning applied to online patient monitoring. In: International Conference on Machine Learning. pp. 
11964–11974 (2021) * [31] Zhang, J., Gajjala, S., Agrawal, P., Tison, G.H., Hallock, L.A., et al.: Fully Automated Echocardiogram Interpretation in Clinical Practice. Circulation 138(16), 1623–1635 (2018)
# Flexible Sampling of Discrete Scale Invariant Markov Processes: Covariance and Spectrum

N. Modarresi and S. Rezakhah

Faculty of Mathematics and Computer Science, Amirkabir University of Technology, 424 Hafez Avenue, Tehran 15914, Iran.

###### Abstract

In this paper we consider a flexible discrete sampling scheme for a discrete scale invariant process $\{X(t),t\in{\bf R^{+}}\}$ with scale $l>1$. The scheme takes $q$ samples at arbitrary points ${\bf s}_{0},{\bf s}_{1},\ldots,{\bf s}_{q-1}$ in the interval $[1,l)$ and proceeds with the sampling in the intervals $[l^{n},l^{n+1})$ at the points $l^{n}{\bf s}_{0},l^{n}{\bf s}_{1},\ldots,l^{n}{\bf s}_{q-1}$, $n\in{\bf Z}$. We thus obtain a discrete time scale invariant (DT-SI) process and introduce an embedded DT-SI process as $W(nq+k)=X(l^{n}{\bf s}_{k})$, $q\in{\bf N}$, $k=0,\ldots,q-1$. We also consider $V(n)=\big(V^{0}(n),\ldots,V^{q-1}(n)\big)$, where $V^{k}(n)=W(nq+k)$, as an embedded $q$-dimensional discrete time self-similar (DT-SS) process. By introducing a quasi Lamperti transformation, we find the spectral representation of such a process and give its spectral density matrix. Finally, by imposing the wide sense Markov property on $W(\cdot)$ and $V(\cdot)$, we show that the spectral density matrix of $V(\cdot)$ and the spectral density function of $W(\cdot)$ can be characterized by $\{R_{j}(1),R_{j}(0),\ j=0,\ldots,q-1\}$, where $R_{j}(k)=E[W(j+k)W(j)]$.

AMS 2000 Subject Classification: 60G18, 62M15.

Keywords: Discrete scale invariance; Wide sense Markov; Multi-dimensional self-similar processes.

## 1 Introduction

The concepts of stationarity and self-similarity are fundamental tools for handling many natural phenomena. The Lamperti transformation defines a one-to-one correspondence between stationary and self-similar processes. A discrete scale invariance (DSI) process can be defined as the Lamperti transform of a periodically correlated (PC) process. Many critical systems, e.g. in statistical physics, geophysical textures, network traffic, and image processing, can be described by these processes [1]. The Fourier transform is known as a well-suited representation for stationarity, but not for self-similarity. A harmonic-like representation of self-similar processes has been introduced using the Mellin transform [4]. A process which is both Markov and self-similar is called a self-similar Markov process. Such processes are involved in various parts of probability theory, such as branching processes and fragmentation theory [2]. The present authors considered DSI processes in the wide sense with some scale $l>1$. They proposed to take some fixed number of samples, say $T$, in each scale at the points $\alpha^{k}$, $k\in{\bf Z}$, where $l=\alpha^{T}$, $T\in{\bf N}$. Such sampling provides a discrete time scale invariant process in the wide sense, for which a closed formula for the covariance function was found [6]. In this paper we consider $X(\cdot)$ to be a DSI process with scale $l>1$, sampled at arbitrary points $1\leqslant{\bf s}_{0}<{\bf s}_{1}<\ldots<{\bf s}_{q-1}<l$ in the interval $[1,l)$. We also take our samples at the points $l^{n}{\bf s}_{0},l^{n}{\bf s}_{1},\ldots,l^{n}{\bf s}_{q-1}$, $n\in{\bf Z}$, in the intervals $[l^{n},l^{n+1})$.
We then introduce the discrete time embedded scale invariant (DT-ESI) process $W(nq+k)=X(l^{n}{\bf s}_{k})$, $q\in{\bf N}$, $k=0,\ldots,q-1$, and the corresponding multi-dimensional discrete time embedded self-similar (DT-ESS) process $V(n)=\big(V^{0}(n),\ldots,V^{q-1}(n)\big)$, where $V^{k}(n)=W(nq+k)$. We investigate the properties of these processes when they are also Markov in the wide sense.

This paper is organized as follows. In Section 2 we review multi-dimensional stationary, periodically correlated, self-similar and discrete scale invariant processes. We then define discrete time self-similar (DT-SS) and discrete time scale invariant (DT-SI) processes, and introduce the quasi Lamperti transformation. Section 3 is devoted to the structure of the multi-dimensional DT-SS process resulting from the above method of sampling. We define the DT-ESI process and the corresponding multi-dimensional DT-ESS process and characterize its spectral density matrix. Finally, the covariance function and spectral density matrix of the discrete time embedded scale invariant Markov (DT-ESIM) process and of the corresponding multi-dimensional discrete time embedded self-similar Markov (DT-ESSM) process are obtained in Section 4.

## 2 Theoretical framework

This section is organized in three subsections. First we review the structure of the covariance function and spectral distribution matrix of multi-dimensional stationary processes. We present the definitions of DT-SS, DT-SI, wide sense self-similar and wide sense scale invariant processes in subsection 2.2. In subsection 2.3 we define the quasi Lamperti transformation and present its properties, which provide a one-to-one correspondence between DT-SS and discrete time stationary processes, and also between DT-SI and DT-PC processes.

### 2.1 Stationary and multi-dimensional stationary processes

###### Definition 2.1

A process $\{Y(t),t\in{\bf R}\}$ is said to be stationary if, for any $t,\tau\in{\bf R}$,

$\{Y(t+\tau)\}\stackrel{d}{=}\{Y(t)\}$ (2.1)

where $\stackrel{d}{=}$ denotes the equality of all finite-dimensional distributions. If $(2.1)$ holds for some $\tau\in{\bf R}$, the process is said to be periodically correlated. The smallest such $\tau$ is called the period of the process.

By Rozanov [8], if $Y(t)=\{Y^{k}(t)\}_{k=1,\ldots,n}$ is an $n$-dimensional stationary process, then

$Y(t)=\int e^{i\lambda t}\phi(d\lambda)$ (2.2)

is its spectral representation, where $\phi=\{\varphi_{k}\}_{k=1,\ldots,n}$ and $\varphi_{k}$ is the random spectral measure associated with the $k$th component $Y^{k}$ of the $n$-dimensional process $Y$. Let $B_{kr}(\tau)=E[Y^{k}(\tau+t)\overline{Y^{r}(t)}]$, $k,r=1,\ldots,n$, and let $B(\tau)=[B_{kr}(\tau)]_{k,r=1,\ldots,n}$ be the correlation matrix of $Y$. The components of the correlation matrix of the process $Y$ can be represented as

$B_{kr}(\tau)=\int e^{i\lambda\tau}F_{kr}(d\lambda),\quad k,r=1,\ldots,n$ (2.3)

where, for any Borel set $\Delta$, $F_{kr}(\Delta)=E[\varphi_{k}(\Delta)\overline{\varphi_{r}(\Delta)}]$ are complex valued set functions which are $\sigma$-additive and have bounded variation. For any $k,r=1,\ldots,n$, if the sets $\Delta$ and $\Delta^{\prime}$ do not intersect, then $E[\varphi_{k}(\Delta)\overline{\varphi_{r}(\Delta^{\prime})}]=0$.
For any interval $\Delta=(\lambda_{1},\lambda_{2})$ with $F_{kr}(\{\lambda_{1}\})=F_{kr}(\{\lambda_{2}\})=0$, the following relation holds:

$F_{kr}(\Delta)=\frac{1}{2\pi}\int_{\Delta}\sum_{\tau=-\infty}^{\infty}B_{kr}(\tau)e^{-i\lambda\tau}d\lambda$ (2.4)

$=\frac{1}{2\pi}B_{kr}(0)[\lambda_{2}-\lambda_{1}]+\lim_{T\rightarrow\infty}\frac{1}{2\pi}\sum_{0<|\tau|\leqslant T}B_{kr}(\tau)\frac{e^{-i\lambda_{2}\tau}-e^{-i\lambda_{1}\tau}}{-i\tau}$

in the discrete parameter case, and

$F_{kr}(\Delta)=\lim_{a\rightarrow\infty}\frac{1}{2\pi}\int_{-a}^{a}\frac{e^{-i\lambda_{2}\tau}-e^{-i\lambda_{1}\tau}}{-i\tau}B_{kr}(\tau)d\tau$

in the continuous parameter case.

### 2.2 Discrete time scale invariant processes

###### Definition 2.2

A process $\{X(t),t\in{\bf R^{+}}\}$ is said to be self-similar of index $H>0$ if, for any $\lambda>0$,

$\{\lambda^{-H}X(\lambda t)\}\stackrel{d}{=}\{X(t)\}.$ (2.5)

The process is said to be DSI of index $H$ and scaling factor $\lambda_{0}>0$, or $(H,\lambda_{0})$-DSI, if $(2.5)$ holds for $\lambda=\lambda_{0}$. Intuitively, self-similarity refers to invariance with respect to any dilation factor. This may, however, be too strong a requirement in situations where scaling properties are only observed for some preferred dilation factors.

###### Definition 2.3

A process $\{X(k),k\in{\bf\check{T}}\}$ is called a discrete time self-similar (DT-SS) process with parameter space $\check{T}$, where $\check{T}$ is any subset of countably many distinct points of positive real numbers, if for any $k_{1},k_{2}\in\check{T}$

$\{X(k_{2})\}\stackrel{d}{=}(\frac{k_{2}}{k_{1}})^{H}\{X(k_{1})\}.$ (2.6)

The process $X(\cdot)$ is called discrete time scale invariant (DT-SI) with scale $l>0$ and parameter space $\check{T}$ if $(2.6)$ holds for any $k_{1},k_{2}=lk_{1}\in\check{T}$.

###### Remark 2.1

If the process $\{X(t),t\in{\bf R^{+}}\}$ is DSI with scale $l=\alpha^{T}$ for fixed $T\in{\bf N}$ and $\alpha>1$, then by sampling the process at the points $\alpha^{k}$, $k\in{\bf Z}$, we obtain $X(\cdot)$ as a DT-SI process with parameter space $\check{T}=\{\alpha^{k},k\in{\bf Z}\}$ and scale $l=\alpha^{T}$. If we instead sample $X(\cdot)$ at the points $\alpha^{nT+k}$, $n\in{\bf Z}$, for fixed $k=0,1,\ldots,T-1$, then $X(\cdot)$ is a DT-SS process with parameter space $\check{T}=\{\alpha^{nT+k},n\in{\bf Z}\}$.

Yazici et al. [9] introduced wide sense self-similar processes by the following definition; this class can be obtained by applying the Lamperti transformation ${\cal L}_{H}$ to the class of wide sense stationary processes. It encompasses all strictly self-similar processes with finite variance, including Gaussian processes such as fractional Brownian motion, but no other alpha-stable processes.

###### Definition 2.4

A random process $\{X(t),t\in{\bf R^{+}}\}$ is said to be wide sense self-similar with index $H$, for some $H>0$, if the following properties are satisfied for each $c>0$ and $t,t_{1},t_{2}>0$:

(i) $E[X^{2}(t)]<\infty$,
(ii) $E[X(ct)]=c^{H}E[X(t)]$,
(iii) $E[X(ct_{1})X(ct_{2})]=c^{2H}E[X(t_{1})X(t_{2})]$.

This process is called wide sense DSI of index $H$ and scaling factor $c_{0}>0$ if the above conditions hold for some $c=c_{0}$.
###### Definition 2.5

A random process $\{X(k),k\in\check{T}\}$ is called DT-SS in the wide sense with index $H>0$ and parameter space $\check{T}$, where $\check{T}$ is any subset of countably many distinct points of positive real numbers, if for all $k,k_{1}\in\check{T}$ and all $c>0$ such that $ck,ck_{1}\in\check{T}$:

(i) $E[X^{2}(k)]<\infty$,
(ii) $E[X(ck)]=c^{H}E[X(k)]$,
(iii) $E[X(ck)X(ck_{1})]=c^{2H}E[X(k)X(k_{1})]$.

If the above conditions hold for some fixed $c=c_{0}$, the process is called DT-SI in the wide sense with scale $c_{0}$.

###### Remark 2.2

Let $\{X(t),t\in{\bf R^{+}}\}$ in Remark $2.1$ be DSI in the wide sense, with the same scale $l=\alpha^{T}$. Then $X(\cdot)$ with parameter space $\check{T}=\{\alpha^{k},k\in{\bf Z}\}$ for $\alpha>1$ is DT-SI in the wide sense, and $X(\cdot)$ with parameter space $\check{T}=\{\alpha^{nT+k},n\in{\bf Z}\}$ for fixed $T\in{\bf N}$, $\alpha>1$, is DT-SS in the wide sense for $k=0,\ldots,T-1$. Throughout this paper we deal with wide sense self-similar and wide sense scale invariant processes, and for simplicity we omit the term ”in the wide sense” hereafter.

### 2.3 Quasi Lamperti transformation

We introduce the quasi Lamperti transformation and its properties as follows.

###### Definition 2.6

The quasi Lamperti transform with positive index $H$ and $\alpha>1$, denoted by ${\cal L}_{H,\alpha}$, operates on a random process $\{Y(t),t\in{\bf R}\}$ as

${\cal L}_{H,\alpha}Y(t)=t^{H}Y(\log_{\alpha}t)$ (2.7)

and the corresponding inverse quasi Lamperti transform ${\cal L}^{-1}_{H,\alpha}$ acts on a process $\{X(t),t\in{\bf R^{+}}\}$ as

${\cal L}^{-1}_{H,\alpha}X(t)={\alpha}^{-tH}X(\alpha^{t}).$ (2.8)

###### Corollary 2.1

If $\{Y(t),t\in{\bf R}\}$ is a stationary process, its quasi Lamperti transform $\{{\cal L}_{H,\alpha}Y(t),t\in{\bf R^{+}}\}$ is self-similar. Conversely, if $\{X(t),t\in{\bf R^{+}}\}$ is a self-similar process, its inverse quasi Lamperti transform $\{{\cal L}^{-1}_{H,\alpha}X(t),t\in{\bf R}\}$ is stationary.

###### Corollary 2.2

If $\{X(t),t\in{\bf R^{+}}\}$ is $(H,\alpha^{T})$-DSI, then ${\cal L}^{-1}_{H,\alpha}X(t)=Y(t)$ is PC with period $T>0$. Conversely, if $\{Y(t),t\in{\bf R}\}$ is PC with period $T$, then ${\cal L}_{H,\alpha}Y(t)=X(t)$ is $(H,\alpha^{T})$-DSI.

###### Remark 2.3

If $X(\cdot)$ is a DT-SS process with parameter space $\check{T}=\{l^{n},n\in{\bf Z}\}$, then its stationary counterpart $Y(\cdot)$ has parameter space $\check{T}=\{nT,n\in{\bf Z}\}$:

$X(l^{n})={\cal L}_{H,\alpha}Y(l^{n})=l^{nH}Y(\log_{\alpha}{\alpha^{nT}})=\alpha^{nTH}Y(nT).$

It is also clear from the following relation that if $X(\cdot)$ is a DT-SI process with scale $l=\alpha^{T}$, $T\in{\bf N}$, and parameter space $\check{T}=\{\alpha^{n},n\in{\bf Z}\}$, then $Y(\cdot)$ is a discrete time periodically correlated (DT-PC) process with period $T$ and parameter space $\check{T}=\{n,n\in{\bf Z}\}$:

$Y(n)={\cal L}^{-1}_{H,\alpha}X(n)=\alpha^{-nH}X(\alpha^{n}).$

## 3 Structure of the process

In this section we define a multi-dimensional DT-SS process in the wide sense. We also introduce a new method for sampling a DSI process with scale $l>1$, which provides sampling at arbitrary points in the interval $[1,l)$ and at the multiples $l^{n}$ of such points in the intervals $[l^{n},l^{n+1})$, $n\in{\bf N}$. We introduce the DT-ESI process corresponding to the multi-dimensional DT-ESS process. Finally, in Theorem 3.1, we find a harmonic-like representation and the spectral density matrix of the multi-dimensional DT-ESS process. A small numerical sketch of the sampling scheme is given below, before the formal definitions.
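The following Python sketch is purely illustrative and not part of the original development: it builds the sampling grid $l^{n}{\bf s}_{k}$ and the embedded processes $W$ and $V$ for the simple Brownian motion of Example 4.1 below. The parameter values, the choice of sample points, and the exact simulation of $B(\cdot)$ at the (increasing) sample times via Gaussian increments are assumptions made only for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# scale l = alpha**T and q arbitrary sample points s_k in [1, l)
alpha, T, H = 1.5, 2, 0.7
l = alpha ** T
s = np.array([1.0, 1.3, 1.9])              # 1 <= s_0 < s_1 < s_2 < l, so q = 3
q = len(s)
n_scales = 6                               # scale intervals [l^n, l^{n+1})

# sampling grid t_{n,k} = l^n * s_k, flattened in the order of W(nq + k)
t = np.concatenate([l ** n * s for n in range(n_scales)])

# simple Brownian motion (Example 4.1): X(t) = lambda^{n(H - 1/2)} B(t)
# for t in [lambda^{n-1}, lambda^n), here with lambda = l; B is simulated
# exactly at the increasing sample times by cumulative Gaussian increments
dt = np.diff(np.concatenate([[0.0], t]))
B = np.cumsum(rng.normal(scale=np.sqrt(dt)))
n_of_t = np.floor(np.log(t) / np.log(l)).astype(int) + 1   # t in [l^{n-1}, l^n)
X = l ** (n_of_t * (H - 0.5)) * B

# embedded DT-SI process W(nq + k) = X(l^n s_k) and its q-dimensional
# DT-SS counterpart V^k(n) = W(nq + k)
W = X
V = W.reshape(n_scales, q)                 # V[n, k] = V^k(n)
```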
###### Definition 3.1 The process $U(t)=(U^{0}(t),U^{1}(t),\ldots,U^{q-1}(t))$ with parameter space $\check{T}=\\{l^{n},n\in{\bf Z}\\}$, $l=\alpha^{T}$, $\alpha>1$ and $T\in{\bf N}$ is a q-dimensional discrete time self similar process in the wide sense, where $\bf(a)$ $\\{U^{j}(\cdot)\\}$ for all $j=0,1,\cdots,q-1$ is DT-SS process with parameter space $\check{T}^{j}=\\{l^{n},n\in{\bf Z}\\}$. $\bf(b)$ For every $n,\tau\in{\bf Z},\,\ j,k=0,1,\cdots,q-1$ $\mathrm{Cov}\big{(}U^{j}(l^{n+\tau}),U^{k}(l^{n})\big{)}=l^{2nH}\mathrm{Cov}\big{(}U^{j}(l^{\tau}),U^{k}(1)\big{)}.$ Our method of sampling is to provide enough flexibility to choose arbitrary sample points of a discrete time scale invariant process. So, one could decide to have $T$ partitions in each scale interval $I_{n}=[l^{n},l^{n+1})$, $n\in{\bf Z}$ of a continuous time DSI process $X(\cdot)$ with scale $l>1$ and find $\alpha$ by $l=\alpha^{T}$. Then our partitions in scale interval $I_{n}$ are $[\alpha^{nT},\alpha^{nT+1}),[\alpha^{nT+1},\alpha^{nT+2}),\ldots,[\alpha^{nT+T-1},\alpha^{(n+1)T}).$ So we consider to have $n_{k}$ samples in partition $[\alpha^{nT+k},\alpha^{nT+k+1})$ at points $\alpha^{nT+k}s_{k_{1}},\alpha^{nT+k}s_{k_{2}},\ldots,\alpha^{nT+k}s_{k_{n_{k}}}$ where $1\leqslant s_{k_{1}}<s_{k_{2}}<\ldots<s_{k_{n_{k}}}<\alpha$, $k=0,\ldots,T-1$ and $q=\sum_{i=0}^{T-1}n_{i}$. Now we can state the following remark. ###### Remark 3.1 Let $U^{k}(l^{n})=X(l^{n}{\bf s}_{u})$ in Definition $3.1$, where ${\bf s}_{u}=\alpha^{k}s_{k_{x}}$ in which $\sum_{i=-1}^{k-1}n_{i}\leqslant u<\sum_{i=-1}^{k}n_{i}$, $n_{-1}=0$ and $u=x+\sum_{i=0}^{k-1}n_{i}$, $x=1,\ldots,n_{k}$. Thus $X(l^{n}{\bf s}_{u})$ for $u=0,\ldots,q-1$ is a DT-SS process and $U(l^{n})=\big{(}X(l^{n}{\bf s}_{0}),\ldots,X(l^{n}{\bf s}_{q-1})\big{)}$ is a $q$-dimensional DT-SS process. By such method of sampling at discrete points we provide a $q$-dimensional DT- ESS process $V(n)$ as $V(n)=\big{(}V^{0}(n),V^{1}(n),\ldots,V^{q-1}(n)\big{)},\hskip 28.45274ptn\in{\bf Z}$ where $q=\sum_{i=0}^{T-1}n_{i}$ and $V^{u}(n):=X(\alpha^{nT}{\bf s}_{u})$ (3.1) $\sum_{i=-1}^{k-1}n_{i}\leqslant u<\sum_{i=-1}^{k}n_{i}$, $n_{-1}=0$, ${\bf s}_{u}=\alpha^{k}s_{k_{x}}$ and $x=u-\sum_{i=0}^{k-1}n_{i}$, $u=0,\ldots,q-1$. ###### Remark 3.2 Corresponding to the $q$-dimensional DT-ESS process $V(n)$ there exist a DT- ESI process $W(\kappa)$ with scale $l=\alpha^{T}$ as $W(\kappa):=V^{u}(n)=X(\alpha^{nT}{\bf s}_{u})\hskip 28.45274pt\kappa\in{\bf Z}$ (3.2) where $u=\kappa-q[\frac{\kappa}{q}]$, $n=[\frac{\kappa}{q}]$ and $\kappa=nq+u$, since by $(3.1)$ and $(3.2)$ $W(\kappa+q)=X(\alpha^{(n+1)T}{\bf s}_{u})\stackrel{{\scriptstyle d}}{{=}}\alpha^{TH}X(\alpha^{nT}{\bf s}_{u})=l^{H}W(\kappa).$ By the following theorem, the spectral density matrix of the $q$-dimensional DT-ESS process and harmonic like representation of each column is obtained. ###### Theorem 3.1 Let $X(\cdot)$ be a DSI process with scale $l=\alpha^{T}$ and $1\leqslant{\bf s}_{0}<{\bf s}_{1}<\ldots<{\bf s}_{q-1}<\alpha^{T}$, then $V(n)=\big{(}V^{0}(n),\ldots,V^{q-1}(n)\big{)}$, where $V^{u}(n)=X(\alpha^{nT}{\bf s}_{u})$, $n\in{\bf Z}$ and $u=0,\ldots,q-1$ is a $q$-dimensional DT-ESS process and (i) The harmonic like representation of $V^{u}(n)$ is $V^{u}(n)=(\alpha^{nT}{\bf s}_{u})^{H}\int_{0}^{2\pi}e^{i\omega n}d\phi_{u}(\omega)$ (3.3) where $\phi_{u}(\omega)$ is an orthogonal spectral measure, that is $E[d\phi_{u}(\omega)\overline{d\phi_{\nu}(\omega^{\prime})}]=0$, $u,\nu=0,\ldots,q-1$ when $\omega\neq\omega^{\prime}$. 
(ii) The corresponding spectral density matrix of $V(n)$ is $g^{H}(\omega)=[g_{u,\nu}^{H}(\omega)]_{u,\nu=0,\ldots,q-1}$, where $g_{u,\nu}^{H}(\omega)=\frac{({\bf s}_{u}{\bf s}_{\nu})^{-H}}{2\pi}\sum_{\tau=-\infty}^{\infty}\alpha^{-TH\tau}e^{-i\omega\tau}Q^{H}_{u,\nu}(\tau)$ (3.4) $\tau\in{\bf N}$ and $Q^{H}_{u,\nu}(\tau)$ is the covariance function of $V^{u}(\tau)$ and $V^{\nu}(0)$. Proof of (i): Remark $2.3$ implies that $V^{u}(n)=X(\alpha^{nT}{\bf s}_{u})={\cal L}_{H,\alpha}Y(\alpha^{nT}{\bf s}_{u})=(\alpha^{nT}{\bf s}_{u})^{H}\eta^{u}(n)$ where $\eta^{u}(n)=Y(nT+\log_{\alpha}{\bf s}_{u})$. Thus $V^{u}(n)$ for every $u=0,1,\ldots,q-1$ is a DT-ESS process in $n$, where its discrete time stationary counterpart $\eta^{u}(n)$ for fixed $u=0,1,\ldots,q-1$ has spectral representation $\eta^{u}(n)=\int_{0}^{2\pi}e^{i\omega n}d\phi_{u}(\omega)$. Proof of (ii): The covariance matrix of $V(n)$ is denoted by $Q^{H}(n,\tau)=[Q^{H}_{u,\nu}(n,\tau)]_{u,\nu=0,\ldots,q-1}$ where $Q^{H}_{u,\nu}(n,\tau)=E[V^{u}(n+\tau)V^{\nu}(n)]=E[X(\alpha^{(n+\tau)T}{\bf s}_{u})X(\alpha^{nT}{\bf s}_{\nu})]$ By the scale invariant property of the process $X(\cdot)$ we have that $Q^{H}_{u,\nu}(n,\tau)=\alpha^{2nTH}E[X(\alpha^{\tau T}{\bf s}_{u})X({\bf s}_{\nu})]=\alpha^{2nTH}Q^{H}_{u,\nu}(\tau)$ (3.5) where $Q^{H}_{u,\nu}(\tau)=Q^{H}_{u,\nu}(0,\tau)=E[V^{u}(\tau)V^{\nu}(0)]$, then by (3.3) $Q^{H}_{u,\nu}(\tau)=E[(\alpha^{\tau T}{\bf s}_{u})^{H}({\bf s}_{\nu})^{H}\int_{0}^{2\pi}e^{i\omega\tau}d\phi_{u}(\omega)\int_{0}^{2\pi}\overline{d\phi_{v}(\omega^{\prime})}]$ $=\alpha^{\tau TH}({\bf s}_{u}{\bf s}_{\nu})^{H}\int_{0}^{2\pi}e^{i\omega\tau}dG^{H}_{u,\nu}(\omega)$ (3.6) where $E[d\phi_{u}(\omega)\overline{d\phi_{\nu}(\omega^{\prime})}]=dG^{H}_{u,\nu}(\omega)$ when $\omega=\omega^{\prime}$ and is $0$ when $\omega\neq\omega^{\prime}$. 
On the other hand, by the definition of $\eta^{u}(n)$ in the proof of part $(i)$ $Q^{H}_{u,\nu}(\tau)=E[X(\alpha^{\tau T}{\bf s}_{u})X({\bf s}_{\nu})]=E[{\cal L}_{H,\alpha}Y(\alpha^{\tau T}{\bf s}_{u}){\cal L}_{H,\alpha}Y({\bf s}_{\nu})]$ $=(\alpha^{\tau T}{\bf s}_{u}{\bf s}_{\nu})^{H}E[Y(\tau T+\log_{\alpha}{\bf s}_{u})Y(\log_{\alpha}{\bf s}_{\nu})]$ $=(\alpha^{\tau T}{\bf s}_{u}{\bf s}_{\nu})^{H}E[\eta^{u}(\tau)\eta^{\nu}(0)]=(\alpha^{\tau T}{\bf s}_{u}{\bf s}_{\nu})^{H}B_{u,\nu}(\tau).$ Then by (3.6) $B_{u,\nu}(\tau)=\int_{0}^{2\pi}e^{i\omega\tau}dG^{H}_{u,\nu}(\omega),\hskip 14.22636ptu,\nu=0,\ldots,q-1$ Now by (2.3) and (2.4) for $u,\nu=0,\ldots,q-1$ we have that $G^{H}_{u,\nu}(A)=\frac{1}{2\pi}\int_{A}\sum_{\tau=-\infty}^{\infty}B_{u,\nu}(\tau)e^{-i\lambda\tau}d\lambda.$ By substituting $B_{u,\nu}(\tau)=(\alpha^{\tau T}{\bf s}_{u}{\bf s}_{\nu})^{-H}Q^{H}_{u,\nu}(\tau)$, the elements of the spectral distribution function, $G^{H}_{u,\nu}(\cdot)$ has the following representation $G^{H}_{u,\nu}(A)=\frac{({\bf s}_{u}{\bf s}_{\nu})^{-H}}{2\pi}\int_{A}\sum_{\tau=-\infty}^{\infty}\alpha^{-TH\tau}e^{-i\lambda\tau}Q^{H}_{u,\nu}(\tau)d\lambda.$ (3.7) Let $A=(\omega,\omega+d\omega]$, then the elements of the spectral density matrix, $g_{u,\nu}^{H}(\omega)$ are $g_{u,\nu}^{H}(\omega):=\frac{G^{H}_{u,\nu}(d\omega)}{d\omega}=\frac{({\bf s}_{u}{\bf s}_{\nu})^{-H}}{2\pi}\sum_{\tau=-\infty}^{\infty}\alpha^{-TH\tau}\big{(}\frac{1}{d\omega}\int_{\omega}^{\omega+d\omega}e^{-i\lambda\tau}d\lambda\big{)}Q^{H}_{u,\nu}(\tau)$ $=\frac{({\bf s}_{u}{\bf s}_{\nu})^{-H}}{2\pi}\sum_{\tau=-\infty}^{\infty}\alpha^{-TH\tau}\big{(}\frac{1}{-i\tau}\lim_{d\omega\rightarrow 0}\frac{e^{-i{(\omega+d\omega)\tau}}-e^{-i\omega\tau}}{d\omega}\big{)}Q^{H}_{u,\nu}(\tau)$ $=\frac{({\bf s}_{u}{\bf s}_{\nu})^{-H}}{2\pi}\sum_{\tau=-\infty}^{\infty}\alpha^{-TH\tau}\big{(}(\frac{1}{-i\tau})(-i\tau)e^{-i\omega\tau}\big{)}Q^{H}_{u,\nu}(\tau).$ Thus we get to the assertion of part (ii) of the theorem.$\square$ ## 4 Multi-dimensional DT-ESSM process Using our method of sampling in section 3, we find the covariance function of the DT-ESI process $W(\cdot)$, which is defined in (3.2) and its corresponding multi-dimensional DT-ESS process $V(\cdot)$, defined in (3.1) for the case that they are Markov in the wide sense as well, which we call them DT-ESIM and DT-ESSM respectively in subsection 4.1. We find the spectral density matrix of these processes in subsection 4.2. ### 4.1 Covariance function of DT-ESIM Here we characterize the covariance function of the DT-ESIM process $\\{W(\kappa),\kappa\in{\bf Z}\\}$ in Theorem 4.1 and the covariance function of the associated $q$-dimensional DT-ESSM process in Theorem 4.2. ###### Theorem 4.1 Let $\\{W(\kappa),\kappa\in{\bf Z}\\}$, defined in $(3.2)$, be DT-ESI and Markov in the wide sense DT-ESIM, with scale $\alpha^{T}$. Then for $\tau\in{\bf W}=\\{0,1,\ldots\\}$, $\kappa=nq+\nu$, $\kappa+\tau=mq+u$, $u,\nu=0,\ldots,q-1$ and $n,m\in{\bf Z}$, the covariance function $R_{\kappa}(\tau):=E[W(\kappa+\tau)W(\kappa)]=E[X(\alpha^{mT}{\bf s}_{u})X(\alpha^{nT}{\bf s}_{\nu})]$ (4.1) can be characterized as $R_{\kappa}(tq+s)=[\tilde{f}(q-1)]^{t}\tilde{f}(\kappa+s-1)[\tilde{f}(\kappa-1)]^{-1}R_{\kappa}(0)$ (4.2) $R_{\kappa}(-tq+s)=\alpha^{-2tqH}R_{\kappa+s}((t-1)q+q-s)$ where $1\leqslant{\bf s}_{0}<{\bf s}_{1}<\ldots<{\bf s}_{q-1}<\alpha^{T}$, $t\in{\bf Z}$, $s=0,\ldots,q-1$ $\tilde{f}(r)=\prod_{j=0}^{r}f(j)=\prod_{j=0}^{r}R_{j}(1)/R_{j}(0),\hskip 19.91692ptr\in{\bf Z}$ (4.3) and $\tilde{f}(-1)=1$. 
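As an illustration of Theorem 4.1 (not part of the original paper), the following sketch evaluates the covariance recursion $(4.2)$ numerically from the $2q$ numbers $\{R_{j}(0),R_{j}(1)\}$ alone. It uses the periodic extension $f(j+q)=f(j)$ established in the proof below and the relation $R_{\kappa+q}(0)=\alpha^{2TH}R_{\kappa}(0)$ implied by the scale invariance of $W(\cdot)$; negative lags are reduced through the symmetry $R_{\kappa}(\tau)=R_{\kappa+\tau}(-\tau)$ rather than by applying the second line of $(4.2)$ directly, and $\kappa\geqslant 0$, $\kappa+\tau\geqslant 0$ are assumed.

```python
import numpy as np

def cov_W(kappa, tau, R0, R1, alpha, T, H):
    # R_kappa(tau) = E[W(kappa + tau) W(kappa)] via Eq. (4.2), given the
    # lag-0 and lag-1 covariances R0[j] = R_j(0), R1[j] = R_j(1), j < q.
    q = len(R0)
    f = np.asarray(R1) / np.asarray(R0)        # f(j) = R_j(1) / R_j(0)
    if tau < 0:                                # reduce to a non-negative lag
        return cov_W(kappa + tau, -tau, R0, R1, alpha, T, H)
    t, s = divmod(tau, q)                      # tau = t*q + s with 0 <= s < q
    ftilde_q = np.prod(f)                      # f~(q-1), Eq. (4.3)
    # f~(kappa+s-1) / f~(kappa-1) = product of f(j) for j = kappa..kappa+s-1,
    # using the periodicity f(j+q) = f(j)
    step = np.prod([f[j % q] for j in range(kappa, kappa + s)])
    # R_kappa(0) from scale invariance: W(kappa+q) = alpha^{TH} W(kappa)
    var = alpha ** (2 * T * H * (kappa // q)) * R0[kappa % q]
    return ftilde_q ** t * step * var

# sanity check: for kappa = 0, tau = 1 the recursion returns R_0(1) itself
R0, R1 = [1.0, 0.8, 1.2], [0.6, 0.5, 0.9]
assert np.isclose(cov_W(0, 1, R0, R1, alpha=1.5, T=2, H=0.7), R1[0])
```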
Before proceeding to the proof of the theorem we present the main property of covariance function of the wide sense Markov process. Let $\\{X(n),n\in{\bf Z}\\}$ be a second order process of centered random variables, $E[X(n)]=0$ and $E[|X(n)|^{2}]<\infty$, $n\in{\bf Z}$. Following Doob [3], the real valued second order process $X(\cdot)$ is Markov in the wide sense if $R(n_{1},n_{2})=G\big{(}\min(n_{1},n_{2})\big{)}H\big{(}\max(n_{1},n_{2})\big{)}$ (4.4) where $R(n_{1},n_{2}):=E[X(n_{1})X(n_{2})]$ is the covariance function of $X(\cdot)$ and $G$ and $H$ are defined uniquely up to a constant multiple and the ratio $G/H$ is a positive nondecreasing function. Proof of the theorem: As $\\{W(\kappa),\kappa\in{\bf Z}\\}$ is DT-ESI with scale $\alpha^{T}$, this theorem fully characterize the covariance function of the DT-ESIM process. From the Markov property (4.4), $R_{\kappa}(\tau)$ defined in (4.1), satisfies $R_{\kappa}(\tau)=G(\alpha^{nT}{\bf s}_{\nu})H(\alpha^{mT}{\bf s}_{u}),\hskip 28.45274pt\tau\in{\bf Z},\alpha>1$ (4.5) By substituting $\tau=0$ in the above relation we have $m=n$, then $G(\alpha^{nT}{\bf s}_{\nu})=\frac{R_{\kappa}(0)}{H(\alpha^{nT}{\bf s}_{\nu})}.$ Therefore $R_{\kappa}(\tau)=\frac{H(\alpha^{mT}{\bf s}_{u})}{H(\alpha^{nT}{\bf s}_{\nu})}R_{\kappa}(0),\hskip 28.45274pt\tau\in{\bf Z}$ (4.6) Thus $H(\alpha^{mT}{\bf s}_{u})=\frac{R_{\kappa}(\tau)}{R_{\kappa}(0)}H(\alpha^{nT}{\bf s}_{\nu}).$ So ${\bf H}(\kappa+\tau)=\frac{R_{\kappa}(\tau)}{R_{\kappa}(0)}{\bf H}(\kappa)$ where ${\bf H}(\kappa)=H(\alpha^{nT}{\bf s}_{\nu})$ and ${\bf H}(\kappa+\tau)=H(\alpha^{mT}{\bf s}_{u})$. Therefore $R_{\kappa}(\tau)=\frac{{\bf H}(\kappa+\tau)}{{\bf H}(\kappa)}R_{\kappa}(0).$ (4.7) For $\tau=1$, we have ${\bf H}(\kappa+1)=\frac{R_{\kappa}(1)}{R_{\kappa}(0)}{\bf H}(\kappa).$ By the recursive relation, it follows that ${\bf H}(\kappa+1)=\frac{R_{\kappa}(1)}{R_{\kappa}(0)}\frac{R_{\kappa-1}(1)}{R_{\kappa-1}(0)}\ldots\frac{R_{0}(1)}{R_{0}(0)}{\bf H}(0)={\bf H}(0)\prod_{j=0}^{\kappa}f(j)$ and ${\bf H}(\kappa)={\bf H}(0)\prod_{j=0}^{\kappa-1}f(j)$ where $f(j)=R_{j}(1)/R_{j}(0)$. 
By the assumptions $n=[\frac{\kappa}{q}]$, $\nu=\kappa-q[\frac{\kappa}{q}]$ we have $\kappa=nq+\nu$, then ${\bf H}(nq+\nu)={\bf H}(0)\prod_{j=0}^{nq+\nu-1}f(j).$ (4.8) As mentioned in Remark 3.2, $\\{W(\kappa),\kappa\in{\bf Z}\\}$ is DT-ESI with scale $l$, then $f(\kappa+q)=\frac{R_{\kappa+q}(1)}{R_{\kappa+q}(0)}=\frac{E[W(\kappa+q+1)W(\kappa+q)]}{E[W(\kappa+q)W(\kappa+q)]}$ $=\frac{\alpha^{2TH}E[W(\kappa+1)W(\kappa)]}{\alpha^{2TH}E[W(\kappa)W(\kappa)]}=\frac{R_{\kappa}(1)}{R_{\kappa}(0)}=f(\kappa).$ Hence by (4.8) ${\bf H}(nq+\nu)={\bf H}(0)\big{[}\prod_{j=0}^{q-1}f(j)\big{]}^{n}\prod_{j=0}^{\nu-1}f(j),\hskip 19.91692pt\nu\geqslant 1$ By the definition of $\tilde{f}$ in (4.3) ${\bf H}(nq+\nu)={\bf H}(0)[\tilde{f}(q-1)]^{n}\tilde{f}(\nu-1).$ (4.9) By a similar method one can verify that ${\bf H}(-nq+\nu)={\bf H}(0)[\tilde{f}(q-1)]^{-n}\tilde{f}(\nu-1).$ Let $\tau=tq+s$ in (4.7), $t\in{\bf W}$ and $s=0,1,\ldots,q-1$, then it follows from (4.9) that $R_{\kappa}(tq+s)=\frac{{\bf H}(\kappa+tq+s)}{{\bf H}(\kappa)}R_{\kappa}(0)=\frac{{\bf H}(0)[\tilde{f}(q-1)]^{t}\tilde{f}(\kappa+s-1)}{{\bf H}(0)\tilde{f}(\kappa-1)}R_{\kappa}(0).$ For $\tau=-tq+s$ we have that $R_{\kappa}(-tq+s)=E[X(\alpha^{-tq+\kappa+s})X(\alpha^{\kappa})]=\alpha^{-2tqH}E[X(\alpha^{\kappa+s})X(\alpha^{tq+\kappa})]$ $=\alpha^{-2tqH}R_{\kappa+s}(tq-s)=\alpha^{-2tqH}R_{\kappa+s}((t-1)q+q-s).\square$ Now we can use this theorem to prove the next result as follows, for $q$-dimensional DT-ESSM process. ###### Theorem 4.2 Let $\\{W(\kappa),\kappa\in{\bf Z}\\}$ be a DT-ESIM process, and $\\{V(n),n\in{\bf Z}\\}$ be its associated $q$-dimensional DT-ESSM process with covariance matrix $Q^{H}(n,\tau)$ which is defined by $(3.5)$. Then $Q^{H}(n,\tau)=\alpha^{2nTH}[\tilde{f}(q-1)]^{\tau}CR,\hskip 19.91692pt\tau\in{\bf Z}$ (4.10) where $\tilde{f}(\cdot)$ is defined in $(4.3)$ and the matrices $C$ and $R$ are given by $C=[C_{u,\nu}]_{u,\nu=0,\ldots,q-1}$, where $C_{u,\nu}=\tilde{f}(u-1)[\tilde{f}(\nu-1)]^{-1}$, and $R$ is a diagonal matrix with diagonal elements $R_{\nu}(0)$, $\nu=0,1,\ldots,q-1$, which is defined in $(4.1)$. Proof: As $W(\cdot)$ is DT-ESI with scale $l$, (3.2) and (3.5) indicate that $Q^{H}_{u,\nu}(n,\tau)=\alpha^{2nTH}Q^{H}_{u,\nu}(\tau)$. Now by the assumption $\kappa=nq+\nu$ and $\kappa+\tau=mq+u$ where $m,n\in{\bf Z}$, $\tau\in{\bf W}$, we have $\tau=(m-n)q+u-\nu$ and therefore $R_{\kappa}(\tau)=R_{nq+\nu}((m-n)q+u-\nu)=E[W(mq+u)W(nq+\nu)]$ $=E[X(\alpha^{mT}{\bf s}_{u})X(\alpha^{nT}{\bf s}_{\nu})].$ Hence $Q^{H}_{u,\nu}(\tau)=E[X(\alpha^{\tau T}{\bf s}_{u})X({\bf s}_{\nu})]=R_{\nu}(\tau q+u-\nu)$ (4.11) and by the Markov property of $W(\cdot)$ from (4.2) we have $R_{\nu}(\tau q+u-\nu)=[\tilde{f}(q-1)]^{\tau}\tilde{f}(u-1)[\tilde{f}(\nu-1)]^{-1}R_{\nu}(0)$ for $u,\nu=0,\ldots,q-1$. Let $C_{u,\nu}=\tilde{f}(u-1)[\tilde{f}(\nu-1)]^{-1}$, so $Q^{H}_{u,\nu}(\tau)=[\tilde{f}(q-1)]^{\tau}C_{u,\nu}R_{\nu}(0).$ (4.12) Thus we can represent the elements of the covariance matrix of $q$-dimensional DT-ESSM process as $Q^{H}_{u,\nu}(n,\tau)=\alpha^{2nTH}[\tilde{f}(q-1)]^{\tau}C_{u,\nu}R_{\nu}(0).\square$ ### 4.2 Spectral representation of the process The spectral density matrix of the $q$-dimensional DT-ESSM process is characterized by the following lemma which is proved in [7]. 
###### Lemma 4.1 The spectral density matrix $g^{H}(\omega)=[g^{H}_{u,\nu}(\omega)]_{u,\nu=0,\ldots,q-1}$ of the $q$-dimensional DT-ESSM process $V(n)$ is specified by $g_{u,\nu}^{H}(\omega)=\frac{({\bf s}_{u}{\bf s}_{\nu})^{-H}}{2\pi}\left[\frac{\tilde{f}(u-1)R_{\nu}(0)}{\tilde{f}(\nu-1)\big{(}1-e^{-i\omega}\alpha^{-HT}\tilde{f}(q-1)\big{)}}-\frac{\tilde{f}(\nu-1)R_{u}(0)}{\tilde{f}(u-1)\big{(}1-e^{-i\omega}\alpha^{HT}\tilde{f}^{-1}(q-1)\big{)}}\right]$ where $R_{k}(0)$ is the variance of $W(k)$ and $\tilde{f}(\cdot)$ is defined by $(4.3)$. Proof: By applying (3.4) and (4.12), the spectral density matrix of the process $\\{V(n),n\in{\bf Z}\\}$ which is denoted by $g^{H}(\omega)=[g^{H}_{u,\nu}(\omega)]_{u,\nu=0,\ldots,q-1}$ can be written as $g_{u,\nu}^{H}(\omega)=\frac{({\bf s}_{u}{\bf s}_{\nu})^{-H}}{2\pi}\Big{[}\sum_{\tau=0}^{\infty}\alpha^{-TH\tau}e^{-i\omega\tau}Q^{H}_{u,\nu}(\tau)$ $+\sum_{\tau=-\infty}^{-1}\alpha^{-TH\tau}e^{-i\omega\tau}Q^{H}_{u,\nu}(\tau)\Big{]}=g_{u,\nu,1}^{H}(\omega)+g_{u,\nu,2}^{H}(\omega)$ where $g_{u,\nu,1}^{H}(\omega)=\frac{({\bf s}_{u}{\bf s}_{\nu})^{-H}}{2\pi}\sum_{\tau=0}^{\infty}\alpha^{-TH\tau}e^{-i\omega\tau}[\tilde{f}(q-1)]^{\tau}\tilde{f}(u-1)[\tilde{f}(\nu-1)]^{-1}R_{\nu}(0)$ $=\frac{({\bf s}_{u}{\bf s}_{\nu})^{-H}\tilde{f}(u-1)R_{\nu}(0)}{2\pi\tilde{f}(\nu-1)}\sum_{\tau=0}^{\infty}\big{(}\alpha^{-TH}e^{-i\omega}\tilde{f}(q-1)\big{)}^{\tau}.$ (4.13) By Remark 3.2, the scale invariant property of $W(\kappa)$ and the assumption, that at least one of the $\text{Corr}[W(j)W(j+1)]$ be smaller than one, we have that $|\tilde{f}(q-1)|<\alpha^{TH}$ for $j=0,\ldots,q-1$. Thus $|e^{-i\omega}\alpha^{-TH}\tilde{f}(q-1)|=|\alpha^{-TH}\tilde{f}(q-1)|<1,$ and (4.13) for $\tau\in{\bf W}$ is convergent. By the equality $Q_{u,\nu}(-\tau)=E[X(\alpha^{-\tau T}{\bf s}_{u})X({\bf s}_{\nu})]=\alpha^{-2\tau TH}E[X(\alpha^{\tau T}{\bf s}_{\nu})X({\bf s}_{u})]=\alpha^{-2\tau TH}Q_{\nu,u}(\tau),$ convergence of $g_{u,\nu,2}^{H}(\omega)$ follows by a similar method. Therefore $g_{u,\nu}^{H}(\omega)=\frac{({\bf s}_{u}{\bf s}_{\nu})^{-H}}{2\pi}\Big{[}\frac{R_{\nu}(0)\tilde{f}(u-1)}{\tilde{f}(\nu-1)}\sum_{\tau=0}^{\infty}\big{(}\alpha^{-TH}e^{-i\omega}\tilde{f}(q-1)\big{)}^{\tau}$ $+\frac{R_{u}(0)\tilde{f}(\nu-1)}{\tilde{f}(u-1)}\sum_{\tau=1}^{\infty}\big{(}\alpha^{-TH}e^{i\omega}\tilde{f}(q-1)\big{)}^{\tau}\Big{]}$ $=\frac{({\bf s}_{u}{\bf s}_{\nu})^{-H}}{2\pi}\Big{[}\frac{R_{\nu}(0)\tilde{f}(u-1)}{\tilde{f}(\nu-1)\big{(}1-\alpha^{-TH}e^{-i\omega}\tilde{f}(q-1)\big{)}}+\frac{R_{u}(0)\tilde{f}(\nu-1)\alpha^{-TH}e^{i\omega}\tilde{f}(q-1)}{\tilde{f}(u-1)\big{(}1-\alpha^{-TH}e^{i\omega}\tilde{f}(q-1)\big{)}}\Big{]},$ so we arrive at the assertion of the lemma.$\square$ ###### Example 4.1 Let $X(t)=\sum_{n=1}^{\infty}\lambda^{n(H-\frac{1}{2})}I_{[\lambda^{n-1},\lambda^{n})}(t)B(t)$ where $B(\cdot)$ is the standard Brownian motion, $I(\cdot)$ indicator function, $H>0$ and $\lambda>1$. We call this process Simple Brownian Motion. We showed in [7] that $\\{X(t),t\in R^{+}\\}$ is DSI and Markov with Hurst index $H$ and scale $\lambda$. By sampling of this process at points $\alpha^{nT}{\bf s}_{u}$, $n\in{\bf W}$, where $1\leq{\bf s}_{0}\leq{\bf s}_{1},\cdots,{\bf s}_{q-1}<\alpha^{T}$, and by assuming $\lambda=\alpha^{T}$, $W(\kappa):=X(\alpha^{nT}{\bf s}_{u}),$ is a DT-ESIM process, and $V(n)=\big{(}V^{0}(n),\ldots,V_{q-1}(n)\big{)}$ where $V^{u}(n)=W(\kappa)$ is the associated $q$-dimensional DT-ESSM process where $u=\kappa-q[\frac{\kappa}{q}]$, $n=[\frac{\kappa}{q}]$. 
By (4.1) we have that $R_{j}^{H}(0)=R_{j}^{H}(1)=\alpha^{2TH^{\prime}}{\bf s}_{j}$ for $j=0,\cdots,q-2$ and $R_{q-1}^{H}(1)=\alpha^{TH^{\prime}}R_{q-1}^{H}(0)=\alpha^{3TH^{\prime}}{\bf s}_{q-1}$, where $H^{\prime}=H-\frac{1}{2}$. So $R_{u}(0)=\alpha^{2TH^{\prime}}{\bf s}_{u}$, $\;R_{\nu}(0)=\alpha^{2TH^{\prime}}{\bf s}_{\nu}$. Also (4.3) implies that $\tilde{f}(u-1)=\tilde{f}(\nu-1)=1$, $\tilde{f}(q-1)=\alpha^{TH^{\prime}}$. Thus By Lemma 4.1, the spectral density matrix of $V(n)$ is $g_{u,\nu}^{H}(\omega)=\frac{({\bf s}_{u}{\bf s}_{\nu})^{-H}\alpha^{2TH^{\prime}}}{2\pi}\left[\frac{{\bf s}_{\nu}}{1-e^{-i\omega}\alpha^{-T/2}}-\frac{{\bf s}_{u}}{1-e^{-i\omega}\alpha^{T/2}}\right].$ ## References * [1] P. Borgnat, P.O. Amblard, P. Flandrin, 2005, ”Scale invariances and Lamperti transformations for stochastic processes”, Journal of Physics A: Mathematical and General, Vol.38, pp.2081 2101. * [2] M.E. Caballero, L. Chaumont, 2006, ”Weak convergence of positive self-similar Markov processes and overshoots of Levy processes”, The annals of probability, Vol.34, No.3, pp.1012-1034. * [3] J.L. Doob, ”Stochastic Processes”, Wiley, New York 1953. * [4] P. Flandrin, P. Borgnat, P.O. Amblard, 2002, ”From stationarity to selfsimilarity, and back : Variations on the Lamperti transformation”, appear in Processes with Long-Range Correlations, pp.88-117. * [5] E.G. Gladyshev, 1961, ”Periodically correlated random sequences”, Soviet Math. Dokl., No.2, pp.385-388. * [6] N. Modarresi, S. Rezakhah, 2009, ”Discrete time scale invariant Markov processes”, arxiv/pdf/0905/0905.3959v3.pdf. * [7] N. Modarresi, S. Rezakhah, 2010, ”Spectral analysis of Multi-dimensional self-similar Markov processes”, Journal of Physics A: Mathematical and General, Accepted,arxiv/pdf/0907/0907.2295v4.pdf * [8] Y.A. Rozanov, 1967, ”Stationary Random Processes”, Holden-Day, San Francisco. * [9] B. Yazici, R.L. Kashyap, 1997, ”A class of second-order stationary self-similar processes for 1/f phenomena”, IEEE Transactions on Signal Processing, No. 45, pp.396-410.
1 Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, CAUP, Rua das Estrelas, 4150-762 Porto, Portugal
2 Departamento de Física e Astronomia, Faculdade de Ciências, Universidade do Porto, Rua do Campo Alegre, 4169-007 Porto, Portugal
3 Instituto de Astrofísica de Canarias (IAC), 38205 La Laguna, Tenerife, Spain
4 Universidad de La Laguna (ULL), Departamento de Astrofísica, 38206 La Laguna, Tenerife, Spain
5 Département d’astronomie de l’Université de Genève, Chemin Pegasi 51, 1290 Versoix, Switzerland
6 INAF - Osservatorio Astronomico di Brera, Via Bianchi 46, 23807 Merate, Italy
7 European Southern Observatory, Alonso de Córdova 3107, Vitacura, Región Metropolitana, Chile
8 INAF - Osservatorio Astronomico di Palermo, Piazza del Parlamento 1, 90134 Palermo, Italy
9 INAF - Osservatorio Astrofisico di Torino, via Osservatorio 20, 10025 Pino Torinese, Italy
10 Centro de Astrobiología (CSIC-INTA), Crta. Ajalvir km 4, E-28850 Torrejón de Ardoz, Madrid, Spain
11 INAF - Osservatorio Astronomico di Trieste, via G. B. Tiepolo 11, I-34143 Trieste, Italy
12 Consejo Superior de Investigaciones Científicas, Spain
13 Physics Institute, University of Bern, Sidlerstrasse 5, 3012 Bern, Switzerland
14 Institute for Fundamental Physics of the Universe, Via Beirut 2, I-34151 Grignano, Trieste, Italy
15 Instituto de Astrofísica e Ciências do Espaço, Faculdade de Ciências da Universidade de Lisboa, Campo Grande, PT1749-016 Lisboa, Portugal
16 Fundación G. Galilei – INAF (Telescopio Nazionale Galileo), Rambla J. A. Fernández Pérez 7, E-38712 Breña Baja, La Palma, Spain
17 Faculdade de Ciências da Universidade de Lisboa (Departamento de Física), Edifício C8, 1749-016 Lisboa, Portugal
18 European Southern Observatory, Karl-Schwarzschild-Strasse 2, 85748 Garching b. München, Germany
19 Centro de Astrofísica da Universidade do Porto, Rua das Estrelas, 4150-762 Porto, Portugal
20 Department of Physics, and Institute for Research on Exoplanets, Université de Montréal, Montréal, H3T 1J4, Canada
21 Centro de Astrobiología (CAB, CSIC-INTA), Depto. de Astrofísica, ESAC campus, 28692, Villanueva de la Cañada (Madrid), Spain

# A novel framework for semi-Bayesian radial velocities through template matching

A. M. Silva, J. P. Faria, N. C. Santos, S. G. Sousa, P. T. P. Viana, J. H. C. Martins, P. Figueira, C. Lovis, F. Pepe, S. Cristiani, R. Rebolo, R. Allart, A. Cabral, A. Mehner, A. Sozzetti, A. Suárez Mascareño, C. J. A. P. Martins, D. Ehrenreich, D. Mégevand, E. Palle, G. Lo Curto, H. M. Tabernero, J. Lillo-Box, J. I. González Hernández, M. R. Zapatero Osorio, N. C. Hara, N. J. Nunes, P. Di Marcantonio, S. Udry, V. Adibekyan, and X. Dumusque

(Received date / Accepted date)

###### Abstract

Context. The detection and characterization of an increasing variety of exoplanets has been made possible in part by the continuous development of high-resolution, stable spectrographs and the use of the Doppler radial-velocity (RV) method.
The Cross Correlation Function (CCF) method is one of the traditional approaches for the derivation of RVs. More recently, template matching has been introduced as an advantageous alternative for M-dwarf stars. Aims. We describe a new implementation of the template matching technique for stellar RV estimation within a semi-Bayesian framework, providing a more statistically principled characterization of the RV measurements and associated uncertainties. This methodology, named S-BART: Semi-Bayesian Approach for RVs with Template-matching, can currently be applied to HARPS and ESPRESSO data. We first validate its performance with respect to other template matching pipelines using HARPS data. Then, we apply S-BART to ESPRESSO observations, comparing the scatter and uncertainty of the derived RV time series with those obtained through the CCF method. We leave, for future work, a full analysis of the planetary and activity signals present in the datasets considered. Methods. In the context of a semi-Bayesian framework, a common RV shift is assumed to describe the difference between each spectral order of a given stellar spectrum and a template built from the available observations. Posterior probability distributions are obtained for the relative RV associated with each spectrum using the Laplace approximation, after marginalization with respect to the continuum. For validation purposes, we also implemented a traditional template matching approach, where an RV shift is estimated individually for each spectral order and the final RV estimate is calculated as a weighted average of the individual orders’ RVs. Results. The application of our template-based methods to HARPS archival observations of Barnard’s star allowed us to validate our implementation against other template matching methods. Although we found similar results, the RMS of the RVs derived with S-BART was smaller than that obtained with the HARPS-TERRA and SERVAL pipelines. We believe this is due to differences in the construction of the stellar template and the handling of telluric features. After validating S-BART, we applied it to 33 ESPRESSO GTO targets, evaluating its performance and comparing it with the CCF method as implemented in the ESO pipeline. We found a decrease in the median RV scatter of $\sim$10% and $\sim$4% for M- and K-type stars, respectively. Our semi-Bayesian framework yields more precise RV estimates than the CCF method, in particular for M-type stars, where S-BART achieves a median uncertainty of $\sim$ 15 ${\rm cm}\ {\rm s}^{-1}$ over 309 observations of 16 targets. Further, with the same data we estimated the nightly zero point (NZP) of the instrument, finding a weighted NZP scatter below $\sim$ 0.7 ${\rm m}\ {\rm s}^{-1}$. Given that this includes stellar variability, photon noise, and potential planetary signals, it should be taken as an upper limit of the RV precision attainable with ESPRESSO data.

###### Key Words.: Techniques: radial velocities, Techniques: spectroscopic, Planets and satellites: detection, Planets and satellites: terrestrial planets, Methods: statistical, Methods: data analysis

## 1 Introduction

Finding and characterizing other Earths – rocky planets with the physical conditions to hold liquid water on their surface – is one of the boldest goals of present-day astrophysics. The discovery (e.g. Mayor et al. 2011, 2014; Hsu et al. 2019; Rosenthal et al. 2021) that rocky planets are actually very common around solar-type stars, i.e.
late F, G and early K stars, made this goal more achievable and motivated the development of a new generation of ground- and space-based instruments and missions (e.g. ESPRESSO - Pepe et al. 2021, PLATO - Rauer et al. 2014, HIRES@ELT - Marconi et al. 2021).

One of the most prolific exoplanet discovery methods is the radial velocity (RV) method, based on the detection of variations in the velocity of a star along our line of sight, induced by the gravitational pull of planetary companions. However, the identification of Earth-like planets orbiting solar-type stars poses a significant challenge: Earth itself induces a signal with an amplitude of only ~9 ${\rm cm}\ {\rm s}^{-1}$ on the Sun. In order to reach this RV precision regime, a new generation of spectrographs has been developed. An example of a state-of-the-art spectrograph is ESPRESSO, the “Échelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations”, built to reach a precision of 10 ${\rm cm}\ {\rm s}^{-1}$ with a wavelength coverage from 380 to 788 nm (Pepe et al. 2021).

The first confirmed detection of an exoplanet around a solar-type star, 51 Pegasi b (Mayor & Queloz 1995), was achieved with radial velocities computed using the Cross Correlation Function (CCF) method. In the method’s early stages, a binary mask, with fixed non-zero weights attributed to the expected positions of stellar absorption lines, was cross-correlated with the spectra (Baranne et al. 1996). However, as deep, sharp lines contain more RV information than broad, shallow ones (as shown by the methodology introduced in Bouchy et al. 2001), the masks were improved by associating different weights to different lines (Pepe et al. 2002), such that deep lines contribute more to the final CCF profile than shallow ones.

Even though the CCF method has been widely used, building the masks can be a challenging task in some situations, especially for M dwarfs (e.g. Rainer et al. 2020; Lafarga et al. 2020). The lower temperatures of M-type stars result in a large number of stellar spectral lines, most of them spectroscopically blended, which complicates both the construction of the CCF mask and the fitting of the CCF profile. For such cases it has been shown that template matching can surpass the CCF method (e.g. Anglada-Escudé & Butler 2012; Zechmeister et al. 2018; Lafarga et al. 2020), as the stellar template will contain the large majority of the lines in the stellar spectrum. This is a data-driven method, where each spectrum is compared against a template built from the available observations; it has been implemented in HARPS-TERRA (Anglada-Escudé & Butler 2012), NAIRA (Astudillo-Defru 2015) and SERVAL (Zechmeister et al. 2018). More recently, new approaches to RV estimation have emerged, based on line-by-line measurement of RV shifts (Dumusque 2018; Cretignier et al. 2020), modelling of the observed spectrum as a linear combination of a time-invariant stellar spectrum and a time-variant telluric spectrum (wobble, Bedell et al. 2019), or pairwise spectrum comparison through Gaussian process interpolation (GRACE, Rajpaul et al. 2020).

In the next sections, we recast the template matching approach within a semi-Bayesian framework, and then evaluate the performance of template-based RV extraction methodologies when applied to ESPRESSO data. In particular, in Sect.
2 we discuss how the spectral data is processed, the stellar template is created and the telluric features are removed. Afterwards, in Sect. 3 we revisit the classical template matching algorithm, and in Sect. 4 we discuss the working principles of our semi-Bayesian template matching approach, as well as our strategy to efficiently characterize the posterior distribution of the model. In Sect. 5 we evaluate the performance of our template matching algorithms using data from i) 22 HARPS observations of Barnard’s star, to validate our template-based methodologies against the results of other template matching pipelines when applied to a common set of spectra; and ii) 1046 ESPRESSO observations of 33 M-, K- and G-type stars, which will be the main focus of this paper. To the ESPRESSO dataset we will apply the classical template matching approach, the CCF of ESO’s official pipeline, and our semi-Bayesian approach, allowing for the comparison of RV scatter and uncertainties, as well as the estimation of an upper bound for RV precision. We refrain from studying the impact that stellar activity has on S-BART derived RVs, as our ESPRESSO sample mainly contains quiet stars. Further, such an endeavor would translate into a large-scale modelling effort that lies outside the scope of the current paper. Lastly, in Sect. 6 we discuss some limitations of the developed methodology and present some possible improvements.

## 2 Model preparation

In this Section we discuss the stages of data processing, common to all instruments supported by our algorithm, that must be applied before estimating RVs from spectra. We also discuss the creation of the stellar template and the removal of telluric features through the use of synthetic spectra of Earth’s atmosphere. Figure 1 shows the order in which the different procedures are applied.

Figure 1: Workflow of the processing stage that we apply before the RV estimation.

### 2.1 Pre-processing data

The extraction of the spectral orders from the image and the necessary calibrations and corrections are handled by the official Data Reduction Software (DRS) of the respective instruments. Regions around spectral lines that are typically used as activity indicators, or are clearly identifiable as emission features, are removed from the spectra.

Table 1: Central wavelength (measured in air) and size of the spectral regions removed from the spectra.

| Line | Wavelength (Air) [$\AA$] | Window [$\AA$] | Reference |
|---|---|---|---|
| CaK | 3933.66 | 0.6 | 1 |
| CaH | 3968.47 | 0.4 | 1 |
| $H\epsilon$ | 3970.075 | 0.6 | 3 |
| $H\delta$ | 4101.734 | 1.4 | 4 |
| $H\gamma$ | 4340.472 | 2.0 | 5 |
| $H\beta$ | 4861.35 | 1.8 | 6 |
| Na I D | 5889.96 | 1.4 | 1 |
| Na I D | 5895.93 | 0.9 | 1 |
| $H\alpha$ | 6562.808 | 2.0 | 2 |
| CaI | 6572.795 | 1.8 | 2 |

References: (1) Robertson et al. (2016); (2) Kuerster et al. (2003); (3) Balmer series (n = 7 → 2); (4) Balmer series (n = 6 → 2); (5) Balmer series (n = 5 → 2); (6) Balmer series (n = 4 → 2), Flores et al. (2016).

In Table 1 we identify the central wavelengths and the size of the spectral region that is removed around each feature. The chosen windows have been verified with the spectra of M-type stars. We also remove bad or hot pixels that are flagged by the instrument’s official pipeline, as well as those that have null data. If, after accounting for the effects mentioned above, more than 75% of an order in a stellar spectrum is removed, we do not consider that order when estimating RVs.
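As a concrete illustration of this pre-processing step, the following minimal sketch applies the masks of Table 1 and the 75% order-rejection rule. It is not the S-BART implementation; all names (arrays, function, constants) are our own, and per-order wavelength and flux arrays in Å are assumed.

```python
import numpy as np

# Central wavelength and window size (both in Angstrom, in air) from Table 1.
# Illustrative subset; the full list follows the table above.
ACTIVITY_LINES = [
    (3933.66, 0.6),   # Ca II K
    (3968.47, 0.4),   # Ca II H
    (6562.808, 2.0),  # H-alpha
    (6572.795, 1.8),  # Ca I
]

def usable_pixels(wave, flux, bad_pixel, min_fraction=0.25):
    """Return a boolean mask of usable pixels for one spectral order,
    or None if the order should be discarded entirely (Sect. 2.1).

    wave, flux : 1-D arrays for the order
    bad_pixel  : boolean array, True for pixels flagged bad/hot/null
                 by the instrument's official pipeline
    """
    good = ~bad_pixel & np.isfinite(flux)
    for center, window in ACTIVITY_LINES:
        good &= ~((wave >= center - window / 2) & (wave <= center + window / 2))
    # Discard the order if more than 75% of its pixels were removed.
    if good.sum() < min_fraction * wave.size:
        return None
    return good
```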
### 2.2 Telluric template

Figure 2: Comparison, for three spectral regions, between the telluric spectrum obtained from Tapas (black), the continuum level obtained with a median filter (dashed blue line), and the telluric threshold (dotted red line) built from the continuum.

Earth’s atmosphere absorbs radiation, imprinting telluric absorption features in spectra acquired with ground-based spectrographs. The impact of this phenomenon depends strongly on the wavelength range and resolution of the spectrograph, the airmass of the observations, the water vapor content, and the weather conditions (e.g. Figueira et al. 2012; Cunha et al. 2014). If not corrected, it can lead to biased and less precise RV estimates. Even shallow telluric lines, or micro-telluric lines, can induce a significant bias, of about 10-20 ${\rm cm}\ {\rm s}^{-1}$ (Cunha et al. 2014), on par with, or larger than, the signal produced by an Earth-like world around a solar-type star. The identification and removal of telluric features from stellar spectra is thus essential for the estimation of accurate and precise RVs.

For this purpose we use a synthetic spectrum of Earth’s transmittance, with a resolution equal to that of the instrument mode, obtained through the Tapas web interface (Bertaux et al. 2014; http://cds-espri.ipsl.fr/tapas/). We start by estimating the continuum level of the transmittance spectrum through a rolling median filter spanning 1000 points (though near the edges we reduce the window to 50 points to minimize numerical artifacts due to the choice of the filter’s boundary conditions). We then flag, as wavelength domains affected by tellurics, those where the transmission is lower than a given threshold - by default 99% of the continuum level.

Figure 2 shows the behaviour of the rolling median filter. In regions with shallower telluric features the continuum estimation is not affected. However, that is no longer the case in regions with a larger presence of deeper features (bottom panel). Despite this, the chosen threshold is still enough to properly identify the telluric features, as seen in the bottom panel of the Figure. Note that this choice of threshold is not able to detect the shallower telluric features, as seen in the upper panels of the Figure. However, a more restrictive threshold would result in the rejection of larger spectral regions across the wavelength coverage of the instrument. We thus attempt to maximize the spectral coverage whilst still removing the deeper telluric features.

We must also take into account the RV component introduced by Earth’s motion around the barycenter of the Solar System (nicknamed the Barycentric Earth Radial Velocity, or BERV). This motion introduces a Doppler shift in the observed spectrum that can be corrected by shifting the reference frame from Earth to an inertial one, the barycenter of the Solar System. This correction is usually incorporated by default in the spectrographs’ official pipelines, as is the case for ESPRESSO, where the wavelength solution is shifted by the corresponding value. Since the telluric lines are fixed on the detector in Earth’s reference frame, their position relative to the stellar lines will change, and in the BERV-corrected spectra the telluric lines will appear shifted by -BERV. To take this relative movement into account, we discard a wavelength domain around each feature corresponding to the maximum BERV ($\sim$ 30 ${\rm km}\ {\rm s}^{-1}$, obtained for stars along the direction of Earth’s orbit).
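A minimal sketch of this telluric-masking logic is given below, assuming a Tapas transmittance spectrum sampled on the instrument wavelength grid. The edge handling is simplified (a single filter with `mode="nearest"` instead of the shrinking 50-point window) and all names are illustrative rather than taken from the S-BART code.

```python
import numpy as np
from scipy.ndimage import median_filter

C_KMS = 299_792.458   # speed of light [km/s]
BERV_MAX = 30.0       # maximum BERV [km/s], Sect. 2.2

def contiguous_regions(flags):
    """Yield (start, stop) index pairs of consecutive True runs."""
    edges = np.flatnonzero(np.diff(flags.astype(int)))
    idx = np.r_[0, edges + 1, flags.size]
    for lo, hi in zip(idx[:-1], idx[1:]):
        if flags[lo]:
            yield lo, hi

def telluric_mask(wave, transmittance, window=1000, threshold=0.99):
    """Flag wavelengths affected by telluric absorption.

    The continuum of the transmittance spectrum is estimated with a rolling
    median; pixels below `threshold` times the continuum are flagged, and
    each flagged region is then enlarged by the maximum BERV shift so that
    the mask holds in the barycentric frame of any observation.
    """
    continuum = median_filter(transmittance, size=window, mode="nearest")
    flagged = transmittance < threshold * continuum
    mask = np.zeros(wave.size, dtype=bool)
    for lo, hi in contiguous_regions(flagged):
        lam_lo = wave[lo] * (1 - BERV_MAX / C_KMS)
        lam_hi = wave[hi - 1] * (1 + BERV_MAX / C_KMS)
        mask |= (wave >= lam_lo) & (wave <= lam_hi)
    return mask
```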
### 2.3 Stellar template

The stellar template is the most important component of our model: it is assumed to be a high signal-to-noise model spectrum that very accurately represents the stellar spectrum, which is itself assumed to be immutable. Any observed spectrum is assumed to differ from this template only as a result of the Doppler shift induced by the stellar RV. This high signal-to-noise template is built by combining the information of multiple observations of the same star.

#### 2.3.1 Building the template

The stellar template is built on an order-by-order basis, so that it can accurately represent the stellar spectra. Its construction starts with the choice of the reference frame for the template, i.e. the wavelength solution henceforth associated with it. For this purpose, we use the BERV-corrected observation with the smallest uncertainty in the RV estimated by the ESO pipeline (through the CCF method). We decided to place our stellar template in a rest frame, i.e. at an RV of zero. To do so, we remove from the template’s wavelengths the contribution of the stellar RV, either estimated beforehand through the CCF approach or taken from a previous iteration of our template matching procedure.

The next step is to remove, from all observations, the contribution of their own stellar RVs. As the wavelengths of the spectra will not be an exact match to those of the template, we have to interpolate them to a common wavelength grid - the one from the template. For this purpose we apply a cubic spline algorithm (see Section 3.3 of Press 1992). Due to the BERV, the stellar spectrum will shift on the CCD, and thus different spectra will have different starting and ending wavelengths in each spectral order (see Fig. 3). To avoid different SNRs within the same order of the template we select, for each order, the wavelengths common to all spectra. Finally, we compute the mean of the fluxes in order to build a high-SNR stellar template. We use the mean in order to keep the count level at a physically meaningful value and avoid possible numerical issues further ahead.

Figure 3: BERV-corrected spectra of different observations at the start (top) and end (bottom) of a spectral order. In the top panel the dashed red line represents the minimum wavelength at which all (of the presented) spectra have data at the start of the order. In the bottom panel it represents the last wavelength at which all spectra have data.

Lastly, we must also consider the presence of telluric features in the data. Even though the wavelength domains affected by the deeper features can be removed with the methodology discussed in Sect. 2.2, micro-tellurics will not be identified by our mask and will, consequently, still be present in the individual observations. As seen in Cunha et al. (2014), they can have a considerable impact on the accuracy and precision of the estimated RVs, in particular when obtained from data acquired with instruments as stable as ESPRESSO. By constructing the template from a large number of spectra obtained at different periods of the year, i.e. with a wide BERV range, their effects in the spectral template can, in principle, be minimized by averaging them out. However, as this condition is not always met, we mitigate their impact by using, in the building of the spectral template, only observations whose associated airmass is smaller than 1.5 (this value was selected as the default, but it can quickly be changed to accommodate the observing conditions of the available observations), as the depth of the telluric features increases with airmass (Figueira et al. 2010). This choice allows us to strike a balance between i) the number of observations that are discarded due to high micro-telluric contamination and ii) the number of observations that can be used in the construction of the template. Furthermore, at higher airmasses the correction of the atmospheric dispersion is not as efficient as it is at lower airmasses (Wehbe et al. 2019). Thus, this selection is an attempt to select a set of homogeneous spectra to be used in the construction of the stellar template.
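The order-by-order construction just described can be sketched as follows. This is a simplified illustration (the analytical uncertainty propagation of Sect. 2.3.2 is omitted) that assumes the first entry of `spectra` is the chosen reference observation; all names are our own.

```python
import numpy as np
from scipy.interpolate import CubicSpline

C_MS = 299_792_458.0  # speed of light [m/s]

def build_template(spectra, rvs):
    """Build the template for one order (Sect. 2.3.1).

    spectra : list of (wave, flux) pairs, BERV-corrected, one per observation;
              spectra[0] is assumed to be the reference observation
    rvs     : previous RV estimate [m/s] for each spectrum (e.g. from the CCF)
    """
    # Shift each wavelength solution to a zero-RV rest frame.
    rest = [(wave / (1 + rv / C_MS), flux)
            for (wave, flux), rv in zip(spectra, rvs)]
    # Keep only the wavelength domain common to all spectra, so that the
    # template has a homogeneous SNR across the order (see Fig. 3).
    lo = max(w[0] for w, _ in rest)
    hi = min(w[-1] for w, _ in rest)
    grid = rest[0][0]
    grid = grid[(grid >= lo) & (grid <= hi)]
    # Cubic-spline interpolation to the common grid, then the mean flux.
    fluxes = [CubicSpline(w, f)(grid) for w, f in rest]
    return grid, np.mean(fluxes, axis=0)
```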
#### 2.3.2 Estimation of uncertainty in the stellar template

The stellar template will be affected by some uncertainty, given that it is built as a mean of $N$ spectra, all affected by flux measurement uncertainties. Within this sub-Section we start by discussing the calculation of the uncertainties associated with the stellar template, followed by a comparison with the uncertainties in both low- and high-SNR observations. Lastly, we touch upon the computational trade-offs that must be made to ensure the performance of the algorithm.

If each spectrum had the same SNR, then we would expect the SNR of the template to be approximately equal to $\sqrt{N}$ times the SNR of each observation. Under this assumption, we would need 100 observations to achieve a mean uncertainty (standard deviation) per flux one order of magnitude smaller than that associated with any single observation. Since many targets will have fewer observations, we decided to propagate the uncertainties associated with the template towards the final RV estimate, as discussed in Sects. 3 and 4. For this purpose, we have to take into account that both the spectral data and the template are interpolated with cubic splines in two different stages: during the creation of the stellar template and in the RV extraction procedure. In Appendix A we describe the analytical uncertainty propagation through the cubic spline interpolation algorithm.

We studied the characteristics of the uncertainty in the template by selecting the available observations of an M4 star ($N_{spectra}$ = 21) from the sample used in Sect. 5.2.3, allowing us to assess the need to account for them during the RV estimation. The chosen observations were made after ESPRESSO’s fiber link upgrade in June 2019 (Pepe et al. 2021) and we selected data from the 100th spectral order (central wavelength of 541 nm). In order to evaluate the impact of the number of spectra used to construct the template we selected two sets of observations: those with an airmass below 1.2 (N = 8) and those with an airmass below 1.5 (N = 13). After creating the stellar templates we align them with each observation and interpolate the template’s flux to the wavelength solution of the observation, also propagating the uncertainties in the template. Then, we compare them against the uncertainties associated with the spectra, as computed by the ESO pipeline. A direct comparison is not possible, as observations with lower flux values also have lower photon noise and, consequently, smaller flux uncertainties. Instead, we compute the SNR ratios of the template and spectra, for each pixel of the order.
Figure 4: Histogram of the SNR ratio, for the 100th order, between the template and each of the 21 individual observations of an M4 star, obtained after the fiber link upgrade of ESPRESSO in mid 2019. In order to compare the impact of the number of observations used to construct the template on its associated uncertainty, one template was built with 8 observations (black curve) and the other with 13 observations (blue curve), selected by the airmass at the start of each observation. We also highlight, by filling the bins with the corresponding colour, the comparison with observations with an SNR of at least 100 in the selected spectral order.

Figure 4 shows that an increase in the number of observations used to construct the template leads to an increase in its SNR, when compared against the individual observations, as one would expect. The comparison with the lower SNR spectra (non-filled bins) reveals that the two stellar templates have uncertainties close to one order of magnitude smaller than the ones found in the observations. Unfortunately, that is not the case for the higher SNR observations, i.e. those with an SNR in the 100th order of at least 100. For a more detailed analysis, Table 2 shows the comparison of each template against three different sets of observations: all observations; only the observations used in the construction of the template; observations whose SNR is at least 100. We find that in all cases the median SNR ratio is larger than $\sqrt{N}$, a difference explained by the fact that the selected observations all have different SNRs. From this analysis we see that the SNR of the template is not one order of magnitude larger than that of the observations, confirming the need to account for those uncertainties.

Table 2: Analysis of the SNR ratio between each of the two templates and three different subsets of the available observations.

| Airmass | N | $\sqrt{N}$ | Observations | SNR ratio (a) |
|---|---|---|---|---|
| $\leq$ 1.2 | 8 | 2.8 | All (b) | 3.7 $\pm$ 2.1 |
| | | | Template (c) | 3.4 $\pm$ 0.5 |
| | | | SNR $\geq$ 100 (d) | 3.3 $\pm$ 0.5 |
| $\leq$ 1.5 | 13 | 3.6 | All (b) | 4.4 $\pm$ 1.9 |
| | | | Template (c) | 4.1 $\pm$ 0.9 |
| | | | SNR $\geq$ 100 (d) | 3.9 $\pm$ 0.6 |

Notes: (a) the values and associated uncertainties are the median and standard deviation of the SNR ratios; (b) comparison against all available observations from the two sets; (c) comparison against only the observations used to construct the corresponding stellar template; (d) comparison against all observations that have an SNR, in the 100th order, higher than 100.

The main problem with our uncertainty propagation procedure, described in Appendix A, lies in its computational efficiency, as it requires the inversion of large matrices. Whilst it is feasible to use it for the calculation of uncertainties during the creation of the stellar template, it is a computational bottleneck in the interpolation of the stellar template for each tentative RV shift that is tested during the RV extraction procedure. In an attempt to mitigate this problem, we evaluated whether we could estimate the template flux uncertainties by interpolation, instead of applying the analytical uncertainty propagation procedure.
This approximation was tested through the interpolation of the two templates previously used in Fig. 4, followed by a comparison of the flux uncertainties obtained through both methods. We found, for the two templates, that the interpolation of flux uncertainties (standard deviations) results in their overestimation by a factor of $1.07\pm 0.05$ across the 21 observations. Even though there is a slight increase in the template uncertainty, we do not deem it to be problematic, especially when also considering the high SNR of the template itself in comparison with the individual observations. With this in mind, during the RV extraction we propagate the uncertainties in the stellar template by interpolating them to the desired wavelength solution.

#### 2.3.3 Removal of outliers in the spectra

Even though the stellar template is generally a good match to the stellar spectrum associated with any given observation, there are some regions where this assumption does not hold (e.g. as seen in the top row of Fig. 5), thus raising the need to remove so-called flux outliers before starting the RV estimation procedure. It is important to note that any point that was discarded in Sect. 2.1 will be ignored during the search for outliers. We start by aligning the stellar template and a given spectrum using the initial guess for the associated RV, either estimated through the CCF method or through a previous application of template matching. Then we adjust both continuum levels by fitting a first-degree polynomial, with slope $m$ and intercept $b$, to the ratio between the spectrum and template. Finally, we compute the logarithm of the ratio between spectrum and template and use it as a metric to flag mismatched regions:

$metric_{\lambda_{i}}=\log\left(\frac{S_{\lambda_{i}}}{p(m,b)_{\lambda_{i}}T_{\lambda_{i}}}\right)$ (1)

where $S_{\lambda_{i}}$ is the flux of the stellar spectrum, $p(m,b)_{\lambda_{i}}$ the first-degree polynomial and $T_{\lambda_{i}}$ the interpolated stellar template, all evaluated at wavelength ${\lambda_{i}}$. We use the logarithm, instead of the ratio, in order to mitigate the larger differences that exist in lower SNR regions closer to the edges of the spectral orders. This metric behaves well over the large dynamic range of fluxes seen inside a single order. Further, as the spectra are noisy and we are mostly interested in finding large flux differences, we allow for a large tolerance in the identification of outliers. We consider as outliers all points whose metric is more than 6$\sigma$ (where $\sigma$ refers to the standard deviation of the metric across the entire order) away from the median metric of the entire order. Using lower thresholds would result in excessive flagging at both edges of the order, where the differences (in absolute value) between spectra and template are larger.

In Fig. 5 we can see an application of the algorithm to two spectral chunks. Starting with the left panel, we find minimal (relative) differences between the spectrum and the normalized template, with the exception of the clear outlier that is flagged by our routine. In the right panel we see that the spectrum is noisier and, consequently, the match between spectrum and template is worse. Nonetheless, our algorithm does not flag any point, as there are no clear outliers.
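Eq. (1) and the 6$\sigma$ rule translate directly into a few lines of code. The sketch below is illustrative only, assuming the spectrum, the polynomial values and the interpolated template are already evaluated on a common wavelength grid.

```python
import numpy as np

def flag_outliers(spectrum, template, poly, n_sigma=6.0):
    """Flag flux outliers in one order using the metric of Eq. (1).

    spectrum, template : fluxes on a common wavelength grid
    poly               : values of the first-degree polynomial p(m, b)
                         that matches the continuum levels
    """
    metric = np.log(spectrum / (poly * template))
    deviation = np.abs(metric - np.median(metric))
    return deviation > n_sigma * np.std(metric)
```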
Figure 5: Outlier removal routine for two regions of the same order, with respect to an observation for which outlier identification achieved convergence after the first iteration. For representation purposes we normalized the flux in the center and at the edge of the order. Top row: comparison between the stellar template (red line) and the spectra (black line); the blue crosses represent the points that were flagged by the method. Bottom row: differences between the template and spectra (black points); the blue line is the median value, whilst the dotted red lines represent a 6$\sigma$ difference from it.

## 3 Classical template matching

In order to benchmark our methodology to estimate radial velocities we first implemented a template matching approach similar to those used to build the HARPS-TERRA (Anglada-Escudé & Butler 2012) and SERVAL (Zechmeister et al. 2018) pipelines. In Fig. 6 we provide a high-level schematic of the RV estimation procedure, whose boxes are described in more detail within this Section.

Figure 6: Schematic of the classical template matching RV estimation procedure, including the computation of the uncertainties in the stellar template. We iterate over orders at the highest level, not observations, in order to optimize the computational efficiency of the method.

Radial velocities are determined individually for each order $i$, through least squares minimization:

$\chi^{2}=\sum_{i=1}^{N_{\rm pixels}}\frac{\left[S_{\lambda_{i}}-p(m,b)_{\lambda_{i}}T_{\lambda_{i}}\right]^{2}}{\sigma_{S_{\lambda_{i}}}^{2}+\sigma_{T_{\lambda_{i}}}^{2}}$ (2)

where $S_{\lambda_{i}}$ is the flux of the stellar spectrum, $\sigma_{S_{\lambda_{i}}}$ its associated (1$\sigma$) uncertainty, $p(m,b)_{\lambda_{i}}$ the first-degree polynomial mentioned in Sect. 2.3.3, $T_{\lambda_{i}}$ the stellar template and $\sigma_{T_{\lambda_{i}}}$ its (1$\sigma$) uncertainty, all evaluated at wavelength $\lambda_{i}$. The $\chi^{2}$ minimization is performed with scipy’s (Virtanen et al. 2020) implementation of the Brent method, inside a window with a default size of 200 ${\rm m}\,{\rm s}^{-1}$ centered on the previous RV estimate. The stellar template is Doppler-shifted by each tentative RV and interpolated, with a cubic spline algorithm, to the wavelength solution of the spectra. Similarly to the creation of the stellar template (Sect. 2.3.2), the uncertainties of the stellar template are also interpolated to the new wavelength solution.

After the minimizer converges, we use the proposed RV value and two adjacent ones, separated by the RV interval $\pm\Delta RV$, to numerically fit a parabola to the curve, similarly to Zechmeister et al. (2018). By default, we assume $\Delta RV$ to be 10 ${\rm cm}\,{\rm s}^{-1}$ for ESPRESSO and 50 ${\rm cm}\,{\rm s}^{-1}$ for HARPS. For further details on this fit we refer to Sect. 10.2 of Press (1992). The fitted parabola is then used to correct the RV estimate and calculate its uncertainty:

$RV_{order}=RV_{min}-\frac{\Delta RV}{2}\frac{\chi^{2}_{m+1}-\chi^{2}_{m-1}}{\chi^{2}_{m+1}+\chi^{2}_{m-1}-2\chi^{2}_{m}},\qquad \sigma_{RV}^{2}=\frac{2\Delta RV^{2}}{\chi^{2}_{m-1}-2\chi^{2}_{m}+\chi^{2}_{m+1}}$ (3)

where $RV_{order}$ is the final RV estimate, $\sigma_{RV}$ is the measurement (1$\sigma$) uncertainty, $RV_{min}$ is the RV value that minimizes Eq. 2, while $m-1$ and $m+1$ identify the two RV values selected at a distance $\Delta RV$ from $RV_{min}$.
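The per-order minimization and parabolic refinement can be sketched as follows, with the ESPRESSO defaults of a 200 ${\rm m}\,{\rm s}^{-1}$ window and $\Delta RV$ = 10 ${\rm cm}\,{\rm s}^{-1}$. This is a simplified illustration (the re-centering of the search window, described next, is omitted) and all names are our own.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

C_MS = 299_792_458.0  # speed of light [m/s]

def chi2(rv, wave, flux, sig_s, t_wave, t_flux, t_sig, poly):
    """Eq. (2): chi-squared between a spectrum and the RV-shifted template."""
    shifted = t_wave * (1 + rv / C_MS)
    t = CubicSpline(shifted, t_flux)(wave)
    sig_t = CubicSpline(shifted, t_sig)(wave)  # interpolated template errors
    return np.sum((flux - poly * t) ** 2 / (sig_s ** 2 + sig_t ** 2))

def rv_for_order(rv0, args, half_window=100.0, drv=0.10):
    """Minimize Eq. (2) with Brent's method inside a window centered on the
    previous RV estimate `rv0` [m/s], then refine the minimum and estimate
    the uncertainty with the parabolic fit of Eq. (3)."""
    res = minimize_scalar(chi2, args=args, method="bounded",
                          bounds=(rv0 - half_window, rv0 + half_window))
    rv_min, c_m = res.x, res.fun
    c_lo = chi2(rv_min - drv, *args)
    c_hi = chi2(rv_min + drv, *args)
    rv = rv_min - 0.5 * drv * (c_hi - c_lo) / (c_hi + c_lo - 2 * c_m)
    sigma = np.sqrt(2 * drv ** 2 / (c_lo - 2 * c_m + c_hi))
    return rv, sigma
```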
If the minimizer proposes a value near the edges of the search window, there is no guarantee that the proposal is the true minimum of the $\chi^{2}$ curve. Thus, whenever the result of the Brent minimization lies within 5$\Delta RV$ of either edge of the search window we define a new window, centered on the edge of the interval, with a size of 30 ${\rm m}\,{\rm s}^{-1}$, and re-start the minimization procedure. As we do not expect such large differences between the CCF RVs and those obtained through a template matching method, we discard the entire order if, once again, we do not find a minimum RV that meets the aforementioned criteria within the new search window.

In Fig. 7 we show the comparison of the CCF-based ESPRESSO pipeline (DRS) RV estimate with that obtained through Equation 3 for one spectral order. There is a very good match between the parabolic fit and the full $\chi^{2}$ curve. Furthermore, the advantages of using the Brent method for the minimization are evident: it quickly achieves convergence, greatly increasing the computational efficiency of the routine (e.g. in the case shown in Fig. 7 we only have to sample Eq. 2 four times before achieving convergence for the spectral order).

Figure 7: The $\chi^{2}$ curve for the 100th spectral order of an ESPRESSO observation of GJ699 (black line), sampled with a step of 10 ${\rm cm}\,{\rm s}^{-1}$ inside the 200 ${\rm m}\,{\rm s}^{-1}$ search window centered on the ESPRESSO DRS estimate. The parabolic fit is shown as the red line. The red circles represent the RV estimates provided by the Brent method before meeting the convergence criteria. The dashed blue line is the final RV, calculated with Eq. 3, and the dotted green line the ESPRESSO DRS estimate for the given observation.

After estimating an RV for each individual order, we combine them through a weighted mean with inverse variance weights (Schmelling 1995) in order to calculate the final RV estimate (${\rm v}$) and uncertainty ($\sigma_{\rm v}$) for the entire observation:

${\rm v}=\frac{\sum_{i=1}^{N}\sigma^{-2}_{{\rm v}_{i}}{\rm v}_{i}}{\sum_{i=1}^{N}\sigma^{-2}_{{\rm v}_{i}}},\qquad \sigma_{\rm v}=\sqrt{\frac{1}{\sum_{i=1}^{N}\sigma^{-2}_{{\rm v}_{i}}}}$ (4)

where ${\rm v}_{i}$ is the RV of order $i$ and $\sigma_{{\rm v}_{i}}$ its associated (1$\sigma$) uncertainty, while $N$ is the total number of orders for which we estimated an RV. The value of $\sigma_{\rm v}$ is estimated ignoring any correlations that might exist between the spectral orders, which are assumed to be independent.
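Eq. (4) is a standard inverse-variance weighted mean; a minimal sketch:

```python
import numpy as np

def combine_orders(rv, sigma):
    """Eq. (4): combine per-order RVs and their (1-sigma) uncertainties."""
    w = 1.0 / np.asarray(sigma) ** 2
    rv_final = np.sum(w * np.asarray(rv)) / np.sum(w)
    sigma_final = np.sqrt(1.0 / np.sum(w))
    return rv_final, sigma_final
```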
As we have seen in this Section, and in Sect. 2.1, some spectral orders may be discarded for some of the observations. In this case, different wavelength domains would be used in the estimation of the final RVs of the different observations. This could introduce additional RV variability within the considered set of observations, due to possible differences in the spectral information content and systematic effects in each spectral order. In order to avoid this additional source of RV variability we retain only the wavelength domain that is common (i.e. not discarded) to all observations. The drawback of this approach is that the inclusion of a single observation can cause the removal of a large number of orders. In such a case there would be a smaller loss of information if the entire observation were discarded instead. We currently have no way of evaluating this trade-off other than manually inspecting the number of orders removed per observation, and then determining how the final RV estimates and their uncertainties change when one or more observations are discarded from the full RV estimation procedure.

## 4 Semi-Bayesian template matching

The current approaches to template matching assume that the Doppler shifts associated with the different spectral orders are independently generated. However, this is clearly not the case for the stellar RV component induced by orbiting bodies, like planets or companion stars. In fact, such shifts are achromatic, i.e. they are independent of the wavelengths at which they are measured. Relative RV estimates, such as those obtained through template matching, are primarily used to detect orbiting bodies and characterize their masses and orbits. Consistency then suggests that one should use a single RV shift to simultaneously describe the differences, over all orders, between a given spectrum and the stellar template. Any effects that may hinder the correct estimation of this single RV shift, like those of instrumental origin or due to stellar activity, should be dealt with explicitly, either through modelling or through the exclusion of the affected spectral data.

Casting RV estimation through template matching into a Bayesian statistical framework allows for a consistent and straightforward characterization of the RV (posterior) probability for any observation, including marginalization with respect to the parameters of the first-degree polynomials that are used to adjust the continuum level of spectra and template. However, within a Bayesian framework all aspects of the model need to be specified prior to the actual data analysis, i.e. the information contained in the data cannot be used twice, both for building the model (prior specification) and for comparison with its predictions (through the likelihood). Unfortunately, the latter takes place in the context of template matching, because the available spectra are used to specify the template (model building) as well as to estimate the RVs at the times of spectra acquisition (data analysis). This is the reason why we call the template matching approach for RV estimation, described schematically in Fig. 8 and in greater detail in this Section, semi-Bayesian. This approach has been implemented in a pipeline capable of processing HARPS and ESPRESSO data, which has been named S-BART: Semi-Bayesian Approach for RVs with Template-matching (publicly available at https://github.com/iastro-pt/sBART). It is important to note that the S-BART pipeline allows for the usage and configuration of all techniques that have been described in this manuscript, including the traditional template matching approach introduced in Sect. 3.

Figure 8: Schematic of the semi-Bayesian approach, considering both an MCMC and a Laplace approximation to characterize the posterior distribution.

### 4.1 The RV model

We apply our RV model independently to each observation. It contains only one parameter of interest, the relative RV with respect to some reference frame, which is the one for which the template RV is zero. We thus wish to characterize the RV posterior probability distribution given each observed spectrum, which is proportional to the product of the RV prior probability distribution by the likelihood of the spectral data. We assume an uninformative prior, taking the form of a uniform probability distribution.
The likelihood of the full spectral data, conditioned on a given RV value, is assumed to equal the product of the likelihoods of the fluxes measured for each pixel, i.e. the flux measurements for all pixels are considered to be independent. In practice, we first calculate the likelihood of the spectral data for each order, and then multiply these to obtain the full likelihood. However, our RV model also contains so-called nuisance parameters that we have to marginalize over. The nuisance parameters are those involved in the template matching procedure described in Sect. 2.3:

* The slope, $m$, and intercept, $b$, of the first-degree polynomial that is used to adjust the continuum level of the template spectrum to that of each spectrum under analysis;

* The flux associated with each pixel in the template, relative to the continuum.

The latter are effectively latent variables, affected by some uncertainty, characterized in Sect. 2.3.2. Each spectral order has its own set of independent nuisance parameters, thus the marginalization procedure can be applied order by order. We assume the joint prior distribution with respect to all model parameters to be separable, i.e. it can be written as the product of prior distributions specifically associated with each parameter. Thus, in practice, the marginalization procedure involves the integration, with respect to the nuisance parameters, of the product of their prior distributions by the likelihood as a function of all parameters. The likelihood of a given observed spectrum, $S$, conditioned on an assumed RV value, is thus given by

$P(S|RV)=\prod\limits_{i=1}^{N_{\rm orders}}\int P(m_{i},b_{i})\prod\limits_{j=1}^{N_{i}}P(T_{\lambda_{i,j}})\,P(S_{\lambda_{i,j}}|{\rm RV},m_{i},b_{i},T_{\lambda_{i,j}})\,{\rm d}m_{i}\,{\rm d}b_{i}\,{\rm d}T_{\lambda_{i,j}}$ (5)

where $N_{\rm orders}$ is the total number of orders in the spectrum that are not discarded as a result of the procedure discussed in Sect. 3, $N_{i}$ is the number of data points in order $i$ (usually smaller than the number of pixels in the order, due to the masking of spectral regions contaminated by telluric lines and the removal of outliers), and $T_{\lambda_{i,j}}$ and $S_{\lambda_{i,j}}$ are the fluxes associated with pixel $\lambda_{i,j}$ in the (interpolated) template and spectrum, respectively. We assume that the prior probabilities, $P(m_{i},b_{i})$ and $P(T_{\lambda_{i,j}})$, are uninformative Gaussians. Given that the last probability is also a Gaussian, with an expected value that is a linear function of ${\lambda_{i,j}}$, i.e. $(b+m\lambda_{i,j})T_{\lambda_{i,j}}$, in the limit of infinite variance for the prior probabilities the integral is equal to a Gaussian and

$\log P(S|RV)=\sum\limits_{i=1}^{N_{\rm orders}}\left(-\frac{1}{2}S_{i}^{T}K_{i}^{-1}S_{i}+\frac{1}{2}S_{i}^{T}C_{i}S_{i}-\frac{1}{2}\log|K_{i}|-\frac{1}{2}\log|A_{i}|-\frac{n_{i}-n_{H}}{2}\log 2\pi\right)$ (6)

where $S_{i}$ is a vector with the flux measurements for all pixels in order $i$, $A_{i}$ is equal to $H_{i}K_{i}^{-1}H_{i}^{T}$, $C_{i}$ is given by $K_{i}^{-1}H_{i}^{T}A_{i}^{-1}H_{i}K_{i}^{-1}$, and $n_{i}$ is the number of data points in order $i$ (for more details see Sect. 2.7 of Rasmussen & Williams 2006). The $2\times n_{i}$ matrix $H_{i}$ contains the values of the $n_{H}=2$ basis functions associated with the linear model $E[S_{\lambda_{i,j}}]=(b+m\lambda_{i,j})T_{\lambda_{i,j}}$ for each data point, i.e. $1$ and $\lambda_{i,j}$, with the associated parameters $bT_{\lambda_{i,j}}$ and $mT_{\lambda_{i,j}}$, respectively. Finally, the variance-covariance matrix $K_{i}$ is diagonal with entries $\sigma_{S_{\lambda_{i,j}}}^{2}+\sigma_{T_{\lambda_{i,j}}}^{2}$, where $\sigma_{S_{\lambda_{i,j}}}$ and $\sigma_{T_{\lambda_{i,j}}}$ are the standard deviations associated with the Gaussian probability distributions that describe the uncertainties in, respectively, the flux measurement and the stellar template construction (including the cubic spline interpolation) processes.
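For a single order, Eq. (6) can be evaluated directly with dense linear algebra. The sketch below is our own illustration (not the S-BART implementation), written in terms of the equivalent basis $\{T,\lambda T\}$ for the parameters $(b,m)$ and assuming a diagonal $K_{i}$ as in the text.

```python
import numpy as np

def log_likelihood_order(S, T, lam, var_s, var_t):
    """Eq. (6) for one order: log-likelihood with the continuum parameters
    (m, b) marginalized out under vague Gaussian priors (Sect. 2.7 of
    Rasmussen & Williams 2006).

    S, T         : spectrum and RV-shifted, interpolated template fluxes
    lam          : pixel wavelengths
    var_s, var_t : flux variances of spectrum and template (diagonal of K)
    """
    n, n_h = S.size, 2
    k_inv = 1.0 / (var_s + var_t)        # K is diagonal
    H = np.vstack([T, lam * T])          # basis of E[S] = (b + m*lam) * T
    A = (H * k_inv) @ H.T                # A = H K^-1 H^T  (2 x 2)
    beta = (H * k_inv) @ S               # H K^-1 S
    # S^T C S = beta^T A^-1 beta, with C = K^-1 H^T A^-1 H K^-1
    quad = -0.5 * np.sum(S ** 2 * k_inv) + 0.5 * beta @ np.linalg.solve(A, beta)
    logdet_k = np.sum(np.log(var_s + var_t))
    logdet_a = np.linalg.slogdet(A)[1]
    return quad - 0.5 * logdet_k - 0.5 * logdet_a \
        - 0.5 * (n - n_h) * np.log(2 * np.pi)
```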
The construction of the RV estimate is made through the RV posterior distribution, further discussed in Sect. 4.2. Its mean value and standard deviation provide estimates of the RV value and uncertainty, respectively. We summarize the posterior with its mean value, as this minimizes the quadratic loss function associated with this estimation. The uncertainty in the RV estimates will thus account for all sources of noise, including noise in the observed spectra and in the stellar template.

### 4.2 Characterization of the RV posterior distribution

We used the emcee package (Foreman-Mackey et al. 2013) to characterize the RV posterior probability distribution associated with each observation through the Markov Chain Monte Carlo (MCMC) methodology. We assessed MCMC convergence through several criteria that must be met simultaneously:

* The chain length must be at least 50 times larger than the autocorrelation time ($\tau$);

* The value of $\tau$ cannot change by more than 1% after each iteration;

* The mean and standard deviation of the chains cannot change by more than 2% after each iteration.

Unfortunately, achieving convergence for a single ESPRESSO observation takes $\sim$10 minutes on a 24-core server with 128 GB of RAM, making the method computationally expensive for stars with a large number of available observations. However, since the posterior distribution for the RV shift is approximately Gaussian (due to the large amount of information in the data), we can use the Laplace approximation to characterize it. By performing a second-order Taylor expansion around the maximum of the posterior distribution, we can approximate it with a Gaussian distribution centered at the posterior’s mode (also known as the MAP - maximum a posteriori) and with variance equal to the inverse of the Hessian of the negative log-posterior evaluated at the mode (e.g. see Sect. 3.4 of Rasmussen & Williams 2006), as shown in Fig. 9. In practice, this approximation transforms the characterization of the posterior into an optimization problem - the minimization of the negative log-likelihood (the two are equivalent, since we assume a uniform prior for the RV). To solve it we apply, once again, scipy’s implementation of the Brent method, reducing the computational time to $\sim 30$ seconds per observation on the same machine.
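The Laplace step itself reduces to a one-dimensional optimization plus a numerical second derivative. Below is a minimal sketch, assuming a `neg_log_post` callable that sums the negated Eq. (6) over all orders for a given RV; the function name and finite-difference step are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def laplace_rv(neg_log_post, bounds, h=0.01):
    """Characterize the RV posterior with the Laplace approximation.

    The mode (MAP) is found with Brent's method; with a uniform RV prior
    this is the minimum of the negative log-likelihood. The variance is the
    inverse of the numerical second derivative (Hessian) of the negative
    log-posterior at the mode. `h` is the finite-difference step in m/s.
    """
    res = minimize_scalar(neg_log_post, method="bounded", bounds=bounds)
    mode = res.x
    hessian = (neg_log_post(mode - h) - 2 * res.fun
               + neg_log_post(mode + h)) / h ** 2
    return mode, np.sqrt(1.0 / hessian)

# Example: rv, sigma = laplace_rv(neg_log_post, bounds=(rv0 - 100, rv0 + 100))
```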
Figure 9: Comparison of the RV posterior distribution derived using the MCMC methodology (black curve) with the result of the Laplace approximation (dashed red line), for one observation of the M4 star considered in Sect. 2.3.2.

In order to compare RV estimates obtained through the MCMC methodology and the Laplace approximation we tested two targets, a K-type star with 27 ESPRESSO18 observations and an M-type star with 21 ESPRESSO19 observations, where ESPRESSO18 and ESPRESSO19 respectively refer to observations obtained before and after the ESPRESSO fiber link upgrade in June 2019 (Pepe et al. 2021). In Fig. 10 we show that the expected values for the RV shift obtained with the MCMC methodology and the Laplace approximation are, for both targets, within the respective uncertainties. The associated standard deviations are also in agreement at the ${\rm cm}\ {\rm s}^{-1}$ level, given that they differ by at most $\sim$ 2 ${\rm cm}\ {\rm s}^{-1}$, with mean and median differences smaller than 0.3 ${\rm cm}\ {\rm s}^{-1}$, well below the expected 10 ${\rm cm}\ {\rm s}^{-1}$ precision of ESPRESSO.

Figure 10: Differences between the RV posterior distribution as characterized through the MCMC methodology and the Laplace approximation, for 27 ESPRESSO18 observations of a K-type star (left column) and 21 ESPRESSO19 observations of an M-type star (right column). Top panel: comparison of the RV expected value, in ${\rm m}\ {\rm s}^{-1}$; we show, on each axis, the uncertainty associated with the corresponding measurement. Bottom panel: difference, in ${\rm cm}\ {\rm s}^{-1}$, of the standard deviation of the RV posterior distribution.

## 5 Results

In this Section we showcase the results of the two template-based methodologies previously described by comparing them against the CCF method and two other template matching methods. For this purpose we use ESPRESSO data, reduced with version 2.2.8 of the official pipeline (https://www.eso.org/sci/software/pipelines/), and HARPS archival data, reduced with version 3.5 of its official pipeline. We will use ‘classical’ to refer to results obtained with the $\chi^{2}$ methodology discussed in Sect. 3, ‘S-BART’ to refer to those from the semi-Bayesian methodology (coupled with the Laplace approximation) of Sect. 4, and ‘DRS’ to those from the CCF of the instrument’s pipeline.

In order to assess the performance of our RV estimation methodologies we start by comparing our results with those from HARPS-TERRA (Anglada-Escudé & Butler 2012) and SERVAL (Zechmeister et al. 2018) using the same 22 HARPS observations of Barnard’s star. After that we focus our analysis on ESPRESSO data. In Sect. 5.2.2, we select one ESPRESSO target and create multiple stellar templates, each from a different number of observations, to evaluate the impact on the RV scatter and median RV uncertainty. In Sect. 5.2.3, we select 33 ESPRESSO GTO targets and compare the scatter and precision of the two template-based methodologies (classical and S-BART) with those from the ESO pipeline (CCF). In Sect. 5.2.4 we evaluate whether our radial velocity uncertainties are consistent with the information present in the data, through the simulation of stellar spectra from one of the available observations. Lastly, in Sect. 5.2.5 we use the same targets to estimate the nightly zero point (NZP) of the instrument with the three methodologies.

### 5.1 Validation with HARPS data

In order to validate our algorithm against other template matching methods we selected 22 HARPS observations of Barnard’s star (GJ699), obtained between 2007-04-04 and 2008-05-02 under program ID 072.C-0488(E). This set of observations was chosen as it is present in the introductory papers of the two pipelines chosen for this purpose (HARPS-TERRA and SERVAL) and the observations are publicly available. In Fig. 11 we present the results obtained with HARPS-TERRA, SERVAL and our two template-matching methodologies. The HARPS-TERRA time series was obtained from Table 6 of Anglada-Escudé & Butler (2012; we used the RVs obtained with the entire spectrum)
and the SERVAL time series was derived by us using the most recent public version of SERVAL (https://github.com/mzechmeister/serval; commit d31a918). For comparison purposes we show all RV estimates after subtracting the mean RV of each method. A visual comparison of the different time series shows that the RV measurements follow the same trends in all cases.

Figure 11: RV time series for 22 observations of Barnard’s star, corrected for the secular acceleration of the star. The black stars and orange crosses are RV estimates obtained with the classical and S-BART methodologies, respectively, each with its own mean RV subtracted (for comparison purposes). The blue triangles and red dots are HARPS-TERRA and SERVAL RV estimates, respectively, also with their own mean RV subtracted.

In Table 3 we compare the different methodologies with respect to the standard deviation of the RV estimates, a measure of the scatter in the time series. We include the RV scatter reported in Zechmeister et al. (2018), as SERVAL-PAPER, for completeness. We find that both our methodologies reach meter-per-second precision on HARPS data, achieving a smaller scatter than the HARPS-TERRA and SERVAL pipelines. The CCF-based HARPS pipeline leads to more scattered RV estimates, as expected given that Barnard’s star belongs to the M spectral class. We refrain from comparing the estimates of the RV uncertainties, as the different template-matching algorithms use different estimators for their calculation.

We find that our results are slightly less scattered than those obtained with other template-matching methods. Despite the small differences, they are consistent with the others, suggesting that our methodologies are working as intended. The same decrease in RV scatter is found for both of our template-based RV time series, suggesting that the different statistical framework is not the cause of the decrease. We believe it is due to differences in the way the stellar template is created and the telluric features are handled. In particular, S-BART attempts to minimize the impact of telluric features in the RV estimation through a very conservative approach. This is achieved by creating a transmittance spectrum assuming the highest measured relative humidity amongst all observations of a given target, and then imposing a cut at 1% transmittance. Lastly, it should be noted that the difference in RV scatter between S-BART and HARPS-TERRA is equal to that between the latter and SERVAL.

Table 3: Time-series RV scatter obtained with different template-based methodologies when applied to Barnard’s star.

| Method | std RV [${\rm m}\ {\rm s}^{-1}$] |
|---|---|
| DRS-HARPS (a) | 1.51 |
| HARPS-TERRA (b) | 1.22 |
| SERVAL-PAPER (b) | 1.30 |
| SERVAL (a) | 1.28 |
| classical (c) | 1.14 |
| S-BART (c) | 1.14 |

Notes: (a) the SERVAL and DRS-HARPS results were obtained with the latest (publicly) available version of SERVAL and with the HARPS pipeline, respectively; (b) the HARPS-TERRA and SERVAL-PAPER results were obtained from Anglada-Escudé & Butler (2012) and Zechmeister et al. (2018), respectively; (c) results obtained with the classical and S-BART methodologies.
### 5.2 Application to ESPRESSO data

In this Section we compare the performance of the CCF method of ESO’s official pipeline with that of the classical and S-BART methodologies, when applied to ESPRESSO data.

#### 5.2.1 Defining the stellar sample

Our analysis of the performance of template matching uses data collected during 2018, 2019 and early 2020 (until March). The selected targets are part of ESPRESSO’s blind RV survey program (Hojjatpanah et al. 2019; Pepe et al. 2021) and all have at least 5 observations that can be used to construct a stellar template. ESPRESSO’s fiber link was upgraded in June 2019 (Pepe et al. 2021), resulting in a change in the instrumental profile. We treat the data collected before and after the upgrade as if it was obtained from different instruments, i.e. we create independent stellar templates for the data obtained before and after the technical intervention. We refer to data obtained before and after the fiber link upgrade as ‘ESPRESSO18’ and ‘ESPRESSO19’, respectively.

In order to assess the performance of our template-based approaches we selected a sample of 33 targets: 16 M-type stars, 13 K-type stars and 4 G-type stars. In total, we used approximately 1000 observations distributed between ESPRESSO18 and ESPRESSO19, as specified in Table 4.

Table 4: Number of observations, of each spectral type (ST), obtained before and after ESPRESSO’s fiber link upgrade.

| ST | Targets | ESPRESSO18 | ESPRESSO19 | Total |
|---|---|---|---|---|
| M | 16 | 176 | 133 | 309 |
| K | 13 | 249 | 158 | 407 |
| G | 4 | 251 | 79 | 330 |

The construction of the stellar template does not include any observation with an airmass greater than 1.5, as discussed in Sect. 2.3.1. The targets in the sample were selected such that they all meet the condition, discussed in Sect. 5.2.2, of having at least 5 observations that can be used in the construction of the stellar template.

#### 5.2.2 The impact of the number of observations in the template

The first step in benchmarking the performance of template-based RV estimation procedures with ESPRESSO data, and in understanding for which targets we can take such an approach, is to evaluate the impact on the RV estimates of the number of observations used to construct the stellar template. For this purpose we selected, from the sample described in Sect. 5.2.1, 24 ESPRESSO18 observations of an M-type star, from which we reserved the first 11 observations to construct stellar templates and used the other 13 to evaluate the performance of the templates. The 11 observations selected for the construction of the template cover a BERV region that starts at 25 ${\rm km}\ {\rm s}^{-1}$, in the first observation, and ends at -19 ${\rm km}\ {\rm s}^{-1}$ in the last one. The stellar templates are created by gradually selecting observations based on their BERV values, after they have been sorted from largest to smallest. Each template is then used to compute RVs for the aforementioned set of 13 observations. We do not use the same data to construct the template and to evaluate the performance of the RV estimation methods, so that templates constructed from a low number of observations are not too similar to the spectra used to construct them.

Figure 12: Evolution of the RV scatter (top panel) and median uncertainty (bottom panel) with respect to 13 ESPRESSO18 observations of an M-type star, as a function of the number of spectra used to construct the template.
We find an improvement in both the scatter and the median uncertainty reported by both our methodologies as the SNR of the template increases (Fig. 12). If we focus only on the RV scatter, we find no meaningful improvement for templates constructed from more than 5 observations. We thus decided to estimate RVs only for targets that have at least 5 observations.

#### 5.2.3 Comparison of the RV scatter and precision

We now compare the RV scatter and uncertainties obtained through the classical and S-BART methodologies. To this end, we apply both of our template-matching algorithms to the roughly 1000 observations that compose our ESPRESSO sample (Sect. 5.2.1).

Similarly to other works, we find (Fig. 13 and Table 5) that the template-based methods, when applied to M-type stars, most often lead to a smaller scatter than the CCF method implemented in the DRS. This decrease is larger within the ESPRESSO19 dataset ($\sim$ 10% smaller) than in the ESPRESSO18 one ($\sim$ 8% smaller). For K-type stars we find a similar decrease across the two datasets, of $\sim$ 4%. Lastly, for G-type stars the scatter is $\sim$ 3% (ESPRESSO18) and $\sim$ 6% (ESPRESSO19) larger than with the DRS (Table 5), a result that should be taken with caution due to the very limited sample size. The very similar RV estimates of the two template-matching methodologies were expected, as both use the same information, i.e. the same spectral regions, and the same model (the template).

Figure 13: Comparison of the results obtained with our template-matching methodologies and with the CCF-based ESPRESSO DRS, for ESPRESSO18 (black) and ESPRESSO19 (red) observations of the selected targets. Top panel: ratio of the rms in the template-matching and DRS RV time-series. The results from the classical and S-BART methods are represented by circles and diamonds, respectively. Bottom panel: median RV uncertainty for each target, as computed by the DRS (crosses), the classical approach (circles) and the S-BART method (diamonds).

Table 5: Mean ratio between the scatter of the RV time-series as derived through the two template-based methodologies and the CCF-based ESPRESSO DRS, separated by spectral type and methodology.

Dataset | Method | M-type | K-type | G-type
---|---|---|---|---
ESPRESSO18 | classical | 0.928 | 0.960 | 1.032
ESPRESSO18 | S-BART | 0.923 | 0.961 | 1.029
ESPRESSO19 | classical | 0.894 | 0.964 | 1.060
ESPRESSO19 | S-BART | 0.893 | 0.964 | 1.057

We find very small differences, below the ${\rm cm}\ {\rm s}^{-1}$ mark, between the median RV uncertainties from the S-BART and classical approaches. Figure 14 shows the histograms of the individual RV uncertainty estimates for the observations, separated by spectral type, for each ESPRESSO dataset, whilst Table 6 summarizes the results. We see that for M-type stars both template-matching implementations yield a median RV uncertainty $\sim$ 13 ${\rm cm}\ {\rm s}^{-1}$ smaller than with the ESPRESSO DRS, corresponding to almost half of the median CCF RV uncertainty. For K- and G-type stars the gain in the median RV precision, in comparison with the CCF, is below 5 ${\rm cm}\ {\rm s}^{-1}$.

Figure 14: Comparison of the RV uncertainties obtained for all ESPRESSO18 (top panel) and ESPRESSO19 (bottom panel) observations used in Figure 13, separated by spectral type. In blue we show the RV uncertainty estimated by the DRS, and in black and red those estimated through the S-BART and classical methodologies, respectively.
Table 6: Comparison of the median RV uncertainties, in ${\rm cm}\ {\rm s}^{-1}$, as obtained through the three methodologies, for the two ESPRESSO datasets (ESPRESSO18 and ESPRESSO19) used in Fig. 13.

Dataset | Method | M-type | K-type | G-type
---|---|---|---|---
ESPRESSO18 | DRS | 26.7 | 10.4 | 13.4
ESPRESSO18 | classical | 14.8 | 8.6 | 10.6
ESPRESSO18 | S-BART | 14.3 | 8.1 | 10.6
ESPRESSO19 | DRS | 27.5 | 9.0 | 12.5
ESPRESSO19 | classical | 14.7 | 7.6 | 10.9
ESPRESSO19 | S-BART | 14.4 | 7.2 | 10.1

We leave for future work an analysis of the impact on the S-BART RV estimates of different levels of stellar activity. Such a complex endeavour lies outside the scope of the current paper, and will require a complete analysis of the RV time-series of each target, allowing for both stellar activity and (an unknown number of) planetary companions. It is also important to note that our current sample mostly contains stars with low levels of stellar activity (Hojjatpanah et al. 2019), meaning that we would either have to select a different stellar sample or be limited by that fact.

#### 5.2.4 Self-consistency of the radial velocity uncertainties

In order to determine whether our template-matching estimates of the radial velocity uncertainties are consistent with the information present in the spectra, we simulated spectra and analysed them with the different methodologies:

1. We start by selecting one reference observation;
2. For each pixel in the reference spectrum, we draw a random value from a Gaussian distribution with mean and standard deviation equal to the flux value and uncertainty of the reference spectrum, respectively;
3. We repeat the second step $N=100$ times to build $N$ ‘simulated’ spectra;
4. We apply the template-matching algorithm, using the stellar template that was built from the original data of the star whose observation was used as reference (a sketch of steps 2-4 is given after Table 7).

We created two datasets from two observations of an M5 star, one from ESPRESSO18 and another from ESPRESSO19, and compared the uncertainty associated with the RV value estimated for the reference spectrum with the RV scatter and median RV uncertainty of the simulated dataset, as shown in Table 7. We find that for the three methodologies there is agreement between the RV uncertainty estimated for the original reference spectrum and the RV scatter and median uncertainty of the simulated datasets, even though for the assumed reference spectra the RV scatter and median uncertainties obtained with the CCF methodology are approximately double those obtained with the S-BART and classical methods.

Table 7: Results, in ${\rm cm}\ {\rm s}^{-1}$, of the application of template matching and CCF to two simulated datasets in which white noise was injected. One dataset was built from an ESPRESSO18 observation, whilst the other was built from ESPRESSO19 data, with both observations being of the same M5 star.

ESPRESSO | Method | $\sigma_{RV}$ reference | std (simulated) | median $\sigma_{RV}$ (simulated)
---|---|---|---|---
18 | classical | 13.0 | 11.8 | 13.0
18 | S-BART | 12.8 | 11.8 | 12.9
18 | CCF | 24.6 | 26.2 | 24.6
19 | classical | 14.9 | 14.0 | 14.9
19 | S-BART | 14.8 | 14.1 | 14.9
19 | CCF | 28.0 | 29.3 | 28.0
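A minimal sketch of steps 2-4 of the simulation procedure in Sect. 5.2.4 (`estimate_rv` and `template` are hypothetical stand-ins for the actual template-matching interfaces; `flux` and `flux_err` are assumed to be the reference spectrum and its uncertainties):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_spectra(flux, flux_err, n_sim=100):
    """Draw every pixel of each simulated spectrum from a Gaussian centred
    on the reference flux, with the reference uncertainty as its width."""
    flux = np.asarray(flux, float)
    flux_err = np.asarray(flux_err, float)
    return rng.normal(loc=flux, scale=flux_err, size=(n_sim, flux.size))

# sims = simulate_spectra(flux, flux_err)
# rvs, rv_errs = zip(*(estimate_rv(s, template) for s in sims))  # step 4
# Compare np.std(rvs, ddof=1) and np.median(rv_errs) with the RV
# uncertainty estimated for the original reference spectrum (Table 7).
```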
On top of this analysis, we also made a comparison with the expected RV precision (Bouchy et al. 2001), as implemented in eniric (Neal & Figueira 2019), revealing that the median S-BART uncertainties for each spectral type are just a few ${\rm cm}\ {\rm s}^{-1}$ above the corresponding photon-noise limit.

#### 5.2.5 Nightly Zero Point (NZP) variation

The last study that we carried out with ESPRESSO data was an analysis of the nightly zero point (NZP) of each RV estimation procedure, following the methodologies implemented in Courcol et al. (2015) and Tal-Or et al. (2019). For our analysis we again used the targets selected in Sect. 5.2.1 but, to enforce a balance in the number of observations between pre- and post-upgrade data, we do not consider G-type stars, as we only have 4 such targets, with the majority of their observations taken before the fiber link upgrade (Table 4). Nonetheless, we still find that the ESPRESSO18 observations represent $\sim 60\%$ of our sample. It is important to note that our analysis uses a limited dataset that neither underwent a careful selection of targets nor had the contributions of stellar activity, planetary signals, and photon noise removed from the derived RVs. Thus, the subsequent results must be taken as an upper bound for the achievable stability of ESPRESSO.

The NZP calculation starts by subtracting, from the time-series of each target, its own error-weighted average, thus centering all time-series around an RV of 0 ${\rm m}\ {\rm s}^{-1}$. If a target has multiple observations in the same night, we replaced them by their median value. We computed the NZP, for all nights in which at least 3 targets were observed, as the weighted average of the RVs, using weights equal to the inverse of the RV variances. The uncertainty in the NZP measurement is taken to be the maximum of the RV uncertainty propagated through the weighted mean and the RV scatter of the night in question. For further details we refer the reader to Appendix A of the original article.

The NZP time-series is shown in Fig. 15, as derived from the RVs obtained with the ESPRESSO DRS as well as with our two template-matching methodologies. First, it is important to note that we have a higher density of targets per night in ESPRESSO18 than we do in ESPRESSO19. When visually comparing the NZPs obtained with the different RV estimation methods we find no significant differences, but an apparently smaller scatter in the ESPRESSO19 data. This can be corroborated by a comparison of the weighted standard deviation of the NZPs (Table 8). We see that in both datasets the template-based results have a slightly lower scatter than those from the DRS, with the classical approach yielding the smallest NZP scatter, particularly with regard to ESPRESSO18 data. A comparison of ESPRESSO18 and ESPRESSO19 data reveals, across all methodologies, a weighted variability about 10 ${\rm cm}\ {\rm s}^{-1}$ smaller in the latter dataset.

Figure 15: Nightly Zero Point (NZP), black dots, for each night in which at least 3 different targets were observed, as derived from the RVs obtained with the following methodologies: ESPRESSO DRS (top); classical (middle); S-BART (bottom). The date of the fiber link upgrade is highlighted with a dashed red line. The zero-centered RV of each target observed in the night is represented with blue crosses.
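For concreteness, the NZP computation described above can be sketched as follows (a simplified implementation under our own conventions; e.g. the uncertainty attached to a nightly median is taken here as the median of the individual uncertainties):

```python
import numpy as np

def nightly_zero_points(rv, rv_err, night, target):
    """Nightly zero points, following the procedure described above (a sketch;
    inputs are 1-D arrays of RVs, their uncertainties, night IDs and target
    IDs). Returns {night: (nzp, nzp_err)}."""
    rv = np.asarray(rv, float)
    rv_err = np.asarray(rv_err, float)
    night = np.asarray(night)
    target = np.asarray(target)
    # 1) centre each target's time-series on its error-weighted average
    centred = rv.copy()
    for t in np.unique(target):
        sel = target == t
        w = 1.0 / rv_err[sel] ** 2
        centred[sel] -= np.sum(w * rv[sel]) / np.sum(w)
    nzp = {}
    for n in np.unique(night):
        in_night = night == n
        vals, errs = [], []
        for t in np.unique(target[in_night]):
            ts = in_night & (target == t)
            # 2) collapse repeated same-night observations to their median
            vals.append(np.median(centred[ts]))
            errs.append(np.median(rv_err[ts]))  # our simplification
        if len(vals) < 3:                       # 3) at least 3 targets per night
            continue
        vals, errs = np.array(vals), np.array(errs)
        w = 1.0 / errs ** 2
        mean = np.sum(w * vals) / np.sum(w)      # weighted nightly average
        err_prop = np.sqrt(1.0 / np.sum(w))      # propagated uncertainty
        scatter = np.std(vals, ddof=1)           # nightly RV scatter
        nzp[n] = (mean, max(err_prop, scatter))  # conservative NZP error
    return nzp
```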
Table 8: Weighted standard deviation, in ${\rm cm}\ {\rm s}^{-1}$, of the NZPs derived for observations obtained before and after the 2019 ESPRESSO fiber link upgrade, using data from M- and K-type stars, with RVs estimated through the three different methodologies.

Method | ESPRESSO18 | ESPRESSO19
---|---|---
ESPRESSO DRS | 76.0 | 59.9
classical | 69.6 | 57.2
S-BART | 71.0 | 57.1

Despite the aforementioned limitations of our analysis, it is still noteworthy that, even under such circumstances, we find an NZP scatter below the meter-per-second mark, both before and after the fiber link upgrade.

## 6 Limitations and possible improvements

The assumption that the stellar spectrum is invariant with time is a clear limitation of our template-matching approaches, which they share with other RV estimation procedures. We know that stellar activity induces spectral line displacements and deformations that change with time. This will induce a time-varying systematic error in the RV estimation, with a magnitude that is difficult to determine and that will depend on the star considered. This is the major obstacle to achieving RV estimates precise at the level of a few tens of ${\rm cm}\ {\rm s}^{-1}$. The solution will probably involve some form of data selection, namely at the spectral line level (Dumusque 2018; Cretignier et al. 2020), given that correctly modelling such complex effects seems a much more daunting task.

Even though the semi-Bayesian template-matching methodology, presented in Sect. 4, improves the RV estimation with respect to the classical template-matching method, it has some shortcomings that we are working on. First, the model for the spectra, effectively the prior with respect to the spectral fluxes, is built from the data that will be analysed, which means that we are computing the likelihood of the data with respect to a model that was built using information from the same data. The most straightforward solution to this problem would be to reserve a set of spectra solely for the purpose of constructing the template, at the cost of losing RV estimates at the times those spectra were acquired. Another way of tackling this problem would involve using a probabilistic model for the assumed time-invariant true spectrum. This could incorporate physically relevant information, as in the CCF method, and has been implemented to some degree in wobble (Bedell et al. 2019) and, using Gaussian processes, in GRACE (Rajpaul et al. 2020).

Further, our semi-Bayesian approach depends on the classical implementation for the selection of the spectral orders to use as data. As discussed in the text, we are currently using those for which the classical template-matching procedure was able to obtain results, i.e. those for which our RV estimation convergence criteria were satisfied. However, it is possible that we could be discarding more (or less) information than we need to. One possible solution to this problem would be the selection of orders through a maximization of the information gain between the (Gaussian) RV posterior and the (uniform) RV prior, using the Kullback-Leibler (KL) divergence (Kullback & Leibler 1951), which in this case is equivalent to minimizing the RV uncertainty (standard deviation). Similarly to what is done in the classical procedure, this approach would have to select the orders for all available observations simultaneously, but now the selection involves the full RV results, not only those calculated at the level of the spectral order. Consequently, for targets with a large number of observations this procedure is laborious and imposes a major computational burden.
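For a Gaussian RV posterior with standard deviation $\sigma$ and a uniform prior of width $\Delta v$ (assumed wide enough to contain essentially all of the posterior mass), the KL divergence has a simple closed form, which makes the stated equivalence explicit:

$D_{\mathrm{KL}}\left(\mathcal{N}(\mu,\sigma^{2})\,\|\,\mathcal{U}(\Delta v)\right)=\ln(\Delta v)-\frac{1}{2}\ln\left(2\pi e\sigma^{2}\right)=\ln\frac{\Delta v}{\sigma\sqrt{2\pi e}},$

so, for a fixed prior, maximizing the information gain amounts to minimizing $\sigma$.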
The approximation of the RV posterior probability distribution by a Gaussian using the Laplace approximation may not be the best procedure in some cases. This method only uses the information around the MAP estimate to build the approximation and, consequently, is unable to account for any skewness in the posterior. A more complex technique, such as variational inference (for a detailed explanation refer to Gunapati et al. 2022; Blei et al. 2017), can use information from the entire posterior to build a more realistic approximation and, consequently, is more robust to skewness in the posterior.

## 7 Conclusions

In this work we revisited the template-matching approach for RV estimation in a semi-Bayesian framework, implemented in a pipeline named S-BART: Semi-Bayesian Approach for RVs with Template-matching. The key points of this approach are:

1. The creation of a high-SNR stellar template from at least 5 observations with an airmass smaller than 1.5;
2. A common RV shift is used to describe the differences between any given spectrum and a spectral template whose uncertainties are accounted for;
3. The RV estimate and its uncertainty are determined as the mean and standard deviation of the RV posterior probability distribution, respectively;
4. Due to the high computational cost of achieving convergence with an MCMC algorithm, we instead approximate the posterior with a Gaussian distribution, using Laplace’s approximation.

We compared the results of this new method with those obtained with the CCF approach and with a classical implementation of the template-matching algorithm, in which independent RV shifts are assumed to describe the differences, within each order, between spectrum and template. In both template-matching implementations the radial velocities are derived through the alignment of the spectra with a high signal-to-noise template, in which the uncertainties of the data used to construct it are considered.

In order to validate and evaluate the performance of our algorithm we applied it to observations from both HARPS and ESPRESSO. First, we compared the RV time-series obtained with our template-matching algorithms with those derived with the HARPS-TERRA and SERVAL pipelines, using 22 HARPS observations of Barnard’s star. Our two methodologies yield a time-series with a scatter $\sim$ 14 ${\rm cm}\ {\rm s}^{-1}$ smaller than the one from SERVAL and $\sim$ 8 ${\rm cm}\ {\rm s}^{-1}$ smaller than the one from HARPS-TERRA, with good agreement between all RV estimates.

Afterwards we used S-BART to estimate RVs and the associated uncertainties for 33 ESPRESSO GTO targets of spectral types M, K and G. The median ratio between the RV rms obtained with our semi-Bayesian methodology and with the CCF approach was 0.92 for M-type stars, 0.96 for K-type and 1.03 for G-type, for observations made before ESPRESSO’s fiber link upgrade. After it, we obtain median ratios of 0.89, 0.96 and 1.06 for M, K and G stars, respectively. The classical methodology yielded results similar to those obtained with the S-BART method. This shows that the two template-matching approaches are able to provide more precise results for M-type stars, as one would expect, and also for K-type stars. Regarding the RV uncertainties obtained with S-BART, we find median values of $\sim$ 14 ${\rm cm}\ {\rm s}^{-1}$, $\sim$ 8 ${\rm cm}\ {\rm s}^{-1}$, and $\sim$ 11 ${\rm cm}\ {\rm s}^{-1}$ for M-, K- and G-type stars, respectively.
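For completeness, the Laplace approximation invoked in item 4 above replaces the RV posterior by a Gaussian centred on the MAP estimate, with variance set by the curvature of the log-posterior at that point (a standard construction, stated here for reference):

$p(v\mid D)\approx\mathcal{N}\left(v_{\mathrm{MAP}},\,\sigma^{2}\right),\qquad\sigma^{2}=\left[-\frac{\mathrm{d}^{2}}{\mathrm{d}v^{2}}\ln p(v\mid D)\bigg|_{v=v_{\mathrm{MAP}}}\right]^{-1}.$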
We leave for future work a more detailed analysis of the signals, Keplerian or due to stellar activity, present in this sample. Lastly, we also computed the nightly zero point (NZP) of the instrument, revealing a weighted NZP scatter of around 0.7 ${\rm m}\ {\rm s}^{-1}$ for data obtained before the fiber link upgrade and 0.6 ${\rm m}\ {\rm s}^{-1}$ after it. Even though this scatter is higher than the expected precision of ESPRESSO, the NZPs were calculated without removing either stellar activity or planetary signals from the data and, consequently, should be taken as an upper limit of the attainable precision.

###### Acknowledgements.

The authors acknowledge the ESPRESSO project team for its effort and dedication in building the ESPRESSO instrument. This work was supported by FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalização by these grants: UID/FIS/04434/2019; UIDB/04434/2020; UIDP/04434/2020; PTDC/FIS-AST/32113/2017 & POCI-01-0145-FEDER-032113; PTDC/FIS-AST/28953/2017 & POCI-01-0145-FEDER-028953; PTDC/FIS-AST/28987/2017 & POCI-01-0145-FEDER-028987. A.M.S acknowledges support from the Fundação para a Ciência e a Tecnologia (FCT) through the Fellowship 2020.05387.BD and POCH/FSE (EC). J.P.F. is supported in the form of a work contract funded by national funds through FCT with reference DL57/2016/CP1364/CT0005. S.G.S acknowledges the support from FCT through the contract nr. CEECIND/00826/2018 and POPH/FSE (EC). J.H.C.M. is supported in the form of a work contract funded by national funds through FCT (DL 57/2016/CP1364/CT0007). FPE and CLO would like to acknowledge the Swiss National Science Foundation (SNSF) for supporting research with ESPRESSO through the SNSF grants nr. 140649, 152721, 166227 and 184618. The ESPRESSO Instrument Project was partially funded through SNSF’s FLARE Programme for large infrastructures. ASM, JIGH, and RR acknowledge financial support from the Spanish Ministry of Science and Innovation (MICINN) project PID2020-117493GB-I00, and from the Government of the Canary Islands project ProID2020010129. JIGH also acknowledges financial support from the Spanish MICINN under the 2013 Ramón y Cajal program RYC-2013-14875. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (project Four Aces; grant agreement No 724427). It has also been carried out in the frame of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation (SNSF). DE acknowledges financial support from the Swiss National Science Foundation for project 200021_200726. H.M.T. and M.R.Z.O acknowledge financial support from the Agencia Estatal de Investigación of the Ministerio de Ciencia, Innovación y Universidades through project PID2019-109522GB-C51. J.L-B. acknowledges financial support received from the ”la Caixa” Foundation (ID 100010434) and from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 847648, with fellowship code LCF/BQ/PI20/11760023. R. A. is a Trottier Postdoctoral Fellow and acknowledges support from the Trottier Family Foundation. This work was supported in part through a grant from FRQNT. This work has been carried out within the framework of the National Centre of Competence in Research PlanetS supported by the Swiss National Science Foundation.
The authors acknowledge the financial support of the SNSF. The INAF authors acknowledge financial support of the Italian Ministry of Education, University, and Research with PRIN 201278X4FL and the ”Progetti Premiali” funding scheme. V.A. acknowledges the support from FCT through the Investigador FCT contract nr. IF/00650/2015/CP1273/CT0001. NJN acknowledges support from the following projects: CERN/FIS-PAR/0037/2019, PTDC/FIS-OUT/29048/2017. Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 072.C-0488(E).

## References

* Anglada-Escudé & Butler (2012) Anglada-Escudé, G. & Butler, R. P. 2012, The Astrophysical Journal Supplement Series, 200, 15
* Astudillo-Defru (2015) Astudillo-Defru, N. 2015, PhD thesis, Université Grenoble Alpes
* Baranne et al. (1996) Baranne, A., Queloz, D., Mayor, M., et al. 1996, Astronomy and Astrophysics Supplement Series, 119, 373
* Bedell et al. (2019) Bedell, M., Hogg, D. W., Foreman-Mackey, D., Montet, B. T., & Luger, R. 2019, The Astronomical Journal, 158, 164
* Bertaux et al. (2014) Bertaux, J. L., Lallement, R., Ferron, S., Boonne, C., & Bodichon, R. 2014, Astronomy & Astrophysics, 564, A46
* Blei et al. (2017) Blei, D. M., Kucukelbir, A., & McAuliffe, J. D. 2017, Journal of the American Statistical Association, 112, 859
* Bouchy et al. (2001) Bouchy, F., Pepe, F., & Queloz, D. 2001, Astronomy & Astrophysics, 374, 733
* Courcol et al. (2015) Courcol, B., Bouchy, F., Pepe, F., et al. 2015, Astronomy & Astrophysics, 581, A38
* Cretignier et al. (2020) Cretignier, M., Dumusque, X., Allart, R., Pepe, F., & Lovis, C. 2020, Astronomy & Astrophysics, 633, A76
* Cunha et al. (2014) Cunha, D., Santos, N. C., Figueira, P., et al. 2014, Astronomy & Astrophysics, 568, A35
* Dumusque (2018) Dumusque, X. 2018, Astronomy & Astrophysics, 620, A47
* Figueira et al. (2012) Figueira, P., Kerber, F., Chacon, A., et al. 2012, Monthly Notices of the Royal Astronomical Society, 420, 2874
* Figueira et al. (2010) Figueira, P., Pepe, F., Lovis, C., & Mayor, M. 2010, Astronomy and Astrophysics, 515, A106
* Flores et al. (2016) Flores, M., González, J. F., Arancibia, M. J., Buccino, A., & Saffe, C. 2016, Astronomy & Astrophysics, 589, A135
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, Publications of the Astronomical Society of the Pacific, 125, 306
* Gardner (2003) Gardner, J. 2003, Journal of Research of the National Institute of Standards and Technology, 108, 69
* Gunapati et al. (2022) Gunapati, G., Jain, A., Srijith, P. K., & Desai, S. 2022, Publications of the Astronomical Society of Australia, 39, e001
* Hojjatpanah et al. (2019) Hojjatpanah, S., Figueira, P., Santos, N. C., et al. 2019, Astronomy & Astrophysics, 629, A80
* Hsu et al. (2019) Hsu, D. C., Ford, E. B., Ragozzine, D., & Ashby, K. 2019, The Astronomical Journal, 158, 109
* Kılıç (2008) Kılıç, E. 2008, Applied Mathematics and Computation, 197, 345
* Kuerster et al. (2003) Kuerster, M., Endl, M., Rouesnel, F., et al. 2003, Astronomy & Astrophysics, 403, 1077
* Kullback & Leibler (1951) Kullback, S. & Leibler, R. A. 1951, The Annals of Mathematical Statistics, 22, 79
* Lafarga et al. (2020) Lafarga, M., Ribas, I., Lovis, C., et al. 2020, Astronomy & Astrophysics, 636, A36
* Marconi et al. (2021) Marconi, A., Abreu, M., Adibekyan, V., et al. 2021, The Messenger, 182, 27
* Mayor et al. (2014) Mayor, M., Lovis, C., & Santos, N. C. 2014, Nature, 513, 328
* Mayor et al. (2011) Mayor, M., Marmier, M., Lovis, C., et al. 2011, arXiv:1109.2497 [astro-ph]
* Mayor & Queloz (1995) Mayor, M. & Queloz, D. 1995, Nature, 378, 355
* Neal & Figueira (2019) Neal, J. & Figueira, P. 2019, Journal of Open Source Software, 4, 1053
* Pepe et al. (2021) Pepe, F., Cristiani, S., Rebolo, R., et al. 2021, Astronomy & Astrophysics, 645, A96
* Pepe et al. (2002) Pepe, F., Mayor, M., Galland, F., et al. 2002, Astronomy & Astrophysics, 388, 632
* Press (1992) Press, W. H., ed. 1992, Numerical Recipes in C: The Art of Scientific Computing, 2nd edn. (Cambridge; New York: Cambridge University Press)
* Rainer et al. (2020) Rainer, M., Borsa, F., & Affer, L. 2020, Experimental Astronomy, 49, 73
* Rajpaul et al. (2020) Rajpaul, V. M., Aigrain, S., & Buchhave, L. A. 2020, Monthly Notices of the Royal Astronomical Society, 492, 3960
* Rasmussen & Williams (2006) Rasmussen, C. E. & Williams, C. K. I. 2006, Gaussian Processes for Machine Learning, Adaptive Computation and Machine Learning (Cambridge, Mass: MIT Press)
* Rauer et al. (2014) Rauer, H., Catala, C., Aerts, C., et al. 2014, Experimental Astronomy, 38, 249
* Robertson et al. (2016) Robertson, P., Bender, C., Mahadevan, S., Roy, A., & Ramsey, L. W. 2016, The Astrophysical Journal, 832, 112
* Rosenthal et al. (2021) Rosenthal, L. J., Fulton, B. J., Hirsch, L. A., et al. 2021, The Astrophysical Journal Supplement Series, 255, 8
* Schmelling (1995) Schmelling, M. 1995, Physica Scripta, 51, 676
* Tal-Or et al. (2019) Tal-Or, L., Trifonov, T., Zucker, S., Mazeh, T., & Zechmeister, M. 2019, Monthly Notices of the Royal Astronomical Society: Letters, 484, L8
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261
* Wehbe et al. (2019) Wehbe, B., Cabral, A., Figueira, P., & Avila, G. 2019, in Fourth International Conference on Applications of Optics and Photonics, ed. M. F. P. Martins Costa (Lisbon, Portugal: SPIE), 65
* Zechmeister et al. (2018) Zechmeister, M., Reiners, A., Amado, P. J., et al. 2018, Astronomy & Astrophysics, 609, A12

## Appendix A Uncertainty propagation in cubic splines

The cubic-spline interpolation, used to calculate a value $y$ at position $x$ in the interval $[x_{i},x_{i+1}]$, can be written in the following form (p. 113 of Press 1992):

$y=Ay_{i}+By_{i+1}+Cy^{\prime\prime}_{i}+Dy^{\prime\prime}_{i+1}$ (7)

where the double prime represents the second derivative with respect to $x$ and

$\begin{split}A&=\frac{x_{i+1}-x}{x_{i+1}-x_{i}},\qquad B=\frac{x-x_{i}}{x_{i+1}-x_{i}},\\ C&=\frac{1}{6}(A^{3}-A)(x_{i+1}-x_{i})^{2},\qquad D=\frac{1}{6}(B^{3}-B)(x_{i+1}-x_{i})^{2}\end{split}$ (8)

The computation of the second derivatives requires choosing proper boundary conditions. As discussed in Sect. 2.3.1 we remove the wavelength regions that are not common to all observations. Thus, as we do not interpolate at the edges, we can use natural boundary conditions, where the second derivative takes a value of zero at the edges of the input data.
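A minimal sketch of the interpolation step in Eqs. (7)-(8), assuming the second derivatives have already been obtained (for a natural spline, $y^{\prime\prime}_{1}=y^{\prime\prime}_{N}=0$):

```python
import numpy as np

def spline_eval(x_nodes, y_nodes, y2, x):
    """Evaluate the cubic spline of Eqs. (7)-(8) at positions x, given the
    node values y_nodes and precomputed second derivatives y2
    (natural spline: y2[0] = y2[-1] = 0)."""
    x_nodes = np.asarray(x_nodes, float)
    y_nodes = np.asarray(y_nodes, float)
    y2 = np.asarray(y2, float)
    i = np.clip(np.searchsorted(x_nodes, x) - 1, 0, len(x_nodes) - 2)
    h = x_nodes[i + 1] - x_nodes[i]
    A = (x_nodes[i + 1] - x) / h            # coefficients of Eq. (8)
    B = (x - x_nodes[i]) / h
    C = (A**3 - A) * h**2 / 6.0
    D = (B**3 - B) * h**2 / 6.0
    # Eq. (7)
    return A * y_nodes[i] + B * y_nodes[i + 1] + C * y2[i] + D * y2[i + 1]
```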
Following the notation of Press (1992) we can write down a general expression for the second derivatives:

$\begin{pmatrix}\frac{y_{3}-y_{2}}{x_{3}-x_{2}}-\frac{y_{2}-y_{1}}{x_{2}-x_{1}}\\ \vdots\\ \frac{y_{i+1}-y_{i}}{x_{i+1}-x_{i}}-\frac{y_{i}-y_{i-1}}{x_{i}-x_{i-1}}\\ \vdots\\ \frac{y_{N}-y_{N-1}}{x_{N}-x_{N-1}}-\frac{y_{N-1}-y_{N-2}}{x_{N-1}-x_{N-2}}\end{pmatrix}=h_{i,j}\begin{pmatrix}y^{\prime\prime}_{2}\\ \vdots\\ y^{\prime\prime}_{i}\\ \vdots\\ y^{\prime\prime}_{N-1}\end{pmatrix},\qquad y^{\prime\prime}_{1}=y^{\prime\prime}_{N}=0$ (9)

where $h$ is a symmetric tridiagonal matrix given by:

$h_{i,j}=\begin{pmatrix}\frac{x_{3}-x_{1}}{3}&\frac{x_{3}-x_{2}}{6}&0&\cdots&0\\ \frac{x_{i}-x_{i-1}}{6}&\frac{x_{i+1}-x_{i-1}}{3}&\frac{x_{i+1}-x_{i}}{6}&&\vdots\\ 0&\ddots&\ddots&\ddots&\\ 0&\cdots&0&\frac{x_{N-1}-x_{N-2}}{6}&\frac{x_{N}-x_{N-2}}{3}\end{pmatrix}$ (10)

An expression for the propagation of uncertainties in cubic splines was derived in Gardner (2003), with the covariance between any two interpolated points, $u(y_{m},y_{n})$, given by:

$u(y_{m},y_{n})=\begin{pmatrix}A_{i}&B_{i}&C_{i}&D_{i}\end{pmatrix}\begin{pmatrix}u(y_{i},y_{j})&u(y_{i},y_{j+1})&u(y_{i},y^{\prime\prime}_{j})&u(y_{i},y^{\prime\prime}_{j+1})\\ u(y_{i+1},y_{j})&u(y_{i+1},y_{j+1})&u(y_{i+1},y^{\prime\prime}_{j})&u(y_{i+1},y^{\prime\prime}_{j+1})\\ u(y^{\prime\prime}_{i},y_{j})&u(y^{\prime\prime}_{i},y_{j+1})&u(y^{\prime\prime}_{i},y^{\prime\prime}_{j})&u(y^{\prime\prime}_{i},y^{\prime\prime}_{j+1})\\ u(y^{\prime\prime}_{i+1},y_{j})&u(y^{\prime\prime}_{i+1},y_{j+1})&u(y^{\prime\prime}_{i+1},y^{\prime\prime}_{j})&u(y^{\prime\prime}_{i+1},y^{\prime\prime}_{j+1})\end{pmatrix}\begin{pmatrix}A_{j}\\ B_{j}\\ C_{j}\\ D_{j}\end{pmatrix}$ (11)

where $y_{m}$ lies in the interval $[x_{i},x_{i+1}]$ and $y_{n}$ in $[x_{j},x_{j+1}]$, with

$\begin{split}u(y^{\prime\prime}_{m},y^{\prime\prime}_{n})&=Q^{T}_{m}U_{y}Q_{n}\\ u(y^{\prime\prime}_{m},y_{n})&=Q^{T}_{m}U_{y}g_{n}\end{split}$ (12)

and where $U_{y}$ is the $N\times N$ covariance matrix of the input data, $g_{n}$ is a column vector of length $N$ with a 1 in the $n$-th row and 0 elsewhere, and $Q_{m}$ is a column vector of sensitivity coefficients of the second derivatives with respect to the input values:

$Q_{m}=\begin{cases}\frac{\partial y^{\prime\prime}_{m}}{\partial y_{k}}&k\in\{2,\dots,N-1\}\\ 0&\text{otherwise}\end{cases}$ (13)

In order to calculate the partial derivatives of the second derivatives we start by selecting a given row from Equation (9) and re-arranging the summation limits:

$\begin{split}y^{\prime\prime}_{i}=&-\sum^{N-1}_{j=2}h_{i,j}^{-1}\left(\frac{1}{x_{j+1}-x_{j}}+\frac{1}{x_{j}-x_{j-1}}\right)y_{j}\\ &+\sum^{N}_{j=3}h_{i,j-1}^{-1}\frac{y_{j}}{x_{j}-x_{j-1}}+\sum^{N-2}_{j=1}h_{i,j+1}^{-1}\frac{y_{j}}{x_{j+1}-x_{j}}\end{split}$ (14)

The partial derivative then follows from Equation (14):

$\frac{\partial y_{i}^{\prime\prime}}{\partial y_{j}}=-h_{i,j}^{-1}\left(\frac{1}{x_{j+1}-x_{j}}+\frac{1}{x_{j}-x_{j-1}}\right)+\frac{h_{i,j-1}^{-1}}{x_{j}-x_{j-1}}+\frac{h_{i,j+1}^{-1}}{x_{j+1}-x_{j}}$ (15)
Special care must be taken due to the summation limits in Equation (14), as not all terms in Equation (15) exist for all indexes. As we are only interested in the variances of the interpolated values, i.e. we will not consider the effect of the covariances, we only have to evaluate Equation (11) for the $m=n$ case.

Lastly, as we are mainly dealing with ESPRESSO data, the computation of the second derivatives through Eq. (9) implies the inversion of a matrix of size $(N-2)\times(N-2)$, with $N=9111$, which poses a large computational burden. However, as we are dealing with a symmetric tridiagonal matrix of the form:

$h=\begin{pmatrix}x_{1}&y_{1}&&\\ z_{1}&x_{2}&\ddots&\\ &\ddots&\ddots&y_{N-1}\\ &&z_{N-1}&x_{N}\end{pmatrix}$ (16)

where $z_{n}=y_{n}$, we can invert it with an explicit formula, using backwards continued fractions (Kılıç 2008):

$h^{-1}_{ij}=\begin{cases}\displaystyle\frac{1}{C^{b}_{i}}+\sum^{N}_{k=i+1}\left(\frac{1}{C^{b}_{k}}\prod^{k-1}_{t=i}\frac{y^{2}_{t}}{(C^{b}_{t})^{2}}\right)&\text{if }i=j\\ \displaystyle(-1)^{i+j}\prod^{i-1}_{t=j}\frac{y_{t}}{C^{b}_{t}}\,h^{-1}_{ii}&\text{otherwise}\end{cases}$ (17)

with $C^{b}_{n}$ given by Eq. 4 of the aforementioned paper:

$C_{n}^{b}=\begin{cases}x_{1}&\text{if }n=1\\ x_{n}-\frac{y_{n-1}z_{n-1}}{C^{b}_{n-1}}&\text{otherwise}\end{cases}$ (18)
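As a sketch of how Eqs. (17)-(18) avoid a dense matrix inversion, the following NumPy implementation (our own, with 0-based indexing) builds $h^{-1}$ directly from the diagonal `x` and off-diagonal `y` of the symmetric tridiagonal matrix:

```python
import numpy as np

def tridiag_inverse(x, y):
    """Explicit inverse of the symmetric tridiagonal matrix of Eq. (16),
    with diagonal x and off-diagonal y (z = y), via the backward
    continued fractions of Eqs. (17)-(18)."""
    n = len(x)
    C = np.empty(n)                       # C^b_n, Eq. (18)
    C[0] = x[0]
    for k in range(1, n):
        C[k] = x[k] - y[k - 1] ** 2 / C[k - 1]
    hinv = np.empty((n, n))
    for i in range(n):                    # diagonal entries, Eq. (17)
        s, prod = 1.0 / C[i], 1.0
        for k in range(i + 1, n):
            prod *= y[k - 1] ** 2 / C[k - 1] ** 2
            s += prod / C[k]
        hinv[i, i] = s
    for i in range(n):                    # off-diagonal entries, Eq. (17)
        for j in range(i):
            prod = 1.0
            for t in range(j, i):
                prod *= y[t] / C[t]
            hinv[i, j] = hinv[j, i] = (-1) ** (i + j) * prod * hinv[i, i]
    return hinv

# For the variances in Eq. (11) only the m = n case is needed, so in practice
# the computation can be restricted to the entries that enter Eq. (15).
```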
# INTERNAL EXACT CONTROLLABILITY FOR THE NAGHDI SHELL

Alexis Rodriguez Carranza, Jose Luis Ponte Bejarano, and Juan Carlos Ponte Bejarano (National University of Trujillo)

(2023)

###### Abstract.

In this work we study the exponential stability of the energy associated with a Naghdi shell model with localized internal dissipation. Using several tools from Riemannian geometry we show the well-posedness of the model, via semigroup theory, and obtain observability inequalities which allow us to prove the exponential decay of the total energy. As a consequence, we use Russell's principle to obtain exact controllability.

###### Key words and phrases: Internal controllability, Naghdi shell

###### Contents

1. 1 Introduction and summary
  1. 1.1 Preliminaries and stationary problem
  2. 1.2 Equation of motion
2. 2 The stabilization result
3. 3 Some comments on escape vector fields
4. 4 Naghdi shell stabilization with internal dissipation
5. 5 Controllability via Stability
6. 6 Conclusions

## 1. Introduction and summary

Let us denote by $S$ a two-dimensional smooth Riemannian manifold with the metric induced from $I\!\!R^{3}$ and inner product denoted by $g$ or simply $<.,.>$. Recall that this means that for each $p\in S$ we have an inner product $<.,.>$ on the tangent space $T_{p}S$, and that this assignment is $C^{\infty}$. We will consider $S$ as the middle surface of a thin shell. Suppose we consider a bounded open region $M$ of $S$ with a smooth boundary $\Gamma=\partial M$. This paper is devoted to considering $M$ as a Naghdi-type thin shell. For this dynamical model we study exponential stabilization of the total energy, provided we assume an internal localized dissipation acting on $M$.

In the literature, most authors prefer to use the classical geometrical approach while working on properties of the solutions of evolving thin shells. In those situations the middle surface is, instead, the image under a smooth map of a two-dimensional connected domain of $I\!\!R^{2}$, and therefore described by just one coordinate patch. Several interesting models for plates or shells with variable coefficients, and other hyperbolic-type systems obtained by using traditional geometry, become very difficult to treat, mainly due to the explicit presence of the Christoffel symbols $\Gamma^{k}_{ij}(p)$. An interesting alternative was given in the work of Peng-Fei Yao [35] and collaborators about twenty years ago. In [35], an intrinsic model of the middle surface of a shallow shell as a two-dimensional Riemannian manifold was used. This approach allows one to obtain satisfactory results using multipliers. The basic idea was initiated by S. Bochner in [8]. In order to verify an identity or a pointwise estimate, it suffices to do so at each point $p$ relative to a coordinate frame field, which gives us more simplifications. The best coordinate system in our case would be the one in which the symbols $\Gamma^{k}_{ij}(p)$ vanish at the given point $p$. As in [8], the best frame will be given by the so-called coordinate system normal at $p$.

This paper is devoted to the study of the exponential stabilization of the total energy associated with a dynamic thin shell equation of Naghdi type in the presence of localized internal dissipation. Using several of Yao's ideas we show the well-posedness of the model, via semigroup theory, and obtain observability inequalities which allow us to prove the exponential decay of the total energy.
As a consequence, we use Russell's principle to obtain exact controllability.

### 1.1. Preliminaries and stationary problem

Let $S$ and $M$ be as in the introduction. In addition we will assume $S$ orientable, and the normal field at each $x\in M$ will be denoted by $N(x)$. The shell, a body in $I\!\!R^{3}$, is defined as

$\mathcal{S}=\{p\in I\!\!R^{3},\quad p=x+zN(x),\quad x\in M,\quad|z|<\frac{h}{2}\}$

where $h>0$ denotes the (small) thickness of the shell and $z\in I\!\!R$. In the Naghdi model, the displacement vector $\xi(p)$ at a point $p\in\mathcal{S}$ can be approximated by

(1) $\xi(p)=\xi_{1}(x)+z\Psi(x),\quad x\in M,$

where $p=x+zN(x)$, $\xi_{1}(x)\in I\!\!R^{3}$ denotes the displacement vector of the middle surface, and $\Psi(x)$ captures the rotations of the normal $N(x)$ at each $x\in M$. Following Naghdi's description [27], we assume that a normal after a deformation may no longer be a normal, but that the distance from a point on the normal to the surface remains invariant. As a consequence, the deformations in the direction of the normal can be neglected. This implies in particular that $\xi_{1}(x)$ and $\Psi(x)$ in (1) can be decomposed as

(2) $\xi_{1}(x)=W_{1}(x)+w_{1}(x)N(x)$

and

(3) $\Psi(x)=V(x)+w_{2}(x)N(x)$

where $W_{1}$, $V\in\chi(M)$, and $w_{1}$, $w_{2}\in C^{\infty}(M)$. Here $\chi(M)$ denotes the set of all vector fields on $M$. We recall that a vector field is a map which associates to each point $p\in M$ a vector $X(p)\in T_{p}(M)$, the tangent plane of $M$ at $p$.

In order to find a model describing deformations of the middle surface we need to analyse the tensor field of variation of the metric using the second and third fundamental forms on the surface $M$. In other words, we consider

(4) $\Upsilon=\frac{1}{2}(\tilde{g}-g)$

where $g$ and $\tilde{g}$ denote the metric induced on the middle surface before and after the deformation, respectively. $\Upsilon$ is called the strain tensor of the middle surface. Given $x\in M$, we choose a coordinate system normal at $x$, say $\{E_{1}(x),E_{2}(x),E_{3}(x)=N(x)\}$, which is a basis of $I\!\!R^{3}$. Now, we calculate $\tilde{g}(E_{i},E_{j})-g(E_{i},E_{j})$ to find, after linearization,

(5) $\Upsilon(\xi)=\frac{1}{2}(DW_{1}+D^{*}W_{1})+w_{1}\Pi$

where $\xi=(W_{1},V,w_{1},w_{2})$, $DW_{1}$ is the covariant differential of $W_{1}$, $D^{*}W_{1}$ is the transpose of $DW_{1}$, and $\Pi$ is the second fundamental form of $M$, which is a $2$-covariant tensor. $\Upsilon(\xi)$ is called the (linearized) change of metric tensor. In a similar way we can deduce the (linearized) change of curvature tensor

(6) $\chi_{0}(\xi)=\frac{1}{2}\{DV+D^{*}V+\Pi(.,DW_{1})+\Pi(DW_{1},.)\}+w_{2}\Pi+w_{1}c$

where $c$ is the third fundamental form on the surface $M$. Also, the tensor which captures the rotations of the normal is given by:

(7) $\varphi_{0}(\xi)=\frac{1}{2}[Dw_{1}+V-i(W_{1})\Pi]$

where $i(W_{1})\Pi$ is the interior product of the tensor field $\Pi$ by the vector field $W_{1}$.

It is convenient to write (5), (6), (7) in a more concise way. For example, consider the change of variable

(8) $W_{2}=V+i(W_{1})\Pi$

for $x\in M$. As above we consider the coordinate system normal at $x$, $\{E_{1}(x),E_{2}(x),E_{3}(x)=N(x)\}$.
Direct calculations give us

$DW_{2}(E_{i},E_{j})=DV(E_{i},E_{j})+D\Pi(W_{1},E_{i},E_{j})+\Pi(E_{i},D_{E_{j}}W_{1})$

because $D_{E_{j}}E_{i}(x)=0$. Thus

$DW_{2}=DV+\Pi(.,DW_{1})+i(W_{1})D\Pi$

for $x\in M$. Substitution into (6) gives

(9) $\chi_{0}(\xi)=\frac{1}{2}(DW_{2}+D^{*}W_{2})+K_{ol}(\xi)$

and substitution of (8) into (7) gives us

(10) $\varphi_{0}(\xi)=\frac{1}{2}Dw_{1}+\varphi_{ol}(\xi)$

where

$K_{ol}(\xi)=-i(W_{1})D\Pi+w_{1}c+w_{2}\Pi$ and $\varphi_{ol}(\xi)=-i(W_{1})\Pi+\frac{W_{2}}{2}$

Assuming the material of the shell is homogeneous and isotropic, we find in the literature (see for instance [15]) the stress-strain relations of the $3$-dimensional shell on the middle surface $M$, and express the energy as an integral over $M\times[\frac{-h}{2},\frac{h}{2}]$. Let $R>0$ be the smallest principal radius of curvature of the undeformed middle surface (see [15], p. 166). As usual, for a thin shell it is assumed that $\frac{h}{R}<<1$ (see [15], p. 18). Using this assumption, the following approximation of the strain energy of the shell is obtained (see [15], p. 253)

(11) $I(\xi)=\alpha h\int_{M}\left\{|\Upsilon(\xi)|^{2}+2\left|\varphi_{0}(\xi)\right|^{2}+w_{2}^{2}+\beta\left(\operatorname{tr}\Upsilon(\xi)+w_{2}\right)^{2}+\gamma\left[|\chi_{0}(\xi)|^{2}+\frac{\left|Dw_{2}\right|^{2}}{2}+\beta\left(\operatorname{tr}\chi_{0}(\xi)\right)^{2}\right]\right\}dM$

where $\alpha=\frac{E}{1+\mu}$, $\beta=\frac{\mu}{1-2\mu}$ and $\gamma=\frac{h^{2}}{12}$. Here, $E$ denotes Young's modulus and $\mu$ is the Poisson ratio $(0<\mu<\frac{1}{2})$.

The above expression (11) for $I(\xi)$ allows us to consider the following symmetric bilinear form $\tilde{B_{0}}$, associated to the strain energy, defined on the space $Z=[H^{1}(M,\Lambda)]^{2}\times[H^{1}(M)]^{2}$:

(12) $\tilde{B_{0}}(\xi,\theta)=\frac{\alpha h}{2}\int_{M}B_{0}(\xi,\theta)dM$

where $\xi=(W_{1},W_{2},w_{1},w_{2})\in Z$, $\theta=(\theta_{1},\theta_{2},u_{1},u_{2})\in Z$ and

(13) $B_{0}(\xi,\theta)=2\langle\Upsilon(\xi),\Upsilon(\theta)\rangle+4\langle\varphi_{0}(\xi),\varphi_{0}(\theta)\rangle+2w_{2}u_{2}+2\beta\left(\operatorname{tr}\Upsilon(\xi)+w_{2}\right)\left(\operatorname{tr}\Upsilon(\theta)+u_{2}\right)+2\gamma\langle\chi_{0}(\xi),\chi_{0}(\theta)\rangle+\gamma\left\langle Dw_{2},Du_{2}\right\rangle+2\gamma\beta\operatorname{tr}(\chi_{0}(\xi))\operatorname{tr}(\chi_{0}(\theta))$

In order to obtain a Green identity we consider the Hodge-Laplace type operator $\Delta_{\beta}$ given by:

(14) $\Delta_{\beta}=-[\delta d+2(1+\beta)d\delta]$

where $\beta=\frac{\mu}{1-2\mu}$, $d$ is the exterior derivative and $\delta$ is its formal adjoint. The operator $\Delta_{\beta}$ takes a $p$-form to another $p$-form. In our case, we will need $\Delta_{\beta}$ acting only on $1$-forms. In [35] (Theorem 5.1) the following result was proved: consider the bilinear form $\tilde{B_{0}}(.,.)$ given by (12).
Then, for any $\xi=(W_{1},W_{2},w_{1},w_{2})\in Z$ and $\theta=(\theta_{1},\theta_{2},u_{1},u_{2})\in Z$, the identity

(15) $\widetilde{B}_{0}(\xi,\theta)=\frac{\alpha h}{2}\left\langle A_{0}\xi,\theta\right\rangle_{L^{2}}+\frac{\alpha h}{2}\int_{\Gamma=\partial M}\partial(A_{0}\xi,\theta)d\Gamma$

holds, where $L^{2}=[L^{2}(M,\Lambda)]^{2}\times[L^{2}(M)]^{2}$,

(16) $A_{0}\xi=-\left(\Delta_{\beta}W_{1}+F_{1}(\xi),\ \gamma\Delta_{\beta}W_{2}+F_{2}(\xi),\ \Delta w_{1}+f_{1}(\xi),\ \gamma\Delta w_{2}+f_{2}(\xi)\right)$

and

$\partial\left(A_{0}\xi,\theta\right)=\left\langle c_{1}(\xi),\theta_{1}\right\rangle+\gamma\left\langle c_{2}(\xi),\theta_{2}\right\rangle+2\left\langle\varphi_{0}(\xi),\eta\right\rangle u_{1}+\gamma\frac{\partial w_{2}}{\partial\eta}u_{2}$

with

(17) $c_{1}(\xi)=2i(\eta)\Upsilon(\xi)+2\beta\left(\operatorname{tr}\Upsilon(\xi)+w_{2}\right)\eta,\qquad c_{2}(\xi)=2\gamma\,i(\eta)\chi_{0}(\xi)+2\beta\operatorname{tr}(\chi_{0}(\xi))\,\eta$

Here $\eta$ denotes the exterior normal vector along the curve $\Gamma=\partial M$, $\Delta$ is the usual Laplace-Beltrami operator on the Riemannian manifold $M$, and $F_{j}(\xi)$ and $f_{j}(\xi)$ are terms of order $\leq 1$, for $j=1$ or $2$.

The above description was given in [35]. In fact, the variable $\xi=(W_{1},W_{2},w_{1},w_{2})$ satisfies the following system

$\begin{cases}W_{1}^{\prime\prime}-\Delta_{\beta}W_{1}+F_{1}(\xi)=0&\text{ on }M\times(0,+\infty)\\ W_{2}^{\prime\prime}-\Delta_{\beta}W_{2}+F_{2}(\xi)=0&\text{ on }M\times(0,+\infty)\\ w_{1}^{\prime\prime}-\Delta w_{1}+f_{1}(\xi)=0&\text{ on }M\times(0,+\infty)\\ w_{2}^{\prime\prime}-\Delta w_{2}+f_{2}(\xi)=0&\text{ on }M\times(0,+\infty)\end{cases}$

with

$\left\{\begin{array}{l}\xi=0\quad\text{ on }\Gamma(M)\times(0,+\infty)\\ \xi(0)=\xi_{0},\ \xi_{t}(0)=\xi_{1}\quad\text{on}\quad M\end{array}\right.$

Here $\Gamma(M)$ denotes the boundary of $M$. Let us consider the following spaces

$H_{\Gamma}^{1}(M)=\left\{u\in H^{1}(M),\ u\equiv 0\text{ on }\Gamma=\Gamma(M)\right\}$

$H_{\Gamma}^{1}(M,\Lambda)=\left\{z\in H^{1}(M,\Lambda),\ z\equiv 0\text{ on }\Gamma=\Gamma(M)\right\}$

and

$X_{\Gamma}=\left[H_{\Gamma}^{1}(M,\Lambda)\right]^{2}\times\left[H_{\Gamma}^{1}(M)\right]^{2}$

Next, we can prove that the symmetric bilinear form $\tilde{B}(\xi,\theta)$ defined in (15) for any $\xi,\theta\in Z$ is coercive, that is, there exists a positive constant $c_{3}$ such that

(18) $\widetilde{B}(\xi,\xi)\geqslant c_{3}\|\xi\|_{X_{\Gamma}}^{2}\text{ for any }\xi\in X_{\Gamma}$

In fact, substitution of (5), (9) and (10) into (13), followed by integration over $M$, gives us

(19) $\tilde{B}_{0}(\xi,\xi)+K_{2}\|\xi\|_{L^{2}}^{2}\geqslant K_{1}\|\xi\|_{H_{\Gamma}^{1}(M)}^{2}$

for some positive constants $K_{1}$ and $K_{2}$. In order to use (19) to obtain (18) we can use the following uniqueness result: let $\xi=\left(W_{1},W_{2},w_{1},w_{2}\right)$ belong to $X_{\Gamma}$ and be such that $\Upsilon(\xi)=0$, $\chi_{0}(\xi)=0$, $\varphi_{0}(\xi)=0$ and $w_{2}=0$; then $\xi=0$ for all $x\in M$. Using this uniqueness result together with the compactness-uniqueness method we can “absorb” the term $K_{2}\|\xi\|_{L^{2}}^{2}$ into the right-hand side of (19) to conclude the validity of (18). The expression (15) for the bilinear form $\widetilde{B}_{0}(\xi,\theta)$ is known.
Using the above discussion we deduce that the variational problem associated to the bilinear form $\widetilde{B}_{0}$ is equivalent to the following boundary value problem: find $\xi=(W_{1},W_{2},w_{1},w_{2})$ such that

(20) $\left\{\begin{array}{l}\frac{\alpha h}{2}A_{0}\xi=\tilde{F}\\ \left.W_{1}\right|_{\Gamma}=\left.W_{2}\right|_{\Gamma}=0,\ \left.w_{1}\right|_{\Gamma}=\left.w_{2}\right|_{\Gamma}=0\end{array}\right.$

for a given $\tilde{F}\in\mathbf{L}^{2}=\left[L^{2}(M,\Lambda)\right]^{2}\times\left[L^{2}(M)\right]^{2}$.

### 1.2. Equation of motion

In this Section we consider the equations of motion of the Naghdi model. We assume that there are no external loads on the shell and that the shell is clamped along $\Gamma=\Gamma(M)$. In this situation $\tilde{\xi}=\tilde{\xi}(x,t)$, where $x\in M$ and $t$ is time. In order to include the kinetic energy in our problem it is convenient to consider the change of variables $t\rightarrow tc_{1}^{-1}$, where $c_{1}^{2}=\frac{2}{\alpha}$ with $\alpha$ as in (11). Also, denoting by

$R=\left[\begin{array}{llll}1&0&0&0\\ 0&\gamma&0&0\\ 0&0&1&0\\ 0&0&0&\gamma\end{array}\right]$

and $\xi=R^{1/2}\cdot\widetilde{\xi}$, we write the equations of evolution of a Naghdi shell for $\xi=\left(W_{1},W_{2},w_{1},w_{2}\right)$ as

(21) $\left\{\begin{array}{l}\xi_{tt}+A\xi=0\text{ on }M\times\mathbb{R}^{+}\\ \xi=0\text{ on }\Gamma(M)\times\mathbb{R}^{+}\\ \xi(x,0)=\xi_{0}(x),\ \xi_{t}(x,0)=\xi_{1}(x)\text{ on }M\end{array}\right.$

where $\mathbf{A}=R^{-1/2}\mathbf{A}_{0}R^{-1/2}$. The bilinear form $J$ associated to the operator $\mathbf{A}$ is given by

(22) $\mathbf{J}(\xi,u)=2\langle\Upsilon(\xi),\Upsilon(u)\rangle+4\gamma\left\langle\boldsymbol{\varphi}_{0}(\xi),\boldsymbol{\varphi}_{0}(u)\right\rangle+2\beta\left[\operatorname{tr}\Upsilon(\xi)+\frac{w_{2}}{\sqrt{\gamma}}\right]\left[\operatorname{tr}\Upsilon(u)+\frac{u_{2}}{\sqrt{\gamma}}\right]+2\beta\operatorname{tr}(\chi_{0}(\xi))\operatorname{tr}(\chi_{0}(u))+2\left\langle\chi_{0}(\xi),\chi_{0}(u)\right\rangle+\left\langle Dw_{2},Du_{2}\right\rangle+\frac{2}{\gamma}w_{2}u_{2}$

for all $\xi=\left(W_{1},W_{2},w_{1},w_{2}\right)$ and $u=\left(U_{1},U_{2},u_{1},u_{2}\right)$ with $\xi,u\in Z$. Also $\Upsilon$, $\chi_{0}$ and $\varphi_{0}$ are as in (5), (6) and (7), respectively. The Green formula corresponding to the operator $A$ and the bilinear form $\mathbf{J}(\xi,u)$ is

(23) $\widetilde{J}(\xi,u)=\langle\mathbf{A}\xi,u\rangle_{Y}+\int_{\Gamma=\partial M}\partial\left(\mathbf{A}\xi,u\right)d\Gamma$

where $\tilde{J}(\xi,u)=\int_{M}\mathbf{J}(\xi,u)dM$.

Now, we consider a localized perturbation of model (21): find $\xi=\left(W_{1},W_{2},w_{1},w_{2}\right)\in Z$ satisfying

(24) $\left\{\begin{array}{l}\xi_{tt}+A\xi+a(x)\,\xi_{t}=0\quad\text{ on }M\times(0,+\infty)\\ \xi=0\text{ on }\Gamma(M)\times(0,+\infty)\\ \xi(x,0)=\xi_{0}(x),\ \xi_{t}(x,0)=\xi_{1}(x)\text{ on }M\end{array}\right.$

where $a(x)$ is a real-valued function defined for all $x\in M$ which has support in a small interior region of $M$. Well-posedness of problem (24) follows using standard tools, for example semigroup theory [29], and we omit the proof here. Before we present a proof of the uniform stabilization of the total energy of model (24) we need some preliminaries.

Let us denote by $T^{2}(M)$ the set of all tensor fields on $M$ of rank $2$.
We define the bilinear form $b(\cdot,\cdot):T^{2}(M)\times T^{2}(M)\mapsto\mathbb{R}$ given by

(25) $b\left(T_{1},T_{2}\right)=\left\langle T_{1},T_{2}\right\rangle+\beta\operatorname{tr}(T_{1})\operatorname{tr}(T_{2})$

for any $T_{1},T_{2}\in T^{2}(M)$. Here, for any $T\in T^{2}(M)$, the trace of $T$ at $x\in M$ is given by $\operatorname{tr}(T)=\sum_{i=1}^{2}T(e_{i},e_{i})$, where $\{e_{1},e_{2}\}$ is an orthonormal basis of $T_{x}M$. For any $W\in H^{1}(M,\Lambda)$ we define

(26) $\widetilde{S}(W)=\frac{1}{2}\left(DW+D^{*}W\right)$

It is known that there exists a positive constant $\lambda$ such that

(27) $2\|\tilde{S}(W)\|_{L^{2}\left(M,T^{2}\right)}=\left\|DW+D^{*}W\right\|_{L^{2}\left(M,T^{2}\right)}\geq\lambda\|W\|_{H^{1}(M,\Lambda)}$

for any $W\in H^{1}(M,\Lambda)$; see Lemma 4.5 in [35]. We claim that there exists $\lambda_{0}>0$ such that

(28) $\lambda_{0}\int_{M}\left[b(\widetilde{S}(W),\tilde{S}(W))+|W|^{2}\right]dM\geqslant\|DW\|_{L^{2}\left(M,\Lambda\right)}^{2}$

holds for any $W\in H_{\Gamma}^{1}(M,\Lambda)$. In fact,

$b(\tilde{S}(W),\tilde{S}(W))=\frac{1}{4}\left|DW+D^{*}W\right|^{2}+\beta\left(\operatorname{tr}\tilde{S}(W)\right)^{2}$

and consequently,

(29) $b(\widetilde{S}(W),\widetilde{S}(W))+|W|^{2}\geq\frac{1}{4}\left|DW+D^{*}W\right|^{2}$

Integrating (29) over $M$ and using (27), we obtain

$\int_{M}\left[b(\tilde{S}(W),\widetilde{S}(W))+|W|^{2}\right]dM\geqslant\frac{1}{4}\left\|DW+D^{*}W\right\|_{L^{2}(M,T^{2})}^{2}\geqslant\frac{\lambda}{4}\|W\|_{H^{1}(M,\Lambda)}^{2}\geqslant\frac{\lambda}{4}\|DW\|_{L^{2}(M,\Lambda)}^{2}$

which proves our claim. Next, we will use the technique of multipliers to obtain appropriate identities and inequalities. Let us assume that, given $V\in\chi(M)$, there exists a function $v(x)\in C^{\infty}(M)$ such that

(30) $DV(x)(X,X)=v(x)|X|^{2}$

for all $X\in T_{x}(M)$, $x\in M$. Given $\xi=\left(W_{1},W_{2},w_{1},w_{2}\right)\in Z$ we consider

(31) $m(\xi)=\left(D_{V}W_{1},D_{V}W_{2},V\left(w_{1}\right),V\left(w_{2}\right)\right)$

We recall that $DV(X,X)=\left\langle D_{X}V,X\right\rangle$ for all $X\in T_{x}(M)$, $x\in M$.
Using our assumption (30), we take the inner product of equation (24) with $m(\xi)$ and integrate over $M$:

(32) $\left\langle\xi_{tt},m(\xi)\right\rangle_{L^{2}(M,\Lambda)}+\langle A\xi,m(\xi)\rangle+\langle a(x)\xi_{t},m(\xi)\rangle=0$

We can use identity (23) to deduce from (32)

(33) $\left\langle\xi_{tt},m(\xi)\right\rangle_{L^{2}(M,\Lambda)}+\tilde{J}(\xi,m(\xi))-\int_{\Gamma=\Gamma(M)}\partial(A\xi,m(\xi))d\Gamma=-\left\langle a(x)\xi_{t},m(\xi)\right\rangle$

Using calculations similar to those given in Lemma 5.2 of [35], we deduce the identity

(34) $2\widetilde{J}(\xi,m(\xi))=\int_{\Gamma}J(\xi,\xi)\langle V,\eta\rangle d\Gamma-2\int_{M}vJ(\xi,\xi)dM+2\int_{M}K(\xi,\xi)dM+l_{0}(\xi)$

where $\eta$ is the outward normal vector field along $\Gamma=\Gamma(M)$ and

(35) $K(\xi,\xi)=2b(\tilde{S}(W_{1}),G\left(V,DW_{1}\right))+2b(\tilde{S}\left(W_{2}\right),G\left(V,DW_{2}\right))+4v\left|\varphi_{0}(\xi)\right|^{2}+v\left|DW_{2}\right|^{2}$

Here $b(\cdot,\cdot)$ is as in (25) and $G$ is the map defined by

(36) $G:\chi(M)\times T^{2}(M)\longmapsto T^{2}(M),\qquad G(W,T)=\frac{1}{2}\left[T(\cdot,\nabla_{\cdot}W)+T^{*}(\cdot,\nabla_{\cdot}W)\right]$

Finally, in (34), $l_{0}(\xi)$ denotes lower-order terms with respect to the energy, in the sense that for any $\varepsilon>0$ there exists $C_{\varepsilon}>0$ such that

(37) $\left|l_{0}(\xi)\right|^{2}\leqslant\varepsilon d(\xi)+C_{\varepsilon}h(\xi)\text{ for all }x\in M$

where $d(\xi)$ is the energy density, $e(t)=\frac{1}{2}\int_{M}d(\xi)dM$, and $h(\xi)$ involves partial derivatives only up to order $1$.

## 2. The stabilization result

Let $\xi$ be the displacement field of the Naghdi shell. The total energy of the model is

(38) $E(t)=\frac{1}{2}\left\|\xi_{t}\right\|_{L^{2}(M)}^{2}+\frac{1}{2}\tilde{J}(\xi,\xi)$

where $\tilde{J}$ is given as in (23). In order to study the stabilization of the solutions of model (24) we will consider the following assumption on $a(x)$.

Assumption 1: $a:M\longmapsto\mathbb{R}^{+}$ is a nonnegative real-valued function with support in a small interior region of $M$.

###### Theorem A.

Consider the solution of problem (24) (with $\Gamma_{0}=\Gamma$) and initial data $\xi_{0}\in Z=\left[H^{1}(M,\Lambda)\right]^{2}\times\left[H^{1}(M)\right]^{2}$, $\xi_{1}\in Y=\left[L^{2}(M,\Lambda)\right]^{2}\times\left[L^{2}(M)\right]^{2}$. Assume that condition (30) holds. Then the identity

(39) $\begin{split}\int_{0}^{T}\int_{\Gamma}\left[2\partial(A\xi,m(\xi))+\left(\left|\xi_{t}\right|^{2}-J(\xi,\xi)\right)\langle V,\eta\rangle\right]d\Gamma dt&=\left.2\left(\xi_{t},m(\xi)\right)_{L^{2}}\right]_{0}^{T}+2\int_{0}^{T}\int_{M}v\left[|\xi_{t}|^{2}-J(\xi,\xi)\right]dMdt\\ &\quad+2\int_{0}^{T}\int_{M}K(\xi,\xi)dMdt-\int_{0}^{T}\left(a(x)\xi_{t},2m(\xi)\right)dt+l_{0}(\xi)\end{split}$

holds. Here $m(\xi)$ is as in (31) and $K$ is given in (35). Furthermore, if we consider any smooth function $p:M\longrightarrow\mathbb{R}$, then the identity

(40) $\int_{0}^{T}\int_{\Gamma}\partial(A\xi,p\xi)d\Gamma dt=\int_{0}^{T}\int_{M}p\left[J(\xi,\xi)-|\xi_{t}|^{2}\right]dMdt-\int_{0}^{T}\left(a(x)\xi_{t},p\xi\right)_{L^{2}}dt+l_{0}(\xi)$

holds.
Proof: As before, we can use (32) and (23) to obtain from equation (24) the identity

(41) $\left\langle\xi_{tt},2m(\xi)\right\rangle_{L^{2}(M,\Lambda)}+\tilde{J}(\xi,2m(\xi))-\int_{\Gamma}\partial(A\xi,2m(\xi))d\Gamma=-\left\langle a(x)\xi_{t},2m(\xi)\right\rangle$

Using (30) and (31), after taking the inner product, integrating over $M$ and using Green's formula, we obtain

(42) $-2\left\langle\xi_{t},m\left(\xi_{t}\right)\right\rangle_{L^{2}}=2\int_{M}v\left|\xi_{t}\right|^{2}dM-\int_{\Gamma=\partial M}\left|\xi_{t}\right|^{2}\langle V,\eta\rangle d\Gamma$

Substitution of (42) into (41) gives us

(43) $\left\langle\xi_{tt},2m(\xi)\right\rangle_{L^{2}}=2\frac{\partial}{\partial t}\left\langle\xi_{t},m(\xi)\right\rangle_{L^{2}}+2\int_{M}v\left|\xi_{t}\right|^{2}dM-\int_{\Gamma=\partial M}\left|\xi_{t}\right|^{2}\langle V,\eta\rangle d\Gamma$

Using Green's formula for the operator $A$ and relation (34) we deduce

(44) $\langle A\xi,2m(\xi)\rangle_{L^{2}}=\int_{\Gamma=\partial M}[\boldsymbol{J}(\xi,\xi)\langle V,\eta\rangle-2\partial(\boldsymbol{A}\xi,m(\xi))]d\Gamma-2\int_{M}v\boldsymbol{J}(\xi,\xi)dM+2\int_{M}K(\xi,\xi)dM+l_{0}(\xi)$

Combining identities (43) and (44), identity (39) follows.

We need some geometric hypotheses in order to obtain the desired stabilization result in the case of localized internal dissipation.

###### Definition B.

Let $V$ be a vector field on $M$, that is, $V\in\chi(M)$. We say that $V$ is an escape vector field for the Naghdi shell if the following conditions are satisfied:

* a) There exists a function $v$ on $M$ such that $DV(X,X)=v(x)|X|^{2}$ for all $X\in T_{x}(M)$, $x\in M$;

* b) Let $\varepsilon(x)$ denote the volume element of the middle surface $M$, and consider

$l(x)=\frac{\langle DV,\varepsilon\rangle}{2}\quad\text{for}\,\,\,x\in M$

The functions $v(x)$ and $l(x)$ are assumed to satisfy the inequality

$2\min_{x\in M}v(x)>\lambda_{0}(1+2\beta)\max_{x\in M}|l(x)|$

where $\lambda_{0}\geqslant 1$ satisfies (28) with $\lambda_{0}^{-1}=\frac{c}{4}$ and $\beta=\frac{\mu}{1-2\mu}$.

## 3. Some comments on escape vector fields

* 1. It is well known that on a $2$-dimensional middle surface $M$ there always exists a vector field $V$ satisfying assumption a). Verifying assumption b) may be the difficult part. It is known that a necessary condition for b) to hold is that there are no closed geodesics inside the middle surface $M$.

* 2. Condition b) says, in a sense, that the function $l(x)$ is related to the symmetry of the covariant differential $DV$. In fact, if $DV$ is symmetric then $l(x)=0$ for all $x\in M$.

* 3. In our case $M$ is an oriented Riemannian manifold with $\dim M=2$. Let $\left\{e_{1},e_{2}\right\}\in T_{x}(M)$ be linearly independent and let $\varepsilon$ be the differential form of degree 2 defined as

$\varepsilon\left(e_{1},e_{2}\right)(x)=\pm\sqrt{\operatorname{det}\left(\left\langle e_{i},e_{j}\right\rangle\right)}=\text{oriented volume of }\{e_{1},e_{2}\}$

for $x\in M$. The oriented volume is affected by the sign $+$ or $-$ depending on whether or not the basis $\{e_{1},e_{2}\}$ belongs to the orientation of $M$. $\varepsilon=\varepsilon(x)$ is called the volume element of $M$. Some texts define the volume element as a $2$-form $\varepsilon$ on $M$ such that $|\varepsilon(e_{1},e_{2})|=1$ for any orthonormal frame field $\{e_{1},e_{2}\}$.
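As a simple flat illustration of conditions a) and b) (a standard computation, anticipating Example 3.1 below): for the radial field in the Euclidean plane,

$V(x)=x-x_{0}\ \text{in}\ I\!\!R^{2}:\qquad D_{X}V=X\ \Longrightarrow\ DV(X,X)=|X|^{2},$

so a) holds with $v\equiv 1$; moreover $DV=g$ is symmetric, hence $l\equiv 0$ and the inequality in b) reduces to $2>0$.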
In chapter 4 of [35] several examples are presented in which the construction of escape vector fields for shallow shells can be assured. Next, we define the notion of an escape region, a piece $\overline{M}$ of the middle surface $M$, which will be convenient in order to obtain the desired stabilization result. We are interested in the case where $\overline{M}$ is not the whole of $M$ and is as small as possible (in some sense). This notion was used by several authors in related work (see [35], [30], and references therein).

###### Definition C.

A region $G\subset\Omega$ is called an escape region for the Naghdi shell if:

1. $1)$ There is a finite number of sub-regions $\left\{\Omega_{i}\right\}_{i=1}^{J}$, with boundaries $\Gamma_{i}$, $J$ a positive natural number, such that $\Omega_{i}\cap\Omega_{j}=\emptyset\quad\mbox{for all}\quad 1\leq i<j\leq J.$

2. $2)$ For each $\Omega_{i}$ there is a vector field $V_{i}$ and a function $v_{i}$ such that $\displaystyle DV_{i}(X,X)=v_{i}(x)|X|^{2}\quad\mbox{on}\quad\Omega_{i},$ $\displaystyle 2\min_{x\in\Omega_{i}}v_{i}(x)>\lambda_{0}(1+2\beta)\max_{x\in\Omega_{i}}\frac{|l_{i}(x)|}{2},$ where $l_{i}(x)=\frac{\left<DV_{i},E\right>}{2}$ for all $1\leq i\leq J$;

3. $3)$ $G\supset\bar{\Omega}\cap N_{\epsilon}\left[\cup_{i=1}^{J}\Gamma_{i0}\cup\left(\Omega\setminus\cup_{i=1}^{J}\Omega_{i}\right)\right]$ where $\epsilon>0$ is small and $\displaystyle N_{\epsilon}(S)=\cup_{x\in S}\left\{y\in\Omega\,/\,d_{g}(y,x)<\epsilon\right\}\quad\mbox{for}\quad S\subset\Omega,$ $\displaystyle\Gamma_{i0}=\left\{x\in\Gamma_{i},\left<V_{i}(x),\nu_{i}(x)\right>>0\right\},$ $\nu_{i}$ being the outward-pointing normal to $\Omega_{i}$.

In general, an escape vector field cannot be defined over the entire middle surface $\Omega$. However, such fields can be defined on small geodesic balls. Then, considering $\Omega=\cup_{n\in\mathbb{N}}B(x_{n},\delta)$ with $x_{n}\in\Omega$ and $\delta>0$ small enough, an escape vector field can be defined in each $B(x_{n},\delta)$. Then $\mu(\Omega)=\lim_{k\rightarrow\infty}\sum_{n=1}^{k}\mu(B(x_{n},\delta))$, where $\mu$ is the two-dimensional Lebesgue measure on the surface $\Omega$. So, given $\epsilon>0$, there is $N\in\mathbb{N}$ big enough that $\sum_{n=N+1}^{\infty}\mu(B(x_{n},\delta))<\epsilon.$ Then, considering $\Omega_{i}=B(x_{i},\delta)$ with $1\leq i\leq N$, we have proved the following.

###### Theorem D.

For any given $\epsilon>0$, an escape region $G\subset\bar{\Omega}$ can be chosen such that $\mu(G)<\epsilon$, where $\mu(G)$ is the two-dimensional Lebesgue measure of $G$.

Now we give some examples.

###### Example 3.1.

In the case of an escape vector field $V$ defined on all of $\Omega$, in Definition C we have $J=1$. By condition $3)$, an escape region is supported in the boundary region $\Gamma_{0}$, where $\Gamma_{0}=\left\{x\in\partial\Omega\,/\,\left<V(x),\nu(x)\right>>0\right\}.$ That escape region was already used by many authors [11], [6], [25]. The escape field considered, in the case of $\mathbb{R}^{n}$, was $V=x-x_{0}$.

###### Example 3.2.

Consider now $\Omega=C=\left\{x=(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}\,/\,x_{1}^{2}+x_{2}^{2}=1,\quad-1\leq x_{3}\leq 1\right\},$ the bounded cylinder in $\mathbb{R}^{3}$. It is known that it is not possible to define an escape vector field over all of $\Omega$. To construct an escape region, let $x_{0}=\left(x_{01},x_{02},x_{03}\right)\in C$ with $x_{03}=0$, and let $L_{0}$ be the generating line containing $x_{0}$.
Let $x_{1}\in C$ be the antipode of $x_{0}$. Since the interior of the cut locus of $x_{1}$ is $C\setminus L_{0}$, there exists an escape vector field defined over $C\setminus L_{0}$. Thus, an escape region for $\Omega$ is supported in a neighborhood of the edge of $\Omega$ and of $L_{0}$.

## 4. Naghdi shell stabilization with internal dissipation

To continue with the resolution of the problem of the exponential decay of the energy, we need the following lemmas.

###### Lemma E.

Let $V\in\chi(\Omega)$ satisfy the first condition of Definition B. Then the tensor field $DV$ can be decomposed as $DV=v(x)g+l(x)E\quad\mbox{for}\quad x\in\Omega.$

###### Proof.

Decomposing $DV$ into its symmetric and antisymmetric parts,

(45) $DV=\frac{1}{2}\left(DV+D^{*}V\right)+\frac{1}{2}\left(DV-D^{*}V\right).$

Given that $\frac{1}{2}\left(DV-D^{*}V\right)$ is an antisymmetric $2$-form and $\Omega$ is $2$-dimensional, there is a function $q$ such that

(46) $\frac{1}{2}\left(DV-D^{*}V\right)=q(x)E\quad\mbox{for}\quad x\in\Omega,$

because the space of antisymmetric $2$-forms over a $2$-dimensional space is $1$-dimensional. Substituting (46) into the expression for $l(x)$, we have

$\displaystyle l(x)=\frac{1}{2}\left<DV,E\right>=\frac{1}{2}\left<2q(x)E+D^{*}V,E\right>=\left<q(x)E,E\right>+\frac{1}{2}\left<D^{*}V,E\right>=2q(x)+\frac{1}{2}\left<D^{*}V,E\right>=2q(x)-\frac{1}{2}\left<DV,E\right>=2q(x)-l(x),$

whence

(47) $l(x)=q(x).$

Substituting (47) into (46), and noting that condition a) of Definition B gives $\frac{1}{2}\left(DV+D^{*}V\right)=v(x)g$, we have the result. ∎

We fix one more piece of notation. Given $W\in\chi(\Omega)$ and $T\in T^{2}(\Omega)$, let $G(W,T)\in T^{2}(\Omega)$ be given by

(48) $G(W,T)=\frac{1}{2}\left[T(.,\nabla_{.}W)+T^{*}(.,\nabla_{.}W)\right].$

Now we prove some necessary lemmas.

###### Lemma F.

There exists a constant $c>0$ such that

(49) $||DW+D^{*}W||_{L^{2}(\Omega,T^{2})}\geq c||W||_{H^{1}(\Omega,\Lambda)}\quad\forall W\in H_{\Gamma_{0}}^{1}(\Omega,\Lambda).$

This is a Korn-type inequality; see [35]. We note that, for $W\in H_{\Gamma_{0}}^{1}(\Omega,\Lambda)$,

$\displaystyle b(S(W),S(W))=\left<S(W),S(W)\right>+\beta\left(\operatorname{Tr}(S(W))\right)^{2}=\frac{1}{4}|DW+D^{*}W|^{2}+\beta\left(\operatorname{Tr}(S(W))\right)^{2}.$

Then $b(S(W),S(W))+|W|^{2}\geq\frac{1}{4}|DW+D^{*}W|^{2}.$ Whence, integrating over $\Omega$ and using (49),

(50) $\displaystyle\int_{\Omega}\left[b(S(W),S(W))+|W|^{2}\right]dx\geq\frac{1}{4}||DW+D^{*}W||^{2}_{L^{2}(\Omega,T^{2})}\geq\frac{c}{4}||W||^{2}_{H^{1}(\Omega,\Lambda)}\geq\frac{c}{4}||DW||^{2}_{L^{2}(\Omega,\Lambda)},$

that is,

(51) $\displaystyle\lambda_{0}\int_{\Omega}\left[b(S(W),S(W))+|W|^{2}\right]dx\geq||DW||^{2}_{L^{2}(\Omega,\Lambda)}$

with $\lambda_{0}=\frac{4}{c}$.

###### Lemma G.

Let $V$ be an escape vector field for the Naghdi shell model and let $G(V,DW)\in T^{2}(\Omega)$ be given by (48), for $W\in H^{1}(\Omega,\Lambda)$. Then

$\sigma_{1}\int_{\Omega}b\left(S(W),S(W)\right)dx\leq\int_{\Omega}b\left(S(W),G(V,DW)\right)dx+Lo(\xi),$

where $\sigma_{1}=\min_{x\in\Omega}v(x)-\lambda_{0}(1+2\beta)\max_{x\in\Omega}\frac{|l(x)|}{2}$.

###### Proof.
Recall that, for $T_{1},T_{2}\in T^{2}(\Omega)$, we have $b(T_{1},T_{2})=\left<T_{1},T_{2}\right>+\beta\operatorname{Tr}T_{1}\operatorname{Tr}T_{2}.$ Then

(52) $b(S(W),G(V,DW))=\left<S(W),G(V,DW)\right>+\beta\operatorname{Tr}(S(W))\operatorname{Tr}(G(V,DW)).$

Let us now estimate each term of (52). For this, given $x\in\Omega$, let $\{e_{1},e_{2}\}$ be an orthonormal basis of $T_{x}\Omega$ such that

(53) $DW(e_{1},e_{2})+D^{*}W(e_{1},e_{2})=0\quad\mbox{at}\quad x.$

This is possible since the order-$2$ tensor $DW+D^{*}W$ is symmetric. It follows that

(54) $S(W)(e_{1},e_{2})=0.$

Writing $W_{ij}=DW(e_{i},e_{j})$, we have

(55) $\displaystyle\operatorname{Tr}(S(W))=S(W)(e_{1},e_{1})+S(W)(e_{2},e_{2})=DW(e_{1},e_{1})+DW(e_{2},e_{2})=W_{11}+W_{22},$

(56) $\displaystyle\operatorname{Tr}G(V,DW)=G(V,DW)(e_{1},e_{1})+G(V,DW)(e_{2},e_{2})=DW(e_{1},\nabla_{e_{1}}V)+DW(e_{2},\nabla_{e_{2}}V).$

Now, using Lemma E, we have

(57) $\displaystyle\nabla_{e_{1}}V=\left<\nabla_{e_{1}}V,e_{1}\right>e_{1}+\left<\nabla_{e_{1}}V,e_{2}\right>e_{2}=DV(e_{1},e_{1})e_{1}+DV(e_{2},e_{1})e_{2}=v(x)e_{1}-l(x)e_{2},$

(58) $\displaystyle\nabla_{e_{2}}V=\left<\nabla_{e_{2}}V,e_{1}\right>e_{1}+\left<\nabla_{e_{2}}V,e_{2}\right>e_{2}=DV(e_{1},e_{2})e_{1}+DV(e_{2},e_{2})e_{2}=l(x)e_{1}+v(x)e_{2}.$

Substituting (57) and (58) into (56), we have

$\displaystyle\operatorname{Tr}G(V,DW)=DW(e_{1},v(x)e_{1}-l(x)e_{2})+DW(e_{2},l(x)e_{1}+v(x)e_{2})=v(x)DW(e_{1},e_{1})-l(x)DW(e_{1},e_{2})+l(x)DW(e_{2},e_{1})+v(x)DW(e_{2},e_{2}),$

which, by (53), gives

(59) $\displaystyle\operatorname{Tr}G(V,DW)=v(x)(W_{11}+W_{22})+2l(x)W_{21}=v(x)\operatorname{Tr}(DW)+2l(x)W_{21}.$

Substituting (54), (55) and (59) into (52), we obtain:

$\displaystyle b(S(W),G(V,DW))=\left<S(W),G(V,DW)\right>+\beta\operatorname{Tr}DW\left(v(x)\operatorname{Tr}DW+2l(x)W_{21}\right)$
$\displaystyle=v(x)\left<S(W),S(W)\right>+\beta v(x)\left(\operatorname{Tr}DW\right)^{2}+2\beta l(x)\left(W_{11}+W_{22}\right)W_{21}$
$\displaystyle=v(x)b(S(W),S(W))+2\beta l(x)\left(W_{11}+W_{22}\right)W_{21}$
$\displaystyle\geq\min_{x\in\Omega}v(x)\,b(S(W),S(W))-(1+2\beta)\max_{x\in\Omega}\frac{|l(x)|}{2}|DW|^{2}+Lo(\xi).$

Integrating this last inequality over $\Omega$, we have:

(60) $\int_{\Omega}b(S(W),G(V,DW))dx\geq\min_{x\in\Omega}v(x)\int_{\Omega}b(S(W),S(W))dx-(1+2\beta)\max_{x\in\Omega}\frac{|l(x)|}{2}\int_{\Omega}|DW|^{2}dx+Lo(\xi).$

Finally, using (51) in (60), we have:

$\displaystyle\int_{\Omega}b(S(W),G(V,DW))dx\geq\min_{x\in\Omega}v(x)\int_{\Omega}b(S(W),S(W))dx-\lambda_{0}(1+2\beta)\max_{x\in\Omega}\frac{|l(x)|}{2}\int_{\Omega}b(S(W),S(W))dx+Lo(\xi)=\sigma_{1}\int_{\Omega}b(S(W),S(W))dx+Lo(\xi).$ ∎

###### Lemma H.
$2\mathsf{B}(\xi,m(\xi))=\int_{\Gamma}\mathbf{B}(\xi,\xi)\left<V,\nu\right>d\Gamma-2\int_{\Omega}v\mathbf{B}(\xi,\xi)dx+2\int_{\Omega}e(\xi,\xi)dx+Lo(\xi)$

where $\mathsf{B}$ is the bilinear form given in (15) and

$e(\xi,\xi)=2b(S(W_{1}),G(V,DW_{1}))+2b(S(W_{2}),G(V,DW_{2}))+4v|\varphi(\xi)|^{2}+v|Dw_{2}|^{2}.$

###### Proof.

By formula (25), we have to estimate $\Upsilon(m(\xi))$, $\chi(m(\xi))$, $\varphi(m(\xi))$ and $\left<Dw_{2},D(V(w_{2}))\right>$. We start with the first term,

(61) $\Upsilon(m(\xi))=\frac{1}{2}\left[D(\nabla_{V}W_{1})+D^{*}(\nabla_{V}W_{1})\right]+V(w_{1})\Pi.$

We will use the Bochner technique. Let $x\in\Omega$ and let $\left\{E_{i}\right\}_{i=1}^{2}$ be a normal frame at $x$; we have:

$\displaystyle D(\nabla_{V}W_{1})(E_{i},E_{j})=E_{j}\left(\left<\nabla_{V}W_{1},E_{i}\right>\right)=E_{j}\left(DW_{1}(E_{i},V)\right)=D^{2}W_{1}\left(E_{i},V,E_{j}\right)+DW_{1}(E_{i},\nabla_{E_{j}}V)=D^{2}W_{1}\left(E_{i},E_{j},V\right)+\left<R_{VE_{j}}W_{1},E_{i}\right>+DW_{1}\left(E_{i},\nabla_{E_{j}}V\right)=\nabla_{V}DW_{1}(E_{i},E_{j})+R(V,E_{j},W_{1},E_{i})+DW_{1}(E_{i},\nabla_{E_{j}}V).$

Whence,

(62) $D(\nabla_{V}W_{1})=\nabla_{V}DW_{1}+R(V,.,W_{1},.)+DW_{1}(.,\nabla_{.}V).$

Similarly,

(63) $D^{*}(\nabla_{V}W_{1})=\nabla_{V}D^{*}W_{1}+R(V,.,W_{1},.)+DW_{1}(.,\nabla_{.}V),$

and

$\displaystyle V(w_{1}\Pi)=V(w_{1})\Pi+w_{1}\nabla_{V}\Pi,$

so

(64) $V(w_{1})\Pi=V(w_{1}\Pi)-w_{1}\nabla_{V}\Pi.$

Substituting (64), (63) and (62) into (61), we have

(65) $\displaystyle\Upsilon(m(\xi))=\frac{1}{2}\left\{\nabla_{V}\left(DW_{1}+D^{*}W_{1}\right)+DW_{1}(.,\nabla_{.}V)+DW_{1}(\nabla_{.}V,.)\right\}+R(V,.,W_{1},.)+V(w_{1}\Pi)-w_{1}\nabla_{V}\Pi=\nabla_{V}\Upsilon(\xi)+G(V,DW_{1})+Lo(\xi),$

where $Lo(\xi)=R(V,.,W_{1},.)-w_{1}\nabla_{V}\Pi$.
Continuing with the following terms,

(66) $\chi(m(\xi))=\frac{1}{2}\left(D(\nabla_{V}W_{2})+D^{*}(\nabla_{V}W_{2})\right)+V(w_{2})\Pi-\sqrt{\gamma}\left(i(\nabla_{V}W_{1})D\Pi-V(w_{1})c\right).$

Now estimating the terms of (66),

$\displaystyle\nabla_{V}\left(i(W_{1})D\Pi\right)(E_{i},E_{j})=D\left(i(W_{1})D\Pi\right)(E_{i},E_{j},V)=V\left(i(W_{1})D\Pi(E_{i},E_{j})\right)=V\left(D\Pi(W_{1},E_{i},E_{j})\right)=D(D\Pi)(W_{1},E_{i},E_{j},V)+D\Pi(\nabla_{V}W_{1},E_{i},E_{j})=i(W_{1})\nabla_{V}D\Pi(E_{i},E_{j})+i(\nabla_{V}W_{1})D\Pi(E_{i},E_{j}).$

Therefore,

(67) $i(\nabla_{V}W_{1})D\Pi=\nabla_{V}\left(i(W_{1})D\Pi\right)-i(W_{1})\nabla_{V}D\Pi.$

Substituting (62), (63), (64) and (67) into (66), we have

(68) $\displaystyle\chi(m(\xi))=\frac{1}{2}\left\{\nabla_{V}\left(DW_{2}+D^{*}W_{2}\right)+DW_{2}(.,\nabla_{.}V)+DW_{2}(\nabla_{.}V,.)\right\}+R(V,.,W_{2},.)+V(w_{2}\Pi)-w_{2}\nabla_{V}\Pi-\sqrt{\gamma}\nabla_{V}(i(W_{1})D\Pi)+\sqrt{\gamma}i(W_{1})\nabla_{V}D\Pi+\sqrt{\gamma}V(w_{1}c)-\sqrt{\gamma}w_{1}\nabla_{V}c=\nabla_{V}\chi(\xi)+G(V,DW_{2})+Lo(\xi).$

Continuing with $\varphi(\xi)$,

(69) $\varphi(m(\xi))=\frac{1}{2}D(V(w_{1}))-i(\nabla_{V}W_{1})\Pi+\frac{1}{\sqrt{\gamma}}\nabla_{V}W_{2}.$

Estimating the terms of (69),

$\displaystyle\left<D(V(w_{1})),E_{i}\right>=E_{i}(V(w_{1}))=E_{i}(\left<Dw_{1},V\right>)=\left<\nabla_{E_{i}}Dw_{1},V\right>+\left<Dw_{1},\nabla_{E_{i}}V\right>=\left<\nabla_{V}Dw_{1},E_{i}\right>+Dw_{1}\left(\nabla_{E_{i}}V\right).$

Therefore,

(70) $D(V(w_{1}))=\nabla_{V}Dw_{1}+Dw_{1}\left(\nabla_{.}V\right).$

Continuing,

$\displaystyle\left<\nabla_{V}\left(i(W_{1})\Pi\right),E_{i}\right>=D(i(W_{1})\Pi)(E_{i},V)=V(i(W_{1})\Pi(E_{i}))=V(\Pi(W_{1},E_{i}))=D\Pi(W_{1},E_{i},V)+\Pi(\nabla_{V}W_{1},E_{i})=\nabla_{V}\Pi(W_{1},E_{i})+\left<i(\nabla_{V}W_{1})\Pi,E_{i}\right>=\left<i(W_{1})\nabla_{V}\Pi,E_{i}\right>+\left<i(\nabla_{V}W_{1})\Pi,E_{i}\right>.$

Therefore,

(71) $i(\nabla_{V}W_{1})\Pi=\nabla_{V}\left(i(W_{1})\Pi\right)-i(W_{1})\nabla_{V}\Pi.$

Substituting (70) and (71) into (69), we have

(72) $\displaystyle\varphi(m(\xi))=\frac{1}{2}\left(\nabla_{V}Dw_{1}+Dw_{1}\left(\nabla_{.}V\right)\right)-\nabla_{V}\left(i(W_{1})\Pi\right)+i(W_{1})\nabla_{V}\Pi+\frac{1}{\sqrt{\gamma}}\nabla_{V}W_{2}=\nabla_{V}\varphi(\xi)+\varphi(\xi)\left(\nabla_{.}V\right)+Lo(\xi).$

Now, writing equation (16) with $\eta=m(\xi)$, we have

(73) $\displaystyle\mathbf{B}(\xi,m(\xi))=2\left<\Upsilon(\xi),\Upsilon(m(\xi))\right>+4\left<\varphi(\xi),\varphi(m(\xi))\right>+2\beta\left(\operatorname{Tr}\Upsilon(\xi)+\frac{1}{\sqrt{\gamma}}w_{2}\right)\left(\operatorname{Tr}\Upsilon(m(\xi))+\frac{1}{\sqrt{\gamma}}V(w_{2})\right)+2\left<\chi(\xi),\chi(m(\xi))\right>+2\beta\operatorname{Tr}(\chi(\xi))\operatorname{Tr}(\chi(m(\xi)))+\left<Dw_{2},D(V(w_{2}))\right>+\frac{1}{\gamma}w_{2}V(w_{2}).$

Using (65), (68) and (72), let us estimate each term of (73).
$\displaystyle 2\left<\varphi(\xi),\varphi(m(\xi))\right>=2\left<\varphi(\xi),\nabla_{V}\varphi(\xi)+\varphi(\xi)\left(\nabla_{.}V\right)+Lo(\xi)\right>=2\left<\varphi(\xi),\nabla_{V}\varphi(\xi)\right>+2\left<\varphi(\xi),\varphi(\xi)\left(\nabla_{.}V\right)\right>+Lo(\xi)=V(|\varphi(\xi)|^{2})+2\left<\varphi(\xi),\nabla_{\varphi(\xi)}V\right>+Lo(\xi)=V(|\varphi(\xi)|^{2})+2DV(\varphi(\xi),\varphi(\xi))+Lo(\xi).$

Whence, using Definition B, we have

(74) $2\left<\varphi(\xi),\varphi(m(\xi))\right>=V(|\varphi(\xi)|^{2})+2v|\varphi(\xi)|^{2}+Lo(\xi).$

Continuing with the terms of (73),

(75) $\displaystyle 2\left<Dw_{2},D(V(w_{2}))\right>=2\left<Dw_{2},\nabla_{V}Dw_{2}+Dw_{2}\left(\nabla_{.}V\right)\right>=2\left<Dw_{2},\nabla_{V}Dw_{2}\right>+2\left<Dw_{2},\nabla_{Dw_{2}}V\right>=V(|Dw_{2}|^{2})+2v|Dw_{2}|^{2}$

and

(76) $\displaystyle 2\left<\Upsilon(\xi),\Upsilon(m(\xi))\right>=2\left<\Upsilon(\xi),\nabla_{V}\Upsilon(\xi)+G(V,DW_{1})+Lo(\xi)\right>=V(|\Upsilon(\xi)|^{2})+2\left<\Upsilon(\xi),G(V,DW_{1})\right>+Lo(\xi).$

Continuing with the rest of the terms,

(77) $\displaystyle 2\beta\left(\operatorname{Tr}\Upsilon(\xi)+\frac{1}{\sqrt{\gamma}}w_{2}\right)\left(\operatorname{Tr}\Upsilon(m(\xi))+\frac{1}{\sqrt{\gamma}}V(w_{2})\right)=2\beta\operatorname{Tr}\Upsilon(\xi)\operatorname{Tr}\Upsilon(m(\xi))+\frac{2\beta}{\sqrt{\gamma}}\operatorname{Tr}\Upsilon(\xi)V(w_{2})+\frac{2\beta}{\sqrt{\gamma}}w_{2}\operatorname{Tr}\Upsilon(m(\xi))+\frac{2\beta}{\gamma}w_{2}V(w_{2})$
$\displaystyle=2\beta\operatorname{Tr}\Upsilon(\xi)\left(\operatorname{Tr}\nabla_{V}\Upsilon(\xi)+\operatorname{Tr}G(V,DW_{1})+Lo(\xi)\right)+\frac{2\beta}{\sqrt{\gamma}}\operatorname{Tr}\Upsilon(\xi)V(w_{2})+\frac{2\beta}{\sqrt{\gamma}}w_{2}\operatorname{Tr}\Upsilon(m(\xi))+\frac{\beta}{\gamma}V(w_{2}^{2})$
$\displaystyle=\beta V\left(\left(\operatorname{Tr}\Upsilon(\xi)+\frac{1}{\sqrt{\gamma}}w_{2}\right)^{2}\right)+2\beta\operatorname{Tr}\Upsilon(\xi)\operatorname{Tr}G(V,DW_{1})+Lo(\xi)$

and

(78) $\displaystyle 2\left<\chi(\xi),\chi(m(\xi))\right>=2\left<\chi(\xi),\nabla_{V}\chi(\xi)+G(V,DW_{2})+Lo(\xi)\right>=2\left<\chi(\xi),\nabla_{V}\chi(\xi)\right>+2\left<\chi(\xi),G(V,DW_{2})\right>+Lo(\xi)=V\left(|\chi(\xi)|^{2}\right)+2\left<\chi(\xi),G(V,DW_{2})\right>+Lo(\xi)$

and

(79) $\displaystyle 2\beta\operatorname{Tr}(\chi(\xi))\operatorname{Tr}(\chi(m(\xi)))=2\beta\operatorname{Tr}(\chi(\xi))\left(\operatorname{Tr}\nabla_{V}\chi(\xi)+\operatorname{Tr}G(V,DW_{2})+Lo(\xi)\right)=\beta V\left(\left(\operatorname{Tr}\chi(\xi)\right)^{2}\right)+2\beta\operatorname{Tr}\chi(\xi)\operatorname{Tr}G(V,DW_{2})+Lo(\xi).$

Substituting (79), (78), (77), (76), (75) and
(74) into (73), we have

$\displaystyle\mathbf{B}(\xi,m(\xi))=V\left(|\Upsilon(\xi)|^{2}\right)+2\left<\Upsilon(\xi),G(V,DW_{1})\right>+2V\left(|\varphi(\xi)|^{2}\right)+4v|\varphi(\xi)|^{2}+\beta V\left(\left(\operatorname{Tr}\Upsilon(\xi)+\frac{1}{\sqrt{\gamma}}w_{2}\right)^{2}\right)+2\beta\operatorname{Tr}\Upsilon(\xi)\operatorname{Tr}G(V,DW_{1})+V\left(|\chi(\xi)|^{2}\right)+2\left<\chi(\xi),G(V,DW_{2})\right>+\beta V\left(\left(\operatorname{Tr}\chi(\xi)\right)^{2}\right)+2\beta\operatorname{Tr}\chi(\xi)\operatorname{Tr}G(V,DW_{2})+\frac{1}{2}V\left(|Dw_{2}|^{2}\right)+v|Dw_{2}|^{2}+\frac{1}{2\gamma}V(w_{2}^{2})+Lo(\xi)$

$\displaystyle=\frac{1}{2}V\left(2|\Upsilon(\xi)|^{2}+4|\varphi(\xi)|^{2}+2\beta\left(\operatorname{Tr}\Upsilon(\xi)+\frac{1}{\sqrt{\gamma}}w_{2}\right)^{2}+2|\chi(\xi)|^{2}+2\beta\left(\operatorname{Tr}\chi(\xi)\right)^{2}+|Dw_{2}|^{2}+\frac{1}{\gamma}w_{2}^{2}\right)+2\left<\Upsilon(\xi),G(V,DW_{1})\right>+4v|\varphi(\xi)|^{2}+2\beta\operatorname{Tr}\Upsilon(\xi)\operatorname{Tr}G(V,DW_{1})+2\left<\chi(\xi),G(V,DW_{2})\right>+2\beta\operatorname{Tr}\chi(\xi)\operatorname{Tr}G(V,DW_{2})+v|Dw_{2}|^{2}+Lo(\xi)$

$\displaystyle=\frac{1}{2}V\left(B(\xi,\xi)\right)+2b\left(\Upsilon(\xi),G(V,DW_{1})\right)+2b\left(\chi(\xi),G(V,DW_{2})\right)+4v|\varphi(\xi)|^{2}+v|Dw_{2}|^{2}+Lo(\xi).$

Then,

(80) $\displaystyle 2\mathsf{B}(\xi,m(\xi))=\int_{\Omega}\left[V\left(B(\xi,\xi)\right)+4b\left(\Upsilon(\xi),G(V,DW_{1})\right)+4b\left(\chi(\xi),G(V,DW_{2})\right)\right]dx+\int_{\Omega}\left[8v|\varphi(\xi)|^{2}+2v|Dw_{2}|^{2}+Lo(\xi)\right]dx.$

Now, writing $V=\sum_{i=1}^{2}V_{i}E_{i}$, where $\{E_{1},E_{2}\}$ is a normal frame varying with $x$, we have

(81) $\displaystyle\int_{\Omega}V\left(B(\xi,\xi)\right)dx=\sum_{i=1}^{2}\int_{\Omega}V_{i}E_{i}(B(\xi,\xi))dx=\sum_{i=1}^{2}\int_{\Omega}E_{i}\left(V_{i}B(\xi,\xi)\right)dx-\sum_{i=1}^{2}\int_{\Omega}E_{i}(V_{i})B(\xi,\xi)dx=\int_{\Omega}\mbox{div}\left(B(\xi,\xi)V\right)dx-\int_{\Omega}\mbox{div}(V)B(\xi,\xi)dx=\int_{\Gamma}B(\xi,\xi)\left<V,\nu\right>d\Gamma-2\int_{\Omega}vB(\xi,\xi)dx.$

In the last equality of (81) we used Definition B. In fact,

$\mbox{div}(V)=\operatorname{Tr}DV=\sum_{i=1}^{2}DV(E_{i},E_{i})=\sum_{i=1}^{2}v|E_{i}|^{2}=2v.$

Substituting (81) into (80), we have

$2\mathsf{B}(\xi,m(\xi))=\int_{\Gamma}B(\xi,\xi)\left<V,\nu\right>d\Gamma-2\int_{\Omega}vB(\xi,\xi)dx+2\int_{\Omega}e(\xi,\xi)dx+Lo(\xi)$

with

$e(\xi,\xi)=2b\left(\Upsilon(\xi),G(V,DW_{1})\right)+2b\left(\chi(\xi),G(V,DW_{2})\right)+4v|\varphi(\xi)|^{2}+v|Dw_{2}|^{2},$

and the lemma is proved. ∎

Another result we will need is the following.

###### Theorem I.
Let $\xi=\left(W_{1},W_{2},w_{1},w_{2}\right)\in\mathsf{H}^{1}(\Omega)$ be a solution of the problem

(82) $\xi_{tt}+A\xi+a(x)\xi_{t}=0\quad\mbox{in}\quad(0,T)\times\Omega.$

Then

(83) $\displaystyle\int_{\Sigma}\left[2\partial(A\xi,m(\xi))+\left(|\xi_{t}|^{2}-B(\xi,\xi)\right)\left<V,\nu\right>\right]d\Sigma=2\left(\xi_{t},m(\xi)\right)|_{0}^{T}+2\int_{Q}v\left[|\xi_{t}|^{2}-B(\xi,\xi)\right]dQ+2\int_{Q}e(\xi,\xi)dQ-\int_{0}^{T}\left(a\xi_{t},2m(\xi)\right)+L(\xi)$

where $\Sigma=(0,T)\times\Gamma$ and

$e(\xi,\xi)=2b(S(W_{1}),G(V,DW_{1}))+2b(S(W_{2}),G(V,DW_{2}))+4v|\varphi(\xi)|^{2}+v|Dw_{2}|^{2}.$

$L(\xi)$ denotes terms of lower order with respect to the energy. Additionally, if $p$ is a function on $\Omega$, then

(84) $\int_{\Sigma}\partial\left(A\xi,p\xi\right)d\Sigma=\int_{Q}p\left[B(\xi,\xi)-|\xi_{t}|^{2}\right]dQ-\int_{0}^{T}\left(a\xi_{t},p\xi\right)+L(\xi).$

###### Proof.

Multiplying equation (82) by $2m(\xi)$ and integrating over $\Omega$, we have:

(85) $\displaystyle\left(\xi_{tt},2m(\xi)\right)_{\mathsf{L}^{2}(\Omega)}+\left(A\xi,2m(\xi)\right)=-\left(a\xi_{t},2m(\xi)\right),\qquad\left(\xi_{tt},2m(\xi)\right)_{\mathsf{L}^{2}(\Omega)}+B(\xi,2m(\xi))-\int_{\Gamma}\partial\left(A\xi,2m(\xi)\right)d\Gamma=-\left(a\xi_{t},2m(\xi)\right).$

Let us estimate the first term on the left-hand side of (85), since the second term was estimated in Lemma H:

(86) $\left(\xi_{tt},2m(\xi)\right)_{\mathsf{L}^{2}(\Omega)}=2\left[\left(\xi_{t},m(\xi)\right)_{\mathsf{L}^{2}(\Omega)}\right]_{t}-2\left(\xi_{t},m(\xi_{t})\right)_{\mathsf{L}^{2}(\Omega)}$

and

(87) $\displaystyle 2\left(\xi_{t},m(\xi_{t})\right)_{\mathsf{L}^{2}(\Omega)}=2\left(\left(W_{1t},W_{2t},w_{1t},w_{2t}\right),\left(\nabla_{V}W_{1t},\nabla_{V}W_{2t},V(w_{1t}),V(w_{2t})\right)\right)_{\mathsf{L}^{2}(\Omega)}=\int_{\Omega}V(|\xi_{t}|^{2})dx=\sum_{i=1}^{2}\int_{\Omega}V_{i}E_{i}(|\xi_{t}|^{2})dx=\sum_{i=1}^{2}\int_{\Omega}E_{i}\left(V_{i}|\xi_{t}|^{2}\right)dx-\sum_{i=1}^{2}\int_{\Omega}E_{i}(V_{i})|\xi_{t}|^{2}dx=\int_{\Gamma}|\xi_{t}|^{2}\left<V,\nu\right>d\Gamma-2\int_{\Omega}v|\xi_{t}|^{2}dx.$

Substituting (87) into (86), we have

(88) $\left(\xi_{tt},2m(\xi)\right)_{\mathsf{L}^{2}(\Omega)}=2\left[\left(\xi_{t},m(\xi)\right)_{\mathsf{L}^{2}(\Omega)}\right]_{t}+2\int_{\Omega}v|\xi_{t}|^{2}dx-\int_{\Gamma}|\xi_{t}|^{2}\left<V,\nu\right>d\Gamma.$

Substituting (88), and using Lemma H, in (85), we have:

(89) $\displaystyle 2\left[\left(\xi_{t},m(\xi)\right)_{\mathsf{L}^{2}(\Omega)}\right]_{t}+2\int_{\Omega}v|\xi_{t}|^{2}dx-\int_{\Gamma}|\xi_{t}|^{2}\left<V,\nu\right>d\Gamma+\int_{\Gamma}B(\xi,\xi)\left<V,\nu\right>d\Gamma-2\int_{\Omega}vB(\xi,\xi)dx+2\int_{\Omega}e(\xi,\xi)dx-\int_{\Gamma}\partial\left(A\xi,2m(\xi)\right)d\Gamma+Lo(\xi)=-\left(a\xi_{t},2m(\xi)\right).$

Integrating (89) from $0$ to $T$, we have

$\displaystyle 2\left[\left(\xi_{t},m(\xi)\right)_{\mathsf{L}^{2}(\Omega)}\right]\Big|^{T}_{0}+$
$2\int_{Q}v|\xi_{t}|^{2}dQ-\int_{\Sigma}|\xi_{t}|^{2}\left<V,\nu\right>d\Sigma+\int_{\Sigma}B(\xi,\xi)\left<V,\nu\right>d\Sigma-2\int_{Q}vB(\xi,\xi)dQ+2\int_{Q}e(\xi,\xi)dQ-\int_{\Sigma}\partial\left(A\xi,2m(\xi)\right)d\Sigma+Lo(\xi)=-\int_{0}^{T}\left(a\xi_{t},2m(\xi)\right)dt,$

whence

$\displaystyle\int_{\Sigma}\left[2\partial(A\xi,m(\xi))+\left(|\xi_{t}|^{2}-B(\xi,\xi)\right)\left<V,\nu\right>\right]d\Sigma=2\left[\left(\xi_{t},m(\xi)\right)_{\mathsf{L}^{2}(\Omega)}\right]\Big|_{0}^{T}+2\int_{Q}v\left[|\xi_{t}|^{2}-B(\xi,\xi)\right]dQ+2\int_{Q}e(\xi,\xi)dQ-\int_{0}^{T}\left(a\xi_{t},2m(\xi)\right)dt+Lo(\xi).$

This proves the theorem. ∎

In this section we state and prove the main result, the stabilization of the evolution equation of the Naghdi model. We will need the following result.

###### Theorem J.

Let $V$ be an escape vector field for the Naghdi shell and let $\xi$ be a solution of the problem

(90) $\xi_{tt}+A\xi+a(x)\xi_{t}=0.$

Then, for $T>0$,

(91) $\int_{\Sigma}SB\,d\Sigma+\lambda_{0}\sigma_{0}\left[E(0)+E(T)\right]-\int_{0}^{T}\int_{\Omega}a\left<\xi,\xi_{t}\right>\geq 2\sigma_{1}\int_{0}^{T}E(t)dt+L(\xi)$

where

(92) $\displaystyle SB=\partial\left(A\xi,2m(\xi)+\rho\xi\right)+\left[\mid\xi_{t}\mid^{2}-B(\xi,\xi)\right]\left<V,\nu\right>,\qquad m(\xi)=\left(\nabla_{V}W_{1},\nabla_{V}W_{2},V(w_{1}),V(w_{2})\right),\quad\rho=2v-\sigma_{1}.$

###### Proof.

Taking $p=\rho$ in identity (84) and adding the result to identity (83), we obtain

(93) $\displaystyle\int_{\Sigma}SB\,d\Sigma=2\left(\xi_{t},m(\xi)\right)_{L^{2}(\Omega)}\mid_{0}^{T}+\sigma_{1}\int_{Q}\left[\mid\xi_{t}\mid^{2}-B(\xi,\xi)\right]dQ+2\int_{Q}e(\xi,\xi)dQ+L(\xi)$

and, by the expression for $B$, we have

(94) $\displaystyle B(\xi,\xi)=2b(S(W_{1}),S(W_{1}))+2b(S(W_{2}),S(W_{2}))+4\mid\varphi(\xi)\mid^{2}+\mid Dw_{2}\mid^{2}+L(\xi).$

Using Lemma G, we have

(95) $\int_{Q}e(\xi,\xi)dQ\geq\sigma_{1}\int_{Q}B(\xi,\xi)dQ+L(\xi).$

Using identity (94) and the coercivity of $b$, we have

(96) $\displaystyle 2\left(\xi_{t},m(\xi)\right)_{L^{2}(\Omega)}\leq\sigma_{0}\left[\parallel\xi_{t}\parallel^{2}_{L^{2}(\Omega)}+\sum_{i=1}^{2}\left(\parallel DW_{i}\parallel^{2}_{L^{2}(\Omega,T^{2})}+\parallel Dw_{i}\parallel^{2}_{L^{2}(\Omega,\Lambda)}\right)\right]\leq 2\lambda_{0}\sigma_{0}E(t)+L(\xi).$

Finally, substituting (96) and (95) into (93), we obtain inequality (91). ∎

We now state and prove our main result, the stabilization of the Naghdi evolution model.

###### Theorem K.

Consider the evolution equation for the Naghdi shell model with internal dissipation,

(97) $\displaystyle\xi_{tt}+A\xi+a(x)\xi_{t}=0\quad\mbox{in}\quad\Omega\times(0,T),\qquad\xi=0\quad\mbox{on}\quad\partial\Omega\times(0,T),$

where the function $a=a(x)$ is supported in an escape region $w\subset\bar{\Omega}$. Let $a_{0}>0$ be such that

(98) $a(x)\geq a_{0}>0\quad\mbox{for all}\quad x\in w.$

Then there exist constants $c_{1},c_{2}>0$ such that

(99) $E(t)\leq c_{1}E(0)e^{-c_{2}t}$

where $E(t)$ is the total energy of the system (97).

###### Proof.
Multiplying equation (97) by $\xi_{t}$, integrating over $\Omega$ and taking the boundary conditions into account, we have

(100) $\frac{d}{dt}E(t)=-\int_{\Omega}a(x)\mid\xi_{t}\mid^{2}dx,$

whence, integrating over $(0,T)$, we have

(101) $E(T)=E(0)-\int_{Q}a(x)\mid\xi_{t}\mid^{2}dQ.$

By (101) it is enough to prove that there exist $T>0$ and $C>0$, independent of the solutions of problem (97), such that

$E(T)\leq C\int_{Q}a\mid\xi_{t}\mid^{2}dQ,$

for in this case, substituting into (101), we get $E(T)\leq C\left(E(0)-E(T)\right)$, that is,

(102) $E(T)\leq\frac{C}{C+1}E(0).$

We claim that (99) follows from (102). In fact, notice that (102) is equivalent to

(103) $E(T)\leq\gamma E(0)$

where $\gamma=\frac{C}{C+1}<1$. Since the system is invariant under translations in time, (103) is valid on $[(m-1)T,mT]$, so

(104) $\displaystyle E(mT)\leq\gamma E((m-1)T)\leq\gamma^{2}E((m-2)T)\leq\cdots\leq\gamma^{m}E(0)=e^{-\omega mT}E(0)$

where $\omega=\frac{1}{T}\ln(\frac{1}{\gamma})>0$. For arbitrary $t>0$ there exists $m=1,2,\dots$ such that $(m-1)T<t\leq mT$. Finally, using that the energy of the system is decreasing, we have:

$\displaystyle E(t)\leq E((m-1)T)\leq e^{-\omega(m-1)T}E(0)\leq\frac{1}{\gamma}e^{-\omega mT}E(0)\leq\frac{1}{\gamma}e^{-\omega t}E(0),$

which proves the claim, with $c_{1}=\frac{1}{\gamma}$ and $c_{2}=\omega$.

So let us prove (102). By the definitions of escape vector fields and escape regions discussed above, we may assume that there are subsets $\{\Omega_{i}\}_{i=1}^{N}$ of $\Omega$ satisfying the conditions of Definition C. Then the identities (92) and (84) can be used on each $\Omega_{i}$, since escape vector fields are defined on each of them. The idea is to estimate the total energy of the system first inside the $\Omega_{i}$, using the escape vector fields defined on them, and then on the complement, using the properties of the dissipation function $a$.

Now, since (91) is valid only on each $\Omega_{i}$, we first restrict ourselves to making estimates on them. Let then $0<\epsilon_{2}<\epsilon_{1}<\epsilon_{0}<\epsilon$ and let

$V_{j}=N_{\epsilon_{j}}\left\{\cup_{i=1}^{N}\Gamma_{0}^{i}\cup\left(\Omega\setminus\cup_{i=1}^{N}\Omega_{i}\right)\right\},\quad j=0,1,2.$

Notice that $V_{2}\subset V_{1}\subset V_{0}\subset\bar{V_{0}}$. Let

$\phi^{i}=\left\{\begin{array}[]{ccc}1,&\Omega_{i}\setminus V_{1},&i=1,2,\dots,N\\ 0,&V_{2}\end{array}\right.$

and consider $V^{i}=\phi^{i}H^{i}$, $p^{i}=\phi^{i}q$ and $\xi^{i}=\phi^{i}\xi$, where $H^{i}$ is an escape vector field in $\Omega_{i}$ and $q$ is a function defined in $\Omega$. Then (91) is valid in each $\Omega_{i}$ and we have

(105) $\displaystyle 2\sigma_{1}\int_{0}^{T}E^{i}(t)\leq\int_{\Sigma_{i}}SB_{i}d\Sigma_{i}+\lambda_{0}\sigma_{0}\left[E^{i}(0)+E^{i}(T)\right]-\int_{0}^{T}\int_{\Omega_{i}}a\left<\xi^{i},\xi^{i}_{t}\right>dxdt$

where

(106) $SB_{i}=\partial\left(A\xi^{i},2m(\xi^{i})+\rho\xi^{i}\right)+\left[\mid\xi^{i}_{t}\mid^{2}-B(\xi^{i},\xi^{i})\right]\left<V^{i},\nu\right>.$

By the definition of $V_{2}$ we have $\Gamma^{i}_{0}\subset V_{2}$, and since $\phi^{i}=0$ on $V_{2}$, for $x\in\Gamma^{i}_{0}$ the boundary terms vanish.
Hence

(107) $SB_{i}=0\quad\mbox{for}\quad x\in\Gamma^{i}_{0}.$

If $x\in\Omega_{i}\cap\Gamma$, using the boundary condition $\xi=0$ and replacing in (106), we have

(108) $\displaystyle SB_{i}=2\partial\left(A\xi^{i},m(\xi^{i})\right)-B(\xi^{i},\xi^{i})\left<V^{i},\nu\right>=B(\xi^{i},\xi^{i})\left<V^{i},\nu\right>\leq 0,$

where we used the coercivity of $B$. It follows that

(109) $\int_{\Sigma_{i}}SB_{i}\leq 0\quad\mbox{for all}\quad i.$

Using the estimates (107) and (109) in (105), we have

$\displaystyle 2\sigma_{1}\int_{0}^{T}\int_{\Omega_{i}}\mid\xi_{t}\mid^{2}+2\sigma_{1}\int_{0}^{T}\int_{\Omega_{i}}B(\xi,\xi)\leq-\int_{0}^{T}\left(a\xi_{t},2m(\xi)\right)+\lambda_{0}\sigma_{0}\left[E(T)+E(0)\right]+L(\xi),$

whence

(110) $\displaystyle\int_{0}^{T}\int_{\Omega_{i}\setminus V_{1}}B(\xi,\xi)\leq C_{1}\int_{0}^{T}\int_{\Omega_{i}\cap V_{1}}B(\xi,\xi)+C_{\beta}\int_{0}^{T}\int_{\Omega_{i}}\mid\xi\mid^{2}+\beta\int_{0}^{T}\int_{\Omega_{i}}B(\xi,\xi)dQ+\lambda_{0}\sigma_{0}\left[E(T)+E(0)\right]+L(\xi),$

where $\beta>0$ is small enough that $\beta\int_{0}^{T}\int_{\Omega_{i}}B(\xi,\xi)dxdt\leq\int_{0}^{T}\int_{\Omega_{i}\cap V_{1}}B(\xi,\xi)dxdt$. Given that $\Omega\subset\left(\cup\Omega_{i}\right)\cup V_{1}$, then

(111) $\displaystyle\int_{0}^{T}\int_{\Omega\setminus V_{1}}B(\xi,\xi)\leq\sum_{i=1}^{N}\int_{0}^{T}\int_{\Omega_{i}\setminus V_{1}}B(\xi,\xi)\leq C_{2}\int_{0}^{T}\int_{\Omega\cap V_{1}}B(\xi,\xi)dQ+C_{3}\int_{0}^{T}a\parallel\xi_{t}\parallel^{2}dt+\lambda_{0}\sigma_{0}\left[E(T)+E(0)\right]+L(\xi).$

Now let us estimate on the complement of the union of the $\Omega_{i}$. For this, let $\psi\in C^{\infty}(\mathbb{R}^{3})$ be given by

(112) $\psi(x)=\left\{\begin{aligned} 0&,\quad x\in\mathbb{R}^{3}\setminus V_{0}\\ 1&,\quad x\in V_{1}\end{aligned}\right.$

Taking $p=\psi$ in (84), we have

$\int_{Q_{i}}B(\xi,\xi)dQ_{i}=\int_{Q_{i}}\mid\xi_{t}\mid^{2}dQ_{i}+\int_{0}^{T}\left(a\xi_{t},\psi\xi\right)dt+L(\xi),$

whence

$\displaystyle\int_{0}^{T}\int_{\Omega\cap V_{1}}B(\xi,\xi)dQ\leq\int_{0}^{T}\int_{\Omega\cap V_{0}}\mid\xi_{t}\mid^{2}dQ+\int_{0}^{T}\int_{\Omega\cap V_{0}}\left<a\xi_{t},\xi\right>+L(\xi).$

Since $\epsilon_{0}<\epsilon$, we have $w\supset\bar{\Omega}\cap V_{0}$. Using the hypothesis (98) on the function $a$ in $w$, we have

$\displaystyle\int_{0}^{T}\int_{\Omega\cap V_{1}}B(\xi,\xi)dQ\leq\frac{1}{a_{0}}\int_{0}^{T}\int_{\Omega\cap V_{0}}a\mid\xi_{t}\mid^{2}dQ+\beta\int_{0}^{T}\parallel\xi_{t}\parallel^{2}dt+L(\xi),$

where $\beta$ will be chosen later.
So, from (111) and the previous estimate, we have

(113) $\displaystyle\int_{Q}B(\xi,\xi)=\int_{0}^{T}\int_{\Omega\setminus V_{1}}B(\xi,\xi)dxdt+\int_{0}^{T}\int_{\Omega\cap V_{1}}B(\xi,\xi)dxdt\leq C_{4}\int_{0}^{T}\int_{\Omega\cap V_{1}}B(\xi,\xi)dxdt+C_{5}\int_{0}^{T}a\parallel\xi_{t}\parallel^{2}dt+L(\xi)\leq C_{6}\int_{Q}a\mid\xi_{t}\mid^{2}dt+\beta\int_{0}^{T}\parallel\xi_{t}\parallel^{2}dt+\lambda_{0}\sigma_{0}\left[E(T)+E(0)\right]+L(\xi).$

Now, taking $p=\frac{1}{2}$ in (84), we have

(114) $\displaystyle\frac{1}{2}\int_{Q}\left[\mid\xi_{t}\mid^{2}-B(\xi,\xi)\right]dQ=\frac{1}{2}\int_{0}^{T}\left(a\xi_{t},\xi\right)dt+L(\xi)\leq C_{8}\int_{0}^{T}a\parallel\xi_{t}\parallel^{2}dt+L(\xi).$

Grouping (113) and (114), we have

(115) $\displaystyle\int_{0}^{T}E(t)dt=\int_{Q}B(\xi,\xi)dQ+\frac{1}{2}\int_{Q}\left[\mid\xi_{t}\mid^{2}-B(\xi,\xi)\right]dQ\leq C\int_{Q}a\mid\xi_{t}\mid^{2}dQ+\beta\int_{0}^{T}\parallel\xi_{t}\parallel^{2}dt+\lambda_{0}\sigma_{0}\left[E(T)+E(0)\right]+C_{8}\int_{Q}a\mid\xi_{t}\mid^{2}dQ+L(\xi)\leq C_{9}\int_{Q}a\mid\xi_{t}\mid^{2}dQ+\beta\int_{0}^{T}E(t)dt+C_{9}\left[E(T)+E(0)\right]+L(\xi).$

Taking $\beta=\frac{1}{2}$, considering that the energy is decreasing, that is, $E(t)\geq E(T)$ for $t\in[0,T]$, and that $E(T)=E(0)-\int_{Q}a\mid\xi_{t}\mid^{2}dQ$, we have

(116) $\displaystyle\frac{1}{2}\int_{0}^{T}E(t)dt\leq C_{9}\int_{Q}a\mid\xi_{t}\mid^{2}dQ+C_{9}\left[E(T)+E(0)\right]+L(\xi),$
$\displaystyle\frac{T}{2}E(T)\leq C_{9}\int_{Q}a\mid\xi_{t}\mid^{2}dQ+\lambda_{0}\sigma_{0}\left[E(T)+E(T)+\int_{Q}a\mid\xi_{t}\mid^{2}dQ\right]+L(\xi),$
$\displaystyle\left(\frac{T}{2}-2\lambda_{0}\sigma_{0}\right)E(T)\leq C_{10}\int_{Q}a\mid\xi_{t}\mid^{2}dQ+L(\xi).$

For $T>4\lambda_{0}\sigma_{0}$, from (116) and the uniqueness-compactness argument of [35], we have

$E(T)\leq C\int_{Q}a\mid\xi_{t}\mid^{2}dQ.$

This proves the theorem. ∎

## 5. Controllability via Stability

Russell's principle [32] provides a method for obtaining exact controllability from a uniform stabilization result. In this section we study the exact controllability of the Naghdi evolution system. To do so, we apply Russell's principle using the energy-decay result proved in Theorem K.

The exact controllability problem, with internal controls, consists in finding a vector function $F=F(x,t)$, called the control, such that for some $T>0$ the following problem has a solution:

(117) $\left\{\begin{aligned} &\eta_{tt}+A\eta=F(x,t)\quad\mbox{in}\quad\Omega\times(0,T)\\ &\eta=0\quad\mbox{on}\quad\partial\Omega\times(0,T)\\ &\eta(0)=\eta_{0},\quad\eta_{t}(0)=\eta_{1}\\ &\eta(T)=\tilde{\eta}_{0},\quad\eta_{t}(T)=\tilde{\eta}_{1}\end{aligned}\right.$

for any initial data $(\eta_{0},\eta_{1})$ and final data $(\tilde{\eta}_{0},\tilde{\eta}_{1})$ in the appropriate function spaces, with $F$ acting in a subregion of $\Omega$. We will prove that the function $F$ needs to act only in an arbitrarily small sub-region of $\Omega$, precisely in the escape region for the Naghdi shell. We consider $T>0$ such that

(118) $c_{1}e^{-c_{2}T}<1$

where $c_{1}>0$ and $c_{2}>0$ are the constants appearing in Theorem K. In this section we will prove the following result:

###### Theorem L.

Let $\Omega$ be the middle surface of the Naghdi shell and $w\subset\Omega$ the escape region given in the proof of Theorem K.
Then, if $T>0$ satisfies (118), the system (117) is exactly controllable at time $T$ with controls located in $w$.

###### Proof.

Since the system is linear and time-reversible, it is enough to consider controllability to zero, that is, $(\tilde{\eta}_{0},\tilde{\eta}_{1})=(0,0)$. For $(\eta_{0},\eta_{1})\in V\times H$, there is a unique solution $\eta\in C\left([0,\infty);V\right)\times C^{1}\left([0,\infty);H\right)$ of the problem

(119) $\left\{\begin{aligned} &\eta_{tt}+A\eta+a(x)\eta_{t}=0\quad\mbox{in}\quad\Omega\times(0,T)\\ &\eta=0\quad\mbox{on}\quad\partial\Omega\times(0,T)\\ &\eta(0)=\eta_{0},\quad\eta_{t}(0)=\eta_{1}.\\ \end{aligned}\right.$

Additionally, for the initial data $(-\eta(T),\eta_{t}(T))$ there is a unique solution of the problem

(120) $\left\{\begin{aligned} &\theta_{tt}+A\theta+a(x)\theta_{t}=0\quad\mbox{in}\quad\Omega\times(0,T)\\ &\theta=0\quad\mbox{on}\quad\partial\Omega\times(0,T)\\ &\theta(0)=-\eta(T),\quad\theta_{t}(0)=\eta_{t}(T)\\ \end{aligned}\right.$

where $a$ acts in the escape region given in Theorem K. Define $\xi(x,t)=\eta(x,t)+\theta(x,T-t)$. The field $\xi$ satisfies

(121) $\left\{\begin{aligned} &\xi_{tt}+A\xi=a(x)(\eta_{t}+\theta_{t})\quad\mbox{in}\quad\Omega\times(0,T)\\ &\xi=0\quad\mbox{on}\quad\partial\Omega\times(0,T)\\ &\xi(0)=\eta_{0}+\theta(T),\quad\xi_{t}(0)=\eta_{1}-\theta_{t}(T)\\ &\xi(T)=0,\quad\xi_{t}(T)=0.\end{aligned}\right.$

From (121) we see that the initial data that are steered to equilibrium have the form

(122) $\left(\xi_{0},\xi_{1}\right)=\left(\eta_{0}+\theta(T),\eta_{1}-\theta_{t}(T)\right).$

Thus it is enough to prove that for each initial datum $\left(\xi_{0},\xi_{1}\right)\in V\times H$ there exists $\left(\eta_{0},\eta_{1}\right)$ satisfying (122). Equivalently, that the map

(123) $\displaystyle L:V\times H\longrightarrow V\times H,\qquad\left(\eta_{0},\eta_{1}\right)\longmapsto\left(\eta_{0}+\theta(T),\eta_{1}-\theta_{t}(T)\right)$

is surjective. As $L=I-K$, where $K$ is the map given by $K(\eta_{0},\eta_{1})=\left(-\theta(T),\theta_{t}(T)\right)$, it is enough to show that $\parallel K\parallel_{V\times H}<1$, since then $L=I-K$ is invertible, with inverse given by the Neumann series $L^{-1}=\sum_{n\geq 0}K^{n}$. Applying Theorem K twice, we have

(124) $\displaystyle\parallel K(\eta_{0},\eta_{1})\parallel_{V\times H}=\parallel\left(-\theta(T),\theta_{t}(T)\right)\parallel_{V\times H}\leq c_{1}e^{-c_{2}T}\parallel\left(-\eta(T),\eta_{t}(T)\right)\parallel_{V\times H}\leq c_{3}e^{-c_{4}T}\parallel\left(\eta_{0},\eta_{1}\right)\parallel_{V\times H}.$

Choosing $T>0$ such that $c_{3}e^{-c_{4}T}<1$, we have $\parallel K\parallel_{V\times H}<1$. Thus $L=I-K$ is surjective, and Theorem L is proved. ∎

## 6. Conclusions

* (a) The dissipation can be considered in an arbitrarily small region of the shell, and stabilization and controllability are still obtained.
* (b) The existence of escape regions is verifiable for Naghdi shells; this implies that localized dissipation in the complement of the union of such regions generates stabilization and controllability.

## References

* [1] R. A. Adams, Sobolev Spaces, Academic Press, New York, 1975.
* [2] S. Agmon, A. Douglis, and L. Nirenberg, Estimates near the boundary for the solutions of elliptic partial differential equations satisfying general boundary conditions II, Comm. Pure Appl. Math. 17(1) (1964), 35-92.
* [3] S. Agmon, A. Douglis, and L. Nirenberg, Estimates near the boundary for the solutions of elliptic partial differential equations satisfying general boundary conditions II, Comm. Pure Appl. Math. 17 (1964), 35-92.
* [4] C. Bardos, G. Lebeau, and J. Rauch, Sharp sufficient conditions for the observation, control and stabilization of waves from the boundary, SIAM J. Control Optim. 30 (1992), 1024-1065.
* [5] M. Bernadou and J. M. Boisserie, The Finite Element Method in Thin Shell Theory: Applications to Arch Dam Simulation, Progress in Scientific Computing, Vol. 1, Boston, 1982.
* [6] M. Bernadou, P. G. Ciarlet, and B. Miara, Existence theorems for two-dimensional linear shell theories, J. Elasticity 34(2) (1994), 111-138.
* [7] A. Blouza, Naghdi shell model: existence, uniqueness and continuous dependence on the midsurface, Journal of Elasticity 64 (2001), 199-216.
* [8] S. Bochner, Vector fields and Ricci curvature, Bull. Amer. Math. Soc. 53 (1947), 179-195.
* [9] M. do Carmo, Riemannian Geometry, Birkhäuser, Boston, 1992.
* [10] J. M. Coron, Control and Nonlinearity, Mathematical Surveys and Monographs, 2009.
* [11] M. M. Cavalcanti, V. N. Domingos Cavalcanti, R. Fukuoka, and J. A. Soriano, Asymptotic stability of the wave equation on compact surfaces and locally distributed damping - a sharp result, Trans. Amer. Math. Soc. 361(9) (2009), 4561-4580.
* [12] S. G. Chai and B. Z. Guo, Well-posedness and regularity of Naghdi shell equations under boundary control and observation, J. Differential Equations 249 (2010), 3174-3214.
* [13] S. G. Chai and P. F. Yao, Observability inequality for thin shell, Science in China Ser. A 44(3) (2003), 300-311.
* [14] S. G. Chai, Boundary feedback stabilization of Naghdi model, Acta Mathematica Sinica, English Series 21(1) (2005), 169-184.
* [15] P. G. Ciarlet, Mathematical Elasticity, Vol. III: Theory of Shells, North-Holland, 2000.
* [16] J. Cheeger and D. Ebin, Comparison Theorems in Riemannian Geometry, North-Holland, Amsterdam, 1975.
* [17] E. Hebey, Sobolev Spaces on Riemannian Manifolds, Lecture Notes in Mathematics 1635, Springer-Verlag, Berlin, Heidelberg, 1996.
* [18] L. Hörmander, The Analysis of Linear Partial Differential Operators, Vol. III, Springer-Verlag, Berlin, New York, 1985.
* [19] O. Iosifescu, Regularity for Naghdi shell equations, Mathematics and Mechanics of Solids 5(4) (2000), 453-465.
* [20] W. T. Koiter, A consistent first approximation in the general theory of thin elastic shells, Proc. IUTAM Symposium on the Theory of Thin Shells, North-Holland, 1960.
* [21] J. E. Lagnese, Boundary Stabilization of Thin Plates, SIAM Studies in Applied Mathematics, 1989.
* [22] J. E. Lagnese and J. L. Lions, Modelling, Analysis and Control of Thin Plates, Recherches en Mathématiques Appliquées 6, Masson, Paris, 1988.
* [23] I. Lasiecka, R. Triggiani, and W. Valente, Uniform stabilization of spherical shells by boundary dissipation, Adv. Differential Equations 1 (1996), 635-674.
* [24] I. Lasiecka and R. Triggiani, Uniform stabilization of a shallow shell model with nonlinear boundary feedbacks, J. Math. Anal. Appl. 269 (2002), 642-688.
* [25] J. L. Lions, Exact controllability, stabilization and perturbations for distributed systems, SIAM Review 30 (1988), 1-68.
* [26] B. Miara and G. Perla, Exact controllability of Naghdi shells, Comptes Rendus de l'Académie des Sciences, Paris, Série Mathématique 348(5-6) (2010), 341-346.
* [27] P. M. Naghdi, Foundations of elastic shell theory, in Progress in Solid Mechanics, Vol. IV, pp. 1-90, North-Holland, Amsterdam, 1963.
* [28] P. M. Naghdi, Theory of shells and plates, Handbuch der Physik, Vol. VI, Springer-Verlag, Berlin, 1972.
* [29] A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations, Springer-Verlag, New York, Berlin, 1983.
* [30] P. Petersen, Riemannian Geometry, Second edition, Graduate Texts in Mathematics, Springer, New York, 2006.
* [31] D. L. Russell, A unified boundary controllability theory for hyperbolic and parabolic partial differential equations, Studies in Appl. Math. 52 (1973), 189-211.
* [32] D. L. Russell, Controllability and stabilizability theory for linear partial differential equations: recent progress and open questions, SIAM Review 20(4) (1978), 639-739.
* [33] M. Spivak, A Comprehensive Introduction to Differential Geometry, Vols. I, II, IV, Publish or Perish, Boston, 1970-1975.
* [34] M. E. Taylor, Partial Differential Equations I, II, Springer-Verlag, New York, 1996.
* [35] P. F. Yao, Modeling and Control in Vibrational and Structural Dynamics: A Differential Geometric Approach, Chapman and Hall/CRC Applied Mathematics and Nonlinear Science, 2011.
* [36] P. F. Yao, On the observability inequalities for the exact controllability of the wave equation with variable coefficients, SIAM J. Control Optim. 37(6) (1999), 1568-1599.
* [37] P. F. Yao, Observability inequalities for shallow shells, SIAM J. Control Optim. 38(6) (2000), 1729-1756.
* [38] P. F. Yao, Observability inequalities for Naghdi shells, MMAR 2000, the 6th International Conference on Methods and Models in Automation and Robotics, Poland, 2000.
* [39] P. F. Yao, Boundary controllability for the quasilinear wave equation with boundary dissipation, Appl. Math. Optim. 61(2) (2010), 191-233.
* [40] E. Zuazua, Exponential decay for the semilinear wave equation with locally distributed damping, Comm. Partial Differential Equations 15 (1990), 205-235.
* [41] E. Zuazua, Exponential decay for the semilinear wave equation with locally distributed damping in unbounded domains, J. Math. Pures Appl. 70(9) (1991), 513-529.
# SunCET: The Sun Coronal Ejection Tracker Concept

James Paul Mason¹, Phillip C. Chamberlin¹, Daniel Seaton², Joan Burkepile³, Robin Colaninno⁴, Karin Dissauer⁵, Francis G. Eparvier¹, Yuhong Fan³, Sarah Gibson³, Andrew R. Jones¹, Christina Kay⁶, Michael Kirk⁶, Richard Kohnert¹, W. Dean Pesnell⁶, Barbara J. Thompson⁶, Astrid M. Veronig⁷, Matthew J. West⁸, David Windt⁹, Thomas N. Woods¹

1. Laboratory for Atmospheric and Space Physics, University of Colorado at Boulder, 3665 Discovery Drive, Boulder, CO, USA
2. NOAA/National Centers for Environmental Information, 325 Broadway, Boulder, CO, USA
3. High Altitude Observatory, National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO, USA
4. Naval Research Laboratory, Washington, DC, USA
5. Colorado Research Associates Division, NorthWest Research Associates, 3380 Mitchell Lane, Boulder, CO, USA
6. NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD, USA
7. Institute of Physics & Kanzelhöhe Observatory for Solar and Environmental Research, University of Graz, A-8010 Graz, Austria
8. Royal Observatory of Belgium, Avenue Circulaire 3, 1180 Uccle, Belgium
9. Reflective X-ray Optics LLC, New York, NY, USA

###### Abstract

The Sun Coronal Ejection Tracker (SunCET) is an extreme ultraviolet imager and spectrograph instrument concept for tracking coronal mass ejections through the region where they experience the majority of their acceleration: the difficult-to-observe middle corona. It contains a wide field of view (0–4 $R_{\odot}$) imager and a 1 Å spectral-resolution-irradiance spectrograph spanning 170–340 Å. It leverages new detector technology to read out different areas of the detector with different integration times, resulting in what we call “simultaneous high dynamic range”, as opposed to the traditional high dynamic range camera technique of subsequent full-frame images that are then combined in post-processing. This allows us to image the bright solar disk with short integration time, the middle corona with a long integration time, and the spectra with their own, independent integration time. Thus, SunCET does not require the use of an opaque or filtered occulter. SunCET is also compact — $\sim$15 $\times$ 15 $\times$ 10 cm in volume — making it an ideal instrument for a CubeSat or a small, complementary addition to a larger mission. Indeed, SunCET is presently in a NASA-funded, competitive Phase A as a CubeSat and has also been proposed to NASA as an instrument onboard a 184 kg Mission of Opportunity.

###### keywords: EUV instrument – Coronal Mass Ejections – high dynamic range – CubeSat

## 1 Introduction and Science Drivers

The primary science question that the Sun Coronal Ejection Tracker (SunCET) instrument concept is designed to address is: What are the dominant physical mechanisms for coronal mass ejection acceleration as a function of altitude and time? In the standard model configuration of a coronal mass ejection (CME; Figure 1), a CME must overcome the constraint of overlying field in order to escape. Perhaps the simplest model of this defines a 1D, horizontal background magnetic field that declines in strength with height, characterized by the “decay index” (Bateman, 1978; Kliem and Török, 2006). If the background field decays too rapidly, the so-called torus instability of the embedded flux rope occurs, meaning the flux rope erupts.
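To make the decay-index diagnostic concrete, the short Python sketch below evaluates $n(h)=-d\ln B/d\ln h$ for a model background field and finds where it first exceeds the canonical torus-instability threshold of $n\approx 1.5$ (Kliem and Török, 2006). The buried-dipole profile, the burial depth, and the units are illustrative assumptions for this sketch only, not values taken from any simulation discussed here.

```python
import numpy as np

def decay_index(B, h):
    """Decay index n(h) = -d ln B / d ln h, evaluated numerically."""
    return -np.gradient(np.log(B), np.log(h))

h = np.linspace(0.05, 3.0, 500)   # height above the photosphere [R_sun]
d = 0.5                           # assumed burial depth of the source dipole [R_sun]
B = 1.0 / (h + d)**3              # dipole-like overlying field (arbitrary units)

n = decay_index(B, h)             # analytically n = 3h / (h + d) for this profile
n_crit = 1.5                      # canonical torus-instability threshold
onset = h[n > n_crit]
if onset.size:
    print(f"Instability criterion first met at h = {onset[0]:.2f} R_sun")
```

In this toy profile the threshold height scales directly with the assumed depth of the source field, which is the sense in which a measured acceleration profile probes the overlying magnetic environment.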
The decay index has a direct impact on the CME kinematics. The acceleration curves in the bottom of Figure 2, derived from magnetohydrodynamic (MHD) simulations by Török and Kliem (2007), correspond to decay index profiles, with each increase in acceleration corresponding to an increase in the decay index profile and the final CME speed. Thus, the acceleration profile of a CME acts as a natural probe of the surrounding magnetic field. There are many complications layered on top of this simple model in reality, described later in this introduction.

Figure 1: Standard cartoon CME model. The flux rope extends through the page. Overlying fields resist the flux rope’s elevation and expansion. Magnetic reconnection releases the energy stored in the field to accelerate the flux rope, producing a CME. Adapted from Forbes et al. (2018).

The bulk of the CME acceleration profile in all cases occurs either in the observational gap or in the region where existing instruments are not optimized. This gap exists between extreme ultraviolet (EUV) imagers (widest outer field of view [FOV] of 1.7 $R_{\odot}$) and coronagraphs (typical inner FOV of 2.5 $R_{\odot}$ but effectively higher due to diffraction-degraded spatial resolution). Some instruments observe only part of the low-middle corona (Solar TErrestrial RElations Observatory [STEREO; Kaiser et al. 2007] / Coronagraph-1 [COR1; Howard et al. 2008], Geostationary Operational Environmental Satellite [GOES] / Solar Ultraviolet Imager [SUVI; Martínez-Galarce et al. 2010], Project for On-Board Autonomy [PROBA2] / Sun Watcher with Active Pixels and Image Processing [SWAP; Seaton et al. 2013]). Some have low signal to noise in the middle corona (SUVI, SWAP). Some are ground-based with duty cycles $<20$% (K-Cor). Some have limitations on cadence (COR1). SunCET, however, avoids all of these issues because it is specifically optimized for this study of CMEs. Directly observing the CME height-time profile through the whole low and middle corona allows the derivation of complete speed-time and acceleration-time profiles, and thus provides strong model constraints: reproducing the observed profiles requires accurate modeling of the magnetic environment. Such constraints do not presently exist, but SunCET can provide them.

Figure 2: Top: Composite of SDO/AIA 171 Å image and SOHO/LASCO/C2 white-light coronagraph image. The longstanding observational gap is shown in dark grey. Bottom: Modeled acceleration profiles of torus instability CMEs, adapted from Török and Kliem (2007) Fig. 3. The different curves result from different background magnetic field decay index profile assumptions, with each higher acceleration peak corresponding to a larger decay index profile. Most of the acceleration occurs in the observational gap that SunCET fills.

The torus instability is not the only mechanism involved in CME eruptions. Complicating factors are introduced by, e.g., the 3D structure of the erupting material and the surrounding magnetic field, by potential drainage of dense plasma, and by continued magnetic reconnection freeing more energy to drive the CME. The influence of these factors also evolves with altitude and time, as the CME dynamics play out. There have been at least 26 review papers on the topic over the last two decades (Green et al. 2018, and references therein) — a testament to the sustained, intense interest in this topic.

Figure 3: Simulated CME kinematic profiles. Solid lines indicate the unperturbed torus instability.
Dashed lines from right to left correspond to increasing durations (6 $\tau_{A}$ up to 10 $\tau_{A}$) of an upward, linearly rising velocity perturbation, resulting in fundamentally different acceleration profiles. The SunCET FOV (0–4 $R_{\odot}$; indicated in light blue) covers and extends beyond this simulation. Adapted from Schrijver et al. (2008) Fig. 7.

For example, a relatively modest complication to layer into the torus instability model is to add an upward velocity perturbation with finite duration. MHD simulations by Schrijver et al. (2008) showed that simply changing the duration of this perturbation results in fundamentally different acceleration profiles (Figure 3). With brief perturbations, the profile is single-peaked and occurs at later times. Increasing the duration of the perturbation does not simply result in an earlier peak, but in two peaks. Just as in Figure 2, the heights at which these acceleration profiles differentiate themselves occur across the Heliophysics System Observatory (HSO) measurement gap. SunCET observations can discriminate between single-peak versus double-peak CME acceleration profiles, which then determines the duration of a velocity perturbation in the torus instability model.

Another CME initiation mechanism arises from the magnetic field topology of the flux rope. Hood and Priest (1981) showed that if the total twist in a flux rope exceeds a critical threshold (448°), a “helical kink” instability will occur, causing the flux rope to erupt. Such contortions lead to an impulsive acceleration and a large rotation of the flux rope (Fan 2016, Figure 4). Note the substantial differences in the simulated acceleration profiles between Figures 2, 3, and 4; and that they all occur in the under-observed region.

Figure 4: MFE simulation containing the helical kink instability, resulting in impulsive CME acceleration. The SunCET FOV (0–4 $R_{\odot}$; indicated in light blue) captures the impulse and small jerks. Adapted from Fan (2016).

The other aspect of acceleration is direction: CMEs can be deflected away from “pure” radial propagation by as much as $\sim$30$\degree$, which is again determined primarily by the background field $B_{ex}$ (Figure 5). This force has a non-radial component because the field is not perfectly symmetric about the flux rope, causing a magnetic gradient on the CME’s sides as the loops drape around the rising CME. The Forecasting a Coronal Mass Ejection’s Altered Trajectory (ForeCAT) analytical model accounts for these and other forces on a CME to determine its non-radial velocity (Kay et al., 2013, 2015, 2016; Kay and Gopalswamy, 2018). Furthermore, Kay and Opher (2015) modeled 200 CMEs in ForeCAT and found that deflection occurring in the middle corona accounts for nearly all of the deflection that occurs between initiation and 1 AU. The background magnetic field and radial CME speed are two free parameters in ForeCAT that are critical to get right; SunCET observations can strictly constrain them via forward modeling.

Figure 5: ForeCAT simulations of a CME propagating through background magnetic fields (PFSS) of various strengths. R is radial distance. CMEs experience greater non-radial velocity in middle corona environments with stronger magnetic fields. The SunCET FOV (0–4 $R_{\odot}$; indicated in light blue) captures the majority of CME deflection. Adapted from Kay (2016).

Additionally, coronal dimming often occurs as a result of CMEs. The faster a CME departs, the steeper the decline in coronal emission.
The more mass the CME takes with it, the deeper the drop in coronal emission. A large number of studies have demonstrated this link with coronal imagers (e.g., Aschwanden 2009; Aschwanden et al. 2009; Dissauer et al. 2018, 2019; Thompson et al. 2000) and with spectral irradiance data (Woods et al., 2011; Mason et al., 2014, 2016, 2019). A major advantage of dimming measurements is that they are effective measures of CME kinematics even for CMEs that erupt at disk center. Coronagraphs and imagers struggle to determine halo-CME speed and/or mass, whereas dimming is an effective measure of CME kinematics both on and off disk (Dissauer et al., 2019; Chikunova et al., 2020). Thus, instrument suites that can capture both the dimming and direct observations of limb CMEs are ideal for CME observation. This is precisely what SunCET does.

SunCET will be the first mission that allows continuous measurements of CMEs during their initial acceleration phase using only a single instrument. This is advantageous compared to currently used instruments, where, e.g., EUV imagers in the low corona are combined with white-light coronagraphs higher up to track this phase. Combining such data can introduce artifacts into the resulting CME kinematics because different structures are tracked in the different instruments, since the observed emission is generated by different physical processes. SunCET is not dependent on other instruments to observe CME initiation and acceleration, but it does have a sufficiently wide field of view to overlap with coronagraphs for further expanded studies. The same challenges with different CME structures in EUV versus white light will be present, but SunCET's broader temperature response should mitigate this somewhat.

## 2 Instrument Design

SunCET is an instrument with a Ritchey-Chrétien, wide-field-of-view telescope (4 $R_{\odot}$), an off-Rowland-circle EUV spectrograph, and a novel, simultaneous-high-dynamic-range detector. This new detector technology allows us to image the bright solar disk and CMEs through the dim middle corona simultaneously. It also allows us to measure solar irradiance spectra on the unused portion of the same detector with an integration time independent of the telescope image. The entire design is compact, fitting in a $\sim$15 $\times$ 15 $\times$ 10 cm volume, or about 2.5 CubeSat Units. This makes it ideal as a CubeSat or as a compact instrument suite on a larger spacecraft, requiring few physical resources.

SunCET observes in the EUV rather than white light because 1) CMEs have already been demonstrated to be visible in the EUV and 2) it allows for major simplifications in the technical design of the instrument. While white-light observations are independent of temperature since they rely on light Thomson-scattered from free electrons, SunCET observations do have the caveat that their temperature dependence (emission from ions at particular temperatures) means that CMEs whose plasma is not at ambient coronal temperatures will not be visible. The dynamic range between on and off disk in the EUV is already large ($\sim$$10^{5}$ by 2 $R_{\odot}$) but is orders of magnitude larger in white light ($\sim$$10^{8}$), increasing the technical challenge. Moreover, the absolute brightnesses are vastly different; there are far more visible-light photons.
This presents a major challenge with scattered light: even small imperfections in the optics would scatter enough of the numerous disk photons onto the part of the detector viewing the exceptionally faint middle corona to swamp the CME observations. This is further exacerbated by the fact that most surfaces scatter light more efficiently in visible light than in EUV light. Therefore, SunCET observes CMEs in the EUV.

## 3 Imager Design

The SunCET imager was designed to provide high dynamic range with moderate spatial resolution while providing a field of view, out to 4 $R_{\odot}$, that is unprecedented among historical on-disk EUV imagers. This section describes the technical design details that were traded in order to close on the science question.

### 3.1 Dynamic Range

The SunCET imager requires a dynamic range of at least $7\times10^{4}$, based on GOES-16/SUVI observations of CMEs and SunCET's design optimizations. The dimmest target of interest is a CME at the outer FOV, and the brightest is the coronal loops of an active region associated with a CME. SUVI-observed radiances are used to estimate brightness in SunCET (see Section 3.7). At 3.5 $R_{\odot}$, CMEs are $6.9\times10^{-4}$ W/m$^2$/sr. A few of the brightest pixels in active regions reach $\sim$70 W/m$^2$/sr, but typical active-region values in SunCET are $\sim$4.8 W/m$^2$/sr. Another factor of 10 is included to distinguish the loops from the background solar disk. Thus, the required dynamic range is (4.8 / $6.9\times10^{-4}$) $\times$ 10 $\approx$ $7\times10^{4}$. We allow solar flares and a small number of the brightest pixels inside active regions to saturate because 1) they are not our target of interest, 2) our entrance filter mesh mitigates diffraction (Section 3.4), and 3) the blooming in our detector is modest: only a few percent, extending across a few pixels (verified during the NASA 36.336 sounding rocket flight and in the lab).

Projected performance: CME brightness at the outer SunCET FOV of 4 $R_{\odot}$ is $2.1\times10^{-4}$ W/m$^2$/sr, which implies a dynamic range of $2.3\times10^{5}$. From 0–1.05 $R_{\odot}$ we run exposures of 0.025 seconds, and from 1.05–4 $R_{\odot}$ the exposures will be 10 seconds, a factor of 400$\times$ in dynamic range. Our detector has a native dynamic range of $\sim$$5\times10^{3}$. 2$\times$2 pixel binning provides an additional factor of 4. Combining these, we obtain SunCET's high dynamic range of $8\times10^{6}$, well above the required $7\times10^{4}$. For comparison, the SDO/AIA dynamic range is $1\times10^{4}$ (Lemen et al., 2012).
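The dynamic-range budget above can be checked with a few lines of arithmetic; a minimal sketch (the variable names are ours, not the mission's):

```python
# Dynamic-range budget for the SunCET imager (numbers from Section 3.1).
cme_at_3p5_rsun = 6.9e-4   # W/m^2/sr, dimmest required target
active_region   = 4.8      # W/m^2/sr, typical bright active-region loops
disk_margin     = 10       # factor to separate loops from the disk

required_dr = active_region / cme_at_3p5_rsun * disk_margin
print(f"required dynamic range ~ {required_dr:.1e}")   # ~7.0e4

# Achieved: exposure ratio x native detector range x 2x2 binning gain
exposure_ratio = 10 / 0.025   # 400x between long and short exposures
native_dr      = 5e3          # detector's native dynamic range
binning_gain   = 4            # from 2x2 pixel binning
achieved_dr = exposure_ratio * native_dr * binning_gain
print(f"achieved dynamic range ~ {achieved_dr:.0e}")    # 8e6
```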
### 3.2 Field of View

Most CMEs accelerate through the low and middle corona (Bein et al., 2011; D'Huys et al., 2014). We set our required minimum field of view (FOV) at 0.5 $R_{\odot}$, corresponding to $\pm$30$\degree$ from disk center. CMEs originating closer to disk center than this tend to be halo CMEs, from which height-time profiles are difficult to obtain. The outer FOV requirement is set to 3.5 $R_{\odot}$, so that SunCET covers the gap between existing instruments and includes enough overlap to ensure a smooth transition in any complementary height-time profiles. SOHO/LASCO's inner FOV is 2.4 $R_{\odot}$, and its upcoming replacements, NOAA's GOES-U/CCOR and SWFO/CCOR, will have an inner FOV of 3 $R_{\odot}$.

The aforementioned traditional CME measurements, which come from white-light coronagraphs, use occulters that are mechanically restricted to a limited distance from the optics; these observations therefore have significantly degraded spatial resolution in their inner FOV, much worse than their stated plate-scale resolution, sometimes upwards of 1 arcmin in the inner FOV. These effects are primarily due to vignetting (e.g., Koutchmy 1988; Aime et al. 2019). This is not the case with SunCET: it does not require an occulter to observe CMEs in the low and middle corona, so its spatial resolution is not diffraction-limited and is superior even in the FOV region that overlaps with the coronagraphs.

Projected performance: The FOV of SunCET is 0–4 $R_{\odot}$ (5.6 $R_{\odot}$ in the image corners).

### 3.3 Temporal Resolution: Exposure and Cadence

SunCET is required to observe CMEs with speeds up to at least 1000 km/s, which accounts for 98% of all CMEs (Gopalswamy et al., 2009; Barlyaeva et al., 2018). Given the cadence described below and the field of view, SunCET's projected performance is to observe CMEs with speeds up to 3900 km/s. The fastest CME in the CDAW catalog is $\sim$3400 km/s, meaning that SunCET will be able to track CMEs at any previously observed speed.

SunCET requires an exposure time $\leq$23 seconds in order to avoid motion blur of the CME. Combining the fastest required CME to observe (1000 km/s), our required spatial resolution of 30″ per resolution element, and the conversion of angular to spatial scale at 1 AU ($\sim$750 km/arcsec), we obtain 750 $\times$ 30 / 1000 $\approx$ 23 seconds per resolution element.

Projected performance - exposure: SunCET's exposure times are 0.025 seconds from 0–1.05 $R_{\odot}$ and 10 seconds beyond that.

SunCET requires a cadence $\leq$3.2 minutes. SunCET must be able to track a 1000 km/s CME from the solar limb through its FOV, a range of 2.5 $R_{\odot}$, or $1.74\times10^{6}$ km. Therefore, the minimum time a CME would be in the FOV is 29 minutes. We require at least 9 height-time samples to distinguish acceleration profiles (Figure 3). Thus, our cadence must be less than 29 minutes / 9 samples = 3.2 minutes.

Projected performance - cadence: The SunCET mission is designed to downlink 1-minute-cadence data. The designed FOV actually extends to 4 $R_{\odot}$, meaning we will capture 38 height-time points for limb CMEs traveling at 1000 km/s, and more points for CMEs that start slightly on disk and/or move more slowly. For example, the average CME speed is 490 km/s (Webb and Howard, 2012); a CME at that speed crossing from 0.7–4 $R_{\odot}$ yields 78 height-time points.
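The exposure and cadence requirements above reduce to simple kinematics; a minimal check (variable names ours):

```python
# Exposure and cadence requirements for the fastest required CME (Section 3.3).
v_cme         = 1000.0   # km/s, fastest CME the requirement covers
res_el        = 30.0     # arcsec per resolution element (Section 3.5)
km_per_arcsec = 750.0    # approximate angular-to-spatial conversion at 1 AU

max_exposure = km_per_arcsec * res_el / v_cme
print(f"max exposure ~ {max_exposure:.0f} s")          # ~23 s

r_sun_km  = 6.96e5
fov_range = 2.5 * r_sun_km                              # limb to 3.5 Rsun
transit_min = fov_range / v_cme / 60
print(f"FOV transit ~ {transit_min:.0f} min")           # ~29 min
print(f"max cadence ~ {transit_min / 9:.1f} min")       # ~3.2 min (9 samples)

# At the 1-min downlink cadence, an average 490 km/s CME crossing 0.7-4 Rsun:
points = (4 - 0.7) * r_sun_km / 490.0 / 60
print(f"height-time points ~ {points:.0f}")             # ~78
```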
### 3.4 Bandpass: Coatings and Filters

Table 1: Strong emission lines in the SunCET bandpass. Irradiance measured by SDO/EVE (Woods et al., 2012).

Ion | $\uplambda$ [Å] | $\log_{10}$(T [K]) | Quiet-Sun Irradiance [$\upmu$W/m$^2$/Å]
---|---|---|---
Fe IX | 171.1 | 5.9 | 67
Fe X | 174.5 | 6.1 | 73
Fe X | 177.2 | 6.1 | 48
Fe XI | 180.4 | 6.2 | 77
Fe XI (doublet) | 188.2 | 6.2 | 61
Fe XII | 193.5 | 6.2 | 45
Fe XII | 195.1 | 6.2 | 63

CMEs have been routinely identified in narrowband EUV imagers sensitive to temperatures between $\sim$0.6–1.6 MK (e.g., GOES/SUVI). Therefore, SunCET is required to observe at least one of the emission lines identified in Table 1.

Projected performance: SunCET's baseline bandpass is 170–200 Å, capturing all of the emission lines in Table 1, which boosts the signal (Section 3.7).

The telescope mirrors employ reflective multilayer coatings designed to provide broad spectral response spanning the instrument bandpass. These coatings follow an aperiodic design and comprise 15 repetitions of alternating layers of B4C, Mo, and Al, with individual layer thicknesses ranging from $\sim$5–100 Å. The aperiodic coating design provides an average reflectance of $\sim$33% from 170–200 Å, as shown in Figure 6. For reference, periodic multilayer coatings operating in this portion of the EUV are generally used for narrow-band response: for example, the periodic Si/Mo coatings used for the 195 Å channel of the GOES/SUVI instrument, also shown in Figure 6, achieve a peak reflectance of $\sim$34% with a spectral bandpass of $\sim$9.5 Å full-width-half-max (FWHM). Figure 6 also shows the periodic Al/Zr coatings used for the Hi-C rocket instrument (Kobayashi et al., 2014), which achieve a peak reflectance of $\sim$50% with a spectral bandpass of $\sim$8.5 Å FWHM. The aperiodic B4C/Mo/Al multilayer coatings are currently under development with funding from the NASA H-TIDeS program.

Figure 6: Calculated reflectance near normal incidence (5°) of the broad-band, aperiodic B4C/Mo/Al multilayers used for the SunCET telescope mirrors (green) and, for reference, the narrow-band, periodic Si/Mo multilayer coatings used for the GOES/SUVI instrument (red) and the Al/Zr multilayer coatings used for the Hi-C rocket instrument (blue).

The C/Al/C entrance filter from Luxel Corporation prevents visible light from entering the chamber and has high heritage (24 of them fly on GOES/EXIS). It is supported by a 5 lines/inch mesh, which has heritage from the Hi-C sounding rocket flights and avoids the diffraction issues of the 70 lines/inch mesh used on SDO/AIA and TRACE (Lemen et al., 2012; Lin et al., 2001). A second C/Al/C filter directly in front of the detector eliminates visible light from possible pinholes in the primary filter or from stray light in the instrument.

### 3.5 Spatial Resolution

SunCET requires spatial resolution better than 30″. CME flux ropes often manifest observationally as a cavity that trails behind a bright front (Forsyth et al., 2006). The smallest cavities have a diameter of 0.2 $R_{\odot}$ (180″) and are approximately circular, which corresponds to a circumference of $\sim$600″ (Fuller and Gibson, 2009). To account for non-circularities, we require $\sim$20 points outlining the cavity, which results in our spatial resolution requirement of 600″/20 = 30″. Figure 7 shows a cavity observed in PROBA2/SWAP (3.16″ resolution) binned down to demonstrate that cavities can be resolved at this resolution in practice.

Projected performance: SunCET provides 20″ resolution. Its plate scale is 4.8″/pixel, so 2$\times$2 binning can be applied while still meeting the Nyquist sampling criterion.

Figure 7: CME cavity observed in PROBA2/SWAP 174 Å, binned down to the SunCET required resolution of 30″ (projected performance is 20″). The cavity remains easily identifiable. SunCET's SNR will be 9–30$\times$ higher off disk, making CME identification even easier. The 1.7 $R_{\odot}$ FOV shown here, the largest of any solar EUV imager to date, is SWAP's; SunCET's extends to 4 $R_{\odot}$. Adapted from Byrne et al. (2014).

### 3.6 Mirrors

Figure 8: SunCET's compact Ritchey-Chrétien telescope, which fits inside a 6U CubeSat with all typical bus components.

SunCET contains a Ritchey-Chrétien (RC) telescope encased in a vacuum chamber with a one-time-release door (Figure 8).
This type of telescope has good performance over wide fields of view (Figure 9) and has been used frequently for similar instruments (e.g., SOHO/EIT, STEREO/EUVI, GOES/SUVI). Despite its compact size, the telescope achieves nearly flat resolution across the wide FOV. The mount for the secondary mirror is designed with a coefficient of thermal expansion matching the mirror's to account for focus sensitivity.

Figure 9: Left: Ray trace of the SunCET optics. Right: 80% encircled spot diameter over the FOV. This simple design yields excellent performance, with a mean resolution of 20″ that is flat across nearly the entire FOV.

### 3.7 Signal to Noise Ratio (SNR)

SunCET requires a signal to noise ratio (SNR) $\geq$10. This is the international standard that defines digital image quality as "acceptable" (ISO 12232, 2019); the same standard defines an SNR of 40 as "excellent". These numbers are in line with the expectations of experts who have done CME image processing with coronagraph and EUV imager data.

Table 2: SunCET SNRs for on-disk features and CME loops above the limb. Radiances are from GOES/SUVI 195 Å images of the 2017-09-10 CME (Seaton and Darnel, 2018) and are extrapolated beyond its FOV of 1.7 $R_{\odot}$. SNR at all heights is above the level that ISO 12232 defines as "excellent".

Quantity | Quiet Sun | Active Region | Flare | 1.05 $R_{\odot}$ | 1.5 $R_{\odot}$ | 2 $R_{\odot}$ | 3 $R_{\odot}$ | 3.5 $R_{\odot}$ | 4 $R_{\odot}$
---|---|---|---|---|---|---|---|---|---
Radiance [W/m$^2$/sr] | 0.1 | 10 | 40 | 0.2 | $1.5\times10^{-2}$ | $3\times10^{-3}$ | $3\times10^{-4}$ | $1\times10^{-4}$ | $3\times10^{-5}$
Effective exposure [s] | 0.025 | 0.025 | 0.025 | 0.025 | 10 | 10 | 10 | 10 | 10
e-/res-element | $1.48\times10^{4}$ | $1.48\times10^{6}$ | $5.94\times10^{6}$ | $2.97\times10^{4}$ | $8.9\times10^{5}$ | $1.78\times10^{5}$ | $1.78\times10^{4}$ | $5.94\times10^{3}$ | $1.78\times10^{3}$
Saturation limit [e-/res-element] | $1.08\times10^{5}$ | $1.08\times10^{5}$ | $1.08\times10^{5}$ | $1.08\times10^{5}$ | $1.08\times10^{6}$ | $1.08\times10^{6}$ | $1.08\times10^{6}$ | $1.08\times10^{6}$ | $1.08\times10^{6}$
SNR | 122 | Saturated | Saturated | 172 | 944 | 422 | 133 | 77 | 42

Projected performance: Table 2 shows the SunCET SNR as a function of distance from the Sun, based on the parameters shown in Table 3. Conservative radiance estimates come from GOES/SUVI 195 Å images of a CME that was tracked all the way to the edge of the SUVI 1.7 $R_{\odot}$ FOV (Seaton and Darnel, 2018). For the solar disk, the effective exposure is the median of three 0.025-second images; for 1.05–4 $R_{\odot}$, it is the median of ten 1-second exposures. This removes energetic particle tracks and, for the long exposure, increases the full-well saturation limit of the detector by a factor of 10. These conservative estimates show that SunCET CME measurements would have an excellent SNR of 42 even out at 4 $R_{\odot}$.
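The SNR values in Table 2 follow from the signal levels being strongly shot-noise dominated; a minimal numerical check (detector noise terms from Table 3; the simple noise model and variable names are our assumptions):

```python
import math

# Shot-noise-limited SNR per resolution element (Table 2), with the small
# detector noise terms from Table 3 folded in for completeness.
read_noise = 5.0     # e-/pixel
fano_noise = 1.3     # e-/pixel
dark_rate  = 0.08    # e-/pixel/s (upper bound)
pix_per_res_el = 4   # 2x2 binning

def snr(signal_e, exposure_s):
    dark = dark_rate * exposure_s * pix_per_res_el
    var = signal_e + dark + pix_per_res_el * (read_noise**2 + fano_noise**2)
    return signal_e / math.sqrt(var)

for label, s, t in [("quiet Sun", 1.48e4, 0.025),
                    ("1.5 Rsun",  8.9e5,  10),
                    ("4 Rsun",    1.78e3, 10)]:
    print(f"{label}: SNR ~ {snr(s, t):.0f}")
# -> ~121, ~943, ~41: within a unit or two of Table 2's 122, 944, and 42,
#    confirming the shot-noise term dominates the detector noise terms.
```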
Table 3: SunCET instrument parameters needed to calculate SNR.

Instrument parameter | Value | Description
---|---|---
Wavelength | 170–200 Å | Broadband response defined by the mirror coating
Aperture size | 44.9 cm$^2$ | 9.6 cm diameter truncated on two sides to a height of 7.62 cm, with a 4.8 cm diameter secondary mirror obscuring its center
Weighted factor for broadband | 6.88 | 7 emission lines in the bandpass weighted by their quiet-Sun intensity relative to the 195 Å emission line (see Table 1)
Pixel size | 7 $\upmu$m $\times$ 7 $\upmu$m | e2v CIS115 datasheet; confirmed in house
Pixel array | 1500 $\times$ 1500 | Full array is 1504 $\times$ 2000; $\sim$5 rows dedicated to dark
FOV | 4 $R_{\odot}$ | Design FOV (requirement is 3.5 $R_{\odot}$)
Plate scale | 4.8″/pixel | From pixel size, number of pixels, and FOV; 2$\times$2 binning is applied, giving 9.6″/resolution element
Optics throughput | 0.06 | 2 mirrors with B4C/Mo/Al coatings (0.35 each), entrance Al/C filter (0.6) with 5 lpi filter mesh (0.98), Al secondary/pinhole filter (0.85)
Quantum yield | 18.3 e-/photon | Average over the 170–200 Å bandpass
Dark noise | $<$0.08 e-/pixel/s | At -10°C, from LASP lab tests
Readout noise | 5 e-/pixel | From LASP lab tests
Fano noise | 1.3 e-/pixel | Fano factor of 0.1 for Si
Max read rate | 0.1 s (full frame); 0.025 s (up to 500 rows) | In SunCET, 500 rows corresponds to 0–1.33 $R_{\odot}$

Few observations of the extended corona above $\sim$2 $R_{\odot}$ have been made in the EUV, but among these there is clear evidence that the CME signal will be detectable (Tadikonda et al. 2019, Figure 10). At about 3 $R_{\odot}$, noise in SUVI becomes comparable to the solar signal. SunCET, however, is optimized for this large FOV: compared to SUVI it has a larger primary mirror geometric area (3.5$\times$), broadband wavelength response (6.88$\times$), and larger pixel solid angle (16$\times$), for a total 385$\times$ boost in signal. Furthermore, the SunCET mirrors will be polished to the highest degree possible, up to 3 times the smoothness of SUVI's, to minimize scattered light.

Figure 10: Composite of GOES/SUVI 195 Å off-point images that shows solar structure out to 3 $R_{\odot}$, even without a bright CME, before stray light in the instrument becomes comparable to the coronal signal. Adapted from Tadikonda et al. (2019).
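Two of the figures just quoted can be reproduced directly from the listed element efficiencies; a quick check (values taken from Table 3 and Section 3.7):

```python
# Optics throughput: product of the element efficiencies listed in Table 3.
mirror, ent_filter, mesh, sec_filter = 0.35, 0.6, 0.98, 0.85
throughput = mirror**2 * ent_filter * mesh * sec_filter
print(f"throughput ~ {throughput:.3f}")          # ~0.061, quoted as 0.06

# Total signal boost relative to GOES/SUVI.
boost = 3.5 * 6.88 * 16  # aperture x broadband response x pixel solid angle
print(f"signal boost vs SUVI ~ {boost:.0f}x")    # ~385x
```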
## 4 Spectrograph Design

The SunCET irradiance spectrograph channel is a high-heritage, off-Rowland-circle design based on the SDO/EVE Multiple EUV Grating Spectrographs A2 (MEGS-A2) channel (Crotser et al., 2007). It provides the full-Sun solar irradiance from 170–340 Å at 1 Å spectral resolution. This EUV range is important for overlapping with the SunCET imager EUV bands for calibration purposes and provides additional science capability. It observes Fe IX through Fe XVI emission lines that often experience coronal dimming during CMEs (Woods et al., 2011; Mason et al., 2014, 2016, 2019). This allows halo-CME kinematics to be tracked even if SunCET is not deployed on multiple platforms with stereoscopic viewing angles. It also enables study of the energetics powering the CME as a function of time. The spectrograph shares the vacuum door and detector with the SunCET imager but has its own optical path, including the entrance slit, filters, and grating. These measurements are especially pressing because EVE/MEGS-A experienced a CCD electronics anomaly in 2014 May, preventing continued solar observations by MEGS-A. While other EVE channels and the new GOES EUV Sensor (EUVS) channels continue solar EUV observations in the 170–340 Å range, they are only broadband measurements that are optimized neither for coronal dimming irradiance observations nor for detailed calibration of solar EUV imagers.

### 4.1 Spectrograph Dynamic Range

The solar irradiance values from 170–340 Å, as measured by SDO/EVE (Woods et al., 2012), range from ${10}^{-6}$–${10}^{-2}$ W/m$^2$/nm, owing to variations between the peaks of the emission lines in this range and the reduced irradiance between the strong emission lines, as well as to solar activity spanning solar minimum through the largest solar flares. The required dynamic range of the spectrograph is therefore $1\times10^{4}$.

Projected performance: The $8\times10^{6}$ dynamic range discussed in Section 3.1 is more than two orders of magnitude better than needed for the spectrograph.

### 4.2 Spectrograph Spectral Range and Resolution

The SunCET spectrograph requires a spectral range of 170–340 Å and 1 Å spectral resolution. The entrance slit of the spectrograph is 3 $\times$ 0.028 mm: the slit height (cross-dispersion direction) maximizes the slit image on the allotted 500-pixel height of the detector to maximize the SNR, while the width is optimized to meet the 1 Å spectral resolution requirement. It is this slit width, together with the grating ruling, that limits the spectral resolution. The grating ruling, distance, and curvature are all optimized to meet the spectral range and resolution as well. After being dispersed from the grating, the optical path passes through the hole in the secondary imager mirror and onto the common detector. The grating is a Type-I concave imaging grating, so that it images the slit onto the detector. An Al/C entrance filter is mounted to the entrance slit to limit the spectral bandpass close to the required range, and an additional Al filter at the entrance to the imager optical cavity provides further bandpass rejection and protects against any stray light or pinholes that may develop in the first filter.

Given the 1500 allotted pixels in the dispersion direction, the plate-scale resolution is approximately 0.11 Å per pixel; the spectrograph therefore oversamples the spectral resolution by about a factor of 9, or 4.5 with the 2$\times$2 pixel binning. This allows spectral lines to be fitted, enabling Doppler-shift measurements of emission lines and calculations of plasma velocity flows during flares (Chamberlin, 2016; Hudson et al., 2011).

Projected performance: SunCET provides 1 Å spectral resolution across the fully observed 170–340 Å spectral range.
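The dispersion plate scale and oversampling factors quoted above follow directly from the pixel budget; a minimal check (variable names ours):

```python
# Spectrograph dispersion plate scale and oversampling (Section 4.2).
span_angstrom = 340 - 170   # spectral range, Angstroms
pixels        = 1500        # pixels allotted along the dispersion direction
plate_scale = span_angstrom / pixels
print(f"plate scale ~ {plate_scale:.3f} A/pixel")               # ~0.113

resolution = 1.0            # required spectral resolution, Angstroms
print(f"oversampling ~ {resolution / plate_scale:.1f}x")        # ~8.8x (~9x)
print(f"with 2x2 binning ~ {resolution / (2 * plate_scale):.1f}x")  # ~4.4x
```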
### 4.3 Spectrograph Signal to Noise Ratio (SNR)

The SunCET spectrograph also requires an SNR of 10 or better, as discussed in Section 3.7. This is achieved by using a long slit and minimal optical elements, along with the high-QE detector. The slit was also sized, and the filter thickness optimized, to maximize the SNR while conservatively avoiding saturation or even exceeding the linear full-well capacity of the CMOS sensor. Even with a very large factor-of-10 increase in these lines during flares (Chamberlin et al., 2008, 2018), the lines given in Table 4 will still be almost another factor of 2 below the full well of this sensor.

Table 4: The SunCET spectrograph SNRs for various strong emission lines. Irradiances are from SDO/EVE (Woods et al., 2011). SNR for all lines is above the level that ISO 12232 defines as "excellent".

Wavelength (Å) | 171 | 193.5 | 195 | 304 | 335
---|---|---|---|---|---
Irradiance [W/m$^2$/sr] | $6.7\times10^{-4}$ | $4.5\times10^{-4}$ | $6.3\times10^{-4}$ | $1.0\times10^{-3}$ | $1.0\times10^{-4}$
Integration [s] | 10 | 10 | 10 | 10 | 10
Counts/pixel | 737 | 495 | 693 | 1100 | 110
SNR | 272 | 237 | 282 | 444 | 145

Projected performance: Table 4 shows the SunCET spectrograph SNR for five strong emission lines, based on the parameters shown in Table 5. These estimates show that SunCET solar spectral irradiance measurements would have an excellent SNR of better than 100.

Table 5: SunCET spectrograph instrument parameters needed to calculate SNR.

Instrument parameter | Value | Description
---|---|---
Wavelength | 170–340 Å | Contains various strong emission lines, including some that show coronal dimming; defined by the grating equation
Aperture size | 0.0098 cm$^2$ | 3.0 mm tall $\times$ 28 $\upmu$m wide
Number of pixels per emission line | 2000 | 500 pixels tall $\times$ 4 pixels wide (defined by the slit)
Pixel size | 7 $\upmu$m $\times$ 7 $\upmu$m | Teledyne e2v CIS115 datasheet; confirmed in house
Pixel allocation | 500 $\times$ 1500 | Full array is 1504 $\times$ 2000; $\sim$5 rows dedicated to dark
FOV | Full Sun | Solar irradiance; images the slit
Plate scale | 0.011 nm | From pixel size, number of pixels, and wavelength range; oversamples the 0.1 nm spectral resolution
Optics throughput | 0.0122 | Grating efficiency (0.06), Pt grating coating (0.4), Al/C entrance filter (0.6), Al secondary/pinhole filter (0.85)
Quantum yield | 18.3 e-/photon | Average over the 170–200 Å bandpass
Dark noise | $<$0.08 e-/pixel/s | At -10°C, from LASP lab tests
Readout noise | 5 e-/pixel | From LASP lab tests
Fano noise | 1.3 e-/pixel | Fano factor of 0.1 for Si

## 5 Detector

Figure 11: The Teledyne e2v CIS115 detector and the LASP Compact Camera and Processor (CCAP) that flew successfully on a NASA sounding rocket in 2018; CCAP is now flying on the CSIM CubeSat launched in 2018.

SunCET uses a Teledyne e2v CIS115 back-illuminated, back-thinned CMOS sensor (Table 3, Figure 11). This sensor is a 1504$\times$2000 pixel array, of which a square area of 1500$\times$1500 pixels is dedicated to the image while the remaining 500$\times$1500 pixels record the spectrally dispersed slit image from the irradiance spectrograph. Using a single detector to record data from two technically different but scientifically complementary channels significantly reduces the technical resources needed while maximizing science potential. In 2017, LASP developed custom readout electronics for this sensor that enable independent exposure control per row; a per-pixel readout is now being developed. LASP's Compact Camera and Processor (CCAP; Figure 11) system with this detector was successfully flown in 2018 on the NASA 36.336 sounding rocket (PI: T. Woods, U. of Colorado/LASP) and more recently in January 2020 on the NASA 36.356 sounding rocket (PI: S. Bailey, Virginia Tech). CCAP includes a Xilinx Kintex-7 FPGA with an embedded 32-bit processor and a dedicated image compression core.

## 6 Instrument Requirements on Spacecraft

The instruments described above place requirements on the performance and capabilities of whatever spacecraft hosts them. These requirements are primarily driven by the imager.
Pointing accuracy must be better than 30″, with stability better than 30″ RMS over 23 seconds and knowledge better than 10″. This ensures that the center of the Sun stays in the center of the portion of the detector dedicated to the imager and does not drift significantly during or between integrations. This pointing performance is achievable even on CubeSat platforms, as demonstrated by the Miniature X-ray Solar Spectrometer (MinXSS), the Arcsecond Space Telescope Enabling Research in Astrophysics (ASTERIA), the Compact Spectral Irradiance Monitor (CSIM), and others (Mason et al., 2017; Pong, 2018).

Prime science data generation depends heavily on CME occurrence rates, but downlink schemes can easily be designed for flexibility, and the "poorest" CMEs can be ignored if there are bandwidth limitations. For CME occurrence rates at the middle of the rising phase of the solar cycle, SunCET generates $\sim$28 MB/day for the imager and $\sim$65 MB/day for the spectrograph. These data are compressed using a lossless JPEG-LS scheme.

## 7 Conclusions

The SunCET instrument fills a crucial, historically under-observed region of the Sun, the middle corona, precisely the region where CMEs experience the majority of their acceleration. This region is inherently very difficult to observe because of the extreme intensity dynamic range between the bright solar disk and the dim corona. SunCET introduces a new technology that avoids the limitations of previous instruments. By developing a detector that can vary exposure time across its surface, we can simultaneously observe the disk without saturating and the dim middle corona, allowing us to track CMEs from their initiation all the way through their primary acceleration phase. Moreover, we can image spectra on the same detector with their own, independent integration time.

Figure 12: Tracking a very fast CME in GOES/SUVI 195 Å base-difference images. The CME quickly extended beyond the FOV of SUVI. SunCET's FOV (light blue shading) is more than twice as large. Adapted from Veronig et al. (2018).

There is a large body of knowledge for tracking CMEs in coronagraphs and EUV imagers (Sarkar et al., 2019; O'Hara et al., 2019; Veronig et al., 2018; Byrne et al., 2014; Mierla et al., 2013; Bein et al., 2011; Gopalswamy et al., 2009; Vršnak et al., 2007). SunCET data processing will employ the techniques already developed for other observatories but will improve on their results because of SunCET's wider FOV (e.g., Veronig et al. 2018; Figure 12) and because it does not require a serendipitous alignment between instrument off-point campaigns and CME occurrence (e.g., O'Hara et al. 2019). Below we summarize:

1. The majority of CME acceleration occurs in a historical observational gap: the middle corona
2. Observations of full CME acceleration profiles provide tight constraints on models and thus on our physical understanding of how the magnetically dominated corona influences CME kinematics
3. SunCET provides these observations, overcoming the limits of traditional technologies with a novel simultaneous-high-dynamic-range detector
4. SunCET is compact and thus suitable for CubeSat missions or as an instrument on a larger spacecraft

SunCET is presently in a NASA-funded, competitive Phase A as a 6U CubeSat and has also been proposed to NASA as an instrument onboard a 184 kg Mission of Opportunity.
## 8 Acknowledgements

J.P.M. thanks the numerous people who contributed to the development of the SunCET concept design and the reviewers for their comments that made this paper stronger. A.M.V. and K.D. acknowledge the Austrian Space Applications Programme of the Austrian Research Promotion Agency FFG (ASAP-11 4900217 CORDIM and ASAP-14 865972 SSCME, BMVIT).

## References

* Aime et al. (2019) Aime, C., C. Theys, R. Rougeot, and H. Lantéri, 2019. Principle of Fredholm image reconstruction in the vignetting zone of an externally occulted solar coronagraph: Application to ASPIICS. _Astronomy & Astrophysics_, 622, A212. 10.1051/0004-6361/201833843.
* Aschwanden (2009) Aschwanden, M. J., 2009. 4-D modeling of CME expansion and EUV dimming observed with STEREO/EUVI. _Annales Geophysicae_, 27(8), 3275–3286. 10.5194/angeo-27-3275-2009.
* Aschwanden et al. (2009) Aschwanden, M. J., N. V. Nitta, J.-P. Wuelser, J. R. Lemen, A. Sandman, A. Vourlidas, and R. C. Colaninno, 2009. First Measurements of the Mass of Coronal Mass Ejections From the EUV Dimming Observed With Stereo EUVI A + B Spacecraft. _The Astrophysical Journal_, 706(1), 376–392. 10.1088/0004-637X/706/1/376.
* Barlyaeva et al. (2018) Barlyaeva, T., J. Wojak, P. Lamy, B. Boclet, and I. Toth, 2018. Periodic behaviour of coronal mass ejections, eruptive events, and solar activity proxies during solar cycles 23 and 24. _Journal of Atmospheric and Solar-Terrestrial Physics_, 177, 12–28. 10.1016/j.jastp.2018.05.012.
* Bateman (1978) Bateman, G., 1978. MHD Instabilities. MIT Press, Cambridge, Massachusetts. ISBN 9780262021319.
* Bein et al. (2011) Bein, B. M., S. Berkebile-Stoiser, A. M. Veronig, M. Temmer, N. Muhr, I. Kienreich, D. Utz, and B. Vršnak, 2011. Impulsive Acceleration of Coronal Mass Ejections. I. Statistics and Coronal Mass Ejection Source Region Characteristics. _Astrophysical Journal_, 738(2), 191. 10.1088/0004-637X/738/2/191.
* Byrne et al. (2014) Byrne, J. P., H. Morgan, D. B. Seaton, H. M. Bain, and S. R. Habbal, 2014. Bridging EUV and white-light observations to inspect the initiation phase of a "two-stage" solar eruptive event. _Solar Physics_, 289(12), 4545–4562. 10.1007/s11207-014-0585-8.
* Chamberlin (2016) Chamberlin, P. C., 2016. Measuring Solar Doppler Velocities in the He ii 30.38 nm Emission Using the EUV Variability Experiment (EVE). _Solar Physics_, 291(6), 1665–1679. 10.1007/s11207-016-0931-0.
* Chamberlin et al. (2018) Chamberlin, P. C., T. N. Woods, L. Didkovsky, F. G. Eparvier, A. R. Jones, et al., 2018. Solar Ultraviolet Irradiance Observations of the Solar Flares During the Intense September 2017 Storm Period. _Space Weather_, 16(10), 1470–1487. 10.1029/2018SW001866.
* Chamberlin et al. (2008) Chamberlin, P. C., T. N. Woods, and F. G. Eparvier, 2008. Flare Irradiance Spectral Model (FISM): Flare component algorithms and results. _Space Weather_, 6(5). 10.1029/2007SW000372.
* Chikunova et al. (2020) Chikunova, G., K. Dissauer, T. Podladchikova, and A. M. Veronig, 2020.
Coronal dimmings associated with coronal mass ejections on the solar limb. _submitted_. URL http://arxiv.org/abs/2005.03348.
* Crotser et al. (2007) Crotser, D. A., T. N. Woods, F. G. Eparvier, M. A. Triplett, and D. L. Woodraska, 2007. SDO-EVE EUV spectrograph optical design and performance. In S. Fineschi and R. A. Viereck, eds., Solar Physics and Space Weather Instrumentation II, vol. 6689, 66890M. SPIE. 10.1117/12.732592.
* D'Huys et al. (2014) D'Huys, E., D. B. Seaton, S. Poedts, and D. Berghmans, 2014. Observational characteristics of coronal mass ejections without low-coronal signatures. _Astrophysical Journal_, 795(1), 49. 10.1088/0004-637X/795/1/49.
* Dissauer et al. (2019) Dissauer, K., A. M. Veronig, M. Temmer, and T. Podladchikova, 2019. Statistics of coronal dimmings associated with coronal mass ejections. II. Relationship between coronal dimmings and their associated CMEs. _The Astrophysical Journal_, 874(2), 123. 10.3847/1538-4357/ab0962.
* Dissauer et al. (2018) Dissauer, K., A. M. Veronig, M. Temmer, T. Podladchikova, and K. Vanninathan, 2018. Statistics of Coronal Dimmings Associated with Coronal Mass Ejections. I. Characteristic Dimming Properties and Flare Association. _The Astrophysical Journal_, 863(2), 169. 10.3847/1538-4357/aad3c6.
* Fan (2016) Fan, Y., 2016. Modeling the Initiation of the 2006 December 13 Coronal Mass Ejection in AR 10930: The Structure and Dynamics of the Erupting Flux Rope. _The Astrophysical Journal_, 824(2), 93. 10.3847/0004-637X/824/2/93.
* Forbes et al. (2018) Forbes, T. G., D. B. Seaton, and K. K. Reeves, 2018. Reconnection in the Post-impulsive Phase of Solar Flares. _The Astrophysical Journal_, 858, 70. 10.3847/1538-4357/aabad4.
* Forsyth et al. (2006) Forsyth, R. J., V. Bothmer, C. Cid, N. U. Crooker, T. S. Horbury, et al., 2006. ICMEs in the inner heliosphere: Origin, evolution and propagation effects: Report of working group G. _Space Science Reviews_, 123, 383–416. 10.1007/s11214-006-9022-0.
* Fuller and Gibson (2009) Fuller, J., and S. E. Gibson, 2009. A survey of coronal cavity density profiles. _The Astrophysical Journal_, 700, 1205–1215. 10.1088/0004-637X/700/2/1205.
* Gopalswamy et al. (2009) Gopalswamy, N., S. Yashiro, G. Michalek, G. Stenborg, A. Vourlidas, S. Freeland, and R. A. Howard, 2009. The SOHO/LASCO CME Catalog. _Earth Moon Planet_, 104, 295–313. 10.1007/s11038-008-9282-7.
* Green et al. (2018) Green, L. M., T. Török, B. Vršnak, W. Manchester, and A. Veronig, 2018. The Origin, Early Evolution and Predictability of Solar Eruptions. _Space Science Reviews_, 214(1), 46. 10.1007/s11214-017-0462-5.
* Hood and Priest (1981) Hood, A. W., and E. R. Priest, 1981. Critical Conditions for Magnetic Instabilities in Force-Free Coronal Loops. _Geophysical & Astrophysical Fluid Dynamics_, 17(1), 297–318. 10.1080/03091928108243687.
* Howard et al. (2008) Howard, R. A., J. D. Moses, A. Vourlidas, J. S. Newmark, D. G. Socker, et al., 2008. Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI). _Space Science Reviews_, 136(1-4), 67–115. 10.1007/s11214-008-9341-4.
* Hudson et al. (2011) Hudson, H. S., T. N. Woods, P. C. Chamberlin, L. Fletcher, G. Del Zanna, L. Didkovsky, N. Labrosse, and D. Graham, 2011. The EVE Doppler Sensitivity and Flare Observations. _Solar Physics_, 273(1), 69–80. 10.1007/s11207-011-9862-y.
* ISO 12232 (2019) ISO 12232, 2019. Photography – Digital Still Cameras – Determination of Exposure Index, ISO Speed Ratings, Standard Output Sensitivity, and Recommended Exposure Index. _Tech. rep._, International Organization for Standardization, Geneva, CH.
* Kaiser et al. (2007) Kaiser, M. L., T. A. Kucera, J. M. Davila, O. C. St. Cyr, M. Guhathakurta, and E. Christian, 2007. The STEREO Mission: An Introduction. _Space Science Reviews_, 136(1-4), 5–16. 10.1007/s11214-007-9277-0.
* Kay and Gopalswamy (2018) Kay, C., and N. Gopalswamy, 2018. The Effects of Uncertainty in Initial CME Input Parameters on Deflection, Rotation, Bz, and Arrival Time Predictions. _Journal of Geophysical Research: Space Physics_, 123, 7220–7240. 10.1029/2018JA025780.
* Kay and Opher (2015) Kay, C., and M. Opher, 2015. The Heliocentric Distance Where the Deflections and Rotations of Solar Coronal Mass Ejections Occur. _The Astrophysical Journal Letters_, 811, L36. 10.1088/2041-8205/811/2/L36.
* Kay et al. (2016) Kay, C., M. Opher, R. C. Colaninno, and A. Vourlidas, 2016. Using ForeCAT Deflections and Rotations to Constrain the Early Evolution of CMEs. _The Astrophysical Journal_, 827(1), 70. 10.3847/0004-637X/827/1/70.
* Kay et al. (2013) Kay, C., M. Opher, and R. M. Evans, 2013. Forecasting a Coronal Mass Ejection's Altered Trajectory: ForeCAT. _The Astrophysical Journal_, 775(1), 5. 10.1088/0004-637X/775/1/5.
* Kay et al. (2015) Kay, C., M. Opher, and R. M. Evans, 2015. Global Trends of CME Deflections Based on CME and Solar Parameters. _The Astrophysical Journal_, 805(2), 168. 10.1088/0004-637X/805/2/168.
* Kay (2016) Kay, C. D., 2016. ForeCAT - A Model for Magnetic Deflections of Coronal Mass Ejections. Ph.D. thesis, Boston University. URL https://search.proquest.com/docview/1767403214.
* Kliem and Török (2006) Kliem, B., and T. Török, 2006. Torus Instability. _Physical Review Letters_, 96(25), 255002. 10.1103/PhysRevLett.96.255002.
* Kobayashi et al. (2014) Kobayashi, K., J. Cirtain, A. R. Winebarger, K. Korreck, L. Golub, et al., 2014. The High-Resolution Coronal Imager (Hi-C). _Solar Physics_, 289(11), 4393–4412. 10.1007/s11207-014-0544-4.
* Koutchmy (1988) Koutchmy, S., 1988. Space-Born Coronagraphy. _Space Science Reviews_, 47, 95–143. URL http://articles.adsabs.harvard.edu/pdf/1988SSRv...47...95K.
* Lemen et al. (2012) Lemen, J. R., A. M. Title, D. J. Akin, P. F. Boerner, C. Chou, et al., 2012. The Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory (SDO). _Solar Physics_, 275(1-2), 17–40. 10.1007/s11207-011-9776-8.
* Lin et al. (2001) Lin, A. C., R. W. Nightingale, and T. D. Tarbell, 2001. Diffraction pattern analysis of bright TRACE flares. _Solar Physics_, 198(2), 385–398. 10.1023/A:1005213527766.
* Martínez-Galarce et al. (2010) Martínez-Galarce, D., J. Harvey, M. Bruner, J. Lemen, E. Gullikson, R. Soufli, E. Prast, and S. Khatri, 2010. A novel forward-model technique for estimating EUV imaging performance: design and analysis of the SUVI telescope. In Space Telescopes and Instrumentation 2010: Ultraviolet to Gamma Ray, vol. 7732, 773237-1. 10.1117/12.864577.
* Mason et al. (2019) Mason, J. P., R. Attie, C. N. Arge, B. Thompson, and T. N. Woods, 2019. The SDO/EVE Solar Irradiance Coronal Dimming Index Catalog. I. Methods and Algorithms. _The Astrophysical Journal Supplement Series_, 244(1), 13. 10.3847/1538-4365/ab380e.
* Mason et al. (2017) Mason, J. P., M. Baumgart, B. Rogler, C. Downs, M. Williams, et al., 2017. MinXSS-1 CubeSat On-Orbit Pointing and Power Performance: The First Flight of the Blue Canyon Technologies XACT 3-axis Attitude Determination and Control System. _Journal of Small Satellites_, 6(3), 651–662. URL https://jossonline.com/letters/minxss-1-cubesat-on-orbit-pointing-and-power-performance-the-first-flight-of-the-blue-canyon-technologies-xact-3-axis-attitude-determination-and-control-system/.
* Mason et al. (2014) Mason, J. P., T. N. Woods, A. Caspi, B. J. Thompson, and R. A. Hock, 2014. Mechanisms and Observations of Coronal Dimming for the 2010 August 7 Event. _The Astrophysical Journal_, 789(1), 61. 10.1088/0004-637X/789/1/61.
* Mason et al. (2016) Mason, J. P., T. N. Woods, D. F. Webb, B. J. Thompson, R. C. Colaninno, and A. Vourlidas, 2016. Relationship of EUV Irradiance Coronal Dimming Slope and Depth to Coronal Mass Ejection Speed and Mass. _The Astrophysical Journal_, 830(1), 20. 10.3847/0004-637X/830/1/20.
* Mierla et al. (2013) Mierla, M., D. B. Seaton, D. Berghmans, I. Chifu, A. De Groof, B. Inhester, L. Rodriguez, G. Stenborg, and A. N. Zhukov, 2013. Study of a Prominence Eruption using PROBA2/SWAP and STEREO/EUVI Data. _Solar Physics_, 286(1), 241–253. 10.1007/s11207-012-9965-0.
* O'Hara et al. (2019) O'Hara, J. P., M. Mierla, O. Podladchikova, E. D'Huys, and M. J. West, 2019. Exceptional Extended Field-of-view Observations by PROBA2/SWAP on 2017 April 1 and 3. _The Astrophysical Journal_, 883(1), 59. 10.3847/1538-4357/ab3b08.
* Pong (2018) Pong, C., 2018. On-Orbit Performance and Operation of the Attitude and Pointing Control Subsystems on ASTERIA. In AIAA/USU Conference on Small Satellites. Logan, UT. URL https://digitalcommons.usu.edu/smallsat/2018/all2018/361.
* Sarkar et al. (2019) Sarkar, R., N. Srivastava, M. Mierla, M. J. West, and E. D'Huys, 2019. Evolution of the Coronal Cavity From the Quiescent to Eruptive Phase Associated with Coronal Mass Ejection. _The Astrophysical Journal_, 875, 101. 10.3847/1538-4357/ab11c5.
* Schrijver et al. (2008) Schrijver, C. J., C. Elmore, B. Kliem, T. Török, and A. M. Title, 2008. Observations and Modeling of the Early Acceleration Phase of Erupting Filaments Involved in Coronal Mass Ejections. _The Astrophysical Journal_, 674(1), 586–595. 10.1086/524294.
* Seaton et al. (2013) Seaton, D. B., D. Berghmans, B. Nicula, J. P. Halain, A. De Groof, et al., 2013. The SWAP EUV Imaging Telescope Part I: Instrument Overview and Pre-Flight Testing. _Solar Physics_, 286(1), 43–65. 10.1007/s11207-012-0114-6.
* Seaton and Darnel (2018) Seaton, D. B., and J. M. Darnel, 2018. Observations of an Eruptive Solar Flare in the Extended EUV Solar Corona. _The Astrophysical Journal Letters_, 852, L9. 10.3847/2041-8213/aaa28e.
* Tadikonda et al. (2019) Tadikonda, S. K., D. C. Freesland, R. R. Minor, D. B. Seaton, G. J. Comeyne, and A. Krimchansky, 2019. Coronal Imaging with the Solar UltraViolet Imager. _Solar Physics_, 294, 28. 10.1007/s11207-019-1411-0.
* Thompson et al. (2000) Thompson, B. J., E. W. Cliver, N. V. Nitta, C. Delannée, and J. P. Delaboudiniere, 2000. Coronal Dimmings and Energetic CMEs in April-May 1998. _Geophysical Research Letters_, 27(10), 1431–1434.
* Török and Kliem (2007) Török, T., and B. Kliem, 2007. Numerical simulations of fast and slow coronal mass ejections. _Astronomische Nachrichten_, 328(8), 743–746. 10.1002/asna.200710795.
* Veronig et al. (2018) Veronig, A. M., T. Podladchikova, K. Dissauer, M. Temmer, D. B. Seaton, D. Long, J. Guo, B. Vršnak, L. Harra, and B. Kliem, 2018. Genesis and Impulsive Evolution of the 2017 September 10 Coronal Mass Ejection. _The Astrophysical Journal_, 868, 107. 10.3847/1538-4357/aaeac5.
* Vršnak et al. (2007) Vršnak, B., D. Maričić, A. L. Stanger, A. M. Veronig, M. Temmer, and D. Roša, 2007. Acceleration phase of coronal mass ejections: I. Temporal and spatial scales. _Solar Physics_, 241(1), 85–98. 10.1007/s11207-006-0290-3.
* Webb and Howard (2012) Webb, D. F., and T. A. Howard, 2012. Coronal Mass Ejections: Observations. _Living Reviews in Solar Physics_, 9, 3. 10.12942/lrsp-2012-3.
* Woods et al. (2012) Woods, T. N., F. G. Eparvier, R. A. Hock, A. R. Jones, D. L. Woodraska, et al., 2012. Extreme Ultraviolet Variability Experiment (EVE) on the Solar Dynamics Observatory (SDO): Overview of Science Objectives, Instrument Design, Data Products, and Model Developments. _Solar Physics_, 275, 115–143. 10.1007/s11207-009-9487-6.
* Woods et al. (2011) Woods, T. N., R. A. Hock, F. G. Eparvier, A. R. Jones, P. C. Chamberlin, et al., 2011. New Solar Extreme-Ultraviolet Irradiance Observations During Flares. _The Astrophysical Journal_, 739, 59. 10.1088/0004-637X/739/2/59.
# Manipulating a Continuous Instrumental Variable in an Observational Study of Premature Babies: Algorithm, Partial Identification Bounds, and Inference under Randomization and Biased Randomization Assumptions

Zhe Chen†, Department of Statistics, University of Illinois Urbana-Champaign
Min Haeng Cho†, Department of Biostatistics, University of Washington
Bo Zhang, Assistant Professor of Biostatistics, Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Center. Email:<EMAIL_ADDRESS>
† The first two authors contributed equally.

Abstract: Regionalization of intensive care for premature babies refers to a triage system of mothers with high-risk pregnancies to hospitals of varied capabilities based on risks faced by infants. Due to the limited capacity of high-level hospitals, which are equipped with advanced expertise to provide critical care, understanding the effect of delivering premature babies at such hospitals on infant mortality for different subgroups of high-risk mothers could facilitate the design of an efficient perinatal regionalization system. Towards answering this question, Baiocchi et al. (2010) proposed to _strengthen_ an excess-travel-time-based, continuous instrumental variable (IV) in an IV-based, matched-pair design by switching focus to a smaller cohort amenable to being paired with a larger separation in the IV dose. Three elements changed with the strengthened IV: the study cohort, the compliance rate, and the latent complier subgroup. Here, we introduce a non-bipartite, template matching algorithm that embeds data into a target, pair-randomized encouragement trial which maintains fidelity to the original study cohort while strengthening the IV. We then study randomization-based and IV-dependent, biased-randomization-based inference of partial identification bounds for the sample average treatment effect (SATE) in an IV-based matched-pair design, which deviates from the usual effect ratio estimand in that the SATE is agnostic to the IV and to who is matched to whom, although a strengthened-IV design could narrow the partial identification bounds. Based on our proposed strengthened-IV design, we found that delivering at a high-level NICU reduced preterm babies' mortality rate compared to a low-level NICU for $81,766\times 2=163,532$ mothers and their preterm babies, and that the effect appeared to be minimal among non-black, low-risk mothers.

Keywords: Biased randomization; Heterogeneous treatment effect; Instrumental variable; Partial identification bound; Target trial emulation

## 1 Introduction

### 1.1 Does delivery at a high-level NICU reduce infant mortality?

Regionalization of intensive care for premature babies refers to a triage system of mothers with high-risk pregnancies to hospitals of varied capabilities based on risks faced by infants. In analyses of premature births in Pennsylvania, Baiocchi et al. (2010) and Lorch et al. (2012) found that delivering premature babies at a high-level neonatal intensive care unit (NICU), broadly defined as a medical unit equipped with advanced technical expertise and capacity to provide critical care for newborns and premature infants, was associated with a lower infant mortality rate for a latent complier subgroup. Yet, due to the limited capacity of high-level NICUs, it is not practical to triage all mothers with high-risk pregnancies to high-level NICUs.
Some previous works have further explored the treatment effect heterogeneity of delivering at a high-level versus low-level NICU. For instance, Yang et al. (2014) found that delivering at a high-level NICU significantly reduced deaths for babies of small gestational age but had a negligible effect for almost-mature babies. More recently, Chen and Zhang (2023) estimated individual treatment rules subject to different levels of resource constraints and found that mothers' age and race/ethnicity may also be potential effect modifiers. These findings appear to align well with the clinical literature. According to a $2015$ National Vital Statistics Report (Mathews et al., 2015), black infants in the US had a $2.2$-fold greater mortality rate than white infants. On the other hand, it has long been established that the risk of miscarriage increases with age; in particular, Hansen (1986) found that pregnant women aged $35$ or older experience an increased risk of intrauterine fetal death, pregnancy-induced hypertension, gestational diabetes, and delivery by cesarean. Motivated by these findings, we utilize observational data from the Commonwealth of Pennsylvania to investigate whether delivering preterm babies at a high-level NICU reduces infant mortality for all mothers in Pennsylvania and for certain high-risk subgroups like black mothers and mothers who had reached or were past the advanced maternal age of $35$.

### 1.2 Excess travel time as an instrumental variable

To estimate the treatment effect from retrospective observational data, a naïve study design would compare mothers who received care at a high-level NICU with those at a low-level NICU. However, as pointed out in numerous previous works (Baiocchi et al., 2010; Lorch et al., 2012; Yang et al., 2014; Pu and Zhang, 2021; Chen and Zhang, 2023), treatment effect estimates derived from such a naïve comparison would be biased because receipt of neonatal intensive care at a high-level NICU as opposed to a low-level one could easily be confounded by unmeasured variables not captured in administrative databases. When faced with unmeasured confounding bias, researchers often resort to quasi-experimental devices (see, e.g., Cook et al., 2002; Rosenbaum, 2010). Instrumental variable (IV) methods are among the most popular (Haavelmo, 1943; Angrist and Imbens, 1995; Angrist et al., 1996). Roughly speaking, a valid IV is a variable independent of unmeasured treatment-outcome confounders, possibly within strata defined by observed covariates, and whose sole effect on the outcome is through its effect on the treatment (see Section 4 for a formal definition in the context of a continuous IV). In a seminal paper, Angrist et al. (1996) showed that under an additional "monotonicity" or "no defiers" assumption, a valid IV can be used to nonparametrically identify the treatment effect among a well-defined, albeit latent, subgroup referred to as "compliers."

In their original study design, Baiocchi et al. (2010) used excess travel time, or the difference in travel times (in minutes) from a mother's zip code to the nearest high-level and low-level NICUs, as an IV for delivering at a high-level versus low-level NICU. According to this definition, a small excess travel time indicates proximity to a high-level NICU relative to a low-level NICU and corresponds to a strong encouragement to deliver at a high-level NICU.
Similar proximity-based IVs have been used extensively in health services research (Newhouse and McClellan, 1998; Baiocchi et al., 2014). Using an IV-based method, Baiocchi et al. (2010) concluded that for every $100$ additional mothers encouraged by the IV to deliver at a high-level NICU, $0.9$ additional infant deaths could be avoided.

### 1.3 Complier average treatment effect; criticism; strengthening a continuous IV

The extensive use of instrumental variables in empirical research is not without criticism. For instance, Deaton (2009) argued that IV estimation methods are analogous to letting the light fall where it may when looking for an object and then proclaiming that whatever the light illuminates is what has been sought after all along. Heckman and Urzua (2010) expressed similar sentiments. The crux of their arguments concerns the tension between the internal and external validity of IV estimates: it is, after all, not clear whether the proclaimed treatment effect among compliers can readily be generalized to a target subgroup of interest. The inconvenient fact that the complier subgroup is not identified from the population and could be altered by different incentive structures only makes the generalizability of IV estimates more challenging (Joffe, 2011). Responding to these criticisms, Imbens (2010) argued that the internal validity of an IV estimate is often superior to that of other estimands for observational data, and that "a credible estimate of the average effect for a subpopulation is preferred to an estimate of the average for the overall population with little credibility" (Imbens, 2010).

As discussed by Imbens (2010), two approaches could complement an IV estimate. First, although a valid IV cannot point identify the average treatment effect (ATE), it can partially identify it; see Swanson et al. (2018) for a recent review and references therein. Alternatively, some researchers propose additional identification assumptions that allow the complier average treatment effect to be generalized to the entire population. Examples include the principal ignorability assumption (Jo and Stuart, 2009; Angrist and Fernandez-Val, 2010; Ding and Lu, 2017) and the homogeneity/no-interaction assumption (Hernán and Robins, 2006; Wang and Tchetgen Tchetgen, 2018), among others.

In their original analysis, Baiocchi et al. (2010) used a study design technique called non-bipartite matching (Lu et al., 2001, 2011; Rigdon et al., 2018; Zhang et al., 2023) to construct matched pairs comprising comparable mothers with different excess travel times. Their key innovation was to consider two study designs: one that utilized all of the data and created $99,174$ pairs (Design I), and another that "strengthened" the IV by pairing only similar mothers whose IVs were far apart, yielding $49,587$ pairs (Design II). Three things changed between Design I and Design II. First, the population under investigation changed; for instance, the proportion of white mothers increased from $70\%$ in Design I to $85\%$ in Design II (Baiocchi et al., 2010, Table 1). Second, the compliance rate changed. Baiocchi et al.'s (2010) key insight is that, by forcing the continuous IV to be further apart within each matched pair, the compliance rate induced by this strengthened IV increased, improving statistical inference of the effect ratio estimand, an analogue of the Wald estimate in a matched-pair design. Third, the latent complier subgroup changed.
The original IV in Design I and its strengthened version in Design II targeted a sample average treatment effect over different complier subgroups. Baiocchi et al. (2010, Section 5) argued that the effect ratio estimate obtained from the strengthened design was more "typical" in the sense that a typical mother was more likely to respond to a strong incentive in Design II as opposed to a much weaker incentive in Design I.

### 1.4 Manipulating a continuous IV with two purposes; caveats

In his perspective piece, Deaton (2009) argued that standard statistical practice would first define a parameter of interest and then construct an estimator targeting the parameter. In the causal inference literature, a causal estimand of interest is often defined first, followed by identification assumptions and estimation procedures operated under these identification assumptions. On the contrary, IV-based procedures implicitly let the chosen IV, which may have little to do with the scientific question at hand, determine the causal estimand. In this article, we propose a study design framework that builds upon Baiocchi et al.'s (2010) key insight and show that an ideal paradigm—stating a well-defined causal estimand for an identifiable target subgroup _irrespective of_ the IV, followed by identification and estimation using an IV—could be achieved by manipulating a continuous IV in a design-based framework (Rosenbaum, 2002a; Imbens and Rosenbaum, 2005). Our proposed study design starts by building matched pairs that satisfy the following three criteria: (i) the covariates of matched mothers are deemed well balanced according to formal diagnostic tests; (ii) the covariate distribution of the matched sample closely resembles that of a target study population; and (iii) the within-pair difference in the IV is maximized to the extent that the data can afford, with the goal of maximizing the IV-induced compliance rate. We then propose a randomization inference (RI) approach to infer partial identification bounds of the sample average treatment effect and extend the approach to accommodate an IV-dose-dependent, biased randomization scheme. Our framework deviates from previous works that conduct RI-based inference for the structural parameter in a constant, proportional treatment effect model (Imbens and Rosenbaum, 2005; Small and Rosenbaum, 2008) or the effect ratio estimand (Baiocchi et al., 2010; Zhang et al., 2022). By strengthening the IV and maximizing the compliance rate, a careful design could help derive more informative bounds for the desired effect. In Deaton's (2009) analogy, researchers clearly state what is to be sought after and then adjust the brightness of the light (i.e., strengthen the continuous IV) to illuminate the object.

A valid, continuous IV is at the heart of our proposed design. A binary IV cannot be strengthened; it appears that researchers have to accept the IV as it is, along with its associated complier subgroup. Fortunately, many commonly used IVs, such as variables related to geography and nature, are continuous. Alternatively, one may combine multiple binary IVs to construct a many-category IV that is amenable to being strengthened. An important caveat concerns the validity of the IV: if the putative IV is not valid, then strengthening it could increase the bias if the unmeasured IV-outcome confounder is correlated with the continuous IV (Heng et al., 2023).
## 2 Algorithm

### 2.1 Design considerations

Three elements are key to our proposed study design. First, mothers with similar observed covariates but different IV-defined exposures, e.g., excess travel times, need to be paired together. One approach would be to dichotomize the continuous exposure and adopt a conventional matching algorithm designed for a binary exposure (Rosenbaum, 2020); however, this practice leads to a potential violation of the stable unit treatment value assumption (SUTVA) (Rubin, 1980, 1986). As an alternative, non-bipartite matching creates matched pairs without dichotomizing the continuous exposure (Lu et al., 2001, 2011; Baiocchi et al., 2010; Zhang et al., 2023). Second, when the data can afford it, it is useful to maximize the within-matched-pair difference in the IV dose so that the compliance rate can be maximized. To achieve this, we will resort to a design device called a "dose caliper" (see, e.g., Zhang et al., 2023, Section 5). Third, we would like the final matched sample to closely resemble the template, i.e., a random sample from a target subgroup, to enhance the generalizability of the clinical conclusions. This design consideration is referred to as template matching in the literature and has been studied extensively for a binary exposure (Bennett et al., 2020; Zhang, 2023). Yet to our knowledge, statistical matching methods that build representative matched pairs with a many-category or continuous exposure are currently lacking.

### 2.2 A non-bipartite matching algorithm that constructs representative samples

To accommodate the three design considerations, we propose a modification of the optimal non-bipartite matching algorithm in Baiocchi et al. (2010). Our key idea is to add $e$ "phantom units," or "sinks," that represent units in the template and design a discrepancy matrix in such a way that study participants who do not resemble the template are matched to the sinks and thus eliminated from the final matched sample. The discrepancy matrix is built using the following steps. Suppose there are $n$ observational units and $e<n$ sinks representing a target subgroup. Define the following $(n+e)\times(n+e)$ distance matrix:

$$\mathcal{M}=\left(\begin{array}{cccc|ccc}
\infty & \delta_{1,2} & \cdots & \delta_{1,n} & \Delta_{1,n+1} & \cdots & \Delta_{1,n+e}\\
\delta_{2,1} & \infty & \cdots & \delta_{2,n} & \Delta_{2,n+1} & \cdots & \Delta_{2,n+e}\\
\vdots & & \ddots & \vdots & \vdots & \ddots & \vdots\\
\delta_{n,1} & \delta_{n,2} & \cdots & \infty & \Delta_{n,n+1} & \cdots & \Delta_{n,n+e}\\
\hline
\Delta_{n+1,1} & \Delta_{n+1,2} & \cdots & \Delta_{n+1,n} & \infty & \cdots & \infty\\
\vdots & & \ddots & \vdots & \infty & \ddots & \infty\\
\Delta_{n+e,1} & \Delta_{n+e,2} & \cdots & \Delta_{n+e,n} & \infty & \cdots & \infty\\
\end{array}\right),$$

where $\mathcal{M}_{i,j}$ denotes the $(i,j)$-th entry of $\mathcal{M}$. Here, $\mathcal{M}_{i,j}$ represents a distance between two observational units for $i,j\in[n]$, between two sinks for $i,j\in\{n+1,n+2,\dots,n+e\}$, and between an observational unit and a sink for the remaining entries. For $i,j\in[n]$, let $\delta_{i,j}$ be a measure of the distance between the observed covariates $x_{i}$ and $x_{j}$ of units $i$ and $j$, respectively. Intuitively, $\delta_{i,j}$ measures homogeneity between two observational units and can take different specifications. For example, $\delta_{i,j}$ can be the absolute difference in the estimated propensity scores of units $i$ and $j$ or the (rank-based) Mahalanobis distance between $x_{i}$ and $x_{j}$ (Rosenbaum and Rubin, 1985).
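To fix ideas, the following minimal R sketch computes a rank-based Mahalanobis distance matrix. It is a simplified variant: each covariate is replaced by its ranks before computing Mahalanobis distances, but the rescaling of tied ranks used in some implementations is omitted, and all object names are illustrative.

```r
# Simplified rank-based Mahalanobis distances between all rows of X:
# rank-transform each column, then compute squared Mahalanobis distances
# using the covariance matrix of the ranked data.
rank_mahalanobis <- function(X) {
  R <- apply(X, 2, rank)            # column-wise ranks (ties averaged)
  Sinv <- solve(cov(R))             # inverse covariance of the ranks
  n <- nrow(R)
  out <- matrix(0, n, n)
  for (i in seq_len(n)) {
    d <- sweep(R, 2, R[i, ])        # R[j, ] - R[i, ] for every j
    out[i, ] <- rowSums((d %*% Sinv) * d)   # quadratic forms, row by row
  }
  out
}
```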
To strengthen the continuous IV, we can additionally incorporate the dose caliper into $\delta_{i,j}$ (Zhang et al., 2023). That is, we let

$$\delta_{i,j}=\widetilde{\delta}_{i,j}+C\times\mathbb{I}\{|\widetilde{Z}_{i}-\widetilde{Z}_{j}|\leq\tau\},$$

where $\widetilde{\delta}_{i,j}$ is a measure of similarity between $x_{i}$ and $x_{j}$; $\widetilde{Z}_{i}$ and $\widetilde{Z}_{j}$ are the continuous IV doses of units $i$ and $j$, respectively; $\tau$ is the dose caliper; and $C$ is a large penalty applied when $\widetilde{Z}_{i}$ and $\widetilde{Z}_{j}$ differ by less than or equal to the caliper size. By setting $C=\infty$ or a large number, the dose caliper can be implemented as a hard or soft constraint, respectively (Zhang et al., 2023). In the NICU study, such a dose caliper would force excess travel times within matched pairs to be far apart by a pre-specified amount $\tau$, thus strengthening the IV and improving the IV-induced compliance rate. Note that we always set $\delta_{i,i}=\infty$ for $i\in[n]$ to prevent observational units from being matched to themselves. By symmetry, $\delta_{i,j}=\delta_{j,i}$.

For $i\in[n]$ and $j\in\{n+1,n+2,\dots,n+e\}$, let $\Delta_{i,j}$ be a measure of the distance between the observed covariates $x_{i}$ and $x_{j}$ of observational unit $i$ and sink $j$, which represents a unit from the template. Intuitively, if unit $i$ is dissimilar to sink $j$, we would want the algorithm to pair them together in order to eliminate the $i$-th unit, in which case $\Delta_{i,j}$ would need to be small. Hence, one possible specification is to let $\Delta_{i,j}$ equal an arbitrary large number minus the (rank-based) Mahalanobis distance between $x_{i}$ and $x_{j}$. On top of that, we can further incorporate a "reverse" generalizability caliper defined by a generalizability score such as the "probability of participation," i.e., the conditional probability of being selected into the template given observed covariates, thereby controlling the degree of resemblance between unit $i$ and sink $j$ (Cole and Stuart, 2010; Stuart et al., 2011). To this end, let

$$\Delta_{i,j}=\widetilde{C}-\widetilde{\Delta}_{i,j}+D\times\mathbb{I}\{|\hat{S}_{i}-\hat{S}_{j}|\leq\gamma\},$$

where $\widetilde{C}$ is an arbitrary number added to ensure that $\Delta_{i,j}$ is non-negative; $\widetilde{\Delta}_{i,j}$ is the rank-based Mahalanobis distance between $x_{i}$ and $x_{j}$; $\hat{S}_{i}$ and $\hat{S}_{j}$ are the estimated probabilities of participation for units $i$ and $j$, respectively; $\gamma$ is the generalizability score caliper; and $D$ is either $\infty$ or a large penalty applied when $\hat{S}_{i}$ and $\hat{S}_{j}$ differ by less than or equal to the caliper size $\gamma$. If the generalizability scores of unit $i$ and sink $j$ are similar, this implies that unit $i$ closely resembles sink $j$. By imposing a large penalty $D$, we prevent them from being matched and thus retain unit $i$ in our final matched sample. By symmetry, $\Delta_{i,j}=\Delta_{j,i}$.

Lastly, $\mathcal{M}_{i,j}=\infty$ for $i,j\in\{n+1,n+2,\dots,n+e\}$. That is, we impose an arbitrarily large penalty for any entry corresponding to two sinks, or to a sink and itself, to prevent them from being matched into pairs. In practice, $\mathcal{M}_{i,j}$ can be set to a large finite number. An optimal non-bipartite matching algorithm takes as input the distance matrix $\mathcal{M}$ and the $n+e$ units and yields $(n+e)/2$ matched pairs that minimize the sum of the discrepancies.
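The construction of $\mathcal{M}$, including both calipers and the estimated generalizability score, can be sketched in a few lines of R. The sketch below is illustrative rather than an exact implementation: it uses the plain Mahalanobis distance in place of its rank-based variant, object names (`X_obs`, `X_tmp`, `Z_obs`) are hypothetical, and the commented lines at the end indicate how the matrix would be passed to the nbpMatching package.

```r
# Sketch of the (n+e) x (n+e) discrepancy matrix of Section 2.2, with a
# dose caliper (penalty C, width tau) on the obs-obs block and a reverse
# generalizability caliper (penalty D, width gam) on the obs-sink block.
build_discrepancy <- function(X_obs, X_tmp, Z_obs, tau, C = 250, D = 500,
                              big = 1e8) {
  n <- nrow(X_obs); e <- nrow(X_tmp)
  # generalizability score: estimated probability of template membership
  dat <- data.frame(rbind(X_obs, X_tmp), y = rep(c(0, 1), c(n, e)))
  S <- fitted(glm(y ~ ., family = binomial, data = dat))
  S_obs <- S[1:n]; S_tmp <- S[-(1:n)]
  gam <- sd(abs(outer(S_obs, S_tmp, "-")))       # caliper width, as in M1
  Sigma <- cov(rbind(X_obs, X_tmp))
  delta <- t(sapply(1:n, function(i) mahalanobis(X_obs, X_obs[i, ], Sigma)))
  Delta <- t(sapply(1:n, function(i) mahalanobis(X_tmp, X_obs[i, ], Sigma)))
  # dose caliper: penalize obs-obs pairs whose IV doses are too close
  delta <- delta + C * (abs(outer(Z_obs, Z_obs, "-")) <= tau)
  # reverse obs-sink distances, then add the generalizability caliper
  Delta <- (1 + max(Delta)) - Delta +
    D * (abs(outer(S_obs, S_tmp, "-")) <= gam)
  M <- matrix(big, n + e, n + e)                 # sink-sink block = "infinity"
  M[1:n, 1:n] <- delta
  M[1:n, (n + 1):(n + e)] <- Delta
  M[(n + 1):(n + e), 1:n] <- t(Delta)
  diag(M) <- big                                 # no self-matches
  M
}

# Hypothetical downstream call; nbpMatching expects integer-valued
# distances and returns the matched halves.
# library(nbpMatching)
# dm  <- distancematrix(round(build_discrepancy(X_obs, X_tmp, Z_obs,
#                                               tau = sd(Z_obs))))
# res <- nonbimatch(dm)   # res$halves lists the (n + e)/2 matched pairs
```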
The final matched sample comprises $(n-e)/2$ pairs of observational units and the remaining $e$ pairs consisting of an observational unit and a phantom unit. The algorithm runs in polynomial time and is available in the R package nbpMatching (Derigs, 1988; Lu et al., 2011).

### 2.3 Informal and formal balance diagnostics

An IV-based matched cohort design seeks to embed the retrospective observational data into an "approximate" randomized encouragement experiment. A popular downstream analysis approach is to conduct randomization inference conditional on all the potential outcomes, where the only source of randomness in statistical inference comes from the probabilistic assignment mechanism of the IV doses (Imbens and Rosenbaum, 2005; Baiocchi et al., 2010; Heng et al., 2023). Informal and formal balance diagnostics are often used to justify the _randomization assumption_, which states that the two IV doses within each matched pair are randomly assigned (Silber et al., 2001; Gagnon-Bartsch and Shem-Tov, 2019; Yu, 2020). For instance, Gagnon-Bartsch and Shem-Tov's (2019) _Classification Permutation Test_ (CPT) provides a powerful machine-learning-based test for the randomization assumption. If the randomization assumption is rejected, then some relaxed version of it, such as the biased randomization assumption, may be tested, and a minimal degree of residual imbalance due to inexact matching on the observed covariates may be quantified (Chen et al., 2023). If there is strong evidence against the randomization assumption, then one could opt to conduct the primary analysis under a biased randomization scheme. Regardless of whether the randomization or biased randomization assumption is used in the primary analysis, a sensitivity analysis that further examines the impact of unmeasured IV-outcome confounding is recommended.
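As an illustration, a simplified variant of the CPT can be coded in base R. The published test uses more flexible classifiers; the sketch below substitutes in-sample logistic-regression classification accuracy, which preserves the permutation logic. It assumes the two units of each matched pair occupy consecutive rows, and all names are illustrative.

```r
# Simplified classification permutation test: if IV doses are randomized
# within pairs, the covariates should not help classify which unit of a
# pair received the larger dose, so the observed classification accuracy
# should look typical of its within-pair permutation distribution.
cpt <- function(Z, X, n_perm = 1000) {
  dat <- data.frame(X)
  acc <- function(z) {
    fit <- glm(z ~ ., family = binomial, data = dat)
    mean((fitted(fit) > 0.5) == z)   # in-sample classification accuracy
  }
  obs <- acc(Z)
  I <- length(Z) / 2
  perm <- replicate(n_perm, {
    flip <- rep(rbinom(I, 1, 0.5), each = 2)   # swap labels within pairs
    acc(abs(Z - flip))
  })
  mean(perm >= obs)                  # permutation p-value
}
```

A large p-value is consistent with the randomization assumption; a small p-value suggests testing a relaxed, biased randomization assumption instead.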
## 3 Data and study design

### 3.1 NICU Data; target trial; details on statistical matching

We considered data on $181,762$ preterm babies in the Commonwealth of Pennsylvania between 1995 and 2004. As stipulated by the American Academy of Pediatrics, there are six levels of NICUs of increasing technical expertise and capability, 1, 2, 3A, 3B, 3C and 3D, as well as regional centers (level 4) (Baiocchi et al., 2010). We followed Baiocchi et al. (2010) and defined a hospital to be high level if it delivered at least 50 preterm babies per year on average and its NICU was of level 3A-3D or 4; we defined a hospital as low level if it delivered fewer than 50 preterm babies annually or its NICU was below level 3A. Our goal is to embed the retrospective, observational data into a hypothetical randomized encouragement trial in such a way that the instrument is strengthened to the extent that the data can afford and the final matched sample closely resembles the template. We constructed and compared two matched samples, one constructed using a usual technique (M0) and the other based on the algorithm proposed in Section 2 (M1). For both designs, we matched mothers on birth weight, parity, insurance type (fee for service or others), below poverty, mother's age, mother's education, and a single birth indicator, and matched exactly on the year the data was collected.

To prepare for the subgroup analyses based on M1, we further matched on a categorical, high-risk variable defined by a combination of a mother's race, age, and gestational age, motivated by the clinical literature discussed in Section 1.1: the high-risk category equals $0$ if a mother is black, $1$ if a mother is at least $35$ years old and her gestational age does not exceed $36$ weeks, $2$ if a mother is younger than $35$ and her gestational age does not exceed $36$ weeks, and $3$ if a mother's gestational age exceeds $36$ weeks. The first matched sample M0 retained and matched all $181,722$ mothers into pairs using a conventional non-bipartite matching algorithm (Lu et al., 2001, 2011). For the second matched sample M1, we first created a template subgroup of mothers by randomly sampling without replacement $10$ percent of mothers from the entire dataset spanning $1995$ to $2004$. We then implemented the statistical matching algorithm introduced in Section 2.2, which includes several design devices such as a generalizability caliper and an IV dose caliper. We implemented the "reverse" generalizability caliper by first estimating the "probability of participation" as the generalizability score (Cole and Stuart, 2010). Specifically, we pooled the corresponding template and the entire observational data, created a binary indicator variable $Y_{i}$ ($1$ if participant $i$ belongs to the template and $0$ otherwise), and estimated the "probability of participation" by fitting a logistic regression model. Following the notation in Section 2.2, we let $D=500$, $\widetilde{C}=1+\max_{i,j}\widetilde{\Delta}_{i,j}$, and $\gamma$ equal to the standard deviation of $|\hat{S}_{i}-\hat{S}_{j}|$ over all observational units $i$ and template units $j$. Then, to implement the dose caliper, we let $C=250$ and used the rank-based Mahalanobis distance between two observational units $i,j$ for $\widetilde{\delta}_{i,j}$. Finally, we considered a dose caliper $\tau$ equal to the standard deviation of the excess travel times of all mothers. Design M1 yielded $81,766$ matched pairs of mothers whose within-matched-pair rank-based Mahalanobis distance was minimized subject to the constraints enforced by the above generalizability and IV dose calipers.

### 3.2 Matched samples

Table 1 contrasts the average excess travel times and covariate balance between mothers encouraged to deliver at high-level and low-level NICUs in the matched samples M0 and M1. In both matches, the absolute SMD of almost every covariate between matched pairs of mothers does not exceed $0.1$, or one-tenth of one standard deviation (Silber et al., 2001). We will formally test the randomization and relaxed randomization assumptions before conducting inference in Section 7.

Table 1: Degree of encouragement and covariate balance between mothers encouraged to deliver at high-level (near) versus low-level (far) NICUs in two matched comparisons using the $1995-2004$ data. Matches M0 and M1 are created without and with a dose caliper, respectively. For each variable, we report the mean and the absolute standardized difference. GA: gestational age.
M0: $90,861$ matched pairs; M1: $81,766$ matched pairs.

| | Near mean (M0) | Far mean (M0) | Abs. SMD (M0) | Near mean (M1) | Far mean (M1) | Abs. SMD (M1) |
|---|---|---|---|---|---|---|
| Instrumental variable | | | | | | |
| Excess travel time (min) | 6.07 | 20.97 | 0.93 | 2.07 | 27.10 | 1.88 |
| Covariates | | | | | | |
| Birthweight (g) | 2583 | 2584 | 0.00 | 2597 | 2597 | 0.00 |
| Gestational age (weeks) | 35.12 | 35.13 | 0.00 | 35.21 | 35.19 | 0.01 |
| Gestational diabetes (0/1) | 0.05 | 0.05 | 0.01 | 0.05 | 0.05 | 0.01 |
| Single birth (0/1) | 0.83 | 0.83 | 0.00 | 0.85 | 0.84 | 0.02 |
| Parity | 2.11 | 2.12 | 0.01 | 2.06 | 2.11 | 0.03 |
| Mother's age (years) | 28.05 | 28.04 | 0.00 | 28.03 | 27.80 | 0.04 |
| Mother's education (scale) | 3.69 | 3.68 | 0.00 | 3.73 | 3.63 | 0.08 |
| Black (0/1) | 0.16 | 0.16 | 0.00 | 0.16 | 0.16 | 0.00 |
| Below poverty (proportion) | 0.13 | 0.13 | 0.01 | 0.12 | 0.12 | 0.06 |
| Fee for service (0/1) | 0.21 | 0.21 | 0.00 | 0.20 | 0.21 | 0.05 |
| High-risk category | | | | | | |
| Black | 0.16 | 0.16 | 0.00 | 0.16 | 0.16 | 0 |
| Non-Black, age $\geq 35$, GA $\leq 36$ | 0.08 | 0.08 | 0.00 | 0.08 | 0.08 | 0 |
| Non-Black, age $<35$, GA $\leq 36$ | 0.40 | 0.39 | 0.00 | 0.40 | 0.40 | 0 |
| Non-Black, GA $>36$ | 0.36 | 0.36 | 0.00 | 0.36 | 0.36 | 0 |

A key difference between M0 and M1 is the level of separation in the IV enforced by the study design. The average difference (larger minus smaller within each matched pair) in excess travel times between mothers encouraged to deliver at a low-level versus a high-level NICU is $20.97-6.07=14.90$ minutes for M0 and $27.10-2.07=25.03$ minutes for M1. This is in line with our study design: a large IV dose caliper forces the excess travel times for mothers in each matched pair to be further apart, thereby strengthening the IV and improving the IV-induced compliance rate from $65.4\%-45.6\%=19.8\%$ in M0 to $71.9\%-35.5\%=36.4\%$ in M1. According to the analytical results in Heng et al. (2023), in order for M0 to attain the same power as M1 when testing Fisher's sharp null hypothesis, M0 needs to have a sample size that is $(36.4\%/19.8\%)^{2}=3.38$ times that of M1. Design M0's sample size is only $1.1$ times that of M1; therefore, from the perspective of testing Fisher's sharp null hypothesis, the strengthened design M1 is more advantageous.

Figure 1: Boxplots of the rank-based Mahalanobis distances of two types of matched pairs in M1: (i) two observational units (Obs-Obs) or (ii) an observational unit and a template unit (Obs-Template).

Furthermore, by using the template matching technique and enforcing the generalizability caliper, the matched sample M1 also exhibits good generalizability to the template. For example, M1 largely retains the original composition of black mothers, who comprise $16\%$ of all the mothers in the observational data, and of mothers with fee-for-service health insurance. This contrasts with the original strengthened-IV design in Baiocchi et al. (2010), which, in transitioning from the unstrengthened to the strengthened IV matching design, shifted the study cohort from $15\%$ to $5\%$ black mothers and from $21\%$ to $25\%$ mothers with fee-for-service health insurance. Figure 1, which illustrates the distribution of rank-based Mahalanobis distances between matched pairs retained in or eliminated from M1, further demonstrates the effectiveness of our matching algorithm in maintaining good generalizability of the final matched sample to the entire study cohort.
For the matched pairs that each consist of two mothers from the observational database and thus comprise M1, the median rank-based Mahalanobis distance is $12$ (interquartile range: $[5,31]$). For the matched pairs that are eliminated from M1, the median rank-based Mahalanobis distance is $279$ (interquartile range: $[226,329]$). This is in line with our expectation, as mothers who resembled the template the least in terms of the covariates had large rank-based Mahalanobis distances when matched to the template and were thus eliminated from the final matched sample M1.

## 4 Notation: potential outcomes and estimands

### 4.1 Potential outcomes

Our proposed design consists of $I$ matched pairs of two study participants. The $j$-th participant in the $i$-th pair is indexed by $ij$. Each participant $ij$ is associated with a continuous IV $\widetilde{Z}_{ij}=\widetilde{Z}_{ij}^{\text{obs}}$, e.g., excess travel time in the NICU study. The design embeds the observational data into a hypothetical randomized encouragement experiment as follows. Within each matched pair $i$, fix the two doses of the continuous IV at $\widetilde{Z}_{i1}^{\text{obs}}$ and $\widetilde{Z}_{i2}^{\text{obs}}$ and flip a fair coin. If the coin lands heads, assign the IV dose $\widetilde{Z}_{i1}^{\text{obs}}$ to $j=1$ and $\widetilde{Z}_{i2}^{\text{obs}}$ to $j=2$; if the coin lands tails, assign the IV dose $\widetilde{Z}_{i1}^{\text{obs}}$ to $j=2$ and $\widetilde{Z}_{i2}^{\text{obs}}$ to $j=1$. Let $\widetilde{\mathbf{Z}}=(\widetilde{Z}_{11},\dots,\widetilde{Z}_{I2})$ denote the vector of $I\times 2=2I$ IV dose assignments.

In a randomized encouragement experiment, it is the encouragement (i.e., the continuous IV $\widetilde{Z}$), not the treatment itself, that is randomized either by nature or an experimenter. Let $D_{ij}(\widetilde{\mathbf{Z}})$ denote the indicator of whether participant $ij$ receives the binary treatment under the IV dose vector $\widetilde{\mathbf{Z}}$. We make the SUTVA assumption (Rubin, 1980, 1986) so that $D_{ij}(\widetilde{\mathbf{Z}})$ depends on $\widetilde{\mathbf{Z}}$ only via $\widetilde{Z}_{ij}$, such that $D_{ij}(\widetilde{\mathbf{Z}})$ can be simplified to $D_{ij}(\widetilde{Z}_{ij})$. We adopt the notation in Zhang et al. (2022) and Heng et al. (2023) and define the following potential outcomes:

$$d_{Cij}\overset{\Delta}{=}D_{ij}(\widetilde{Z}_{ij}=\widetilde{Z}_{i1}^{\text{obs}}\wedge\widetilde{Z}_{i2}^{\text{obs}})\quad\text{and}\quad d_{Tij}\overset{\Delta}{=}D_{ij}(\widetilde{Z}_{ij}=\widetilde{Z}_{i1}^{\text{obs}}\vee\widetilde{Z}_{i2}^{\text{obs}}), \tag{1}$$

where $\widetilde{Z}_{i1}^{\text{obs}}\wedge\widetilde{Z}_{i2}^{\text{obs}}$ denotes the minimum of the two IV doses and $\widetilde{Z}_{i1}^{\text{obs}}\vee\widetilde{Z}_{i2}^{\text{obs}}$ the maximum of the two IV doses in the $i$-th matched pair. Importantly, participant $ij$'s compliance status is determined by the two IV doses in matched pair $i$, and the compliance status is fixed once the study design is fixed. Participant $ij$ is said to be a complier with respect to the IV dose pair $(\widetilde{Z}_{i1}^{\text{obs}}\wedge\widetilde{Z}_{i2}^{\text{obs}},\widetilde{Z}_{i1}^{\text{obs}}\vee\widetilde{Z}_{i2}^{\text{obs}})$ if $(d_{Tij},d_{Cij})=(1,0)$, an always-taker if $(d_{Tij},d_{Cij})=(1,1)$, a never-taker if $(d_{Tij},d_{Cij})=(0,0)$, and a defier if $(d_{Tij},d_{Cij})=(0,1)$, where $d_{Tij}$ and $d_{Cij}$ are defined as in (1).
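In simulated data, where both potential treatments $(d_{Tij}, d_{Cij})$ are known, the four compliance classes can be labeled directly; a small R helper, with illustrative names, is given below. In real data only one of the two potential treatments is observed, so the labels are latent.

```r
# Label each unit's latent compliance class from (d_T, d_C); vectorized.
compliance_class <- function(dT, dC) {
  cls <- c("never-taker", "defier", "complier", "always-taker")
  factor(cls[1 + 2 * dT + dC], levels = cls)
}
# e.g. compliance_class(1, 0) returns "complier"; under monotonicity
# (no defiers, Section 4.1), the compliance rate is mean(dT - dC).
```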
We make the so-called monotonicity assumption (Angrist et al., 1996) and assume there are no defiers. This is often a mild assumption as it merely states that if unit $ij$ accepts the treatment under dose $\widetilde{Z}$, then it will also accept the treatment under any dose $\widetilde{Z}^{\prime}$ such that $\widetilde{Z}^{\prime}>\widetilde{Z}$. Note that even under the monotonicity assumption, researchers can still only observe one of the two potential treatments received, so statistical inference is needed. Under the monotonicity assumption, the compliance rate in the matched design with respect to a continuous IV is then defined as

$$\iota_{C}=\frac{1}{2I}\sum_{i=1}^{I}\sum_{j=1}^{2}(d_{Tij}-d_{Cij}), \tag{2}$$

and a strengthened-IV design improves the compliance rate by maximizing the IV dose difference within each matched pair (Baiocchi et al., 2010).

Let $R_{ij}(\widetilde{\mathbf{Z}},\mathbf{D})$ denote the outcome of $ij$ under $\widetilde{\mathbf{Z}}$ and $\mathbf{D}=(D_{11},\dots,D_{I2})$. We assume SUTVA again so that $R_{ij}(\widetilde{\mathbf{Z}},\mathbf{D})$ depends on $\widetilde{\mathbf{Z}}$ and $\mathbf{D}$ only via $\widetilde{Z}_{ij}$ and $D_{ij}$. Lastly, we make the exclusion restriction assumption so that the effect of $\widetilde{Z}_{ij}$ on $R_{ij}$ is only via $D_{ij}$; in other words, $R_{ij}(\widetilde{Z}_{ij},D_{ij})$ can be written as $R_{ij}(D_{ij}(\widetilde{Z}_{ij}))$, or $R_{ij}(\widetilde{Z}_{ij})$ as a shorthand. Define the following potential outcomes:

$$r_{Cij}\overset{\Delta}{=}R_{ij}(\widetilde{Z}_{ij}=\widetilde{Z}_{i1}^{\text{obs}}\wedge\widetilde{Z}_{i2}^{\text{obs}})\quad\text{and}\quad r_{Tij}\overset{\Delta}{=}R_{ij}(\widetilde{Z}_{ij}=\widetilde{Z}_{i1}^{\text{obs}}\vee\widetilde{Z}_{i2}^{\text{obs}}), \tag{3}$$

in parallel to $(d_{Tij},d_{Cij})$ in (1). Analogous to $(d_{Tij},d_{Cij})$, $(r_{Tij},r_{Cij})$ are defined with respect to the continuous IV dose pair $(\widetilde{Z}_{i1}^{\text{obs}}\wedge\widetilde{Z}_{i2}^{\text{obs}},\widetilde{Z}_{i1}^{\text{obs}}\vee\widetilde{Z}_{i2}^{\text{obs}})$. In the NICU study, $r_{Cij}$ describes the potential infant mortality status had the mother lived $\widetilde{Z}_{ij}=\widetilde{Z}_{i1}^{\text{obs}}\wedge\widetilde{Z}_{i2}^{\text{obs}}$ (as opposed to $\widetilde{Z}_{ij}=\widetilde{Z}_{i1}^{\text{obs}}\vee\widetilde{Z}_{i2}^{\text{obs}}$) minutes farther away from a high-level NICU compared to a low-level NICU and hence encouraged to attend a high-level NICU, while $r_{Tij}$ describes the potential infant mortality status had the mother lived $\widetilde{Z}_{ij}=\widetilde{Z}_{i1}^{\text{obs}}\vee\widetilde{Z}_{i2}^{\text{obs}}$ (as opposed to $\widetilde{Z}_{ij}=\widetilde{Z}_{i1}^{\text{obs}}\wedge\widetilde{Z}_{i2}^{\text{obs}}$) minutes farther away from a high-level NICU compared to a low-level NICU and hence encouraged to attend a low-level NICU. Importantly, in either case, no mother is forced to attend any hospital. Potential outcomes $(r_{Tij},r_{Cij})$ are fixed once the design, and hence the two IV doses $(\widetilde{Z}_{i1}^{\text{obs}}\wedge\widetilde{Z}_{i2}^{\text{obs}},\widetilde{Z}_{i1}^{\text{obs}}\vee\widetilde{Z}_{i2}^{\text{obs}})$ within each matched pair, are fixed.

### 4.2 Effect ratio estimand; sample average treatment effect

Equipped with these potential outcomes, we now turn to causal estimands.
Baiocchi et al. (2010) consider the following effect ratio estimand:

$$\lambda=\frac{\sum_{i=1}^{I}\sum_{j=1}^{2}(r_{Tij}-r_{Cij})}{\sum_{i=1}^{I}\sum_{j=1}^{2}(d_{Tij}-d_{Cij})},$$

which is the ratio of the sample average treatment effect of the IV (possibly after strengthening) on the outcome to that of the IV on the treatment. The effect ratio estimand $\lambda$ is always well defined provided that the denominator $\sum_{i=1}^{I}\sum_{j=1}^{2}(d_{Tij}-d_{Cij})\neq 0$, and it coincides with the Wald estimator under an additional monotonicity assumption. In the NICU study, the effect ratio can be interpreted as follows: for every hundred mothers discouraged by the IV from delivering at a high-level NICU, there were $100\times\lambda$ additional infant deaths (Baiocchi et al., 2010). As researchers switch to a different IV-based matched design, such as one that further separates the two IV doses within each matched pair, the effect ratio estimand will in general be different. In other words, _the effect ratio estimand depends on who is matched to whom, and the design dictates the estimand._

Lastly, we define the following potential outcomes:

$$r_{d=0,ij}\overset{\Delta}{=}R_{ij}(D_{ij}=0)\quad\text{and}\quad r_{d=1,ij}\overset{\Delta}{=}R_{ij}(D_{ij}=1),$$

which correspond to the potential binary infant mortality outcomes had mother $ij$ attended a high-level NICU ($D_{ij}=0$) or a low-level NICU ($D_{ij}=1$). Compared to $r_{Tij}-r_{Cij}$, which describes the unit-level intention-to-treat effect (with respect to the designed IV), $r_{d=1,ij}-r_{d=0,ij}$ describes the unit-level treatment effect and arguably is the ultimate estimand of interest in the NICU study. Below, we will focus on the sample average treatment effect of $D$ on $R$:

$$\kappa=\frac{1}{2I}\sum_{i=1}^{I}\sum_{j=1}^{2}(r_{d=1,ij}-r_{d=0,ij}).$$

Compared to the effect ratio estimand $\lambda$, the estimand $\kappa$ depends on the study cohort but not on the strength of the IV or on who is matched to whom in the design phase.

### 4.3 Partial identification bounds

The estimand $\kappa$ cannot in general be point identified, as the data provide no treatment effect information for non-compliant participants. Partial identification bounds provide an assumption-lean alternative. Define the following index sets: $I_{\text{COM}}=\{(i,j):(d_{Tij},d_{Cij})=(1,0)\}$, $I_{\text{AT}}=\{(i,j):(d_{Tij},d_{Cij})=(1,1)\}$, and $I_{\text{NT}}=\{(i,j):(d_{Tij},d_{Cij})=(0,0)\}$. Writing $N=2I$, note that

$$\kappa=\frac{1}{N}\sum_{ij}(r_{d=1,ij}-r_{d=0,ij})=\frac{1}{N}\left\{\sum_{(i,j)\in I_{\text{COM}}}(r_{d=1,ij}-r_{d=0,ij})+\sum_{(i,j)\in I_{\text{AT}}\cup I_{\text{NT}}}(r_{d=1,ij}-r_{d=0,ij})\right\},$$

where the first term can be further simplified as

$$\sum_{(i,j)\in I_{\text{COM}}}(r_{d=1,ij}-r_{d=0,ij})=\sum_{(i,j)\in I_{\text{COM}}}(r_{Tij}-r_{Cij})=\sum_{ij}(r_{Tij}-r_{Cij}).$$

Let $r_{d=1,ij}-r_{d=0,ij}\in[K_{0},K_{1}]$ for all $ij$ and for some $K_{0}\leq K_{1}<\infty$. Without loss of generality, assume $K_{0}\geq 0$.
The second term is then bounded as follows:

$$K_{0}\times(N-|I_{\text{COM}}|)\leq\sum_{(i,j)\in I_{\text{AT}}\cup I_{\text{NT}}}(r_{d=1,ij}-r_{d=0,ij})\leq K_{1}\times(N-|I_{\text{COM}}|).$$

Because $\sum_{ij}(d_{Tij}-d_{Cij})=\sum_{(i,j)\in I_{\text{COM}}}(d_{Tij}-d_{Cij})=|I_{\text{COM}}|$, we then have

$$K_{0}\times\left(N-\sum_{ij}(d_{Tij}-d_{Cij})\right)\leq\sum_{(i,j)\in I_{\text{AT}}\cup I_{\text{NT}}}(r_{d=1,ij}-r_{d=0,ij})\leq K_{1}\times\left(N-\sum_{ij}(d_{Tij}-d_{Cij})\right). \tag{4}$$

Put together, without additional assumptions, the target estimand $\kappa$ is lower bounded by

$$\text{LB}:=\frac{1}{N}\sum_{ij}(r_{Tij}-r_{Cij})-K_{0}\times\frac{1}{N}\sum_{ij}(d_{Tij}-d_{Cij})+K_{0},$$

and upper bounded by

$$\text{UB}:=\frac{1}{N}\sum_{ij}(r_{Tij}-r_{Cij})-K_{1}\times\frac{1}{N}\sum_{ij}(d_{Tij}-d_{Cij})+K_{1}.$$

Three remarks follow. First, the lower and upper bounds are achieved when the treatment effect among all the non-compliers attains its minimum and maximum ($K_{0}$ and $K_{1}$, respectively). Second, the length of the partial identification interval, $\text{UB}-\text{LB}$, equals $(K_{1}-K_{0})\times(1-\iota_{C})$, where $\iota_{C}$ is the compliance rate defined in (2). Thus, by maximizing the compliance rate, a strengthened-IV design could help derive a narrower partial identification interval for $\kappa$. In an ideal situation where $\iota_{C}=1$, we have $\text{LB}=\text{UB}$ and the identification of $\kappa$ is achieved by design. Third, dividing equation (4) by $N-\sum_{ij}(d_{Tij}-d_{Cij})$ shows that $K_{0}$ and $K_{1}$ can also be interpreted as the lower and upper bounds of the SATE among non-compliers. The target parameters LB and UB involve both potential treatments received $(d_{Tij},d_{Cij})$ and both potential clinical outcomes $(r_{Tij},r_{Cij})$; for each study unit $ij$, either $(d_{Tij},r_{Tij})$ or $(d_{Cij},r_{Cij})$ is observed, so inference is needed for LB and UB.

## 5 Inference for partial identification bounds

### 5.1 Inference under a randomization assumption

One widely adopted downstream analysis strategy for matched cohort data is randomization inference treating the matched cohort data as arising from a finely stratified experiment (Rosenbaum, 2002b, 2010; Fogarty, 2018a,b). We will focus on developing a valid statistical test and confidence statement for the partial identification bounds under the randomization assumption in this section; methods that further take into account within-pair residual bias that persists after matching (Gagnon-Bartsch and Shem-Tov, 2019; Chen et al., 2023) will be discussed next. Write $\mathcal{F}=\{(\mathbf{x}_{ij},d_{Tij},d_{Cij},r_{Tij},r_{Cij}):i=1,\dots,I,\ j=1,2\}$, where $d_{Tij}$, $d_{Cij}$, $r_{Tij}$ and $r_{Cij}$ are defined as in (1) and (3). Write $\mathcal{Z}$ for the set containing the $2^{I}$ possible values $\widetilde{\mathbf{z}}$ of the IV dose assignments $\widetilde{\mathbf{Z}}$, so that $\widetilde{\mathbf{z}}\in\mathcal{Z}$ if each $\widetilde{z}_{ij}$ equals $\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2}$ or $\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2}$.
Let $\widetilde{\mathbf{Z}}^{\text{obs}}_{\vee}=(\widetilde{Z}^{\text{obs}}_{11}\vee\widetilde{Z}^{\text{obs}}_{12},\dots,\widetilde{Z}^{\text{obs}}_{I1}\vee\widetilde{Z}^{\text{obs}}_{I2})$ and $\widetilde{\mathbf{Z}}^{\text{obs}}_{\wedge}=(\widetilde{Z}^{\text{obs}}_{11}\wedge\widetilde{Z}^{\text{obs}}_{12},\dots,\widetilde{Z}^{\text{obs}}_{I1}\wedge\widetilde{Z}^{\text{obs}}_{I2})$. In randomization inference with a continuous IV $\widetilde{\mathbf{Z}}$, the only probability distribution that enters statistical inference is the conditional probability $\pi_{i}=P(\widetilde{Z}_{i1}=\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2},\widetilde{Z}_{i2}=\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2}\mid\mathcal{F},\widetilde{\mathbf{Z}}^{\text{obs}}_{\vee},\widetilde{\mathbf{Z}}^{\text{obs}}_{\wedge})$ that characterizes the underlying IV dose assignment mechanism in each matched pair $i$. Assuming the IV is valid and the randomization assumption holds, we have $\pi_{i}=1/2$ for $i=1,\dots,I$.

Define the shorthand $Z_{ij}=1$ if $\widetilde{Z}_{ij}=\widetilde{Z}_{i1}^{\text{obs}}\vee\widetilde{Z}_{i2}^{\text{obs}}$, $Z_{ij}=0$ if $\widetilde{Z}_{ij}=\widetilde{Z}_{i1}^{\text{obs}}\wedge\widetilde{Z}_{i2}^{\text{obs}}$, and $\mathbf{Z}=(Z_{11},Z_{12},\dots,Z_{I2})$. Under the matched pair design conditional on $\mathcal{Z}$, $Z_{i1}+Z_{i2}=1$ for each $i$. Let $R_{ij}=Z_{ij}r_{Tij}+(1-Z_{ij})r_{Cij}$ denote the observed outcome and $D_{ij}=Z_{ij}d_{Tij}+(1-Z_{ij})d_{Cij}$ denote the observed treatment received. We consider the null hypothesis $H_{0}^{L}:\text{LB}=l$ and an ever-larger experiment with $I\rightarrow\infty$. Proposition 1 derives an asymptotically valid test for $H_{0}^{L}$.

###### Proposition 1.

Define the test statistic

$$T\left(l;K_{0}\right)=\frac{1}{I}\sum_{i=1}^{I}\left\{\sum_{j=1}^{2}Z_{ij}\left(R_{ij}-K_{0}D_{ij}\right)-\sum_{j=1}^{2}\left(1-Z_{ij}\right)\left(R_{ij}-K_{0}D_{ij}\right)\right\}-(l-K_{0})$$

and

$$S^{2}\left(K_{0}\right)=\frac{1}{I(I-1)}\sum_{i=1}^{I}\left(\hat{\tau}_{i}-\bar{\tau}\right)^{2},$$

where $\hat{\tau}_{i}=\sum_{j=1}^{2}(2Z_{ij}-1)(R_{ij}-K_{0}D_{ij})$ and $\bar{\tau}=\frac{1}{I}\sum_{i=1}^{I}\hat{\tau}_{i}$. Under mild regularity conditions and conditional on $\mathcal{F},\widetilde{\mathbf{Z}}^{\text{obs}}_{\vee},\widetilde{\mathbf{Z}}^{\text{obs}}_{\wedge}$, the test that rejects $H_{0}^{L}:\text{LB}=l$ when $|T(l;K_{0})|\geq z_{1-\alpha/2}\sqrt{S^{2}(K_{0})}$ is an asymptotically valid level-$\alpha$ test, where $z_{1-\alpha/2}$ is the $(1-\alpha/2)$-th quantile of the standard normal distribution.

###### Remark 1.

By replacing $K_{0}$ with $K_{1}$ in Proposition 1, a valid test can be derived analogously for testing the null hypothesis $H_{0}^{U}:\text{UB}=u$. A one-sided level-$\alpha$ confidence interval for LB is $\left[T(l;K_{0})+l-z_{1-\alpha}\sqrt{S^{2}(K_{0})},\,\infty\right)$, where $z_{1-\alpha}$ is the $(1-\alpha)$-th quantile of the standard normal distribution. A valid confidence interval for $[\text{LB},\text{UB}]$ may then be obtained by combining the two one-sided, level-$\alpha/2$ confidence intervals of LB and UB.

###### Remark 2.

The variance estimator $S^{2}(K_{0})$ in general overestimates the true variance (Imai, 2008). One set of sufficient conditions for it to be an unbiased estimator of the true variance is when the proportional treatment effect model $r_{Tij}-r_{Cij}=\beta(d_{Tij}-d_{Cij})$ holds (Small and Rosenbaum, 2008), the monotonicity assumption holds, and one of the following three conditions holds: (i) all units are compliers; (ii) each pair has one complier and one always-taker or never-taker; (iii) no units are compliers. Such a condition appears restrictive in practice, so the variance estimator $S^{2}(K_{0})$ would tend to overestimate the true variance and the test based on it would tend to be conservative.
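Proposition 1 translates directly into code. The sketch below assumes $Z$, $R$, and $D$ are stored as $I\times 2$ matrices with one row per matched pair; the names are illustrative.

```r
# Proposition 1 test for H0: LB = l, given bound K0 on the unit-level
# treatment effect among non-compliers.
test_LB <- function(Z, R, D, l, K0, alpha = 0.05) {
  I <- nrow(Z)
  # pair-level treated-minus-control differences of R - K0 * D
  tau_hat <- rowSums((2 * Z - 1) * (R - K0 * D))
  T_stat  <- mean(tau_hat) - (l - K0)                  # T(l; K0)
  S2 <- sum((tau_hat - mean(tau_hat))^2) / (I * (I - 1))
  list(T = T_stat,
       reject = abs(T_stat) >= qnorm(1 - alpha / 2) * sqrt(S2),
       # one-sided CI for LB from Remark 1
       ci_LB = c(T_stat + l - qnorm(1 - alpha) * sqrt(S2), Inf))
}
```

For example, `test_LB(Z, R, D, l = 0, K0 = 0)` tests whether the lower bound equals zero; replacing `K0` with `K1` yields the analogous test for UB.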
A standard approach to improving the asymptotic precision of randomization-based inference is through regression adjustment (see, e.g., Lin, 2013; Fogarty, 2018a,b; Li and Ding, 2020; Su and Ding, 2021; Lei and Ding, 2021; Zhao and Ding, 2022, among others). Proposition 2 adapts this idea to inference for the proposed partial identification bounds by proposing a regression-assisted variance estimator for $T(l;K_{0})$.

###### Proposition 2.

Let $Q$ denote an $I\times p$ matrix with $p<I$ and $H_{Q}=Q(Q^{T}Q)^{-1}Q^{T}$ be the hat matrix for $Q$, with $h_{Qi}$ the $i$-th diagonal entry. Let $\hat{\tau}_{Q}$ be an $I\times 1$ column vector with $i$-th entry $\hat{\tau}_{i}/\sqrt{1-h_{Qi}}$, where $\hat{\tau}_{i}=\sum_{j=1}^{2}(2Z_{ij}-1)(R_{ij}-K_{0}D_{ij})$ as defined in Proposition 1. Let $\boldsymbol{\tau}$ be an $I\times 1$ column vector with $i$-th entry $\tau_{i}=\frac{1}{2}\sum_{j=1}^{2}\left\{r_{Tij}-r_{Cij}-K_{0}(d_{Tij}-d_{Cij})\right\}$. Define $S^{2}_{Q}(K_{0})=I^{-2}\hat{\tau}^{T}_{Q}(I-H_{Q})\hat{\tau}_{Q}$. Under conditions S1-S2 in Supplemental Material A, as $I\rightarrow\infty$,

$$IS^{2}_{Q}(K_{0})-\mathrm{var}\{\sqrt{I}T(l;K_{0})\mid\mathcal{F},\widetilde{\mathbf{Z}}_{\vee},\widetilde{\mathbf{Z}}_{\wedge}\}\xrightarrow{p}\lim_{I\to\infty}\frac{1}{I}\boldsymbol{\tau}^{T}(I-H_{Q})\boldsymbol{\tau}\geq 0.$$

###### Remark 3.

Proposition 2 indicates that $IS^{2}_{Q}(K_{0})$ is a conservative estimator in probability of the variance of $\sqrt{I}T(l;K_{0})$, so testing $H_{0}^{L}$ based on the statistic $T(l;K_{0})/\sqrt{S^{2}_{Q}(K_{0})}$ and a Gaussian reference distribution is asymptotically valid. When $Q=\boldsymbol{e}$, a column vector of 1's, $S^{2}_{Q}(K_{0})$ reduces to the classical variance estimator $S^{2}(K_{0})$ defined in Proposition 1. In general, for $Q=\left[\boldsymbol{e},\boldsymbol{V}\right]$, where $\boldsymbol{V}$ is a matrix whose $i$-th row contains the centered covariates $(\boldsymbol{x}_{i1},\boldsymbol{x}_{i2})$, $IS^{2}_{\boldsymbol{e}}(K_{0})-IS^{2}_{Q}(K_{0})\xrightarrow{p}\beta_{V}^{T}\Sigma_{V}^{-1}\beta_{V}\geq 0$, where $\Sigma_{V}=\lim_{I\to\infty}I^{-1}\boldsymbol{V}^{T}\boldsymbol{V}$ and $\beta_{V}=\lim_{I\to\infty}I^{-1}\boldsymbol{V}^{T}\boldsymbol{\tau}$. Therefore, $IS^{2}_{Q}(K_{0})$ is asymptotically no more conservative than $IS^{2}(K_{0})$ whenever $Q$ contains covariate information, though the degree of conservativeness reduction depends on how well the covariates predict $\hat{\tau}_{i}$ (Fogarty, 2018b).
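The regression-assisted variance estimator is equally short to implement. In the sketch below, `tau_hat` is the vector of pair-level differences from the previous sketch, and `V` is a hypothetical $I\times q$ matrix of pair-level covariates (or `NULL` for the classical estimator).

```r
# Regression-assisted variance estimator S^2_Q(K0) from Proposition 2.
S2_Q <- function(tau_hat, V = NULL) {
  I <- length(tau_hat)
  Q <- if (is.null(V)) matrix(1, I, 1) else cbind(1, scale(V, scale = FALSE))
  H <- Q %*% solve(crossprod(Q), t(Q))      # hat matrix H_Q = Q (Q'Q)^{-1} Q'
  tau_adj <- tau_hat / sqrt(1 - diag(H))    # leverage-adjusted differences
  drop(crossprod(tau_adj, (diag(I) - H) %*% tau_adj)) / I^2
}
# With V = NULL this equals S^2(K0); including covariates can only reduce
# the asymptotic conservativeness of the variance estimate (Remark 3).
```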
### 5.2 Inference under an IV-dose-dependent, biased randomization assumption

Although matching helps remove most overt bias, residual bias due to inexact matching and from unmeasured confounders could bias the IV dose assignment within each pair and induce a so-called biased randomization scheme (Rosenbaum, 2002b, 2010; Fogarty, 2020; Fogarty et al., 2021; Chen et al., 2023). Randomization inference under a biased randomization scheme is useful in two scenarios. First, when the minimal degree of bias due to residual confounding from observed covariates can be quantified by formal statistical tests (Chen et al., 2023), primary analyses should adopt a biased randomization scheme. Second, since the observed data contain no information about unmeasured confounding bias, a sensitivity analysis that further relaxes the randomization or biased randomization assumption in the primary analysis is always warranted.

Recall that $\pi_{i}=P(\widetilde{Z}_{i1}=\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2},\widetilde{Z}_{i2}=\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2}\mid\mathcal{F},\widetilde{\mathbf{Z}}^{\text{obs}}_{\vee},\widetilde{\mathbf{Z}}^{\text{obs}}_{\wedge})$. Following Rosenbaum and Rubin (2023), we consider a principal unobserved covariate, $u_{ij}\in[0,1]$, associated with each participant $ij$ and supported on the unit interval. We consider the following biased IV dose assignment model $\mathcal{M}_{\gamma}$ within each matched pair $i$ with an observed IV dose pair $(\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2},\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2})$ and some $\gamma\geq 0$:

$$\pi_{i}=\frac{\exp\left\{\gamma\cdot\left(\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2}-\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2}\right)\cdot u_{i1}\right\}}{\exp\left\{\gamma\cdot\left(\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2}-\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2}\right)\cdot u_{i1}\right\}+\exp\left\{\gamma\cdot\left(\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2}-\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2}\right)\cdot u_{i2}\right\}},$$

for $i=1,\dots,I$, or equivalently,

$$\frac{1}{\exp\left\{\gamma\cdot\left(\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2}-\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2}\right)\right\}+1}\leq\pi_{i}\leq\frac{\exp\left\{\gamma\cdot\left(\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2}-\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2}\right)\right\}}{\exp\left\{\gamma\cdot\left(\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2}-\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2}\right)\right\}+1}.$$

The biased randomization model $\mathcal{M}_{\gamma}$ is analogous to that in Rosenbaum (1989) and reduces to the classical Rosenbaum bounds model (Rosenbaum, 1987, 2002b) when $(\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2},\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2})=(1,0)$. Under $\mathcal{M}_{\gamma}$, the degree of bias in each matched pair depends critically on the IV dose difference.
For instance, if a matched pair happens to pair two participants with the same IV dose, that is, $\widetilde{Z}^{\text{obs}}_{i1}=\widetilde{Z}^{\text{obs}}_{i2}$, then $\mathcal{M}_{\gamma}$ would imply $\pi_{i}=1/2$; that is, the IV dose assignment becomes randomized for this pair. As the IV dose difference increases, $\mathcal{M}_{\gamma}$ stipulates a larger degree of potential bias within each pair. Instead of allowing a dose-dependent maximal bias within each pair, Rosenbaum's classical $\Gamma$ sensitivity analysis model (Rosenbaum, 1987; Fogarty et al., 2021) places a uniform bound on the maximal biased randomization probability across all pairs, which could be unduly conservative for some matched pairs, e.g., those with $\widetilde{Z}^{\text{obs}}_{i1}=\widetilde{Z}^{\text{obs}}_{i2}$.

Write $\mathcal{F}^{\prime}=\{(\mathbf{x}_{ij},u_{ij},d_{Tij},d_{Cij},r_{Tij},r_{Cij}):0\leq u_{ij}\leq 1,\ i=1,\dots,I,\ j=1,2\}$. Theorem 1 derives an asymptotically valid test for the null hypothesis $H_{0}^{L}$ under the biased randomization model $\mathcal{M}_{\gamma}$ for some $\gamma\geq 0$.

###### Theorem 1.

For a fixed $\gamma\geq 0$, define $\Gamma_{i}=\exp\left\{\gamma\cdot\left(\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2}-\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2}\right)\right\}\geq 1$ to be the IV-dose-dependent odds $\pi_{i}/(1-\pi_{i})$ under $\mathcal{M}_{\gamma}$ in matched pair $i$. For each matched pair $i$, define the following scaled treated-minus-control difference in the observed outcome:

$$\hat{\tau}_{i,\Gamma_{i}}=\frac{\Gamma_{i}+1}{4\Gamma_{i}}\left\{\left(\Gamma_{i}+1\right)\hat{\tau}_{i}-\left(\Gamma_{i}-1\right)\left|\hat{\tau}_{i}\right|\right\}, \tag{5}$$

where $\hat{\tau}_{i}$ is defined as in Proposition 1. Let $\bar{\tau}_{\gamma}=\frac{1}{I}\sum_{i=1}^{I}\hat{\tau}_{i,\Gamma_{i}}$ be the sample average across all pairs and $S^{2}\left(\gamma;K_{0}\right)=\frac{1}{I(I-1)}\sum_{i=1}^{I}\left(\hat{\tau}_{i,\Gamma_{i}}-\bar{\tau}_{\gamma}\right)^{2}$ the usual variance estimator for the sample mean. Under the biased IV assignment model $\mathcal{M}_{\gamma}$ and conditional on $\mathcal{F}^{\prime},\widetilde{\mathbf{Z}}^{\text{obs}}_{\vee},\widetilde{\mathbf{Z}}^{\text{obs}}_{\wedge}$, the test that rejects $H^{L}_{0}:\text{LB}=l$ when

$$\bar{\tau}_{\gamma}-(l-K_{0})\geq z_{1-\alpha}\sqrt{S^{2}\left(\gamma;K_{0}\right)} \tag{6}$$

is an asymptotically valid level-$\alpha$ test, where $z_{1-\alpha}$ is the $(1-\alpha)$-th quantile of the standard normal distribution. A valid one-sided, level-$\alpha$ confidence interval can then be constructed by inverting the hypothesis test.

###### Remark 4.

When $\gamma=0$, $\mathcal{M}_{\gamma}$ reduces to the randomization scheme studied in Section 5.1. In this case, $\Gamma_{i}=1$ for all $i$, and $\hat{\tau}_{i,\Gamma_{i}}$ defined in (5) reduces to the usual treated-minus-control difference in the observed outcome in each pair. The test (6) and the associated confidence interval reduce to those studied in Section 5.1 under the randomization assumption.

###### Remark 5.

By replacing $K_{0}$ with $K_{1}$ in Theorem 1, a valid test can be derived analogously for testing the null hypothesis $H_{0}^{U}:\text{UB}=u$. A valid level-$\alpha$ confidence interval for the partial identification bound $[\text{LB},\text{UB}]$ may then be obtained by combining two one-sided, level-$\alpha/2$ confidence intervals of LB and UB.
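The Theorem 1 test is a one-line modification of the Proposition 1 sketch. Below, `Z_hi` and `Z_lo` are illustrative vectors holding the larger and smaller IV dose in each pair.

```r
# Theorem 1 test for H0: LB = l under the biased randomization model
# M_gamma with IV-dose-dependent odds bound Gamma_i.
test_LB_biased <- function(Z, R, D, Z_hi, Z_lo, l, K0, gamma, alpha = 0.05) {
  I <- nrow(Z)
  tau_hat <- rowSums((2 * Z - 1) * (R - K0 * D))
  Gam <- exp(gamma * (Z_hi - Z_lo))            # dose-dependent odds bound
  tau_G <- (Gam + 1) / (4 * Gam) *
    ((Gam + 1) * tau_hat - (Gam - 1) * abs(tau_hat))   # equation (5)
  S2g <- sum((tau_G - mean(tau_G))^2) / (I * (I - 1))
  stat <- mean(tau_G) - (l - K0)
  list(stat = stat, se = sqrt(S2g),
       reject = stat >= qnorm(1 - alpha) * sqrt(S2g))  # rule (6)
}
# Setting gamma = 0 gives Gam = 1 for every pair and recovers the
# randomization test of Proposition 1 (cf. Remark 4).
```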
The test derived in Theorem 1 also applies to testing the effect ratio estimand under $\mathcal{M}_{\gamma}$, as formalized in Proposition 3.

###### Proposition 3.

Consider testing the null hypothesis $H^{\lambda}_{0}:\lambda=\lambda_{0}$ regarding the effect ratio under the biased IV assignment model $\mathcal{M}_{\gamma}$. Let $\hat{\tau}_{i}=\sum_{j=1}^{2}(2Z_{ij}-1)(R_{ij}-\lambda_{0}D_{ij})$. Define $\hat{\tau}_{i,\Gamma_{i}}$, $\bar{\tau}_{\gamma}$, and $S^{2}\left(\gamma;\lambda_{0}\right)$ as in Theorem 1. Conditional on $\mathcal{F}^{\prime}$, $\widetilde{\mathbf{Z}}^{\text{obs}}_{\vee}$, and $\widetilde{\mathbf{Z}}^{\text{obs}}_{\wedge}$, the test that rejects $H^{\lambda}_{0}$ when $\bar{\tau}_{\gamma}\geq z_{1-\alpha}\sqrt{S^{2}\left(\gamma;\lambda_{0}\right)}$ is an asymptotically valid level-$\alpha$ test.

## 6 Simulation studies

We have two goals in the simulation studies. In Section 6.1, we illustrate how strengthening an IV in the design stage helps improve the compliance rate and thus narrow the partial identification bounds and deliver more informative inferential targets. In Section 6.2, we generate synthetic data modeled after the real data from the case study and verify the level of the proposed tests.

### 6.1 Simulation study I: study design and its effect on the causal estimand

We generate potential outcomes data for each participant $n\in\{1,\dots,N\}$. We consider a 5-dimensional covariate vector as follows: $X_{n,1}\sim\mathcal{N}(0,1)$, $X_{n,2}\sim\mathcal{N}(2,5)$, $X_{n,3}\sim\text{Unif}[1,3]$, $X_{n,4}\sim\text{Unif}[-2,0]$, and $X_{n,5}\sim\text{Bernoulli}(0.5)$. Each participant is associated with a step function $D_{n}(Z=z)$ with a jump at a randomly sampled threshold $T_{n}\sim\text{Unif}[20,30]$. The step function $D_{n}(Z=z)$ describes the potential treatment received for each participant as a function of the IV dose $Z$. According to this data-generating process, each participant has their own incentive structure. Factor 1 specifies the distribution of the observed IV dose $\widetilde{Z}^{obs}_{n}$ for participant $n$:

Factor 1: We consider two scenarios for $\widetilde{Z}^{obs}_{n}$: (1) $\widetilde{Z}^{obs}_{n}\sim\text{Unif}[5,50]$ is randomized; and (2) $\widetilde{Z}^{obs}_{n}=4X_{n,2}+6X_{n,3}+\varphi_{n}$ is correlated with observed covariates, where $\varphi_{n}\sim\text{Unif}[0,2]$.

The observed treatment received $D^{obs}_{n}$ is then determined by $D^{obs}_{n}=D_{n}(\widetilde{Z}^{obs}_{n})=\mathbb{I}(\widetilde{Z}_{n}^{obs}>T_{n})$. Using the covariates and the observed IV dose $\widetilde{Z}^{obs}_{n}$, we perform non-bipartite matching. Factor 2 specifies three matching algorithms under consideration:

Factor 2: $\widetilde{\mathcal{M}}_{1}$, $\widetilde{\mathcal{M}}_{2}$ and $\widetilde{\mathcal{M}}_{3}$ refer to matching designs with no caliper, a small caliper of $7$ minutes, and a large caliper of $15$ minutes on the absolute difference between the two within-matched-pair observed IV doses, respectively.
Each matching design yields $I=N/2$ matched pairs of participants, where each matched pair is indexed by $i\in[I]$ and each within-matched-pair participant is indexed by $j\in\{1,2\}$. The two observed IV doses within each matched pair, $\widetilde{Z}^{obs}_{i1}\vee\widetilde{Z}^{obs}_{i2}$ and $\widetilde{Z}^{obs}_{i1}\wedge\widetilde{Z}^{obs}_{i2}$, are fixed in each matched design. We then determine participant $ij$'s compliance status $S_{ij}$ as follows:

$$\begin{aligned}
&\text{If }D_{ij}(\widetilde{Z}^{obs}_{i1}\wedge\widetilde{Z}^{obs}_{i2})=D_{ij}(\widetilde{Z}^{obs}_{i1}\vee\widetilde{Z}^{obs}_{i2})=0,\text{ then }S_{ij}=(0,0);\\
&\text{If }D_{ij}(\widetilde{Z}^{obs}_{i1}\wedge\widetilde{Z}^{obs}_{i2})=0\text{ and }D_{ij}(\widetilde{Z}^{obs}_{i1}\vee\widetilde{Z}^{obs}_{i2})=1,\text{ then }S_{ij}=(1,0);\\
&\text{If }D_{ij}(\widetilde{Z}^{obs}_{i1}\wedge\widetilde{Z}^{obs}_{i2})=D_{ij}(\widetilde{Z}^{obs}_{i1}\vee\widetilde{Z}^{obs}_{i2})=1,\text{ then }S_{ij}=(1,1),
\end{aligned}$$

where $S_{ij}\in\{(0,0),(1,0),(1,1)\}$ indicates that participant $ij$ is a never-taker, complier or always-taker with respect to the two observed IV doses in matched pair $i$, respectively. Finally, we generate the potential outcomes $r_{d=0,ij}\sim\mathcal{N}(0,1)$. Factor 3 specifies the unit-level treatment effect and hence the data-generating process for $r_{d=1,ij}$:

Factor 3: We consider two scenarios for $r_{d=1,ij}$: (1) $r_{d=1,ij}=r_{d=0,ij}+\psi_{ij}$, where the unit-level treatment effect $\psi_{ij}$ is independent of participant $ij$'s compliance status and follows $\psi_{ij}\sim\text{Unif}[4,6]$; and (2) $r_{d=1,ij}=r_{d=0,ij}+\phi_{ij}$, where the unit-level treatment effect $\phi_{ij}$ depends on participant $ij$'s compliance status as follows: $\phi_{ij}\sim\text{Unif}[2,5]$ for a complier, $\phi_{ij}\sim\text{Unif}[4,6]$ for an always-taker, and $\phi_{ij}\sim\text{Unif}[1,3]$ for a never-taker.

Participant $ij$'s potential outcomes $r_{Tij}$ and $r_{Cij}$ are then determined by the compliance status $S_{ij}$ and the potential outcomes $(r_{d=0,ij},r_{d=1,ij})$ as follows: $(r_{Tij},r_{Cij})$ equals $(r_{d=0,ij},r_{d=0,ij})$ for never-takers, $(r_{d=1,ij},r_{d=0,ij})$ for compliers, and $(r_{d=1,ij},r_{d=1,ij})$ for always-takers. For each matched dataset, we calculate LB and UB as described in Section 4.3, with $(K_{0},K_{1})=(4,6)$ for Scenario 1 and $(K_{0},K_{1})=(1,6)$ for Scenario 2 in Factor 3. We generate data for $N=1,000$ participants and repeat the simulation $1,000$ times under each data-generating process.

Table 2: Average widths of the partial identification bounds across $1,000$ simulations. In the first column, $F_{ij}$ refers to the $j$-th scenario in the $i$-th factor.

| | $\widetilde{\mathcal{M}}_{1}$ | $\widetilde{\mathcal{M}}_{2}$ | $\widetilde{\mathcal{M}}_{3}$ |
|---|---|---|---|
| $F_{11}\times F_{31}$ | 1.03 | 0.72 | 0.41 |
| $F_{11}\times F_{32}$ | 2.57 | 1.80 | 1.02 |
| $F_{12}\times F_{31}$ | 1.83 | 1.33 | 0.83 |
| $F_{12}\times F_{32}$ | 4.58 | 3.34 | 2.07 |

Table 2 summarizes the empirical means of the widths of the partial identification bounds, $\Delta=\text{UB}-\text{LB}$, across $1,000$ simulations for each combination of the three factors. In the first column of Table 2, $F_{ij}$ denotes the $j$-th scenario in the $i$-th factor; for instance, $F_{11}\times F_{32}$ denotes the simulation setting with randomized IVs $\widetilde{Z}_{n}^{obs}$ and potential outcomes $r_{d=1,ij}$ that depend on the compliance status.
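To make the data-generating process concrete, the following sketch reproduces Factor 1, Scenario 1 and Factor 3, Scenario 1, replacing the non-bipartite matching step with a random pairing purely for illustration; it also verifies numerically that the bound width equals $(K_{1}-K_{0})(1-\iota_{C})$, as derived in Section 4.3.

```r
# Sketch of Simulation I (F11 x F31); the "matching" is a placeholder.
set.seed(1)
N <- 1000; I <- N / 2
Tn <- runif(N, 20, 30)                    # unit-specific thresholds T_n
Z  <- runif(N, 5, 50)                     # randomized IV doses
pair <- matrix(sample(N), ncol = 2)       # placeholder matched pairs
Z_hi <- pmax(Z[pair[, 1]], Z[pair[, 2]])  # larger dose in each pair
Z_lo <- pmin(Z[pair[, 1]], Z[pair[, 2]])  # smaller dose in each pair
Tmat <- matrix(Tn[pair], ncol = 2)
dT <- (Z_hi > Tmat) * 1                   # D_n(Z_hi) = 1{Z_hi > T_n}
dC <- (Z_lo > Tmat) * 1                   # D_n(Z_lo)
iota <- mean(dT - dC)                     # compliance rate, equation (2)
r0 <- matrix(rnorm(N), ncol = 2)          # r_{d=0,ij}
r1 <- r0 + matrix(runif(N, 4, 6), ncol = 2)
rT <- ifelse(dT == 1, r1, r0)             # (r_T, r_C) follow compliance
rC <- ifelse(dC == 1, r1, r0)
K0 <- 4; K1 <- 6
LB <- mean(rT - rC) - K0 * iota + K0
UB <- mean(rT - rC) - K1 * iota + K1
c(width = UB - LB, check = (K1 - K0) * (1 - iota))  # identical, by design
```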
Across the four data-generating processes specified by Factors $1$ and $3$, the more we strengthen the continuous IV, the narrower the widths of the partial identification bounds, yielding more useful information on the SATE. This is in line with our expectation, as the width of a partial identification bound is proportional to one minus the compliance rate, and the compliance rate increases with the IV dose caliper. Table S3 in Supplemental Material D further summarizes within-matched-pair covariate balance under each data-generating process using the average absolute value of the standardized mean difference (SMD) across $1,000$ simulations. The average absolute value of the SMD increases with the IV dose caliper, with $X_{2}$ and $X_{3}$ exhibiting the largest increase when they are correlated with the IV dose as specified by Factor 1, Scenario 2. Nevertheless, the SMDs are almost always less than $0.1$, which usually indicates good covariate balance in empirical studies (Silber et al., 2001).

### 6.2 Simulation study II: validity of the proposed tests

In this simulation study, we verify the validity of the proposed statistical tests under randomization inference (Proposition 1) and biased randomization inference (Theorem 1). For every unit $j$ in matched pair $i$, we first generate potential outcomes data. Specifically, we sample two continuous IV doses $\widetilde{Z}^{obs}_{i1}\wedge\widetilde{Z}^{obs}_{i2}$ and $\widetilde{Z}^{obs}_{i1}\vee\widetilde{Z}^{obs}_{i2}$ with replacement from the real NICU dataset. Unit $ij$'s compliance status $S_{ij}(\widetilde{Z}^{obs}_{i1}\wedge\widetilde{Z}^{obs}_{i2},\widetilde{Z}^{obs}_{i1}\vee\widetilde{Z}^{obs}_{i2})\in\{(1,0),(1,1),(0,0)\}$ is generated as follows:

$$\begin{aligned}
&P\{S_{ij}=(1,0)\}=C^{-1};\\
&P\{S_{ij}=(1,1)\}=C^{-1}\cdot\exp\{-0.2\times(\widetilde{Z}^{obs}_{i1}\vee\widetilde{Z}^{obs}_{i2}-\widetilde{Z}^{obs}_{i1}\wedge\widetilde{Z}^{obs}_{i2})\};\\
&P\{S_{ij}=(0,0)\}=C^{-1}\cdot\exp\{-\alpha_{i}(\widetilde{Z}^{obs}_{i1}\vee\widetilde{Z}^{obs}_{i2})\},
\end{aligned}$$

where $C$ is a normalizing constant, so that $ij$'s compliance status depends on the two IV doses in each matched pair. The latent compliance category $S_{ij}$ then determines $(d_{Tij},d_{Cij})$ as defined in (1). We consider two scenarios for each of the two types of potential outcomes $(r_{d=0,ij},r_{d=1,ij})$:

Continuous case: Scenario 1: $r_{d=0,ij}\sim\mathcal{N}(0,1)$ and $r_{d=1,ij}=r_{d=0,ij}$; Scenario 2: $r_{d=0,ij}\sim\mathcal{N}(0,1)$ and $r_{d=1,ij}=r_{d=0,ij}+\tau_{ij}$, where $\tau_{ij}\sim\text{Unif}[-1,1]$.

Binary case: Scenario 1: $P(r_{d=0,ij}=1)=\text{expit}(0.5)$, where $\text{expit}(x)=\exp(x)/\{1+\exp(x)\}$ is the inverse of the logit function, and $r_{d=1,ij}=r_{d=0,ij}$; Scenario 2: $P(r_{d=0,ij}=1)=\text{expit}(0.5)$ and $P(r_{d=1,ij}=1)=\text{expit}(0.5+\tau_{ij})$, where $\tau_{ij}\sim\text{Unif}[-1,1]$.

For both outcome types, Scenario 1 represents a sharp null hypothesis of no unit-level treatment effect, while Scenario 2 allows a heterogeneous treatment effect. Once we generate the potential outcomes data, for every matched pair $i$, we let the participant with the larger potential outcome under control, i.e., $r_{d=0,ij}$, be more likely to receive the treatment under a biased IV dose assignment mechanism $\pi_{i}$, which is introduced below. In this way, the IV doses are correlated with both the treatment assignment and the potential outcomes $(r_{d=0,ij},r_{d=1,ij})$.
According to this data-generating process, $ij$’s potential outcomes $(r_{T_{ij}},r_{C_{ij}})$ are completely specified by the compliance category $S_{ij}$ and potential outcomes $(r_{d=1,ij},r_{d=0,ij})$. Finally, we generate the observed IV dose $\widetilde{Z}_{ij}^{obs}$ according to one of the following IV dose assignment mechanisms:

$\begin{split}\pi_{i}&=P(\widetilde{Z}_{i1}=\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2},\widetilde{Z}_{i2}=\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2}\mid\mathcal{F}^{\prime},\widetilde{\mathbf{Z}}^{\text{obs}}_{\vee},\widetilde{\mathbf{Z}}^{\text{obs}}_{\wedge})\\\ &\sim\text{Unif}\left[\frac{1}{2},\frac{\exp\left\\{\gamma(\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2}-\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2})\right\\}}{\exp\left\\{\gamma(\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2}-\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2})\right\\}+1}\right]\quad(\textbf{Mechanism I}),\quad\text{or}\\\ &=\frac{\exp\left\\{\gamma(\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2}-\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2})\right\\}}{\exp\left\\{\gamma(\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2}-\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2})\right\\}+1}\quad(\textbf{Mechanism II}).\end{split}$

The observed data comprise the observed IV dose $\widetilde{Z}_{ij}^{obs},$ treatment $D_{ij},$ and outcome $R_{ij}$, with the observed $D_{ij}$ and $R_{ij}$ completely determined by $\widetilde{Z}_{ij}^{obs}$ and the potential outcomes data. We vary the following factors in the simulation study:

Factor 1: Number of matched pairs $I$: $100$, $500$, $1000$, and $2000$.
Factor 2: IV dose assignment bias parameter $\gamma$: $0$, $0.025$, and $0.05$. If $\gamma=0$, the two IV doses are randomly assigned within each pair and the randomization assumption holds under either mechanism. Otherwise, the IV dose assignment could be biased within each pair, and the degree of bias depends on the difference between the two IV doses.

In Scenario 1, we have $(K_{0},K_{1})=(0,0)$ regardless of whether the outcome is continuous or binary. In Scenario 2, $(K_{0},K_{1})=(-1,1)$ for both the continuous and binary potential outcomes. In each setting, we repeat the simulation $1,000$ times. For each simulated dataset, we calculate the ground truth $LB_{0}=\frac{1}{N}\sum_{ij}(r_{Tij}-r_{Cij})-K_{0}\times\frac{1}{N}\sum_{ij}(d_{Tij}-d_{Cij})+K_{0}$ and construct the 95% confidence interval for $LB$ according to Proposition 1 and Theorem 1. We apply Proposition 1 and Theorem 1 to data generated under the biased IV dose assignment scheme, i.e., $\gamma\neq 0$, to illustrate that the 95% confidence intervals constructed according to Theorem 1 have the desired level while those constructed using Proposition 1 may not.
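The two assignment mechanisms and the ground-truth lower bound admit a direct implementation; a minimal sketch follows, where `z_min` and `z_max` denote the pairwise dose extremes, $(d_{T},d_{C})$ follow from the compliance classes as in (1), and the helper names are ours.

```python
import numpy as np

def assign_doses(z_min, z_max, gamma, mechanism="II", rng=None):
    """Return True where unit 1 receives the larger dose, per Mechanism I or II."""
    rng = rng or np.random.default_rng(2)
    odds = np.exp(gamma * (z_max - z_min)).ravel()
    p_upper = odds / (odds + 1.0)                        # Gamma_i / (1 + Gamma_i)
    if mechanism == "I":
        pi = rng.uniform(0.5, np.maximum(p_upper, 0.5))  # pi_i ~ Unif[1/2, upper]
    else:
        pi = p_upper                                     # pi_i fixed at the upper limit
    return rng.uniform(size=pi.shape) < pi

def ground_truth_LB(rT, rC, dT, dC, K0):
    """LB_0 = mean(r_T - r_C) - K0 * mean(d_T - d_C) + K0, as defined above."""
    return (rT - rC).mean() - K0 * (dT - dC).mean() + K0
```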
Table 3 considers the case where the potential outcomes $(r_{d=0,ij},r_{d=1,ij})$ are continuous and summarizes the coverage of $95\%$ confidence intervals under Proposition 1 and Theorem 1 across $1,000$ simulations in Scenario 1 and Scenario 2. In the absence of a biased IV dose assignment, i.e., $\gamma=0$, the two IV assignment mechanisms collapse to one, and the test attains nominal or near-nominal coverage in both scenarios across a range of sample sizes. As the degree of bias in the IV dose assignment increases in either scenario, the test exhibits exceedingly small or zero coverage under Proposition 1. In such cases, conducting randomization inference for $LB$ overestimates $LB$ and leads to under-coverage, even though the test tends to be conservative in the first place. By contrast, the test constructed according to Theorem 1 tends to achieve perfect coverage under Mechanism I and close to nominal coverage under Mechanism II, with the degree of conservativeness varying with unit-level treatment effect heterogeneity. Two sources of conservativeness account for this trend. First, the test constructed according to Theorem 1 assumes a worst-case allocation of bias within each pair; hence, the test tends to be more conservative for Mechanism I compared to Mechanism II. Second, even under the worst-case bias allocation in Mechanism II, the variance estimator of the test statistic is a conservative estimator of the true variance in Scenario 2; therefore, the test still tends to be conservative, with coverage often exceeding its nominal level. We observe similar trends for the case where the potential outcomes $(r_{d=0,ij},r_{d=1,ij})$ are binary; see Supplemental Material D for details.

Table 3: Simulation results when the potential outcomes $(r_{d=0,ij},r_{d=1,ij})$ are continuous under two scenarios: causal null hypothesis (Scenario 1) and heterogeneous unit-level treatment effect (Scenario 2). We report the average coverage of $95\%$ confidence intervals under Proposition 1 and Theorem 1 across $1,000$ simulations.

| | $I$ | $\gamma$ | Scenario 1: Proposition 1 | Scenario 1: Theorem 1 | Scenario 2: Proposition 1 | Scenario 2: Theorem 1 |
|---|---|---|---|---|---|---|
| Mechanism I | 100 | 0.000 | 0.937 | 0.946 | 0.953 | 0.944 |
| | 500 | 0.000 | 0.949 | 0.951 | 0.961 | 0.958 |
| | 1000 | 0.000 | 0.955 | 0.945 | 0.972 | 0.971 |
| | 2000 | 0.000 | 0.954 | 0.941 | 0.965 | 0.959 |
| | 100 | 0.025 | 0.797 | 0.997 | 0.826 | 0.998 |
| | 500 | 0.025 | 0.349 | 1.000 | 0.416 | 1.000 |
| | 1000 | 0.025 | 0.100 | 1.000 | 0.142 | 1.000 |
| | 2000 | 0.025 | 0.002 | 1.000 | 0.005 | 1.000 |
| | 100 | 0.050 | 0.537 | 1.000 | 0.611 | 1.000 |
| | 500 | 0.050 | 0.020 | 1.000 | 0.030 | 1.000 |
| | 1000 | 0.050 | 0.000 | 1.000 | 0.000 | 1.000 |
| | 2000 | 0.050 | 0.000 | 1.000 | 0.000 | 1.000 |
| Mechanism II | 100 | 0.000 | 0.953 | 0.951 | 0.963 | 0.966 |
| | 500 | 0.000 | 0.953 | 0.945 | 0.960 | 0.954 |
| | 1000 | 0.000 | 0.945 | 0.963 | 0.961 | 0.964 |
| | 2000 | 0.000 | 0.947 | 0.951 | 0.963 | 0.957 |
| | 100 | 0.025 | 0.405 | 0.932 | 0.522 | 0.973 |
| | 500 | 0.025 | 0.002 | 0.943 | 0.007 | 0.996 |
| | 1000 | 0.025 | 0.000 | 0.946 | 0.000 | 0.998 |
| | 2000 | 0.025 | 0.000 | 0.940 | 0.000 | 1.000 |
| | 100 | 0.050 | 0.028 | 0.893 | 0.069 | 0.980 |
| | 500 | 0.050 | 0.000 | 0.922 | 0.000 | 0.998 |
| | 1000 | 0.050 | 0.000 | 0.920 | 0.000 | 1.000 |
| | 2000 | 0.050 | 0.000 | 0.944 | 0.000 | 1.000 |

## 7 Revisiting the NICU study

### 7.1 Assessing the randomization assumption

As a first step towards a randomization-based outcome analysis for the design M1, we assessed the randomization assumption using the Classification Permutation Test (CPT) (Gagnon-Bartsch and Shem-Tov, 2019). Specifically, we tested the null hypothesis that the excess travel times of mothers within each matched pair are randomly assigned.
When implementing the CPT, we fit a logistic regression model to predict which mother within each matched pair had a larger excess travel time based on the observed covariates and used the in-sample classification accuracy rate as our test statistic. We then obtained the permutation-based null distribution of the test statistic based on $500$ permutations of the IV doses. Finally, we computed an exact p-value by comparing the observed test statistic to the reference distribution. Figure 2a shows this null distribution with the observed test statistic superimposed. A large value of the test statistic indicates that the covariates, even after matching, are still predictive of the IV dose magnitude within each matched pair and serves as evidence against the randomization assumption. Thus, the randomization assumption for the design M1 was rejected (p-value $<0.001$).

To what extent was the randomization assumption violated? Towards understanding this question, we proceeded to test the following biased randomization assumption: $H_{0,\Gamma}:\frac{1}{1+\Gamma}\leq\pi_{i}\leq\frac{\Gamma}{1+\Gamma}$ for all $i$ (Chen et al., 2023). The null hypothesis $H_{0,\Gamma}$ implies that the IV-dose-dependent odds ratio $\Gamma_{i}$ for every matched pair $i$ within M1 is uniformly bounded between $1/\Gamma$ and $\Gamma$ and hence the IV assignment probability $\pi_{i}$ is uniformly bounded between $1/(1+\Gamma)$ and $\Gamma/(1+\Gamma)$. Using Chen et al.’s (2023) sample-splitting CPT, we rejected the null hypothesis $H_{0,\Gamma}$ for $\Gamma$ as large as $1.17$; therefore, we have evidence that at least one matched pair exhibits bias in the IV dose assignment odds $\Gamma_{i}$ at a magnitude larger than $1.17$. This implies that the smallest $\gamma$ in $\mathcal{M}_{\gamma}$ not falsified by the observed data and the diagnostic test equals $\gamma=\log(1.17)/\max_{i}(\widetilde{Z}^{\text{obs}}_{i1}\vee\widetilde{Z}^{\text{obs}}_{i2}-\widetilde{Z}^{\text{obs}}_{i1}\wedge\widetilde{Z}^{\text{obs}}_{i2})=\log(1.17)/129=0.0012$ for M1. Figure 2b displays the distribution of $\Gamma_{i}$’s across all matched pairs in M1 when $\gamma=0.0012$. This value of $\gamma$ and its associated sensitivity analysis model $\mathcal{M}_{\gamma}$ served as the basis for our biased-randomization-based primary outcome analysis.

Figure 2: Panel (a): Distribution of the in-sample classification accuracy rate under the null hypothesis for M1. The vertical bar denotes the observed test statistic. Panel (b): density plot of $\Gamma_{i}$’s with $\gamma=0.0012$ in the design M1.
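A minimal sketch of the CPT used above, with scikit-learn's logistic regression as the classifier: `X` holds within-pair covariate differences (unit 1 minus unit 2) and `y` indicates whether unit 1 had the larger excess travel time, so that permuting the IV doses within pairs is equivalent to redrawing each label as a fair coin flip under the null.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def in_sample_accuracy(X, y):
    """The CPT test statistic: in-sample accuracy of a logistic classifier."""
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return (clf.predict(X) == y).mean()

def cpt_pvalue(X, y, n_perm=500, rng=None):
    """Exact permutation p-value against the randomization null."""
    rng = rng or np.random.default_rng(0)
    t_obs = in_sample_accuracy(X, y)
    t_null = np.array([
        in_sample_accuracy(X, rng.integers(0, 2, size=y.shape))
        for _ in range(n_perm)
    ])
    return (1 + (t_null >= t_obs).sum()) / (n_perm + 1)
```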
### 7.2 Primary analysis of the matched sample

We conducted a biased-randomization-based primary analysis for the matched sample M1 under $\mathcal{M}_{\gamma}$ with $\gamma=0.0012$. We considered two outcomes of interest. The primary outcome of interest was infant mortality rate. We also considered a composite outcome that combines infant mortality rate and length of stay in the intensive care unit (ICU) as an overall measurement of health care quality; such a composite outcome is referred to as a “placement of death” approach (Lin et al., 2017) and used in many clinical settings (see, e.g., Courtright et al., 2024). Specifically, this composite outcome equals the length of stay in the ICU (LOSI) if the baby survived and the top 1% quantile of survivor LOSI if the baby died.

Table 4 summarizes the sample size, compliance rate, mortality, LOSI among survivors, and the composite outcome in the near group (mothers with the smaller excess travel time in each pair) and far group (mothers with the larger excess travel time in each pair).

Table 4: Level of encouragement, proportion attending a high-level NICU, mortality, LOSI among survivors, and the composite outcome in the near (mothers with the smaller excess travel time in each pair) group and far (mothers with the larger excess travel time in each pair) group, along with inferential results.

| | | Near Group | Far Group | Pt. Est. | 95% CI |
|---|---|---|---|---|---|
| Primary analysis: Entire cohort | Sample size | 81,766 | 81,766 | | |
| | Excess travel time (min) | 2.07 | 27.1 | | |
| | High-level NICU (%) | 71.9 | 35.5 | | |
| | Death (%) | 2.1 | 2.6 | | |
| | Effect ratio, $\lambda$ | | | 1.28 | [0.69, 1.90] |
| | SATE, $\kappa$ | | | | Fig 3(a) |
| | LOSI among survivors (days) | 8.16 | 7.65 | | |
| | Composite outcome | 9.64 | 9.50 | | |
| | Effect ratio, $\lambda$ | | | -0.38 | [-1.34, 0.59] |
| | SATE, $\kappa$ | | | | Fig 3(b) |
| Subgroup Analysis I: Black | Sample size | 13,454 | 13,454 | | |
| | Excess travel time (min) | -0.39 | 9.25 | | |
| | High-level NICU (%) | 82.96 | 64.64 | | |
| | Death (%) | 3.54 | 3.84 | | |
| | Effect ratio, $\lambda$ | | | 1.62 | [-1.16, 4.50] |
| | SATE, $\kappa$ | | | | Fig 4(a) |
| | LOSI among survivors (days) | 9.95 | 9.58 | | |
| | Composite outcome | 12.81 | 12.72 | | |
| | Effect ratio, $\lambda$ | | | -0.48 | [-1.34, 0.59] |
| | SATE, $\kappa$ | | | | Fig S1 |
| Subgroup Analysis II: Non-Black, Age $\geq 35$, Gestational age $\leq 36$ | Sample size | 6,863 | 6,863 | | |
| | Excess travel time (min) | 1.95 | 26.01 | | |
| | High-level NICU (%) | 77.78 | 45.32 | | |
| | Death (%) | 2.77 | 3.77 | | |
| | Effect ratio, $\lambda$ | | | 3.10 | [0.92, 5.42] |
| | SATE, $\kappa$ | | | | Fig 4(b) |
| | LOSI among survivors (days) | 11.84 | 11.36 | | |
| | Composite outcome | 14.15 | 14.58 | | |
| | Effect ratio, $\lambda$ | | | 1.32 | [-1.96, 4.74] |
| | SATE, $\kappa$ | | | | Fig S1 |
| Subgroup Analysis III: Non-Black, Age $\leq 35$, Gestational age $\leq 36$ | Sample size | 32,294 | 32,294 | | |
| | Excess travel time (min) | 2.54 | 30.86 | | |
| | High-level NICU (%) | 71.29 | 37.40 | | |
| | Death (%) | 2.31 | 2.85 | | |
| | Effect ratio, $\lambda$ | | | 2.02 | [0.85, 3.26] |
| | SATE, $\kappa$ | | | | Fig 4(c) |
| | LOSI among survivors (days) | 11.58 | 10.66 | | |
| | Composite outcome | 13.42 | 13.72 | | |
| | Effect ratio, $\lambda$ | | | -0.81 | [-2.78, 1.21] |
| | SATE, $\kappa$ | | | | Fig S1 |
| Subgroup Analysis IV: Gestational age $>36$ | Sample size | 29,155 | 29,155 | | |
| | Excess travel time (min) | 2.70 | 31.43 | | |
| | High-level NICU (%) | 65.87 | 17.73 | | |
| | Death (%) | 0.23 | 0.41 | | |
| | Effect ratio, $\lambda$ | | | 0.35 | [0.10, 0.61] |
| | SATE, $\kappa$ | | | | Fig 4(d) |
| | LOSI among survivors (days) | 2.72 | 2.60 | | |
| | Composite outcome | 2.73 | 2.62 | | |
| | Effect ratio, $\lambda$ | | | -0.24 | [-0.42, -0.07] |
| | SATE, $\kappa$ | | | | Fig S1 |

We first made inference for the effect ratio estimand under $\mathcal{M}_{\gamma}$ with $\gamma=0.0012$ using Proposition 3. The effect ratio of the mortality outcome (low-level NICU vs. high-level NICU) was estimated to be $1.28\%$ with a $95\%$ confidence interval of $[0.69\%,1.90\%]$, providing evidence that delivering at a low-level NICU increased infants’ mortality rate among the $71.9\%-35.5\%=36.4\%$ compliers in M1. Such an estimated treatment effect on compliers is substantial, almost half of the overall mortality rate in the entire study cohort. On the other hand, the effect on the composite outcome was estimated to be $-0.38$ among compliers with a $95\%$ confidence interval of $[-1.34,0.59]$, not providing evidence for an effect on the composite outcome.
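At its core, the effect ratio point estimate is a ratio of matched-pair mean differences (a Wald-type estimator); the following minimal sketch assumes arrays oriented so that column 0 holds the far (larger-dose) unit in each pair. The confidence intervals reported in Table 4 additionally require the biased-randomization inference of Proposition 3 under $\mathcal{M}_{\gamma}$, which the sketch does not implement.

```python
import numpy as np

def effect_ratio(R, D):
    """Wald-type effect ratio for matched pairs.

    R, D: (I, 2) outcome and treatment indicators; column 0 is the unit with
    the larger IV dose (far), column 1 the unit with the smaller dose (near).
    """
    num = (R[:, 0] - R[:, 1]).mean()  # encouragement effect on the outcome
    den = (D[:, 0] - D[:, 1]).mean()  # encouragement effect on treatment uptake
    return num / den
```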
Figure 3: $95\%$ confidence interval plot of $\kappa$ for (a) the mortality rate with $K_{0}=0$ and $0\leq K_{1}\leq 0.03$ and (b) the composite outcome with $0\leq-K_{0}=K_{1}=K\leq 2$, under model $\mathcal{M}_{\gamma}$ with $\gamma=0.0012.$ The dotted lines represent the point estimates while the solid lines represent the one-sided confidence limits of UB (in blue) and LB (in red), respectively.

We next considered inferring the sample average treatment effect for the entire matched cohort M1 under $\mathcal{M}_{\gamma}$ with $\gamma=0.0012$ based on Theorem 1. The left panel of Figure 3 plots the 95% confidence intervals of the mortality outcome for $81,766\times 2=163,532$ participants in M1 against different hypothesized upper bounds $K_{1}$ on the unidentified SATE among non-compliers. We assumed that, compared to a low-level NICU, delivering at a high-level NICU would not hurt non-compliers, so $K_{0}=0$. For instance, the 95% confidence interval for the SATE was estimated to be $[0.25\%,1.32\%]$ when $K_{1}=1\%$ and $[0.25\%,1.97\%]$ when $K_{1}=2\%$. Results in Figure 3 simultaneously account for the sampling variability, the bias in the instrumental variable (controlled by $\mathcal{M}_{\gamma}$), and the unidentifiability of the SATE among non-compliers (controlled by the sensitivity parameters $K_{0}$ and $K_{1}$). Finally, the right panel of Figure 3 summarizes the results for the composite outcome for some selected $K_{0}$ and $K_{1}$ values.
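The point estimates traced in these plots follow from sample analogues of the bound formula (cf. the ground-truth expression in Section 6.2), namely $LB=\text{ITT}_{R}+K_{0}(1-\delta)$ and $UB=\text{ITT}_{R}+K_{1}(1-\delta)$, with $\delta$ the compliance rate. The sketch below is ours and ignores the sampling variability and the worst-case bias adjustment of Theorem 1 that the plotted confidence limits additionally account for.

```python
import numpy as np

def sate_bounds(R, D, K0, K1):
    """Point estimates of the partial identification bounds [LB, UB] for the SATE.

    Columns as before: 0 = far (larger dose), 1 = near (smaller dose).
    """
    itt_r = (R[:, 0] - R[:, 1]).mean()  # encouragement effect on the outcome
    delta = (D[:, 0] - D[:, 1]).mean()  # estimated compliance rate
    return itt_r + K0 * (1.0 - delta), itt_r + K1 * (1.0 - delta)

# Tracing the bounds against K1 with K0 = 0, as in Figure 3(a):
# for K1 in np.linspace(0.0, 0.03, 7):
#     print(K1, sate_bounds(R, D, 0.0, K1))
```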
### 7.3 Subgroup analyses: Does the treatment effect vary by risk groups?

To further examine treatment effect heterogeneity, we performed an additional biased-randomization-based primary outcome analysis for the matched sample M1 under Proposition 3 and model $\mathcal{M}_{\gamma}$ with $\gamma=0.0012$ on four subgroups of mothers: (i) black mothers; (ii) non-black mothers aged 35 or older whose gestational age was at most 30 weeks (non-black, high risk); (iii) non-black mothers younger than 35 whose gestational age was at most 30 weeks (non-black, medium risk); and (iv) non-black mothers whose gestational age exceeded 30 weeks (non-black, low risk). Table 4 summarizes the sample size, compliance rate, and outcomes of interest in the near and far groups for these subgroups in the design M1. For the subgroup of black mothers, the estimated effect ratio (low-level NICU vs. high-level NICU) is $1.62\%$ with a $95\%$ confidence interval of $[-1.16\%,4.50\%].$ Thus, we did not find evidence for a treatment effect among the $82.96\%-64.64\%=18.32\%$ compliers in $\textsf{M1}.$ Figure 4a further plots the 95% CIs for the sample average treatment effect among black mothers against different levels of $K_{1}$. Because of the low compliance rate among black mothers (even after strengthening) and a smaller sample size, both the effect ratio estimate and the partial identification bound estimates are less precise compared to the analysis based on the entire study cohort in the design M1. Our analysis acknowledges that excess travel time yields rather limited information regarding a potential treatment effect of delivering at a high-level NICU vs. a low-level NICU for the black mothers in Pennsylvania, mostly as a consequence of the geography of the state.

Among the non-black mothers, the effect ratio (low-level NICU vs. high-level NICU) was estimated to be $3.10\%$ with a $95\%$ confidence interval of $[0.92\%,5.42\%]$ among the non-black, high-risk subgroup; $2.02\%$ with a $95\%$ confidence interval of $[0.85\%,3.26\%]$ among the non-black, medium-risk subgroup; and $0.35\%$ with a $95\%$ confidence interval of $[0.10\%,0.61\%]$ among the non-black, low-risk subgroup. Our analyses suggest that the clinical benefit of delivering at a high-level NICU appeared to be most pronounced among compliers in the high-risk subgroup, with a risk difference estimate almost 10-fold that among compliers in the low-risk subgroup. Figure 4 further plots the 95% CIs of the sample average treatment effect for each non-black subgroup against different values of $K_{1}$. Comparing the partial identification bounds of the three non-black risk groups, the low-risk subgroup appeared to benefit only minimally from the high-level NICU; for instance, a $95\%$ CI for the SATE equals $[0.05\%,0.39\%]$ when $K_{1}=0.2\%$, $[0.05\%,0.50\%]$ when $K_{1}=0.4\%$, and $[0.05\%,0.61\%]$ when $K_{1}=0.6\%$. Interestingly, among the four subgroups, the partial identification bounds are the narrowest for the non-black, low-risk subgroup, as a consequence of the large sample size ($n=29,155\times 2=58,310$) and high compliance rate ($65.9\%-17.7\%=48.2\%$) in this subgroup.

Finally, we repeated the subgroup analysis for the composite outcome. We did not find evidence of a difference in the composite outcome among compliers in the black, high-risk non-black, and medium-risk non-black subgroups; however, the effect ratio was estimated to be $-0.24$ with a $95\%$ confidence interval of $[-0.42,-0.07]$ for low-risk non-black mothers, providing some additional evidence that the triage system may consider sending low-risk mothers to low-level NICUs, especially under a resource constraint for high-level NICUs. For a $95\%$ confidence interval plot of the sample average treatment effect for each subgroup, see Supplemental Material C.

Figure 4: $95\%$ confidence interval plots of $\kappa$ for the mortality rate with $K_{0}=0$ and $0\leq K_{1}\leq 0.03$, under model $\mathcal{M}_{\gamma}$ with $\gamma=0.0012$, for (a) black mothers, (b) non-black, high-risk mothers, (c) non-black, medium-risk mothers, and (d) non-black, low-risk mothers. The dotted lines represent the point estimates while the solid lines represent the one-sided confidence limits of UB (in blue) and LB (in red), respectively.

## 8 Discussion

Strengthening an instrumental variable when designing an observational study is a novel and useful tool currently underused in empirical comparative effectiveness research. It also has profound consequences. In their discussion, Baiocchi et al. (2010, Section 5) discussed aspects that would change as one switches between an unstrengthened IV design and a strengthened IV design, or between two IV designs that strengthen an IV to different extents. This article builds upon the discussion in Baiocchi et al. (2010) and related critiques in Deaton (2009) and proposes strategies to better reconcile different strengthened IV designs.
One promising strategy is to complement an effect ratio estimate with partial identification bounds on the sample average treatment effect. Unlike the effect ratio estimate that targets the SATE among compliers, a study-design-dependent estimand, the SATE on the entire matched cohort is agnostic about how one forms the matched pairs or matched sets. Just as strengthening an IV improves the precision of the effect ratio estimate, strengthening an IV also helps narrow the partial identification bounds by improving the compliance rate. We studied the effect of delivering at a high-level NICU versus a low-level NICU using a comprehensive administrative database in Pennsylvania and a suite of tools developed in this article, including an improved non-bipartite matching algorithm that emulates a randomized encouragement trial, inferential methods for constructing a valid level-$\alpha$ confidence interval for partial identification bounds, and inferential methods that conduct valid inference under a more realistic and less conservative IV-dependent biased randomization scheme. Our data analyses suggest that delivering at a high-level NICU decreased preterm infants’ mortality rate. The effect was found to be heterogeneous, with the most pronounced effect for non-black, high-risk mothers but a marginal effect for the non-black, low-risk mothers, providing some evidence that a triage system may not need to send low-risk mothers and their preterm babies to high-level NICUs. Interestingly, although black mothers were at elevated risk and could potentially benefit from delivering at a high-level NICU, our analysis did not find evidence for this, most likely because the excess travel time was a weak IV, even after strengthening, for black mothers in Pennsylvania.

In this study, we pre-specified several subgroups based on previous clinical findings (Hansen, 1986; Yang et al., 2014; Mathews et al., 2015). With a binary treatment and in the context of testing Fisher’s sharp null hypothesis, data-dependent methods that identify promising subgroups have been developed and deployed in matched observational studies (Hsu et al., 2013, 2015; Lee et al., 2018). It is of great interest to develop methods that could identify subgroups most amenable to being strengthened and most promising for a large treatment effect.

## References

* Angrist, J. and Fernandez-Val, I. (2010). ExtrapoLATE-ing: External validity and overidentification in the LATE framework. Technical report, National Bureau of Economic Research.
* Angrist, J. D. and Imbens, G. W. (1995). Two-stage least squares estimation of average causal effects in models with variable treatment intensity. Journal of the American Statistical Association, 90(430):431–442.
* Angrist, J. D., Imbens, G. W., and Rubin, D. B. (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association, 91(434):444–455.
* Baiocchi, M., Cheng, J., and Small, D. S. (2014). Instrumental variable methods for causal inference. Statistics in Medicine, 33(13):2297–2340.
* Baiocchi, M., Small, D. S., Lorch, S., and Rosenbaum, P. R. (2010). Building a stronger instrument in an observational study of perinatal care for premature infants. Journal of the American Statistical Association, 105(492):1285–1296.
* Bennett, M., Vielma, J. P., and Zubizarreta, J. R. (2020).
Building representative matched samples with multi-valued treatments in large observational studies. Journal of Computational and Graphical Statistics, 29(4):744–757.
* Chen, K., Heng, S., Long, Q., and Zhang, B. (2023). Testing biased randomization assumptions and quantifying imperfect matching and residual confounding in matched observational studies. Journal of Computational and Graphical Statistics, 32(2):528–538.
* Chen, S. and Zhang, B. (2023). Estimating and improving dynamic treatment regimes with a time-varying instrumental variable. Journal of the Royal Statistical Society Series B: Statistical Methodology, 85(2):427–453.
* Cole, S. R. and Stuart, E. A. (2010). Generalizing evidence from randomized clinical trials to target populations: the ACTG 320 trial. American Journal of Epidemiology, 172(1):107–115.
* Cook, T. D., Campbell, D. T., and Shadish, W. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin, Boston, MA.
* Courtright, K. R., Madden, V., Bayes, B., Chowdhury, M., Whitman, C., Small, D. S., Harhay, M. O., Parra, S., Cooney-Zingman, E., Ersek, M., et al. (2024). Default palliative care consultation for seriously ill hospitalized patients: A pragmatic cluster randomized trial. JAMA, 331(3):224–232.
* Deaton, A. S. (2009). Instruments of development: Randomization in the tropics, and the search for the elusive keys to economic development. Technical report, National Bureau of Economic Research.
* Derigs, U. (1988). Solving non-bipartite matching problems via shortest path techniques. Annals of Operations Research, 13(1):225–261.
* Ding, P. and Lu, J. (2017). Principal stratification analysis using principal scores. Journal of the Royal Statistical Society. Series B (Statistical Methodology), pages 757–777.
* Fogarty, C. B. (2018a). On mitigating the analytical limitations of finely stratified experiments. Journal of the Royal Statistical Society Series B: Statistical Methodology, 80(5):1035–1056.
* Fogarty, C. B. (2018b). Regression-assisted inference for the average treatment effect in paired experiments. Biometrika, 105(4):994–1000.
* Fogarty, C. B. (2020). Studentized sensitivity analysis for the sample average treatment effect in paired observational studies. Journal of the American Statistical Association, 115(531):1518–1530.
* Fogarty, C. B., Lee, K., Kelz, R. R., and Keele, L. J. (2021). Biased encouragements and heterogeneous effects in an instrumental variable study of emergency general surgical outcomes. Journal of the American Statistical Association, 116(536):1625–1636.
* Gagnon-Bartsch, J. and Shem-Tov, Y. (2019). The classification permutation test: A flexible approach to testing for covariate imbalance in observational studies. Annals of Applied Statistics, 13(3):1464–1483.
* Haavelmo, T. (1943). The statistical implications of a system of simultaneous equations. Econometrica, Journal of the Econometric Society, pages 1–12.
* Hansen, J. P. (1986). Older maternal age and pregnancy outcome: a review of the literature. Obstetrical & Gynecological Survey, 41(11):726–742.
* Heckman, J. J. and Urzua, S. (2010). Comparing IV with structural models: What simple IV can and cannot identify.
Journal of Econometrics, 156(1):27–37.
* Heng, S., Zhang, B., Han, X., Lorch, S. A., and Small, D. S. (2023). Instrumental variables: to strengthen or not to strengthen? Journal of the Royal Statistical Society Series A: Statistics in Society, page qnad075.
* Hernán, M. A. and Robins, J. M. (2006). Instruments for causal inference: an epidemiologist’s dream? Epidemiology, pages 360–372.
* Hsu, J. Y., Small, D. S., and Rosenbaum, P. R. (2013). Effect modification and design sensitivity in observational studies. Journal of the American Statistical Association, 108(501):135–148.
* Hsu, J. Y., Zubizarreta, J. R., Small, D. S., and Rosenbaum, P. R. (2015). Strong control of the familywise error rate in observational studies that discover effect modification by exploratory methods. Biometrika, 102(4):767–782.
* Imai, K. (2008). Variance identification and efficiency analysis in randomized experiments under the matched-pair design. Statistics in Medicine, 27(24):4857–4873.
* Imbens, G. W. (2010). Better LATE than nothing: Some comments on Deaton (2009) and Heckman and Urzua (2009). Journal of Economic Literature, 48(2):399–423.
* Imbens, G. W. and Rosenbaum, P. R. (2005). Robust, accurate confidence intervals with a weak instrument: quarter of birth and education. Journal of the Royal Statistical Society: Series A (Statistics in Society), 168(1):109–126.
* Jo, B. and Stuart, E. A. (2009). On the use of propensity scores in principal causal effect estimation. Statistics in Medicine, 28(23):2857–2875.
* Joffe, M. (2011). Principal stratification and attribution prohibition: good ideas taken too far. The International Journal of Biostatistics, 7(1).
* Lee, K., Small, D. S., Hsu, J. Y., Silber, J. H., and Rosenbaum, P. R. (2018). Discovering effect modification in an observational study of surgical mortality at hospitals with superior nursing. Journal of the Royal Statistical Society Series A: Statistics in Society, 181(2):535–546.
* Lei, L. and Ding, P. (2021). Regression adjustment in completely randomized experiments with a diverging number of covariates. Biometrika, 108(4):815–828.
* Li, X. and Ding, P. (2020). Rerandomization and regression adjustment. Journal of the Royal Statistical Society Series B: Statistical Methodology, 82(1):241–268.
* Lin, W. (2013). Agnostic notes on regression adjustments to experimental data: Reexamining Freedman’s critique. Annals of Applied Statistics.
* Lin, W., Halpern, S. D., Prasad Kerlin, M., and Small, D. S. (2017). A “placement of death” approach for studies of treatment effects on ICU length of stay. Statistical Methods in Medical Research, 26(1):292–311.
* Lorch, S. A., Baiocchi, M., Ahlberg, C. E., and Small, D. S. (2012). The differential impact of delivery hospital on the outcomes of premature infants. Pediatrics, 130(2):270–278.
* Lu, B., Greevy, R., Xu, X., and Beck, C. (2011). Optimal nonbipartite matching and its statistical applications. The American Statistician, 65(1):21–30.
* Lu, B., Zanutto, E., Hornik, R., and Rosenbaum, P. R. (2001). Matching with doses in an observational study of a media campaign against drug abuse. Journal of the American Statistical Association, 96(456):1245–1253.
* Mathews, T., MacDorman, M. F., and Thoma, M. E. (2015). Infant mortality statistics from the 2013 period linked birth/infant death data set.
* Newhouse, J. P. and McClellan, M. (1998). Econometrics in outcomes research: the use of instrumental variables. Annual Review of Public Health, 19(1):17–34.
* Pu, H. and Zhang, B. (2021). Estimating optimal treatment rules with an instrumental variable: A partial identification learning approach. Journal of the Royal Statistical Society Series B: Statistical Methodology, 83(2):318–345.
* Rigdon, J., Baiocchi, M., and Basu, S. (2018). Near-far matching in R: the nearfar package. Journal of Statistical Software, 86(CS-5).
* Rosenbaum, P. and Rubin, D. (2023). Propensity scores in the design of observational studies for causal effects. Biometrika, 110(1):1–13.
* Rosenbaum, P. R. (1987). Sensitivity analysis for certain permutation inferences in matched observational studies. Biometrika, 74(1):13–26.
* Rosenbaum, P. R. (1989). Sensitivity analysis for matched observational studies with many ordered treatments. Scandinavian Journal of Statistics, pages 227–236.
* Rosenbaum, P. R. (2002a). Covariance adjustment in randomized experiments and observational studies. Statistical Science, 17(3):286–327.
* Rosenbaum, P. R. (2002b). Observational Studies. Springer.
* Rosenbaum, P. R. (2010). Design of Observational Studies, volume 10. Springer.
* Rosenbaum, P. R. (2020). Modern algorithms for matching in observational studies. Annual Review of Statistics and Its Application, 7:143–176.
* Rosenbaum, P. R. and Rubin, D. B. (1985). Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. The American Statistician, 39(1):33–38.
* Rubin, D. B. (1980). Randomization analysis of experimental data: The Fisher randomization test comment. Journal of the American Statistical Association, 75(371):591–593.
* Rubin, D. B. (1986). Statistics and causal inference: Comment: Which ifs have causal answers. Journal of the American Statistical Association, 81(396):961–962.
* Silber, J. H., Rosenbaum, P. R., Trudeau, M. E., Even-Shoshan, O., Chen, W., Zhang, X., and Mosher, R. E. (2001). Multivariate matching and bias reduction in the surgical outcomes study. Medical Care, 39(10):1048–1064.
* Small, D. S. and Rosenbaum, P. R. (2008). War and wages: the strength of instrumental variables and their sensitivity to unobserved biases. Journal of the American Statistical Association, 103(483):924–933.
* Stuart, E. A., Cole, S. R., Bradshaw, C. P., and Leaf, P. J. (2011). The use of propensity scores to assess the generalizability of results from randomized trials. Journal of the Royal Statistical Society: Series A (Statistics in Society), 174(2):369–386.
* Su, F. and Ding, P. (2021). Model-assisted analyses of cluster-randomized experiments. Journal of the Royal Statistical Society Series B: Statistical Methodology, 83(5):994–1015.
* Swanson, S. A., Hernán, M. A., Miller, M., Robins, J. M., and Richardson, T. S. (2018).
Partial identification of the average treatment effect using instrumental variables: review of methods for binary instruments, treatments, and outcomes. Journal of the American Statistical Association, 113(522):933–947.
* Wang, L. and Tchetgen Tchetgen, E. (2018). Bounded, efficient and multiply robust estimation of average treatment effects using instrumental variables. Journal of the Royal Statistical Society Series B: Statistical Methodology, 80(3):531–550.
* Yang, F., Lorch, S. A., and Small, D. S. (2014). Estimation of causal effects using instrumental variables with nonignorable missing covariates: application to effect of type of delivery NICU on premature infants. Annals of Applied Statistics, 8(1):48–73.
* Yu, R. (2020). Evaluating and improving a matched comparison of antidepressants and bone density. Biometrics.
* Zhang, B. (2023). Efficient algorithms for building representative matched pairs with enhanced generalizability. Biometrics, 79(4):3981–3997.
* Zhang, B., Heng, S., MacKay, E. J., and Ye, T. (2022). Bridging preference-based instrumental variable studies and cluster-randomized encouragement experiments: Study design, noncompliance, and average cluster effect ratio. Biometrics, 78(4):1639–1650.
* Zhang, B., Mackay, E. J., and Baiocchi, M. (2023). Statistical matching and subclassification with a continuous dose: characterization, algorithm, and application to a health outcomes study. The Annals of Applied Statistics, 17(1):454–475.
* Zhao, A. and Ding, P. (2022). Regression-based causal inference with factorial experiments: estimands, model specifications and design-based properties. Biometrika, 109(3):799–815.
# Convex Optimization-Based Structure-Preserving Filter for Multidimensional Finite Element Simulations††thanks: Accepted by editors (JCP) 07/08/2023.

Vidhi Zala, Scientific Computing and Imaging Institute and School of Computing, University of Utah, Salt Lake City, UT 84112<EMAIL_ADDRESS>
Robert M. Kirby, Scientific Computing and Imaging Institute and School of Computing, University of Utah, Salt Lake City, UT 84112<EMAIL_ADDRESS>
Akil Narayan, Scientific Computing and Imaging Institute and Department of Mathematics, University of Utah, Salt Lake City, UT 84112<EMAIL_ADDRESS>

###### Abstract

In simulation sciences, capturing real-world problem features as accurately as possible is desirable. Methods popular for scientific simulations, such as the finite element method (FEM) and the finite volume method (FVM), use piecewise polynomials to approximate various characteristics of a problem, such as the concentration profile and the temperature distribution across the domain. Polynomials are prone to creating artifacts, such as Gibbs oscillations, while capturing a complex profile. An efficient and accurate approach must be applied to deal with such inconsistencies to obtain accurate simulations. These inconsistencies include negative values for chemical concentrations, percentages exceeding 100, and other such problems. We consider these inconsistencies in the context of partial differential equations (PDEs). We propose an innovative filter based on convex optimization to deal with the inconsistencies observed in polynomial-based simulations. In two or three spatial dimensions, additional complexities are involved in solving the problems related to structure preservation. We present the construction and application of a structure-preserving filter with a focus on multidimensional PDEs. Techniques such as barycentric interpolation for polynomial evaluation at arbitrary points in the domain and an optimized root-finder to identify points of interest improve the filter's efficiency, usability, and robustness. Lastly, we present numerical experiments in 2D and 3D using the discontinuous Galerkin formulation and demonstrate the filter's efficacy in preserving the desired structure. As a real-world application, we review the implementation of a mathematical biology model involving platelet aggregation and blood coagulation and resolve the issues surrounding the FEM implementation of the model by applying the proposed structure-preserving filter.

## 1 Introduction

A widely used application of mathematical modeling is creating simulations used to predict and analyze different processes. Simulations obtained from numerical solutions provide physically meaningful results if the values of the simulation variables at each timestep follow the structure of the exact solution. Structure in this context refers to properties such as non-negative values for a chemical concentration, values confined to a prescribed range, e.g., $[0,1]$, and other such aspects of the numerical solution. If the solution fails to comply with these structural restrictions, the resulting solution is not meaningful. Therefore, designing an approach to regulating the solution is crucial for simulations in many applications.
Examples include thermodynamics, numerical weather prediction, biochemical processes such as platelet aggregation and blood coagulation, and fluid flow problems. Computing solutions to mathematical models described in multiple dimensions further adds to the complexity of solving a structure-preservation problem. To implement multidimensional models using numerical methods, we require approximations of the quantities of interest in the domain. Popular choices for numerical methods, such as the finite element method (FEM) and the finite volume method (FVM), employ piecewise polynomial-based approximations. These approximations are prone to artifacts such as the Gibbs phenomenon, leading to a violation of structure. We focus on the FEM implementation for the advection-diffusion-reaction (ADR) problem and design an algorithm that computes a solution at each timestep adhering to a particular structure (e.g., positivity and monotonicity). Nonphysical values in such simulations tend to propagate and often cause a blow-up, resulting in the failure of the entire simulation. For example, when concentration values are expected to be bounded within $[0,1]$ but the numerical solution produces a value greater than one, even a small violation can compound rapidly over subsequent timesteps. The problem of structure preservation has been well studied in different fields for applications to various partial differential equations (PDEs) and using different numerical approaches. The next section provides a brief overview of some existing methods proposed to solve the structure-preservation problem.

### 1.1 Review

Many sophisticated approaches have been considered in the literature to tackle different aspects of the structure-preservation problem. We make a broad classification into ‘intrusive’ and ‘nonintrusive’ classes of solutions. Many popular approaches involve changing either the solution or the PDE and require the imposition of stringent conditions on solver parameters, such as the step size of the numerical method or the domain shape and granularity. Such methods can be classified as intrusive. On the other hand, some methods constrain the solution obtained from the solver with minimal, often superficial, change to the numerical scheme, i.e., in a nonintrusive way. Prominent among the intrusive approaches are the transformation of structure preservation into a PDE-constrained optimization problem, methods that modify the spatial discretization [30], and limiters derived from Karush-Kuhn-Tucker (KKT) optimality conditions [29]. Another example of the intrusive approach, [21], proposes an operator-splitting positivity-preservation method based on the energy dissipation law. The strategies for constraint satisfaction in [3] address a special case of the problem solved by the filter proposed in this manuscript, which falls under the nonintrusive class. In [2, 1] and extensions [3, 23], the authors explore approaches for positivity preservation using basis functions derived from Bernstein polynomials, which are non-negative and possess the partition-of-unity property. Therefore, the interpolated solution respects the original bounds at any point in the domain, nonintrusively. Other nonintrusive methods prescribe solutions such as limiters, truncation, or modification of the domain (e.g., curvilinear mesh adaptation) to adhere to the desired solution structures.
One such method is established in [4], where the authors present a new approach for multi-material arbitrary Lagrangian–Eulerian (ALE) hydrodynamics simulations based on high-order finite elements posed on high-order curvilinear meshes, in which conservative fields are remapped and additional synchronization steps are introduced for structural preservation. Additionally, many nonintrusive methods perform pre- or postprocessing of intermediate solutions, without modifying the underlying problem, to incorporate the constraints or make changes to the domain. Some nonintrusive approaches to structure preservation can be found in [20, 34, 25, 22, 27]. In [33], the authors present a positivity-preserving limiter that maintains cell averages within a certain tolerance. The work in [16] imposes positivity on a discrete grid for kinetic equations via a nonlinear filtering procedure. Satisfying constraints up to a numerical tolerance is sufficient for many applications. However, some applications require the solution to adhere to a strict non-negative structure. For example, in [13], the authors enforce strict, global non-negativity of the diffusion propagator by formulating constraints specific to the propagator model. The algorithm in [28] employs alternating least squares procedures to implement general linear equality and inequality constraints by reducing them to a non-negativity-constrained least squares (NNLS) problem. Non-negativity constraints based on square-root representations have also been of interest in many other works [14, 10]. We propose a nonintrusive method based on the general filtering approach in [31, 32].

Many of the existing techniques prescribe approaches to preserve structure that work only for low-order problems. Additionally, an increase in the dimensionality of the problem further exacerbates the issues of accuracy and convergence. Many approaches struggle to robustly maintain high-order accuracy and convergence while preserving structure without problem-specific modifications or restrictions imposed on a per-case basis. For example, [15, 26] present redistribution-based local and global repair methods that depend on a sweep through cells to locate and rectify the violating structures. The final solution depends on the sweep order, producing different solutions for different sweep orders. In addition, this repair procedure preserves element-averaged bound/positivity constraints at particular finite difference nodes; it does not preserve them pointwise, i.e., at every spatial location. The proposed method preserves the constraints at every spatial location. An ideal approach would achieve a balance between robust structure preservation, maintained accuracy and convergence, and usability through deterministic termination within an acceptable time.

### 1.2 Contribution

We pose the problem of preserving the structure in polynomial-based numerical methods as a convex optimization problem. Our previous works discuss a filtering algorithm to solve this problem and demonstrate the solution for function approximations [31] and PDE solutions in 1D [32]. The robust design of the problem enables the preservation of different properties of the solution simultaneously (e.g., positivity, monotonicity, and boundedness).
It transforms the problem into a composite constraint satisfaction problem that can be solved by convex minimization. To address this problem, we propose a filter that can be applied as a postprocessing step to deal with structural inconsistencies. The novelty in the design of the filter lies in its efficiency, achieved by applying the filter only to the violating subdomains, and in its preservation of structure continuously throughout the domain, not just at a subset of points within the domain. This guarantee of structure preservation depends on the granularity of the global minimization procedure in 2D and 3D, which is expensive and less reliable compared to its 1D counterpart. In 1D, the minimum can be found accurately using the confederate matrix approach to find zeros of the derivative of a polynomial. For finding the critical points in higher dimensions, an efficient gradient-based method is required to solve the nonconvex problem of finding the global minimum. In the case of the proposed filter, the global minimum represents a structure violation. Gradient-based methods are prone to getting stuck at a local minimum, which can cause them to miss the point of larger structural inconsistency and thus fail to provide strong guarantees of preservation. Another important aspect that affects the outcome of the gradient-based method is the choice of a starting point. To address this aspect, we propose an investigation of structural inconsistencies on a lattice of points as the initial step in the gradient-based method. The point with the largest inconsistency is chosen as the starting point. The accuracy of the minimum found depends on the pattern and granularity of the initial lattice used, which is detailed in Section 2.1. To streamline the minima-finding stage and reduce the time and space complexity, we adopt the barycentric interpolation approach from [6], expanded upon by [17]. Using these techniques, a considerable speed-up in the time taken by minimization and by the overall filter computations is achieved.

This paper focuses on designing a structure-preserving filter using a convex optimization approach for 2D and 3D problems on different element types. We present the formulation derived from [31] in Section 2 and extend the concepts to the more complex problems tackled in this paper. The remainder of the paper is organized as follows: Section 2 discusses the setup for applications of the filter to time-dependent PDEs in 2D and 3D. Section 3 introduces the notation, details the design of the different building blocks of the filter, and summarizes the 1D problem formulation from [31, 32]. A procedure for the filtering process developed for multidimensional applications is presented in Section 4. Section 5 describes the numerical results that demonstrate the filter’s efficacy in preserving the desired structures in different application scenarios. This section is divided into subsections describing the process of choosing the parameters to run the experiments, the 2D and 3D canonical experiments, and an advection problem on different homogeneous meshes using the discontinuous Galerkin (dG) formulation. We conclude with a demonstration of the use of the proposed filter on a real-world application: a model of the platelet aggregation and blood coagulation problem.

## 2 Setup

In this section, we discuss the setup for filter applications to function approximations and time-dependent PDEs, using an advection problem solved with the FEM and method-of-lines discretizations, with a focus on the dG formulation.
The choice of the advection problem is for illustration purposes only. The setup remains the same for any time-dependent PDE solved using a polynomial-based method. The filter behaves as a postprocessing step that preserves the desired structure at each timestep within a defined tolerance and is agnostic to the numerical method used to obtain the solutions. In all our experiments, we consider the numerical zero to be $10^{-7}$. The rationale behind this choice is that, in all the positivity-preserving experiments, the gradient-based method stagnates when the difference between the minima found in two consecutive iterations dips below $-10^{-7}$. In such a situation, there are no significant changes to the interpolation coefficients between iterations of the filter, and therefore, convergence is considerably slow. Certain variations of the filter investigated in [31, Section 5.1.2] can improve the rate of convergence below a particular tolerance. Section 5 shows numerical examples in higher dimensions using the same setup on element types such as triangles, quadrilaterals, hexahedra, tetrahedra, and prisms, and on meshes composed of these element types.

### 2.1 Detecting and resolving structural inconsistency in 2D and 3D

If the polynomial projections lose structural conformity, they are likely to do so at or between the quadrature points. We hope to capture the structural inconsistencies by checking the values on a lattice formed by the quadrature points and the centroids of adjacent quadrature points. Here, the choice of the centroids is one of many possible ways to choose the lattice. A quadrilateral is used as a sample element to describe the setup; however, the same applies to any canonical element type in 2D and 3D. Let us call such a lattice of points $X$. For a quadrilateral, let $Q$ represent the number of quadrature points in one dimension. $X$ is defined as the combination of the quadrature grid on the quadrilateral ($Q^{2}$ points) and the $(Q-1)^{2}$ centroids formed by the midpoints of the quadrature grid, as defined in Equation 2.1.1:

(2.1.1) $X=\\{Q^{2}\textrm{ points in the quadrature grid }\\}\cup\\{(Q-1)^{2}\textrm{ points in the staggered quadrature grid}\\}.$

For a triangle, we can follow the same idea of combining the quadrature points and the centroids to construct the lattice. An example lattice on a mesh with quadrilaterals and triangles is shown in Figure 2.1.1.

Figure 2.1.1: Example lattice used for detecting structural nonconformity in a mesh.
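A minimal sketch of this lattice construction and the violation check on the reference square $[-1,1]^{2}$ follows; the evaluation routine `u_eval` for the projected polynomial is an assumed helper.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def violation_lattice(Q):
    """Lattice X of Eq. (2.1.1): the Q x Q Gauss quadrature grid on [-1, 1]^2
    plus the (Q-1) x (Q-1) staggered grid of centroids of adjacent nodes."""
    q, _ = leggauss(Q)              # 1D Gauss-Legendre nodes
    c = 0.5 * (q[:-1] + q[1:])      # midpoints of adjacent nodes
    grid = np.array([[x, y] for x in q for y in q])
    stag = np.array([[x, y] for x in c for y in c])
    return np.vstack([grid, stag])

def worst_violation(u_eval, X, tol=1e-7):
    """Return (point, value) of the largest positivity violation, or None."""
    vals = u_eval(X)                # polynomial values at the lattice points
    k = int(np.argmin(vals))
    return (X[k], vals[k]) if vals[k] < -tol else None
```

The worst lattice point found this way seeds the gradient-based refinement discussed in Section 1.2.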
Consider a function in 2D defined by Equation 2.1.2:

(2.1.2) $f(x,y)=\nu(x,y)\sin(2\pi x)\sin(2\pi y-0.85\pi),$

where

$\nu(x,y)=\begin{cases}\text{1,}&\quad\text{if }x\geq 0\text{ and }{x\leq 0.5}\text{ and }{y\geq 0.4}\text{ and }{y\leq 0.85}\\\ \text{0,}&\quad\text{ otherwise.}\\\ \end{cases}$

Projecting Equation 2.1.2 using polynomial order $N=5$ on a quadrilateral in the domain $\Omega=[0,1]\times[0,1]$, we get the coefficients $\boldsymbol{\tilde{v}}=\\{\tilde{v}_{j}\\}_{j=0}^{P-1}$. The projection is shown in the left subfigure of Figure 2.1.2. In this case, $P=(N+1)^{d}$, where $d$ is the dimension. Since $d=2$, $P=36$. The original function is non-negative, so the projection should preserve positivity. To this end, we employ the optimization outlined in Section 4 as a postprocessing filter. The process behaves as a spectral filter based on the empirical evidence presented in [31, Section 5.2] and the theoretical explanation provided in [31, Proof 5.1]. We will hereafter refer to this optimization as a nonlinear filter ($\mathcal{F}$). Let $\boldsymbol{{v}}$ be the filtered version of $\boldsymbol{\tilde{v}}$. The filtered projection is shown in the right subfigure of Figure 2.1.2.

Figure 2.1.2: Left: Projection of Equation 2.1.2 using polynomial order $=5$. Right: Filtered version after application of the structure-preserving filter. The filter converges to a solution that preserves positivity within a tolerance (set to $10^{-6}$ here).

### 2.2 Solution to the advection problem in dG

The advection equation for a quantity described by a scalar field $u$ is expressed mathematically by the continuity equation, Equation 2.2.1, in the domain $\Omega\subset{\mathbbm{R}}^{d}$:

(2.2.1) ${u}_{t}+\boldsymbol{a}\cdot\nabla{u}=0,$

where $\boldsymbol{a}$ represents the advection velocity. Assume proper initial and boundary conditions are specified. Galerkin-type methods assume an ansatz for $u$ as a time-varying element of a fixed $P$-dimensional linear subspace $V$ of polynomial functions up to a fixed degree, where frequently $V\subset L^{2}(\Omega)$:

(2.2.2) $\displaystyle u(x,t)\approx u_{P}(x,t)$ $\displaystyle\coloneqq\sum_{i=0}^{P-1}\widehat{v}_{i}(t)\phi_{i}(x),$ $\displaystyle V$ $\displaystyle=\mathrm{span}\\{\phi_{0}\ldots,\phi_{P-1}\\},$

where $x\in\Omega$, and $\\{\phi_{j}\\}_{j=0}^{P-1}$ represents the traditional FEM basis, which is not necessarily orthogonal. Let $\\{\psi_{j}\\}_{j=0}^{P-1}$ represent a collection of orthonormal basis functions. We can transform the vector of coefficients $\boldsymbol{\widehat{v}}$ into its orthonormal form $\boldsymbol{\tilde{v}}$. Consider a partition of $\Omega$ consisting of $E$ subdomains defined by $\tau(\Omega)=\\{e_{0},e_{1},\cdots,e_{E-1}\\}$. The discontinuous Galerkin formulation assumes that $V$ comprises functions that are polynomials of a fixed degree on each element of $\Omega$, where discontinuities are allowed only at partition boundaries. The semidiscrete form for Equation 2.2.1 is derived in the standard Galerkin way by using the ansatz Equation 2.2.2 and forcing the residual to be $L^{2}$-orthogonal to $V$. Usually, integration by parts is performed in the residual orthogonalization step, and, depending on the equation and spatial discretization, numerical flux and/or stabilization terms are often included in the weak formulation. The result is a system of ordinary differential equations prescribing the time-evolution of the discrete degrees of freedom represented by the vector $\boldsymbol{\tilde{v}}=\\{\tilde{v}_{0},\ldots,\tilde{v}_{(P-1)}\\}$:

(2.2.3) $\displaystyle\boldsymbol{M}\frac{\partial}{\partial t}{\boldsymbol{\tilde{v}}}+\boldsymbol{A}{\boldsymbol{\tilde{v}}}=\boldsymbol{F}(\boldsymbol{\tilde{v}}),$

where $\boldsymbol{M}$ and $\boldsymbol{A}$ are the $P\times P$ mass and advection matrices, respectively, defined as

$\displaystyle(M_{i,j})$ $\displaystyle=\left\langle\phi_{i},\phi_{j}\right\rangle,$ $\displaystyle(A_{i,j})$ $\displaystyle=\left\langle\phi_{i},a\frac{\partial}{\partial x}\phi_{j}(x)\right\rangle,$

and $\boldsymbol{F}$ is a general term accounting for any numerical fluxes or stabilization terms. Finally, Equation 2.2.3 is transformed into a fully discrete system that can be solved using an appropriate numerical integration scheme. Consider the case when the solution defined by $\widehat{v}^{n}$ on a particular partition $e$ at a timestep $n$ does not satisfy the desired structural properties.
To preserve the structure, we employ the optimization defined in Equation 3.1.2 as a postprocessing filter. Let $\boldsymbol{v}$ represent the filtered solution that satisfies the structure. The process is illustrated in Equation 2.2.4. The proposed filter works by simply augmenting this scheme nonintrusively.

(2.2.4) $\boldsymbol{{v}}^{n}\xrightarrow{\textrm{Timestepper for Equation 2.2.1}}\boldsymbol{\widehat{v}}^{n+1}\xrightarrow{\mathcal{F}}\boldsymbol{{v}}^{n+1}.$

Since dG discretizations have degrees of freedom that are decoupled across the subdomains, the optimization step involves only the degrees of freedom of a single subdomain. In addition, the optimization needs to be applied only in the subdomains that have structural violations discovered at the lattice points, which leads to a parallelizable set of independent optimizations, each in $P$ dimensions. We can filter the selected subdomains simultaneously, which further adds to the procedure’s efficiency.

For a mesh with the number of elements $E>1$, the process of enforcing positivity per element leads to changes in solution properties; in particular, elementwise boundary values and the mean of the discrete solution are not preserved. Assume that numerical fluxes are computed as explicit functions of element boundary values. A shift in element boundary values by the filter causes changes in the corresponding fluxes, thus adding errors to the simulation. We can resolve this issue by imposing additional equality constraints (i.e., function values at element boundaries) in the filter. However, for flux conservation, we cannot assert whether the resulting filtered solutions should satisfy the original flux/jump conditions or some updated conditions based on the new filtered basis coefficients. In the case of mass (integral) conservation, additional equality constraints can be imposed. The values of fluxes are often used to ensure the properties of numerical schemes; therefore, by preserving fluxes, one can attest that the properties of the original scheme have been retained. Preserving fluxes in this context means that corrections in one element need not have an impact on neighboring elements, and one way to promote isolation of the optimization effect to the element under consideration is to ensure that interelement communication through fluxes is undisturbed. The formulation for flux conservation in [32, Sections 3.2 and 3.3] is specific to one-dimensional scenarios. Generalization to higher dimensions is not immediate; additional analysis is needed to incorporate different elements and is therefore beyond the scope of this paper.

The robust design of the filter allows for the incorporation of an arbitrary number of structural constraints into one optimization problem. However, increasing the number of constraints reduces the degrees of freedom for the filter accordingly. In particular, if the filter has $P$ degrees of freedom and we have $q$ linear equality constraints, then the dimension of the optimization problem can be reduced to $(P-q)$ using the procedure described in [32, Section 3.2].

## 3 Formulation of structure-preservation problem and solution design

This section establishes the notations and summarizes the formulation, filter design, and implementation ideas from [31, 32].

### 3.1 The problem

Let $u(x)\in V$ be a function, where $V$ is a finite-dimensional subspace of a Hilbert space of real-valued functions on $\Omega\subset\mathbbm{R}^{d}$.
Let $V$ contain a collection of orthonormal basis functions $\{\psi_{n}\}_{n=1}^{N}$ for some $N\in\mathbbm{N}$. Therefore, we have

$\displaystyle V=\mathrm{span}\left\{\psi_{1},\ldots,\psi_{N}\right\},\qquad\left\langle\psi_{i},\psi_{j}\right\rangle=\delta_{ij},\qquad i,j=1,\ldots,N,$

where $\left\langle\cdot,\cdot\right\rangle$ is the inner product on $V$, and $\delta_{ij}$ is the Kronecker delta function. The value of $d$ is set to 1 for simplicity. The formulation follows the same steps for higher dimensions. A comprehensive constraint-satisfaction problem can be constructed as detailed in this section. Let $K$ be the collection of $u$ that satisfies the constraints. Therefore, $K$ is the feasible subset of $V$. An example of such a family of linear constraints is {positivity, boundedness}. Consider a family of linear constraints such as the one in Equation 3.1.1, where ${L}$ is a linear operator bounded on $V$, and $\ell$ is a function on the domain $\Omega$.

(3.1.1) $\displaystyle{L}(u)\leq\ell(x),\qquad x\in\Omega.$

Note that the framework in [31] allows a finite number of such constraints to be considered simultaneously. According to the Riesz representation theorem, if $V^{*}$ is the dual of $V$, then a functional $L\in V^{*}$ can be associated with a unique $V$-representor $\ell\in V$ satisfying

$\displaystyle L(u)=\left\langle u,\ell\right\rangle,\qquad\forall u\in V.$

Furthermore, this $L\leftrightarrow\ell$ identification is an isometry. We will use these facts in what follows. Given $L$ that identifies $\ell$, we consider the coordinates $\widehat{\ell}_{j}$ of $\ell$ in a $V$-orthonormal basis,

$\displaystyle\ell(x)=\sum_{j=1}^{N}\widehat{\ell}_{j}\psi_{j}(x),\qquad\widehat{\ell}_{j}=\left\langle\ell,\psi_{j}\right\rangle=L(\psi_{j}).$

Then we have the following relations:

$\displaystyle\left\|L\right\|_{V^{\ast}}=\left\|\ell\right\|_{V}=\|\boldsymbol{\widehat{\ell}}\|_{2},\qquad\boldsymbol{\widehat{\ell}}=\left(\widehat{\ell}_{1},\;\widehat{\ell}_{2},\;\ldots,\;\widehat{\ell}_{N}\right)^{T},$

where $\|\cdot\|_{2}$ is the Euclidean norm on vectors in $\mathbbm{R}^{N}$. The problem is essentially that of developing a map $\mathcal{F}$ from a function $u\in V$ to a unique function $\mathcal{F}(u)\in K$. Initially, $u$ may not satisfy the structural constraints, but its mapped version $\mathcal{F}(u)$ does. In [31] we introduce a strategy to solve the following well-posed convex feasibility problem.

(3.1.2) $\displaystyle\mathcal{F}(u)\coloneqq\operatorname*{argmin}_{f\in K}\|u-f\|_{V}.$

Under certain conditions, such as if $0\in K$, $\mathcal{F}$ is norm-contractive; therefore, the operation can be called a filter. For a brief discussion of the norm-contractive nature of Equation 3.1.2, see [31, Proposition 5.1].

### 3.2 Toward construction of a convex optimization-based solution to Equation 3.1.2

We first transform the continuous problem Equation 3.1.2 into a discrete version to construct a feasible solution. Let $\{\psi_{j}\}_{j=0}^{P-1}$ be a collection of orthonormal basis functions in $V$, different from the standard FEM basis ($\phi$), which is not orthonormal. Consider $C$ as the affine conic region in $\mathbbm{R}^{P}$ corresponding to $K\subset V$.
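To make the representor concrete for the positivity family used throughout this paper, take the pointwise functional $L_{x}(u)=u(x)$ with $\ell\equiv 0$; then $\widehat{\ell}_{j}=L_{x}(\psi_{j})=\psi_{j}(x)$, so the Riesz coordinates are simply basis evaluations. The following minimal sketch uses an orthonormal Legendre basis on $[-1,1]$, which is our illustrative choice; any $V$-orthonormal basis works the same way.

```python
import numpy as np
from numpy.polynomial import legendre

def psi(j, x):
    """Orthonormal Legendre polynomial on [-1, 1]: sqrt((2j+1)/2) * P_j(x)."""
    c = np.zeros(j + 1)
    c[j] = 1.0
    return np.sqrt((2 * j + 1) / 2.0) * legendre.legval(x, c)

P, x0 = 6, 0.3                                   # basis size and constraint point
lhat = np.array([psi(j, x0) for j in range(P)])  # lhat_j = L_x(psi_j) = psi_j(x0)
print(np.linalg.norm(lhat))                      # ||L||_{V*} = ||lhat||_2
```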
From previously established notations, $\boldsymbol{\tilde{v}}$ is a vector of expansion coefficients of $u$ that does not satisfy the desired constraints, and $\boldsymbol{v}$ is a vector of filtered coefficients of $u$ that satisfies the desired constraints. Representing the Euclidean 2-norm on vectors as $\|\cdot\|_{2}$, and using the correspondence between $C$ and $K$, we get the equivalent of the optimization problem Equation 3.1.2 as follows:

(3.2.1) $\displaystyle\operatorname*{argmin}_{\boldsymbol{{v}}\in C}\|\tilde{\boldsymbol{v}}-\boldsymbol{{v}}\|_{2}.$

For a fixed $x\in\Omega$, let $H_{x}$ be a $(P-1)$-dimensional planar surface defined by the equality constraint corresponding to Equation 3.1.1 for that fixed $x$, i.e., $L(u)=\ell(x)$. Writing $u$ as an expansion in the degrees of freedom $\tilde{v}$, this corresponds to a single linear equality constraint on $\tilde{v}$, i.e., a $(P-1)$-dimensional plane. In a geometric sense, $H_{x}$ is a surface representing the constraint boundary in the space $\mathbbm{R}^{P}$, dividing it into two halfspaces as shown in Figure 3.2.1. One halfspace represents the region that satisfies the linear inequality constraint and the other the region that does not. For a single linear constraint family, $C$ can be defined in terms of halfspaces as

$\displaystyle C=\bigcap_{x\in\Omega}\{\textrm{feasible halfspace defined by plane }H_{x}\}.$

To obtain the full feasible set $C$, the filter iteratively projects $\tilde{\boldsymbol{v}}$ onto the intersection of feasible halfspaces defined by the individual hyperplanes $\{H_{x}\}$. To describe the feasible regions further, let us consider a geometric interpretation. Let the current state vector of the projection defined by $\tilde{\boldsymbol{v}}$ be a point shown by the blue dot in Figure 3.2.1. The filter works by applying the correction Equation 3.2.7 to $\tilde{\boldsymbol{v}}$ by computationally inspecting the signed distance function Equation 3.2.4.

(3.2.4) $\displaystyle s(x)\coloneqq\mathrm{sdist}(\boldsymbol{\tilde{v}},C_{x})=\begin{cases}-\mathrm{dist}(\boldsymbol{\tilde{v}},H_{x}),&\boldsymbol{\tilde{v}}\not\in C_{x},\\ +\mathrm{dist}(\boldsymbol{\tilde{v}},H_{x}),&\boldsymbol{\tilde{v}}\in C_{x}.\end{cases}$

For the positivity-preservation example, the process of “inspection” means the ability to compute the global minimum of $s(x)$ to determine a region where $s$ is negative. Based on this inspection, the algorithm projects the state vector of the current iterate $\boldsymbol{\tilde{v}}$ onto $H_{y}$ for some $y\in\Omega$. This projection can cause a particular $H_{y_{1}}$ with a positive $s(y_{1})$ to go negative. Therefore, we need to repeat the projection step until the solution $\boldsymbol{\tilde{v}}$ lies completely in (or on the boundary of) $C$. [31, Algorithm 1] summarizes all the steps of the filter.

Figure 3.2.1: Division of the space into halfspaces by the hyperplanes representing constraint boundaries defined by the set of points (say $y_{1},y_{2},\cdots,y_{n}$) for the initial projection with coefficients $\boldsymbol{\tilde{v}}$. The algorithm greedily calculates the correction $s(x)$. Left: A geometrical visual of the distance calculation from $\boldsymbol{\tilde{v}}$ to the hyperplanes defining the boundaries of the constraints. The hatched area inside the cone represents the feasible region. Right: Projection of $\boldsymbol{\tilde{v}}$ onto $H_{y_{4}}$. $H_{y_{4}}$ is selected over $H_{y_{5}}$ since it defines the violating hyperplane that is farthest away.
At each iteration, the spatial point $x$ that minimizes the signed distance function is found, as given by Equation 3.2.5. Note that for more than one constraint we need to keep track of both $x$ and the constraint specific to the minimum signed distance function.

(3.2.5) $\displaystyle x^{\ast}\coloneqq\operatorname*{argmin}_{x\in\Omega}s(x).$

This greedy procedure is an infinite-constraint generalization of Motzkin’s relaxation method [24]. The major computational work in the filtering procedure is to minimize the objective defined by Equation 3.2.6 at each iteration.

(3.2.6) $\displaystyle s(x)=\lambda(x)\left({L}_{x}(u)-\ell(x)\right),\qquad\lambda^{2}(x)\coloneqq\frac{1}{\sum_{j=0}^{P-1}\left({L}_{x}(\psi_{j})\right)^{2}}.$

Next, we update $\boldsymbol{\tilde{v}}$ by projecting it onto the hyperplane corresponding to $x^{\ast}$:

(3.2.7) $\displaystyle\boldsymbol{\tilde{v}}\leftarrow\boldsymbol{\tilde{v}}+\boldsymbol{h}(x^{\ast})\min\left\{0,s(x^{\ast})\right\},$

where $\boldsymbol{h}(x)$ is the normal vector corresponding to the hyperplane $H_{x}$ of a single constraint family that points toward $C$. This vector is readily computable from the orthonormal basis; see [31] for details. This procedure is repeated until $s(x^{\ast})$ vanishes to within a numerical tolerance.

Finding the violating hyperplane that is farthest away corresponds to finding the global minimum of $s(x)$ on $\Omega$. Unlike the 1D procedure described in [31], multidimensional global minimization requires a more expensive approach such as gradient descent (GD). In this paper, we implement 2D and 3D global minimization using line-search-based GD with backtracking. We further provide a strategy to investigate the convergence times and the accuracy of this GD approach by varying the parameter values, and we then choose the parameter values that are best for the accuracy of the method. Section 4.1 provides details of the GD variant chosen for the filter implementation and the corresponding parameter selection to efficiently solve the minimization problem.

## 4 Algorithm

The structure of the solution is preserved by applying the filter as a postprocessing step that corrects $\boldsymbol{\widehat{v}}$ obtained from the timestepping algorithm using iterative optimization. Each iteration of the filter involves the following crucial computational processes:

1. A GD-based minimization to solve Equation 3.2.5, which is a part of the global search for structural violations.
2. A calculation of the correction to $\boldsymbol{\widehat{v}}$ using Equation 3.2.7.

We briefly discuss the algorithmic details and the choices made to streamline these processes and their application to PDE timestepping.

### 4.1 Finding the global minimum

An important and the most expensive part of the optimization process is the global minimization Equation 3.2.5. For efficiency and usability of the filter, we need to fine-tune the most expensive step, i.e., minimizing the scaled distance function Equation 3.2.6, which quantifies the difference between the current state of the coefficients and the coefficients that satisfy the constraints at a critical point of the current iteration. The success of the proposed filter depends on accurately finding the points in the domain that violate the structure. Since the problem formulation uses dG, it can be solved element by element on a domain with $E$ elements. Applying the filter thus amounts to solving $E$ optimization problems at every timestep.
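Each such problem has the shape of the greedy projection loop of Equations 3.2.5-3.2.7. In the minimal sketch below, the global search of Equation 3.2.5 is replaced, for brevity, by evaluation on a fixed set of candidate points (in practice the GD line search described next performs this search), and the iteration cap is a safeguard of ours rather than part of the formulation.

```python
import numpy as np

def filter_element(v, psi_eval, points, tol=1e-7, max_iter=1000):
    """Greedy hyperplane projection enforcing u(x) >= 0 on one element.
    v        : orthonormal-basis coefficients (length P)
    psi_eval : callable x -> (psi_0(x), ..., psi_{P-1}(x))
    points   : candidate points standing in for the global search"""
    v = np.array(v, dtype=float)
    B = np.array([psi_eval(x) for x in points])  # basis values, |points| x P
    lam = 1.0 / np.linalg.norm(B, axis=1)        # scaling of Eq. 3.2.6
    for _ in range(max_iter):
        s = lam * (B @ v)                        # s(x) = lambda(x) u(x) here
        k = np.argmin(s)                         # Eq. 3.2.5 on the candidates
        if s[k] >= -tol:                         # feasible within tolerance
            break
        v -= (lam[k] * B[k]) * s[k]              # Eq. 3.2.7: project onto H_{x*}
    return v
```

After the update, the solution value at the selected point is exactly zero, and the loop continues until no candidate point violates the constraint beyond the tolerance.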
Each optimization problem is over a single element. We can reduce the number of optimization problems by testing the solution for structural inconsistencies at the lattice of points defined by Equation 2.1.1. For example, in the case of positivity preservation, the filter is applied to an element only if the minimum value on that element is nonpositive. In the 1D case, the filter finds the minimum value by evaluating the solution at the critical points on the element. If the subspace $V$ restricted to an element is a space of polynomials, then the critical points of Equation 3.2.6 are defined by the roots of the derivative of Equation 3.2.6. For this root-finding problem, we employ the confederate matrix-generation approach.

Relative to 1D, minimization in higher dimensions is more complex and expensive. Many variants of gradient descent-based (GD-based) methods have been proposed to perform this operation efficiently and accurately. These methods operate only up to a certain tolerance and, like all GD-based methods, have the potential for stagnation. Since there is no multidimensional version of the confederate linearization approach that exactly solves the first-order optimality conditions, and the problem of finding the critical points on the multidimensional domain is nonconvex, we cannot ensure that all such points will be flagged for correction by any particular minimization algorithm. From our analysis, a suitable GD approach for this job is one that uses an adaptive step size and has the ability to retrace its steps. For this reason, we choose the method of steepest descent with a backtracking line search strategy. The backtracking line search starts with a relatively large step size and repeatedly shrinks it until the Armijo–Goldstein condition Equation 4.1.1 is fulfilled. This widely used algorithm makes an informed choice about step size and direction at each iteration to efficiently arrive at the optimum value. It avoids stagnation by tracing back its steps if the difference between descent values falls below a fixed tolerance. An essential part of this algorithm is choosing the parameters, denoted by $c\in(0,1)$ and $\gamma\in(0,1)$, that determine the step size and backtracking performed by the algorithm. For an objective function $f$ with a starting position $x$, given parameters $c$ and $\gamma$, the Armijo–Goldstein condition is defined as Equation 4.1.1.

(4.1.1) ${\displaystyle f({x}+\gamma_{j}{p})\leq f({x})+\gamma_{j}\,c\,\nabla f(x)^{T}p}\qquad\forall\textrm{ }j\textrm{ until convergence,}$

where $\nabla f(x)^{T}p$ is the slope in the direction of descent $p$ and $\gamma_{j}$ is calculated as

$\gamma_{j}=\begin{cases}\gamma,&\quad\text{if }j=0,\\ c\,\gamma_{j-1},&\quad\text{otherwise}.\end{cases}$

$c$ and $\gamma$ play a vital role in testing the condition [5], which determines whether a step-wise movement from a current position to a modified position achieves an adequately corresponding decrease in the objective function. Although many recommendations exist [5, 11, 12, 7, 8] for an optimal way to choose $c$ and $\gamma$, these parameters must be tuned for maximum efficiency.

### 4.2 Applying the structure-preserving filter to $\boldsymbol{\tilde{v}}$ obtained from the timestepper

From the discussion in Section 2, the optimization that enforces a constrained solution is applied as a postprocessing step inside the timestepper, independent of the choice of the timestepping routine.
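The steepest-descent search of Section 4.1, which performs this minimization inside each filter iteration, can be sketched as follows. This is a minimal version: the stagnation safeguard (retracing steps) mentioned above and any multistart needed to make the search global rather than local are omitted, and the gradient is passed in explicitly as an assumption of this sketch.

```python
import numpy as np

def gd_backtracking(f, grad, x0, c=0.7, gamma=0.7, tol=1e-7, max_iter=500):
    """Steepest descent with an Armijo-Goldstein backtracking line search
    (Equation 4.1.1); c and gamma are the parameters tuned in Section 5.1."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:              # first-order stationarity
            break
        p = -g                                   # steepest-descent direction
        step = gamma                             # gamma_0 = gamma
        # shrink until f(x + step*p) <= f(x) + step*c*grad(x)^T p holds
        while f(x + step * p) > f(x) + step * c * (g @ p):
            step *= c                            # gamma_j = c * gamma_{j-1}
        x = x + step * p
    return x

# e.g., minimizing a convex bowl from a corner of the reference element:
# gd_backtracking(lambda z: ((z - 0.2) ** 2).sum(),
#                 lambda z: 2 * (z - 0.2), np.array([-1.0, -1.0]))
```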
As predicted by the ‘curse of dimensionality’, structure preservation and optimization for higher dimensional problems are more complex than their 1D counterparts. Since the central idea of the filter is agnostic to the dimensions of the problem, Algorithm 1 remains the same as described in [31]. Similarly, the applications to PDE solutions follow the procedures established in [32] for 1D problems.

Algorithm 1 Constrained PDE timestepping
1: Input: Terminal time $T$, timestep size $\Delta t$, PDE solver spatial basis ${\phi}$
2: Define nsteps $=\frac{T}{\Delta t}$
3: for $i=0,\cdots,$ nsteps do
4: Solve the PDE and obtain the orthogonalized coefficients $\boldsymbol{\tilde{v}_{e}}^{i}\in\mathbbm{R}^{P}$ for all elements $e$ in $\Omega$
5: for each element $e\in\Omega$ do
6: while True do
7: if $\exists(x\in\Omega)$ such that $s(x)<0$ then
8: Compute $x^{\ast}$ as defined in Equation 3.2.5 via the GD line search from Section 4.1
9: else
10: break
11: end if
12: Update $\boldsymbol{\tilde{v}_{e}}^{i+1}$ via Equation 3.2.7
13: end while
14: Append to global $\boldsymbol{{v}}^{i+1}=[\boldsymbol{{v}}^{i+1},\boldsymbol{\tilde{v}_{e}}^{i+1}]$
15: end for
16: end for

Conversion from an input coefficient vector $\widehat{\boldsymbol{v}}$ to the corresponding coefficients $\tilde{\boldsymbol{v}}$ in an orthonormal basis is the first stage of the filtering procedure. Algorithm 1 presents a summary of the steps taken by the PDE solver for the filter application. A significant chunk of work in the iterative correction stage is done by evaluating the solution at the lattice points using basis interpolation, an expensive operation that involves processing and storage of large interpolation matrices. To make the processing computationally efficient, we use the barycentric interpolation method from [6], expanded upon by [17]. This method reduces the number of calculations and the storage required to evaluate the basis functions at arbitrary points in the domain, thereby reducing the cost of the iterative correction stage. Although the implementation details of the barycentric interpolation method are beyond the scope of this paper, for the minimization part of the numerical experiments in Section 5 we observe a significant speed-up using the barycentric approach.

## 5 Main results

This section presents four-part numerical results of the filter’s application to different types of problems in 2D and 3D. Since applying multiple constraints together can be reduced to one optimization problem on a smaller solution space, the numerical results presented here focus on positivity preservation. For 1D examples that preserve multiple structural constraints simultaneously, refer to the results presented in [32]. Section 5.1 details the process of choosing the parameter values for the GD line search algorithm described in Section 4.1. Section 5.2 shows the results of applying the filter to single-element function projections in multiple dimensions. Sections 5.3 through 5.5 present the results and analysis of the application of the filter to dG formulations of advection problems defined on composite and homogeneous domains. Finally, in Section 5.6 we present the results of the FEM solution to the mathematical-biology problem of platelet aggregation and blood coagulation (PAC) with and without the application of the filter. The PAC model is posed as an advection-diffusion-reaction (ADR) system of PDEs.
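Before turning to the individual studies, we note the form of the barycentric evaluation [6] used in the correction stage above. The sketch below gives the 1D version; the tensor-product extension over spectral/hp elements [17] and the caching of weights are omitted, and the function names are ours.

```python
import numpy as np

def bary_weights(nodes):
    """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k) [6]."""
    n = len(nodes)
    w = np.ones(n)
    for j in range(n):
        for k in range(n):
            if k != j:
                w[j] /= nodes[j] - nodes[k]
    return w

def bary_eval(nodes, w, fvals, x):
    """Second barycentric formula: interpolant of (nodes, fvals) at x."""
    d = x - nodes
    hit = np.isclose(d, 0.0)
    if hit.any():                      # x coincides with a node
        return fvals[np.argmax(hit)]
    t = w / d
    return (t @ fvals) / t.sum()
```

Once the weights are computed for a nodal set, each evaluation at an arbitrary lattice point costs only $O(P)$ operations, which is the source of the speed-up observed in the experiments.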
### 5.1 Parameter selection for backtracking line search used in GD-based minimization

To fix the values of $c$ and $\gamma$ in Equation 4.1.1 for the numerical results in Section 5, we first choose a set of functions. The choice of the functions depends on factors such as the constraint type and the domain geometry. For example, for an application where positivity is the primary structural concern, a list of functions with values close to zero is a good choice. For applications involving discontinuities in the function or the domain, discontinuous functions would be a good choice. The feasible space for $c$ and $\gamma$ is $(0,1)$. A non-comprehensive sampling on $(0,1)$ is performed for both parameters, and the GD is called with the parameters set to permutations of the sampled values. GD_linesearch denotes the call to the GD with line search. As its output, the GD_linesearch routine provides the number of iterations (denoted by $niter$) taken by the algorithm. The error (denoted by $err$) refers to the difference between the minimum value found and the known minimum value. As a large number of GD iterations per filter iteration makes the filtering process more expensive, choosing the parameter values for which GD converges in the fewest iterations is desirable. Another important quantity to consider here is the error: it is desirable to have GD return a minimum value as close to the known (or golden) minimum value as possible. With these quantities ($niter$ and $err$) as the metrics, we prescribe a procedure to select the ideal values of $c$ and $\gamma$ in Algorithm A.1 (Appendix A). The selection criteria narrow the samples to a range of $c$ and $\gamma$ with the minimal number of iterations and then pick from that subset the ones with the smallest error. In the case of a tie, all the $c$ and $\gamma$ values are included in the return variables. The procedure can be summarized as taking a list of functions that fairly represent the complexity of the space and returning two ranges of ideal values, one for each of the parameters $c$ and $\gamma$. Since the quadrilateral and hexahedron represent a richer space than their other canonical 2D and 3D counterparts, Algorithm A.1 (Appendix A) is applied and analyzed only on a quadrilateral for 2D and a hexahedron for 3D. Future developments in the approaches for finding optima in multiple dimensions may improve the efficacy and efficiency of the filter. To this end, and to provide robustness, we modularize the minimization part of the filter following object-oriented principles.

We consider the example functions Equation 5.1.1 on the 2D domain $[-1,1]\times[-1,1]$ and Equation 5.1.2 on the 3D domain $[-1,1]\times[-1,1]\times[-1,1]$, which have varying complexities and projection orders, to set the GD parameters comprehensively and fairly.

$\displaystyle f_{0}(x,y)=(x+0.6)^{2}+(y-0.2)^{2},\qquad f_{1}(x,y)=-\sin((x-0.1)+0.5\pi)\cos(y-0.2),$

(5.1.1) $\displaystyle f_{2}(x,y)=\begin{cases}1,&\quad\text{if }x\leq 0\text{ and }y\leq 0,\\ 0,&\quad\text{otherwise.}\end{cases}$

$\displaystyle f_{3}(x,y,z)=(x+0.6)^{2}+(y-0.2)^{2}+(z+0.1)^{2},\qquad f_{4}(x,y,z)=-\sin((x-0.1)+0.5\pi)\cos(y-0.2)\cos(z-0.2),$

(5.1.2) $\displaystyle f_{5}(x,y,z)=\begin{cases}1,&\quad\text{if }x\leq 0\text{ and }y\leq 0\text{ and }z\leq 0,\\ 0,&\quad\text{otherwise.}\end{cases}$

Using the orthonormal basis, $\boldsymbol{\tilde{{v}}}$ is obtained, which is then used by Algorithm A.1 to find the values of the parameters $c$ and $\gamma$. For functions that have an exact analytical minimum, calculating the error between the exact minimum and the optimum returned by the GD is straightforward. For discontinuous functions such as $f_{2}$ and $f_{5}$, an approximate golden minimum is calculated by projecting the function using polynomial order $N=8$, evaluating on a dense grid of points in $\Omega$, and finding the least value of the projected polynomial. In all our test cases, GD converges to the exact (or golden) minimum within an acceptable tolerance (numerical zero set to $10^{-7}$). Therefore, the selection of the parameters is made based on the fewest iterations taken by GD ($niter$) to converge.

Table 5.1.1: Results for the 2D experiment on a quadrilateral domain to find the optimal ranges of the gradient descent line search parameters $c$ and $\gamma$. M denotes the polynomial order for the projected functions in Equation 5.1.1. The number of iterations taken by GD is denoted by $niter$. The number of quadrature points for projection is constant ($Q=11$).

Consider $k$ discrete equispaced samples in $(0,1)$. For $k=9$, by applying Algorithm A.1, we get the results shown in Table 5.1.1. Based on the results of these tests, we infer that for the numerical experiments the appropriate choices for $c$ and $\gamma$ in 2D are 0.7 and 0.7, respectively. Following a similar procedure with Algorithm A.1 on a hexahedron for the functions defined in Equation 5.1.2, the choices of $c$ and $\gamma$ for 3D are 0.2 and 0.7, respectively. These values are used for all the numerical experiments that follow. The C++ implementation of the filter in Nektar++ [9] supports changing $c$ and $\gamma$ as parameters in the configuration file.

### 5.2 Projection examples on 2D and 3D elements and application of the structure-preserving filter

Consider a 2D function Equation 5.2.1 and a 3D function Equation 5.2.2 that are both discontinuous clamped versions of a smooth sinusoidal function. The initial projections of $f(x,y)$ on a quadrilateral and $f(x,y,z)$ on a hexahedron are shown in Figure 5.2.1. The discontinuity produces oscillations similar to the Gibbs phenomenon upon projection, which is interesting for the application and analysis of the filter.

Figure 5.2.1: Galerkin projection of $f(x,y)$ Equation 5.2.1 and $f(x,y,z)$ Equation 5.2.2 on a quadrilateral and hexahedron element, respectively. The 2D projection uses polynomial order $=7$, and the 3D projection uses polynomial order $=5$. Left: Unfiltered version showing areas where the desired structure (positivity) is lost. Right: After the filter is applied, the solution is non-negative up to tolerance.
(5.2.1) $f(x,y)=\nu(x,y)\sin\Big(\pi(0.2-x)\Big)\sin\Big(\pi(y+0.2)\Big),$

where $\nu(x,y)=\begin{cases}1,&\quad\text{if }x\in[-0.8,0.2]\text{ and }y\in[-0.2,0.8],\\ 0,&\quad\text{otherwise.}\end{cases}$

(5.2.2) $f(x,y,z)=\nu(x,y,z)\sin\Big(\pi(0.2-x)\Big)\sin\Big(\pi(y+0.2)\Big)\sin\Big(\pi(z+0.2)\Big),$

where $\nu(x,y,z)=\begin{cases}1,&\quad\text{if }x\in[-0.8,0.2]\text{ and }y\in[-0.2,0.8]\text{ and }z\in[-0.2,0.8],\\ 0,&\quad\text{otherwise.}\end{cases}$

It is evident from the left subfigures in Figure 5.2.1 that the projection of discontinuous non-negative functions leads to negative values, as shown in the highlighted regions. As seen in the right subfigures of Figure 5.2.1, the application of the structure-preserving filter restores the non-negative structures of Equation 5.2.1 and Equation 5.2.2, respectively. To analyze the p-convergence, the experiment is repeated for different polynomial orders, as shown in Figure 5.2.2.

Figure 5.2.2: A p-convergence study of the filtered and unfiltered Galerkin projections from Figure 5.2.1. Left: $f(x,y)$ defined in Equation 5.2.1. Right: $f(x,y,z)$ defined in Equation 5.2.2.

Figure 5.2.2 presents an $L_{2}$ error comparison between the filtered and unfiltered versions of the projections of nonsmooth functions in 2D and 3D. The choice of a nonsmooth function is to demonstrate the effect of filter application in worst-case scenarios.

### 5.3 Structure preservation in 2D and 3D advection problems

Consider a dG solution to the advection problem Equation 5.3.1 on a 2D domain, as shown in the first row of Figure 5.3.1.

(5.3.1) ${u}_{t}+\boldsymbol{a}\cdot\nabla{u}=0.$

Consider a function $f$ defined on $[-1,1]\times[-1,1]$ as Equation 5.3.2.

(5.3.2) $f(x,y)=1-\cos\Big(\frac{\pi x}{2}\Big)\cos\Big(\frac{\pi y}{2}\Big)$

Given the initial condition $f(x,y)$, $\boldsymbol{a}=[1,1]$, and periodic boundary conditions, we formulate the problem in the discontinuous Galerkin framework. The first step is projecting $f$ on a set of $E$ elements $\{e_{0},e_{1},\cdots,e_{E-1}\}\in\Omega$ using the typical nonorthogonal (hats and bubbles) basis $\phi$. Locally, for an element $e$, we have

$f_{e}(x,y)=\sum_{i=0}^{P-1}\widehat{u}_{i}\phi_{i}(x,y),$

where $\Omega$ is the mesh shown in Figure 5.3.1, consisting of a mix of quadrilaterals and triangles. The filter steps do not change for different element types, as emphasized by the choice of a composite mesh. The solution of the advection problem using the timestep $\Delta t=1e{-3}$, polynomial order 4, the RK-4 integration scheme, and upwind flux calculations is shown in Figure 5.3.1. Row 2 of the figure shows the simulation state at a particular timestep and highlights the negative values in the domain. In row 3, the non-negative structure is restored after the application of the filter. The filter is applied at each timestep to preserve the structure (positivity) of the solution on a lattice of points of interest. The lattice is the set of points defined in Equation 2.1.1. If a structure violation is found at any point in the lattice, the parent element is flagged for filtering. Since the boundaries are periodic, the final state of the simulation looks similar to the initial state.

For a similar analysis of the 3D advection problem, consider the initial state given by the smooth continuous function Equation 5.3.3 on a cube mesh of 64 hexahedron elements defined in the domain $[-1,-1,-1]\times[1,1,1]$.
The advection velocity is defined by $\boldsymbol{a}=[1,1,1]$, and all the boundary conditions are periodic. The integration method used is RK-4 with a timestep of $1e{-3}$. The timestepping is performed for a total of 2000 timesteps, and the flux calculation is upwind. Following a similar selection procedure as in 2D, we discover the elements that violate the structure at each timestep and apply the structure-preserving filter to those elements.

(5.3.3) $f(x,y,z)=1-\cos\Big(\frac{\pi x}{2}\Big)\cos\Big(\frac{\pi y}{2}\Big)\cos\Big(\frac{\pi z}{2}\Big)$

Figure 5.3.1: Row 1, Left: A 2D composite mesh used for solving Equation 5.3.1. The mesh contains 16 triangles and 8 quadrilaterals. Row 1, Right: Initial state of the system, which is a projection of Equation 5.3.2 on the mesh. Row 2: Unfiltered solution at timestep 1800. Row 3: Filtered solution at timestep 1800.

Figure 5.3.2: The state of the simulation at t = 0.3 seconds (timestep = 300) using initial condition Equation 5.3.3. Left and middle: The highlighted region and its zoomed counterpart show the points in the elements that lose the positivity structure. Right: The filtered values at the same elements, which preserve the positivity structure.

Figure 5.3.2 shows an instance during the advection process where a loss of structure is encountered, i.e., negative intermediate values are found, as shown by the highlighted region in the left and middle subfigures of Figure 5.3.2. The process of discovering the structurally nonconformal elements and applying the filter to those elements adds to the total time cost of the advection solution, which is an important aspect to consider when deciding on the criteria for applying the filter.

Figure 5.3.3: Percentage increase in time to complete the experiment by application of the filter. Left: 2D. Right: 3D.

The percentage increase in the total time taken by the experiment due to the application of the structure-preserving filter is shown in Figure 5.3.3. Note that for 2D, at low orders ($N=2,3,4$), the cost of filtering is between 50% and 70% of the total time. This is due, in part, to the efficiency of the unfiltered solver at these orders, such as caching effects, as well as to up-front costs associated with the optimization. We find that for 2D the ratio of filtered to unfiltered time is lowest at $N=5$ and then proceeds to climb linearly, as shown in the left subfigure of Figure 5.3.3. The time taken by the filter in 3D is notably higher than in 2D because of the higher complexity of finding the global minimum in 3D. The cost of overall filtering per timestep depends on the number of elements that the filter operates upon. Therefore, the procedure of selective application of the filter becomes increasingly important.

### 5.4 Structure preservation of the canonical rotating solid body test

We now investigate the application of the filter on the solid body rotation experiment using a discontinuous initial condition defined as a combination of a notched cylinder, a cone, and a smooth hump. Consider the initial data shown in Figure 5.4.1. This example is extensively used in the advection and structure-preservation literature [19]. The parameters to reproduce the initial state are as follows: the cylinder, cone, and hump each have a radius of 0.3, and their centers are (0, 0.5), (0, -0.5), and (-0.6, 0), respectively.
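For reproducibility, the initial state can be sketched as below. The centers and the common radius are taken from the text; the notch geometry of the slotted cylinder and the cone/hump profiles follow the standard definitions of [19] and are assumptions of this sketch rather than values reported here.

```python
import numpy as np

def solid_body_ic(x, y, r0=0.3):
    """Notched cylinder, cone, and smooth hump of Section 5.4 (x, y arrays).
    The slot half-width 0.05 and slot height 0.7 are illustrative guesses."""
    r = lambda cx, cy: np.hypot(x - cx, y - cy)
    u = np.zeros_like(x, dtype=float)
    rc, rk, rh = r(0.0, 0.5), r(0.0, -0.5), r(-0.6, 0.0)
    cyl = (rc <= r0) & ~((np.abs(x) < 0.05) & (y < 0.7))   # slotted cylinder
    u[cyl] = 1.0
    cone = rk <= r0                                        # linear cone
    u[cone] = 1.0 - rk[cone] / r0
    hump = rh <= r0                                        # cosine hump
    u[hump] = 0.25 * (1.0 + np.cos(np.pi * rh[hump] / r0))
    return u
```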
Without changing the domain and boundary conditions used for the 2D advection experiment shown in Figure 5.3.1, the advection velocity is changed to a circular field such that in one time period the solution returns to its original state. For all tests in this section, the domain has 144 quadrilateral elements and the timestep $(\Delta t)$ is $5e{-4}$.

Figure 5.4.1: Initial state for solid body rotation tests.

Figure 5.4.2: Snapshots at the beginning and end of advection. The highlighted region shows the values below the tolerance for negativity $(10^{-7})$. At t = 1s, the solid body finishes one rotation and returns to the original position. Parameters for the test: Timestep = $5e{-4}$, polynomial order = 5. Row 1: Initial function projection at time = 0s. Row 2: Solution at time = 1s. Column 1: Unconstrained. Column 2: Constrained.

Figure 5.4.3: Snapshots at various points in time during advection. $\Delta t=5e{-4}$, time period = 1, and polynomial order = 5. Some subfigures show insets with the zoomed-in area of interest. Column 1: State of the system at t = 0. Column 2: State of the system at t = 1s. Row 1: Slice at x = -0.5. Row 2: Slice at x = 0.

Since the initial condition for the experiment is defined by a complicated discontinuous function, the solution suffers from a larger loss of structure compared to the case with a smooth initial condition. Figure 5.4.2 shows the initial and final states during rotation of the body shown in Figure 5.4.1, with and without the filter application.

Figure 5.4.4: Snapshots at various points in time during advection. $\Delta t=5e{-4}$, time period = 1, and polynomial order = 5. Some subfigures show insets with the zoomed-in area of interest. Column 1: State of the system at t = 0. Column 2: State of the system at t = 1s. Row 1: Slice at y = -0.5. Row 2: Slice at y = 0.5.

Figures 5.4.3 and 5.4.4 show different cross sections of the solution with and without filtering at the initial and final times. The total time taken by the filter as a fraction of the entire simulation time is shown in Figure 5.4.5.

Figure 5.4.5: Analysis of filter application to the experiment shown in Figure 5.4.1. Top left: Percentage of total simulation time spent inside the filter. Top right: p-convergence of $L_{2}$ errors for the filtered and unfiltered solutions at t = 1s. Bottom left: Total number of filter iterations vs the total number of times the filter is called for various polynomial orders. Bottom right: Average number of filter iterations per call vs the average number of GD line-search iterations per filter iteration for various polynomial orders.

As evident from the top left panel in Figure 5.4.5, the filter takes proportionally longer to restore the structure as the loss of structure increases. In many practical applications, such as the one presented in Section 5.6, we observe that paying the extra filtering cost results in the successful termination of a simulation that otherwise fails due to the presence of structural inconsistencies. In such cases, the tradeoff between time cost and the guarantee of structure preservation leans towards the latter. The filtered and unfiltered $L_{2}$ errors after one rotation of the solid body are shown in the top right panel of Figure 5.4.5. Although filtering changes the final state of the solution, thus contributing to the errors, the filtered and unfiltered errors remain comparable. The filtered versions of the tests using different polynomial orders follow the same p-convergence as their unfiltered counterparts.
A comparison between the number of times the filter is invoked and the number of iterations taken per invocation is shown in the bottom left panel of Figure 5.4.5. We notice that the number of iterations taken per call to the filter increases with the polynomial order because of the larger magnitude of structural inconsistencies observed as a result of the oscillations in the solution. The largest contribution to the cost of applying the filter comes from the GD line search used to find the global minimum. At each iteration of the filter, we find the minimum, and each call to find the minimum, in turn, takes a few iterations of GD. The bottom right panel of Figure 5.4.5 compares the average number of GD iterations per filter iteration with the average number of filter iterations per call to the filter.

### 5.5 Structure preservation in advection tests on various 3D domains

The procedure to filter the elements that exhibit loss of structure remains agnostic to the type of element. To demonstrate this, we consider different tessellations of the domain $[-1,-1,-1]\times[1,1,1]$ and solve an advection problem with the nontrivial initial state Equation 5.5.1. The setup remains the same as in the previous 3D example, i.e., $\boldsymbol{a}=[1,1,1]$ in all directions, periodic boundaries, and the RK-4 integration scheme with timestep $1e{-3}$ for a total of 2000 timesteps.

Figure 5.5.1: A summary of the advection experiment using polynomial order $=4$ and $\Delta t=1e{-3}$ on 3D cube domains meshed using different canonical elements individually. Left to right: The mesh profile, the unfiltered solution, and the filtered solution at timestep = 1000, respectively. Top row: Cube mesh with 27 hexahedra. Middle row: Cube mesh with 54 prisms. Bottom row: Cube mesh with 162 tetrahedra.

(5.5.1) $f(x,y,z,t=0)=0.2\Big([1-\sqrt{x^{2}+y^{2}}]^{2}+z^{2}\Big)$

Figure 5.5.2: Left: Per-timestep $L_{2}$ error for the advection problem with polynomial order $=4$ on a cube mesh of 24 tetrahedral elements. The timestep = $1e{-3}$, and the total steps $=4000$. At each timestep, for the $L_{2}$ error computation, we consider 10 quadrature points in each direction. Right: p-convergence of the filtered and unfiltered $L_{2}$ errors at timestep 2000 for the advection problem on a cube mesh of 24 tetrahedral elements.

The initial state looks like a torus inside a cube. The advection experiment is repeated on different homogeneous meshes with element types hexahedra, tetrahedra, and prisms individually. Figure 5.5.1 shows the results before and after the filter application at a particular timestep for all these meshes. The analysis of the filtering effects on the accuracy and convergence of the advection process, especially on the meshes with hexahedron and prism elements, reveals that the difference between the $L_{2}$ errors is too small to warrant a convergence comparison. However, for the mesh with tetrahedral elements, the $L_{2}$ errors perceptibly vary at each timestep, as shown by a longer run of the same experiment (4000 timesteps in total) in Figure 5.5.2.

### 5.6 Filter application to dG FEM implementation of the PAC model

A prominent scenario in which the problem of structure preservation is encountered is the mathematical-biology domain. A detailed mathematical model for platelet aggregation and blood coagulation (PAC) by [18] is considered for this experiment. The model describes the process of evolution, interaction, and decay of the chemical species involved in the process of thrombosis.
Although the details of the individual species in the PAC process are beyond the scope of this paper, [18] has a detailed explanation of the nature and the evolution of the chemical species. The model can be summarized as a comprehensive collection of ODEs and PDEs that track all the chemical species in various phases during the process. The results in [18] use a finite-difference method with a specified limiter to truncate nonpositive concentration values to zero. In order to implement this model using the FEM without loss of structure, an alternative approach to truncation, such as the one presented in this paper, is needed. For our experiments, we use the structure-preserving filter instead of a truncation limiter.

The focus is limited to the system of equations that tracks the chemical species that advect, diffuse, and react. One such species is fluid-phase thrombin (FP $e_{2}$), an essential component of the thrombosis process. For this section, we track the evolution of thrombin by advection, diffusion, and decay in the fluid phase. For the detailed results of the evolution of all the chemical species, including thrombin, under various circumstances in the original model, refer to [18].

We set up a version of the PAC model that solves the species evolution problem by a combination of advection-diffusion-reaction (ADR) PDEs. Unlike the original model, the velocity $\boldsymbol{u}$ of the fluid medium is not reported by a modified Navier-Stokes solver. Instead, the velocity is constant: $\boldsymbol{u}=[u_{x},u_{y}]=[5,0]$. To solve this system of PDEs on a sample blood vessel represented by the rectangular domain $\Omega=[0,300]\times[0,20]$, we employ the dG FEM. The domain is tessellated using 2048 quadrilaterals. The bottom wall has an injury site spanning from $x=20$ to $x=60$. We use polynomial order $=4$ and a timestep of $\Delta t=1e{-2}$ to solve the problem. The total number of species tracked in our implementation of the model is 56. Since the focus is on the behavior of thrombin, we present the PDE describing the evolution of fluid-phase thrombin (FP $e_{2}$) in Equation 5.6.1. For details on the other chemical species involved in Equation 5.6.1, refer to [18]. The boundary conditions vary depending on the chemical being tracked. For FP $e_{2}$, we have a Dirichlet zero boundary on the left and a no-flux boundary on the right and top. The bottom boundary has two kinds of conditions: a Robin boundary condition on the injury site and Dirichlet zero everywhere else.

(5.6.1) $\begin{split}\frac{\partial\boldsymbol{e}_{2}}{\partial t}&=\underbrace{-\boldsymbol{u}\cdot\nabla e_{2}+\nabla\cdot(D\nabla e_{2})}_{\text{Transport by advection-diffusion}}\\ &\quad+\underbrace{k_{e_{2}}^{on}e_{2}(N_{2}P^{b,a}+N_{2}P^{se,a}-z_{2}^{mtot}-e_{2}^{mtot})+k_{e_{2}}^{off}e_{2}^{m}}_{\text{Binding to platelet receptor}}\\ &\quad+\underbrace{(k_{z_{5}:e_{2}}^{cat}+k_{z_{5}:e_{2}}^{-})[Z_{5}:E_{2}]-k_{z_{5}:e_{2}}^{+}z_{5}e_{2}}_{\text{Activation of V}}\\ &\quad+\underbrace{(k_{z_{7}:e_{2}}^{cat}+k_{z_{7}:e_{2}}^{-})[Z_{7}:E_{2}]-k_{z_{7}:e_{2}}^{+}z_{7}e_{2}}_{\text{Activation of VII}}\\ &\quad+\underbrace{(k_{z_{8}:e_{2}}^{cat}+k_{z_{8}:e_{2}}^{-})[Z_{8}:E_{2}]-k_{z_{8}:e_{2}}^{+}z_{8}e_{2}}_{\text{Activation of VIII}}\end{split}$

Figure 5.6.1: Top left: The unconstrained simulation solution becomes invalid, as shown in the inset at timestep 2, which causes a blowup at timestep 87, leading to the failure of the simulation.
Top right: The constrained simulation runs for 674 timesteps until the depletion of the chemicals causes the model to terminate naturally. Bottom left: Profile of fluid-phase thrombin in the unconstrained simulation at timestep 2 on a domain slice at $y=5$. Bottom right: Profile of fluid-phase thrombin in the constrained simulation at timestep 674 on a domain slice at $y=5$.

Figure 5.6.1 shows filtered and unfiltered runs of our implementation of this model and the resulting concentration profile of FP $e_{2}$. At timestep 2, we get invalid (negative) values around the injury site because of the sharp concentration changes (refer to the slice at $y=5$ in the bottom left part of Figure 5.6.1). Therefore, the fluid-phase thrombin concentration does not adhere to the desired structure (positivity), and the simulation does not succeed. If this experiment is allowed to continue, the effect of invalid values compounds over time, rendering the simulation results nonphysical and useless. At timestep 38, the simulation starts becoming unstable because of the structural discrepancies. If it is allowed to run further, the accumulation and propagation of negative values results in a simulation blow-up at timestep 87, at which point the code reports invalid values such as NaN (Not a Number). When the filter is applied, the simulation runs for significantly more timesteps (674 in total) and terminates naturally. The right side of Figure 5.6.1 shows the state of FP $e_{2}$ at the depletion point of the chemicals involved.

## 6 Conclusions

We present a formalism that solves the problem of structure preservation for PDE solutions in 2D and 3D. The construction and design of a postprocessing structure-preserving filter are detailed and applied to multidimensional dG FEM solutions of different PDEs. A geometric interpretation of the mathematical foundation behind the filter is presented, followed by an algorithm to apply the proposed filter in a timestepping PDE framework. At the core of the filter lies the expensive requirement of global minimization of a weighted distance function that corresponds to the objective we optimize. We employ gradient descent with a backtracking line search for the minimization to reduce the cost of the filtering routine. To this end, we detail an investigative procedure to reduce the minimization cost by precomputing specific parameter values for the gradient descent approach. Using numerical examples, we compare the convergence rates with and without the application of the filter for different problem sizes and domain structures. The percentage increase in time taken by the simulation is computed to understand the cost of the filter, and it is observed that the cost scales with the order and size of the problem. In the end, using a filtered solution to the mathematical-biology problem of platelet aggregation and blood coagulation, we provide evidence of the proposed method’s efficacy and utility.

One future direction for investigation is to understand when inter-element flux preservation is beneficial in these types of numerical simulations. For example, comparing two filtered solutions, one with and one without flux preservation, could form the basis for more experiments.

## Acknowledgments

V. Zala and R.M. Kirby acknowledge that their part of this research was sponsored by ARL under cooperative agreement number W911NF-12-2-0023.
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of ARL or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. A. Narayan was partially supported by NSF DMS-1848508. This material is based upon work supported by both the National Science Foundation under Grant No. DMS-1439786 and the Simons Foundation Institute Grant Award ID 507536 while A. Narayan was in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the Spring 2020 semester.

## References

* [1] L. Allen and R. C. Kirby, Bounds-constrained polynomial approximation using the Bernstein basis, Numerische Mathematik, 152 (2022), pp. 101–126.
* [2] R. Anderson, V. Dobrev, T. Kolev, D. Kuzmin, M. Q. de Luna, R. Rieben, and V. Tomov, High-order local maximum principle preserving (MPP) discontinuous Galerkin finite element method for the transport equation, Journal of Computational Physics, 334 (2017), pp. 102–124.
* [3] R. W. Anderson, V. A. Dobrev, T. V. Kolev, and R. N. Rieben, Monotonicity in high-order curvilinear finite element arbitrary Lagrangian–Eulerian remap, International Journal for Numerical Methods in Fluids, 77 (2015), pp. 249–273.
* [4] R. W. Anderson, V. A. Dobrev, T. V. Kolev, R. N. Rieben, and V. Z. Tomov, High-order multi-material ALE hydrodynamics, SIAM Journal on Scientific Computing, 40 (2018), pp. B32–B58.
* [5] L. Armijo, Minimization of functions having Lipschitz continuous first partial derivatives, Pacific Journal of Mathematics, 16 (1966), pp. 1–3.
* [6] J.-P. Berrut and L. N. Trefethen, Barycentric Lagrange interpolation, SIAM Review, 46 (2004), pp. 501–517.
* [7] D. P. Bertsekas, Nonlinear programming, Journal of the Operational Research Society, 48 (1997), pp. 334–334.
* [8] S. Boyd, S. P. Boyd, and L. Vandenberghe, Convex optimization, Cambridge University Press, 2004.
* [9] C. D. Cantwell, D. Moxey, A. Comerford, A. Bolis, G. Rocco, G. Mengaldo, D. De Grazia, S. Yakovlev, J.-E. Lombard, D. Ekelschot, et al., Nektar++: An open-source spectral/hp element framework, Computer Physics Communications, 192 (2015), pp. 205–219.
* [10] J. Cheng, R. Deriche, T. Jiang, D. Shen, and P.-T. Yap, Non-negative spherical deconvolution (NNSD) for estimation of fiber orientation distribution function in single-/multi-shell diffusion MRI, NeuroImage, 101 (2014), pp. 750–764.
* [11] J. B. Crockett, H. Chernoff, et al., Gradient methods of maximization, Pacific Journal of Mathematics, 5 (1955), pp. 33–50.
* [12] H. B. Curry, The method of steepest descent for non-linear minimization problems, Quarterly of Applied Mathematics, 2 (1944), pp. 258–261.
* [13] T. Dela Haije, E. Özarslan, and A. Feragen, Enforcing necessary non-negativity constraints for common diffusion MRI models using sum of squares programming, NeuroImage, 209 (2020), p. 116405.
* [14] A. Goh, C. Lenglet, P. M. Thompson, and R. Vidal, A nonparametric Riemannian framework for processing high angular resolution diffusion images and its applications to ODF-based morphometry, NeuroImage, 56 (2011), pp. 1181–1201.
* [15] M. Kucharik, M. Shashkov, and B. Wendroff, An efficient linearity-and-bound-preserving remapping method, Journal of Computational Physics, 188 (2003), pp. 462–471.
* [16] M. P. Laiu, C. D. Hauck, R. G. McClarren, D. P. O’Leary, and A. L. Tits, Positive filtered P$_n$ moment closures for linear kinetic equations, SIAM Journal on Numerical Analysis, 54 (2016), pp. 3214–3238.
* [17] E. Laughton, V. Zala, A. Narayan, R. M. Kirby, and D. Moxey, Fast barycentric-based evaluation over spectral/hp elements, Journal of Scientific Computing, 90 (2022), p. 78.
* [18] K. Leiderman and A. L. Fogelson, Grow with the flow: a spatial–temporal model of platelet deposition and blood coagulation under flow, Mathematical Medicine and Biology: A Journal of the IMA, 28 (2011), pp. 47–84.
* [19] R. J. LeVeque, High-resolution conservative algorithms for advection in incompressible flow, SIAM Journal on Numerical Analysis, 33 (1996), pp. 627–665.
* [20] D. Light and D. Durran, Preserving nonnegativity in discontinuous Galerkin approximations to scalar transport via truncation and mass aware rescaling (TMAR), Monthly Weather Review, 144 (2016), pp. 4771–4786.
* [21] C. Liu, C. Wang, and Y. Wang, A structure-preserving, operator splitting scheme for reaction-diffusion equations with detailed balance, Journal of Computational Physics, 436 (2021), p. 110253.
* [22] X.-D. Liu and S. Osher, Nonoscillatory high order accurate self-similar maximum principle satisfying shock capturing schemes I, SIAM Journal on Numerical Analysis, 33 (1996), pp. 760–779.
* [23] C. Lohmann, D. Kuzmin, J. N. Shadid, and S. Mabuza, Flux-corrected transport algorithms for continuous Galerkin methods based on high order Bernstein finite elements, Journal of Computational Physics, 344 (2017), pp. 151–186.
* [24] T. S. Motzkin and I. J. Schoenberg, The relaxation method for linear inequalities, Canadian Journal of Mathematics, 6 (1954), pp. 393–404.
* [25] R. Sanders, A third-order accurate variation nonexpansive difference scheme for single nonlinear conservation laws, Mathematics of Computation, 51 (1988), pp. 535–558.
* [26] M. Shashkov and B. Wendroff, The repair paradigm and application to conservation laws, Journal of Computational Physics, 198 (2004), pp. 265–277.
* [27] C.-W. Shu and S. Osher, Efficient implementation of essentially non-oscillatory shock-capturing schemes, Journal of Computational Physics, 77 (1988), pp. 439–471.
* [28] M. H. Van Benthem and M. R. Keenan, Fast algorithm for the solution of large-scale non-negativity-constrained least squares problems, Journal of Chemometrics, 18 (2004), pp. 441–450.
* [29] J. J. van der Vegt, Y. Xia, and Y. Xu, Positivity preserving limiters for time-implicit higher order accurate discontinuous Galerkin discretizations, SIAM Journal on Scientific Computing, 41 (2019), pp. A2037–A2063.
* [30] M. J. Zahr and P.-O. Persson, An optimization based discontinuous Galerkin approach for high-order accurate shock tracking, in 2018 AIAA Aerospace Sciences Meeting, 2018, p. 0063.
* [31] V. Zala, R. M. Kirby, and A. Narayan, Structure-preserving function approximation via convex optimization, SIAM Journal on Scientific Computing, 42 (2020), pp. A3006–A3029.
* [32] V. Zala, R. M. Kirby, and A. Narayan, Structure-preserving nonlinear filtering for continuous and discontinuous Galerkin spectral/hp element methods, SIAM Journal on Scientific Computing (under review), (2020).
* [33] X. Zhang, On positivity-preserving high order discontinuous Galerkin schemes for compressible Navier–Stokes equations, Journal of Computational Physics, 328 (2017), pp. 301–343.
* [34] X. Zhang and C.-W. Shu, On maximum-principle-satisfying high order schemes for scalar conservation laws, Journal of Computational Physics, 229 (2010), pp. 3091–3120.

## Appendix A The parameter selection algorithm for backtracking line search used in GD-based minimization

Equations 5.1.1 and 5.1.2 define the sets of functions used in the selection process.

Algorithm A.1 Experiment to determine ideal values of $c$ and $\gamma$ given a set of functions $f_{i}$, for $i=0,1,\cdots,n$
1: $G\leftarrow{P}$ samples in $(0,1)$
2: for each $f_{i}$ $\forall$ $i=0,1,\cdots,n$ do
3: $H\leftarrow\{\}$
4: for each $\gamma_{j}$ in $G$ do
5: for each $c_{k}$ in $G$ do
6: $(niter,err)\leftarrow$ GD_linesearch($f_{i},c_{k},\gamma_{j}$)
7: $H\leftarrow\Big\{H;\{c_{k},\gamma_{j},niter,err\}\Big\}$
8: end for
9: end for
10: end for
11: $H_{1}\leftarrow$ select ranges of $c$ and $\gamma$ with least $niter$ from $H$
12: $H_{2}\leftarrow$ select ranges of $c$ and $\gamma$ with least $err$ from $H_{1}$
13: return ranges for $c$ and $\gamma$ from $H_{2}$

## Appendix B Additional results for the experiment described in Section 5.4 for polynomial orders 3 and 7 are attached below:

Figure B.1: Snapshots at the beginning and end of advection. The highlighted region shows the values below the tolerance for negativity $(10^{-7})$. At t = 1s, the solid body finishes one rotation and returns to the original position. Parameters for the test: Timestep = $5e{-4}$, polynomial order = 3. Row 1: Time = 0s. Row 2: Time = 1s. Column 1: Unconstrained solution. Column 2: Constrained solution.

Figure B.2: Snapshots at various points in time during advection. $\Delta t=5e{-4}$, time period = 1, and polynomial order = 3. Some subfigures show insets with the zoomed-in area of interest. Column 1: State of the system at t = 0. Column 2: State of the system at t = 1s. Row 1: Slice at x = -0.5. Row 2: Slice at x = 0. Row 3: Slice at y = -0.5. Row 4: Slice at y = 0.5.

Figure B.3: Snapshots at the beginning and end of advection. The highlighted region shows the values below the tolerance for negativity $(10^{-7})$. At t = 1s, the solid body finishes one rotation and returns to the original position. Parameters for the test: Timestep = $5e{-4}$, polynomial order = 7. Row 1: Time = 0s. Row 2: Time = 1s. Column 1: Unconstrained solution. Column 2: Constrained solution.

Figure B.4: Snapshots at various points in time during advection. $\Delta t=5e{-4}$, time period = 1, and polynomial order = 7. Some subfigures show insets with the zoomed-in area of interest. Column 1: State of the system at t = 0. Column 2: State of the system at t = 1s. Row 1: Slice at x = -0.5. Row 2: Slice at x = 0. Row 3: Slice at y = -0.5. Row 4: Slice at y = 0.5.
# A Cluster-Aggregate-Pool (CAP) Ensemble Algorithm for Improved Forecast Performance of influenza-like illness

Ningxi Wei Department of Mathematics, College of Arts and Science, Lehigh University, Bethlehem, Pennsylvania, United States of America Xinze Zhou Department of Mathematics, College of Arts and Science, Lehigh University, Bethlehem, Pennsylvania, United States of America Wei-Min Huang Department of Mathematics, College of Arts and Science, Lehigh University, Bethlehem, Pennsylvania, United States of America Thomas McAndrew∗ Department of Community and Population Health, College of Health, Lehigh University, Bethlehem, Pennsylvania, United States of America<EMAIL_ADDRESS>

###### Abstract

Seasonal influenza causes on average 425,000 hospitalizations and 32,000 deaths per year in the United States. Forecasts of influenza-like illness (ILI)—a surrogate for the proportion of patients infected with influenza—support public health decision making. The goal of an ensemble forecast of ILI is to increase accuracy and calibration compared to individual forecasts and to provide a single, cohesive prediction of future influenza. However, an ensemble may be composed of models that produce similar forecasts, causing issues with ensemble forecast performance and non-identifiability. To address these issues we propose a novel Cluster-Aggregate-Pool or ‘CAP’ ensemble algorithm that first clusters together individual forecasts, aggregates individual models that belong to the same cluster into a single forecast (called a cluster forecast), and then pools together cluster forecasts via a linear pool. When compared to a non-CAP approach, we find that a CAP ensemble improves calibration by approximately 10% while maintaining similar accuracy to non-CAP alternatives. In addition, our CAP algorithm (i) generalizes past ensemble work associated with influenza forecasting and introduces a framework for future ensemble work, (ii) automatically accounts for missing forecasts from individual models, (iii) allows public health officials to participate in the ensemble by assigning individual models to clusters, and (iv) provides an additional signal about when peak influenza may be near.

## I Introduction

Seasonal influenza (flu) caused 9–41 million illnesses, 140,000–710,000 hospitalizations, and 12,000–52,000 deaths annually in the United States between 2010 and 2020 [1, 2]. Flu disproportionately impacts children under the age of 3 and adults over the age of 65 [3]. The influenza burden in the United States is estimated to cost $87B annually in total costs ($10B in direct medical costs) [4]. This estimate accounts for costs due to hospitalization, deaths, productivity loss as a result of symptoms, and decreased economic activity [5, 4].

Influenza-like illness (ILI)—a fever plus sore throat/cough—is a syndrome that correlates with laboratory-confirmed influenza and can be monitored to help guide public health response [6, 7]. A patient admitted to a healthcare facility is diagnosed with ILI if they present with a fever above 37.8 degrees Celsius and a cough or sore throat [8]. ILI is a syndromic classification and may include other infectious agents that present symptoms similar to influenza, such as SARS-CoV-2 and respiratory syncytial virus (RSV) [8]. Public health officials monitor ILI to determine the proper allocation of resources to hospitals, timely advice for when the public should be vaccinated, and, in severe cases, whether to take actions such as quarantine [9].
Accurate forecasts of ILI complement surveillance efforts by providing advance warning of potential changes in burden due to influenza [10, 11, 12]. The importance of forecasting the trajectory of infectious agents is highlighted by the FluSight challenge hosted by the Centers for Disease Control and Prevention (CDC) [10, 11]. The FluSight challenge encourages the development of innovative forecasting models to predict the spread and impact of seasonal influenza, and promotes collaboration among experts in epidemiology, biostatistics, and public health to anticipate and prepare for flu outbreaks [7, 13].

Models to forecast ILI can be separated into individual (or component) models and multi-model ensembles. Component models train on past observations of percent ILI (the number of cases of ILI divided by the number of recorded hospital visits) and potentially external data to generate predictive densities over future percent ILI [10, 11, 14, 15, 16]. Examples of external data sources used in training include commuter patterns, vaccine data, and viral strain data [17, 18, 16]. Component models can be phenomenological, taking advantage of statistical correlations between ILI and additional covariates; or mechanistic, supposing a deterministic relationship for how a set of observed and latent variables evolve over time [17, 18, 16, 14, 15].

A multi-model ensemble takes as input (i) a set of component model forecasts and (ii) past/present observed ILI, and generates as output a single forecast [19, 20, 10, 21, 22]. Past work has shown multi-model ensemble forecasts perform well in practice [19]. A multi-model ensemble assumes no individual model can perfectly capture infectious disease dynamics. Instead, a multi-model ensemble attempts to incorporate many distinct possibilities for how an infectious agent may evolve over time [20, 22]. An ensemble provides a single message for public health officials to interpret [10, 11, 21]. Weaknesses associated with multi-model ensembles are that (i) they often assume that forecasts from component models are statistically independent, (ii) they usually assume that all component models will continue to generate forecasts for the entire season (i.e. no missing forecasts), and (iii) they assume that past performance of component models is indicative of future performance.

We propose a novel framework for ensemble modeling called Cluster-Aggregate-Pool (CAP). CAP partitions component model forecasts into sets (Cluster), aggregates each set of forecasts into a single, representative forecast called a cluster forecast (Aggregate), and then combines cluster forecasts into a single ensemble forecast (Pool). Our CAP approach to ensemble modeling may better satisfy assumptions of independence, shows similar or improved forecast performance compared to a non-CAP ensemble, and is able to account for missing component model forecasts in real time.

## II Epidemiological and forecast data

### II.1 Weighted influenza-like illness (ILI)

Percent influenza-like illness (ILI) is collected weekly $(t)$ at the state level $s$ during an influenza season (epidemic week 40 of year $Y$ to epidemic week 20 of year $Y+1$) [6, 11]. ILI is defined as the number of patients who were diagnosed with ILI at a healthcare facility that is part of the U.S. Outpatient ILI Surveillance Network (ILINet) divided by the total number of patients who visited a healthcare facility for any reason [8].
Weighted percent ILI (wILI) is computed for each of the 10 Health and Human Services regions (HHS) and at the US national level as

$\displaystyle\text{wILI}_{r}=\sum_{s\in S_{r}}w_{s}\cdot\text{ILI}_{s}$ (1)

where $S_{r}$ is the set of states that belong to HHS region $r$, $w_{s}$ is a weight that equals the number of residents in state $s$ divided by the number of residents in HHS region $r$, and $\text{ILI}_{s}$ is the reported ILI for state $s$. The influenza season begins on the epidemic week where the following three consecutive weeks are above a CDC-defined baseline percentage of wILI [8]. Details about the wILI dataset can be found in the Supplemental Material.

For computational convenience, ILI is discretized into 131 intervals: bins of the form $[X,Y)$ from 0% to 13% by 0.1% and one bin of the form $[X,Y]$ from 13% to 100%. We assume a sequence of random variables (i.e. a sample) $(X_{40},X_{41},\cdots,X_{20})$ is responsible for generating the observed ILI values. No additional assumptions are made about this sample.

### II.2 Component model forecasts

During the FluSight challenge, eight research teams (a subset of all teams) generated twenty-seven component model forecasts with the purpose of being combined into an ensemble called the FluSight Ensemble [6]. Teams submitted weekly forecasts for seven targets associated with ILI and for seven influenza seasons, beginning with the 2011/2012 season and ending with the 2018/2019 season. The seven targets are: one, two, three, and four week-ahead percent ILI; season onset week (the week where three consecutive weeks’ ILI percentages are higher than the CDC baseline); season peak ILI percentage; and season peak week.

A component model forecast of 1–4 week ahead ILI at epidemic week $t$ is a discrete probability distribution over the 131 intervals $[0,0.1],(0.1,0.2],(0.2,0.3],\cdots,(12.9,13],[13,100]$. For ‘week’ targets (season peak week and season onset week), each component model produces a probability distribution discretized over the 32 epidemic weeks in a season. In this work we only consider forecasts of 1–4 week ahead percent ILI. See Supplemental Fig. 8 for an example of component model forecasts and Supplemental Table 1 for a brief description of the model types that were trained.

### II.3 Comparator ensemble algorithms

We compare our novel CAP-adaptive ensemble algorithm to three algorithms that have been implemented in past work: an equally weighted ensemble, a static ensemble, and an adaptive ensemble [22]. All three ensembles are linear pools, assuming that observations are generated as a convex combination of component models,

$\displaystyle f_{Y}(y|\pi_{1:C})=\sum_{c=1}^{C}\pi_{c}f_{c}(y);\quad\pi_{c}\geq 0\;\forall c;\quad\sum_{c=1}^{C}\pi_{c}=1$ (2)

where $f_{c}$ is the predictive density over ILI values for component model $c$, $\pi_{c}$ is the weight assigned to component model $c$, there are $C$ component models considered, $y$ is percent ILI, and $f_{Y}(y)$ is the predictive density of the ensemble. Because this is a multi-model ensemble, the component model densities $f_{c}$ are fixed. We have no access to parameters or model specifications for component models. An equally weighted ensemble assigns each component model a weight of one over the number of component models.
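As a minimal illustration of the linear pool in Eq. (2), the sketch below combines discretized component forecasts—each a length-131 probability vector over the ILI bins—with equal weights; the `forecasts` array is synthetic and stands in for actual FluSight submissions.

```python
import numpy as np

rng = np.random.default_rng(0)
C, B = 27, 131                      # number of component models, number of ILI bins

# Synthetic component forecasts: each row is a probability vector over the bins.
forecasts = rng.random((C, B))
forecasts /= forecasts.sum(axis=1, keepdims=True)

# Equally weighted linear pool: pi_c = 1/C for every component model.
weights = np.full(C, 1.0 / C)
ensemble = weights @ forecasts      # convex combination of the component densities

assert np.isclose(ensemble.sum(), 1.0)   # the pool is again a probability vector
```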
A static ensemble assigns weights $\pi_{c}$ that maximize the log-likelihood

$\displaystyle\log\left[\ell(\pi_{1:C}|\mathcal{D})\right]=\sum_{j=1}^{N}\log\left[f_{Y}(y_{j})\right]=\sum_{j=1}^{N}\log\left[\sum_{c=1}^{C}\pi_{c}f_{c}(y_{j})\right]$ (3)

where $\mathcal{D}=(y_{1},y_{2},\cdots,y_{N})$ is the sequence of observed ILI values from all previous seasons up to, but not including, the present season. During season $S$, weights are computed based on ILI values and component model forecasts from previous seasons. Weights are assigned at the beginning of the season and stay fixed (static) throughout the season. Weights are then recomputed at the beginning of season $S+1$. In the first season (e.g. 2011/2012), with no past data, a static ensemble assigns equal weights to all component models.

An adaptive ensemble assigns equal weights to component models on week one and, week by week, updates the weights. The weights $(\pi_{1:C})$ are updated according to the log-likelihood above (3) plus a time-dependent Dirichlet prior. The prior is meant to keep the model from assigning too much weight to a small number of component models [22].

## III CAP ensemble

The CAP ensemble partitions (or clusters) the set of $C$ component models into a collection of $K$ sets of component models (Cluster step). For cluster $k$, it maps the set of component models to a single forecast (Aggregate step) called a cluster forecast. It then combines the $K$ cluster forecasts into a single CAP ensemble forecast (Pool step). The aim of the cluster and aggregate steps is to reduce component model redundancy. The Pool step generates a single forecast from the set of $K$ cluster forecasts.

### III.1 The impact of component model redundancy

Consider a set of $C$ forecasts represented as random variables $X_{1},X_{2},\cdots,X_{C}$. The linear pool forms a new random variable $Y$ with probability density function (pdf)

$\displaystyle f(y)=\sum_{c=1}^{C}\pi_{c}f_{X_{c}}(y)$ (4)

where $f_{X_{c}}(y)$ corresponds to the pdf for component model $c$. Similar, or overlapping, densities submitted by more than one component model may present issues with (i) forecast variance and (ii) identifiability. We can illustrate these two issues by combining overlapping component models in the following example. Suppose we combine in an equally weighted linear pool two component models represented by the two random variables

$\displaystyle X_{1}\sim\mathcal{N}\left(\mu_{1},\sigma^{2}\right);\quad X_{2}\sim\mathcal{N}\left(\mu_{2},\sigma^{2}\right)$ (5)

Then the variance of the linear pool ensemble (see [23] for a detailed derivation) will be

$\displaystyle\mathbb{V}=\sum_{c=1}^{2}\frac{1}{2}\sigma_{c}^{2}+\sum_{c=1}^{2}\frac{1}{2}\mu_{c}^{2}-\left[\sum_{c=1}^{2}\frac{1}{2}\mu_{c}\right]^{2}$ (6)

Because squaring is a convex function, we can appeal to Jensen’s inequality to find that, whenever $\mu_{1}\neq\mu_{2}$,

$\displaystyle\sum_{c=1}^{2}\frac{1}{2}\mu_{c}^{2}>\left[\sum_{c=1}^{2}\frac{1}{2}\mu_{c}\right]^{2}$ (7)

If the expected values are equal then the variance reduces to $\sigma^{2}$, but if the expected values are distinct then the linear pool variance will be greater than $\sigma^{2}$ (Fig 1A.). We find that as the Kullback-Leibler (KL) divergence between the two component model probability density functions decreases (i.e. the two component model forecasts become more similar), the variance of an equally weighted ensemble decreases linearly (Fig 1B.).
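The linear relation visible in Fig 1B. can be reproduced in a few lines. The sketch below sweeps the second model's mean as described in the figure caption and evaluates the pool variance of Eq. (6); the closed-form KL divergence between two equal-variance normal densities is an added convenience here, not a formula taken from the text.

```python
import numpy as np

def pool_variance(mus, sigma2=1.0):
    """Variance of an equally weighted linear pool of normals, Eq. (6)."""
    mus = np.asarray(mus, dtype=float)
    return sigma2 + np.mean(mus**2) - np.mean(mus)**2

def kl_normal(mu1, mu2, sigma2=1.0):
    """KL( N(mu1, sigma2) || N(mu2, sigma2) ) in closed form."""
    return (mu1 - mu2) ** 2 / (2.0 * sigma2)

mu1 = 0.75
for mu2 in np.arange(0.75, 1.501, 0.05):      # second mean moves from 3/4 to 3/2
    print(f"KL = {kl_normal(mu1, mu2):.4f}, "
          f"pool variance = {pool_variance([mu1, mu2]):.4f}")
```

Because the excess pool variance for two equal-variance normals is $[(\mu_{1}-\mu_{2})/2]^{2}$ while the KL divergence is $(\mu_{1}-\mu_{2})^{2}/(2\sigma^{2})$, the two quantities are exactly proportional, which is the linear decrease seen in Fig 1B.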
The 27 component models submitted as part of the FluSight Ensemble have on average a low KL divergence, which indicates a high level of redundant/similar model forecasts (Fig 1C.).

Figure 1: (A.) An example of an equally weighted ensemble of two non-overlapping component models. The variance of the ensemble is greater than that of the component models because they have different expected values. (B.) Kullback-Leibler divergence between two component models (one model produced a normal density with expected value 3/4 and variance 1; the second model has a variance of 1 and an expected value that moves sequentially from 3/4 to 3/2 by 0.05) vs the variance of an equally weighted ensemble of the two models. (C.) Pairwise KL divergence for all 27 component models averaged over 11 locations and 4 ‘week ahead’ targets. Models corresponding to teams are labeled with text and a white box.

Overlap (i.e. redundancy) artificially reduces the variance of an ensemble forecast, producing a forecast that is too sharp. An ensemble where component models have little overlap should produce a forecast that is not too narrow. Redundancy also presents an issue if the modeler decides to estimate weights for an ensemble (Fig. 2). If there exist two models with a large overlap in predictive densities, then the log-likelihood surface is flat near the global optimum (i.e. the determinant of the Hessian at the global optimum is near zero) (Fig. 2A.-C.). When training ensemble weights with a standard optimizer, random restarts will show that the optimal weight vector has high variance. High variance in an optimal solution is characteristic of non-identifiability. Non-identifiability issues are present in FluSight component models (Fig. 2D.-F.).

Figure 2: (A.)-(C.) Log-likelihood surface (contour lines) for a linear pool that combines three component models under three levels of redundancy. The black $X$ marks the true weights. Red shading indicates a multivariate approximation to the estimated optimal weight assignment and variability around that estimate. (A.) Little redundancy between component models, (B.) moderate redundancy, and (C.) two component models that are identical. (D.)-(F.) From the FluSight challenge we chose three empirical examples (D. Season 2016/2017, Week 10, HHS Region 9, 1 week ahead forecast; E. Season 2014/2015, Week 16, HHS 8, 4 week ahead forecast; F. Season 2014/2015, Week 46, US, 1 week ahead forecast) of optimal weight assignments (white circles) using 100 random restarts for two component models ($w_{1}$ and $w_{2}$) and weights for the remaining 25 component models from FluSight.

Our proposed CAP-adaptive algorithm identifies component models whose predictions produce similar scores (log scores) and groups them into one cluster for further aggregation. This approach aims to enhance the diversity of predictions within the ensemble and potentially leads to more robust and accurate overall forecasting results.

### III.2 CAP framework

Below we present the Clustering, Aggregation, and Pooling steps for the CAP ensemble and the specific choices we made to compare a CAP to a non-CAP ensemble.

#### III.2.1 Clustering step

The goal of the clustering step is to partition the $C$ component model forecasts $f_{1},f_{2},\cdots,f_{C}$ into a collection $\Gamma=\{c_{1},c_{2},\cdots,c_{K}\}$ where the $i^{\text{th}}$ cluster, $c_{i}$, is defined as a set of component model forecasts. Each component model forecast may belong to only one cluster.
There are no other constraints on implementing the clustering step, and we want to stress that the reader/user should be free to choose their own algorithm for this step. Clustering algorithms may arise from case-by-case intuition, professional expertise, or data-driven results. A specific example of such an algorithm is below.

We chose a heuristic method to cluster component models. Component model $m$ is placed in cluster $c$ if this model has a linear correlation coefficient above a threshold value $\phi$ with all models in cluster $c$. If model $m$ does not satisfy the above criterion for any current cluster, then we generate a new cluster with the single model $m$ and continue the algorithm.

We use a data-driven approach to choose $\phi$. Let $t=1$ equal the first epidemic week in a season, $t=2$ the second week, and so on until $t=T$, where $T$ is the last epidemic week in the season. If $t=1$, set the threshold value $\phi$ equal to $1/2$. If $t>1$, then, given a value $\phi$, complete the Aggregate/Pool steps (see below for the options we chose) to generate ensemble forecasts over all regions and targets for which there exists ground truth percent ILI. Average the log scores for this choice of $\phi$. Select the value $\phi$ with the highest average log score.

#### III.2.2 Aggregating step

The goal of the aggregation step is to produce $K$ forecasts—called cluster forecasts—by combining the component models that belong to each cluster into a single predictive density over future values of ILI. The aggregation step is a choice of function $h$ that, for each cluster $c_{k}$, maps the component model predictive densities $(f_{(c_{k},1)},f_{(c_{k},2)},\cdots,f_{(c_{k},s_{k})})$ belonging to the cluster into a single predictive density $F_{c_{k}}$

$\displaystyle F_{c_{k}}=h(f_{(c_{k},1)},f_{(c_{k},2)},\cdots,f_{(c_{k},s_{k})})$ (8)

where $s_{k}$ equals the number of component models that belong to cluster $c_{k}$. Again, we stress that the reader/user is free to choose any aggregation algorithm. For our example CAP ensemble, the aggregation algorithm we propose—sometimes called a ‘follow the leader’ approach—computes the median log score assigned to all past forecasts for each component model that belongs to a cluster and then selects as the cluster forecast the component model with the highest median log score.

#### III.2.3 Pooling step

The goal of the pooling step is to assign to the $K$ cluster forecast predictive densities a vector of weights $\pi=[\pi_{1},\pi_{2},\cdots,\pi_{K}]^{\prime}$ that optimizes an objective function. We chose two approaches: (i) equal weights or (ii) an adaptive weighting scheme that is identical to the adaptive ensemble approach but applied to cluster, instead of component model, forecasts.

## IV Forecast Evaluation

Ensemble forecasts were evaluated using the logarithmic score, the probability integral transform (PIT) value, and the Brier score. The logarithmic score (log score) was used to evaluate forecasts [24, 25]. The logarithmic score is defined as the natural log of the predicted probability assigned to the eventually realized true ILI value. Given a model with probability density function $f(z)$ and a true ILI value $i^{*}$, the log score was defined as the log of the probability assigned to the discretized bin $[a,a+0.1\%)$ which contains the true wILI value,

$\text{log score}(f)=\log\int_{a}^{a+0.1\%}f(z)\,dz,$ (9)

where $0.1\%$ is the bin width.
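To fix ideas, here is a minimal sketch of the three CAP steps specified above—correlation-threshold clustering, ‘follow the leader’ aggregation by median historical log score, and an equally weighted pool. The inputs (`forecasts`, `past_logscores`, and the threshold `phi`) are synthetic placeholders; the authors' full implementation is linked in the Data Availability section.

```python
import numpy as np

def cap_ensemble(forecasts, past_logscores, phi):
    """forecasts: (C, B) probability vectors; past_logscores: (C, T) history."""
    C = forecasts.shape[0]
    clusters = []                      # each cluster is a list of model indices
    for m in range(C):
        placed = False
        for cluster in clusters:
            # Cluster step: join if correlated above phi with *all* members.
            if all(np.corrcoef(past_logscores[m], past_logscores[j])[0, 1] > phi
                   for j in cluster):
                cluster.append(m)
                placed = True
                break
        if not placed:
            clusters.append([m])
    # Aggregate step: within each cluster, follow the leader by median log score.
    leaders = [max(cluster, key=lambda j: np.median(past_logscores[j]))
               for cluster in clusters]
    # Pool step: equally weighted linear pool of the cluster forecasts.
    return forecasts[leaders].mean(axis=0)

rng = np.random.default_rng(1)
forecasts = rng.dirichlet(np.ones(131), size=27)     # synthetic component pmfs
past_logscores = rng.normal(-2.0, 0.5, size=(27, 30))
print(cap_ensemble(forecasts, past_logscores, phi=0.5).shape)   # (131,)
```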
The log score approaches negative infinity as the probability a forecast assigns to the truth approaches zero, and the maximum (best) log score is a value of zero. Log scores smaller than -10 were set to -10 to replace any extremely small log scores, as was common practice during the ILI FluSight challenges [11, 6].

The probability integral transform (PIT) value is defined as the probability that a forecaster assigns to values that are less than or equal to the ground truth $t$. Suppose a forecaster submits a predictive density $f$. Then the PIT value is computed as

$\displaystyle\text{PIT}(f,t)=\int_{-\infty}^{t}f(x)\;dx.$ (10)

If a forecaster is perfectly calibrated, then out of 100 ground truth values we would expect a PIT value less than or equal to $x$ for approximately $x\times 100$ of these ground truth values [25]. The PIT value can identify if a forecaster is too confident about the future, too unsure, or too often assigns values larger than the truth or vice versa [25]. If a forecaster is too sure about the future, then they will submit a narrow predictive density, which will cause PIT values to be frequently small and close to zero or large and close to one. If a forecaster is too unsure about the future, then they will submit a wide predictive density, causing PIT values to be frequently close to 0.50. If a forecaster frequently overestimates the truth, then they will submit a density with a median above the truth, which will cause PIT values to be frequently smaller than 0.50 and few PIT values to be larger than 0.50. If a forecaster frequently underestimates the truth, then we expect that the median will be smaller than the truth, which will cause PIT values to often be larger than 0.50.

The Brier score is defined as

$\displaystyle\text{BrS}(F,t,x)=\left[F(x)-\mathbbm{1}(t\leq x)\right]^{2}$ (11)

where $F$ is the predictive cumulative distribution function, $t$ the true percent ILI, and $x$ a threshold value [26]. We computed the Brier score over threshold values from 0 percent ILI to 10 percent ILI in steps of size 0.1 percent. Note that integrating the Brier score over all possible threshold values equals the continuous ranked probability score (CRPS) [27].

## V Results

### V.1 Component model performance and correlation over an influenza season

Component model performance as measured by log score oscillates from the beginning of the 2010/2011 season until the end of the 2018/2019 season (Fig. 3A.). Component model log scores are highest (best) during the beginning and end of a season and worst during the middle of the season, close to peak ILI (Fig. 3B.). The observation that log scores similarly decrease during a peak suggests that many component models may provide similar, redundant forecasts (see Supplemental Fig. 9 for pairwise correlations between component models at four points during the 2017/2018 season).

Figure 3: (A.) Logarithmic scores (log scores) for two-week ahead forecasts (‘nowcasts’) of percent ILI at the US national level produced by all 27 component models from the 2010/2011 season until the 2018/2019 season. (B.) Log scores for US national nowcasts generated by component models in the 2017/2018 season, and in black the US national estimate of percent ILI. Component model log scores show a characteristic pattern throughout a season, suggesting that component model structures may be redundant.
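For binned forecasts, the three evaluation metrics of Sec. IV can be computed directly from a 131-bin probability vector. The sketch below is one possible implementation, with the bin grid, the clipping at -10, and the cutpoint grid matching the conventions stated in the text; the helper names are ours.

```python
import numpy as np

BIN_EDGES = np.concatenate([np.linspace(0.0, 13.0, 131), [100.0]])  # 131 bins

def truth_bin(truth):
    """Index of the ILI bin containing the ground truth."""
    return min(np.searchsorted(BIN_EDGES, truth, side="right") - 1, 130)

def log_score(pmf, truth):
    """Natural log of the probability in the truth's bin, clipped at -10."""
    p = pmf[truth_bin(truth)]
    return max(np.log(p), -10.0) if p > 0 else -10.0

def pit(pmf, truth):
    """Cumulative probability assigned to values <= truth (Eq. (10), binned)."""
    return float(np.sum(pmf[: truth_bin(truth) + 1]))

def brier(pmf, truth, cutpoints=np.linspace(0.0, 10.0, 101)):
    """Brier scores of Eq. (11) over a grid of ILI cutpoints."""
    cdf = np.cumsum(pmf)
    F = np.interp(cutpoints, BIN_EDGES[1:], cdf)   # forecast CDF at each cutpoint
    return (F - (truth <= cutpoints)) ** 2
```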
### V.2 Clustering component models by log scores

The CAP ensemble uses past log scores to group together component models with similar predictive densities over 1, 2, 3, and 4 week ahead future ILI (Fig. 4). The choice to aggregate a cluster by choosing the component model with the highest median log score appears, for most clusters, representative of the other component model forecasts belonging to that cluster (dashed line in Fig. 4). An extra advantage of clustering by log score is that component models within one cluster appear to have similar structural properties (Fig. 4 bottom).

Figure 4: (Top) Component model cumulative density functions submitted as 3-week-ahead forecasts for HHS region 1, MW114, grouped into seven clusters based on the log score. The top left panel shows, for each cluster, the mean CDF value and 95% CI. Dashed CDF lines correspond to the models that were chosen (based on the highest median log score within a cluster) to be combined into a final CAP ensemble forecast. (Bottom) Rows correspond to clusters and columns correspond to component models. Black vertical lines group together component models by research group. If a cell is filled with orange/green in row $r$ and column $c$, then component model $c$ belongs to cluster $r$. Orange cells indicate a component model that is statistical and green indicates a mechanistic model.

### V.3 Comparison of performance between CAP and non-CAP ensembles

The CAP algorithm—compared to a non-CAP approach—tends to improve calibration and has similar accuracy (Fig. 5). Compared to a non-CAP algorithm, applying the CAP algorithm to an equally weighted and an adaptive ensemble shows a similar distribution of log scores (Fig. 5A.). The distributions of PIT values and Brier scores show that the CAP algorithm produced less biased forecasts (i.e. it overestimates ILI less) compared to non-CAP (Fig. 5B. and C.). The average area under the curve between the CDF of PIT values and the identity line (smaller values are better) was, for an equally weighted ensemble, $0.25$ for CAP and $0.30$ for non-CAP ($\sim$17% improvement), and for an adaptive weighted ensemble, $0.24$ for CAP and $0.26$ for non-CAP ($\sim$8% improvement). The average integral over Brier scores from an ILI value of zero to 10 percent (smaller is better) is, for an equally weighted ensemble, $0.66$ for CAP and $0.69$ for non-CAP, and for an adaptive weighted ensemble, $0.61$ for CAP and $0.60$ for non-CAP. Applying CAP tends to give similar accuracy at peak ILI, worse accuracy at the beginning of the season, and improved accuracy after the peak (Fig. 5D.). The CAP algorithm—compared to a non-CAP approach and aggregated over equal and adaptive weighting—improves calibration (i.e. PIT) by $\sim$10% and has a close to zero difference in log score and Brier score.

Figure 5: The CAP algorithm (orange) outperforms a non-CAP algorithm (blue) for an equally weighted (dashed line) and an adaptive (solid line) weighted ensemble. Performance metrics are aggregated across regions and for the two-week ahead ”Nowcast” prediction. (A.) The median, 10th and 90th, 25th and 75th percentiles of log scores. (B.) Cumulative density function of probability integral transform (PIT) values. (C.) Brier scores computed by partitioning forecasts above and below a sequence of ILI ”cutpoints” from 0 to 10. (D.) Log score is presented on the vertical axis against the number of weeks before and after the peak ILI.
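The calibration summary quoted above—the area between the empirical CDF of PIT values and the identity line—might be computed as in this short sketch; perfectly calibrated forecasts give uniform PIT values and an area near zero, while the biased example mimics a forecaster that systematically overestimates the truth.

```python
import numpy as np

def pit_calibration_area(pit_values, grid=np.linspace(0.0, 1.0, 201)):
    """Mean absolute gap between the empirical CDF of PIT values and the
    identity line, approximating the area described in the text."""
    pit_values = np.sort(np.asarray(pit_values, dtype=float))
    ecdf = np.searchsorted(pit_values, grid, side="right") / len(pit_values)
    return float(np.mean(np.abs(ecdf - grid)))

rng = np.random.default_rng(2)
print(pit_calibration_area(rng.uniform(size=500)))            # well calibrated: near 0
print(pit_calibration_area(rng.uniform(0.0, 0.5, size=500)))  # biased: near 0.25
```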
## VI Number of clusters and distribution of weights associated with the CAP algorithm

For the proposed CAP-adaptive ensemble algorithm, the number of clusters is on average 23 at the beginning of the influenza season (20 weeks or more before the peak), 8 during peak ILI (within one week of the peak), and 7 by the end of the season (20 weeks or more after the peak) (Fig. 6 blue). The percent entropy is on average 100% at the beginning of the season, 83% at the peak, and 92% by the end of the season (Fig. 6 orange).

Figure 6: The number of clusters and the percent entropy (defined as the entropy of the weights assigned to clusters divided by the maximum entropy) for the CAP-adaptive ensemble aggregated over all locations and week-ahead targets.

The large number of clusters before the peak and the small number after the peak reflect the difficulty of forecasting before peak ILI. The decrease in percent entropy before and at the peak suggests that the CAP algorithm presented is learning which models perform best. However, the increase in percent entropy after the peak could be because of varied performance during peak ILI.

## VII Discussion

A Cluster-Aggregate-Pool (CAP) approach has the potential to improve forecasting performance of 1–4 week ahead forecasts of ILI compared to an equally weighted, static, or adaptive ensemble approach. In particular, a CAP approach was observed to be better calibrated when compared to a non-CAP approach.

We feel that a major advantage of a CAP approach is the potential to reduce redundancy in component models. Redundant component models can cause issues of non-identifiability, which may produce a smaller (than expected) linear pool variance and make it difficult, or impossible, to learn component model weights. By clustering component models and selecting from each cluster a single representative component model, the CAP approach is able to reduce redundancy between component models.

A ‘built-in’ advantage of applying a CAP approach is the ability to handle missing forecasts. In real-time forecasting efforts, a schedule is typically assigned for when to submit component model forecasts. Forecasts may not be submitted successfully because a change in the data breaks a component model algorithm, the modeling team misses the deadline due to other priorities, the generated forecast does not match intuition, etc. Past ensemble approaches accounted for missing forecasts in an ad hoc fashion, using methods that were separate from the ensemble approach [10, 22]. Because a CAP approach combines cluster forecasts and not component model forecasts, there is a small probability that a cluster forecast is missing. A cluster forecast is missing only if every component forecast that is included in the cluster is missing.

Another advantage of CAP is the ability to more easily include public health officials in the ensemble procedure. For past ensemble models, public health officials could participate by assigning a priori weights to component models. However, assigning weights may not be as intuitive as grouping together component models. The CAP approach allows public health officials (PHOs) to engage in ensemble building by grouping component models if there exists an important reason to do so. Clustering does not need to be performance-based but could instead be based upon how the PHO wants to communicate information.
For example, the PHO may decide that all models associated with a research group will be clustered together or that all models of a specific type (statistical, mechanistic, etc.) will be grouped together. Forecasts from each cluster and an ensemble forecast can be presented to the PHO group, who then disseminates this information to the public.

In this work we chose a specific implementation of CAP; however, the CAP approach is broad and can include many methods for combining component models, aggregating models, and pooling. In future work we wish to explore several implementations of CAP to determine if there exists an optimal combination of these three steps. We also wish to explore training a CAP ensemble on forecasts that are formatted as a sequence of quantiles instead of a discretized probability mass function. The quantile format has become the more popular option for formatting forecasts, and the CAP ensemble approach should be expanded to combine forecasts in this format.

Our specific CAP implementation and the above results present limitations. The cluster and aggregate steps for our ensemble use historical log scores for component models. Here, we assume that a high correlation between the log scores of two component models implies that these models must share similar modeling structure and predictive density functions. This may not always be the case. In addition, readers may expect that clustering should partition component models based on structural attributes (such as phenomenological vs mechanistic, the team producing the model, etc.), but this is not always the case (Fig. 4). Only two weighting procedures were evaluated: an equal and an adaptive weighting procedure. Other weighting schemes should be evaluated. Another limitation is that this procedure was evaluated on the older discretized probability format for forecasts. The CAP procedure still, as a last step, pools cluster forecasts, and so assumes that cluster forecast performance should be consistent over the season. This is a limitation because cluster performance may not be consistent over time, just as component model performance can be inconsistent.

## VIII Acknowledgements

We wish to thank the CMU Delphi group for providing easy, programmatic access to ILI data (https://cmu-delphi.github.io/delphi-epidata/). We wish to thank Nicholas G. Reich for running the FluSight Network project which allowed for a rich dataset of forecasts (https://github.com/FluSightNetwork/cdc-flusight-ensemble).

## IX Data Availability

ILI data used for this manuscript can be located at https://cmu-delphi.github.io/delphi-epidata/api/flusurv.html. A script titled “download_epidata.py” at https://github.com/computationalUncertaintyLab/CombineAggregatePool/ can be used to download all ILI data. Forecast output for all component models can be found at https://github.com/FluSightNetwork/cdc-flusight-ensemble/tree/master/model-forecasts/component-models and a script titled ‘combineFSNForecastsTogether.py’ at https://github.com/computationalUncertaintyLab/CombineAggregatePool/ can be used to combine all forecasts into a single dataset. Code for the algorithm and figures can be found at https://github.com/computationalUncertaintyLab/CombineAggregatePool/.

## References

* [1] Disease burden of flu. https://www.cdc.gov/flu/about/burden/index.html. Accessed: 2022-05-18.
* [2] Melissa A Rolfes, Ivo M Foppa, Shikha Garg, Brendan Flannery, Lynnette Brammer, James A Singleton, Erin Burns, Daniel Jernigan, Sonja J Olsen, Joseph Bresee, et al. Annual estimates of the burden of seasonal influenza in the United States: a tool for strengthening influenza surveillance and preparedness. Influenza and Other Respiratory Viruses, 12(1):132–137, 2018.
* [3] Karen K. Wong, Seema Jain, Lenee Blanton, Rosaline Dhara, Lynnette Brammer, Alicia M. Fry, and Lyn Finelli. Influenza-associated pediatric deaths in the United States, 2004–2012. Pediatrics, 132(5):796–804, 2013.
* [4] Noelle-Angelique M Molinari, Ismael R Ortega-Sanchez, Mark L Messonnier, William W Thompson, Pascale M Wortley, Eric Weintraub, and Carolyn B Bridges. The annual impact of seasonal influenza in the US: measuring disease burden and costs. Vaccine, 25(27):5086–5096, 2007.
* [5] Wayan CWS Putri, David J Muscatello, Melissa S Stockwell, and Anthony T Newall. Economic burden of seasonal influenza in the United States. Vaccine, 36(27):3960–3966, 2018.
* [6] Nicholas G Reich, Craig J McGowan, Teresa K Yamana, Abhinav Tushar, Evan L Ray, Dave Osthus, Sasikiran Kandula, Logan C Brooks, Willow Crawford-Crudell, Graham Casey Gibson, et al. A collaborative multi-model ensemble for real-time influenza season forecasting in the US. bioRxiv, page 566604, 2019.
* [7] Chelsea S Lutz, Mimi P Huynh, Monica Schroeder, Sophia Anyatonwu, F Scott Dahlgren, Gregory Danyluk, Danielle Fernandez, Sharon K Greene, Nodar Kipshidze, Leann Liu, et al. Applying infectious disease forecasting to public health: a path forward using influenza forecasting examples. BMC Public Health, 19(1):1–12, 2019.
* [8] U.S. influenza surveillance: Purpose and methods. https://www.cdc.gov/flu/weekly/overview.htm. Accessed: 2022-08-19.
* [9] Noreen Qualls, Alexandra Levitt, Neha Kanade, Narue Wright-Jegede, Stephanie Dopson, Matthew Biggerstaff, Carrie Reed, Amra Uzicanin, CDC Community Mitigation Guidelines Work Group, et al. Community mitigation guidelines to prevent pandemic influenza—United States, 2017. MMWR Recommendations and Reports, 66(1):1, 2017.
* [10] Nicholas G Reich, Craig J McGowan, Teresa K Yamana, Abhinav Tushar, Evan L Ray, Dave Osthus, Sasikiran Kandula, Logan C Brooks, Willow Crawford-Crudell, Graham Casey Gibson, et al. Accuracy of real-time multi-model ensemble forecasts for seasonal influenza in the US. PLoS Computational Biology, 15(11):e1007486, 2019.
* [11] Craig J McGowan, Matthew Biggerstaff, Michael Johansson, Karyn M Apfeldorf, Michal Ben-Nun, Logan Brooks, Matteo Convertino, Madhav Erraguntla, David C Farrow, John Freeze, et al. Collaborative efforts to forecast seasonal influenza in the United States, 2015–2016. Scientific Reports, 9(1):1–13, 2019.
* [12] Dave Osthus. Fast and accurate influenza forecasting in the United States with Inferno. PLOS Computational Biology, 18(1):e1008651, 2022.
* [13] Matthew Biggerstaff, Rachel B Slayton, Michael A Johansson, and Jay C Butler. Improving pandemic response: Employing mathematical modeling to confront coronavirus disease 2019. Clinical Infectious Diseases, 2021.
* [14] Jeffrey Shaman and Alicia Karspeck. Forecasting seasonal outbreaks of influenza. Proceedings of the National Academy of Sciences, 109(50):20425–20430, 2012.
* [15] Pushpendra Singh and Anubha Gupta. Generalized SIR (GSIR) epidemic model: An improved framework for the predictive monitoring of the COVID-19 pandemic. ISA Transactions, 124:31–40, 2022.
* [16] Logan C.
Brooks, David C. Farrow, Sangwon Hyun, Ryan J. Tibshirani, and Roni Rosenfeld. Flexible modeling of epidemics with an empirical Bayes framework. PLOS Computational Biology, 11:1–18, 2015.
* [17] Sen Pei and Jeffrey Shaman. Aggregating forecasts of multiple respiratory pathogens supports more accurate forecasting of influenza-like illness. PLoS Computational Biology, 16(10):e1008301, 2020.
* [18] Lijing Wang, Jiangzhuo Chen, and Madhav Marathe. DEFSI: Deep learning based epidemic forecasting with synthetic information. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 9607–9612, 2019.
* [19] Teresa K Yamana, Sasikiran Kandula, and Jeffrey Shaman. Individual versus superensemble forecasts of seasonal influenza outbreaks in the United States. PLoS Computational Biology, 13(11):e1005801, 2017.
* [20] Evan L Ray and Nicholas G Reich. Prediction of infectious disease epidemics via weighted density ensembles. PLoS Computational Biology, 14(2):e1005910, 2018.
* [21] Evan L Ray, Nutcha Wattanachit, Jarad Niemi, Abdul Hannan Kanji, Katie House, Estee Y Cramer, Johannes Bracher, Andrew Zheng, Teresa K Yamana, Xinyue Xiong, et al. Ensemble forecasts of coronavirus disease 2019 (COVID-19) in the US. medRxiv, 2020.
* [22] Thomas McAndrew and Nicholas G Reich. Adaptively stacking ensembles for influenza forecasting. Statistics in Medicine, 40(30):6931–6952, 2021.
* [23] Robert V Hogg, Joseph W McKean, and Allen T Craig. Introduction to Mathematical Statistics. Pearson, 2019.
* [24] Robert L Winkler and Allan H Murphy. “Good” probability assessors. Journal of Applied Meteorology and Climatology, 7(5):751–758, 1968.
* [25] Tilmann Gneiting and Adrian E Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007.
* [26] Glenn W Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1–3, 1950.
* [27] Tilmann Gneiting, Fadoua Balabdaoui, and Adrian E Raftery. Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society Series B: Statistical Methodology, 69(2):243–268, 2007.
* [28] FluSightNetwork GitHub repository. https://github.com/FluSightNetwork/cdc-flusight-ensemble. Accessed: 2022-05-18.

## X Supplemental

## XI Influenza-like illness by season

Figure 7: Percent influenza-like illness (ILI) values for HHS regions 1-10 (blue) and a US national estimate (black) from the 2010/2011 season to the 2018/2019 season. ILI values across regions follow a similar pattern within one season. Across seasons, ILI follows different trajectories.

## XII Component model forecasts

We organize the forecasts of the 27 component models for all HHS regions, all forecast targets, and all epidemic weeks as

$\displaystyle\mathcal{F}=\begin{bmatrix}\text{Region}&\text{Target}&\text{Component Model ID}&\text{EW}&\text{bin 1}&\cdots&\text{bin 131}\\ \text{HHS1}&1&1&201040&\text{XXX}&\cdots&\text{XXX}\\ \vdots&\vdots&\vdots&\vdots&\vdots&&\vdots\\ \text{HHS10}&4&27&201820&\text{XXX}&\cdots&\text{XXX}\\ \text{Nat}&1&1&201040&\text{XXX}&\cdots&\text{XXX}\\ \vdots&\vdots&\vdots&\vdots&\vdots&&\vdots\\ \text{Nat}&4&27&201920&\text{XXX}&\cdots&\text{XXX}\end{bmatrix}$ (12)

Each row of $\mathcal{F}$ is a discretized forecast distribution of a component model for a specified target and region. The 131 ILI bins are ordered in ascending order; each contains the probability of the forecasted ILI value made by the component model.
For example, at epidemic week 201040, component model id 0 (CUBMA) generates a discretized distribution (row 1, bin 1 $\sim$ bin 131) of the predicted wILI value for epidemic week 201041 in region HHS1.

Figure 8: (A.) Component model forecasts one through four weeks ahead at epidemic week 51 in HHS 3 for the 2017/2018 season, represented as a median (solid line) plus 25th and 75th quantiles (shaded region). (B.) A single component model forecast one through four weeks ahead at three different epidemic weeks. The solid line represents ground truth ILI data that is observed by all models. The dotted line represents ground truth ILI that models have not yet observed.

Component model forecasts are open source and available at [28] in the folder ./model-forecasts/component-models/.

## XIII Correlation in log score over time

Figure 9: Pairwise linear correlation for 2 week ahead forecasts of ILI among all 27 component models at four different epidemic weeks: (A.) 201743, (B.) 201801, (C.) 201811, (D.) 201820. (E.) ILI for the 2017/2018 season.

## XIV Descriptions of the twenty-seven component models

Model ID | Component Model | Research Team | Submission Type
---|---|---|---
1 | BMA | CU | real-time
2 | EAKFC-SEIRS | CU | real-time
3 | EAKFC-SIRS | CU | real-time
4 | EKF-SEIRS | CU | real-time
5 | EKF-SIRS | CU | real-time
6 | RHF-SEIRS | CU | real-time
7 | RHF-SIRS | CU | real-time
8 | Basis Regression | Delphi | real-time
9 | Empirical Futures | Delphi | real-time
10 | Empirical Trajectories | Delphi | real-time
11 | Delta Density | Delphi | real-time
12 | Markovian Delta Density | Delphi | real-time
13 | Uniform Distribution | Delphi | real-time
14 | Mechanistic GLEAM Ensemble | FluOutlook | retrospective
15 | Augmented Mechanistic GLEAM Ensemble | FluOutlook | retrospective
16 | Auto Regressive model with Likelihood Ratio based Model Selection | FluX | retrospective
17 | Long Short-Term Memory (LSTM) based deep learning | FluX | retrospective
18 | Dynamic Bayesian Model plus | LANL | real-time
19 | Ensemble of dynamic harmonic model and historical averages | Protea | real-time
20 | Subtype weighted historical average model | Protea | real-time
21 | Dynamic Harmonic Model with ARIMA errors | Protea | real-time
22 | Kernel Conditional Density Estimation | ReichLab | real-time
23 | Kernel Density Estimation | ReichLab | real-time
24 | Kernel Conditional Density Estimation with post-hoc backfill adjustment | ReichLab | retrospective
25 | SARIMA model without seasonal differencing | ReichLab | real-time
26 | SARIMA model with seasonal differencing | ReichLab | real-time
27 | Epidemic Cosine with Variational Data Assimilation | UA | retrospective

Table 1: Brief description of the twenty-seven component models.
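For readers wishing to reproduce this layout, one way to hold the forecast data of Eq. (12) in memory is a flat table with one row per (region, target, model, epidemic week) and 131 bin columns. The sketch below builds a small synthetic example with pandas; the column names are illustrative and not necessarily the repository's exact schema.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
rows = []
for region in ["HHS1", "Nat"]:
    for target in [1, 4]:                      # 1- and 4-week-ahead targets
        for model_id in [1, 27]:
            pmf = rng.dirichlet(np.ones(131))  # one discretized forecast
            rows.append({"region": region, "target": target,
                         "model_id": model_id, "ew": 201040,
                         **{f"bin_{b}": p for b, p in enumerate(pmf, start=1)}})

F = pd.DataFrame(rows)                         # one row per forecast distribution
print(F.filter(like="bin_").sum(axis=1).round(6).unique())   # each row sums to 1
```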
# Steering undulatory micro-swimmers in a fluid flow through reinforcement learning

Zakarya El Khiyati Université Côte d’Azur, Inria, CNRS, Sophia-Antipolis, France Raphaël Chesneaux Ecole Nationale Supérieure des Mines de Paris, PSL University, CNRS, Cemef, Sophia-Antipolis, France Laëtitia Giraldi Université Côte d’Azur, Inria, CNRS, Sophia-Antipolis, France Jérémie Bec Université Côte d’Azur, Inria, CNRS, Sophia-Antipolis, France Ecole Nationale Supérieure des Mines de Paris, PSL University, CNRS, Cemef, Sophia-Antipolis, France

###### Abstract

This work aims at finding optimal navigation policies for thin, deformable microswimmers that progress in a viscous fluid by propagating a sinusoidal undulation along their slender body. These active filaments are embedded in a prescribed, non-homogeneous flow, in which their swimming undulations have to compete with the drifts, strains, and deformations inflicted by the outer velocity field. Such an intricate situation, where swimming and navigation are tightly coupled, is addressed using various methods of reinforcement learning. Each swimmer has access only to restricted information on its configuration and must accordingly select an action among a limited set. The optimisation problem then consists in finding the policy leading to the most efficient displacement in a given direction. It is found that usual methods do not converge, and this pitfall is interpreted as a combined consequence of the non-Markovianity of the decision process and the highly chaotic nature of the dynamics, which is responsible for high variability in learning efficiencies. Still, we provide an alternative method to construct efficient policies, which is based on running several independent realisations of $Q$-learning. This allows the construction of a set of admissible policies whose properties can be studied in detail and compared to assess their efficiency and robustness.

## I Introduction

A number of microorganisms, including bacteria and plankton, are natural examples of active, self-propelled particles. They often inspire the design of artificial devices used for industrial micro-manufacturing, toxic waste disposal, targeted drug delivery and localised medical diagnostics Wu _et al._ (2020). Recent technological developments in the use of micro-swimmers in medicine open new frontiers, such as microscopic-scale surgery directly inside the human body and the delivery of drugs precisely where they will be most effective. Much work has been devoted to designing adequate nano-robots and studying the way they can be propelled and controlled using an external magnetic field Servant _et al._ (2015), in particular for in-vivo conditions. Still, many questions remain open on how to optimise the displacement of these micro-swimmers, and in particular whether their behaviour is altered when they are embedded in complex flows comprising obstacles, walls, or having non-Newtonian properties. This is particularly important to design new strategies that will allow artificial swimmers to reach regions of the human body that are inaccessible today.

Studying and optimising the movement of swimmers and micro-swimmers is generally addressed in two successive steps. The first is to find an appropriate swimming strategy by choosing the composition, shape, or deformation that will lead to efficient locomotion.
The second step is to define a navigation strategy that takes into account obstacles, fluctuations in the surrounding flow, and its geometry, with the aim of minimising the time needed or the energy used to reach a specific target. Studying swimming strategies at the microscopic level requires advanced tools to describe fluid-structure interactions Berti _et al._ (2020); Alouges _et al._ (2013), to take a non-Newtonian rheology of the surrounding fluid into account Shen and Arratia (2011), and to model the hydrodynamic stresses due to the vicinity of walls Daddi-Moussa-Ider _et al._ (2021). Finding an effective strategy then relies on biomimetics Borazjani and Sotiropoulos (2009); Cohen and Boyle (2010) or on solving costly problems of optimal control Alouges _et al._ (2019). As a matter of fact, such swimming issues are most of the time addressed in situations where the surrounding flow is at rest. This is justified by the complexity and the computational costs that would be required to accurately model the intricate fluid-structure interactions occurring in a chaotic or turbulent medium.

Regarding navigation problems, there is an increasing interest in considering complicated carrier flows (see Reddy _et al._ (2022) for a recent review). The swimming mechanisms are often oversimplified and one rather focuses on how to adjust macroscopic active features of the swimmers in order to optimise their long-term displacement. Under such conditions, the use of machine learning techniques has proved effective Cichos _et al._ (2020). Reinforcement learning has for instance been used to address navigation in a turbulent flow and to construct strategies that allow swimmers to find optimal paths to their targets in such a chaotic and fluctuating environment Reddy _et al._ (2016); Colabrese _et al._ (2017); Gustavsson _et al._ (2017); Schneider and Stark (2019); Muiños-Landin _et al._ (2021); Qiu _et al._ (2022); Jaya Kumar A. _et al._ (2020). Navigation problems have also been studied from different perspectives, such as finding new paths in the presence of obstacles that can be modelled as potential barriers Schneider and Stark (2019). As to approaches that use deep reinforcement learning, they have demonstrated success in various applications, such as terrain-adaptive dynamic locomotion Peng _et al._ (2016) or real-world manipulation tasks Levine _et al._ (2016).

Here we want to address the situation where locomotion and navigation are tightly dependent on each other. Our goal is to show the feasibility of using machine learning approaches for a mesoscopic model of a swimmer, and in particular to understand if such approaches are able, not only to make the swimmer move, but also to have it at the same time navigate a complex environment. The swimmers are assumed to be simple, deformable, inextensible thin filaments whose interactions with the fluid are explicitly described by the slender-body theory. Among the different types of swimming, we have chosen wave locomotion, which is a self-propulsion strategy that relies on the generation and propagation of an undulation along the swimmer Pironneau and Katz (1974). This is a relatively simple, but remarkably robust technique that builds on the interactions between the swimmer and the fluid and appears in a variety of swimming strategies observed in nature.
We consider the problem where such swimmers aim at moving as fast as possible in a given direction, being at the same time embedded in a space-periodic, time-stationary, incompressible fluid flow that produces headwinds and deformations hindering their mobility. We find that in such settings, the undulatory swimmers progress efficiently only if they follow a policy that prescribes different actions to be performed depending on their configuration. We focus on a simple, paradigmatic case: The actions and observations of the environment by the swimmer are both chosen from discrete sets that consist, respectively, of swimming either horizontally or vertically with different amplitudes and having sparse information on its orientation and the local direction of the flow. We look for optimal policies for this partially-observable Markov decision process, by applying and comparing various algorithms of reinforcement learning, ranging from the popular $Q$-learning technique to approximation methods (differential SARSA and Actor-Critic). We find that these approaches do not provide satisfactory results: Either they do not converge, or if they do so, they require prohibitively long times. We propose an alternative method that can be seen as belonging to the class of competitive $Q$-learning approaches. It builds on the observation that, because of the highly chaotic character of the dynamics, individual realisations of simple, deterministic $Q$-learning are able to identify, quite quickly, a diversity of policies that lead to a reasonable displacement of the swimmer. The analysis of these admissible strategies can then be easily refined and systematised in order to rank them and select the most efficient ones. The advantage of this method is that it provides a short list of policies whose robustness can be tested and compared by varying the problem setting, for instance, the physical attributes of the swimmer (length, elasticity) or the properties of the surrounding flow.

The paper is organised as follows. Section II introduces the swimmer model and reports results on how the efficiency of its locomotion depends on its physical properties. In Section III, we describe the outer flow and formulate the navigation problem in terms of discrete observations and actions. We also show that a policy is needed for the swimmer’s displacement and introduce a naive strategy that allows it. Section IV is dedicated to a detailed comparison of various reinforcement learning techniques, leading to the introduction of the competitive $Q$-learning approach described above. Results on the performance and robustness of the short-listed policies are reported in Section V, including trials performed in unsteady flows that solve the Navier–Stokes equation. Finally, Section VI gathers concluding remarks and perspectives.

## II A model of undulatory threadlike swimmer

### II.1 Dynamics of deformable slender bodies

We consider elongated, flexible, inextensible swimmers. We moreover assume that they are very thin, meaning that their cross-section diameter $d$ is much smaller than their length $\ell$. This leads us to describe their interactions with the surrounding viscous fluid in terms of the slender-body theory Lindner and Shelley (2015). The swimmers are embedded in an incompressible flow whose velocity field is denoted by $\bm{u}(\boldsymbol{x},t)$. We neglect the swimmers’ feedback on this prescribed flow, which is justified in the limit when swimmers are very thin and dilute.
The conformation of an individual swimmer at time $t$ is given by a curve ${\boldsymbol{X}}(s,t)$ parametrised by its arc-length $s\in[0,\ell]$. We neglect the swimmer’s inertia, so that its dynamics is given by equating to 0 the sum of the forces that act on it, namely

$\displaystyle-\zeta\,\mathbb{R}\left[\partial_{t}{\boldsymbol{X}}-\bm{u}({\boldsymbol{X}},t)\right]+\partial_{s}(T\,\partial_{s}{\boldsymbol{X}})-K\,\partial_{s}^{4}{\boldsymbol{X}}+\bm{f}(s,t)=0.$ (1)

This equation of motion, which corresponds to the over-damped Cosserat equation, is the same as that obtained by resistive force theory to describe bio-filaments Moreau _et al._ (2018). The first term on the left-hand side involves the drag coefficient $\zeta=8\pi\mu/[2\log(\ell/d)-1]$ (with $\mu$ the fluid dynamic viscosity) and the local Oseen resistance tensor $\mathbb{R}=\mathbb{1}-(1/2)\,\partial_{s}{\boldsymbol{X}}\,\partial_{s}{\boldsymbol{X}}^{\mathsf{T}}$. This expression of the force exerted by the fluid assumes that, despite an arbitrary length, the fibre’s thickness is so small that its perturbation on the flow has a vanishingly small Reynolds number, whence a linear but anisotropic drag. The second force appearing in Eq. (1) is the tension. Its amplitude $T$ results from the inextensibility constraint $|\partial_{s}{\boldsymbol{X}}(s,t)|=1$, valid at all times $t$ and all positions $s$ along the swimmer. The third term is the bending elasticity force and depends on the swimmer’s flexural rigidity $K$ (the product of Young’s modulus and the area moment of inertia). The last term, denoted by $\bm{f}$, is a prescribed internal force that accounts for the active behaviour of the swimmer responsible for its locomotion. Equation (1) is associated with the free-end boundary conditions $\partial_{s}^{2}{\boldsymbol{X}}(s,t)=0$ and $\partial_{s}^{3}{\boldsymbol{X}}(s,t)=0$ at the swimmer’s extremities $s=0$ and $\ell$. The tension itself satisfies a second-order differential equation obtained by imposing $\partial_{t}|\partial_{s}{\boldsymbol{X}}|^{2}=0$ with the boundary conditions $T(s,t)=0$ at $s=0$ and $\ell$.

In the absence of active force ($\bm{f}=0$), the swimmer is just a passive, flexible but inextensible fibre, whose dynamics depends on two non-dimensional parameters. One is given by the ratio $\ell/L$ between the fibre’s length $\ell$ and the characteristic spatial scale $L$ of the fluid flow. It characterises to what extent the fibre samples the fluid flow length scales and governs geometrical interactions with surrounding structures and eddies Picardo _et al._ (2018); Rosti _et al._ (2018). The other parameter is $(U\zeta/KL)^{1/4}\ell$, where $U$ is a typical magnitude of the fluid velocity. It measures the fibre’s flexibility and in particular its likelihood to be bent or buckled by the flow Young and Shelley (2007); Brouzet _et al._ (2014); Allende _et al._ (2018). The larger it is, the more deformable the fibre is when subject to shear or compression.

### II.2 The undulatory swimming procedure

We focus on swimmers that move by propagating a sinusoidal plane wave along their body. This undulation is assumed to be applied through the active body force $\bm{f}$ appearing in the dynamical equation (1). The swimmers are thus assumed to have the ability to adapt their curvature along their body, as in the case of nematodes Gray and Lissmann (1964); Berri _et al._ (2009).
### II.2 The undulatory swimming procedure

We focus on swimmers that move by propagating a sinusoidal plane wave along their body. This undulation is assumed to be applied through the active body force $\bm{f}$ appearing in the dynamical equation (1). The swimmers are thus assumed to have the ability to adapt their curvature along their body, as in the case of nematodes Gray and Lissmann (1964); Berri _et al._ (2009). Such settings are somewhat different from the beating of cilia or flagella, for which a time-periodic boundary condition is instead imposed on a flexible beating appendage, as in the case of sperm cells Friedrich _et al._ (2010); Jikeli _et al._ (2015). We choose here to write the active force as

$\bm{f}(s,t)=A\,\zeta\,\nu\,\ell\,\cos(2\pi\,k\,s/\ell-\nu\,t)\,\bm{p}$ (2)

where $\bm{p}$ is a unit vector in a direction orthogonal to that in which the swimmer is expected to move. The wave has frequency $\nu$ and wavenumber $2\pi k/\ell$, where $k$ is an integer. To ensure self-propulsion, we impose that the force $\bm{f}$ is not a global source of momentum for the swimmer, namely that $\int\bm{f}\,\mathrm{d}s=0$, which justifies why the wavenumber has to be chosen as a multiple of $2\pi/\ell$. The strength of the active force is controlled by the dimensionless amplitude $A$. The resulting swimming speed in the $\bm{p}^{\perp}$ direction, hereafter denoted by $V_{\rm swim}$, depends non-trivially on the forcing parameters and on the physical properties of the swimmer. To our knowledge, there is at present no analytic expression for $V_{\rm swim}$, even in the absence of an external fluid flow ($\bm{u}=0$). This can be explained by the intricate role played by inextensibility and tension, together with the imposed free-end boundary conditions, which prevent one from obtaining an explicit solution for the fibre conformation ${\boldsymbol{X}}$ under this forcing. Still, when rescaling spatial scales by the swimmer's length $\ell$ and time scales by the wave frequency $\nu^{-1}$, one finds that $V_{\rm swim}=\ell\nu\,\Psi_{k}(A,\mathcal{F})$, where $\mathcal{F}=(\zeta\nu/K)^{1/4}\ell$ is a non-dimensional measure of the swimmer's flexibility under the action of the active force and the $\Psi_{k}$'s are non-dimensional functions indexed by the wavenumber $k$. To obtain their behaviour, we resort to numerics.

To set our physical parameters and better understand how the swimmers respond to activation, we have performed numerical simulations of the over-damped Cosserat equation (1) for isolated fibres in a fluid at rest. We use the second-order, centred finite-difference scheme of Tornberg and Shelley (2004) with $N=201$ to $801$ grid-points along the fibre's arc-length. The inextensibility constraint is enforced by a penalisation method. Time marching uses a second-order semi-implicit Adams–Bashforth method with a time step ranging from $\delta t=10^{-3}$ to $10^{-4}$. We have performed several simulations varying the forcing amplitude, its wavenumber, and the swimmer's bending elasticity. After transients, the swimmer, which is initialised in a straight configuration, develops an undulating motion corresponding to a travelling wave propagating from its head ($s=0$) to its tail ($s=\ell$). Once this periodic regime is attained, we measure the time required for its displacement over several lengths $\ell$ in order to evaluate the asymptotic swimming speed $V_{\rm swim}$.

Figure 1: Swimming speed in the absence of a fluid velocity field, (a) as a function of the forcing amplitude, for flexibility $\mathcal{F}=15$ and three values of the wavenumber ($k=2$, $3$, and $4$, as labelled), and (b) as a function of the swimmer's flexibility $\mathcal{F}$, for $k=2$ and three values of the dimensionless forcing amplitude $A$.

The dependence of the swimming speed upon the amplitude parameter $A$ is shown in Fig. 1(a), for different wavenumbers $k$ and a fixed dimensionless flexibility $\mathcal{F}$.
Several representative configurations of the swimmer are also shown, with a dot indicating its head ($s=0$). At small forcing amplitudes, the undulation of the swimmer is very close to the imposed wave and the swimming speed increases quadratically. This behaviour can be obtained from a linear expansion of Eq. (1) at $A\ll 1$. To leading order, the swimmer is aligned with $\bm{p}^{\perp}$, the unit vector orthogonal to the force, and it moves along this direction. The projection of its position can thus be expanded as $\bm{p}^{\perp}\cdot{\boldsymbol{X}}=-s+X_{1}^{\prime}$ with $X_{1}^{\prime}\ll 1$. In the transverse direction, one gets from Eq. (1) that $X_{2}^{\prime}=\bm{p}\cdot{\boldsymbol{X}}=\mathcal{O}(A)$. The inextensibility constraint reads $|\partial_{s}{\boldsymbol{X}}|^{2}=(1-\partial_{s}X_{1}^{\prime})^{2}+(\partial_{s}X_{2}^{\prime})^{2}=1$, implying that the longitudinal perturbation $X_{1}^{\prime}$ is of the order of $(X_{2}^{\prime})^{2}$. This indeed implies that $V_{\rm swim}\sim\partial_{t}X^{\prime}_{1}=\mathcal{O}(A^{2})$. This quadratic growth saturates for $A\approx 0.1$–$0.2$, where the swimming speed attains a maximum. This optimal speed slowly decreases and shifts toward larger values of $A$ when $k$ increases. Achieving a given swimming speed consequently requires more energy, or even becomes impossible, as the wavenumber of the undulation increases. Beyond this maximum, swimming becomes less and less efficient at larger forcing amplitudes: The swimmer's distortion is then highly non-linear, and bending elasticity becomes important and induces significant dissipation.

Figure 1(b) represents again $V_{\rm swim}$, but this time as a function of the non-dimensional flexibility $\mathcal{F}$, for $k=2$ and three forcing amplitudes chosen before, at, and after the maximum. The swimming speed attains a maximum at intermediate values of $\mathcal{F}$. When too stiff, the swimmer is not able to develop any significant undulation, as the power input from the active force is dissipated by bending elasticity. At very large values of the flexibility, the swimmer is conversely too limp and energy is dissipated through viscous drag. Optimal locomotion is attained when the two dissipative mechanisms balance. This preliminary study of undulatory swimming in the absence of an external flow allows us to properly choose the physical parameters considered hereafter: We focus on the forcing wavenumber $k=2$, the flexibility is chosen to be $\mathcal{F}=15$, and the forcing amplitudes are picked before the saturation of swimming efficiency, i.e. $A\lesssim 0.15$.

## III Statement of the navigation problem

We consider the two-dimensional navigation problem, which consists in having the swimmer move as fast as possible in the $x_{1}>0$ direction in the presence of a prescribed external fluid flow. In Sec. III.1, after introducing the model flow, we demonstrate that displacement can only occur if the swimmer follows a strategy. We then present in Sec. III.2 the observations and actions that can be used by the swimmer to control its displacement, and we formulate the optimisation problem. We finally introduce in Sec. III.3 a "naive" strategy and evaluate its performance, in the hope that the reinforcement-learning methods applied in Sec. IV can outperform it.

### III.1 Swimming in a cellular flow

To design a navigation strategy, we consider an undulatory swimmer that is embedded in a two-dimensional cellular flow.
More specifically, we prescribe the outer fluid velocity to be $\bm{u}=\nabla^{\perp}\Psi=(-\partial_{2}\Psi,\partial_{1}\Psi)$, with the stream function taking the simple periodic form $\Psi(\bm{x},t)=(L\,U/\pi)\,\cos(\pi\,x_{1}/L)\,\cos(\pi\,x_{2}/L)$. The spatial domain is hence covered by a tiling of cells mimicking eddies. Their size $L$ is chosen of the same order of magnitude as the fibre length $\ell$. The velocity field has an amplitude $U$, to be compared to the swimming velocity $V_{\rm swim}$ introduced in the previous section. Such a two-dimensional flow is a stationary solution of the incompressible Euler equations and is used, for instance, to model the convection cells present in steady Rayleigh–Bénard convection. It is often employed to study the effects of fluid shear and rotation on transport and mixing. It moreover has the convenience of being easily reproducible in experiments Rothstein _et al._ (1999). As seen later, even if the motion of tracers in such a flow is not chaotic, the dynamics of swimmers can be.

Our aim is to maximise the swimmer's displacement in the $x_{1}>0$ direction. When using basic swimming, that is to say when the fibre constantly swims with the force (2) applied along the direction $\bm{p}=\bm{e}_{2}$, one does not observe any long-term net displacement. We have indeed performed a set of numerical simulations where the swimmer is initialised in a straight configuration, with its head always oriented toward $x_{1}>0$, varying its initial angle with the horizontal direction. Unless otherwise stated, we always use a discretisation of the swimmer with $N=201$ grid-points and a time step $\delta t=10^{-3}$. Performance is then monitored by

$\bar{x}_{1}(t)=\bm{e}_{1}\cdot\bar{\bm{X}}(t)=\frac{1}{\ell}\int_{0}^{\ell}\bm{e}_{1}\cdot{\boldsymbol{X}}(s,t)\,\mathrm{d}s,$ (3)

i.e. by the horizontal displacement of the swimmer's centre of mass $\bar{\bm{X}}$.

Figure 2: Swimmers continuously undulating in the vertical direction without any specific strategy. The parameters are here $\mathcal{F}=15$, $U=0.025\,\ell\nu$ and $\ell/L=1$. (a) Displacement along the horizontal direction $x_{1}$ as a function of time for initially-straight swimmers released with various angles with the $x_{1}$ axis. (b) Two instances of trapped swimmers: the blue one is oriented toward $x_{1}<0$ and is stuck between two cells, where it swims against the flow; the red one performs a cycle across several cells, during which it is tumbled back and forth by the flow; the trajectory of the swimmer's centre of mass is shown as a black line. The fluid vorticity $\omega=\partial_{1}u_{2}-\partial_{2}u_{1}$ is represented as coloured contour lines.

Figure 2a reports the evolution of the displacement of swimmers initialised with various initial orientations. After crossing a few cells, they systematically get trapped on rather stable cyclic orbits, preventing any further displacement. We identify two types of cyclic traps, which are illustrated in Fig. 2b. In the case shown in blue, the swimmer is oriented in the wrong direction (towards $x_{1}<0$) and swims in a counterflow that pushes it to the right and exactly compensates its locomotion. The position of its centre of mass barely changes during an undulation period. In the second case, shown in red, the swimmer alternately swims to the left, is rotated by the flow, swims to the right, then changes direction again, and so on.
The mean abscissa $\bar{x}_{1}(t)$ performs in that case a cyclic motion with an amplitude $\simeq 1.6\,L$ and a period corresponding to approximately 300 forcing periods. The black line shows the position $\bar{\bm{X}}(t)$ of the swimmer's centre of mass sampled over more than 30 cycles. It does not exactly form a closed loop, and tiny deviations can be observed from one cycle to the next. Despite this, such a cyclic motion remains stable and persists for hundreds of cycles. Note that these simulations indicate a very sensitive dependence upon the swimmer's initial orientation, as a tiny variation of the initial angle can lead the swimmer to end up in distant cells of the flow and in different configurations. This sensitivity is a hallmark of chaotic behaviour. However, it also indicates that the swimmers' dynamics is not ergodic when they continuously undulate in such a flow. Hence the swimmers do not show any net displacement if they just follow their basic swimming procedure without any further strategy. Moreover, an adequate navigation policy should be able to prevent, or at least destabilise, the two kinds of traps that were identified. This observation can be used to guess adequate minimal observations and actions to be accounted for in the swimmer's decision process.

### III.2 The optimisation problem

Our problem is to optimise navigation for a swimmer by controlling the parameters of the actuating force based on the current state of the swimmer. This problem is typically studied using the formalism of Markov decision processes (MDPs), which assumes that the state of the system is fully observable. This requires capturing information that lives, in principle, in the infinite-dimensional set of all two-dimensional curves $s\mapsto{\boldsymbol{X}}(s,t)$ with length $\ell$, and, in numerics, in the $(N+1)$-dimensional manifold of $\mathbb{R}^{2N}$ formed by attainable discretised configurations ($N$ being the number of points used to discretise the swimmer's arc-length). We hereafter denote by $\mathcal{S}$ this set of states. Because of the high dimensionality of $\mathcal{S}$, a full description of the swimmer's state is clearly not possible, neither in numerics nor in practical applications.

Figure 3: (a) The discretisation of observations depends on both the swimmer's orientation, which can be towards positive or negative abscissae, and on the strength of the horizontal fluid velocity at its head $u_{\rm h}$, which divides the flow into regions of three different kinds. (b) The discretisation of actions sets whether the swimmer should propagate an undulation in the horizontal or vertical direction, and with which amplitude $A$.

Instead of assuming full information on the state $\sigma\in\mathcal{S}$, we consider that only minimal information is available. This problem falls under the category of partially-observable Markov decision processes (POMDPs), where the observations of the agent (the swimmer) are not sufficient to infer the true state of the system. As a result, optimal decision strategies must rely on a limited amount of data, making the problem even more challenging. We denote by $\mathcal{O}$ the set of all possible observations $\omega$. We infer from the previous section that the swimmer requires information on two features of its state: whether or not it is rightly oriented, and whether the fluid velocity helps or hinders its displacement towards $x_{1}>0$.
More specifically, the first property is deduced from the sign of $X_{1}(0,t)-\bar{x}_{1}(t)$, namely whether the swimmer's head is located on the right ($\omega=0,1,2$) or on the left ($\omega=3,4,5$) of its centre of mass. The second property is obtained from the horizontal component $u_{\rm h}=\bm{e}_{1}\cdot\bm{u}({\boldsymbol{X}}(0,t),t)$ of the fluid velocity at the swimmer's head. Three cases are distinguished: either $u_{\rm h}<-u_{0}$ and the swimmer feels a headwind ($\omega=0,3$), or $u_{\rm h}>u_{0}$ and it feels a tailwind ($\omega=2,5$), or $|u_{\rm h}|<u_{0}$ and it feels no significant wind ($\omega=1,4$). Here $u_{0}$ is a parameter that we fix to $u_{0}/U=1/5$. This makes a total of six possible observations, which are illustrated and numbered in Fig. 3(a), so that $\mathcal{O}=\{0,1,2,3,4,5\}$.

The various actions that the swimmer can take are illustrated in Fig. 3(b). Seven choices are possible, consisting in doing nothing (in black, $\alpha=3$) or applying the active force either in the horizontal ($\bm{p}=\bm{e}_{1}$, in red, $\alpha=0,1,2$) or in the vertical ($\bm{p}=\bm{e}_{2}$, in blue, $\alpha=4,5,6$) direction, choosing among three possible amplitudes: $A=\frac{1}{3}A_{0}$ ($\alpha=2,4$), $A=\frac{2}{3}A_{0}$ ($\alpha=1,5$), or $A=A_{0}$ ($\alpha=0,6$), where the base non-dimensional amplitude is fixed to $A_{0}=0.08$. The set of all possible actions is again discrete and denoted $\mathcal{A}=\{0,1,2,3,4,5,6\}$.

We assume that the swimmer observes its environment at discrete times $t_{n}=n\Delta t$ with $n\in\mathbb{N}$. We choose the time step $\Delta t$ smaller than all physical timescales (in practice, we fix $\Delta t=0.2\,\nu^{-1}$). A navigation strategy consists in following a policy $\pi$, which associates with each pair $(\alpha_{n},\omega_{n})\in\mathcal{A}\times\mathcal{O}$ a probability $\pi(\alpha_{n}|\omega_{n})$ of choosing the action $\alpha_{n}$ having observed $\omega_{n}$ at time $t_{n}$. A deterministic policy corresponds to having $\pi(\alpha|\omega)=1$ for $\alpha=\alpha_{\pi}(\omega)$ and $\pi(\alpha|\omega)=0$ otherwise. Finding an optimal strategy consists in finding the policy $\pi_{\star}$ that maximises a given reward over time. To formally define our POMDP we use the tuple $(\mathcal{S},\mathcal{A},\mathcal{O},{R},{T},\Omega)$, where $\mathcal{S}$, $\mathcal{A}$, and $\mathcal{O}$ are the state, action, and observation sets introduced above. The decision process also depends on the reward function ${R}$, the transition function ${T}$, and the observation function ${\Omega}$. The reward function ${R}$ maps the current state $\sigma_{n}\in\mathcal{S}$ and action $\alpha_{n}\in\mathcal{A}$ to a real number measuring the benefit of having chosen this action. As we are interested in maximising the motion of the swimmer to the right, the chosen reward is the horizontal displacement of its centre of mass: ${R}(\sigma_{n},\alpha_{n})=\bar{x}_{1}(t_{n+1})-\bar{x}_{1}(t_{n})$. The transition function ${T}$ maps the current state and the action taken by the swimmer to the next state: $\sigma_{n+1}={T}(\sigma_{n},\alpha_{n})$. Such a function clearly exists because the full dynamics is deterministic and Markovian. Finally, the observation function $\Omega$ maps the state to the observation sensed by the swimmer: $\omega_{n}=\Omega(\sigma_{n})$.
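As an illustration, here is a minimal sketch of these discretised observation and action maps, together with the cellular flow of Sec. III.1 from which the wind at the head is evaluated. The function names are ours, the constants follow the text ($u_{0}/U=1/5$, $A_{0}=0.08$, $\ell/L=1$, $U=0.025\,\ell\nu$), and the $\omega$ and $\alpha$ numbering follows Fig. 3 as described above.

```python
import numpy as np

L, U = 1.0, 0.025     # cell size and flow amplitude (lengths in units of ell, times of 1/nu)
u0 = U / 5.0          # wind threshold, u0 / U = 1/5
A0 = 0.08             # base non-dimensional undulation amplitude

def fluid_velocity(x):
    """Time-stationary cellular flow u = (-d2 Psi, d1 Psi), with
    Psi = (L U / pi) cos(pi x1 / L) cos(pi x2 / L)."""
    x1, x2 = x
    return np.array([ U * np.cos(np.pi * x1 / L) * np.sin(np.pi * x2 / L),
                     -U * np.sin(np.pi * x1 / L) * np.cos(np.pi * x2 / L)])

def observe(X_head, x1_bar):
    """Observation omega in {0,...,5}: head right (0-2) or left (3-5) of the
    centre of mass, combined with headwind / no wind / tailwind at the head."""
    u_h = fluid_velocity(X_head)[0]
    wind = 0 if u_h < -u0 else (2 if u_h > u0 else 1)
    return wind if X_head[0] - x1_bar > 0 else 3 + wind

# alpha -> (undulation direction p, amplitude A); alpha = 3 applies no force
ACTIONS = {0: ((1.0, 0.0), A0), 1: ((1.0, 0.0), 2 * A0 / 3), 2: ((1.0, 0.0), A0 / 3),
           3: (None, 0.0),
           4: ((0.0, 1.0), A0 / 3), 5: ((0.0, 1.0), 2 * A0 / 3), 6: ((0.0, 1.0), A0)}
```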
A given policy $\pi$ defines a (possibly stochastic) flow on $\mathcal{S}$: $\sigma_{n}\mapsto\sigma_{n+1}=T(\sigma_{n},\alpha_{n})$, with $\alpha_{n}$ chosen with probability law $\pi(\cdot|\Omega(\sigma_{n}))$. The policy thus fully determines the sequence $\{(\sigma_{n},\alpha_{n}),\,n>0\}$ for a given $\sigma_{0}$. We aim at finding a policy $\pi$ that maximises the long-term displacement of the swimmer towards positive abscissae. Formalising this optimisation problem requires introducing an adequate objective function. One could naively think of maximising the actual asymptotic displacement $\lim_{N\to\infty}\left[\bar{x}_{1}(t_{N})-\bar{x}_{1}(t_{0})\right]=\sum_{n=0}^{\infty}R(\sigma_{n},\alpha_{n})$. This infinite-horizon sum is however expected to diverge, because we seek policies leading to effective displacement. Such a pitfall is usually circumvented in MDPs by introducing a discount factor $\gamma$ to ensure convergence. One then maximises the discounted return

$\mathcal{R}^{\rm disc}[\pi]=\sum_{n=0}^{\infty}{\rm e}^{-\gamma t_{n}}R(\sigma_{n},\alpha_{n}).$ (4)

The discount factor $\gamma$ attributes more importance to immediate rewards than to those obtained in a distant future. The choice of this parameter is largely problem-dependent and can have a significant impact on the learned policy. As seen later, we use such a return in our implementation of $Q$-learning (Sec. IV.1). Still, as discussed in Singh _et al._ (1994), using a discounted reward can be problematic for POMDPs. One can alternatively maximise the so-called differential return

$\mathcal{R}^{\rm diff}[\pi]=\sum_{n=0}^{\infty}\left(R(\sigma_{n},\alpha_{n})-\bar{R}[\pi]\right),\quad\mbox{where }\ \bar{R}[\pi]=\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N}\left\langle R(\sigma_{n},\alpha_{n})\right\rangle.$ (5)

This formulation weights all rewards equally. It makes use of the mean reward $\bar{R}[\pi]$, which is averaged over both time and the realisations of the POMDP.

In the framework of MDPs (for which $\omega\equiv\sigma$), one often introduces the state value function $V_{\pi}(\sigma)=\left\langle\mathcal{R}[\pi]\ |\ \sigma_{0}=\sigma\right\rangle$, which quantifies the quality of the policy $\pi$ when starting from the state $\sigma$. A particularly useful function when searching for an optimal policy is the $Q$-value function

$\mathcal{Q}_{\pi}(\sigma,\alpha)=\left\langle\mathcal{R}[\pi]\ \middle|\ \sigma_{0}=\sigma,\alpha_{0}=\alpha\right\rangle.$ (6)

It assesses the value of the policy $\pi$ when taking the specific action $\alpha$ in a given state $\sigma$. Typically, value-based reinforcement-learning algorithms try to learn an estimate $\mathcal{Q}_{\star}$ of the optimal $Q$-value function over all possible policies and use it to extract an optimal deterministic policy $\pi_{\star}$ as

$\pi_{\star}(\alpha|\sigma)=1\ \mbox{ if }\ \alpha=\alpha_{\pi_{\star}}(\sigma)=\mathrm{argmax}_{\alpha^{\prime}}\mathcal{Q}_{\star}(\sigma,\alpha^{\prime}),\ \mbox{ and 0 otherwise.}$ (7)

Such an optimal policy always exists for MDPs, in the sense that it maximises the value function $V_{\pi}(\sigma)$ for all states $\sigma$. In our partially-observable settings, the agent does not have full information on $\sigma$ and the $Q$-value function (6) becomes irrelevant to the navigation problem. A policy that is optimal in the same sense as in MDPs is thus no longer guaranteed to exist Singh _et al._ (1994).
Still, as seen above, one can instead use a different optimality criterion and maximise the differential return (5). Following Singh _et al._ (1994), the $Q$-value function can then be defined by projecting $\mathcal{Q}_{\pi}$ on observations, namely

$Q_{\pi}(\omega,\alpha)=\sum_{\sigma\in\mathcal{S}}\mathcal{Q}_{\pi}(\sigma,\alpha)\,P(\sigma|\omega),$ (8)

where $P(\sigma|\omega)$ is the probability of being in state $\sigma$, given the observation $\omega$.

### III.3 A naive strategy

Figure 4: (a) Naive strategy, which consists in swimming horizontally with maximum amplitude ($\alpha=6$) whenever the swimmer is rightly oriented and feels a calm or tailwind fluid flow ($\omega=4$ or $5$), and in doing nothing ($\alpha=3$) otherwise. (b) Horizontal displacement of swimmers initialised with various orientations and following the naive strategy; the average is shown as a bold solid line, and the interval defined by the standard deviation as dashed lines. (c) Sample of different trajectories in the $(x_{1},x_{2})$ plane.

We introduce in this section a policy allowing the swimmer to reasonably move in the $x_{1}>0$ direction. We call it the naive strategy. It consists in following rather simple rules: If the swimmer has the proper orientation and simultaneously feels no headwind ($\omega=4,5$), the sinusoidal force (2) is applied with maximal amplitude $A_{0}$ in the direction $\bm{p}=\bm{e}_{2}$ ($\alpha=6$). If the swimmer is wrongly oriented and faces the $x_{1}<0$ direction, or experiences a headwind (all other observations), then no force is applied and the locomotion is stopped ($\alpha=3$). This naive policy is shown in Fig. 4a, using a graphical representation that we will employ later on to describe other policies: The different observations $\omega$ are represented as 6 vertically aligned coloured boxes, each colour (from red to blue) standing for the action $\alpha$ taken when $\omega$ is observed. This policy breaks the symmetry $x_{1}\mapsto-x_{1}$ and thus induces a positive drift. It moreover prevents the swimmer from being indefinitely trapped by mechanisms similar to those observed in Sec. III.1 in the absence of any strategy.

We performed numerical simulations of 100 naive swimmers initialised at $t=0$ at the centre of a cell in a straight configuration, but with different initial orientations spanning $[-{\pi}/{2},{\pi}/{2}]$. As we can see from Fig. 4b, the naive strategy leads to a positive average displacement, with a distribution of swimmers that perceptibly spreads with time. Sometimes the swimmers temporarily fall into a trap and their displacement stays approximately constant during rather long times. As seen from the sample of trajectories of Fig. 4c, these trapping events correspond to the swimmer turning several times around a given cell before escaping and pursuing its motion towards $x_{1}>0$. The quasi-periodic cycles of Fig. 2b are no longer stable, and the naive strategy makes endless trapping impossible. Thanks to that, all trajectories asymptotically move toward $x_{1}>0$ and the dynamics of swimmers that follow this policy becomes statistically stationary and ergodic.
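In the array representation $\alpha_{\pi}(\omega)$ used later in Sec. IV.3, the naive strategy can be written as the following short sketch, building on the hypothetical encoding given above:

```python
# Naive strategy: vertical undulation at full amplitude (alpha = 6) for the
# favourable observations omega = 4 and 5; stop swimming (alpha = 3) otherwise.
NAIVE_POLICY = [3, 3, 3, 3, 6, 6]   # i-th entry = action alpha_pi(omega) for omega = i

def naive_action(omega):
    return NAIVE_POLICY[omega]
```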
Figure 5: Performance of the naive strategy for $\mathcal{F}=15$, $U=0.025\,\ell\nu$, $\ell/L=1$, $u_{0}/U=1/5$, and $A_{0}=0.08$. (a) Swimmer's horizontal displacement $\delta_{\tau}\bar{x}_{1}=\bar{x}_{1}(t+\tau)-\bar{x}_{1}(t)$, showing an average velocity $V\approx 0.75\,V_{\rm swim}$. (b) Zoom on much shorter times showing a succession of fast displacements and periods of trapping. (c) Variance of the swimmer's displacement, $\mathrm{Var}\,[\delta_{\tau}\bar{x}_{1}]=\langle[\delta_{\tau}\bar{x}_{1}-\langle\delta_{\tau}\bar{x}_{1}\rangle]^{2}\rangle$. (d) Corresponding skewness $S=\langle[\delta_{\tau}\bar{x}_{1}-\langle\delta_{\tau}\bar{x}_{1}\rangle]^{3}\rangle/(\mathrm{Var}\,[\delta_{\tau}\bar{x}_{1}])^{3/2}$ and flatness $F=\langle[\delta_{\tau}\bar{x}_{1}-\langle\delta_{\tau}\bar{x}_{1}\rangle]^{4}\rangle/(\mathrm{Var}\,[\delta_{\tau}\bar{x}_{1}])^{2}$.

Figure 5 shows more detailed statistics on the displacement of swimmers that follow the naive policy. As can be seen in Fig. 5a, the displacement $\delta_{\tau}\bar{x}_{1}=\bar{x}_{1}(t+\tau)-\bar{x}_{1}(t)$ approaches a self-averaged linear behaviour $\delta_{\tau}\bar{x}_{1}\approx V\,\tau$ at large times $\tau$. The average horizontal speed $V$ is approximately $0.75$ times the speed $V_{\rm swim}$ that the swimmer has in the absence of an external flow. When zooming on much shorter timescales (Fig. 5b), one actually observes that this average displacement consists of an alternating sequence of inefficient trapping periods and efficient displacements, during which the swimmer swings smoothly between cells with a speed slightly exceeding $V_{\rm swim}$. As we will see later, the long-term balance between these two kinds of events is precisely what determines the effectiveness of a given policy.

The variance of $\delta_{\tau}\bar{x}_{1}$ is shown in Fig. 5c. Its dependence on $\tau$ follows three successive regimes. At short times $\tau\lesssim 10/\nu$, one has $\mathrm{Var}\,[\delta_{\tau}\bar{x}_{1}]\propto\tau^{2}$, resulting from swimmers moving with an instantaneous velocity different from $V$, and thus deviating by $\propto\tau$ from the average displacement. The corresponding higher-order moments of $\delta_{\tau}\bar{x}_{1}$ (skewness $S$ and flatness $F$) are shown in Fig. 5d. One observes at small time lags $S<0$ with $|S|\ll 1$, and thus an almost-symmetric distribution of $\delta_{\tau}\bar{x}_{1}$, so that trapping is certainly not contributing much to this regime. Fluctuations are sub-Gaussian, i.e. $F<3$. At larger times, naive swimmers follow an intermediate regime where the variance of $\delta_{\tau}\bar{x}_{1}$ grows super-diffusively, approximately as $\tau^{1.67}$. This regime displays a negative skewness, meaning that trapping is involved. The flatness reaches values above 3, indicating a significant contribution from extreme events. As seen later (Sec. IV.3), this intermediate regime falls in a range during which swimmers have a significant probability of being trapped. It extends to rather long times, of the order of $\tau\approx 500/\nu$, above which the displacement becomes a sequence of independent events. The resulting ultimate regime is diffusive, i.e. $\mathrm{Var}\,[\delta_{\tau}\bar{x}_{1}]\propto\tau$. The skewness tends asymptotically to $S=0$ and the flatness decreases to possibly approach $F=3$.

We aim at finding policies that outperform this naive strategy. To this end, we test in the next section various methods of reinforcement learning. It will be important to keep in mind that, even if the swimmer follows a strategy leading to a significant displacement, trapping can be present and result in a significant dependence on history, over times exceeding thousands of undulatory beats.
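The moments shown in Fig. 5 can be estimated directly from sampled trajectories. Here is a minimal sketch, assuming an ensemble of centre-of-mass trajectories stored as a two-dimensional array (a layout of our own choosing):

```python
import numpy as np

def displacement_moments(x1_bar, lag):
    """Variance, skewness S and flatness F of the increments
    delta_tau x1 = x1(t + tau) - x1(t), for an ensemble of trajectories
    x1_bar of shape (n_swimmers, n_times) and a lag counted in samples."""
    d = x1_bar[:, lag:] - x1_bar[:, :-lag]   # increments over the time lag
    c = d - d.mean()                          # centred increments
    var = np.mean(c**2)
    return var, np.mean(c**3) / var**1.5, np.mean(c**4) / var**2
```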
## IV Reinforcement learning

### IV.1 $Q$-learning

Here, we first test the performance of classical $Q$-learning. This method, borrowed from MDPs, has been extensively and successfully applied in the past to optimise the navigation of active swimmers Reddy _et al._ (2016); Colabrese _et al._ (2017); Gustavsson _et al._ (2017); Schneider and Stark (2019); Muiños-Landin _et al._ (2021); Qiu _et al._ (2022).

#### Method

$Q$-learning is based on the value-iteration update of the Bellman equation. At each step $t_{n}=n\Delta t$, the swimmer has at its disposal an estimate $Q_{t_{n}}$ of the $Q$-table. It makes an observation $\omega_{n}$ of its environment and takes an action according to the running policy, which, in the $\varepsilon$-greedy case, is such that $\alpha_{n}=\mathrm{argmax}_{\alpha}Q_{t_{n}}(\omega_{n},\alpha)$ with probability $1-\varepsilon$, the other actions being chosen uniformly with probability $\varepsilon/6$. The swimmer then receives a reward $R_{n}=\bar{x}_{1}(t_{n+1})-\bar{x}_{1}(t_{n})$ and the $Q$-table is updated accordingly. The whole procedure is summarised in Algorithm 1 (a minimal implementation sketch is given at the end of this subsection).

Algorithm 1 $Q$-learning
Parameters: rates $\lambda$ and $\gamma$; exploration parameter $\varepsilon$
1: Initialise $Q$ and $\omega$
2: for $n=1,2,\dots$ do
3:  Take action $\alpha$ with the $\varepsilon$-greedy law given by $Q(\omega,\cdot)$
4:  Evolve the swimmer to the new state $\sigma^{\prime}$
5:  Measure reward $R$ and observation $\omega^{\prime}=\Omega(\sigma^{\prime})$
6:  $Q(\omega,\alpha)\leftarrow(1-\lambda\Delta t)\,Q(\omega,\alpha)+\lambda\Delta t\,[R+{\rm e}^{-\gamma\Delta t}\max_{\alpha^{\prime}}Q(\omega^{\prime},\alpha^{\prime})]$
7:  $\omega\leftarrow\omega^{\prime}$
8: end for

In addition to $\varepsilon\in[0,1]$, which controls how much randomness is put in the learning process, the method depends upon two parameters, which are here appropriately expressed as inverse time scales. The first is the learning rate $\lambda$, which we chose as the inverse of the time needed by a swimmer to cross one cell with velocity $V_{\rm swim}$ in the absence of an outer flow, namely here $\lambda=\nu/40$ for $A_{0}=0.08$. This rate sets the timescale at which the $Q$-table is updated. A smaller $\lambda$ would have led to adapting the policy with too long a delay compared to the dynamical timescales of the swimmer, and thus to inefficient adjustments. A larger $\lambda$ would imply that the $Q$-table is updated too fast compared to the actual time needed to discern the outcomes of a given action. The second parameter is the discount rate $\gamma$, which sets the horizon of future rewards. It was chosen as the inverse of the time needed by the swimmer to travel across ten cells with $V_{\rm swim}$, namely $\gamma=\nu/400$. The corresponding timescale is at the edge of the long-correlated regime observed in the previous section for the naive policy. The initial entries of the $Q$-table are all set to an arbitrary positive number, equal in our case to $0.25\,L$.

For MDPs, successive iterations of Algorithm 1 lead to convergence of the entries of the $Q$-table to the optimal $Q$-value function (6) in the limit where $n\to\infty$ and $\varepsilon\to 0$ simultaneously. Convergence results rely on the usual stochastic-approximation assumptions on the learning rate and are valid as long as all state-action pairs keep being visited with a positive probability. The associated empirical greedy policy then converges to the optimal deterministic policy $\pi_{\star}$ given by Eq. (7). However, such convergence results only hold in the Markovian case.
There is no guarantee that they extend to our settings; indeed, counter-examples have been constructed in Singh _et al._ (1994) showing that $Q$-learning techniques do not generally apply to POMDPs. We nevertheless test this procedure below.

#### Non-convergence of $\varepsilon$-greedy $Q$-learning

Figure 6(a) shows the displacement of swimmers during the evolution of $Q$-learning for decreasing values of the exploration parameter $\varepsilon$. All instances lead to a net displacement of the swimmer. It consists of long periods of forward motion interrupted by phases during which the swimmer barely progresses. These alternations become less and less frequent when $\varepsilon$ decreases. Figure 6(b) shows the time evolution of the policy followed by the swimmer for $\varepsilon=0.025$. Each extended period of forward motion corresponds to a stabilisation of the running policy. For instance, between times $t=0.7$ and $1.4\times 10^{6}\nu^{-1}$, the swimmer maintains an average horizontal velocity $\approx 0.45\,V_{\rm swim}$, smaller than, but comparable to, the performance of the naive strategy. During this time interval, the swimmer follows a policy that differs from the naive one only by favouring a vigorous horizontal undulation ($\alpha=0$, bright red) when a headwind is observed ($\omega=0$ and $3$). This temporarily learned policy is however forgotten at times $t>1.5\times 10^{6}\nu^{-1}$. Other sustainable strategies are selected later on, giving rise to subsequent periods of forward motion with different, but comparable, horizontal velocities. These numerical experiments at varying $\varepsilon$ allow us to extrapolate to what would happen if the level of randomness were decreased progressively: As the duration of forward-motion periods expands when $\varepsilon$ decreases, the learning procedure would most probably get stuck in a given policy determined by the history of the swimmer's trajectory, and thus very unlikely to be the optimum. This gives evidence that $Q$-learning methods do not easily converge for our problem.

Figure 6: Results of $\varepsilon$-greedy $Q$-learning for $\mathcal{F}=15$, $U/(\ell\nu)=0.025$, $\ell/L=1$, $u_{0}/U=1/5$ and $A_{0}=0.08$. (a) Displacement as a function of time for three different values of the exploration parameter $\varepsilon$. (b) Time evolution of the policy, shown here for $\varepsilon=0.025$.

Figure 7: Time evolution of the different components of the $Q$-table obtained for $\varepsilon=0.025$, as in Fig. 6b. The six panels correspond to the various values of the observation $\omega$, while the different colours stand for the action $\alpha$, as labelled.

We interpret the above-mentioned loss of memory as a consequence of long-term trapping phases that clearly cannot be detected from our reduced set of observations. The underlying mechanism becomes clearer when looking at the time evolution of the $Q$-table entries in Fig. 7. The periods of forward motion are associated with an increase of all $Q_{t}(\omega,\alpha)$, with the current running policy weakly singling out given entries. Once the swimmer enters a phase of quasi-immobilisation, this growth stops and all entries of the $Q$-table decrease simultaneously, without any possibility of retaining the previously learned strategy. Hence, convergence could in principle only be achieved if the learning rate were small enough to filter out such trapping events, and would thus require running the $Q$-learning algorithm for extravagantly long times.
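For concreteness, the $\varepsilon$-greedy action choice and the tabular update of Algorithm 1 can be sketched in a few lines of Python; this is a minimal illustration with our own variable names, the physical evolution of the swimmer being assumed to be provided by an external simulation.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_obs, n_act = 6, 7
dt = 0.2                              # observation time step, in units of 1/nu
lam, gamma, eps = 1/40, 1/400, 0.025  # learning rate, discount rate, exploration
Q = np.full((n_obs, n_act), 0.25)     # optimistic initialisation at 0.25 L

def epsilon_greedy(omega):
    """Greedy action with probability 1 - eps; each of the six other actions
    with probability eps / 6."""
    greedy = int(np.argmax(Q[omega]))
    if rng.random() < eps:
        return int(rng.choice([a for a in range(n_act) if a != greedy]))
    return greedy

def q_update(omega, alpha, R, omega_next):
    # Q <- (1 - lam dt) Q + lam dt [R + exp(-gamma dt) max_a' Q(omega', a')]
    target = R + np.exp(-gamma * dt) * Q[omega_next].max()
    Q[omega, alpha] = (1 - lam * dt) * Q[omega, alpha] + lam * dt * target
```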
#### An iterative Markovian approximation

Motivated by the suspicion that convergence could require very long times, we test here the idea of approximating the dynamical evolution of the swimmer by an MDP. Our hope is that this approximation captures the most relevant information of our optimisation problem, namely the transition probabilities between the states of our environment and the distribution of the rewards obtained by our agent. The advantages of this approach are twofold: First, since the MDP only retains the transitions and the reward process, abstracting away all other aspects of the dynamics, the associated learning algorithms run significantly faster, without having to simultaneously simulate the whole swimmer dynamics; Second, this approach separates the issue of non-Markovianity from other potential difficulties.

Our procedure consists in constructing a sequence of policies $\pi_{0}$, $\pi_{1}$, $\ldots$, $\pi_{k}$ that will hopefully converge to the optimal $\pi_{\star}$. At each step, we simulate a swimmer that follows the policy $\pi_{k}$, trying out at every time step $t=t_{n}$ all possible actions to monitor the new observation and reward at time $t_{n+1}$. This is used to construct numerical approximations to the transition probability $p_{{\rm T},k}(\omega^{\prime}|\omega,\alpha)$ of observing $\omega^{\prime}$ at time $t+\Delta t$, given that $\omega$ was observed and action $\alpha$ was performed at time $t$, together with the corresponding distribution of rewards $p_{{\rm R},k}(R|\omega,\alpha)$. Both distributions depend of course on $\pi_{k}$. We then use the approximate probabilities $p_{{\rm T},k}$ and $p_{{\rm R},k}$ to run the $Q$-learning algorithm, which, thanks to the Markovian formulation now imposed, is guaranteed to converge. This yields the optimal policy $\pi_{k+1}$ associated with the approximate system. The procedure is then iterated with $\pi_{k+1}$ as the base policy, until it attains a fixed point. The method is summarised in Algorithm 2, and a sketch of its inner loop is given at the end of this subsection.

Algorithm 2 Iterative Markovian approximation
1: Initialise policy $\pi_{0}$
2: repeat for $k=0,1,2,\ldots$
3:  Simulate a swimmer that follows policy $\pi_{k}$
4:  Measure $p_{{\rm T},k}(\omega^{\prime}|\omega,\alpha)$ and $p_{{\rm R},k}(R|\omega,\alpha)$
5:  Use them to find the optimal policy $\pi_{k+1}$
6: until $\pi_{k+1}\in\{\pi_{0},\pi_{1},\ldots,\pi_{k}\}$

The motivation behind this procedure is that, if the Markovian approximation is not too far off, the optimal policy $\pi_{k+1}$ of the approximate system should be at least an improvement on the policy $\pi_{k}$, if not the optimal policy of the real system. Hence, if the optimal policy $\pi_{\star}$ is a fixed point of our procedure, the sequence $\{\pi_{k};\,k\geq 0\}$ would converge to it, thus solving our problem. We have run this procedure, choosing for the initial policy $\pi_{0}$ the naive strategy of Sec. III.3. After three iterations, the algorithm circled back to the policy encountered at the first iteration, $\pi_{3}=\pi_{1}$. Hence this procedure does not lead to any improvement with respect to the naive policy. This could again be a sign of the highly non-Markovian nature of our setting. We therefore test in the next section various approximation-based methods that could in principle lead to efficient results for POMDPs.
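Before moving on, here is a minimal sketch of steps 4 and 5 of Algorithm 2, assuming that transitions $(\omega,\alpha,R,\omega^{\prime})$ have been collected while simulating the swimmer under the current policy. Estimating the model by empirical frequencies is a natural choice, and for the inner optimisation we substitute plain value iteration on the estimated model, which, for this small finite MDP, converges to the same optimal policy as simulated $Q$-learning.

```python
import numpy as np

N_OBS, N_ACT = 6, 7

def estimate_model(transitions):
    """Empirical transition probabilities p_T(omega' | omega, alpha) and mean
    rewards, from a list of observed (omega, alpha, R, omega_next) tuples."""
    counts = np.zeros((N_OBS, N_ACT, N_OBS))
    rew_sum = np.zeros((N_OBS, N_ACT))
    for w, a, R, w2 in transitions:
        counts[w, a, w2] += 1.0
        rew_sum[w, a] += R
    visits = np.maximum(counts.sum(axis=2), 1.0)   # avoid division by zero
    return counts / visits[:, :, None], rew_sum / visits

def solve_markov_approximation(p_T, R_mean, gamma_dt=np.exp(-0.2 / 400), n_iter=50_000):
    """Value iteration on the approximate MDP over observations; returns the
    greedy deterministic policy alpha_pi(omega) = argmax_a Q(omega, a)."""
    Q = np.zeros((N_OBS, N_ACT))
    for _ in range(n_iter):
        Q = R_mean + gamma_dt * p_T @ Q.max(axis=1)
    return Q.argmax(axis=1)
```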
### IV.2 Approximation-based methods

In the previous section, we made use of traditional $Q$-learning with the discounted return (4) to estimate the action-value function. We applied this method blindly, replacing states with observations, and obtained only limited success. Here, we explore two approaches that belong to the broad class of approximation methods for reinforcement learning Sutton and Barto (2018): semi-gradient differential SARSA and the Actor-Critic policy-gradient method. Both use a formulation of the optimisation problem in which value functions are estimated in terms of the differential return (5) instead of the discounted return.

The main motivation for using such approximation methods is the partially-observable nature of our problem. In such settings, accurate estimations of the action-value function $Q$ are difficult, hindering the convergence of exact-solution algorithms like $Q$-learning Singh _et al._ (1994). However, by using approximation methods, such as neural networks or other parametric models, we can represent the policy and the value function in a way that takes into account only the available observations rather than the full state. Such methods are flexible and effective and, in particular, they provide a way to trade off the quality of the solution against computational complexity. This makes them a good choice for problems with large or continuous state spaces, where exact-solution methods are not applicable. They can also search for optimal stochastic policies, which can help ensure exploration during the learning process, particularly when the optimal policy may not be deterministic, as is often the case in POMDPs Singh _et al._ (1994), though this is unlikely in our specific case. For these reasons, approximation methods allow us to effectively address the partial-observability issue and achieve good performance, at least in theory, without compromising the underlying theory of reinforcement learning.

#### IV.2.1 Semi-gradient differential SARSA

The semi-gradient differential SARSA algorithm is a value-based method, like $Q$-learning. It similarly builds on the idea of estimating the action-value function $Q$ to construct an optimal policy, but uses for that the differential return instead of the discounted return. A key difference between this method and traditional SARSA or $Q$-learning is that it involves an approximation of the $Q$-function in place of its exact value. We use here the linear parametrisation $\mathcal{Q}(\sigma,\alpha)\approx\hat{\mathcal{Q}}_{\boldsymbol{\eta}}(\sigma,\alpha)=\sum_{ij}\eta_{ij}\,\delta_{\Omega(\sigma),i}\,\delta_{\alpha,j}$, where $\delta$ is the Kronecker delta, $\sigma\mapsto\Omega(\sigma)=\omega$ is the observation function introduced in Sec. III.2, and $\boldsymbol{\eta}\in\mathbb{R}^{6}\times\mathbb{R}^{7}$ denotes the approximation parameters. This approach aggregates all states leading to the same observation (similarly to what we did for $Q$-learning). The partial observability of the problem is thus absorbed into this specific choice of approximation. Such a method was successfully used in Berti _et al._ (2022) to find the optimal swimming strategy for Najafi's swimmer Najafi and Golestanian (2004).

The main idea of semi-gradient differential SARSA is to update the approximation of the value function by combining gradient descent with temporal-difference (TD) learning, in order to converge to the parameters $\boldsymbol{\eta^{*}}$ that best approximate the optimal $Q$-function. The action at the $n$-th step is chosen, as in $Q$-learning, such that $\alpha_{n}=\mathrm{argmax}_{\alpha}\hat{\mathcal{Q}}_{\boldsymbol{\eta}}(\sigma_{n},\alpha)$, possibly with an $\varepsilon$-greedy step. The resulting procedure is summarised in Algorithm 3, and a minimal code sketch follows.

Algorithm 3 Semi-gradient differential SARSA
Parameters: rates $\lambda_{1},\lambda_{2}$; exploration parameter $\varepsilon$
1: Initialise $\omega$, $\alpha$, $\bar{R}$, and the approximation parameters $\boldsymbol{\eta}$
2: for $n=1,2,\dots$ do
3:  Take action $\alpha$ and evolve to the new state $\sigma^{\prime}$
4:  Measure reward $R$ and observation $\omega^{\prime}=\Omega(\sigma^{\prime})$
5:  Choose action $\alpha^{\prime}$ with the $\varepsilon$-greedy law given by $\hat{\mathcal{Q}}_{\boldsymbol{\eta}}(\sigma^{\prime},\cdot)$
6:  Compute the error $\delta=R-\bar{R}+\hat{\mathcal{Q}}_{\boldsymbol{\eta}}(\sigma^{\prime},\alpha^{\prime})-\hat{\mathcal{Q}}_{\boldsymbol{\eta}}(\sigma,\alpha)$
7:  $\bar{R}\leftarrow\bar{R}+\lambda_{1}\Delta t\,\delta$
8:  $\boldsymbol{\eta}\leftarrow\boldsymbol{\eta}+\lambda_{2}\Delta t\,\delta\,\nabla\hat{\mathcal{Q}}_{\boldsymbol{\eta}}(\sigma,\alpha)$
9:  $\omega\leftarrow\omega^{\prime}$, $\alpha\leftarrow\alpha^{\prime}$
10: end for
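A minimal sketch of the update in Algorithm 3, with our own variable names and the swimmer simulation assumed external:

```python
import numpy as np

n_obs, n_act = 6, 7
dt = 0.2             # observation time step, in units of 1/nu
lam1 = lam2 = 0.025  # rates lambda_1 and lambda_2

def sarsa_step(eta, R_bar, omega, alpha, R, omega_next, alpha_next):
    """One semi-gradient differential SARSA update (Algorithm 3). The linear
    parametrisation reduces Q_hat(sigma, alpha) to the table eta[omega, alpha],
    so the gradient of Q_hat is simply the indicator of the visited pair."""
    delta = R - R_bar + eta[omega_next, alpha_next] - eta[omega, alpha]
    R_bar += lam1 * dt * delta              # update of the mean-reward estimate
    eta[omega, alpha] += lam2 * dt * delta  # semi-gradient step on the parameters
    return R_bar
```

The table `eta` is updated in place, while the scalar mean-reward estimate is returned and carried over to the next step.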
Figure 8 reports results obtained with this method. We have here chosen the rates $\lambda_{1}=\lambda_{2}=0.025\,\nu$, which corresponds to the inverse of the time needed by the swimmer to cross one cell with velocity $V_{\rm swim}$. The displacements obtained for different values of the exploration parameter $\varepsilon$ (Fig. 8a) are an order of magnitude smaller than those resulting from the $\varepsilon$-greedy $Q$-learning algorithm. In addition, the results indicate that the learning performance decreases when $\varepsilon$ decreases, at variance with what was observed for $Q$-learning. Even at the largest value of $\varepsilon$, the forward-moving periods are much shorter and the trapping phases much more frequent. Still, when zooming on an interval of time during which the swimmer significantly progresses, one observes that the local velocity is comparable to that obtained with $Q$-learning. As can be seen from Fig. 8b, the corresponding running policy fluctuates considerably, and significant displacement only occurs when the policy is able to maintain, for a significant amount of time, the action $\alpha=6$ (undulate vertically with maximal amplitude) for the two most favourable observations $\omega=4$ and $5$ (corresponding to being rightly oriented with no headwind).

Figure 8: Results of semi-gradient differential SARSA obtained with the same physical parameters as in Fig. 6 of the previous subsection. (a) Time evolution of the displacement for three different values of the exploration parameter $\varepsilon$. (b) Time evolution of the running policy, shown here for $\varepsilon=0.1$. Both panels show as insets a zoom on a time interval during which the swimmer significantly progresses.

We found that, in our setting, the semi-gradient differential SARSA method is not able to learn properly because of the non-ergodicity of the environment. Indeed, the swimmer is often trapped in a situation where it makes the same observation $\omega=3$ (being wrongly oriented with no headwind), performs the same action $\alpha=6$ (undulate vertically with maximal amplitude), and remains indefinitely stuck in this situation. (The curve associated with $\varepsilon=0.025$ in Fig. 8a is an example of such a situation.)
This is due to the fact that, in a large set of configurations leading to observation $\omega=3$, the swimmer performing action $\alpha=6$ remains in the same set of configurations. Furthermore, the swimmer keeps being stuck even if it occasionally performs other actions, as long as it does not do so for a long-enough time, so that its probability of escaping decreases exponentially fast as $\varepsilon\to 0$.

#### IV.2.2 Actor-Critic policy-gradient method

Policy-gradient methods strongly differ from $Q$-learning and semi-gradient differential SARSA in that, instead of learning the function $Q$, they directly learn the policy $\pi$ by interacting with the environment. Additionally, instead of using the temporal-difference rule to learn the estimates, policy-gradient methods are gradient-based: The policy is itself approximated, similarly to the value function for differential SARSA, by an estimate $\hat{\pi}_{\boldsymbol{\theta}}$ involving a set of parameters $\boldsymbol{\theta}$ that are learned using gradient descent. We furthermore use here an Actor-Critic version of such a method. The "actor" represents the policy, while the "critic" estimates the value function. This separation can help improve the stability and convergence of the policy-gradient algorithm, as well as reduce the variance of the gradient samples used to update the policy parameters. Together, the actor and the critic form a coalition in which the actor selects actions and the critic evaluates the quality of those actions, providing feedback that the actor uses to improve its policy.

The general scheme of the Actor-Critic algorithm is sketched in Fig. 9a. After a change in the environment, both the actor and the critic are informed about the new observation $\omega$ of the system. The critic, which also has access to the reward, updates its approximation $\hat{V}_{\eta}$ of the value function and communicates to the actor the temporal-difference (TD) error $\delta$, which measures the difference between the expected and the actual return. The actor uses the information that $\delta$ provides on the quality of the approximated policy $\hat{\pi}_{\boldsymbol{\theta}}$ in order to update it, and decides, according to the observation $\omega$, the action to be taken during the next step.

Figure 9: (a) Sketch of the Actor-Critic algorithm. (b) Time evolution of the displacement obtained with the Actor-Critic algorithm for fixed hyper-parameters. (c) Inset: time evolution of the value function for the six different values of the observation $\omega$, as labelled.

We choose to represent the policy by the soft-max parametrisation

$\pi(\alpha|\sigma)\approx\hat{\pi}_{\boldsymbol{\theta}}(\alpha|\sigma)=\sum_{ij}\frac{1}{\mathcal{Z}_{i}}\,{\rm e}^{\theta_{ij}}\,\delta_{\Omega(\sigma),i}\,\delta_{\alpha,j},$ (9)

with normalising factor $\mathcal{Z}_{i}=\sum_{j}{\rm e}^{\theta_{ij}}$. The approximated policy hence depends on the state $\sigma$ only through the observation $\omega=\Omega(\sigma)$. This seamlessly takes into account partial observability, by considering only the available information rather than the full state of the system. The policy parameters $\boldsymbol{\theta}\in\mathbb{R}^{6}\times\mathbb{R}^{7}$ are optimised with respect to a performance measure given by the average reward $\bar{R}[\pi_{\boldsymbol{\theta}}]$ defined in Eq. (5). The gradient-ascent procedure used by the actor to update $\boldsymbol{\theta}$ requires approximating the gradient $\nabla_{\boldsymbol{\theta}}\bar{R}[\pi_{\boldsymbol{\theta}}]$ of the performance measure.
We rely on the policy-gradient theorem (see, e.g., Sutton and Barto (2018))

$\nabla_{\boldsymbol{\theta}}\bar{R}[\pi_{\boldsymbol{\theta}}]=\left\langle\mathcal{Q}_{\hat{\pi}_{\boldsymbol{\theta}}}(\sigma,\alpha)\,\frac{\nabla_{\boldsymbol{\theta}}\hat{\pi}_{\boldsymbol{\theta}}(\alpha|\sigma)}{\hat{\pi}_{\boldsymbol{\theta}}(\alpha|\sigma)}\right\rangle=\left\langle\mathcal{Q}_{\hat{\pi}_{\boldsymbol{\theta}}}(\sigma,\alpha)\,\nabla_{\boldsymbol{\theta}}\log\hat{\pi}_{\boldsymbol{\theta}}(\alpha|\sigma)\right\rangle,$ (10)

which allows us to instantiate the performance-measure gradient as ${\nabla}_{\boldsymbol{\theta}}\bar{R}[\pi_{\boldsymbol{\theta}}]\approx\hat{\mathcal{Q}}_{\boldsymbol{\eta}}(\sigma,\alpha)\,\nabla_{\boldsymbol{\theta}}\log\hat{\pi}_{\boldsymbol{\theta}}(\alpha|\sigma)$, where $\hat{\mathcal{Q}}_{\boldsymbol{\eta}}$ is an approximation of the value function maintained by the critic, $\boldsymbol{\eta}$ being the associated parametrisation parameters. We can use the value function as a baseline for a better estimate of the gradient. Since $\left\langle V_{\hat{\pi}_{\boldsymbol{\theta}}}(\sigma)\,\nabla_{\boldsymbol{\theta}}\log\hat{\pi}_{\boldsymbol{\theta}}(\alpha|\sigma)\right\rangle=0$, the gradient can be rewritten as

$\nabla_{\boldsymbol{\theta}}\bar{R}[\pi_{\boldsymbol{\theta}}]=\left\langle A_{\hat{\pi}_{\boldsymbol{\theta}}}(\sigma,\alpha)\,\nabla_{\boldsymbol{\theta}}\log\hat{\pi}_{\boldsymbol{\theta}}(\alpha|\sigma)\right\rangle,$ (11)

where $A_{\hat{\pi}_{\boldsymbol{\theta}}}(\sigma,\alpha)=Q_{\hat{\pi}_{\boldsymbol{\theta}}}(\sigma,\alpha)-V_{\hat{\pi}_{\boldsymbol{\theta}}}(\sigma)$ is the advantage function. We furthermore use the fact that the temporal-difference error of the value function is an unbiased estimate of the advantage function, namely

$A_{\hat{\pi}_{\boldsymbol{\theta}}}(\sigma,\alpha)=\langle\delta\rangle,\quad\mbox{with }\ \delta=R(\sigma_{t},\alpha_{t})-\bar{R}[\hat{\pi}_{\boldsymbol{\theta}}]+V_{\hat{\pi}_{\boldsymbol{\theta}}}(\sigma_{t+\Delta t})-V_{\hat{\pi}_{\boldsymbol{\theta}}}(\sigma_{t}),$ (12)

leading us to sample the performance-measure gradient as ${\nabla}_{\boldsymbol{\theta}}\bar{R}[\pi_{\boldsymbol{\theta}}]\approx\delta\,\nabla_{\boldsymbol{\theta}}\log\hat{\pi}_{\boldsymbol{\theta}}(\alpha_{t}|\sigma_{t})$ and to use this approximation to update the policy parameters. As for the gradient of the policy, we use the soft-max approximation (9) to write

$\partial_{\theta_{ij}}\log\hat{\pi}_{\boldsymbol{\theta}}(\alpha|\sigma)=\delta_{\Omega(\sigma),i}\,\delta_{\alpha,j}-\frac{1}{\mathcal{Z}_{i}}\,\mathrm{e}^{\theta_{ij}}\,\delta_{\Omega(\sigma),i}=\delta_{\Omega(\sigma),i}\left[\delta_{\alpha,j}-\hat{\pi}_{\boldsymbol{\theta}}(j|\sigma)\right].$ (13)

In practice, we use an approximation of the value function $V_{\hat{\pi}_{\boldsymbol{\theta}}}(\sigma)\approx V_{\boldsymbol{\eta}}(\sigma)=\sum_{i}\eta_{i}\,\delta_{\Omega(\sigma),i}$, with parameters $\boldsymbol{\eta}\in\mathbb{R}^{6}$, in order to compute $\delta$. We trivially get $\partial_{\eta_{i}}V_{\boldsymbol{\eta}}(\sigma)=\delta_{\Omega(\sigma),i}$. Summing up these expressions finally yields the procedure presented in Algorithm 4.

Algorithm 4 Policy gradient / Actor-Critic
Parameters: rates $\lambda_{1},\lambda_{2},\lambda_{3}$
1: Initialise $\omega$, $\alpha$, $\bar{R}$, and the parameters $\boldsymbol{\theta}$ and $\boldsymbol{\eta}$
2: for $n=1,2,\dots$ do
3:  Take action $\alpha$ and evolve to the new state $\sigma^{\prime}$
4:  Measure reward $R$ and new observation $\omega^{\prime}=\Omega(\sigma^{\prime})$
5:  Select the next action $\alpha^{\prime}\sim\hat{\pi}_{\boldsymbol{\theta}}(\cdot\,|\,\sigma^{\prime})$
6:  Compute the TD error $\delta=R-\bar{R}+\hat{V}_{\boldsymbol{\eta}}(\sigma^{\prime})-\hat{V}_{\boldsymbol{\eta}}(\sigma)$
7:  $\bar{R}\leftarrow\bar{R}+\lambda_{1}\Delta t\,\delta$
8:  $\boldsymbol{\eta}(\omega)\leftarrow\boldsymbol{\eta}(\omega)+\lambda_{2}\Delta t\,\delta$
9:  $\boldsymbol{\theta}(\omega,\cdot)\leftarrow\boldsymbol{\theta}(\omega,\cdot)+\lambda_{3}\Delta t\,\delta\,[\delta_{\alpha,\cdot}-\hat{\pi}_{\boldsymbol{\theta}}(\cdot\,|\,\sigma)]$
10:  $\omega\leftarrow\omega^{\prime}$, $\alpha\leftarrow\alpha^{\prime}$
11: end for
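A minimal sketch of one update of Algorithm 4, using the soft-max policy (9) and the gradient (13); the variable names are ours and the swimmer simulation is again assumed to be provided externally.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n_obs, n_act = 6, 7
dt = 0.2                             # observation time step, in units of 1/nu
lam1, lam2, lam3 = 5e-7, 5e-5, 5e-5  # rates used in the text

def softmax_policy(theta, omega):
    w = np.exp(theta[omega] - theta[omega].max())   # stabilised soft-max, Eq. (9)
    return w / w.sum()

def actor_critic_step(theta, eta, R_bar, omega, alpha, R, omega_next):
    """One update of Algorithm 4: TD error, then mean reward, critic and actor."""
    delta = R - R_bar + eta[omega_next] - eta[omega]
    R_bar += lam1 * dt * delta
    eta[omega] += lam2 * dt * delta
    grad_log = -softmax_policy(theta, omega)        # Eq. (13): delta_{alpha,j} - pi(j)
    grad_log[alpha] += 1.0
    theta[omega] += lam3 * dt * delta * grad_log
    return R_bar

def sample_action(theta, omega):
    return int(rng.choice(n_act, p=softmax_policy(theta, omega)))
```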
We evaluate the performance of the Actor-Critic policy-gradient algorithm on our navigation problem with the learning rates $\lambda_{1}=5\times 10^{-7}\nu$ and $\lambda_{2}=\lambda_{3}=5\times 10^{-5}\nu$. The results reported in Fig. 9b show that, during forward-motion periods, the swimmer reaches a swimming velocity similar to that of the $\varepsilon$-greedy $Q$-learning algorithm. However, unlike the $Q$-learning algorithm, which suffers from an enormous variability, the results obtained by the Actor-Critic are more consistent and stable, showing minimal variability across different realisations. As seen in Fig. 9b, the learning process of the swimmer shows the desired behaviour, as the swimming velocity systematically, albeit slowly, improves over time. The evolution of the value function for the different observations (Fig. 9c) uncovers a sizeable part of the story: It highlights how the swimmer initially learns to distinguish that a tailwind is better than no significant wind, which is in turn better than a headwind. Much later in the process, it ends up learning the obvious (to us humans) fact that being rightly oriented is better than being wrongly oriented. Eventually, as can be seen at the very end of this run, it starts to make more precise evaluations, learning that orientation is more important than the wind direction and magnitude. This improvement, which is only reached at the end of the run, indicates that there is still significant potential for further improvement of the swimmer's performance when using such an algorithm.

Figure 10: Time evolution of the approximated policies, shown for each value of the observation $\omega\in\{0,\ldots,5\}$. The probability of choosing a given action $\alpha$ is shown as a coloured area.

Regarding the policy, as shown in Fig. 10, the swimmer evolves and adapts its strategy over time in the course of the learning process. The policy starts from a random state where the swimmer is equally likely to choose any of the seven possible actions, and is thus basically carried along by the flow. Over time, it learns to select with higher probabilities the actions that are more likely to improve its horizontal displacement. The swimmer, for instance, eventually discovers that action $\alpha=6$ is the most effective when it is oriented in the right direction and the wind is favourable or not significant.
This may seem obvious to us, but it took the swimmer a long time to figure it out. It is worth mentioning that this Actor-Critic run is longer by a factor of 10 than the previous ones, and that the performance of the swimmer still improves consistently, although at a slower pace than in the early stages of the process. All in all, the Actor-Critic algorithm presents a learning process that is more stable and consistent across runs than $Q$-learning. This stability leads to a policy that is incrementally improved during learning, resulting in the desired feature of performance improving over time. However, despite its consistent learning process, the swimmer's performance achieved through the Actor-Critic algorithm falls short of the results obtained with the naive strategy, and surpassing it would require substantial computational resources.

### IV.3 Competitive $Q$-learning

We have seen in the previous subsections that various methods of reinforcement learning fail to provide a satisfactory answer to our optimisation problem. On the one hand, bluntly applying methods designed for Markovian systems, such as $Q$-learning, suffers from non-convergence. On the other hand, approximation approaches, which were in principle developed to tackle partial observability, face issues related to an extremely slow convergence, if any, making their use ineffective or even prohibitive. Moreover, all the policies that emerged as intermediate or asymptotic outcomes of these trials were found to be significantly less performant than the naive strategy introduced in Sec. III.3. We interpret these difficulties as a consequence of the rather brusque manner in which we have projected the high-dimensional set of swimmer configurations onto a very small number of discrete observables. Such a reduction of information brings out the chaoticity of the system, including that of the learning process, and explains the sensitivity of our results to both the method and the particular trajectory chosen during iterative optimisation procedures.

In light of these observations, we present here a new perspective. Instead of searching for a single efficient policy that would possibly outperform the naive strategy, we propose to construct a set of admissible policies. To make such a construction as easy as possible, we consider 200 different realisations of deterministic $Q$-learning (with a vanishing exploration parameter, i.e., $\varepsilon=0$) that are obtained by randomly choosing the initial orientation of the swimmer. Each realisation of the learning algorithm is run in parallel to the others for a time $t=2\times 10^{5}\nu^{-1}$. After this duration, the deterministic $Q$-learning algorithm has in all cases stabilised to a given policy, even if the entries of the $Q$-table have not converged. This evolution to a fixed policy is a result of our decision to eliminate exploration by setting $\varepsilon=0$. The 200 policies obtained this way are then used to construct our admissible set.

Figure 11: Results of $200$ realisations of deterministic $Q$-learning. (a) Average swimming speeds, ordered by decreasing performance. The two dotted lines show the velocities of the naive policy and of that obtained with the Actor-Critic algorithm; the two dashed vertical lines mark quasi-discontinuities in the swimmers' performance. (b) Strategies that lead to a significant displacement of the swimmer, again ordered from the most performant to the least.
The two dashed vertical lines mark the same changes of behaviour as in the left panel.

Figure 11a shows the asymptotic velocity $\delta_{\tau}\bar{x}_{1}/\tau$ attained by these 200 instances of $Q$-learning, ordered by decreasing efficiency. One observes roughly three classes of policies, separated in the plot by vertical dashed lines. The top 15% perform better than, or comparably to, the naive strategy. The realisations ranked between 15% and 37% outperform the strategy obtained by the actor-critic algorithm and give a reasonable displacement of the swimmer. Finally, the remaining 63% do not yield any significant displacement. These three regimes are separated by abrupt jumps in the average velocity. As evidenced by Fig. 11b, they correspond to significant changes in the corresponding policies. The top 15% of policies clearly belong to the same category as the naive strategy. They all prescribe a vigorous vertical undulation ($\alpha=6$) when the swimmer is favourably oriented and feels no headwind ($\omega=4$ and $5$). They essentially recommend stopping swimming ($\alpha=3$) for a right orientation and a headwind ($\omega=3$), or when the swimmer is directed the wrong way and experiences a headwind ($\omega=1$ and $2$). They favour horizontal undulations ($\alpha=0$ and $1$) or stopping altogether ($\alpha=3$) when the swimmer is wrongly oriented with the flow blowing to the left. These top strategies differ mostly in the actions chosen for $\omega=0$, 1, and 2. The separation from the second family of policies is clear in Fig. 11b: it corresponds to a change in the action performed for $\omega=3$, from stopping swimming to undulating in the horizontal direction. The change separating the second and third categories is just as clear: the policies there stop prescribing the vigorous vertical undulation ($\alpha=6$) in the favourable configurations. When looking in more detail at the top-ranked 15% of the $Q$-learning outcomes, one notices that the corresponding policies are rather few. They form a set of five admissible policies whose performances are very similar and which overtake each other depending on the realisation of the algorithm. In addition to the naive strategy, which can be written as $\alpha_{\pi}=[3,3,3,3,6,6]$, where the $i$-th element of the array corresponds to the action $\alpha_{\pi}(\omega)$ followed when $\omega=i$, the other four policies are $\alpha_{\pi}=[0,3,3,3,6,6]$, $[1,3,3,3,6,6]$, $[3,3,0,3,6,6]$, and $[0,1,3,3,6,6]$. Notice that none of these five policies emerged, whether at an intermediate stage or asymptotically, in the reinforcement-learning trials of the previous subsections. We select these five strategies as a set of admissible deterministic policies that are potential solutions to our optimal navigation problem. In the next section, we address in more detail their actual performance and robustness when varying the physical settings of our system.

## V Performance and robustness of the admissible policies

### V.1 Long-term statistics

We here provide details on the performance of the five admissible strategies obtained from competitive realisations of deterministic $Q$-learning in Sec. IV.3. Figure 12a shows the time evolution of the velocity $\delta_{\tau}\bar{x}_{1}/\tau$ along trajectories that each follow one of the selected policies (velocities are expressed in units of the swimming speed $V_{\rm swim}$ in the absence of flow).
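Both the admissible policies and the lagged velocity of Fig. 12a are simple to express in code. The following is a minimal sketch, assuming a trajectory of horizontal positions sampled at a fixed interval from the swimmer simulation (which is not shown); the names are ours.

```python
import numpy as np

# Admissible deterministic policies alpha_pi(omega) for omega in {0..5};
# the naive strategy [3, 3, 3, 3, 6, 6] is listed first.
ADMISSIBLE = [
    [3, 3, 3, 3, 6, 6],   # naive strategy
    [0, 3, 3, 3, 6, 6],
    [1, 3, 3, 3, 6, 6],
    [3, 3, 0, 3, 6, 6],
    [0, 1, 3, 3, 6, 6],
]

def act(policy, omega):
    """Action prescribed by a deterministic policy for observation omega."""
    return policy[omega]

def lagged_mean_velocity(x1, dt, lag):
    """delta_tau xbar_1 / tau: displacement averaged over all windows of
    `lag` samples along a trajectory x1 sampled with time step dt."""
    disp = x1[lag:] - x1[:-lag]
    return disp.mean() / (lag * dt)
```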
Unambiguous differences in the performance of the different policies are only visible for $\tau\gtrsim 10^{6}\nu^{-1}$, the shorter time lags being essentially dominated by violent fluctuations of the displacement. This very slow convergence of time averages along the swimmer dynamics can clearly be an important source of difficulties when using iterative optimisation algorithms. We hereafter label these trajectories from ① to ⑤ according to their efficiency ranking. The naive policy is ④, and the diagrams of the other admissible policies are shown in the inset of Fig. 12c.

Figure 12: Comparison of the admissible strategies ① to ⑤ (as depicted in the inset of panel (c)), again evaluated for $\mathcal{F}=15$, $U=0.025\,\ell\nu$, $\ell/L=1$, $u_{0}/U=0.2$, and $A_{0}=0.08$. (a) Time-averaged horizontal velocity $\delta_{\tau}\bar{x}_{1}/\tau$ as a function of the time lag $\tau$. (b) Effective diffusion coefficient obtained from the variance of $\delta_{\tau}\bar{x}_{1}$. (c) Complementary cumulative distribution function of the first-passage time $T_{+1}$ from $x_{1}=j\,L$ to $x_{1}=(j+1)L$.

The variances of the displacement over a time $\tau$ evaluated for the five policies are shown in Fig. 12b. We divide them by the time lag $\tau$ in order to measure effective coefficients of diffusion. One observes almost the same ordering of the trajectories (except for ⑤), suggesting that good performance goes together with weaker fluctuations. All curves saturate to a plateau at large times, indicating a long-term diffusive regime of the horizontal displacement about its average, as already observed for the naive strategy in Sec. III.3. The asymptotic value gives an estimate of the associated coefficient of diffusion. For all policies, it is of the order of the fluid-flow units, namely $UL$, which is itself of the order of the displacement units $\simeq V_{\rm swim}\ell$. This means that, over the time $L/V_{\rm swim}$ needed by the swimmer to travel across a cell, it typically diffuses over a distance equal to the cell size $L$ itself. This strong diffusion accounts for the observed slow convergence of the average velocity. Its order of magnitude corresponds exactly to a finite contribution from trapping: over a time $L/V_{\rm swim}$, the swimmer remains with a finite probability in the same cell rather than moving to the next one. These considerations become much clearer when measuring the probability distribution of the time $T_{+1}$ that the swimmer needs to travel from one cell to the adjacent one. The complementary cumulative distribution functions obtained for the five policies are shown in Fig. 12c. All curves almost collapse on top of each other, up to times corresponding to hundreds of undulatory beats. The admissible policies therefore differ little in their ability to move the swimmer when its conditions are standard. Nonetheless, marked differences are found in the tails of the distributions, which sample trapping events. The two most performant policies (① and ②) are associated with lower probabilities of long transition times $T_{+1}$. This can be interpreted as a consequence of the horizontal undulation that both policies recommend when the swimmer is wrongly oriented with a negative fluid velocity ($\omega=0$).
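The two diagnostics of Fig. 12b and c follow directly from sampled trajectories. Below is a minimal sketch; it includes the conventional one-dimensional factor of two in the diffusion coefficient (whether or not it is used is immaterial when comparing policies), and the passage times $T_{+1}$ are assumed to have been extracted from the cell-crossing events of the simulation.

```python
import numpy as np

def effective_diffusion(x1, dt, lag):
    """Effective diffusion coefficient Var[delta_tau x1] / (2 tau) at a
    given lag (in samples) along a trajectory x1 with time step dt."""
    disp = x1[lag:] - x1[:-lag]
    return disp.var() / (2 * lag * dt)

def ccdf(passage_times):
    """Complementary cumulative distribution P(T_{+1} > t) estimated from
    the sampled first-passage times between adjacent cells."""
    t = np.sort(np.asarray(passage_times))
    p = 1.0 - np.arange(1, t.size + 1) / t.size
    return t, p
```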
This escape mechanism apparently sets policies ① and ② apart from the next two (③ and ④), which both display a fatter tail in the distribution of $T_{+1}$: for these two policies, the swimmer stops undulating when in such a configuration. Finally, policy ⑤, which is beaten by the four others, shows a higher probability at smaller values of $T_{+1}$, possibly indicating that it is more prone to trapping, even if its swimmers can then escape faster.

### V.2 Robustness with respect to the physical parameters

Here we address the performance of the admissible policies when varying the physical properties of the swimmer. We have performed a set of numerical simulations in which we alternately vary the size ratio $\ell/L$, the swimmer flexibility $\mathcal{F}=(\zeta\nu/K)^{1/4}\ell$, or the velocity ratio $U/V_{\rm swim}$, while keeping the other two parameters constant. From these simulations we estimated average swimming speeds by again monitoring the asymptotic displacements of the swimmers. Figure 13a shows the performance of the five policies obtained by varying the length $\ell$ of the swimmer. We find that the dependence upon policy is only visible for swimmers that are sufficiently small compared to the cell size, whereas at larger sizes the five policies perform comparably well. One indeed observes for $\ell\lesssim 0.8\,L$ that the performance ranking of the policies is completely shuffled. The swimmers following the otherwise efficient policies ① and ② barely move towards $x_{1}>0$, while the best performance is achieved by ③. This can be understood by the fact that tumbling and trapping completely change their nature for short swimmers (or, equivalently, large-scale fluid inhomogeneities). Trying to escape by a vigorous vertical swim is hence less efficient than simply ceasing to swim and waiting to be conveyed by the flow to a more favourable region. At larger swimmer sizes ($\ell\gtrsim 0.8\,L$), the ranking between the various policies is almost independent of $\ell/L$, even if the policies asymptotically perform similarly. The swimming speed seems to saturate for $\ell\gtrsim 1.8\,L$. This is due to the fact that long swimmers are very unlikely to be tumbled by the flow, so that only the actions performed for observations $\omega=3$, $4$, and $5$ matter, and these are identical for the five admissible policies.

Figure 13: Robustness of the admissible strategies ① to ⑤ when varying the swimmers' physical parameters. All results were obtained for $u_{0}/U=0.2$ and $A_{0}=0.08$. (a) Average swimming speed as a function of the ratio between the swimmer length $\ell$ and the flow length scale $L$ (for $\mathcal{F}=15$ and $U=0.025\,\ell\nu$ both fixed). (b) Same, as a function of the swimmer flexibility $\mathcal{F}=(\zeta\nu/K)^{1/4}\ell$ (for $\ell/L=1$ and $U=0.025\,\ell\nu$). (c) Same as before, varying this time the fluid flow velocity $U$ (for $\ell/L=1$ and $\mathcal{F}=15$). On each panel, the vertical dashed line shows the parameter value used in earlier sections.

Figure 13b shows the dependence upon flexibility. The various policies perform equally well for rigid swimmers (small $\mathcal{F}$). In that case, the swimmers are almost never bent or buckled by the flow. This prevents trapping and thus does not allow the various policies to display any differences in performance.
At the same time, as seen in Sec. II.2, much energy is dissipated by the elastic forces, hindering efficient swimming motions. The differences between the various strategies are, however, much more visible for flexible swimmers (large $\mathcal{F}$). Policies that efficiently prevent long-term trapping (①, ② and ⑤) stand out clearly from the other two. This divergence is promoted by flexibility, because the swimmers become more and more likely to get trapped as $\mathcal{F}$ increases. Finally, Fig. 13c shows results obtained when varying the amplitude $U$ of the outer fluid flow. For all policies, the average horizontal velocity decreases from the swimming speed in the absence of flow ($U=0$) to very small values for strong fluid flows. None of the admissible policies leads to any significant displacement of the swimmers for fluid velocities exceeding $\simeq 0.045\,\ell\nu\simeq 2.5\,V_{\rm swim}$. It seems from our measurements that the performance ranking of the five policies does not depend on $U$.

### V.3 Tests in two-dimensional unsteady flows

To assess further the robustness of the proposed policies, we now consider the case where the swimmers move in a more realistic flow that solves the incompressible Navier–Stokes equations. The fluid velocity field, in place of being a steady cellular flow, is now a solution of

$\displaystyle\rho_{\rm f}\left[\partial_{t}\bm{u}+\bm{u}\cdot\nabla\bm{u}\right]=-\nabla p+\mu\nabla^{2}\bm{u}-\alpha\bm{u}+\nabla^{\perp}F,$ $\displaystyle\nabla\cdot\bm{u}=0,$ (14)

where $\rho_{\rm f}$ is the fluid mass density, assumed constant, $\mu$ is its dynamic viscosity, $\alpha$ is a friction coefficient accounting for the two-dimensional confinement of the flow, and $\nabla^{\perp}F$ is an external incompressible force that maintains the flow in motion. We choose the stationary cellular force $F=(\alpha UL/\pi)\cos(\pi x_{1}/L)\,\cos(\pi x_{2}/L)$, with a forcing amplitude $U$ and a spatial period $L$ that set the large velocity and length scales of the flow. The dynamics then depends upon two non-dimensional parameters: the viscous Reynolds number $Re_{\mu}=\rho_{\rm f}U\,L/\mu$, which balances inertia and viscous dissipation, and the friction Reynolds number $Re_{\alpha}=\rho_{\rm f}U/(L\alpha)$, which balances inertia and friction. Depending on them, the flow might bifurcate between different regimes Perlekar and Pandit (2010); Michel _et al._ (2016). We assume $Re_{\mu}\gg 1$, so that viscous dissipation acts only at small scales, possibly yielding a direct turbulent cascade of enstrophy. By contrast, $Re_{\alpha}$ is used as a control parameter: for $Re_{\alpha}\ll 1$ one recovers the stationary cellular flow considered previously, while when $Re_{\alpha}$ increases, the flow transitions to a turbulent regime where it is unsteady and chaotic. Illustrations of the associated vorticity fields are given in Fig. 14a and b.

Figure 14: Swimmers immersed in an unsteady flow while following the five admissible policies. Left panels: snapshots of the fluid vorticity $\omega=\partial_{1}u_{2}-\partial_{2}u_{1}$ (contour lines in the background), together with the instantaneous positions of the swimmers coloured according to the policy they follow, for (a) $Re_{\alpha}\simeq 2$ and (b) $Re_{\alpha}\simeq 9.5$. Right panel: (c) average swimming speed as a function of the friction Reynolds number $Re_{\alpha}$ for the five admissible policies, as labelled.
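The cellular forcing and the two Reynolds numbers of Eq. (14) are easy to set up numerically. The sketch below, with placeholder parameter values (not those of the paper), constructs $F$ and $\nabla^{\perp}F$ spectrally on a $2\pi$-periodic grid; it provides only the ingredients of the solver described in the next paragraph, not the solver itself.

```python
import numpy as np

# Placeholder physical parameters
rho_f, mu, alpha_f = 1.0, 1e-3, 0.05
U, L = 1.0, np.pi / 2          # forcing amplitude and spatial period

Re_mu = rho_f * U * L / mu            # viscous Reynolds number
Re_alpha = rho_f * U / (L * alpha_f)  # friction Reynolds number

# 2*pi-periodic grid with N^2 collocation points
N = 256
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
X1, X2 = np.meshgrid(x, x, indexing="ij")

# Stationary cellular forcing potential F ...
F = (alpha_f * U * L / np.pi) * np.cos(np.pi * X1 / L) * np.cos(np.pi * X2 / L)

# ... and the incompressible force grad-perp F = (-dF/dx2, +dF/dx1),
# evaluated spectrally
k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)   # integer wavenumbers
K1, K2 = np.meshgrid(k, k, indexing="ij")
F_hat = np.fft.fft2(F)
f1 = np.real(np.fft.ifft2(-1j * K2 * F_hat))
f2 = np.real(np.fft.ifft2(+1j * K1 * F_hat))
```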
We have performed several numerical simulations of the fluid velocity, varying the friction Reynolds number $Re_{\alpha}$. We used a pseudo-spectral solver with $256^{2}$ collocation points, second-order Runge–Kutta time marching, and implicit integration of the linear viscous and friction terms. The velocity is assumed $2\pi$-periodic and we set $L=\pi/2$. Various swimmers are embedded in the flow (20 for each policy from ① to ⑤) and we monitor their progression toward $x_{1}>0$. In all simulations, both the physical and the navigation parameters are kept the same as in Sec. V.1, namely $\mathcal{F}=15$, $U=0.025\,\ell\nu$, $\ell/L=1$, $u_{0}/U=0.2$, and $A_{0}=0.08$. The average horizontal speed of these swimmers is reported in Fig. 14c as a function of the friction Reynolds number. At low $Re_{\alpha}$, one recovers the same performance ranking as previously observed in stationary cellular flows. However, for $Re_{\alpha}>4$, the flow develops a chaotic regime characterised by open streamlines with rather strong jets in which the swimmer might be entrained in an inappropriate direction. The performance of policies ③ and ④ drops significantly, while the other policies continue to operate relatively well. This dissimilarity can be explained by the contrasting responses to trapping observed in Sec. V.1. Policies ①, ② and ⑤ have in common that they promote a horizontal undulation when the swimmer is wrongly oriented in a headwind. This action allows the swimmer to move transversally and escape strong persistent jets that would otherwise sweep it toward $x_{1}<0$, and it apparently makes a noticeable difference at intermediate values of $Re_{\alpha}$.

## VI Concluding remarks

We have studied in this paper the question of optimising the displacement of undulatory, deformable micro-swimmers evolving in a prescribed, non-homogeneous outer flow. Our physical model imposes close links between the macroscopic displacement of the swimmer and the microscopic active deformation that induces its motility. This clearly differs from standard optimal-navigation problems, which generally assume a scale separation between these two mechanisms that is sufficiently large to consider them independent of each other. We used reinforcement-learning methods to address this problem, constantly trying to interpret the outcomes of our approaches in terms of the underlying physics. An important message that we want to convey is the necessity of determining the relevant physical timescales of the problem. This leads not only to choosing appropriate hyper-parameters of the learning schemes, but also to estimating and understanding their convergence rates. In our settings, the swimmer's configurations form a high-dimensional set from which we arbitrarily decided to exploit only very partial information. These settings turned out to constitute a clear instance where prescribing only a limited knowledge of the state of the agent has drastic impacts on the optimisation of the decision process. We have tested several methods, ranging from simple $Q$-learning to more sophisticated approximation methods. All these trials led to prohibitively long, if not infinite, convergence times. In our opinion, this is due to the fact that the information on the swimmer's configuration is so coarse that our problem deviates significantly from the usual Markovian framework.
This, combined with chaotic dynamics, leads to tremendous fluctuations with respect to initial data that jeopardise the outcomes of reinforcement-learning procedures. The combination of a very partially observable character with the highly chaotic nature of the system is certainly a feature shared by many other practical decision problems. It would be of significant interest to formalise such connections better, for instance by evaluating the stability and ergodicity of the global dynamical system defined as the union of the iterative learning procedure and the underlying dynamics. Despite these difficulties, we have proposed an alternative approach based on concurrent realisations of reinforcement learning. Instead of limiting the optimisation procedure to the search for a unique satisfactory approximation of the optimal policy, we shifted our objective to constructing an almost comprehensive set of admissible strategies whose performance and robustness can be assessed subsequently. The case we have considered is particularly rich, while remaining tractable. The set of admissible strategies was obtained in a quite simple manner by running different instances of deterministic $Q$-learning, whose results proved to be particularly sensitive to the specific initialisation of the algorithm. Moreover, the set constructed this way reduces to only five different admissible policies, making a systematic assessment of their efficiencies rather easy. Still, as demonstrated in Sec. V, the performance of each of these policies can vary appreciably when changing the physical parameters of the swimmer or the type of fluid flow in which it is immersed. Such a systematic investigation would have been impossible if one had to solve an expensive optimisation problem for each setting. Finally, let us stress that most of the difficulties we faced could stem from the arbitrary choice of the limited observables and actions that we considered in the decision process. The motivation for such a prescription came mainly from practical applications. In general, the amount of accessible information and of possible manoeuvres is strongly constrained by the design, cost, and efficiency of the sensors and engines that equip a micro-robot, or by the primitive nature of the micro-organisms under consideration. However, it could well be that the observables and actions that we have chosen are not sufficient for this physical model and the associated navigation problem. It would thus be interesting to repeat this analysis and the reinforcement-learning trials by adding, at both ends of the decision process, encoder-decoder neural networks that would automatically extract and redistribute the relevant information. Interpreting the encoded information could be highly pertinent to the design of optimal sensors and actuators and to their implementation in practical applications.

## Acknowledgments

The authors are grateful to the OPAL infrastructure from Université Côte d'Azur for providing computational resources. This work received support from the UCA-JEDI Future Investments funded by the French government (grant no. ANR-15-IDEX-01) and from the Agence Nationale de la Recherche (grant no. ANR-21-CE30-0040-01).

## References

* Wu _et al._ (2020) Z. Wu, Y. Chen, D. Mukasa, O. S. Pak, and W. Gao, Chem. Soc. Rev. 49, 8088 (2020).
* Servant _et al._ (2015) A. Servant, F. Qiu, M. Mazza, K. Kostarelos, and B. J. Nelson, Adv. Mater. 27, 2981–2988 (2015).
* Berti _et al._ (2020) L. Berti, L. Giraldi, and C. Prud'Homme, ESAIM Proc. Surv. 67, 46 (2020).
* Alouges _et al._ (2013) F. Alouges, A. DeSimone, L. Giraldi, and M. Zoppello, Int. J. Non Linear Mech. 56, 132 (2013).
* Shen and Arratia (2011) X. Shen and P. E. Arratia, Phys. Rev. Lett. 106, 208101 (2011).
* Daddi-Moussa-Ider _et al._ (2021) A. Daddi-Moussa-Ider, H. Löwen, and B. Liebchen, Commun. Phys. 4, 1 (2021).
* Borazjani and Sotiropoulos (2009) I. Borazjani and F. Sotiropoulos, J. Exp. Biol. 212, 576 (2009).
* Cohen and Boyle (2010) N. Cohen and J. H. Boyle, Contemp. Phys. 51, 103 (2010).
* Alouges _et al._ (2019) F. Alouges, A. DeSimone, L. Giraldi, Y. Or, and O. Wiezel, New J. Phys. 21, 043050 (2019).
* Reddy _et al._ (2022) G. Reddy, V. N. Murthy, and M. Vergassola, Annu. Rev. Condens. Matter Phys. 13, 191 (2022).
* Cichos _et al._ (2020) F. Cichos, K. Gustavsson, B. Mehlig, and G. Volpe, Nat. Mach. Intell. 2, 94 (2020).
* Reddy _et al._ (2016) G. Reddy, A. Celani, T. J. Sejnowski, and M. Vergassola, Proc. Nat. Acad. Sci. 113, E4877 (2016).
* Colabrese _et al._ (2017) S. Colabrese, K. Gustavsson, A. Celani, and L. Biferale, Phys. Rev. Lett. 118, 158004 (2017).
* Gustavsson _et al._ (2017) K. Gustavsson, L. Biferale, A. Celani, and S. Colabrese, Euro. Phys. J. E 40, 1 (2017).
* Schneider and Stark (2019) E. Schneider and H. Stark, Europhys. Lett. 127, 64003 (2019).
* Muiños-Landin _et al._ (2021) S. Muiños-Landin, A. Fischer, V. Holubec, and F. Cichos, Sci. Robot. 6, eabd9285 (2021).
* Qiu _et al._ (2022) J. Qiu, N. Mousavi, K. Gustavsson, C. Xu, B. Mehlig, and L. Zhao, J. Fluid Mech. 932, A10 (2022).
* Jaya Kumar A. _et al._ (2020) Jaya Kumar A., A. K. Verma, J. Bec, and R. Pandit, Phys. Rev. E 101, 043110 (2020).
* Peng _et al._ (2016) X. B. Peng, G. Berseth, and M. Van de Panne, ACM Trans. Graph. 35, 1 (2016).
* Levine _et al._ (2016) S. Levine, C. Finn, T. Darrell, and P. Abbeel, J. Mach. Learn. Res. 17, 1334 (2016).
* Pironneau and Katz (1974) O. Pironneau and D. Katz, J. Fluid Mech. 66, 391 (1974).
* Lindner and Shelley (2015) A. Lindner and M. J. Shelley, in _Fluid-Structure Interactions in Low-Reynolds-Number Flows_, edited by C. Duprat and H. A. Stone (Royal Society of Chemistry, Cambridge (UK), 2015) pp. 168–192.
* Moreau _et al._ (2018) C. Moreau, L. Giraldi, and H. Gadêlha, J. R. Soc. Interface 15, 20180235 (2018).
* Picardo _et al._ (2018) J. R. Picardo, D. Vincenzi, N. Pal, and S. S. Ray, Phys. Rev. Lett. 121, 244501 (2018).
* Rosti _et al._ (2018) M. E. Rosti, A. A. Banaei, L. Brandt, and A. Mazzino, Phys. Rev. Lett. 121, 044501 (2018).
* Young and Shelley (2007) Y.-N. Young and M. J. Shelley, Phys. Rev. Lett. 99, 058303 (2007).
* Brouzet _et al._ (2014) C. Brouzet, G. Verhille, and P. Le Gal, Phys. Rev. Lett. 112, 074501 (2014).
* Allende _et al._ (2018) S. Allende, C. Henry, and J. Bec, Phys. Rev. Lett. 121, 154501 (2018).
* Gray and Lissmann (1964) J. Gray and H. W. Lissmann, J. Exp. Biol. 41, 135 (1964).
* Berri _et al._ (2009) S. Berri, J. H. Boyle, M. Tassieri, I. A. Hope, and N. Cohen, HFSP J. 3, 186 (2009).
* Friedrich _et al._ (2010) B. M. Friedrich, I. H. Riedel-Kruse, J. Howard, and F. Jülicher, J. Exp. Biol. 213, 1226 (2010).
* Jikeli _et al._ (2015) J. F. Jikeli, L. Alvarez, B. M. Friedrich, L. G. Wilson, R. Pascal, R. Colin, M. Pichlo, A. Rennhack, C. Brenker, and U. B. Kaupp, Nat. Commun. 6, 1 (2015).
* Tornberg and Shelley (2004) A.-K. Tornberg and M. J. Shelley, J. Comput. Phys. 196, 8 (2004).
* Rothstein _et al._ (1999) D. Rothstein, E. Henry, and J. P. Gollub, Nature 401, 770 (1999).
* Singh _et al._ (1994) S. P. Singh, T. Jaakkola, and M. I. Jordan, in _Machine Learning Proceedings 1994_, edited by W. W. Cohen and H. Hirsh (Morgan Kaufmann, San Francisco (CA), 1994) pp. 284–292.
* Sutton and Barto (2018) R. S. Sutton and A. G. Barto, _Reinforcement learning: An introduction_ (The MIT Press, Cambridge (MA), 2018).
* Berti _et al._ (2022) L. Berti, Z. El Khiyati, Y. Essousy, C. Prud'Homme, and L. Giraldi, IFAC-PapersOnLine 55, 1 (2022).
* Najafi and Golestanian (2004) A. Najafi and R. Golestanian, Phys. Rev. E 69, 062901 (2004).
* Perlekar and Pandit (2010) P. Perlekar and R. Pandit, New J. Phys. 12, 023033 (2010).
* Michel _et al._ (2016) G. Michel, J. Herault, F. Pétrélis, and S. Fauve, Europhys. Lett. 115, 64004 (2016).
# SurfFlow: high-throughput surface energy calculations for arbitrary crystals

Firat Yalcin Computational Materials Physics, University of Vienna, Kolingasse 14-16, 1090, Vienna, Austria Michael Wolloch VASP Software GmbH, Sensengasse 8/12, 1090, Vienna, Austria

###### Abstract

We introduce SurfFlow, an open-source high-throughput workflow package designed for automated first-principles calculations of surface energies in arbitrary crystals. Our package offers a comprehensive solution capable of handling multi-element crystals, nonstoichiometric compositions, and asymmetric slabs, for all potential terminations. To streamline the computational process, SurfFlow employs an efficient pre-screening method that discards surfaces with suspected high surface energy before conducting resource-intensive density functional theory computations. The results generated are seamlessly compiled into an optimade-compliant database, ensuring easy access and compatibility. Additionally, a user-friendly web interface facilitates workflow submission and management, provides result visualization, and enables the examination of Wulff shapes. SurfFlow represents a valuable tool for researchers looking to explore surface energies and their implications in a diverse range of systems.

Keywords: high-throughput, surface energies, density functional theory, Wulff construction

## 1 Introduction

The surface energy of a material determines its stability and influences its adhesion properties, its catalytic ability, and its ability to form thin films and interfaces. It is defined as the work needed to create a new surface from a bulk crystal. It is a fundamental property that can be used to explain the stability of surface facets, the Wulff shape [1] (the equilibrium crystal shape of the material), and phenomena such as surface reconstruction [2, 3], segregation [4, 5, 6], and catalysis [7, 8, 9]. Catalytic properties are closely tied to surface energy, and nanomaterials with high surface energies show exceptional properties for electrocatalysis, photocatalysis, and gas-sensor applications [10]. The surface energies of a pair of materials are also an important descriptor for the adhesion energy of these materials, a property of great importance for the construction of solid-state batteries [11, 12]. Experimental data on surface energies are scarce because of the technical difficulty of the measurements. The available data are primarily limited to specific facets of elemental crystals. Experimental measurements of surface energies of solids can be based, e.g., on cleavage [13, 14], on the interfacial energy of small smooth elastic spheres [15], or on contact-angle measurements with various liquids [16, 17]. The main problem with cleavage-based methods is plastic deformation at the crack tip, while the second approach strictly works only for amorphous solids. Contact-angle measurements, on the other hand, are associated with several difficulties and limitations that affect the accuracy of the results, such as surface contamination, roughness, and assumptions in the underlying mathematical models. In the quest to find new and highly functional materials, expensive and time-consuming experimental studies have been supplemented more and more by large-scale computational high-throughput (HT) screenings in recent years, mainly employing density functional theory (DFT). Due to the difficulties of conducting experiments to determine surface energies, and their crucial importance for a range of applications (e.g. Refs.
[18, 19]), several HT tools to tackle this problem have been published in recent years [20, 21, 22, 23]. Calculating surface properties via ab-initio atomistic modeling alleviates the problem of isolating specific facets and provides exceptional control over the system parameters. Tran et al. generated a database of the surface energies of elemental crystals [20], in which they provide surface energies of more than 100 polymorphs of approximately 70 elements. Yang et al. provide an open-source code [21] to generate organic surfaces from bulk molecular crystals, which was very recently updated to better interface with DFT and neural-network interatomic potentials, among other improvements [24]. Brlec et al. provide a Python library [22] that automates surface cleaving and the processing of raw DFT outputs to extract materials properties such as the surface energy. Furthermore, there has been interest in predicting surface properties from unrelaxed surfaces, skipping DFT relaxations and using a neural-network model to learn and predict cleavage energies and Wulff constructions [23]. Although these studies are effective in calculating surface energies for specific systems, they are hampered by some constraints: the ability to handle multi-element crystals is very limited, and/or they are not fully automated, requiring the user to manually initiate calculations and run post-processing scripts. This hinders the high-throughput screening approach, as manual handling of such a large number of systems quickly becomes impractical. The main problem that has so far eluded a comprehensive HT treatment of surface energies of multi-element crystals is to correctly and automatically handle asymmetric and/or nonstoichiometric slabs, which are unavoidable for most systems, while still keeping the computational effort tractable. This is further complicated by the sheer number of symmetrically inequivalent surface directions (Miller indices), each of which might have several unique terminations. Looking at surfaces only up to a maximal Miller index of 3, a rather simple crystal with only two distinct elements might have on the order of 100 unique surface configurations, while only a small fraction of those will show favorable surface energy and actually contribute to the Wulff shape. In the present work, we aim to solve these issues and present a fully automatic approach to calculate surface energies and Wulff shapes for arbitrary crystals. Our code package efficiently handles multiple terminations of asymmetric and nonstoichiometric slabs. Additionally, it can filter out polar surfaces (polar surfaces usually have higher surface energies than non-polar ones and usually do not contribute to the Wulff shape; their surface energies can nevertheless be calculated with SurfFlow if desired, as discussed in section S3 of the SM) and predict low-energy ones, as well as provide fully automatic calculation handling, error correction, database operations, and output visualization. The package is developed in Python 3 and builds on well-known and commonly used tools for materials science and HT computations like pymatgen [25], atomate [26], and Fireworks [27]. It is available from the Python package index (PyPI) or a GitLab repository as open-source software and uses the Vienna Ab-initio Simulation Package, vasp [28, 29, 30], to perform the DFT calculations. See Sec. 3 for the settings used.
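The scale of this combinatorial problem is easy to check with pymatgen, on which SurfFlow builds. A minimal sketch follows; the structure-file path is a placeholder and the generation parameters are illustrative rather than SurfFlow's defaults.

```python
from pymatgen.core import Structure
from pymatgen.core.surface import generate_all_slabs

# Any two-element bulk crystal, e.g. exported from the Materials Project
bulk = Structure.from_file("POSCAR")  # placeholder path

slabs = generate_all_slabs(
    bulk,
    max_index=3,          # all symmetrically distinct directions up to MMI 3
    min_slab_size=10.0,   # minimal slab thickness in Angstrom
    min_vacuum_size=15.0, # vacuum separating periodic images
)
print(f"{len(slabs)} unique Miller-index/termination combinations")
```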
While the methods and algorithms used in our workflow are not individually new, we believe that no other freely available Python package offers a comparable richness of features, ease of use, and efficiency. In the following sections, we present our methodology, highlight our measures to optimize slab size and to pre-screen for low-energy surfaces using bond-valence information, and benchmark some results against available experimental and calculated data. A detailed flowchart and explanation of the workflow architecture can be found in S5 of the SM.

## 2 Results & Discussion

### 2.1 High-throughput handling of arbitrary surfaces

Modeling a surface with a DFT code, which generally employs periodic boundary conditions in all directions, requires the construction of a slab: a thin slice of material separated by a vacuum region from its duplicates in one direction. The simplest case of a surface energy ($\gamma$) calculation is for a symmetric and stoichiometric slab, meaning that both surfaces of the slab are equivalent and the system contains an integer number of formula units of the primitive bulk cell of the material. Then the surface energy $\gamma$ is

$\gamma=\frac{1}{2A}\left[E_{\mathrm{slab}}-NE_{\mathrm{bulk}}\right]\quad,$ (1)

where $A$ is the area of the exposed surface, $E_{\mathrm{slab}}$ the total energy of the relaxed slab, $E_{\mathrm{bulk}}$ the energy of the bulk unit (it has been shown that using a bulk cell with the same lateral symmetry helps to converge the surface energy quickly with respect to slab thickness [31]), and $N$ the number of formula units in the slab. For all other cases of symmetry and stoichiometry, a more general expression applies:

$\gamma_{1}=\frac{1}{A}\left[E_{\mathrm{slab}}-\sum_{i}N_{i}\mu_{i}(p,T)\right]-\gamma_{2}\quad,$ (2)

where asymmetry leads to two distinct surface energies, $\gamma_{1}$ and $\gamma_{2}$, and the bulk energy is replaced by a term involving the chemical potentials $\mu_{i}$, which quantify the contributions of the missing atoms in nonstoichiometric slabs. Because the chemical potential of a species is a function of pressure and temperature, instead of a single value for the surface energy we obtain a region of stability for the surface that is defined by the environment. While it is in principle possible to perform reference calculations to determine the region of stability with respect to the chemical potentials, this becomes progressively more difficult as the number of species in the system increases. It is also not always trivial to choose a bulk or gas-phase reference [32, 33]. We have thus decided to compute our surface energies with respect to a complete vacuum and not to attempt to include chemical potentials. A number of ways to computationally decouple the two surfaces of a slab in asymmetric cases, without relying on chemical potentials, have been presented in the literature. Notable examples are the wedge method [34, 35], the energy density method [36], the surface passivation method [34, 37], the twinned-slab method [38], and methods based on combinations of unrelaxed and relaxed surface energies [39, 40]. All these approaches intend to isolate one side of the slab by eliminating the contribution of the other side to the free energy.
In this work, we have chosen to employ the method by Tian et al., which calculates the surface energy as the combination of cleavage (very similar to the unrelaxed surface energy given in [39]) and relaxation energies in a way that is highly transferable and able to deal with both asymmetric and nonstoichiometric slabs [40]. This approach is also similar to that of Eglitis and Vanderbilt [41], with some improvements in dealing with asymmetric surfaces. It is based on the notion that a slab is first cleaved from the bulk, after which the surfaces relax into their final shape. The cleavage energy is defined as

$E_{\mathrm{cleavage}}=\frac{1}{2A}\left[E_{\mathrm{slab}}^{\mathrm{unrelax}}-NE_{\mathrm{bulk}}\right]\quad,$ (3)

where $E_{\mathrm{slab}}^{\mathrm{unrelax}}$ is the total energy of the unrelaxed slab, $E_{\mathrm{bulk}}$ is the energy of the bulk reference structure, and $N$ is the number of formula units in the slab. (We should note that this definition of the cleavage energy differs from its more common interpretation as the average surface energy of an asymmetric slab.) The relaxation energy for symmetric slabs is simply given as

$E_{\mathrm{relaxation}}=\frac{1}{2A}\left[E_{\mathrm{slab}}^{\mathrm{relax}}-E_{\mathrm{slab}}^{\mathrm{unrelax}}\right]\quad,$ (4)

where $E_{\mathrm{slab}}^{\mathrm{relax}}$ is the total energy of the fully relaxed slab. Two key assumptions in this method allow it to treat asymmetric and nonstoichiometric slabs. First, it is assumed that the cleavage energy (sometimes referred to as the unrelaxed surface energy in the literature) is equal for complementary terminations (i.e. sequential layers in the infinite bulk). This assumption follows from the idea that complementary terminations are created simultaneously by cleaving the bulk in a single plane. (In reference [40], this is claimed to hold only for systems where the constituent species have similar electronegativities. Based on some theoretical arguments and test calculations, we believe that it breaks down only for extreme differences in electronegativity and can be considered negligible for most systems; see the SM, section 4, for details.) Thus, one can calculate the contribution of the cleavage energy to the surface energy on each side of the slab separately, provided that the slab is stoichiometric, as Eqn. 3 is valid only for stoichiometric slabs. The second assumption is that freezing half of the slab is enough to decouple the two surfaces of the slab and simulate a semi-infinite system, which is fulfilled if the slab is thick enough that the middle layers are bulk-like. The relaxation energy for asymmetric slabs can then be expressed separately for the different terminations $T$:

$E_{\mathrm{relaxation}}(T)=\frac{1}{A}\left[E_{\mathrm{slab}}^{\mathrm{relax}}(T)-E_{\mathrm{slab}}^{\mathrm{unrelax}}\right]\quad,$ (5)

where this time we divide by $A$ since only one side of the slab is relaxed. These assumptions together allow SurfFlow to compute the surface energy $\gamma$ as

$\gamma=E_{\mathrm{cleavage}}+E_{\mathrm{relaxation}}\quad,$ (6)

where $E_{\mathrm{relaxation}}$ is either equation 4 or equation 5, depending on the symmetry of the slab. Of course, equation 6 depends on the termination $T$ in the case of asymmetric slabs. While it is possible in theory to handle all cases of symmetry and stoichiometry using this approach, in practice we have to impose further constraints on the systems to be able to automate the process of calculating surface energies.
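The bookkeeping of Eqs. (3)-(6) amounts to a few lines of arithmetic once the total energies are available. Below is a minimal sketch of this bookkeeping, assuming energies in eV and areas in Å²; it is our own illustration, not SurfFlow's internal API.

```python
EV_PER_A2_TO_J_PER_M2 = 16.0218  # unit conversion, if J/m^2 is desired

def surface_energies(E_unrelax, E_bulk, n_units, area,
                     E_relax_top, E_relax_bottom=None):
    """Surface energies from the cleavage/relaxation split, Eqs. (3)-(6).

    Returns a single gamma for symmetric slabs (E_relax_bottom omitted),
    or a (gamma_top, gamma_bottom) pair for asymmetric ones.
    """
    e_cleave = (E_unrelax - n_units * E_bulk) / (2 * area)        # Eq. (3)
    if E_relax_bottom is None:  # symmetric slab, both sides relaxed at once
        return e_cleave + (E_relax_top - E_unrelax) / (2 * area)  # Eqs. (4), (6)
    # Asymmetric slab: each half relaxed separately, Eq. (5)
    gamma_top = e_cleave + (E_relax_top - E_unrelax) / area
    gamma_bottom = e_cleave + (E_relax_bottom - E_unrelax) / area
    return gamma_top, gamma_bottom                                # Eq. (6)
```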
The first such constraint is that our computational framework deals strictly with bulk-terminated surfaces, which means that adatoms or surface vacancies cannot be considered. Furthermore, we limit nonstoichiometric systems to symmetric slabs. This is achieved by modifying the bottom surfaces of the slabs, but it is not possible for all slabs; this option is therefore turned off by default and we generate equivalent stoichiometric, but asymmetric, slabs instead. This is possible in all cases, and all possible terminations of a bulk are covered by this approach. For these systems, it suffices to perform two relaxation calculations (top and bottom halves) and two static calculations (static slab and bulk reference) to obtain both surface energies $\gamma(T_{1})$ and $\gamma(T_{2})$. This case is depicted for an example system (KCl) in Fig. 1. For a more detailed explanation of the symmetric but nonstoichiometric case, see the SM, section 2. With this method [40], we are able to deal with all terminations using only a few calculations, avoiding both the large system sizes required by the wedge and twinned-slab methods and the difficulties in generalizability of the energy density and passivation methods.

Figure 1: Surface energy calculation scheme for an asymmetric and stoichiometric KCl (mp-23193) (111) slab (a) with complementary terminations, using the method by Tian et al. [40]. The required calculations are relaxations of the initial slab with only the (b) top and (c) bottom halves relaxed, (d) a static calculation of the initial slab, and (e) a static calculation of the oriented unit cell.

### 2.2 Optimizing performance by slab resizing and predicting low energy surfaces

While simple DFT calculations with a GGA or metaGGA functional can no longer be considered expensive for systems up to around 100 atoms, slabs usually have low symmetry, and especially higher-index surfaces often need slabs with large lateral extensions. Even if a single calculation is relatively cheap, there are usually many unique directions with several possible terminations each, even for relatively simple crystals. Only surfaces with comparatively low energy are formed in most experimental circumstances, so those are of considerably more interest than unstable ones. It is therefore prudent to minimize the computational load by optimizing slabs and pre-screening surfaces to determine whether they are likely to occur in experiments and/or contribute to the Wulff shape. Optimization of slab size is the simpler task, but still not trivial. While pymatgen provides great tools for generating slabs and minimizing their lateral dimensions, the minimal thickness is set by the size of the oriented unit cell (OUC). For high-index surfaces, this OUC might become quite large due to the necessity of periodic boundary conditions. For the slabs, the periodic boundary condition in the direction of the surface normal is broken by the vacuum; SurfFlow therefore reduces the thickness to match the intended layer count as accurately as possible. The original terminations must be preserved, however, which SurfFlow achieves by clustering atomic sites into layers and removing appropriate chunks of them. This procedure is described in detail in S5.2 of the supplemental material (SM). Filtering out surfaces that are unlikely to contribute to the Wulff shape, because they should have very high surface energy, is a much more complex task. In the first step, we attempt to filter out polar surfaces, which usually have higher surface energies than their nonpolar counterparts of the same crystal [42, 43, 44].
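In pymatgen, on which SurfFlow builds, such a pre-filter is compact once the bulk structure carries (guessed) oxidation states. The following is a minimal, self-contained sketch of the idea; SurfFlow's actual implementation may differ in its details.

```python
from pymatgen.core import Structure
from pymatgen.core.surface import generate_all_slabs

bulk = Structure.from_file("POSCAR")   # placeholder path
bulk.add_oxidation_state_by_guess()    # guessed oxidation states enable dipole checks

slabs = generate_all_slabs(bulk, max_index=3,
                           min_slab_size=10.0, min_vacuum_size=15.0)

# Keep only slabs without a net dipole moment along the surface normal
nonpolar = [slab for slab in slabs if not slab.is_polar()]
print(f"kept {len(nonpolar)} of {len(slabs)} candidate slabs")
```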
The polarity of the surfaces is identified by pymatgen by guessing the most likely oxidation states and calculating the dipole moment in the direction of the surface normal. More about polar slabs can be found in section 3 of the SM. Next, SurfFlow attempts to rank the remaining surfaces by quantifying the bonds broken during slab generation, before actually performing any DFT calculations. The workflow then proceeds with the $N$ candidates predicted to have low surface energies, which are used for the Wulff shape generation. (Note that high-index and high-energy surfaces can be beneficial for catalysis applications [19], so this option can be turned off when submitting a workflow in order to include all possible slabs compatible with the slab-generation parameters.) In the following paragraphs, we present the broken-bond model used by SurfFlow in some more detail and benchmark its performance on a comprehensive set of materials with different structures, constituents, and bonding types. Broken-bond models have been used in the past to predict the surface energies of certain materials [45, 46, 47]. However, the performance of the approach depends heavily on the material and the types of bonds present, which leads to better performance for some systems than for others. SurfFlow adapts this method to apply to a wide range of materials by counting and weighting broken bonds. This is especially important for multi-element crystals, where bond strengths can vary significantly. The weight assigned to each bond should correspond to its strength, as the energy needed to break the bonds is the main contributor to the surface energy. SurfFlow makes use of bond valences, which have been shown by Etxebarria et al. to approximate the bond energy [48]. The valence of a bond between species $i$ and $j$ is given as

$S_{ij}=\exp\left[(R_{0}-R_{ij})/b\right]\quad,$ (7)

where $R_{0}$ is the optimal bond length for these bonding partners and $R_{ij}$ is the realized bond length in the investigated material. The parameter $b$ measures the softness of the bond. Eqn. 7 is the most widely used expression for the bond valence, and tables of values of $R_{0}$ and $b$ for numerous bonds are available in the literature. To be independent of necessarily incomplete tabulated data, SurfFlow uses the sum of the covalent radii of the species forming the bond as the ideal bond length, and the frequently used 0.37 Å as the $b$ parameter [49]. (Some studies suggest that significantly different $b$ values should be used depending on the bond type [49], but this is intractable for an HT approach aimed at generality; we do, however, allow the default parameter to be changed in the settings file.) We define the total bond valence sum (BVS) of a given site $i$ as a sum over all neighbors,

$S_{i}=\sum_{j\neq i}\exp\left[(R_{0}-R_{ij})/b\right]\quad.$ (8)

We consider all neighbors up to 1.2 times the largest bond length of the bulk; due to the exponential decay of the bond valence, however, mostly nearest neighbors contribute. The total bond valence sum $S$ of the broken bonds is then simply the sum of the partial sums over all surface sites,

$S=\sum_{i}S_{i}\quad.$ (9)

A similar, but slightly more complex, approach was recently used with good success to estimate the relative surface energies of homoatomic transition-metal crystals [50]. However, for our more diverse data set, the method presented here was found to be slightly more reliable.
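Eqs. (7)-(9) reduce to a short loop once the broken bonds of a cleaved surface have been enumerated (that enumeration, a comparison of slab and bulk coordination, is not shown here). In the sketch below, the covalent radii are tabulated literature values, and the example bond lengths are merely illustrative.

```python
import math

B = 0.37  # bond-softness parameter b in Angstrom (SurfFlow's default)

def bond_valence_sum(broken_bonds, covalent_radius):
    """Total BVS of a termination, Eqs. (7)-(9): the sum over all broken
    bonds of exp((R0 - R_ij)/b), with R0 the sum of the covalent radii.

    broken_bonds: iterable of (species_i, species_j, bond_length_in_A)
    covalent_radius: dict mapping species symbol -> covalent radius in A
    """
    total = 0.0
    for sp_i, sp_j, r_ij in broken_bonds:
        r0 = covalent_radius[sp_i] + covalent_radius[sp_j]
        total += math.exp((r0 - r_ij) / B)
    return total

# Illustrative example: two broken Ti-O bonds of a TiO2 slab
radii = {"Ti": 1.60, "O": 0.66}  # Cordero covalent radii in Angstrom
print(bond_valence_sum([("Ti", "O", 1.95), ("Ti", "O", 1.98)], radii))
```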
No method based on bond breaking can differentiate between inequivalent but complementary terminations of an asymmetric slab, because both surfaces originate from a single cleavage. However, the bond valence sum generally correlates well with DFT surface energies, with a median Pearson correlation value of 0.87. Thus, in most cases, the method can be used as an excellent preliminary filter to weed out facets and terminations predicted to have high surface energies.

### 2.3 Benchmarking the bond-valence model for predicting low energy surfaces

We have tested our bond-valence model on 36 materials (see table 1 in the SM), encompassing different bonding characteristics, crystal structures, and constituent elements, from monoatomic to three-component systems. All unique non-polar surfaces up to a maximal Miller index (MMI) of 3 have been considered for each material, 653 surface energies in total. (For a couple of systems we found only 4 unique non-polar surfaces and decided to include polar ones with $\mathrm{MMI}\leq 2$ as well for those. Also, slabs with more than 100 sites were disregarded to conserve computational resources. See S1 in the SM for more details.) The median correlation coefficient across all test systems is $r_{\mathrm{med}}=0.870$, while two systems with slightly negative correlation reduce the average to $r_{\mathrm{avg}}=0.803$. However, 30 of the 36 materials tested show a correlation larger than 0.7, as can be seen in the histogram in Fig. 2. The four worst-performing systems are bcc Li (mp-135, $r=-0.087$) and $\gamma$-TiAl (mp-1953, $r=-0.070$), as well as bcc and hcp Fe (mp-13, $r=0.573$; mp-136, $r=0.550$). By far the worst correlation values (essentially zero) are observed for bcc Li and $\gamma$-TiAl. Both materials have several surfaces that are extremely close to one another in energy. For Li, a very soft metal whose surface energies are all low and whose BVS values are very close together, this is not very surprising. We thus recommend not relying on BVS filtering when calculating surface energies of very weakly bound materials. For TiAl, however, the situation is different and might be rooted in difficulties in correctly evaluating Ti $d-d$ interactions in intermetallic systems. Sang and coworkers measured the charge density distribution of $\gamma$-TiAl with highly accurate quantitative convergent-beam electron diffraction [51]. While the qualitative agreement with full-potential DFT calculations was good, there were considerable deviations for PAW-based calculations as we conduct them here. A more detailed discussion of all systems with $r<0.6$ can be found in S11 of the SM. In the inset of Fig. 2 we plot examples of the correlation between $S$ and $\gamma$ for four systems, colored according to the bins into which they are sorted. To test the bond-valence-sum filtering approach on Wulff shapes, we construct Wulff shapes from the $N$ surfaces with the lowest bond valence sums; all surface energies entering the construction are, however, calculated with DFT. We then compare the mean absolute errors (MAE) in the area fractions of these Wulff shapes with respect to the full DFT results up to MMI 3. The area fraction gives the percentage of the Wulff shape contributed by each facet.

Figure 2: Main plot: Histogram sorting the investigated materials into bins according to the Pearson correlation of the bond valence sum with the DFT surface energies.
Inset: Surface energy $\gamma$ vs bond valence sum $S$ for four exemplary materials representing different success situations of the BVS estimation of the surface energy. Colors indicate the bins of the main plot. The shaded regions represent the 95% confidence interval for the regression estimate.

In Fig. 4 we plot Wulff shapes for the four systems already presented in Fig. 2, calculated by DFT from the lowest-$N$ BVS surfaces. These give examples of the evolution of the Wulff shape as more and more surfaces are considered, for different classes of BVS/$\gamma$ correlation. In the case of CoSi2 (Fig. 4(a)), where the correlation is really good, we see that for 3 computed surfaces the (331) facet features quite prominently alongside the dominant (111) facet and a sliver of (110). The total MAE of the area fractions is already very low at 1.8%. Adding four more surface-energy calculations reduces the MAE a bit more by including the (211), (320), and (321) facets, the latter two of which disappear from the final shape at $N=11$ in favor of the (311) and (310) facets, both of which are considerably higher in energy.

Figure 3: Coverage required to reach an MAE threshold of 0.05 for the area fraction compared to the full Wulff shape (coverage 100%). The inset visualizes how many of the $N$ lowest BVS predictions must be calculated with DFT for all materials to find the lowest-energy surface.

For LiPd (Fig. 4(b)) we have a total of 41 surfaces to consider. Again, a high-index surface appears early with the (301) facet, which shows up even at $N=3$ and is ever so slightly lower in energy than (101). At $N=9$, the (320) and (001) facets also appear, but (320) ends up not contributing to the final shape at $N=38$, being replaced by the (101) surface. The (103) facet, shown with considerable area fraction at $N=38$, does not appear until quite late ($N=32$), due to its relatively high surface energy. We see that, with many surfaces close in energy, even systems with decent correlations between BVS and $\gamma$ might not produce the correct Wulff shape unless many surfaces are considered. MAEs are therefore higher here, at 9.4% for $N=9$ and 7.2% for $N=14$. However, calculating the Wulff shape by computing all unique surfaces up to an MMI of 1 or 2 (as commonly done) results in even higher errors of 14 and 13.2%, at costs of 7 and 13 calculations, respectively. For Al3Pt2 (Fig. 4(c)), where the correlation is in the bottom six of our test set, low-index facets dominate the Wulff shape. Here, the MAE starts off relatively high at 11.2% for $N=3$ and, as we add more surfaces, drops to 6.9% at $N=13$, mostly due to the appearance of the (100) facet, which also contributes to the final shape. At $N=27$, we see contributions from (111) and (2-12), which remain in the final Wulff shape, and at this value of $N$ the MAE drops to 3.6%. Calculating up to MMI 1 or 2, however, results in MAEs of 5.2 and 0.0%, at comparatively high costs of 11 and 23 calculations, respectively. For this system, this ends up being the better approach, providing higher accuracy with fewer calculations. For $\gamma$-TiAl (Fig. 4(d)), where we have no correlation, we do not expect this method to work well, and indeed similar problems appear: whenever $N<15$, SurfFlow misses the lowest-energy surface, (110), which also contributes to the Wulff shape. However, the second-lowest-energy surface, (101), is immediately captured, as is the high-energy (001) facet.
The MAEs for $N=3$ and $N=9$ are decent at 9.1 and 6.8%, and at $N=15$ we obtain the correct Wulff shape, even though there are 29 distinct surfaces for $\mathrm{MMI}=3$. However, computing up to MMI 1 or 2 is more advantageous here as well, with MAEs of 3.2 and 2.4% at costs of 5 and 12 calculations, respectively. Overall, we can confidently say that utilizing the BVS is better than just relying on low-index surfaces, although low surface energy alone does not determine whether a facet contributes to the Wulff shape. The average MAE (compared to the full Wulff shape for MMI 3) over all materials is 17.1% if an MMI of 1 is used and 12.5% if the MMI is 2. Using the same number of calculations per material as for MMI 1 or 2, but selecting them with the BVS predictions, the average MAE goes down to 9.0 and 2.7%, respectively. Fig. 3 shows a histogram of the materials with respect to the coverage required to reach an MAE of 5% when compared to the full Wulff shape at 100% coverage (in this context, coverage is the ratio between the number of surfaces under consideration for a given material and the total number of surfaces available for that material). The results show that a coverage of 40% is sufficient to reach this threshold for the vast majority of the materials studied ($\sim 75\%$), while only 4 of the 36 materials need more than 60%. Finally, the lowest-energy orientation of a crystal is also of considerable interest. In the study of 36 materials, the BVS ranking correctly predicts the surface with the lowest overall energy in 67% of the cases, and in over 85% it is within the lowest 3 predictions (see the inset of Fig. 3). If finding the lowest-energy surface for a set of materials is the primary concern, SurfFlow's BVS filtering is thus an excellent tool to save computation time.

(a) CoSi2 (mp-2379) (b) LiPd (mp-2744) (c) Al3Pt2 (mp-10905) (d) TiAl (mp-1953)

Figure 4: Wulff shapes of 4 materials with varying correlations between $S$ and $\gamma$ for different numbers of calculated Miller-index/surface-shift combinations. The facet colors correspond to the surface energy, from dark blue (lowest) to dark red (highest).

### 2.4 Benchmarking the SurfFlow workflow

As mentioned in section 1, it is hard to measure surface energies experimentally. Thus, it is not easy to benchmark our workflow against hard experimental reference data. We therefore first validated our workflow on some monoatomic systems, with data from the Materials Project as reference [20]. The agreement is excellent apart from some outliers, which hint at problems in the data of reference [20], since our results match other previous studies. A detailed discussion of these results can be found in S10 of the SM. A more rigorous test of the important capability to treat multi-component systems is to compare our calculated Wulff shapes with the experimentally determined and previously calculated nanoparticle shapes of the anatase and rutile phases of TiO2 (mp-390 and mp-2657). These oxide materials potentially feature polar surfaces and reconstructions, and thus present a particularly hard challenge for SurfFlow's algorithms. They are also well-studied systems, both experimentally and especially theoretically (see e.g. the reviews by Diebold and by Liu et al. [52, 53]). For these calculations we employed the defaults of SurfFlow (see S5.1 in the SM), disregarded polar surfaces, and utilized the bond valence sum for pre-screening, calculating only the 10 lowest-BVS terminations with DFT.
For anatase, SurfFlow predicts the (101) facet to be dominant (99%) in the Wulff shape and to have the lowest surface energy of all facets at 0.41 J/m2, in good agreement with previous results of 0.44 J/m2 [54, 55]. The only other surface contributing to the Wulff shape is a small (001) facet, which has a considerably higher surface energy of 0.95 J/m2 (0.90 J/m2 [54]). Nanoparticles with this shape were grown more than 20 years ago by Penn and Banfield [56]. Further calculated low-energy surfaces are the (100) and (201) facets, at 0.51 J/m2 (0.53 [54]) and 0.63 J/m2, respectively. Indeed, TiO2 anatase nanoparticles with exposed (201) facets have been grown previously [57, 58]. The other high-index surfaces for which we calculated relatively low surface energies are (112), (310), and (103). Another termination of the (201) facet with low BVS, but featuring an undercoordinated oxygen atom, did not converge, so only 9 surface energies are reported in table 1. The correlation between the BVS and the DFT-computed surface energy is very high at $r_{\mathrm{anatase}}=0.96$.

| anatase | | | rutile | | |
|---|---|---|---|---|---|
| hkl | $\gamma$ | AF | hkl | $\gamma$ | AF |
| 101 | 0.41 | 98.9 | 110 | 0.29 | 82.1 |
| 100 | 0.51 | - | 100 | 0.58 | - |
| 201 | 0.63 | - | 311 | 0.75 | - |
| 112 | 0.71 | - | 201 | 0.79 | 8.5 |
| 310 | 0.74 | - | 211 | 0.82 | - |
| 210 | 0.89 | - | 331 | 0.88 | - |
| 103 | 0.91 | - | 101 | 0.93 | - |
| 001 | 0.95 | 1.1 | 332 | 0.97 | 9.4 |
| 110 | 1.00 | - | 320 | 1.25 | - |

Table 1: Surface energies $\gamma$ in [J/m2] and Wulff-shape area fractions (AF) in % for anatase and rutile TiO2.

For rutile, (110) is found to be the dominant facet, occupying 82% of the total area of the Wulff shape, with a surface energy of 0.29 J/m2. Two more facets, (201) at 0.79 J/m2 and (332) at 0.97 J/m2, each contribute about 9% to the Wulff shape. In the literature, only low-index surfaces (MMI=1) are calculated, and the computed Wulff shapes feature the (110), (101), (100), and (001) facets [59, 60]. We do calculate a low surface energy of 0.58 J/m2 for the (100) facet, but it does not contribute to the Wulff shape, while the (001) surface is not calculated because its BVS is not among the lowest 10. SurfFlow's result for the (110) facet is in line with a previous PBE result of 0.31 J/m2 [54], while Perron et al. report a Perdew & Wang GGA value of 0.48 J/m2 [61], Bredow et al. find 0.63 J/m2 for the same functional, and Jiang and coworkers compute an even larger value of 0.74 J/m2 with PBE [60]. This broad spread of values is due to well-known and large oscillations of the electronic properties of this surface with respect to the odd/even number of titanium layers [62], which make the surface energy very hard to converge. Such oscillations are not found for the (100) surface. Our (110) result was computed for 4 titanium layers according to SurfFlow's defaults, which yields a particularly low value of the surface energy, crowding out the (100) surface from our Wulff shape. High-resolution scanning tunneling microscopy images suggest, however, that the (100) facet should be present [60]. Again, one of the terminations, this time of the (320) facet, did not converge, leaving us with the 9 computed surface energies reported in table 1. Note that of the 9 computed surface energies, 6 have $\mathrm{MMI}>1$, and the correlation between BVS and computed $\gamma$ is decent at $r_{\mathrm{rutile}}=0.80$, indicating that those surfaces indeed have low surface energies compared to the lower-index surfaces that were not calculated.
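The area fractions in table 1 follow from a standard Wulff construction, which pymatgen exposes directly. A minimal sketch using the anatase values of table 1 follows; the tetragonal lattice constants are approximate literature values, not taken from our calculations.

```python
from pymatgen.core import Lattice
from pymatgen.analysis.wulff import WulffShape

# Approximate anatase TiO2 lattice (tetragonal, I4_1/amd)
lattice = Lattice.tetragonal(a=3.79, c=9.51)

# Miller indices and surface energies (J/m^2) from the anatase column of table 1
miller_list = [(1, 0, 1), (1, 0, 0), (2, 0, 1), (1, 1, 2), (3, 1, 0),
               (2, 1, 0), (1, 0, 3), (0, 0, 1), (1, 1, 0)]
e_surf = [0.41, 0.51, 0.63, 0.71, 0.74, 0.89, 0.91, 0.95, 1.00]

wulff = WulffShape(lattice, miller_list, e_surf)
# Facets with a non-zero area fraction, cf. the AF column of table 1
print({hkl: round(frac, 3)
       for hkl, frac in wulff.area_fraction_dict.items() if frac > 0})
```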
In conclusion, we have developed SurfFlow, a Python package that efficiently computes surface energies and Wulff shapes for arbitrary crystals in a high-throughput manner. By employing symmetry considerations, optimizing slab thickness, and implementing a pre-screening approach based on the bond valence sum, we successfully reduce the computational cost associated with determining low-energy surface orientations and terminations. For all but very weakly bound materials we find a good to excellent correlation between the bond valence sum and the surface energy. Utilizing this correlation, we have shown that we can reliably construct more accurate Wulff shapes with fewer calculations than by just computing low-index surfaces. This paper presents the algorithms and approximations utilized as well as the workflow architecture, web tools, and a comprehensive database that includes hundreds of surface energies for crystals with diverse chemistries and structures. Notably, SurfFlow offers features such as streamlined job submission, automatic error correction, post-processing capabilities, and integration with an OPTIMADE-compliant MongoDB database.

## 3 Methods

We use the Vienna Ab-Initio Simulation Package (VASP, version 6.3.1, [28, 29, 30]) with the potpaw.54 potential set [63]. The potential mapping is equivalent to the one defined in the MPRelaxSet of the Materials Project, except for tungsten, where we use W_sv instead of the deprecated W_pv. The user can easily overwrite this potential mapping while submitting workflows. Our package allows for easy changes to computational and structural parameters such as plane wave energy cutoff, k-point density, and slab thickness. We have nevertheless taken care to set default values that promise both accurate results and reasonable computation time. These are, e.g., 400 eV for the plane wave cutoff, 5.0 Å$^{-1}$ for the k-point density, and slab thicknesses of at least 8 layers or 10 Å, whichever is greater. All data presented here are computed with those defaults, or, in some cases of older data, with higher values. Smearing parameters change for the relaxations depending on material parameters (especially the bandgap) and might be corrected during runs by custodian, but for total energies, we always include an extra static run using the tetrahedron method with Blöchl corrections. Our workflow is optimized to use the PBE or SCAN functionals, but LDA or other GGAs, as well as van der Waals corrections, can also be selected. The results calculated for this paper use the PBE functional. More information about the choice of functional and pseudopotential set is available in section 8 of the SM.

## Data availability

All data described in this paper is available in an OPTIMADE-compliant database accessible via a web app at: https://surfflow-db.onrender.com/. For more advanced users who would like to connect their MongoDB apps to browse the database, we also provide the necessary <EMAIL_ADDRESS>. Additionally, we include a table with most of the results in S12 of the SM.

## Code availability

The SurfFlow code can be installed directly from PyPI with the same name, and the source code can be found on GitHub.

## Acknowledgements

This research was funded by the Austrian Science Fund (FWF) [P 32711]. The authors thank M. Reticcioli for fruitful discussions.
## Author contributions Firat Yalcin: Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – Original Draft, Writing – Review & Editing, Visualization; Michael Wolloch: Conceptualization, Software, Resources, Writing - Review & Editing, Supervision, Project administration, Funding acquisition ## Competing interest F.Y. declares no competing interest. M.W. is a part-time employee of the VASP software GmbH. ## References * [1] G. Wulff, Zur frage der geschwindigkeit des wachstums und der auflösung der kristallformen, Zeitschrift für Kristallographie 34 (5) (1901) 449–530. * [2] G. A. Somorjai, Surface Reconstruction and Catalysis, Annual Review of Physical Chemistry 45 (1) (1994) 721–751. doi:10.1146/annurev.pc.45.100194.003445. URL https://doi.org/10.1146/annurev.pc.45.100194.003445 * [3] K. Oura, V. Lifshits, A. Saranin, A. Zotov, M. Katayama, Surface science: An introduction, Springer Science & Business Media, 2013, [Online; accessed 2023-01-25]. * [4] P. Wynblatt, R. C. Ku, Surface energy and solute strain energy effects in surface segregation, Surface Science 65 (2) (1977) 511–531. doi:10.1016/0039-6028(77)90462-9. * [5] M. P. Seah, Quantitative prediction of surface segregation, Journal of Catalysis 57 (3) (1979) 450–457. doi:10.1016/0021-9517(79)90011-3. * [6] M. Polak, L. Rubinovich, The interplay of surface segregation and atomic order in alloys, Surface Science Reports 38 (4) (2000) 127–194. doi:https://doi.org/10.1016/S0167-5729(99)00010-2. URL https://www.sciencedirect.com/science/article/pii/S0167572999000102 * [7] B. Hammer, J. K. Nørskov, Theoretical surface science and catalysis—calculations and concepts, Advances in Catalysis 45 (C) (2000) 71–129. doi:10.1016/S0360-0564(02)45013-4. * [8] V. Stamenkovic, B. S. Mun, K. J. Mayrhofer, P. N. Ross, N. M. Markovic, J. Rossmeisl, J. Greeley, J. K. Nørskov, Changing the activity of electrocatalysts for oxygen reduction by tuning the surface electronic structure, Angewandte Chemie - International Edition 45 (18) (2006) 2897–2901. doi:10.1002/anie.200504386. * [9] J. K. Nørskov, T. Bligaard, J. Rossmeisl, C. H. Christensen, Towards the computational design of solid catalysts, Nature Chemistry 1 (1) (2009) 37–46. doi:10.1038/nchem.121. URL https://doi.org/10.1038/nchem.121 * [10] Z.-Y. Zhou, N. Tian, J.-T. Li, I. Broadwell, S.-G. Sun, Nanomaterials of high surface energy with exceptional properties in catalysis and energy storage, Chem. Soc. Rev. 40 (2011) 4167–4185. doi:10.1039/C0CS00176G. URL http://dx.doi.org/10.1039/C0CS00176G * [11] I. D. Seymour, E. Quérel, R. H. Brugge, F. M. Pesci, A. Aguadero, Understanding and engineering interfacial adhesion in solid-state batteries with metallic anodes, ChemSusChem 16 (12) (2023) e202202215. arXiv:https://chemistry-europe.onlinelibrary.wiley.com/doi/pdf/10.1002/cssc.202202215, doi:https://doi.org/10.1002/cssc.202202215. URL https://chemistry- europe.onlinelibrary.wiley.com/doi/abs/10.1002/cssc.202202215 * [12] P. Restuccia, G. Losi, O. Chehaimi, M. Marsili, M. C. Righi, High-throughput first-principles prediction of interfacial adhesion energies in metal-on-metal contacts, ACS Applied Materials & Interfaces 15 (15) (2023) 19624–19633, pMID: 37015021. doi:10.1021/acsami.3c00662. URL https://doi.org/10.1021/acsami.3c00662 * [13] J. W. Obreimoff, The splitting strength of mica, Proceedings of The Royal Society A: Mathematical, Physical and Engineering Sciences 127 (1930) 290–297. * [14] J. J. 
Gilman, Direct measurements of the surface energies of crystals, Journal of Applied Physics 31 (12) (1960) 2208–2218. arXiv:https://doi.org/10.1063/1.1735524, doi:10.1063/1.1735524. URL https://doi.org/10.1063/1.1735524 * [15] K. Kendall, N. McN.Alford, J. D. Birchall, A new method for measuring the surface energy of solids, Nature 325 (6107) (1987) 794–796. doi:10.1038/325794a0. URL https://doi.org/10.1038/325794a0 * [16] D. Y. Kwok, A. W. Neumann, Contact angle measurement and contact angle interpretation, Advances in Colloid and Interface Science 81 (3) (1999) 167–249. doi:https://doi.org/10.1016/S0001-8686(98)00087-6. URL https://www.sciencedirect.com/science/article/pii/S0001868698000876 * [17] A. Kozbial, Z. Li, C. Conaway, R. McGinley, S. Dhingra, V. Vahdat, F. Zhou, B. D’Urso, H. Liu, L. Li, Study on the Surface Energy of Graphene by Contact Angle Measurements, Langmuir 30 (28) (2014) 8598–8606. doi:10.1021/la5018328. URL https://doi.org/10.1021/la5018328 * [18] J. A. Williams, H. R. Le, Tribology and mems, Journal of Physics D: Applied Physics 39 (12) (2006) R201. doi:10.1088/0022-3727/39/12/R01. URL https://dx.doi.org/10.1088/0022-3727/39/12/R01 * [19] C. Xiao, B.-A. Lu, P. Xue, N. Tian, Z.-Y. Zhou, X. Lin, W.-F. Lin, S.-G. Sun, High-index-facet- and high-surface-energy nanocrystals of metals and metal oxides as highly efficient catalysts, Joule 4 (12) (2020) 2562–2598. doi:10.1016/j.joule.2020.10.002. URL https://doi.org/10.1016/j.joule.2020.10.002 * [20] R. Tran, Z. Xu, B. Radhakrishnan, D. Winston, W. Sun, K. A. Persson, S. P. Ong, Surface energies of elemental crystals, Scientific Data 3 (1) (2016) 160080. doi:10.1038/sdata.2016.80. URL https://doi.org/10.1038/sdata.2016.80 * [21] S. Yang, I. Bier, W. Wen, J. Zhan, S. Moayedpour, N. Marom, Ogre: A Python package for molecular crystal surface generation with applications to surface energy and crystal habit prediction, Journal of Chemical Physics 152 (24) (2020). doi:10.1063/5.0010615. URL https://doi.org/10.1063/5.0010615 * [22] K. Brlec, D. Davies, D. Scanlon, Surfaxe: Systematic surface calculations, Journal of Open Source Software 6 (61) (2021) 3171. doi:10.21105/joss.03171. * [23] A. Palizhati, W. Zhong, K. Tran, Z. W. Ulissi, Predicting intermetallic surface energies with high-throughput dft and convolutional neural networks, ChemRxiv (2019) 1–18doi:10.26434/chemrxiv.8709566. * [24] S. Moayedpour, I. Bier, W. Wen, D. Dardzinski, O. Isayev, N. Marom, Structure prediction of epitaxial organic interfaces with ogre, demonstrated for tetracyanoquinodimethane (tcnq) on tetrathiafulvalene (ttf), The Journal of Physical Chemistry C 127 (21) (2023) 10398–10410. doi:10.1021/acs.jpcc.3c02384. URL https://doi.org/10.1021/acs.jpcc.3c02384 * [25] S. P. Ong, W. D. Richards, A. Jain, G. Hautier, M. Kocher, S. Cholia, D. Gunter, V. L. Chevrier, K. A. Persson, G. Ceder, Python Materials Genomics (pymatgen): A robust, open-source python library for materials analysis, Computational Materials Science 68 (2013) 314–319. doi:10.1016/j.commatsci.2012.10.028. URL https://www.sciencedirect.com/science/article/pii/S0927025612006295 * [26] K. Mathew, J. H. Montoya, A. Faghaninia, S. Dwarakanath, M. Aykol, H. Tang, I. heng Chu, T. Smidt, B. Bocklund, M. Horton, J. Dagdelen, B. Wood, Z. K. Liu, J. Neaton, S. P. Ong, K. Persson, A. Jain, Atomate: A high-level interface to generate, execute, and analyze computational materials science workflows, Computational Materials Science 139 (2017) 140–152. doi:10.1016/j.commatsci.2017.07.030. 
URL https://www.sciencedirect.com/science/article/pii/S0927025617303919 * [27] A. Jain, S. P. Ong, W. Chen, B. Medasani, X. Qu, M. Kocher, M. Brafman, G. Petretto, G.-M. Rignanese, G. Hautier, D. Gunter, K. A. Persson, FireWorks: a dynamic workflow system designed for high-throughput applications, Concurrency and Computation: Practice and Experience 27 (17) (2015) 5037–5059. doi:https://doi.org/10.1002/cpe.3505. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/cpe.3505 * [28] G. Kresse, J. Hafner, Ab initio molecular dynamics for liquid metals, Phys. Rev. B 47 (1993) 558–561. doi:10.1103/PhysRevB.47.558. URL https://link.aps.org/doi/10.1103/PhysRevB.47.558 * [29] G. Kresse, J. Furthmüller, Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set, Computational Materials Science 6 (1) (1996) 15–50. doi:10.1016/0927-0256(96)00008-0. * [30] G. Kresse, J. Furthmüller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Physical Review B - Condensed Matter and Materials Physics 54 (16) (1996) 11169–11186. doi:10.1103/PhysRevB.54.11169. * [31] W. Sun, G. Ceder, Efficient creation and convergence of surface slabs, Surface Science 617 (2013) 53–59. doi:10.1016/j.susc.2013.05.016. URL https://www.sciencedirect.com/science/article/pii/S003960281300160X * [32] T. Jia, Z. Zeng, H. Paudel, D. J. Senor, Y. Duan, First-principles study of the surface properties of $\gamma$-LiAlO2: Stability and tritium adsorption, Journal of Nuclear Materials 522 (2019) 1–10. doi:10.1016/j.jnucmat.2019.05.007. * [33] T. Jia, D. J. Senor, Y. Duan, First-principles study of the surface properties of LiAl5O8: Stability and tritiated water formation, Journal of Nuclear Materials 555 (2021) 153111. doi:10.1016/j.jnucmat.2021.153111. * [34] J. Zhang, Y. Zhang, K. Tse, B. Deng, H. Xu, J. Zhu, New approaches for calculating absolute surface energies of wurtzite (0001)/(000 1): A study of ZnO and GaN, Journal of Applied Physics 119 (20) (2016) 0–8. doi:10.1063/1.4952395. * [35] C. Ma, W. Jin, X. Duan, X. Ma, H. Han, Z. Zhang, J. Yu, Y. Wu, From the absolute surface energy to the stabilization mechanism of high index polar surface in wurtzite structure: The case of ZnO, Journal of Alloys and Compounds 772 (2019) 482–488. doi:10.1016/j.jallcom.2018.09.194. URL https://doi.org/10.1016/j.jallcom.2018.09.194 * [36] N. Chetty, R. M. Martin, First-principles energy density and its applications to selected polar surfaces, Physical Review B 45 (11) (1992) 6074–6088. doi:10.1103/PhysRevB.45.6074. * [37] J. W. Kaminski, P. Kratzer, C. Ratsch, Towards a standardized setup for surface energy calculations, Physical Review B 95 (8) (2017) 1–11. doi:10.1103/PhysRevB.95.085408. * [38] M. Bruno, S. Ghignone, A new computational strategy to calculate the surface energy of a dipolar crystal surface, CrystEngComm 23 (27) (2021) 4791–4798. doi:10.1039/d1ce00403d. * [39] E. Heifets, R. I. Eglitis, E. A. Kotomin, J. Maier, G. Borstel, _Ab Initio_ modeling of surface structure for SrTiO 3 perovskite crystals, Physical Review B 64 (23) (2001) 235417. doi:10.1103/PhysRevB.64.235417. * [40] X. Tian, T. Wang, L. Fan, Y. Wang, H. Lu, Y. Mu, A DFT based method for calculating the surface energies of asymmetric MoP facets, Applied Surface Science 427 (2018) 357–362. doi:10.1016/j.apsusc.2017.08.172. URL http://dx.doi.org/10.1016/j.apsusc.2017.08.172https://doi.org/10.1016/j.apsusc.2017.08.172 * [41] R. I. Eglitis, D. 
Vanderbilt, _Ab Initio_ calculations of Ba Ti O 3 and Pb Ti O 3 (001) and (011) surface structures, Physical Review B 76 (15) (2007) 155439. doi:10.1103/PhysRevB.76.155439. * [42] C. Noguera, Polar oxide surfaces, Journal of Physics Condensed Matter 12 (31) (2000). doi:10.1088/0953-8984/12/31/201. * [43] J. Goniakowski, F. Finocchi, C. Noguera, Polarity of oxide surfaces and nanostructures, Reports on Progress in Physics 71 (1) (2008). doi:10.1088/0034-4885/71/1/016501. * [44] C. E. Dreyer, A. Janotti, C. G. Van De Walle, Absolute surface energies of polar and nonpolar planes of GaN, Physical Review B - Condensed Matter and Materials Physics 89 (8) (2014) 1–4. doi:10.1103/PhysRevB.89.081305. * [45] M. Methfessel, D. Hennig, M. Scheffler, Calculated surface energies of the 4 d transition metals: A study of bond-cutting models, Applied Physics A Solids and Surfaces 55 (5) (1992) 442–448. doi:10.1007/BF00348331. * [46] I. Galanakis, N. Papanikolaou, P. H. Dederichs, Applicability of the broken-bond rule to the surface energy of the fcc metals, Surface Science 511 (1-3) (2002) 1–12. arXiv:0110236, doi:10.1016/S0039-6028(02)01547-9. * [47] Z. Y. Gao, W. Sun, Y. H. Hu, Mineral cleavage nature and surface energy: Anisotropic surface broken bonds consideration, Transactions of Nonferrous Metals Society of China (English Edition) 24 (9) (2014) 2930–2937. doi:10.1016/S1003-6326(14)63428-2. URL http://dx.doi.org/10.1016/S1003-6326(14)63428-2 * [48] I. Etxebarria, J. M. Perez-Mato, A. García, P. Blaha, K. Schwarz, J. Rodriguez-Carvajal, Comparison of empirical bond-valence and first-principles energy calculations for a complex structural instability, Physical Review B - Condensed Matter and Materials Physics 72 (17) (2005) 1–8. doi:10.1103/PhysRevB.72.174108. * [49] I. D. Brown, Recent developments in the methods and applications of the bond valence model, Chemical Reviews 109 (12) (2009) 6858–6919. doi:10.1021/cr900053k. * [50] H. Ma, Y. Jiao, W. Guo, X. Liu, Y. Li, X.-D. Wen, Predicting crystal morphology using a geometric descriptor: A comparative study of elemental crystals with high-throughput dft calculations, The Journal of Physical Chemistry C 124 (29) (2020) 15920–15927. doi:10.1021/acs.jpcc.0c03537. URL https://doi.org/10.1021/acs.jpcc.0c03537 * [51] X. Sang, A. Kulovits, G. Wang, J. Wiezorek, High precision electronic charge density determination for l10-ordered $\gamma$-tial by quantitative convergent beam electron diffraction, Philosophical Magazine 92 (35) (2012) 4408–4424. doi:10.1080/14786435.2012.709324. URL https://doi.org/10.1080/14786435.2012.709324 * [52] U. Diebold, The surface science of titanium dioxide, Surface Science Reports 48 (5) (2003) 53–229. doi:https://doi.org/10.1016/S0167-5729(02)00100-0. URL https://www.sciencedirect.com/science/article/pii/S0167572902001000 * [53] G. Liu, H. G. Yang, J. Pan, Y. Q. Yang, G. Q. M. Lu, H.-M. Cheng, Titanium dioxide crystals with tailored facets, Chemical Reviews 114 (19) (2014) 9559–9612. doi:10.1021/cr400621z. URL https://doi.org/10.1021/cr400621z * [54] M. Lazzeri, A. Vittadini, A. Selloni, Erratum: Structure and energetics of stoichiometric ${\mathrm{tio}}_{2}$ anatase surfaces [phys. rev. b 63, 155409 (2001)], Phys. Rev. B 65 (2002) 119901. doi:10.1103/PhysRevB.65.119901. URL https://link.aps.org/doi/10.1103/PhysRevB.65.119901 * [55] X.-Q. Gong, A. Selloni, M. Batzill, U. Diebold, Steps on anatase tio2(101), Nature Materials 5 (8) (2006) 665–670. doi:10.1038/nmat1695. URL https://doi.org/10.1038/nmat1695 * [56] R. Penn, J. F. 
Banfield, Morphology development and crystal growth in nanocrystalline aggregates under hydrothermal conditions: insights from titania, Geochimica et Cosmochimica Acta 63 (10) (1999) 1549–1557. doi:https://doi.org/10.1016/S0016-7037(99)00037-X. URL https://www.sciencedirect.com/science/article/pii/S001670379900037X * [57] H. B. Wu, J. S. Chen, X. W. D. Lou, H. H. Hng, Asymmetric anatase tio2 nanocrystals with exposed high-index facets and their excellent lithium storage properties, Nanoscale 3 (2011) 4082–4084. doi:10.1039/C1NR10854A. URL http://dx.doi.org/10.1039/C1NR10854A * [58] L. Wu, H. B. Jiang, F. Tian, Z. Chen, C. Sun, H. G. Yang, Ti0.89si0.11o2 single crystals bound by high-index 201 facets showing enhanced visible-light photocatalytic hydrogen evolution, Chem. Commun. 49 (2013) 2016–2018. doi:10.1039/C3CC38105F. URL http://dx.doi.org/10.1039/C3CC38105F * [59] M. Ramamoorthy, D. Vanderbilt, R. D. King-Smith, First-principles calculations of the energetics of stoichiometric ${\mathrm{tio}}_{2}$ surfaces, Phys. Rev. B 49 (1994) 16721–16727. doi:10.1103/PhysRevB.49.16721. URL https://link.aps.org/doi/10.1103/PhysRevB.49.16721 * [60] F. Jiang, L. Yang, D. Zhou, G. He, J. Zhou, F. Wang, Z.-G. Chen, First-principles atomistic wulff constructions for an equilibrium rutile tio2 shape modeling, Applied Surface Science 436 (2018) 989–994. doi:https://doi.org/10.1016/j.apsusc.2017.12.050. URL https://www.sciencedirect.com/science/article/pii/S0169433217336371 * [61] H. Perron, C. Domain, J. Roques, R. Drot, E. Simoni, H. Catalette, Optimisation of accurate rutile tio2 (110), (100), (101) and (001) surface models from periodic dft calculations, Theoretical Chemistry Accounts 117 (4) (2007) 565–574. doi:10.1007/s00214-006-0189-y. URL https://doi.org/10.1007/s00214-006-0189-y * [62] T. Bredow, L. Giordano, F. Cinquini, G. Pacchioni, Electronic properties of rutile $\mathrm{Ti}{\mathrm{o}}_{2}$ ultrathin films: Odd-even oscillations with the number of layers, Phys. Rev. B 70 (2004) 035419. doi:10.1103/PhysRevB.70.035419. URL https://link.aps.org/doi/10.1103/PhysRevB.70.035419 * [63] G. Kresse, D. Joubert, From ultrasoft pseudopotentials to the projector augmented-wave method, Phys. Rev. B 59 (1999) 1758–1775. doi:10.1103/PhysRevB.59.1758. URL https://link.aps.org/doi/10.1103/PhysRevB.59.1758
# Standardized Formats for Gamma-Ray Analysis Applied to HAWC Observatory Data

Laura Olivera-Nieto, Vikas Joshi, Harm Schoorlemmer and Axel Donath

###### Abstract

A wide range of data formats and proprietary software have traditionally been used in $\gamma$-ray astronomy, usually developed for a single specific mission or experiment. However, in recent years there has been an increasing effort towards making astronomical data open and easily accessible. Within the $\gamma$-ray community this has translated to the creation of a common data format across different $\gamma$-ray observatories: the "gamma-astro-data-format" (GADF). Based on a similar premise, open-source analysis packages, such as Gammapy, are being developed and aim to provide a single, robust tool which suits the needs of many experiments at once. In this contribution we show that data from the High-Altitude Water Cherenkov (HAWC) observatory can be made compatible with the GADF and present the first GADF-based production of event lists and instrument response functions for a ground-based wide-field instrument. We use these data products to reproduce with excellent agreement the published HAWC Crab spectrum using Gammapy. Having a common data format and analysis tools facilitates joint analysis between different experiments and effective data sharing. This will be especially important for next-generation instruments, such as the proposed Southern Wide-field Gamma-ray Observatory (SWGO) and the planned Cherenkov Telescope Array (CTA).

## 1 Introduction

Historically, a variety of instrument-specific and largely proprietary tools and data formats have been used in $\gamma$-ray astronomy, which hinders effective data-sharing and reproducibility. However, in recent years there has been a shift towards making data more accessible and easier to share in the context of joint analysis. A big driver of this trend has been the upcoming Cherenkov Telescope Array (CTA) [1], the data of which will become public after a short proprietary period. Motivated by this development, there has been an advent of openly developed analysis tools, such as Gammapy [2], which are able to replace the existing instrument-specific packages by offering a single, common tool. Gammapy is a community-developed Python package for $\gamma$-ray astronomy selected to be part of the CTA science tools. It has been successfully used and validated for analysis of Imaging Atmospheric Cherenkov Telescope (IACT) data [3] and joint analysis with data from the Fermi Large Area Telescope [4]. In parallel to these efforts, a common data format across different observatories, the gamma-astro-data-format (GADF, https://gamma-astro-data-formats.readthedocs.io/en/latest/) [5], has been developed. The scope of this standard is to cover all high-level data products from telescopes, starting at the level of event lists and instrument response functions (IRFs). This format relies on file storage by the Flexible Image Transport System (FITS) format [6], which is widely used by the whole astronomical community. It builds on existing standards such as OGIP (https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/ofwg_intro.html) and expands them to meet the specific needs of the $\gamma$-ray community. The focus of these efforts has largely been on IACT data, ignoring another type of $\gamma$-ray experiment: wide-field ground-based observatories.
The High-Altitude Water Cherenkov (HAWC) observatory is a $\gamma$-ray detector that consists of 300 water Cherenkov detectors, each outfitted with four photomultiplier tubes (PMTs). Its wide field of view covers two-thirds of the sky uninterruptedly, allowing for constant monitoring and deep observations of the $\gamma$-ray sky. It has been in operation since November 2014. It is worth noting that the HAWC Accelerated Likelihood (HAL) [7] framework and the Multi-Mission Maximum Likelihood framework (3ML) [8], the packages primarily used by the HAWC observatory, are also open-source, but espouse a different philosophy to that of the packages described above. Instead of replacing the existing frameworks from different observatories by a single, common tool, packages like 3ML provide a common framework with which the instrument-specific tools (such as HAL) interface. This approach has the advantage of including not only $\gamma$-ray instruments but also multiwavelength and multimessenger observations. However, it has the disadvantage of relying on instrument-specific analysis software, which, in the case of $\gamma$-ray observations, performs very similar tasks across different instruments.

## 2 HAWC data and IRFs in the GADF

Events recorded by the HAWC observatory are binned by size, that is, by the fraction of PMTs from the array that were triggered by the event. A total of 9 event size bins are defined, as detailed in Table 1. The event size only weakly correlates with energy [9]. In order to estimate the energy on an event-by-event basis, more advanced algorithms have been developed. The ground parameter (GP) algorithm is based on the charge density deposited at the ground by the shower. The neural network (NN) algorithm estimates energies with an artificial neural network that takes as input several quantities computed during the event reconstruction. A detailed overview of both algorithms can be found in [10]. Energy bins are typically defined beforehand in HAWC analysis and referred to with letter names, as described in Table 2. The combination of event sizes and energy bins leads to a 2-dimensional bin scheme, with 108 analysis bins resulting from the combination of each event size bin 1 to 9 with the 12 energy bins. However, only a subset of these bins is populated with enough event statistics; for example, low-energy events are very unlikely to have large event sizes.

Bin number | Low edge | High edge
---|---|---
1 | 0.067 | 0.105
2 | 0.105 | 0.162
3 | 0.162 | 0.247
4 | 0.247 | 0.356
5 | 0.356 | 0.485
6 | 0.485 | 0.618
7 | 0.618 | 0.740
8 | 0.740 | 0.840
9 | 0.840 | 1.00

Table 1: Event size ("nhit") bins. Bins are defined from the fraction of PMTs triggered by each event.

Bin | Low edge (TeV) | High edge (TeV)
---|---|---
a | 0.316 | 0.562
b | 0.562 | 1.00
c | 1.00 | 1.78
d | 1.78 | 3.16
e | 3.16 | 5.62
f | 5.62 | 10.0
g | 10.0 | 17.8
h | 17.8 | 31.6
i | 31.6 | 56.2
j | 56.2 | 100
k | 100 | 177
l | 177 | 316

Table 2: The energy bins. Note that the first two bins are not used in the analysis as the estimate is highly biased [10].

### 2.1 Events and good time intervals

For each of these analysis bins, gamma-hadron separation cuts are optimized [10] and applied to the reconstructed data. Additional direction corrections are applied and the event coordinates are converted to the J2000 epoch. The resulting event lists are stored in FITS [6] files with headers and columns compliant with the GADF.
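As an illustration, a minimal GADF-style event list can be written with astropy. The column and header names below follow the GADF EVENTS specification as we understand it, while all values and the file name are placeholders of our own:

```python
# Sketch: writing a minimal GADF-compliant event list with astropy.
# Column/header names follow the GADF EVENTS specification; the extra
# HAWC-specific columns mentioned below would be appended the same way.
# All values are placeholders.
import numpy as np
from astropy.table import Table

events = Table(
    {
        "EVENT_ID": np.arange(3, dtype=np.int64),
        "TIME": np.array([1.0, 2.5, 7.1]),    # s, relative to MJDREF
        "RA": np.array([83.6, 83.7, 83.5]),   # deg, J2000
        "DEC": np.array([22.0, 22.1, 21.9]),  # deg, J2000
        "ENERGY": np.array([1.2, 3.4, 10.0]), # TeV
    }
)
events.meta.update(
    {"HDUCLASS": "GADF", "HDUCLAS1": "EVENTS", "EXTNAME": "EVENTS"}
)
events.write("hawc_events.fits", overwrite=True)
```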
Additionally, other columns are stored that contain information pertinent to the characteristics of wide-field arrays, such as the core location in the array, the direction in local coordinates, and the event size. Due to the large sky coverage and high duty cycle, it is common among wide-field observatories to make sky-maps as their primary data product directly from reconstructed events. However, producing event lists as an intermediate step has the advantage of adding more flexibility. It makes it easy to select a subset of the whole dataset for a given analysis, which simplifies the study of time-dependent signals, and it also facilitates extensive systematic checks. Good Time Intervals (GTIs) are defined as the time intervals during which the detector is on and taking data continuously. These intervals are then used to build the exposure information. Detector downtime is caused by a variety of factors, ranging from hardware issues to meteorological conditions. These interruptions are not uniformly distributed over time, with certain meteorological events being more likely during particular times of the year. This leads to fluctuations in the exposure as a function of hour angle, or, equivalently, right ascension (R.A.). For HAWC data collected between June 2015 and June 2019, we compute the number of transits (i.e., sidereal days) for which the detector was taking data as a function of the R.A. of zenith, shown by the green line in Figure 1.

Figure 1: Number of transits during which data was recorded as a function of R.A.

It can be useful, for example for background modeling, to remove this R.A. dependence of the exposure, i.e., to "flatten" it. To do this, one can remove GTIs from the time selection until the resulting exposure is no longer a function of R.A. An example of this process can be seen in Figure 1. This exposure "flattening" leads to a loss of data of around 1-2%, maintaining the overall detector efficiency above 90%.

### 2.2 Instrument response functions

The IRFs describe the combined detection abilities and precision of an instrument's data-taking and reconstruction procedure. They are computed by simulating a point source emitting $\gamma$-rays following a given energy spectrum, usually $\propto E^{-2}$. These events are processed with the detector simulation and reconstruction procedure, see [9] for more details. The reconstructed events are binned as described in Section 2 and gamma-hadron separation is applied. This process yields information on the number of events successfully identified as $\gamma$-rays as well as the accuracy and precision of the energy and direction assigned to them. In the GADF, this information is split into three components: the effective area, the energy dispersion matrix, and the point-spread function (PSF). Note that this framework neglects the correlation between the different IRF components. This is currently mostly sufficient for IACT analysis and is also the case in the standard HAWC framework. However, this will be re-addressed for CTA, and thus likely for the GADF as well, in the future.

#### 2.2.1 Effective area

The effective area of a detector is the combination of its detection efficiency with the observable area. Typically, in the computation of HAWC IRFs, the simulated spectrum is convolved with a source transit over the observatory, leading to a comparison of events detected to events thrown per square meter and transit. The analysis bins described in Section 2 include a cut on zenith angle at 45° [10].
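Because of this zenith-angle cut, the usable time per sidereal day depends only on the source declination and the site latitude. A minimal sketch of that spherical-astronomy conversion (assuming the HAWC latitude of roughly 19° N) is:

```python
# Sketch: sidereal hours per transit that a source at declination
# `dec_deg` spends at zenith angle below `zmax_deg`, for a site at
# latitude `lat_deg` (HAWC is at roughly 19 deg N). Uses the standard
# relation cos(z) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(H).
import numpy as np

SIDEREAL_DAY_H = 23.934

def transit_hours(dec_deg, lat_deg=19.0, zmax_deg=45.0):
    lat, dec, zmax = np.radians([lat_deg, dec_deg, zmax_deg])
    cos_h = (np.cos(zmax) - np.sin(lat) * np.sin(dec)) / (
        np.cos(lat) * np.cos(dec)
    )
    if cos_h <= -1.0:  # always within the zenith-angle cut
        return SIDEREAL_DAY_H
    if cos_h >= 1.0:   # never within the cut
        return 0.0
    h_max = np.arccos(cos_h)  # hour angle at z = zmax
    return 2.0 * h_max / (2.0 * np.pi) * SIDEREAL_DAY_H

# A source at dec = 19 deg transits through zenith and is usable longest:
print(transit_hours(19.0))  # ~6.3 h per sidereal day
```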
We can compute the number of hours that a source at each declination spends at zenith angles lower than 45°, that is, the duration of a transit as a function of declination, to recover the effective area in units of m2. Figure 2 shows these curves for all the available declinations resulting from analysis bins defined using each of the energy estimation schemes described in Section 2. The effective area is greatest for declinations close to 19°, the terrestrial latitude of the HAWC site. Sources at this declination pass through the local zenith as they transit the sky over the observatory. The effective area per transit, that is, the effective exposure for one transit, is also a useful quantity, especially when considering the long-term study of sources. Given its declination dependence, we fill a Gammapy Map (https://docs.gammapy.org/0.18.2/maps/index.html) with the one-transit effective exposure. Then, for a given data range selection, the number of transits, like the one shown in Figure 1, is combined with this map to produce the effective exposure map used for the analysis.

Figure 2: Effective area after background rejection for bins defined using each of the energy estimators described in Section 2.

#### 2.2.2 Point-spread function

The point-spread function is a measure of the precision achieved in the event direction reconstruction. It is computed as the spatial probability distribution of events produced by a point source. It is assumed to be radially symmetric, and so can be stored as a function of offset from the source location. Gammapy provides several options to store and use this information in the irf class (https://docs.gammapy.org/0.18.2/irf/psf.html#irf-psf). Like the other IRFs, the HAWC PSF depends on declination. For this reason, we fill a PSFMap with the PSF radial profile information at each sky position. From this, we can compute the 68% containment radius, which we compare to the published values in [10] to validate the procedure. This comparison can be seen in Figure 3, showing good agreement.

Figure 3: Comparison of the PSF containment at the Crab location for analysis bins using the energy estimators described in Section 2.

#### 2.2.3 Energy dispersion matrix

The energy dispersion is a measure of the accuracy and precision achieved in the event energy estimation. It is computed as the probability for an event with a given simulated energy (Etrue) to be reconstructed with a different energy (Ereco). We use the EDispMap class (https://docs.gammapy.org/0.18.2/api/gammapy.irf.EDispMap.html) in Gammapy, which is a dedicated 4-dimensional sky-map. At each sky position, it contains the probability matrix that quantifies the energy dispersion. An example of such matrices at the Crab location is shown in Figure 4 for each of the event size bins defined in Table 1 and Ereco energy bins defined in Table 2. As can be seen, the energy resolution improves with increasing event size bin.

Figure 4: Energy dispersion matrix at the Crab location for each of the energy estimators described in Section 2.

## 3 Validation analysis with Gammapy: the Crab Nebula

The Crab Nebula is one of the brightest sources in the $\gamma$-ray sky. For this reason, it is typically used for calibration and as a reference analysis. The HAWC Collaboration published a measurement of the Crab spectrum extending up to 100 TeV using the energy estimators described in [10].
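Schematically, reproducing this measurement, as described next, reduces to a point-source, log-parabola fit in Gammapy. The sketch below uses the v0.18-era API; `dataset` stands for the MapDataset assembled from the event lists and IRF maps of Section 2, and the starting parameter values are chosen only for illustration:

```python
# Hedged sketch of the Section 3 fit with Gammapy (v0.18-era API).
from astropy import units as u
from gammapy.modeling import Fit
from gammapy.modeling.models import (
    LogParabolaSpectralModel,
    PointSpatialModel,
    SkyModel,
)

spectral = LogParabolaSpectralModel(
    amplitude=3e-13 * u.Unit("cm-2 s-1 TeV-1"),  # illustrative start values
    reference=7 * u.TeV,
    alpha=2.7,
    beta=0.1,
)
spatial = PointSpatialModel(lon_0="83.63 deg", lat_0="22.01 deg", frame="icrs")
crab = SkyModel(spectral_model=spectral, spatial_model=spatial, name="crab")

dataset = ...  # MapDataset built from the GADF event lists and IRFs (not shown)
dataset.models = [crab]
result = Fit([dataset]).run()
print(result)
print(crab.spectral_model.parameters.to_table())
```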
Using the same data range and background estimation method as in that work, and the IRFs as described above, we can reproduce that result using Gammapy. We fit a combined 3-dimensional (spatial and spectral) model. For the spatial model we use a point-source assumption, and a log-parabola for the spectral shape. The result of this fit is shown in Figure 5, for both energy estimators. In both cases, the top panel compares the best-fit spectrum obtained with Gammapy with the published one, showing excellent agreement. In the bottom panel, the flux points computed with Gammapy are compared to the model in [10].

Figure 5: Best-fit Crab spectrum obtained with Gammapy compared with [10] for both energy estimators described in the text. The residual shows the comparison of the flux points with the model in [10].

## 4 Conclusion

The data from ground-based, wide-field observatories, and in this case, from the HAWC observatory, can also be made compatible with the GADF and thus can be analyzed using the related open-source tools with minor adjustments. We find excellent agreement with the results published in [10] using an analysis tool that is built with a different philosophy and structure. This is a powerful check of both the scientific result and the tools involved, as well as of the production of HAWC data and IRFs in the GADF-compliant format. Having a common data format and analysis tools facilitates joint analysis between different experiments and effective data sharing. This synergy between experiments is particularly relevant given the complementary nature of pointing and wide-field instruments. This will be especially important for future observatories like SWGO [11]. The lifetime of observatories is finite, and one of the concerns at the end of operation is to ensure that the archival data is available and easy to use for future studies and reproducibility of results. Having data in a format that is common to other observatories and which can be analyzed with a general-use tool is a great advantage in this regard. Gammapy has recently been selected as the official CTA Science tool. This ensures that it will be maintained and used by the overall $\gamma$-ray community, a much larger developer and user base than any of the other collaboration-specific tools individually.

## Acknowledgments

We acknowledge the support from: the US National Science Foundation (NSF); the US Department of Energy Office of High-Energy Physics; the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory; Consejo Nacional de Ciencia y Tecnología (CONACyT), México, grants 271051, 232656, 260378, 179588, 254964, 258865, 243290, 132197, A1-S-46288, A1-S-22784, cátedras 873, 1563, 341, 323, Red HAWC, México; DGAPA-UNAM grants IG101320, IN111716-3, IN111419, IA102019, IN110621, IN110521; VIEP-BUAP; PIFI 2012, 2013, PROFOCIE 2014, 2015; the University of Wisconsin Alumni Research Foundation; the Institute of Geophysics, Planetary Physics, and Signatures at Los Alamos National Laboratory; Polish Science Centre grant, DEC-2017/27/B/ST9/02272; Coordinación de la Investigación Científica de la Universidad Michoacana; Royal Society - Newton Advanced Fellowship 180385; Generalitat Valenciana, grant CIDEGENT/2018/034; Chulalongkorn University's CUniverse (CUAASC) grant; Coordinación General Académica e Innovación (CGAI-UdeG), PRODEP-SEP UDG-CA-499; Institute of Cosmic Ray Research (ICRR), University of Tokyo. H.F. acknowledges support by NASA under award number 80GSFC21M0002.
We also acknowledge the significant contributions over many years of Stefan Westerhoff, Gaurang Yodh and Arnulfo Zepeda Dominguez, all deceased members of the HAWC collaboration. Thanks to Scott Delay, Luciano Díaz and Eduardo Murrieta for technical support. ## References * [1] CTA-Consortium, _Science with the Cherenkov Telescope Array_ , WORLD SCIENTIFIC (2019), 10.1142/10986, [https://www.worldscientific.com/doi/pdf/10.1142/10986]. * [2] C. Deil et al., _Gammapy - A prototype for the CTA science tools_ , in _35th International Cosmic Ray Conference (ICRC2017)_ , vol. 301 of _International Cosmic Ray Conference_ , p. 766, Jan., 2017 [1709.01751]. * [3] L. Mohrmann et al., _Validation of open-source science tools and background model construction in $\gamma$-ray astronomy_, _Astronomy and Astrophysics_ 632 (2019) A72 [1910.08088]. * [4] C. Nigro et al., _Towards open and reproducible multi-instrument analysis in gamma-ray astronomy_ , _Astronomy and Astrophysics_ 625 (2019) A10 [1903.06621]. * [5] C. Deil et al., _Open high-level data formats and software for gamma-ray astronomy_ , in _6th International Symposium on High Energy Gamma-Ray Astronomy_ , vol. 1792 of _American Institute of Physics Conference Series_ , p. 070006, Jan., 2017, DOI [1610.01884]. * [6] D.C. Wells et al., _FITS - a Flexible Image Transport System_ , _Astronomy and Astrophysics, Supplement_ 44 (1981) 363. * [7] A.U. Abeysekara et al., _Characterizing gamma-ray sources with HAL (HAWC Accelerated likelihood) and 3ML_ , in _Proceedings of 37th International Cosmic Ray Conference — PoS(ICRC2021)_ , vol. 395, p. 828, 2021, DOI. * [8] G. Vianello et al., _The Multi-Mission Maximum Likelihood framework (3ML)_ , _arXiv e-prints_ (2015) arXiv:1507.08343 [1507.08343]. * [9] A.U. Abeysekara et al., _Observation of the Crab Nebula with the HAWC Gamma-Ray Observatory_ , _Astrophysical Journal_ 843 (2017) 39 [1701.01778]. * [10] A.U. Abeysekara et al., _Measurement of the Crab Nebula Spectrum Past 100 TeV with HAWC_ , _Astrophysical Journal_ 881 (2019) 134 [1905.12518]. * [11] A. Albert et al., _Science Case for a Wide Field-of-View Very-High-Energy Gamma-Ray Observatory in the Southern Hemisphere_ , _arXiv e-prints_ (2019) arXiv:1902.08429 [1902.08429]. ## Full Authors List: HAWC Collaboration A.U. Abeysekara48, A. Albert21, R. Alfaro14, C. Alvarez41, J.D. Álvarez40, J.R. Angeles Camacho14, J.C. Arteaga-Velázquez40, K. P. Arunbabu17, D. Avila Rojas14, H.A. Ayala Solares28, R. Babu25, V. Baghmanyan15, A.S. Barber48, J. Becerra Gonzalez11, E. Belmont-Moreno14, S.Y. BenZvi29, D. Berley39, C. Brisbois39, K.S. Caballero-Mora41, T. Capistrán12, A. Carramiñana18, S. Casanova15, O. Chaparro-Amaro3, U. Cotti40, J. Cotzomi8, S. Coutiño de León18, E. De la Fuente46, C. de León40, L. Diaz-Cruz8, R. Diaz Hernandez18, J.C. Díaz-Vélez46, B.L. Dingus21, M. Durocher21, M.A. DuVernois45, R.W. Ellsworth39, K. Engel39, C. Espinoza14, K.L. Fan39, K. Fang45, M. Fernández Alonso28, B. Fick25, H. Fleischhack51,11,52, J.L. Flores46, N.I. Fraija12, D. Garcia14, J.A. García-González20, J. L. García-Luna46, G. García-Torales46, F. Garfias12, G. Giacinti22, H. Goksu22, M.M. González12, J.A. Goodman39, J.P. Harding21, S. Hernandez14, I. Herzog25, J. Hinton22, B. Hona48, D. Huang25, F. Hueyotl-Zahuantitla41, C.M. Hui23, B. Humensky39, P. Hüntemeyer25, A. Iriarte12, A. Jardin-Blicq22,49,50, H. Jhee43, V. Joshi7, D. Kieda48, G J. Kunde21, S. Kunwar22, A. Lara17, J. Lee43, W.H. Lee12, D. Lennarz9, H. León Vargas14, J. Linnemann24, A.L. Longinotti12, R. 
López-Coto19, G. Luis-Raya44, J. Lundeen24, K. Malone21, V. Marandon22, O. Martinez8, I. Martinez- Castellanos39, H. Martínez-Huerta38, J. Martínez-Castro3, J.A.J. Matthews42, J. McEnery11, P. Miranda-Romagnoli34, J.A. Morales-Soto40, E. Moreno8, M. Mostafá28, A. Nayerhoda15, L. Nellen13, M. Newbold48, M.U. Nisa24, R. Noriega- Papaqui34, L. Olivera-Nieto22, N. Omodei32, A. Peisker24, Y. Pérez Araujo12, E.G. Pérez-Pérez44, C.D. Rho43, C. Rivière39, D. Rosa-Gonzalez18, E. Ruiz- Velasco22, J. Ryan26, H. Salazar8, F. Salesa Greus15,53, A. Sandoval14, M. Schneider39, H. Schoorlemmer22, J. Serna-Franco14, G. Sinnis21, A.J. Smith39, R.W. Springer48, P. Surajbali22, I. Taboada9, M. Tanner28, K. Tollefson24, I. Torres18, R. Torres-Escobedo30, R. Turner25, F. Ureña-Mena18, L. Villaseñor8, X. Wang25, I.J. Watson43, T. Weisgarber45, F. Werner22, E. Willox39, J. Wood23, G.B. Yodh35, A. Zepeda4, H. Zhou30 1Barnard College, New York, NY, USA, 2Department of Chemistry and Physics, California University of Pennsylvania, California, PA, USA, 3Centro de Investigación en Computación, Instituto Politécnico Nacional, Ciudad de México, México, 4Physics Department, Centro de Investigación y de Estudios Avanzados del IPN, Ciudad de México, México, 5Colorado State University, Physics Dept., Fort Collins, CO, USA, 6DCI-UDG, Leon, Gto, México, 7Erlangen Centre for Astroparticle Physics, Friedrich Alexander Universität, Erlangen, BY, Germany, 8Facultad de Ciencias Físico Matemáticas, Benemérita Universidad Autónoma de Puebla, Puebla, México, 9School of Physics and Center for Relativistic Astrophysics, Georgia Institute of Technology, Atlanta, GA, USA, 10School of Physics Astronomy and Computational Sciences, George Mason University, Fairfax, VA, USA, 11NASA Goddard Space Flight Center, Greenbelt, MD, USA, 12Instituto de Astronomía, Universidad Nacional Autónoma de México, Ciudad de México, México, 13Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Ciudad de México, México, 14Instituto de Física, Universidad Nacional Autónoma de México, Ciudad de México, México, 15Institute of Nuclear Physics, Polish Academy of Sciences, Krakow, Poland, 16Instituto de Física de São Carlos, Universidade de São Paulo, São Carlos, SP, Brasil, 17Instituto de Geofísica, Universidad Nacional Autónoma de México, Ciudad de México, México, 18Instituto Nacional de Astrofísica, Óptica y Electrónica, Tonantzintla, Puebla, México, 19INFN Padova, Padova, Italy, 20Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Ave. Eugenio Garza Sada 2501, Monterrey, N.L., 64849, México, 21Physics Division, Los Alamos National Laboratory, Los Alamos, NM, USA, 22Max-Planck Institute for Nuclear Physics, Heidelberg, Germany, 23NASA Marshall Space Flight Center, Astrophysics Office, Huntsville, AL, USA, 24Department of Physics and Astronomy, Michigan State University, East Lansing, MI, USA, 25Department of Physics, Michigan Technological University, Houghton, MI, USA, 26Space Science Center, University of New Hampshire, Durham, NH, USA, 27The Ohio State University at Lima, Lima, OH, USA, 28Department of Physics, Pennsylvania State University, University Park, PA, USA, 29Department of Physics and Astronomy, University of Rochester, Rochester, NY, USA, 30Tsung-Dao Lee Institute and School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai, China, 31Sungkyunkwan University, Gyeonggi, Rep. 
of Korea, 32Stanford University, Stanford, CA, USA, 33Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL, USA, 34Universidad Autónoma del Estado de Hidalgo, Pachuca, Hgo., México, 35Department of Physics and Astronomy, University of California, Irvine, Irvine, CA, USA, 36Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, USA, 37Universidad de Costa Rica, San José , Costa Rica, 38Department of Physics and Mathematics, Universidad de Monterrey, San Pedro Garza García, N.L., México, 39Department of Physics, University of Maryland, College Park, MD, USA, 40Instituto de Física y Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo, Morelia, Michoacán, México, 41FCFM-MCTP, Universidad Autónoma de Chiapas, Tuxtla Gutiérrez, Chiapas, México, 42Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM, USA, 43University of Seoul, Seoul, Rep. of Korea, 44Universidad Politécnica de Pachuca, Pachuca, Hgo, México, 45Department of Physics, University of Wisconsin-Madison, Madison, WI, USA, 46CUCEI, CUCEA, Universidad de Guadalajara, Guadalajara, Jalisco, México, 47Universität Würzburg, Institute for Theoretical Physics and Astrophysics, Würzburg, Germany, 48Department of Physics and Astronomy, University of Utah, Salt Lake City, UT, USA, 49Department of Physics, Faculty of Science, Chulalongkorn University, Pathumwan, Bangkok 10330, Thailand, 50National Astronomical Research Institute of Thailand (Public Organization), Don Kaeo, MaeRim, Chiang Mai 50180, Thailand, 51Department of Physics, Catholic University of America, Washington, DC, USA, 52Center for Research and Exploration in Space Science and Technology, NASA/GSFC, Greenbelt, MD, USA, 53Instituto de Física Corpuscular, CSIC, Universitat de València, Paterna, Valencia, Spain
# Optimal measure transportation with respect to non-traditional costs

S. Artstein-Avidan, S. Sadovsky, K. Wyczesany

###### Abstract.

We study optimal mass transport problems between two measures with respect to a non-traditional cost function, i.e. a cost $c$ which can attain the value $+\infty$. We define the notions of $c$-compatibility and strong $c$-compatibility of two measures, and prove that if there is a finite-cost plan between the measures then the measures must be $c$-compatible, and if in addition the two measures are strongly $c$-compatible, then there is an optimal plan concentrated on a $c$-subgradient of a $c$-class function. This function is the so-called potential of the plan. We give two proofs of this theorem, under slightly different assumptions. In the first we utilize the notion of $c$-path-boundedness, showing that strong $c$-compatibility implies a strong connectivity result for a directed graph associated with an optimal map. Strong connectivity of the graph implies that the $c$-cyclic monotonicity of the support set (which follows from classical reasoning) guarantees its $c$-path-boundedness, implying, in turn, the existence of a potential. We also give a constructive proof, in the case when one of the measures is discrete. This approach adopts a new notion of 'Hall polytopes', which we introduce and study in depth, to which we apply a version of Brouwer's fixed point theorem to prove the existence of a potential in this case.

## 1. Introduction and results

The Monge transport problem is concerned with finding a _transport map_ moving mass from one probability measure to another (all considered measures are Borel measures on Polish spaces, i.e. complete, separable metric spaces equipped with their Borel $\sigma$-algebra), in a way which is efficient with respect to some _cost function_. The most widely studied case of this problem is for the quadratic cost $c(x,y)=\|x-y\|_{2}^{2}/2$, for which the Brenier–Gangbo–McCann theorem [10, 12] implies that under mild conditions on the measures involved, optimal transport maps exist and are given by gradients of convex functions. In this work the main emphasis will be on _non-traditional_ cost functions, i.e. costs that can attain the value $+\infty$, as this project is motivated by the study of transportation with respect to the so-called _polar cost_ given by

(1) $p(x,y)=-\ln(\langle x,y\rangle-1),$

where $p(x,y)=+\infty$ if $\langle x,y\rangle\leq 1$. This cost function is linked with the polarity transform (see [3, 4]), similarly to the strong connection of the quadratic cost with the Legendre transform. Transportation with respect to the polar cost was first considered in [7]. We provide necessary conditions on pairs of measures, together with a cost $c$, for which finite cost plans exist. To this end, we discuss the class of functions connected with a cost, called its $c$-class (see the definition in Section 2.2). The optimality of a plan is linked with the possibility of finding a "potential" for the plan, which is a $c$-class function such that the plan lies on its $c$-subgradient (yet another important notion we discuss in depth, see the definition in Equation (7)). We will see shortly that the mere existence of a finite cost plan between two measures $\mu$ and $\nu$ implies that the two measures considered are $c$-compatible, namely that for any measurable set $A$ in the measure space $(X,\mu)$, one has that $\mu(A)\leq\nu(\\{y:\exists x\in A,\,\,c(x,y)<\infty\\})$.
This is quite intuitive – all points (up to measure $0$) in $A$ must be mapped to points in the target space with which they have finite cost. This $c$-compatibility of two measures is thus a necessary condition (for the formal definition of $c$-compatibility see Definition 3.2, and for the statement of the necessity of this condition see Lemma 3.3). As an example we will show (see Example 3.5) that $c$-compatibility is not a sufficient condition for the existence of a finite cost plan. However, if a finite cost plan exists, a slight strengthening of the $c$-compatibility condition, in which we demand a strict inequality, is already sufficient to ensure that the optimal plan has a potential. We will show later why our notion of "strong compatibility" is a very natural strengthening of compatibility, and discuss cases where two measures are $c$-compatible but not strongly $c$-compatible and how this implies that the transport problem is decomposable into sub-problems. In this note we only consider symmetric cost functions $c:X\times X\to(-\infty,\infty]$ with $c(x,y)=c(y,x)$, but to distinguish the two variables we denote the second copy of $X$ by $Y$ and write $c:X\times Y\to(-\infty,\infty]$. Our results hold for the non-symmetric case as well, with only minor adjustments. We will also add a lower-bound assumption on the cost $c$ which allows us to integrate it and its marginals (see also Example 2.5). We say that $c$ is essentially bounded from below with respect to $\mu$ and $\nu$ if there exist functions $a(x)\in L^{1}(\mu)$, $b(y)\in L^{1}(\nu)$ such that $c(x,y)\geq a(x)+b(y)$. For the polar cost this condition is satisfied if, for example, both measures have finite second moment. Our main theorem is the following (here $\partial^{c}\varphi$ denotes the $c$-subgradient of $\varphi$, see the definition in equation (7), and $\Pi(\mu,\nu)$ denotes all transport plans between $\mu$ and $\nu$, see the beginning of Section 2).

###### Theorem 1.1.

Let $X=Y$ be a Polish space, let $c:X\times Y\to(-\infty,\infty]$ be a continuous and symmetric cost function, essentially bounded from below with respect to probability measures $\mu\in{\mathcal{P}}(X)$ and $\nu\in{\mathcal{P}}(Y)$. Assume $\mu$ and $\nu$ are strongly $c$-compatible, namely satisfy that for any measurable $A\subset X$ we have

$\mu(A)+\nu(\\{y\in Y:\forall x\in A,\,\,c(x,y)=\infty\\})<1.$

If there exists _some_ finite cost plan transporting $\mu$ to $\nu$, then there exists a $c$-class function $\varphi$ and an optimal transport plan $\pi\in\Pi(\mu,\nu)$ concentrated on $\partial^{c}\varphi$.

The proof uses results from [5] on $c$-path-boundedness, a notion that replaces $c$-cyclic monotonicity from the Rockafellar–Rochet–Rüschendorf result (see [17, 16, 18]) in the case when the cost is non-traditional. $c$-path-boundedness is a necessary and sufficient condition for a set to be included in the $c$-subgradient of a $c$-class function. Since our initial main interest in developing this theory concerned the polar cost, for which we have a precise form of the $c$-subgradient, let us state the relevant theorem, which is almost a direct application of the theorem above, together with some simple analysis of polar subgradients as performed in [4]. By ${\rm Cvx}_{0}(\mathbb{R}^{n})$ we denote the class of lower semi-continuous convex functions from $\mathbb{R}^{n}$ to $[0,\infty]$ which take the value zero at the origin.
By ${\mathcal{A}}$ we denote the polarity transform on the class ${\rm Cvx}_{0}(\mathbb{R}^{n})$, defined in [3] and given in (14).

###### Theorem 1.2.

Let $X=Y=\mathbb{R}^{n}$ and let $\mu,\,\nu\in{\mathcal{P}}(\mathbb{R}^{n})$ be probability measures with finite second moment, which are strongly $p$-compatible where $p(x,y)=-\ln(\langle x,y\rangle-1)_{+}$ is the polar cost, that is, $\mu(K)+\nu(K^{\circ})<1$ for any convex set $K$ with $\mu(K)\neq 0,1$. Assume further that $\mu$ is absolutely continuous. Assume there exists some finite cost plan mapping $\mu$ to $\nu$. Then there exists $\varphi\in{\rm Cvx}_{0}(\mathbb{R}^{n})$ such that $\partial^{\circ}\varphi$ is an optimal transport map between $\mu$ and $\nu$, where

$\partial^{\circ}\varphi(x)=\\{y\in\mathbb{R}^{n}:\,\varphi(x){\mathcal{A}}\varphi(y)=\langle x,y\rangle-1>0\\}.$

In particular, for $\mu$-almost every $x$, the set $\partial^{\circ}\varphi(x)$ is a singleton.

We remark that the existence of a potential function for the cost $p$ and other non-traditional costs leads naturally to questions regarding the regularity of such potentials (as introduced by Caffarelli in [11] and developed, among others, by Trudinger and Wang in [21]). In this work we do not pursue this direction, and instead focus on the analysis of the existence of potentials, leaving the question of regularity for future work. In the second half of the paper we specialize to the case where $\nu$ is discrete. In this case we give a constructive proof for the existence of a transport map, where the $c$-class function is given as a finite infimum of "basic functions" (see (6)) associated with the cost. The advantage of this method is that much of the geometry of the problem is revealed. In the proof, we generalize a method used by K. Ball [6] for the quadratic cost, where all possible maps are parametrized by a weight vector, and the existence of the required one is shown using Brouwer's fixed point theorem. However, in contrast with the case of the classical quadratic cost function and other traditional costs, when the cost attains infinite values the set of all discrete measures with a given support, to which a measure $\mu$ can be mapped with finite cost, is given by an interesting polytope which we call the _Hall polytope_ of the measure $\mu$. The condition of strong $c$-compatibility corresponds to measures with weight vectors in the interior of the polytope. We present a thorough study of the structure and geometry of Hall polytopes (which for traditional costs are just simplices), which we use to prove Theorem 1.3 below. An advantage of this method is that we can relax the conditions on the cost function. We do need a condition of $c$-regularity for the measure $\mu$ (given in Definition 5.1), which for the polar cost is satisfied if, say, $\mu$ is absolutely continuous.

###### Theorem 1.3.

Let $X$ be some Polish space and $Y=\\{u_{i}\\}_{i=1}^{m}$. Assume $c:X\times Y\to(-\infty,\infty]$ is a measurable cost function, $\mu\in{\mathcal{P}}(X)$ is $c$-regular and $\nu=\sum_{i=1}^{m}\alpha_{i}\mathbbm{1}_{u_{i}}\in{\mathcal{P}}(Y)$. Assume, furthermore, that the intersection $\\{x\in X:c(x,u_{i})<\infty\\}\cap\\{x\in X:c(x,u_{j})<\infty\\}$ contains an open set for each pair $u_{i},u_{j}$. If $\mu$ and $\nu$ are strongly $c$-compatible then there exists an optimal transport plan $\pi\in\Pi(\mu,\nu)$ whose graph lies in the $c$-subgradient $\partial^{c}\varphi$ of a $c$-class function $\varphi:X\to[-\infty,\infty]$.
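To make the strong $c$-compatibility hypothesis concrete in the discrete setting of Theorem 1.3, note that for finitely supported measures it is a finite (if exponential) check over subsets of the support, in the spirit of Hall's marriage condition. The following is a minimal sketch of our own, restricting, as in Theorem 1.2, to sets $A$ with $\mu(A)\neq 0,1$:

```python
# Minimal sketch: strong c-compatibility for finitely supported
# measures, checked directly from the definition in Theorem 1.1.
# mu, nu are dicts {point: mass}; cost(x, y) may return math.inf.
# Subsets A with mu(A) in {0, 1} are excluded, as in Theorem 1.2.
import math
from itertools import combinations

def strongly_c_compatible(mu, nu, cost):
    xs = list(mu)
    for r in range(1, len(xs)):  # proper non-empty subsets A of supp(mu)
        for A in combinations(xs, r):
            mu_A = sum(mu[x] for x in A)
            # nu-mass of the set {y : c(x, y) = inf for all x in A}
            nu_inf = sum(
                m for y, m in nu.items()
                if all(math.isinf(cost(x, y)) for x in A)
            )
            if mu_A + nu_inf >= 1:
                return False
    return True

# 1-D toy with the polar cost, where <x, y> = x * y:
p = lambda x, y: -math.log(x * y - 1) if x * y > 1 else math.inf
print(strongly_c_compatible({0.5: 0.5, 3.0: 0.5}, {0.5: 0.5, 3.0: 0.5}, p))
# -> False: the inequality is tight here, so only plain c-compatibility
#    holds, and the problem decomposes as discussed in Section 7.
```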
The case where the measures $\mu$ and $\nu$ are $c$-compatible but not strongly so can be analyzed as well. In this case we can write $\mu=\mu_{1}+\mu_{2}$ and $\nu=\nu_{1}+\nu_{2}$ where $\mu_{1}(X)=\nu_{1}(Y)$ (and so $\mu_{2}(X)=\nu_{2}(Y)$), where the measures $\mu_{1}$ and $\mu_{2}$ are concentrated on disjoint sets, as are $\nu_{1}$ and $\nu_{2}$, and in such a way that any finite cost transport plan $\pi\in\Pi(\mu,\nu)$ is given as a sum of $\pi_{1}\in\Pi(\mu_{1},\nu_{1})$ and $\pi_{2}\in\Pi(\mu_{2},\nu_{2})$. We illustrate this in Section 7.

### Structure of the paper

Section 2 is dedicated to gathering all the required definitions, notions, and previous results. In Section 3 we discuss the notions of $c$-compatibility and strong $c$-compatibility together with their geometric interpretation. In Section 4 we prove Theorem 1.1. In Section 5 we go back to the discrete case and show how one may treat it using some deep structural properties of Hall polytopes, which we establish, proving Theorem 1.3. In Section 6 we specialize to the polar cost, showing that for absolutely continuous measures the optimal plan is given by a map. In Section 7 we discuss the case of measures which are $c$-compatible but not strongly $c$-compatible. For completeness, we include Appendix A, in which we review $c$-subgradients, with detailed examples and geometric intuition.

### Acknowledgment

The authors were supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 770127). The second named author is grateful to the Azrieli foundation for the award of an Azrieli fellowship.

## 2. Background and preliminary observations

### 2.1. Transport plans and maps

Given two measure spaces $X,\,Y$, a measurable cost function $c:X\times Y\to(-\infty,\infty]$ (when referring to a function on $X\times Y$ as "measurable" we assume both that it is measurable with respect to the product $\sigma$-algebra and that its fibers $f(\cdot,y)$ and $f(x,\cdot)$ are measurable functions on $X$ and $Y$, respectively, for any $x\in X$ and $y\in Y$), and probability measures $\mu$ on $X$ and $\nu$ on $Y$, we say that there exists a _$c$-optimal transport map_ between them if the following infimum is attained:

$\inf_{T}\int_{X}c(x,T(x))d\mu(x),$

where the infimum runs over measurable _transport maps_ $T:X\to Y$, i.e. maps satisfying $\nu(B)=\mu(T^{-1}(B))$ for all measurable sets $B\subset Y$. We say that there exists a _$c$-optimal plan_ between them if the infimum

(2) $\inf_{\pi}\int_{X\times Y}c(x,y)d\pi(x,y)$

is attained, where $\pi\in\Pi(\mu,\nu)$, namely $\pi$ is a probability measure on $X\times Y$ satisfying $\pi(A\times Y)=\mu(A),\ \ \ \pi(X\times B)=\nu(B)$ for all measurable sets $A\subset X$ and $B\subset Y$. Every transport map induces a transport plan supported on its graph, while not every plan is induced by a map. We denote the infimum in (2), also called the "total cost", by $C(\mu,\nu)$.
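For finitely supported measures, (2) is a finite linear program over the entries of $\pi$; a minimal sketch (using scipy, with infinite-cost cells simply excluded as variables) is:

```python
# Sketch: the Kantorovich problem (2) for finitely supported measures
# as a linear program; cells with c(x, y) = inf are not variables.
import numpy as np
from scipy.optimize import linprog

def total_cost(mu, nu, C):
    """mu (m,), nu (n,): weights; C (m, n): cost matrix, np.inf allowed.
    Returns C(mu, nu), or None if no finite-cost plan exists."""
    m, n = C.shape
    finite = np.isfinite(C)
    idx = np.argwhere(finite)  # usable cells (i, j), row-major order
    k = len(idx)
    A_eq = np.zeros((m + n, k))
    for col, (i, j) in enumerate(idx):
        A_eq[i, col] = 1      # row marginal:    sum_j pi_ij = mu_i
        A_eq[m + j, col] = 1  # column marginal: sum_i pi_ij = nu_j
    res = linprog(C[finite], A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                  bounds=(0, None), method="highs")
    return res.fun if res.success else None
```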
Due to the Kantorovich Duality Theorem [13, 14], when $c$ is lower semi-continuous, the total cost is equal to (3) $\sup_{\varphi,\psi}\left\\{\int_{X}\varphi d\mu+\int_{Y}\psi d\nu:\ \varphi\in L_{1}(X,\mu),\,\psi\in L_{1}(Y,\nu)\ {\rm admissible}\right\\}$ where $(\varphi,\psi)$ is called an _admissible pair_ if $\varphi:X\to[-\infty,\infty]$, $\psi:Y\to[-\infty,\infty]$ satisfy $\forall(x,y)\in X\times Y,\,\,\,\ \varphi(x)+\psi(y)\leq c(x,y).$ In the case where $\varphi=+\infty$ and $\psi=-\infty$ we stipulate $-\infty+\infty=-\infty$, namely in such a case the condition above holds regardless of the value of $c(x,y)$.

### 2.2. The $c$-transform

Motivated by (3), for every function $\psi:Y\to[-\infty,\infty]$ one may consider the largest function $\varphi$ for which $(\varphi,\psi)$ is an admissible pair, and vice versa. This gives rise to the $\mathbf{c}$-transform, defined by (4) $\psi^{c}(x)=\inf_{y}(c(x,y)-\psi(y)),$ and (5) $\varphi^{c}(y)=\inf_{x}(c(x,y)-\varphi(x)).$

###### Remark 2.1.

Here, if the right hand side features infinities of opposite signs, which may occur only if $\psi(y)=\infty$ (as $c\neq-\infty$), we use the opposite convention, namely $-\infty+\infty=+\infty$, since when the cost $c(x,y)$ is infinite there is no restriction on the sum $\varphi(x)+\psi(y)$. In general one must be careful with sums of infinities of opposite signs, as there is no obvious “rule of thumb” that can apply everywhere.

Note that for a general cost we may lose the measurability of $\varphi$ when applying the $c$-transform, as well as integrability, even under the assumption that $c$ is measurable in the strong sense we have postulated. When $c$ is continuous, however, this is less of a problem. Also, by truncating the functions and taking limits, the issue of integrability can sometimes be resolved. Nevertheless, one should be extra careful when using (3) for a pair $\varphi,\varphi^{c}$ when the cost is non-traditional, and in the existing literature it is not always clear for which theorems the non-traditional case follows from the same proof. When $X=Y$ and $c(\cdot,\cdot)$ is symmetric in its arguments the transforms in (4) and (5) coincide. Hence, slightly abusing notation, we use the same notation for both. We define the $c$-class as the image of the $c$-transform $\\{\psi^{c}:\psi:Y\to[-\infty,\infty]\\}$, or equivalently, as all the functions $\varphi$ such that $\varphi^{cc}=\varphi$. By definition, any function in the $c$-class is an infimum of _basic functions_, which are functions of the form (6) $\varphi(x)=c(x,y_{0})+t$ for some $y_{0}\in Y$ and $t\in\mathbb{R}$. It is useful to notice that the $c$-class is always closed under pointwise infimum (this fact is commonly known and used, see e.g. [1, 22], and a simple proof can be found in [24]).

### 2.3. The $c$-subgradient

Given a function $\varphi$ in the $c$-class, its $c$-subgradient is the subset of $X\times Y$ given by (7) $\partial^{c}\varphi=\\{(x,y):\,\varphi(x)+\varphi^{c}(y)=c(x,y)\,\text{ and }\,c(x,y)<\infty\\}.$ To illustrate the relevance of $c$-subgradients to the study of optimal transport, let us present a folklore argument, which can be made precise for traditional costs, and which we only use as motivation but do not claim holds in general. In the Kantorovich Duality Theorem, recalled as (3) above, one is inclined to replace $\psi$ with the largest admissible partner of $\varphi$ (at least so long as it is measurable and in $L_{1}(\nu)$), and then replace $\varphi$ by $\varphi^{cc}$.
In this sense, one may think of (3) applied only to admissible pairs $(\varphi,\varphi^{c})$, where $\varphi=\varphi^{cc}$ is in the $c$-class. However, for any $\pi\in\Pi(\mu,\nu)$ and any $\varphi$ in the $c$-class, $\int_{X}\varphi(x)\,d\mu(x)+\int_{Y}\varphi^{c}(y)\,d\nu(y)=\int_{X\times Y}(\varphi(x)+\varphi^{c}(y))\,d\pi(x,y)\leq\int_{X\times Y}c(x,y)\,d\pi(x,y).$ So for equality between the left and right hand sides to be attained for some (potential) $\varphi$ and (optimal plan) $\pi$, we see that $\pi$ must be concentrated on the set $\partial^{c}\varphi$. In other words, finding optimal plans admitting a potential is equivalent to finding some plan supported on a $c$-subgradient. While this argument is not precise (in particular, we ignored measurability and integrability assumptions, applying (3) to a pair $(\varphi,\varphi^{c})$), it constitutes the motivation behind searching for potentials in optimal transport problems. The above observation shows the importance of the notion of the $c$-subgradient mapping. The name $c$-subgradient is connected to the fact that for the classical cost $c(x,y)=-\langle x,y\rangle$, the $c$-class consists of upper semi-continuous concave functions, the $c$-transform of $-\varphi$ is $-{\mathcal{L}}(\varphi)$, where ${\mathcal{L}}$ denotes the Legendre transform, and the $c$-subgradient of $-\varphi$ at $x$ is the usual subgradient $\partial\varphi(x)$. So as not to disturb the flow of the paper, we gathered some basic facts about the $c$-subgradient, including the geometric intuition behind it, in Appendix A.

### 2.4. $c$-cyclic monotonicity and $c$-path-boundedness

The connection between optimality of a plan and some geometric information on its support is quite intuitive: if a plan is optimal, then we should not gain any profit by interchanging several portions of it. This is the idea behind the well known notion of $c$-cyclic monotonicity. Given a cost $c:X\times Y\to(-\infty,\infty]$, a subset $G\subset X\times Y$ is called $c$-cyclically monotone if $c(x,y)<\infty$ for all $(x,y)\in G$, and for any $m$, any $(x_{i},y_{i})_{i=1}^{m}\subset G$, and any permutation $\sigma$ of $[m]=\\{1,\ldots,m\\}$ it holds that (8) $\sum_{i=1}^{m}c(x_{i},y_{i})\leq\sum_{i=1}^{m}c(x_{i},y_{\sigma(i)}).$ This definition seems to have been first introduced by Knott and Smith [19], as a generalization of cyclic monotonicity considered by Rockafellar [17] in the case of the quadratic cost. It is easy to check that if $\varphi$ is a $c$-class function then any set $G\subset\partial^{c}\varphi$ is $c$-cyclically monotone. The theorems of Rockafellar, Rochet and Rüschendorf give the reverse implication in the case of a traditional cost. Namely, when $c:X\times Y\to\mathbb{R}$, a set $G\subset X\times Y$ is $c$-cyclically monotone if and only if there exists a $c$-class function $\varphi$ such that $G\subset\partial^{c}\varphi$. For non-traditional costs this is no longer the case, and one may construct $c$-cyclically monotone sets which admit no potential. In [5], the corresponding result for non-traditional costs is provided. Cyclic monotonicity has to be replaced by a stronger notion, which we called $c$-path-boundedness.

###### Definition 2.2.

Fix sets $X,\,Y$ and $c:X\times Y\to(-\infty,\infty]$.
A subset $G\subset X\times Y$ will be called $c$-path-bounded if $c(x,y)<\infty$ for any $(x,y)\in G$, and for any $(x,y)\in G$ and $(z,w)\in G$, there exists a constant $M=M((x,y),(z,w))\in\mathbb{R}$, such that the following holds: For any $m\in{\mathbb{N}}$ and any $(x_{i},y_{i})_{i=2}^{m-1}\subset G$, denoting $(x_{1},y_{1})=(x,y)$ and $(x_{m},y_{m})=(z,w)$, we have $\sum_{i=1}^{m-1}\left(c(x_{i},y_{i})-c(x_{i+1},y_{i})\right)\leq M.$

The fact that a $c$-path-bounded set is also $c$-cyclically monotone is easy to establish (see [5]). With this definition the main theorem of [5] can be stated.

###### Theorem 2.3.

Let $X,\,Y$ be sets and let $c:X\times Y\to(-\infty,\infty]$ be given. A set $G\subset X\times Y$ is $c$-path-bounded if and only if there exists a $c$-class function $\varphi$ such that $G\subset\partial^{c}\varphi$.

It was also demonstrated in [5] that under certain conditions the notions of $c$-cyclic monotonicity and $c$-path-boundedness do coincide. One such instance, which will be used in this paper, is explained and formulated in Proposition 4.1 in Section 4.

### 2.5. Some known results about existence of optimal plans and potentials

Having fixed a cost, the discussion about the structure of an optimal plan naturally splits into several components. The first, which is relevant only when the cost is non-traditional, is the existence of some finite cost plan (necessary conditions will be discussed in the next section). Further, one can ask whether an optimal plan exists. This is the object of the next theorem, which is quoted from Villani [23]. Recall that $\Pi(\mu,\nu)$ denotes the set of all probability measures on $X\times Y$ whose marginals are $\mu\in{\mathcal{P}}(X)$ and $\nu\in{\mathcal{P}}(Y)$, and that $c:X\times Y\to(-\infty,\infty]$ is essentially bounded from below with respect to $\mu$ and $\nu$ if there exist upper semi-continuous functions $a:X\to[-\infty,\infty)$, $a\in L_{1}(\mu)$ and $b:Y\to[-\infty,\infty)$, $b\in L_{1}(\nu)$ such that $c(x,y)\geq a(x)+b(y)$ for all $x\in X,\,y\in Y$.

###### Theorem 2.4.

Let $X,\,Y$ be two Polish spaces, let $\mu\in{\mathcal{P}}(X)$ and $\nu\in{\mathcal{P}}(Y)$. Let $c:X\times Y\to(-\infty,\infty]$ be a lower semi-continuous cost function which is essentially bounded from below with respect to $\mu$ and $\nu$. Then there exists a $c$-optimal plan $\pi\in\Pi(\mu,\nu)$.

Let us note that, in the above theorem, the existence of a plan with finite total cost is not assumed: when no finite cost plan exists, any plan (say, $\mu\otimes\nu$) is optimal in a trivial sense. Further, a simple example demonstrates that without some kind of assumption on boundedness from below of the cost, the total cost may be $-\infty$, and in this case optimal measures can be concentrated on sets which are far from being $c$-cyclically monotone.

###### Example 2.5.

Let $p(x,y)=-\ln(xy-1)_{+}$ be the polar cost on $\mathbb{R}_{+}\times\mathbb{R}_{+}$. Let $\mu$ be a discrete probability measure on $\mathbb{R}_{+}$ given by $\mu=\sum_{n=2}^{\infty}\alpha_{n}\mathbbm{1}_{n}$, where $\alpha_{n}$ are such that $\sum_{n=2}^{\infty}\alpha_{n}=1$ and $\sum_{n=2}^{\infty}\ln(n^{3/2})\alpha_{n}=\infty$. Consider transport plans of $\mu$ to itself, namely $\Pi(\mu,\mu)$. We claim that in this case the identity map $x\mapsto x$ induces a transport plan whose total cost is $-\infty$ (in particular, it is optimal) but which is not supported on a $p$-cyclically monotone set. Indeed, consider the measure $\pi_{\mu}$ on the diagonal whose projection is $\mu$.
Its total cost is $\sum_{n=2}^{\infty}-\ln(n^{2}-1)\alpha_{n}\leq-\sum_{n=2}^{\infty}\ln(n^{3/2})\alpha_{n}=-\infty,$ since $n^{2}-1\geq n^{3/2}$ for all $n\geq 2$. Clearly even for the two diagonal points $(x_{1},y_{1})=(2,2)$ and $(x_{2},y_{2})=(3,3)$ it holds that $-\ln(2\cdot 2-1)-\ln(3\cdot 3-1)=-\ln(24)>-\ln(2\cdot 3-1)-\ln(3\cdot 2-1)=-\ln(25).$ We thus see that an optimal plan (albeit with negative infinity cost) may have support which is not $c$-cyclically monotone.

Analysing the geometric structure of an optimal plan, after showing its existence, is a problem which has a long history. After Brenier [10] and, following him, Rüschendorf [18] determined the classical structure of cyclic monotonicity of optimal plans, Gangbo and McCann [12] extended the result to lower semi-continuous cost functions bounded from below. They showed that every finite optimal plan with respect to such costs lies on a $c$-cyclically monotone set. Beiglböck, Goldstern, Maresch, and Schachermayer [8] generalised the result further by removing regularity assumptions on the cost:

###### Theorem 2.6 (See [8, Theorem 1.a]).

Let $X,\,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$ and let $c:X\times Y\to[0,\infty]$ be a Borel measurable cost function. Then every finite optimal transport plan is $c$-cyclically monotone.

The reverse implication, that $c$-cyclic monotonicity implies optimality, is not true in general, as shown in Example 3.1 in [1]. In [8, Theorem 1.b] it was shown that, for a measurable cost function $c$, the assumption that the “infinity” set $\\{(x,y):c(x,y)=\infty\\}$ is a union of a closed set and a $\mu\otimes\nu$-null set implies that every finite cost $c$-cyclically monotone plan is optimal. Finally, the question of the existence of a potential for the optimal plan remains. A result in this direction was presented in [8]; it states that, with assumptions as in Theorem 2.6, a finite cost plan admits a potential if and only if it is “robustly optimal” (see Definition 1.6 in [8]). In particular, their result implies that a plan which admits a potential is optimal. In this note, our main goal is to find conditions on the pairs of measures that guarantee the existence of a potential for the optimal transport plan between them, thus guaranteeing, in fact, robust optimality.

## 3\. Compatibility

Given two probability measures, before trying to find an element of $\Pi(\mu,\nu)$ with some good structure (say, a potential), or an optimal element with respect to the cost, one must figure out whether some element $\pi\in\Pi(\mu,\nu)$ has a finite cost. Clearly, if the cost function is bounded, we may find a finite cost plan between any pair of measures. However, if the cost admits the value $+\infty$, an obvious necessary condition for the existence of a finite cost plan is that every set in $(X,\mu)$ has “enough” points in $(Y,\nu)$ to which it can be mapped for a finite cost. In the case of two discrete measures, this necessary condition is also sufficient, which is the subject of Hall’s marriage theorem. We start with this simple case as it gives some intuition for our next steps.

### 3.1. Starting point: Hall’s Marriage Theorem

In the following motivating example, for some $(x_{i})_{i=1}^{m}\subset X$ let $\mu=\sum_{i=1}^{m}\frac{1}{m}\mathbbm{1}_{x_{i}}$ be a probability measure on $X$, and for $(y_{i})_{i=1}^{m}\subset Y$ let $\nu=\sum_{i=1}^{m}\frac{1}{m}\mathbbm{1}_{y_{i}}$ be a probability measure on $Y$. Let $c:X\times Y\to(-\infty,\infty]$ be an arbitrary cost.
A finite cost map is given by a bijection $T:(x_{i})_{i=1}^{m}\to(y_{i})_{i=1}^{m}$ such that $c(x_{i},T(x_{i}))<\infty$ for all $i=1,\dots,m$. The bijection $T$ corresponds, of course, to a permutation $\sigma:[m]\to[m]$. By Birkhoff’s theorem on the extremal points of bi-stochastic matrices, every transport plan $\pi\in\Pi(\mu,\nu)$ is a convex combination of permutation maps $T$. The condition for the existence of a finite cost map/plan can thus be reformulated in a graph-theoretic way: Let $G$ be a bipartite graph with vertex set $V=(x_{i})_{i=1}^{m}\cup(y_{i})_{i=1}^{m}$ and edges $E=\\{(x_{i},y_{j}):\,c(x_{i},y_{j})<\infty\\}$. A finite cost map $T$ corresponds to a matching in this graph. Hall’s Marriage Theorem gives the necessary and sufficient conditions for such a matching to exist.

###### Theorem 3.1 (Hall’s Marriage Theorem).

A bipartite graph $G$ with a vertex set $V_{1}\cup V_{2}$, such that $|V_{1}|=|V_{2}|$, contains a complete matching if and only if $G$ satisfies Hall’s condition $|N_{G}(S)|\geq|S|\text{ for every }S\subset V_{1},$ where $N_{G}(S)\subset V_{2}$ is the set of all neighbors of vertices in $S$.

The condition can be reformulated in terms of the measures, as $\mu(A)\leq\nu(\\{y:\exists x\in A,\,\,c(x,y)<\infty\\})$ for any $A\subset X$, or, equivalently, $\mu(A)+\nu(\\{y:\forall x\in A,\,\,c(x,y)=\infty\\})\leq 1.$ In fact, in this discrete and finite case, once we have determined the existence of a finite cost map, we may consider, among the finite number of possible matchings, the one with minimal cost (there may, of course, be more than one). It is then not hard to show (and will follow from our results as well) that this resulting optimal plan must lie on a $c$-subgradient of a $c$-class function. (This fact follows from a variation of a theorem of Rüschendorf [18], see also [5].)

### 3.2. The $c$-compatibility condition

The continuous counterpart of Hall’s condition is an obvious necessary condition for the existence of a finite cost plan.

###### Definition 3.2.

Let $X,\,Y$ be measure spaces and $c:X\times Y\to(-\infty,\infty]$ be a measurable cost function. We say that two probability measures $\mu\in{\mathcal{P}}(X)$ and $\nu\in{\mathcal{P}}(Y)$ are _$c$-compatible_ if for any measurable $A\subset X$ it holds that $\mu(A)+\nu(\\{y:\forall x\in A,\,\,c(x,y)=\infty\\})\leq 1.$

It is not hard to check that $c$-compatibility is in fact a symmetric notion, and the above condition holds if and only if for any $B\subset Y$ we have $\nu(B)+\mu(\\{x:\forall y\in B,\,\,c(x,y)=\infty\\})\leq 1.$ Indeed, to get the latter we let $A=\\{x:\forall y\in B,\,\,c(x,y)=\infty\\}$, in which case $B\subset\\{y:\forall x\in A,\,\,c(x,y)=\infty\\}$. Applying the assumed inequality, we get $\nu(B)+\mu(A)\leq\nu(\\{y:\forall x\in A,\,\,c(x,y)=\infty\\})+\mu(A)\leq 1.$ The fact that any plan $\pi\in\Pi(\mu,\nu)$ which has finite cost must be concentrated on the finiteness set $S=\\{(x,y):c(x,y)<\infty\\}\subset X\times Y$ implies the necessity of the condition, as is given in the following lemma.

###### Lemma 3.3.

Let $X,\,Y$ be measure spaces and $c:X\times Y\to(-\infty,\infty]$ be a measurable cost function. Given $\mu\in{\mathcal{P}}(X)$ and $\nu\in{\mathcal{P}}(Y)$, assume there exists $\pi\in\Pi(\mu,\nu)$ which is concentrated on $S=\\{(x,y)\in X\times Y:\ c(x,y)<\infty\\}$. Then $\mu$ and $\nu$ are $c$-compatible.

###### Proof.

Let $A\subset X$. As $\pi\in\Pi(\mu,\nu)$, we know $\mu(A)=\pi(A\times Y)$, and by assumption, $\pi(A\times Y)=\pi((A\times Y)\cap S)$.
Similarly, $\nu(\\{y:\forall x\in A,\,\,c(x,y)=\infty\\})=\pi((X\times\\{y:\forall x\in A\,\,,\,\,c(x,y)=\infty\\})\cap S).$ However, these two sets are disjoint, since if $(x,y)\in S$ then $c(x,y)<\infty$, so if $x\in A$ then clearly $y$ does not satisfy that for all $x\in A$, $c(x,y)=\infty$. Therefore, the $\pi$-measures of the two sets sum to at most $1$. ∎

It is useful to know that in certain situations the $c$-compatibility condition is also sufficient for the existence of a finite cost plan; such is the case when the finiteness set $S$ is closed. One may then use the following theorem of Strassen [20].

###### Theorem 3.4 (Strassen).

Let $X,\,Y$ be complete separable metric measure spaces and let $S$ be a non-empty closed subset of $X\times Y$. Given $\mu\in{\mathcal{P}}(X)$ and $\nu\in{\mathcal{P}}(Y)$, there exists $\pi\in\Pi(\mu,\nu)$ which is supported on $S$ if and only if for all open $B\subset Y$ (9) $\nu(B)\leq\mu(P_{X}(S\cap(X\times B))),$ where $P_{X}$ is the projection onto $X$.

In the case of a non-traditional cost $c$, the relevant set $S$ considered in Lemma 3.3 is not necessarily closed. If $S$ is closed, and $c$ is bounded on it, then the condition in Strassen’s Theorem is sufficient for the existence of a finite-cost transport plan. In some cases, one may use this together with the theorems stated in Section 2.5 and the results from [5] to show that a minimizing plan exists and is concentrated on the graph of a $c$-subgradient. An example of such reasoning for some explicit cost functions will appear in the forthcoming [2]. However, for certain important costs, and in particular for the polar cost $p$ defined above, which serves as a motivating example for this study, the set $S$ of finite-cost pairs is not closed. To illustrate the problem, let us give an example of two measures on intervals which are $c$-compatible (we will use the one dimensional polar cost) but do not admit any plan supported on the finiteness set $S$.

###### Example 3.5.

Consider once more the polar cost $p(x,y)=-\ln(xy-1)_{+}$ on $\mathbb{R}^{+}\times\mathbb{R}^{+}$. Its finiteness set is $S=\\{(x,y):xy>1\\}$. Let $\gamma$ be the uniform measure on the set $S_{1}=\\{(x,1/x)\in\mathbb{R}^{2}:x\in[1/2,2]\\}$ and let $\mu$ be its marginal on the first coordinate and $\nu$ its marginal on the second coordinate. It is not hard to check that the measures $\mu$ and $\nu$ (which are the same measure) are $p$-compatible. Indeed, let $A\subset\mathbb{R}^{+}$ be open and note that $P_{X}((\mathbb{R}^{+}\times A)\cap S)=\cup_{y\in A}(1/y,\infty)=(1/\sup(A),\infty).$ Additionally, for any number $\alpha\in[1/2,2]$ we have, by definition, that $\nu([1/2,\alpha])=\mu([1/\alpha,2])$. Combining these observations with the continuity of $\mu$ and $\nu$, we see that the measures are $p$-compatible: $\nu(A)\leq\nu([1/2,\sup(A)])=\mu([1/\sup(A),2])=\mu([1/\sup(A),\infty))=\mu(P_{X}((X\times A)\cap S)).$ We turn to show that there is no transport plan $\pi\in\Pi(\mu,\nu)$ supported on $S$. Assume towards a contradiction that there exists such a transport plan $\pi$. In particular, this implies that there exists some rectangle $B=[x_{1},x_{2}]\times[y_{1},y_{2}]\subset S$ of positive measure. By the definition of $S$, we have that $x_{1}y_{1}>1$. As $\pi$ is supported in $S$ we see that $\mu([1/2,x_{1}])=\pi([1/2,x_{1}]\times[x_{1}^{-1},2])\leq\nu([x_{1}^{-1},2])=\mu([1/2,x_{1}])$ where the last equality follows from the definition of $\mu$ and $\nu$. We thus have equalities all along.
Similarly, $\nu([1/2,x_{1}^{-1}])=\pi([x_{1},2]\times[1/2,x_{1}^{-1}])\leq\mu([x_{1},2])=\nu([1/2,x_{1}^{-1}]).$ So we conclude that $\pi([1/2,x_{1}]\times[x_{1}^{-1},2])+\pi([x_{1},2]\times[1/2,x_{1}^{-1}])=\mu([1/2,x_{1}])+\mu([x_{1},2])=1,$ that is, $\pi$ is supported on $([1/2,x_{1}]\times[x_{1}^{-1},2])\cup([x_{1},2]\times[1/2,x_{1}^{-1}])$, which is a contradiction to the fact that $\pi(B)>0$.

Figure 1. A schematic drawing of Example 3.5.

### 3.3. The Hall polytope

Let us consider a special case, which will be the focus of Section 5, when one of the measures is discrete and the other one arbitrary. In such a case, the compatibility condition can be realized geometrically by a polytope, which we call the Hall polytope. We use $\Delta_{m}=\\{\alpha\in\mathbb{R}^{m}:\alpha_{i}\geq 0,\,\sum_{i=1}^{m}\alpha_{i}=1\\}$ to denote the $(m-1)$-dimensional simplex.

###### Definition 3.6.

Let $X$ be some measure space, and $Y=\\{u_{i}\\}_{i=1}^{m}$. Assume $c:X\times Y\to(-\infty,\infty]$ is a measurable cost function, and let $\mu$ be a probability measure supported on $\\{x\in X:\exists i\in[m],\,\,c(x,u_{i})<\infty\\}$. Define the _Hall polytope_ associated with $(u_{i})_{i=1}^{m}$ and $\mu$ by $P=P((u_{i})_{i=1}^{m},\mu)=\bigcap_{I\subset[m]}\\{\alpha\in\Delta_{m}:\sum_{i\in I}\alpha_{i}\leq\mu(A_{I})\\},$ where $A_{I}:=\\{x\in X:\ \exists i\in I\ c(x,u_{i})<\infty\\}.$

Note that the definition implies that $\mu$ and $\nu=\sum_{i=1}^{m}\alpha_{i}\mathbbm{1}_{u_{i}}$ are $c$-compatible if and only if $\alpha\in P((u_{i})_{i=1}^{m},\mu)$. We return to this definition, and present a careful study of the resulting polytopes, in Section 5.

### 3.4. Strong $c$-compatibility

We saw in Example 3.5 that $c$-compatibility is not a sufficient condition for the existence of a finite cost plan. In fact, we will see in Example 7.2 that there exist $c$-compatible measures which do admit a finite cost plan but not a potential. Therefore, we consider a slight strengthening of $c$-compatibility, which will ensure that the existence of a finite cost plan implies the existence of a potential. We call this condition strong $c$-compatibility, and it amounts to asking for strict inequality in the defining inequalities.

###### Definition 3.7.

Let $X,\,Y$ be measure spaces and $c:X\times Y\to(-\infty,\infty]$ be a measurable cost function. We say that two probability measures $\mu\in{\mathcal{P}}(X)$ and $\nu\in{\mathcal{P}}(Y)$ are _strongly $c$-compatible_ if they are $c$-compatible and for any measurable $A\subset X$ with $0<\mu(A)<1$ it holds that $\mu(A)+\nu(\\{y:\forall x\in A,\,\,c(x,y)=\infty\\})<1.$

The motivation for this specific strengthening of the condition of $c$-compatibility is twofold: First, if two measures are $c$-compatible but not strongly $c$-compatible, this means that there exists a decomposition of the transport problem into two sub-problems (see Section 7). Indeed, this is quite clear from the definition: if some set $A$ of measure $\mu(A)\in(0,1)$ satisfies the equality $\mu(A)+\nu(\\{y:\forall x\in A,\,\,c(x,y)=\infty\\})=1,$ then letting $B=\\{y:\forall x\in A,\,\,c(x,y)=\infty\\}$ we see that $A$ must be mapped to $Y\setminus B$ (and they have the same measure) and the preimage of $B$ must be $X\setminus A$. That is, the original transport problem is in fact decomposed into two disjoint transport problems.
Second, in the discrete setting of Section 3.3, strong $c$-compatibility corresponds to the weight vector $\alpha$ residing in the interior of the Hall polytope, which makes for an elegant assumption. We stress that strong $c$-compatibility is not a necessary condition, only $c$-compatibility is. Even if one of the measures is discrete, it could be that the Hall polytope has an empty interior, but good transport maps, admitting a potential, exist.

### 3.5. The geometric meaning of strong $c$-compatibility

It will be very useful to rephrase the condition of strong $c$-compatibility in terms that are more geometric. In fact, looking back at the proof of the symmetry of the notion of $c$-compatibility, it seems evident that we do not need to assume the inequality $\mu(A)+\nu(\\{y:\forall x\in A,\,\,c(x,y)=\infty\\})\leq 1$ (or a strict inequality, in the strong $c$-compatibility assumption) for all sets $A$, and it suffices to consider sets of the form $\\{x:\forall y\in B,\,\,c(x,y)=\infty\\}$. To make this observation more precise, we introduce the notion of the $c$-dual of a set.

###### Definition 3.8 ($c$-duality).

Let $X,\,Y$ be two sets and let $c:X\times Y\to(-\infty,\infty]$. Fix $t\in(-\infty,\infty]$ (which will be omitted in the notation as it is a fixed parameter). For $K\subset X$ define the _$c$-dual_ set of $K$ as $K^{c}=\bigcap_{x\in K}\\{y\in Y:\,c(x,y)\geq t\\}=\\{y\in Y:\,\inf_{x\in K}c(x,y)\geq t\\}.$

It will be convenient to assume $X=Y$ and that the cost is symmetric; as this is the case relevant for this note, we restrict to it. However, the reader will find it easy to generalize to the case where $X\neq Y$, in which case there are two different “$c$-duality” operations, one mapping sets in $X$ to sets in $Y$, and one mapping sets in $Y$ to sets in $X$, similarly to the $c$-transform. Let us point out that for the polar cost $p(x,y)=-\ln(\langle x,y\rangle-1)_{+}$ and $t=\infty$, the set $K^{p}$ is the well-known polar set $K^{\circ}$. Indeed, we have that $\inf_{x\in K}p(x,y)=\infty$ if and only if $\sup_{x\in K}\langle x,y\rangle\leq 1$. For the classical cost $c(x,y)=-\langle x,y\rangle$ and $t=-1$, we also get the polarity map.

###### Remark 3.9.

If one adds the assumptions that $X$ and $Y$ are measure spaces and that the cost is upper semi-continuous, it follows that for a fixed $x$, say, the set $\\{y:\,c(x,y)\geq t\\}$ is closed, and hence so is $K^{c}$.

Having defined an operation on sets, let us notice some basic properties.

###### Lemma 3.10.

For every $K,L\subset X$, the following hold:

(i) $K\subset(K^{c})^{c}=K^{cc}$,

(ii) if $L\subset K$ then $K^{c}\subset L^{c}$,

(iii) $K^{c}=K^{ccc}$.

###### Proof.

(i) This follows directly from the definition. If $x\in K$ and $y\in K^{c}$ then $c(x,y)\geq t$ so that $x\in K^{cc}$. (ii) Assume that $L\subset K$, and $y\in K^{c}$; then $c(x,y)\geq t$ for all $x\in K$ and in particular for all $x\in L$, so $y\in L^{c}$. (iii) From (i) we know that $K\subset K^{cc}$, so from (ii) we get $K^{c}\supset K^{ccc}$. On the other hand, applying (i) directly to $K^{c}$ we get $K^{ccc}\subset K^{c}$, and equality is obtained. ∎

The similarity of $c$-duality to the $c$-transform is apparent. We are thus motivated to define the $c$-class of sets, on which the $c$-duality is an order reversing bijection. In order to avoid confusion, as we suppressed $t$ in the notation, we restrict the next definition to $t=\infty$, the case relevant for this note.

###### Definition 3.11 ($c$-class and $c$-envelope).

Fix $t=\infty$.
The $c$-class of sets consists of all closed sets $K\subset X$ such that there exists some $L\subset X$ with $K=L^{c}$. For any set $K\subset X$ we define its $c$-envelope as the set $K^{cc}$, which is the smallest $c$-class set containing $K$.

Let us note again that for the polar cost and $t=\infty$, the $p$-class consists of closed convex sets containing the origin, and the $p$-envelope is the polar convexification operation $K\mapsto K^{\circ\circ}=\overline{{\rm conv}\\{0,K\\}}$. Our first observation is that in Definitions 3.2 and 3.7 it is sufficient to consider $c$-class sets, for $t=\infty$, instead of all measurable sets.

###### Lemma 3.12.

Let $c:X\times Y\to(-\infty,\infty]$ be an upper semi-continuous symmetric cost function. Two probability measures $\mu\in{\mathcal{P}}(X)$ and $\nu\in{\mathcal{P}}(Y)$ are $c$-compatible if and only if for every set $K=K^{cc}\subset X$ in the $c$-class we have $\nu(K^{c})\leq 1-\mu(K).$ They are strongly $c$-compatible if and only if, in addition, when $\nu(K^{c})\neq 0,1$ we have $\nu(K^{c})<1-\mu(K).$

###### Proof.

If $\mu$ and $\nu$ are $c$-compatible then in particular $\nu(K^{c})\leq\mu(\\{x:\,\inf_{y\in K^{c}}c(x,y)<\infty\\})$, which can be rewritten as $\nu(K^{c})\leq\mu(X\setminus K^{cc})=\mu(X\setminus K)$. For the other direction let $A\subset X$ be a measurable set, and consider the set $K=A^{cc}$. Then $\\{y:\inf_{x\in A}c(x,y)=\infty\\}=\\{y:\forall{x\in A}\,\,c(x,y)=\infty\\}=A^{c}=K^{c}.$ The last equality holds due to Lemma 3.10 (iii). Thus, using the condition on $c$-class sets and Lemma 3.10 (i), we get $\nu(A^{c})=\nu(K^{c})\leq 1-\mu(K)=1-\mu(A^{cc})\leq 1-\mu(A),$ so that $\mu$ and $\nu$ are $c$-compatible. Similarly, two probability measures $\mu$ and $\nu$ are strongly $c$-compatible if and only if they are $c$-compatible and for all $c$-class sets $K\subset X$ such that $\nu(K^{c})\neq 0,1$ we have $\nu(K^{c})<1-\mu(K).$ This follows from the same proof, the only difference being that if $A\neq A^{cc}=K$, one gets a strict inequality $\nu(A^{c})\leq 1-\mu(A^{cc})<1-\mu(A)$, which follows by Lemma 3.10 (i). ∎

In the next lemma we show that the strong $c$-compatibility of two measures implies a vital condition on the distribution of the transport plan between them.

###### Lemma 3.13.

Let $\mu$ be a probability measure on $X$, $\nu$ a probability measure on $Y$, and $\pi\in\Pi(\mu,\nu)$ a finite cost plan, with respect to the symmetric cost $c:X\times Y\to(-\infty,\infty]$. Then $\mu$ and $\nu$ are strongly $c$-compatible if and only if for every $c$-class set $K$ such that $\nu(K^{c})\neq 0,1$, we have that $\pi((X\setminus K)\times(Y\setminus K^{c}))>0.$

###### Proof.

First we note that the existence of a finite cost plan $\pi\in\Pi(\mu,\nu)$ implies $c$-compatibility (see Lemma 3.3). Thus, under our assumptions, strong $c$-compatibility is equivalent, by Lemma 3.12, to the fact that for every $c$-class $K$ with $\nu(K^{c})\neq 0,1$ we have that $\nu(K^{c})<1-\mu(K)=\mu(X\setminus K)$. Since $\pi\in\Pi(\mu,\nu)$ this can be rewritten as, for $\nu(K^{c})\neq 0,1$, $\pi(X\times K^{c})<\pi((X\setminus K)\times Y),$ and if $\nu(K^{c})=1$ then $\mu(K)=0$. Note that as $\pi$ has finite cost, it is concentrated on $S=\\{(x,y):c(x,y)<\infty\\}$, and so for $(x,y)$ in the support of $\pi$, if $y\in K^{c}$ then we must have $x\not\in K$. In particular, from the point of view of the measure $\pi$, the set on the left hand side is contained in the set on the right hand side.
We can thus rewrite the first inequality as $0<\pi(((X\setminus K)\times Y)\setminus(X\times K^{c}))=\pi((X\setminus K)\times(Y\setminus K^{c})),$ completing the proof. ∎

## 4\. Transportation of measure

Let us recall our main theorem, to be proved in this section.

###### Theorem 1.1.

Let $X=Y$ be a Polish space, and $c:X\times Y\to(-\infty,\infty]$ be a continuous and symmetric cost function, essentially bounded from below with respect to $\mu\in{\mathcal{P}}(X)$ and $\nu\in{\mathcal{P}}(Y)$. Assume $(\mu,\nu)$ are strongly $c$-compatible, and $C(\mu,\nu)<\infty$. Then there exists a $c$-class function $\varphi$ and an optimal transport plan $\pi\in\Pi(\mu,\nu)$ concentrated on $\partial^{c}\varphi$.

In order to prove the theorem we will use a combination of Theorems 2.3, 2.4, and 2.6. We will show that once we have an optimal transport plan supported on a $c$-cyclically monotone set, this set must in fact be $c$-path-bounded. This will follow from an observation presented in [5], which states that in some special cases $c$-cyclic monotonicity implies $c$-path-boundedness. In order to formulate the condition let us introduce some notation. We consider a directed graph, associated with a cost function $c:X\times Y\to(-\infty,\infty]$ and a set $G\subset S=\\{(x,y):c(x,y)<\infty\\}\subset X\times Y$, in which the vertices are elements of $G$ and there is a directed edge from $(x,y)$ to $(z,w)$ if $c(z,y)<\infty$. Since $G\subset S$ we may say that for every point in $G$ there is an edge (loop) with this point as a start and end vertex. The directed graph induces a (transitive) relation on points in $G$, namely $(x,y)\prec(z,w)$ if there is a directed path from $(x,y)$ to $(z,w)$. We then define an equivalence relation $\sim$ on elements of $G$ where we say that $(x,y)\sim(z,w)$ if $(x,y)\prec(z,w)$ and $(z,w)\prec(x,y)$, i.e. there is a directed cycle passing through both points. To the best of our knowledge, this equivalence relation was first mentioned in [23, Chapter 5, p.75] and studied in [9, 8, 5]. The following proposition was proved (with a different formulation) in [8] and then in [5].

###### Proposition 4.1.

Let $c:X\times Y\to(-\infty,\infty]$ be some cost function and let $G\subset X\times Y$ be a $c$-cyclically monotone set. Assume that all points in $G$ belong to one equivalence class of the equivalence relation $\sim$ defined above. Then $G$ is $c$-path-bounded.

With this proposition in hand, our goal is to show that if $\pi$ is a finite cost plan between two strongly $c$-compatible measures, then we can find a set $G$, on which $\pi$ is concentrated, such that all of the points in $G$ are in one equivalence class of $\sim$.

###### Proposition 4.2.

Let $X,\,Y$ be two Polish spaces, $\mu\in{\mathcal{P}}(X),\nu\in{\mathcal{P}}(Y)$ and assume $(\mu,\nu)$ are strongly $c$-compatible. Let $\pi\in\Pi(\mu,\nu)$ be a finite cost transport plan from $\mu$ to $\nu$. Then there exists a set $G$ on which $\pi$ is concentrated such that all the points in $G$ are in one equivalence class of $\sim$.

###### Proof.

Let $G_{1}$ denote the support of $\pi$, and let $G_{0}$ denote the set $G_{1}\cap\\{(x,y)\in X\times Y:c(x,y)<\infty\\}$. Fix a point $(x,y)\in G_{0}$. We shall show that the set of points $(z,w)\prec(x,y)$ in $G_{0}$ is of $\pi$-measure one, as is the set of points $(z,w)$ such that $(x,y)\prec(z,w)$. The intersection of these two sets will also be of measure one, and we denote it by $G$.
We will then explain why this $G$ fulfills the requirements of the proposition. Consider $H\subset G_{0}$ consisting of all points $(a,b)\prec(x,y)$. Assume towards a contradiction that $\pi(H)<1$. Note that $\pi(H)>0$ since $(x,y)\in G_{0}$ and in the support of $\pi$, so that for any neighborhood $U$ of $(x,y)$ we have $\pi(U)>0$. Picking a small enough neighborhood $U$, we know that if $(z,w)\in U$ then $c(z,y)<\infty$ and so $(z,w)\in H$ (it may be that $\pi(\\{(x,y)\\})>0$ and that $U$ consists of this one point alone). Since $\pi(H)<1$ there is some $(z,w)\not\in H$ which is a density point of $\pi$, that is, for any neighborhood $V$ of $(z,w)$ one has $\pi(V)>0$. If $z\in(P_{X}H)^{cc}$ then $(P_{X}H)^{c}=(P_{X}H)^{ccc}\subset\\{z\\}^{c}$ by Lemma 3.10 (ii) and (iii). Therefore $Y\setminus\\{z\\}^{c}\subset Y\setminus(P_{X}H)^{c}=\\{v:\,\exists u\in P_{X}H,\,\,c(u,v)<\infty\\}$. Further, since $(z,w)\in G_{0}$ we have that $c(z,w)<\infty$ and hence $w\in Y\setminus\\{z\\}^{c}$. But this means that $w\in Y\setminus(P_{X}H)^{c}$ and there exists some $a\in P_{X}H$ (and $b$ such that $(a,b)\in H$) with $c(a,w)<\infty$. Therefore $(z,w)\prec(a,b)$, and by transitivity $(z,w)\prec(x,y)$, a contradiction. We may therefore assume that $(z,w)$ is such that $z\notin(P_{X}H)^{cc}=:K$. Since $K$ is a closed set in $X$, there is a neighborhood of $z$ which does not intersect $K$, and therefore we can find a neighborhood $V$ of $(z,w)$ which is of positive $\pi$ measure (as $(z,w)$ is a density point) and such that its projection onto $X$ does not intersect $K$. Note that this implies in particular that $0<\mu(K)=\pi(K\times Y)<1$, since $H\subset K\times Y$ and $V\cap(K\times Y)=\emptyset$. We may therefore use Lemma 3.13 to deduce that $\pi((X\setminus K)\times(Y\setminus K^{c}))>0.$ In particular there exists some point $(e,f)\in G_{0}$ such that $e\not\in K$ and $f\not\in K^{c}$. The fact that $e\not\in K$ means in particular that $(e,f)\not\in H$. The fact that $f\not\in K^{c}=(P_{X}H)^{ccc}=(P_{X}H)^{c}$ implies that $f\in\\{v:\,\exists u\in P_{X}H\,c(u,v)<\infty\\}$. Hence there is some point $a\in P_{X}H$ (and $b$ such that $(a,b)\in H$) such that $c(a,f)<\infty$, which means that $(e,f)\prec(a,b)\prec(x,y)$, thus contradicting the fact that $(e,f)\not\in H$. We conclude that the set $H$ satisfies $\pi(H)=1$. Similarly we consider $F\subset G_{0}$ consisting of all points $(a,b)$ such that $(x,y)\prec(a,b)$. Using the same argument as above we get that $\pi(F)=1$. Hence, we have found sets $F$ and $H$ of $\pi$-measure one. Let $G=F\cap H$; every point $(z,w)\in G$ satisfies that there is a directed path, going through points in $G_{0}$, between it and $(x,y)$. We now claim that these directed paths only go through points in $G$ itself. Indeed, consider a cycle (in $G_{0}$) which includes $(x,y)$ and $(z,w)\in G$. The existence of this cycle implies that every point on it belongs to both $H$ and $F$, by the definition of the relation $\sim$, so that the whole cycle consists of points in $G$. The proof is now complete. ∎

###### Proof of Theorem 1.1.

By assumption, $C(\mu,\nu)<\infty$, and we may use Theorem 2.4, the assumptions of which are satisfied, to find a $c$-optimal plan $\pi\in\Pi(\mu,\nu)$. By Theorem 2.6, the plan $\pi$ is concentrated on some $c$-cyclically monotone set $G_{1}$. Proposition 4.2 implies that $\pi$ is also concentrated on some set $G_{2}$ such that all points in $G_{2}$ are in one equivalence class of the relation $\sim$ defined above.
Let $G=G_{1}\cap G_{2}$; then $G$ is a $c$-cyclically monotone set such that all of its elements lie in one equivalence class, and therefore, by Proposition 4.1, the set $G$ is $c$-path-bounded. Finally, we use Theorem 2.3, which implies that a $c$-path-bounded set admits a potential, to find some $c$-class function $\varphi$ such that $G\subset\partial^{c}\varphi$. We have thus determined that there exists a $c$-optimal plan $\pi$ which is concentrated on $\partial^{c}\varphi$ for some $c$-class $\varphi$, as needed. ∎

## 5\. Transportation to a discrete measure

In this section we present a different approach to the problem of finding transport maps which lie on $c$-subgradients of functions. We consider the case where one measure is arbitrary (we will add some mild assumptions on it, connected with the cost, later on) and the second measure is discrete. As explained in Section 3.3, fixing the support of $\nu$ to be the set $\\{u_{i}\\}_{i=1}^{m}$, a necessary condition for the existence of a finite cost plan $\pi\in\Pi(\mu,\nu)$ is that the weight vector $\alpha\in\Delta_{m}$ associated with the probability measure $\nu=\sum_{i=1}^{m}\alpha_{i}\mathbbm{1}_{u_{i}}$ lies in the Hall polytope $P=P((u_{i})_{i=1}^{m},\mu)=\bigcap_{I\subset[m]}\\{\alpha\in\Delta_{m}:\sum_{i\in I}\alpha_{i}\leq\mu(A_{I})\\},$ where $A_{I}:=\\{x\in X:\ \min_{i\in I}c(x,u_{i})<\infty\\}.$ So, our main objective is to show that indeed, for a measure $\nu$ corresponding to a weight vector in the polytope, a finite cost transport plan exists, and further, it is supported on the $c$-subgradient of some $c$-class function. We are able to do this under very general assumptions on the measure $\mu$, provided $\alpha$ lies in the interior of the polytope (this is Theorem 1.3). Let us introduce the notion of $c$-regularity of a measure, which will be important for the construction given in this section. Roughly speaking, a measure is $c$-regular if it gives $0$-measure to sets where two different basic functions $c(\cdot,y_{1})+a_{1}$ and $c(\cdot,y_{0})+a_{0}$ coincide and equal some finite number.

###### Definition 5.1.

Let $X,\,Y$ be measure spaces, let $c:X\times Y\to(-\infty,\infty]$ be a measurable cost function, and let $\mu$ be a probability measure on $X$. If for any $y_{1}\neq y_{0}\in Y$ and $t\in\mathbb{R}$ it holds that $\mu(\\{z\in X:\,c(z,y_{1})-c(z,y_{0})=t\\})=0,$ then we say that $\mu$ is a _$c$-regular measure_.

For example, when the cost is such that $\\{z:c(z,y_{1})-c(z,y_{0})=t\\}$ is of lower dimension, and the measure is absolutely continuous, the $c$-regularity property is satisfied.

### 5.1. Building transport maps

The idea of the proof is to manually construct functions whose $c$-subgradient is a transport map of a $c$-regular measure $\mu$ to a certain discrete measure $\nu$. We will consider basic functions and use the fact that the $c$-class is closed under pointwise infimum. Formally, we have the following lemma.

###### Lemma 5.2.

Let $X,\,Y$ be measure spaces and let $c:X\times Y\to(-\infty,\infty]$ be a measurable cost function. Fix a set of vectors $(u_{i})_{i=1}^{m}\subset Y$ and let $\mu$ be a $c$-regular probability measure on $X$, which is supported on the set $\\{x\in X:\exists i\in[m]\ {\rm with}\ c(x,u_{i})<\infty\\}$. Given numbers $(t_{i})_{i=1}^{m}\subset\mathbb{R}$ let $\varphi(x)=\varphi_{(u_{i}),(t_{i})}(x)=\min_{1\leq i\leq m}\left(c(x,u_{i})+t_{i}\right)$ be a function in the $c$-class and denote $U_{i}=\left\\{x\in X:\ \arg\min_{1\leq j\leq m}\left(c(x,u_{j})+t_{j}\right)=i\right\\}$.
Then the mapping $T$, defined to be equal to $u_{i}$ on the set $U_{i}$, is well defined $\mu$-almost everywhere and satisfies that $T(x)\in\partial^{c}\varphi(x)$ for all $x$ in the support of $\mu$. Moreover, it transports $\mu$ to the measure $\nu=\sum_{i=1}^{m}\alpha_{i}\mathbbm{1}_{u_{i}}$ on $Y$, where $\alpha_{i}=\mu\left(U_{i}\right).$

###### Proof.

Let $\varphi(x)$ be the function defined in the statement and note that it induces a partition of $X$ into $m$ sets $\\{U_{i}\\}_{i=1}^{m}$, where $U_{i}=\\{x\in X:\ \arg\min_{1\leq j\leq m}\left(c(x,u_{j})+t_{j}\right)=i\\}.$ By the definition of the $c$-subgradient given in (7), $u_{i}\in\partial^{c}\varphi(x)$ for all $x\in U_{i}$. Let $T:X\to Y$ be the map given by $T(x)=u_{i}$ for all $x\in U_{i}$, so indeed $T(x)\in\partial^{c}\varphi(x)$. For $\mu$ which is $c$-regular, the intersections of the sets $U_{i}$ are of zero measure and thus $T$ is well defined $\mu$-almost everywhere. Clearly, the map $T$ transports the measure $\mu$ on $X$ to the measure $\sum_{i=1}^{m}\mu(U_{i})\mathbbm{1}_{u_{i}}$. ∎

###### Remarks 5.3.

(i) In general, the partition into sets $\\{U_{i}\\}$ as above is not disjoint, so without the additional assumption of $c$-regularity of $\mu$ the map $T$ is not well-defined. (ii) Since we may add a constant to all $(t_{i})_{i=1}^{m}$ without changing the $c$-subgradient, we will assume that $t_{i}\geq 0$.

Thus, given a finite set $(u_{i})_{i=1}^{m}\subset Y$, it will be convenient for us to consider the family of functions $\varphi(x)=\min_{1\leq i\leq m}\left(c(x,u_{i})-\ln(t_{i})\right),$ with $t=(t_{i})_{i=1}^{m}$ in the simplex $\Delta_{m}$. Lemma 5.2 guarantees that given a $c$-regular measure $\mu$ and points $(u_{i})_{i=1}^{m}\subset Y$, the function $\varphi_{(u_{i}),(t_{i})}$ induces a transport map $T:X\to Y$ mapping $\mu$ to $\sum\alpha_{i}\mathbbm{1}_{u_{i}}$. This simple idea will be very important in proving Theorem 1.3, and the bulk of the proof lies in analyzing which weights $\alpha_{i}$ can be attained. In the classical case of the quadratic cost it was proved by K. Ball that all weight vectors $\alpha=(\alpha_{i})_{i=1}^{m}\in\Delta_{m}$ can be attained [6], from which he then obtained the Brenier theorem for all absolutely continuous measures $\mu$ and compactly supported $\nu$ using a limiting argument. In contrast, in the case of non-traditional costs one cannot expect that all weight vectors in $\Delta_{m}$ will be attained, only those residing in the Hall polytope. Let us briefly describe the main steps for proving the existence of a transport map of some measure $\mu$ to a discrete measure $\nu$ with weight vector in the interior of the Hall polytope (i.e. Theorem 1.3). Fixing a measure $\mu$ and an $m$-tuple $(u_{i})_{i=1}^{m}$, the construction in Lemma 5.2 gives rise to a mapping $H$ from the $(m-1)$-dimensional simplex $\Delta_{m}$ onto the set of ‘weight vectors’ $\alpha=(\alpha_{i})_{i=1}^{m}$ of the measures $\nu$ to which $\mu$ can be transported. We will show that $H$ is a surjection from the interior of the simplex onto the interior of the relevant Hall polytope. To this end we define and analyze Hall polytopes, and in particular construct, under some assumptions, a continuous map $R$ from the boundary of the polytope to the boundary of the simplex, which respects certain constraints connected with the face structure of the polytope.
We use a variant of Brouwer’s fixed point theorem for the composition $R\circ p\circ H$, where $p$ is a radial projection from some point in the polytope, to obtain the surjectivity.

### 5.2. Structure of the Hall Polytope

We introduce the following notation: For $I\subset[m]$, $A\subset\mathbb{R}^{|I|}$ and $B\subset\mathbb{R}^{m-|I|}$, we denote by $A\times_{I}B$ points in $\mathbb{R}^{m}$ with $I$-coordinates in $A$ and $I^{c}$-coordinates in $B$. For a measure $\mu$ and a set $A\subset X$ we denote by $\mu|_{A}$ the measure that is equal to $\mu$ on $A$ and zero on $A^{c}$. Hall polytopes have faces only in specific pre-determined directions. (Their faces’ normal cones are spanned by $\\{0,1\\}$-vectors in $\mathbb{R}^{m}$, projected onto the span of the polytope, which is $(m-1)$-dimensional.) As we shall see in Proposition 5.4, each of these faces has a product structure, of which each component is a Hall polytope itself.

###### Proposition 5.4.

Let $P=P((u_{i})_{i=1}^{m},\mu)$ be the Hall polytope associated with some $m$-tuple $(u_{i})_{i=1}^{m}\subset Y$ and a probability measure $\mu\in{\mathcal{P}}(X)$ supported on $\\{x\in X:\exists i\in[m]\ c(x,u_{i})<\infty\\}$. Then for each $I\subset[m]$, the face of $P$ given by (10) $\displaystyle F_{I}=\\{\alpha\in P:\sum_{I}\alpha_{i}=\mu(A_{I})\\},$ admits a splitting $F_{I}=\mu(A_{I})P_{I}\times_{I}\mu(A_{I}^{c})\hat{P}_{I}$ where $P_{I}$ is the Hall polytope associated with the measure $\frac{1}{\mu(A_{I})}\mu|_{A_{I}}$ and the vectors $\\{u_{i}\\}_{i\in I}$, and $\hat{P}_{I}$ is the Hall polytope associated with the measure $\frac{1}{\mu(A_{I}^{c})}\mu|_{A_{I}^{c}}$ and the vectors $\\{u_{i}\\}_{i\in[m]\setminus I}$. In particular, in case $\mu(A_{I})=0$ we have $P_{I}=\\{0\\}_{I}$, and in case $\mu(A_{I}^{c})=0$, $\hat{P}_{I}=\\{0\\}_{[m]\setminus I}$.

###### Proof.

Let $\alpha\in F_{I}$; then by the definition of $F_{I}$ we have $\sum_{i\in I}\alpha_{i}=\mu(A_{I})$ and thus $\alpha|_{I}\in\mu(A_{I})\Delta_{|I|}$. Furthermore, for every $J\subset I$ it still holds that $\sum_{i\in J}\alpha_{i}\leq\mu(A_{J})$, and as $A_{J}\subset A_{I}$ we also get that $\sum_{i\in J}\alpha_{i}\leq\mu|_{A_{I}}(A_{J})$. Recall that $P_{I}$ is the Hall polytope associated with $(u_{i})_{i\in I}$ and $\mu(A_{I})^{-1}\mu|_{A_{I}}$, so re-normalizing the previous inequalities by $\mu(A_{I})$ we see that the vector $\alpha|_{I}\in\mu(A_{I})P_{I}$, as claimed. Similarly, in the $I^{c}$ coordinates, $\alpha\in F_{I}$ satisfies $\sum_{i\in[m]\setminus I}\alpha_{i}=\mu(A_{I}^{c})$. To show $\alpha|_{[m]\setminus I}\in\mu(A_{I}^{c})\hat{P}_{I}$ we need to check that for every $K\subset[m]\setminus I$ we have $\sum_{i\in K}\alpha_{i}\leq\mu(A_{K}\cap A_{I}^{c})$. To this end consider the new subset of $[m]$ given by $J=I\cup K$. By the assumptions, $\sum_{i\in J}\alpha_{i}\leq\mu(A_{J})=\mu\big{(}\bigcup_{i\in I}\\{x:\ c(x,u_{i})<\infty\\}\cup\bigcup_{i\in K}\\{x:\ c(x,u_{i})<\infty\\}\big{)}.$ Since the first of these unions is in fact all of $A_{I}$, we may rewrite the inequality as $\sum_{i\in J}\alpha_{i}\leq\mu(A_{I})+\mu(A_{I}^{c}\cap\bigcup_{i\in K}\\{x:\ c(x,u_{i})<\infty\\}).$ The sum on the left hand side is simply $\sum_{i\in I}\alpha_{i}+\sum_{i\in K}\alpha_{i}=\mu(A_{I})+\sum_{i\in K}\alpha_{i}$, since we have assumed $\alpha\in F_{I}$. Plugging into the inequality and canceling, we see $\sum_{i\in K}\alpha_{i}\leq\mu(A_{I}^{c}\cap\bigcup_{i\in K}\\{x:\ c(x,u_{i})<\infty\\}),$ as claimed.
We have thus shown, so far, that $F_{I}\subset\mu(A_{I})P_{I}\times_{I}\mu(A_{I}^{c})\hat{P}_{I}$. For the opposite direction, assume we are given some point $\alpha\in\mu(A_{I})P_{I}\times_{I}\mu(A_{I}^{c})\hat{P}_{I}$, and we want to show that it belongs to $F_{I}$. Clearly, using that if $K\subset J$ then $A_{K}\subset A_{J}$, we have for any $J\subset[m]$ that $\sum_{i\in J}\alpha_{i}=\sum_{i\in J\cap I}\alpha_{i}+\sum_{i\in J\cap([m]\setminus I)}\alpha_{i}\leq\mu|_{A_{I}}(A_{J\cap I})+\mu|_{A_{I}^{c}}(A_{J\cap([m]\setminus I)})\leq\mu|_{A_{I}}(A_{J})+\mu|_{A_{I}^{c}}(A_{J})=\mu(A_{J}).$ This completes the second part of the proof. ∎

We will discuss the facial structure of the polytope, and make use of the following simple observation.

###### Lemma 5.5.

Under the conditions and notations of Proposition 5.4, for any $I\subset[m]$, the part of the boundary of $F_{I}$ given by $\mu(A_{I})\partial P_{I}\times_{I}\mu(A_{I}^{c})\hat{P}_{I}$ is a subset of $\cup_{J\subsetneq I}F_{J}$.

###### Proof.

The boundary of $P_{I}$ consists of points whose $I^{th}$ coordinates add up to one and for which, for some $J\subsetneq I$, one of the inequalities defining the Hall polytope associated with $\frac{1}{\mu(A_{I})}\mu|_{A_{I}}$ and $(u_{i})_{i\in I}$ is an equality. In other words, if $\alpha\in\mu(A_{I})\partial P_{I}\times_{I}\mu(A_{I}^{c})\hat{P}_{I}$ there is some $J\subsetneq I$ such that $\sum_{i\in J}\alpha_{i}=\mu(A_{J})$, which means $\alpha\in F_{J}$, as claimed. ∎

### 5.3. Non-degenerate polytopes

In this subsection we continue analyzing properties of Hall polytopes, under an additional assumption on $\mu$ and $(u_{i})_{i=1}^{m}$ which will imply that all of the Hall polytopes’ faces $F_{I}$ (defined in (10)) are ‘full dimensional’ in the $I$ coordinates, i.e. that in the splitting described in Proposition 5.4, the polytope $P_{I}$ is $(|I|-1)$-dimensional.

###### Definition 5.6.

Let $X,Y$ be measure spaces and let $c:X\times Y\to(-\infty,\infty]$ be a measurable cost function. Given $(u_{i})_{i=1}^{m}\subset Y$, and a probability measure $\mu$ which is supported on $\\{x\in X:\exists i\in[m]\ c(x,u_{i})<\infty\\}$, we say that $\mu$ is _non-degenerate with respect to $(u_{i})_{i=1}^{m}$_ if for every $1\leq i<j\leq m$ it holds that $\mu\left(\\{x:c(x,u_{i})<\infty\\}\cap\\{x:c(x,u_{j})<\infty\\}\right)>0.$

Figure 2. Examples of 3-dimensional Hall polytopes: (a) non-degenerate, (b) non-degenerate, (c) degenerate.

###### Proposition 5.7.

Given a probability measure $\mu\in{\mathcal{P}}(X)$, which is non-degenerate with respect to $(u_{i})_{i=1}^{m}\subset Y$, the Hall polytope $P=P((u_{i})_{i=1}^{m},\mu)$ satisfies that its dimension (meaning the dimension of its affine hull) is $\dim(P)=m-1$.

###### Proof.

We shall prove this fact using induction on $m$. For $m=1$ this is clearly true since the polytope $P$ consists of the single point $\alpha=1$, that is, it has dimension $0$. Assume that the claim is true for $(m-1)$-tuples. Then, for $m$ and a given set of vectors $(u_{i})_{i=1}^{m}\subset Y$, we know by Proposition 5.4 that $P$ has faces $F_{I}$ where $I\subset[m]$, each of the form $F_{I}=\mu(A_{I})P_{I}\times_{I}\mu(A_{I}^{c})\hat{P}_{I}$. Let $I_{1}=[m-1]$ and $I_{2}=\\{m\\}$. The set $[m-1]$ still satisfies, along with $\mu|_{A_{I_{1}}}$, the conditions of the proposition, so by the inductive assumption $P_{1}=P_{I_{1}}$ is a polytope of full dimension, that is, of dimension $m-2$.
It remains to show that $F_{2}=F_{I_{2}}$ does not lie within the affine hull of $F_{1}$, and hence $P$ has dimension at least $m-1$ (and of course it cannot have a higher dimension, as it is a subset of $\Delta_{m}$). Note that the affine hull of $F_{1}$ is characterized by the equality $\sum_{i=1}^{m-1}\alpha_{i}=\mu(A_{[m-1]})$, which equivalently can be written as $\alpha_{m}=1-\mu(A_{[m-1]})$. The facet $F_{2}$ satisfies $\alpha_{m}=\mu(A_{\\{m\\}})$. Assuming towards a contradiction that $F_{2}$ lies in the affine hull of $F_{1}$, we would need to have $\mu(A_{[m-1]})+\mu(A_{\\{m\\}})=1$. Recall that $A_{\\{m\\}}=\\{x\in X:\ c(x,u_{m})<\infty\\},\ A_{[m-1]}=\\{x\in X:\ \exists i\in[m-1]\ \ c(x,u_{i})<\infty\\}$ and that by the non-degeneracy of $\mu$ it holds that $\mu(A_{\\{m\\}}\cap A_{[m-1]})>0$. Additionally, $\mu(A_{\\{m\\}}\cup A_{[m-1]})=1$, and so $\mu(A_{\\{m\\}}\cup A_{[m-1]})=\mu(A_{\\{m\\}})+\mu(A_{[m-1]})-\mu(A_{\\{m\\}}\cap A_{[m-1]})$ implies that $\mu(A_{\\{m\\}}\cap A_{[m-1]})=0$ (so in particular $\mu(A_{\\{m\\}}\cap A_{\\{1\\}})=0$), contradicting the assumption that $\mu$ is non-degenerate. ∎

###### Corollary 5.8.

Given a probability measure $\mu\in{\mathcal{P}}(X)$ which is non-degenerate with respect to $(u_{i})_{i=1}^{m}\subset Y$, the Hall polytope $P=P((u_{i})_{i=1}^{m},\mu)$ satisfies that each face $F_{I}$ admits a splitting $F_{I}=P_{I}\times\hat{P}_{I}$ such that $\dim(P_{I})=|I|-1$.

###### Proof.

The fact that $F_{I}$ has such a splitting was proven already in Proposition 5.4, with $P_{I}$ being the Hall polytope of the normalized restriction of the measure $\mu$ to $A_{I}$. This restriction satisfies, together with the tuple $(u_{i})_{i\in I}$, the conditions of Proposition 5.7: one may easily check that $\mu|_{A_{I}}$ is non-degenerate with respect to $(u_{i})_{i\in I}$. Therefore, $P_{I}$ is full dimensional, as claimed. ∎

Furthermore, in this case the associated polytope satisfies a “good” face-intersection structure, explained in the next two propositions.

###### Proposition 5.9.

Given a probability measure $\mu\in{\mathcal{P}}(X)$ supported on $\\{x\in X:\exists i\in[m]\ c(x,u_{i})<\infty\\}$ which is non-degenerate with respect to $(u_{i})_{i=1}^{m}\subset Y$, let $P=P((u_{i})_{i=1}^{m},\mu)$ be the associated Hall polytope. Given $I,J\subset[m]$, the intersection $F_{I}\cap F_{J}$ is a subset of $F_{I\cap J}$ (we let $F_{\emptyset}=P$, so that if $I\cap J=\emptyset$ the claim is trivial).

###### Proof.

Let $\alpha\in F_{I}\cap F_{J}$, so that $\sum_{i\in I}\alpha_{i}=\mu(A_{I})$ and $\sum_{i\in[m]\setminus I}\alpha_{i}=\mu(A_{I}^{c})$, as well as $\sum_{i\in J}\alpha_{i}=\mu(A_{J})$ and $\sum_{i\in[m]\setminus J}\alpha_{i}=\mu(A_{J}^{c})$. Consider the following chain: $\sum_{i\in I\cap J}\alpha_{i}+\sum_{i\in I\cup J}\alpha_{i}=\sum_{i\in I}\alpha_{i}+\sum_{i\in J}\alpha_{i}=\mu(A_{I})+\mu(A_{J})=\mu(A_{I}\cap A_{J})+\mu(A_{I}\cup A_{J})\geq\mu(A_{I\cap J})+\mu(A_{I\cup J}),$ where the final inequality follows from the inclusion $A_{I\cap J}\subset A_{I}\cap A_{J}$.
Pairing this with the fact that each of the extreme terms satisfies $\sum_{i\in I\cap J}\alpha_{i}\leq\mu(A_{I\cap J})\quad{\rm and}\quad\sum_{i\in I\cup J}\alpha_{i}\leq\mu(A_{I\cup J}),$ we conclude that both of these inequalities are in fact equalities, which implies that $\sum_{i\in I\cap J}\alpha_{i}=\mu(A_{I\cap J}),$ so that $\alpha$ belongs to the facet $F_{I\cap J}$. ∎

In fact, if $\mu$ is non-degenerate and $I\cap J=\emptyset$, we know much more.

###### Proposition 5.10.

Given a probability measure $\mu\in{\mathcal{P}}(X)$ supported on $\\{x\in X:\exists i\in[m]\ c(x,u_{i})<\infty\\}$ which is non-degenerate with respect to $(u_{i})_{i=1}^{m}\subset Y$, let $P=P((u_{i})_{i=1}^{m},\mu)$ be the associated Hall polytope. Then given $I,J\subset[m]$ which are disjoint, the faces $F_{I}$ and $F_{J}$ do not intersect.

###### Proof.

By non-degeneracy of $P=P((u_{i})_{i=1}^{m},\mu)$, we know that for any $I$, the face $F_{I}=\mu(A_{I})P_{I}\times_{I}\mu(A_{I}^{c})\hat{P}_{I}$ satisfies that $\dim(P_{I})=|I|-1$. Assume $|I|=k_{1},|J|=k_{2}$, and, towards a contradiction, that the intersection $F_{I}\cap F_{J}$ is non-empty. Denoting $\beta_{I}=\mu(A_{I})$ and $\beta_{J}=\mu(A_{J})$, every point $\alpha$ in the intersection must satisfy $\sum_{i\in I}\alpha_{i}=\beta_{I}$ and $\sum_{i\in J}\alpha_{i}=\beta_{J}$. Letting $K=I\cup J$, $\beta_{K}=\mu(A_{K})$, by the fact that $I$ and $J$ are disjoint, we see that $\sum_{i\in K}\alpha_{i}=\beta_{I}+\beta_{J}$, and since $\alpha$ is a point in $P$, $\beta_{I}+\beta_{J}\leq\beta_{K}$. However, using again that $I\cap J=\emptyset$, we know that all points $\alpha\in P$ also satisfy $\sum_{i\in K}\alpha_{i}\leq\beta_{I}+\beta_{J}$, and since by Corollary 5.8 $F_{K}$ is non-empty (it is full dimensional in its $K$ coordinates), there exists some $\alpha\in F_{K}$ such that the equality $\sum_{i\in K}\alpha_{i}=\beta_{K}$ is satisfied. This implies $\beta_{K}=\beta_{I}+\beta_{J}$. We conclude that $F_{I}\cap F_{J}=F_{K}$. Indeed, $F_{K}\subset F_{I}\cap F_{J}$, since each such $\alpha\in P$ satisfies $\sum_{i\in I}\alpha_{i}\leq\beta_{I}$ and $\sum_{i\in J}\alpha_{i}\leq\beta_{J}$, and for points in $F_{K}$ an equality must be attained in both inequalities. The reverse inclusion $F_{I}\cap F_{J}\subset F_{K}$ is clear. However, by Corollary 5.8 the dimension of $P_{I}$ is $k_{1}-1$ and the dimension of $P_{J}$ is $k_{2}-1$. Recalling that $K=I\cup J$, we see that the dimension of $P_{K}$ is at most $k_{1}-1+k_{2}-1<k_{1}+k_{2}-1=|K|-1$, which contradicts the non-degeneracy assumption on $P$, and implies that the intersection must be empty. ∎

### 5.4. Mapping the Hall polytope to the simplex

In this subsection we make one final preparation, and show that for any Hall polytope $P$, associated with a non-degenerate measure and some $m$-tuple, there exists a special mapping $R$ from $\partial P$ to $\partial\Delta_{m}$ such that $F_{I}$ is mapped to $\partial_{I}\Delta_{m}$, and on $F_{I}$, the map only depends on the $I$ coordinates of a point. Let us explain the notation.
The relative boundary of the simplex (its boundary in the affine space $\\{\alpha\in\mathbb{R}^{m}:\ \sum_{i=1}^{m}\alpha_{i}=1\\}$) will be denoted by $\partial\Delta_{m}$, and the $I^{\rm th}$ component of this boundary is the lower dimensional simplex defined by $\partial_{I}\Delta_{m}=\\{\alpha\in\Delta_{m}:\ \sum_{i\in I}\alpha_{i}=1\\}.$ Additionally, for $I\subset[m]$ we say that ‘a point $x\in\mathbb{R}^{m}$ has $I^{\rm th}$ coordinates $y\in\mathbb{R}^{|I|}$’ if the restriction of $x$ to its coordinates indexed by $I$ is equal to $y$. ###### Proposition 5.11. Given a probability measure $\mu\in{\mathcal{P}}(X)$ supported on $\\{x\in X:\exists i\in[m]\ c(x,u_{i})<\infty\\}$ which is non-degenerate with respect to $(u_{i})_{i=1}^{m}\subset Y$, let $P=P((u_{i})_{i=1}^{m},\mu)$ be the associated Hall polytope. Then there exists a continuous mapping $R:\partial P\to\partial\Delta_{m}$ such that $\partial_{I}P:=F_{I}=\mu(A_{I})P_{I}\times_{I}\mu(A_{I}^{c})\hat{P}_{I}$ is mapped to $\partial_{I}\Delta_{m}$ with (11) $R(y,z)=R(y,z^{\prime})$ for $y\in\mu(A_{I})P_{I},z,z^{\prime}\in\mu(A_{I}^{c})\hat{P}_{I}$, that is $R(x)=R(x^{\prime})$ if $x|_{I}=x^{\prime}|_{I}$. ###### Proof. The construction of $R$ is recursive. We define the map first only on faces $F_{I}$ with $|I|=1$. We then assume it has been defined on faces $F_{I}$ with $|I|<k$ and define it on $F_{I}$ with $|I|=k$. At each step we make sure the map we construct is well defined and continuous on its domain. We denote the center of mass of the face $\partial_{I}\Delta_{m}$ by $q_{I}$ and the center of mass of the polytope $\mu(A_{I})P_{I}$ (the $I$-th component of $F_{I}$) by $p_{I}$. We will ensure, within the proof, that all points in $F_{I}$ with $I^{\rm th}$ coordinates $p_{I}$ are mapped to $q_{I}$, and that in general the map on a face $F_{I}$ depends only on the $I^{\rm th}$ coordinates of the point. The basis for the construction is thus the faces $F_{I}$ of $P$ with $|I|=1$. These are mapped to the vertices of the simplex $\Delta_{m}$, namely $F_{\\{i\\}}\mapsto\partial_{\\{i\\}}\Delta_{m}$ (where $\partial_{\\{i\\}}\Delta_{m}=\\{e_{i}\\}$). As these faces are pairwise disjoint by Proposition 5.10, and the map $R$ is constant on each face, we conclude that it is continuous. For the induction step, assume we have defined $R$ on all faces $F_{J}$ with $|J|<k$. Let $F_{I}$ be a face of $P$ with $|I|=k$. Since $F_{I}=\mu(A_{I})P_{I}\times_{I}\mu(A_{I}^{c})\hat{P}_{I}$ by Proposition 5.4, we have that $\partial F_{I}=\left(\mu(A_{I}){\rm relint}(P_{I})\times\mu(A_{I}^{c})\partial\hat{P}_{I}\right)\cup\left(\mu(A_{I})\partial P_{I}\times\mu(A_{I}^{c})\hat{P}_{I}\right).$ Since $R$ is already defined, by assumption, on all faces $F_{J}$ for $J\subsetneqq I$, and since $\mu(A_{I})\partial P_{I}\times\mu(A_{I}^{c})\hat{P}_{I}\subset\bigcup_{J\subsetneqq I}F_{J}$ by Lemma 5.5, the map $R$ is already defined on the second of these two components of the boundary. Furthermore, again by assumption, it is defined in such a way that the image of $F_{J}$ is the simplex $\partial_{J}\Delta_{m}$, and that on $F_{J}$ the map $R$ only depends on the $J^{\rm th}$ coordinates of the point. Note that $\bigcup_{J\subsetneqq I}\partial_{J}\Delta_{m}$ is precisely the boundary of $\partial_{I}\Delta_{m}$. So we are, in essence, given a continuous mapping $R$ from the boundary of $P_{I}$ to the boundary of $\partial_{I}\Delta_{m}$.
We extend it by first imposing $R(p_{I},z)=q_{I}$ for the specified points $p_{I}$ and $q_{I}$ (note that the $I^{\rm th}$ coordinates of such a point $(p_{I},z)\in F_{I}$ lie in $\mu(A_{I}){\rm relint}(P_{I})$, as $P_{I}$ is full dimensional, i.e. of dimension $|I|-1$), and then extending $R$ radially for points with $I^{\rm th}$ coordinates in $\mu(A_{I}){\rm relint}(P_{I})$. The resulting map $R$ is now defined on all of $F_{I}$. We do this for all index sets $I$ of size $k$. The resulting map is well defined, since by Propositions 5.9 and 5.10 the intersections of the faces $\\{F_{I}\\}_{|I|=k}$ are included in faces $F_{J}$ with $|J|<k$. By construction $R$ is a continuous mapping that sends $F_{I}$ to $\partial_{I}\Delta_{m}$ and, on $F_{I}$, depends only on the $I^{\rm th}$ coordinates. ∎ As we described in Subsection 5.1, the main idea of the proof of Theorem 1.3 is to show surjectivity of a map taking a potential function $\varphi$ (indexed by some variables $(t_{i})\in\Delta_{m}$) to the weight vector $\alpha$ of the measure $\nu$ to which the $c$-subgradient $\partial^{c}\varphi$ maps $\mu$. We present this formally in the next subsection, where we define and analyse this map. ### 5.5. Mapping the simplex to the Hall polytope Having fixed some $m$-tuple $(u_{i})_{i=1}^{m}\subset Y$ and a $c$-regular probability measure $\mu\in{\mathcal{P}}(X)$ supported on $\\{x\in X:\exists i\in[m]\ c(x,u_{i})<\infty\\}$, we define a map on the interior of $\Delta_{m}$, and then extend it (using converging subsequences) to a set-valued map on the boundary. More precisely, for $t=(t_{i})_{i=1}^{m}\in{\rm int}(\Delta_{m})$, define (12) $H_{(u_{i})_{i=1}^{m}}^{\mu}(t)=\alpha$ with $\alpha\in\Delta_{m}$ given by $\alpha_{i}=\mu\left(\left\\{x\in X:\ \arg\min_{1\leq j\leq m}c(x,u_{j})-\ln(t_{j})=i\right\\}\right).$ For $t\in\partial\Delta_{m}$, we let $H_{(u_{i})_{i=1}^{m}}^{\mu}(t)$ be the closure of the function in the usual sense, namely the set of all limit points $\lim H_{(u_{i})_{i=1}^{m}}^{\mu}(t^{(k)})$ as $t^{(k)}\to t$ with $t^{(k)}\in{\rm int}(\Delta_{m})$. When $\mu$ and $(u_{i})_{i=1}^{m}$ are fixed in advance, we denote $H=H_{(u_{i})_{i=1}^{m}}^{\mu}$. By Lemma 5.2 there is a transport map from $\mu$ to $\nu=\sum_{i}\alpha_{i}\mathbbm{1}_{u_{i}}$ when $\alpha=H(t)$ for $t\in{\rm int}(\Delta_{m})$, and moreover the transport map’s graph is included in the $c$-subgradient of the function $\min_{1\leq j\leq m}c(x,u_{j})-\ln(t_{j})$. In particular, the image of $H$ is inside $P=P((u_{i})_{i=1}^{m},\mu)$, the associated Hall polytope. Our first claim regards the continuity of $H$. ###### Proposition 5.12. Let $X$ and $Y$ be measure spaces, $c:X\times Y\to(-\infty,\infty]$ measurable, let $(u_{i})_{i=1}^{m}\subset Y$, and let $\mu\in{\mathcal{P}}(X)$ be $c$-regular and supported on $\\{x\in X:\exists i\in[m]\ c(x,u_{i})<\infty\\}$. Then, the function $H=H_{(u_{i})_{i=1}^{m}}^{\mu}:\Delta_{m}\to P((u_{i})_{i=1}^{m},\mu)$ is well defined and continuous on ${\rm int}(\Delta_{m})$. ###### Proof. First note that the function $H$ is well defined as $\mu$ is $c$-regular, and the subsets $U_{i}:=\left\\{x\in X:\ \arg\min_{1\leq j\leq m}c(x,u_{j})-\ln(t_{j})=i\right\\}$ form a measurable partition of $X$ (the pairwise intersections are of measure $0$, as is the set where the minimum is $+\infty$), as in Lemma 5.2. To show that $H$ is continuous on ${\rm int}(\Delta_{m})$, let $t\in{\rm int}(\Delta_{m})$ and $\varepsilon>0$ be fixed.
We will show that there exists $\delta>0$ such that for all $t^{\prime}\in\Delta_{m}$ with $\|t^{\prime}-t\|_{2}<\delta$, we have $\|H(t^{\prime})-H(t)\|_{2}<\varepsilon$. To see this, note that the $i^{th}$ coordinate of the difference is given by $\mu(U_{i})-\mu(V_{i})$, where $V_{i}:=\left\\{x\in X:\ \arg\min_{1\leq j\leq m}c(x,u_{j})-\ln(t^{\prime}_{j})=i\right\\}.$ Clearly, this difference is bounded (in absolute value) by $\mu(U_{i}\triangle V_{i})$, where $\triangle$ denotes the symmetric difference of the two sets. To estimate the measure of the symmetric difference when $t$ and $t^{\prime}$ are close, we use the following sets, which converge to measure $0$ sets as $k\to\infty$. Define for $i\in[m],k\in{\mathbb{N}}$ $U_{i}^{k}=\left\\{x\in U_{i}:\exists j\neq i,\ c(x,u_{j})-\ln(t_{j})-c(x,u_{i})+\ln(t_{i})\leq\frac{1}{k}\right\\}.$ Note that $U_{i}^{k+1}\subset U_{i}^{k}$, and as $\mu$ is finite, $\mu(U_{i}^{k})\searrow\mu(\lim_{k}U_{i}^{k})$. Moreover, $\mu(\lim_{k}U_{i}^{k})=0$, since the limit set is contained in $\cup_{j\neq i}\\{x\in X:\ c(x,u_{j})-\ln(t_{j})-c(x,u_{i})+\ln(t_{i})=0\\}$, which by $c$-regularity of $\mu$ has zero measure. In particular, for every $i\in[m]$ there exists some $k_{i}$ such that for all $k\geq k_{i}$, we have $\mu(U_{i}^{k})<\varepsilon/m$. Denote $k_{0}=\max_{i}k_{i}$, and note that for any $k\geq k_{0}$ we have $\mu(\cup_{j=1}^{m}U_{j}^{k})<\varepsilon$ (and in particular for $k=k_{0}$). We next claim that there exists $\delta$ such that if $t^{\prime}$ satisfies $|t^{\prime}_{i}-t_{i}|<\delta$ for all $i$, then $U_{i}\triangle V_{i}\subset\cup_{j=1}^{m}U_{j}^{k_{0}}$, which completes the proof. Indeed, we will choose $\delta$ such that if $|t-t^{\prime}|<\delta$ then $t_{l}e^{-1/(2k_{0})}\leq t_{l}^{\prime}\leq t_{l}e^{1/(2k_{0})}$ for every $l$. First consider the case $x\in U_{i}\setminus V_{i}$, and note that then there exists $1\leq l\leq m$ such that $x\in V_{l}$ (since the sets $(V_{l})_{l=1}^{m}$ are a partition of $X$, and $l\neq i$ as $x\notin V_{i}$), and hence for all $1\leq j\leq m$ we have $c(x,u_{l})-\ln(t^{\prime}_{l})-c(x,u_{j})+\ln(t^{\prime}_{j})\leq 0.$ By taking $j=i$ and by the choice of $\delta$, $c(x,u_{l})-\ln(t_{l})-c(x,u_{i})+\ln(t_{i})\leq\frac{1}{k_{0}},$ which yields that $x\in U_{i}^{k_{0}}$. Similarly, in the case where $x\in V_{i}\setminus U_{i}$, there exists $1\leq l\leq m$, $l\neq i$, such that $x\in U_{l}$, and since $x\in V_{i}$ we have that for all $1\leq j\leq m$ $c(x,u_{i})-\ln(t_{i}^{\prime})-c(x,u_{j})+\ln(t_{j}^{\prime})\leq 0.$ By taking $j=l$ and using the assumption on $\delta$, this yields $c(x,u_{i})-\ln(t_{i})-c(x,u_{l})+\ln(t_{l})\leq\frac{1}{k_{0}},$ which implies $x\in U_{l}^{k_{0}}$, and in particular, in both cases, $x\in\cup_{j=1}^{m}U_{j}^{k_{0}}$. Since the parameters were chosen so that the measure of this set is at most ${\varepsilon}$, we conclude that $\mu(U_{i}\triangle V_{i})<{\varepsilon}$, so long as $|t^{\prime}-t|<\delta$, which completes the proof. ∎ A main feature of the map $H_{(u_{i})_{i=1}^{m}}^{\mu}$ is that it respects the product structure on the faces of $\Delta_{m}$. More precisely, when applied to a point on a face $\partial_{I}\Delta_{m}$, the map is usually set-valued. The set to which such a point is mapped, however, has a specified $I^{\rm th}$-coordinate (given by another map of the same form, associated with a different measure), and the $I^{c}$-coordinates of points in the image span a full Hall polytope of another associated measure: exactly the one given in the face splitting discussed in Proposition 5.4.
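As a toy illustration of the map $H$ of (12) (added for orientation; it is not needed for the arguments below), take $X=\mathbb{R}$, let $\mu$ be the uniform measure on $[0,1]$, and let $m=2$, $u_{1}=0$, $u_{2}=1$ with the everywhere-finite cost $c(x,u)=(x-u)^{2}/2$; one checks easily that $\mu$ is $c$-regular, and every $A_{I}$ has full measure, so the Hall polytope is all of $\Delta_{2}$. A direct computation shows that $c(x,u_{1})-\ln(t_{1})<c(x,u_{2})-\ln(t_{2})$ exactly when $x<\tfrac{1}{2}+\ln(t_{1}/t_{2})$, and therefore $H(t_{1},t_{2})=(\lambda,1-\lambda)\quad{\rm with}\quad\lambda=\min\left(\max\left(\tfrac{1}{2}+\ln(t_{1}/t_{2}),0\right),1\right).$ As $t$ ranges over ${\rm int}(\Delta_{2})$, the quantity $\ln(t_{1}/t_{2})$ ranges over all of $\mathbb{R}$, so $H$ covers ${\rm int}(\Delta_{2})$, and as $t_{2}\to 0$ we get $H(t)\to e_{1}\in\partial_{\\{1\\}}\Delta_{2}$.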
The face behaviour described above is formalized in the next proposition. ###### Proposition 5.13. Under the assumptions of Proposition 5.12, consider some subset $I\subset[m]$ and let $t=t_{I}\times_{I}0_{I^{c}}$ be a vector with positive $I^{\rm th}$-coordinates. Let $\mu_{I}=\frac{1}{\mu(A_{I})}\mu\big{|}_{A_{I}},\,\mu_{I}^{c}=\frac{1}{\mu(A_{I}^{c})}\mu\big{|}_{A_{I}^{c}}.$ Then, $H_{(u_{i})_{i=1}^{m}}^{\mu}(t_{1},\dots,t_{m})=\mu(A_{I})H_{(u_{i})_{i\in I}}^{\mu_{I}}\left(t_{i}\right)_{i\in I}\times_{I}\mu(A_{I}^{c})H_{(u_{j})_{j\in I^{c}}}^{\mu_{I}^{c}}(\Delta_{m-|I|}).$ In particular, $H_{(u_{i})_{i=1}^{m}}^{\mu}$ maps the face $\partial_{I}\Delta_{m}$ to the face $F_{I}$ of the Hall polytope $P((u_{i})_{i=1}^{m},\mu)$. ###### Proof of Proposition 5.13. We show a two-way inclusion. For the direction $\supseteq$ take a point $(\alpha_{1},\dots,\alpha_{m})$ on the right hand side, which is of the form $\mu(A_{I})H_{(u_{i})_{i\in I}}^{\mu_{I}}(t_{i})_{i\in I}\times_{I}\mu(A_{I}^{c})H_{(u_{j})_{j\notin I}}^{\mu_{I}^{c}}(s_{j})_{j\notin I}$ where $\sum_{i\in I}t_{i}=1$ and $\sum_{j\notin I}s_{j}=1$. For $\delta\in(0,1)$ define ${t}^{\delta}\in\Delta_{m}$ in the following way: ${t}^{\delta}_{i}=\begin{cases}(1-\delta)t_{i}&i\in I\\\ \delta s_{i}&i\notin I\end{cases}$ Clearly, $t^{\delta}\to t$ as $\delta\to 0$, and thus, by the definition of $H$ on the boundary of the simplex, it suffices to show that $(\alpha_{1},\dots,\alpha_{m})\in\lim_{\delta\to 0}H_{(u_{i})_{i=1}^{m}}^{\mu}({t}^{\delta}_{i})_{i=1}^{m}.$ We will show that for every $\varepsilon>0$ there exists some $\delta_{0}$ such that for every $\delta<\delta_{0}$ we have $||(\alpha_{1},\dots,\alpha_{m})-H_{(u_{i})_{i=1}^{m}}^{\mu}({t}^{\delta})||\leq\varepsilon.$ Denote $(\beta_{1},\dots,\beta_{m})=H_{(u_{i})_{i=1}^{m}}^{\mu}({t}^{\delta})$. Let us reinterpret $\beta_{i}$: $\beta_{i}=\mu(\\{x\in X:\arg\min_{1\leq k\leq m}c(x,u_{k})-\ln({t}^{\delta}_{k})=i\\})=\mu(\\{x\in A_{I}^{c}:\arg\min_{1\leq k\leq m}c(x,u_{k})-\ln({t}^{\delta}_{k})=i\\})+\mu(\\{x\in A_{I}:\ \arg\min_{1\leq k\leq m}c(x,u_{k})-\ln({t}^{\delta}_{k})=i\\}).$ On $A_{I}^{c}$ the minimum is attained for $k\notin I$, hence $\beta_{i}=\mu(A_{I}^{c})\mu_{I}^{c}(\\{x\in X:\ \arg\min_{k\notin I}c(x,u_{k})-\ln(\delta s_{k})=i\\})+\mu(\\{x\in A_{I}:\ \arg\min_{1\leq k\leq m}c(x,u_{k})-\ln({t}^{\delta}_{k})=i\\})=\mu(A_{I}^{c})\mu_{I}^{c}(\\{x\in X:\ \arg\min_{k\notin I}c(x,u_{k})-\ln(s_{k})=i\\})+\mu(\\{x\in A_{I}:\ \arg\min_{1\leq k\leq m}c(x,u_{k})-\ln({t}^{\delta}_{k})=i\\}),$ where in the last step we used that subtracting the common constant $\ln(\delta)$ does not change the argmin. Observe that the first summand is by definition equal to (13) $\mu(A_{I}^{c})\mu_{I}^{c}(\\{x\in X:\ \arg\min_{k\notin I}c(x,u_{k})-\ln(s_{k})=i\\})=\begin{cases}0&i\in I\\\ \alpha_{i}&i\notin I\end{cases}.$ We first deal with the case $i\notin I$, in which (13) gives that $\beta_{i}=\alpha_{i}+\mu(\\{x\in A_{I}:\ \arg\min_{1\leq k\leq m}c(x,u_{k})-\ln({t}^{\delta}_{k})=i\\}).$ For $x\in A_{I}$, $\arg\min_{1\leq k\leq m}c(x,u_{k})-\ln({t}^{\delta}_{k})=i$ means, in particular, that $c(x,u_{j})-\ln((1-\delta)t_{j})>c(x,u_{i})-\ln(\delta s_{i})$ for all $j\in I$. Since $s_{i}$ and $t_{j}$ are fixed, we can clearly find $\delta_{0}>0$ such that for any $\delta<\delta_{0}$ the measure of such $x$’s is as small as we wish. Thus, we choose $\delta_{0}$ (depending on $s$ and $t$) such that $\alpha_{i}\leq\beta_{i}\leq\alpha_{i}+\varepsilon/m$.
For the case $i\in I$, $\beta_{i}=\mu(\\{x\in A_{I}:\ \arg\min_{1\leq k\leq m}c(x,u_{k})-\ln({t}^{\delta}_{k})=i\\})\leq\mu(\\{x\in A_{I}:\ \arg\min_{k\in I}c(x,u_{k})-\ln({t}^{\delta}_{k})=i\\})=\mu(\\{x\in A_{I}:\ \arg\min_{k\in I}c(x,u_{k})-\ln(1-\delta)-\ln(t_{k})=i\\})=\mu(A_{I})\mu_{I}(\\{x\in A_{I}:\ \arg\min_{k\in I}c(x,u_{k})-\ln(t_{k})=i\\})=\alpha_{i}.$ Thus we have $\beta_{i}\leq\alpha_{i}$. Since $\sum_{i=1}^{m}\beta_{i}=1=\sum_{i=1}^{m}\alpha_{i}$, and $\alpha_{i}\leq\beta_{i}\leq\alpha_{i}+\varepsilon/m$ when $i\notin I$, we see that $\sum_{i\in I}\alpha_{i}-\varepsilon\leq 1-\sum_{j\notin I}(\alpha_{j}+\varepsilon/m)\leq\sum_{i\in I}\beta_{i}\leq\sum_{i\in I}\alpha_{i}$ and we conclude $\beta_{i}>\alpha_{i}-\varepsilon$ for $i\in I$. So far we have shown that for any $1\leq i\leq m$ we have $\alpha_{i}-\varepsilon\leq\beta_{i}\leq\alpha_{i}+\varepsilon,$ which completes the proof of the first inclusion. We proceed to show the second inclusion $\subseteq$. Let $\alpha\in H_{(u_{i})}^{\mu}(t)$ for $t=t_{I}\times_{I}\\{0_{I^{c}}\\}$. By the definition of $H$ on the boundary of the simplex, there exists a sequence $(t_{1}^{(k)},\dots,t_{m}^{(k)})=t^{(k)}\to t=(t_{1},\dots,t_{m})$ with $t^{(k)}\in{\rm int}(\Delta_{m})$, and $H^{\mu}_{(u_{i})_{i=1}^{m}}(t^{(k)})\to\alpha.$ In particular $\sum_{i\in I}t_{i}^{(k)}\to 1$ and $\sum_{j\notin I}t_{j}^{(k)}\to 0$. Note that for $x\in A_{I}$, as $k\to\infty$, the minimum in the definition will be attained (from some $k_{0}$ onwards) at an index $i\in I$. Therefore $\sum_{j\in I}\alpha_{j}\leftarrow\sum_{j\in I}(H_{(u_{i})_{i=1}^{m}}^{\mu}(t_{i}^{(k)}))_{j}=\mu(x\in X:\arg\min_{1\leq i\leq m}c(x,u_{i})-\ln(t_{i}^{(k)})\in I)\rightarrow\mu(A_{I})$ (limits with respect to $k\to\infty$), which implies $\sum_{i\in I}\alpha_{i}=\mu(A_{I})$ and thus $\sum_{j\notin I}\alpha_{j}=\mu(A_{I}^{c})$. Set ${t^{\prime}}_{i}^{(k)}=\frac{t_{i}^{(k)}}{\sum_{j\in I}t_{j}^{(k)}}$ for $i\in I$. It is well defined (for large enough $k$, as $\sum_{j\in I}t_{j}=1$), and its limit is clearly ${t^{\prime}}_{i}=\frac{t_{i}}{\sum_{j\in I}t_{j}}=t_{i}$. Hence, $H^{\mu_{I}}_{(u_{i})_{i\in I}}(t^{\prime}_{i})=(\mu_{I}(\\{x\in X:\ \arg\min_{k\in I}c(x,u_{k})-\ln(t^{\prime}_{k})=i\\}))_{i\in I}=(\frac{1}{\mu(A_{I})}\mu(\\{x\in A_{I}:\ \arg\min_{k\in I}c(x,u_{k})-\ln(t^{\prime}_{k})=i\\}))_{i\in I}=(\frac{1}{\mu(A_{I})}\mu(\\{x\in X:\ \arg\min_{k\in I}c(x,u_{k})-\ln(t_{k})=i\\}))_{i\in I}=\frac{1}{\mu(A_{I})}(\alpha_{i})_{i\in I}.$ In the second to last step we used again the fact that the minimum can be attained at $i\in I$ only if $x\in A_{I}$. Thus, $(\alpha_{i})_{i\in I}\in\mu(A_{I})H^{\mu_{I}}_{(u_{i})_{i\in I}}\left(\frac{t_{i}}{\sum_{i\in I}t_{i}}\right)=\mu(A_{I})H^{\mu_{I}}_{(u_{i})_{i\in I}}\left((t_{i})_{i\in I}\right).$ Setting ${t^{\prime\prime}}_{i}^{(k)}=\frac{t_{i}^{(k)}}{\sum_{j\notin I}{t}_{j}^{(k)}}$ for $i\notin I$, the sequence $({t^{\prime\prime}}_{i}^{(k)})_{k=1}^{\infty}$ has a converging subsequence in $\Delta_{m-|I|}$; denote this subsequence by $({t^{\prime\prime}}_{i}^{(k_{l})})_{l=1}^{\infty}$ and its limit by $({t^{\prime\prime}}_{i})_{i\notin I}$. Once again, by the same argument as above, the image of this point under the map $H$ corresponding to $\mu_{I}^{c}$ satisfies $\frac{1}{\mu(A_{I}^{c})}(\alpha_{j})_{j\notin I}\in H_{(u_{j})_{j\notin I}}^{\mu_{I}^{c}}(t^{\prime\prime}),$ so that $(\alpha_{j})_{j\notin I}\in\mu(A_{I}^{c})H_{(u_{j})_{j\notin I}}^{\mu_{I}^{c}}(\Delta_{m-|I|})$, which completes the proof. ∎
### 5.6. Transporting a non-degenerate measure to a discrete measure We proceed to the proof of the following theorem, which is a version of Theorem 1.3, with an extra non-degeneracy assumption on $\mu$ (recall Definition 5.6). ###### Theorem 5.14. Let $X$ and $Y$ be measure spaces, $c:X\times Y\to(-\infty,\infty]$ measurable, fix $(u_{i})_{i=1}^{m}\subset Y$ and let $\mu\in{\mathcal{P}}(X)$ be $c$-regular and supported on $\\{x\in X:\exists i\in[m]\ c(x,u_{i})<\infty\\}$. Assume, in addition, that $\mu$ is non-degenerate with respect to $(u_{i})_{i=1}^{m}$. Then, the mapping $H_{(u_{i})_{i=1}^{m}}^{\mu}:{\rm int}(\Delta_{m})\to P((u_{i})_{i=1}^{m},\mu)$ covers the set ${\rm int}(P((u_{i})_{i=1}^{m},\mu))$, that is, for any $\alpha\in{\rm int}(P((u_{i})_{i=1}^{m},\mu))$ there exists some $t\in{\rm int}(\Delta_{m})$ such that $H_{(u_{i})_{i=1}^{m}}^{\mu}(t)=\alpha$. ###### Proof. Denote $H=H_{(u_{i})_{i=1}^{m}}^{\mu}$ and $P=P((u_{i})_{i=1}^{m},\mu)$. By the non-degeneracy assumption, $P$ is full dimensional, and in particular has non-empty interior. If the image of $H$ did not cover the interior of $P$, there would be some $\alpha\in{\rm int}(P)$ such that $H(t)\neq\alpha$ for all $t\in{\rm int}(\Delta_{m})$. We use $\alpha$ to define the radial projection $p$ of $P\setminus\\{\alpha\\}$ to its boundary. It follows that $p\circ H$ is well defined and continuous. We then use the function $R$ given in Proposition 5.11 to map the boundary of $P$ to the boundary of the simplex. Since $H(\partial_{I}\Delta_{m})\subset F_{I}$, we see that $R\circ p\circ H$ is a mapping from the simplex to its boundary which maps the $I^{\rm th}$ face to itself. Note that this composition is a well defined function, i.e. a point-valued map: It is clearly point-valued on ${\rm int}(\Delta_{m})$. Let $t\in\partial\Delta_{m}$, and take the minimal $I$ (with respect to inclusion) such that $t\in\partial_{I}\Delta_{m}$. Then the coordinates $t_{I}$ are all non-zero, and by Proposition 5.13 points in the set $H(t)$ differ only in their $I^{c}$ coordinates. Again by Proposition 5.13, $H(t)\subset F_{I}\subset\partial P$, so $p(H(t))=H(t)$. Further, since the map $R$ depends only on the $I$ coordinates of $H(t)$ (as $H(t)\subset F_{I}$), we conclude that the set $H(t)$ is mapped to a single point, and thus $R\circ p\circ H$ is point-valued. Next we claim that the composition $R\circ p\circ H$ is a continuous function on $\Delta_{m}$. For points in the interior of $\Delta_{m}$ this follows from the fact that all three maps are continuous (see Propositions 5.12 and 5.11). We proceed to explain why the composition is continuous on the boundary. Let $t=t_{I}\times_{I}0_{I^{c}}$ be some boundary point, with $t_{i}>0$ for $i\in I$ (so, $t\in{\rm relint}(\partial_{I}\Delta_{m})$). Consider a sequence $t^{(k)}\to t$ with $(R\circ p\circ H)(t^{(k)})$ converging to some vector $s$ on the boundary of the simplex. We need to show that $s=(R\circ p\circ H)(t)$. By the definition of $H$ on boundary points, and the continuity of $p$ and $R$, we may without loss of generality assume $t^{(k)}\in{\rm int}(\Delta_{m})$.
Indeed, for any $t^{\prime}\in\partial\Delta_{m}$, $y\in H(t^{\prime})$ and any ${\varepsilon}>0$, there is some $t^{\prime}_{\varepsilon}\in{\rm int}(\Delta_{m})$ with $\left|y-H(t^{\prime}_{\varepsilon})\right|<{\varepsilon}$, so given any sequence $t^{(k)}\to t$ with $(R\circ p\circ H)(t^{(k)})$ converging to $s$ we can construct a sequence in the interior, converging to $t$, whose image under $R\circ p\circ H$ converges to the same $s$. By definition of $H$, all accumulation points of the sequence $H(t^{(k)})$ belong to $H(t)$. By continuity of $R\circ p$, we conclude that all accumulation points of $R\circ p\circ H(t^{(k)})$ (which we have assumed converge to the point $s$) belong to $(R\circ p)(H(t))$. However, as we have already seen, $R\circ p\circ H(t)$ is a point, and we get that $s=R\circ p\circ H(t)$. However, there does not exist a continuous mapping from the simplex to its boundary which preserves the faces. Indeed, this can be shown, for example, using Brouwer’s fixed point theorem: such a map could be composed with a permutation, arriving at a continuous mapping from the simplex to itself with no fixed point. Hence, $H$ covers the interior of $P$, and for every $\alpha\in{\rm int}(P)$ there is some preimage $t$. Moreover, this $t$ satisfies $t\in{\rm int}(\Delta_{m})$; otherwise, if $t_{i}=0$ exactly for the indices $i\in I$, then by Proposition 5.13, $H(t)\subset F_{I^{c}}$, which does not contain $\alpha$ (as $F_{I^{c}}$ is not in the interior of $P$). ∎ ### 5.7. Removing the non-degeneracy condition The only difference between Theorem 1.3 and Theorem 5.14, apart from notation, is that in the latter we assume not only that $\\{x\in X:c(x,u_{i})<\infty\\}\cap\\{x\in X:c(x,u_{j})<\infty\\}$ contains an open set for any $i\neq j$, but that $\mu$ is non-degenerate with respect to the vectors $(u_{i})_{i=1}^{m}$, namely that $\mu\left(\\{x:c(x,u_{i})<\infty\\}\cap\\{x:c(x,u_{j})<\infty\\}\right)>0.$ To remove this condition, we will use a straightforward perturbation argument, similar to constructions used, for example, by McCann [15], adding in this case uniform measures on small disks and taking limits. More formally, we make use of the following technical lemma. ###### Lemma 5.15. Let $P=P((u_{i})_{i=1}^{m},\mu)$ be the Hall polytope associated with $(u_{i})_{i=1}^{m}$ and the measure $\mu$ supported on $\\{x\in X:\exists i\in[m]\ c(x,u_{i})<\infty\\}$, and assume $P$ is full dimensional. Further assume that for any $1\leq i<j\leq m$ the intersection $\\{x\in X:c(x,u_{i})<\infty\\}\cap\\{x\in X:c(x,u_{j})<\infty\\}$ contains a disk, and let $\eta_{i,j}$ denote a uniform measure on this disk, with the constants chosen so that $\sum_{1\leq i<j\leq m}\eta_{i,j}(X)=1$. For any $k\in{\mathbb{N}}$ let $\mu_{k}=\frac{1}{k}\sum_{1\leq i<j\leq m}\eta_{i,j}+(1-\frac{1}{k})\mu,$ and $P_{k}=P((u_{i})_{i=1}^{m},\mu_{k})$ the associated Hall polytope. Then, (i) $H^{\mu_{k}}_{(u_{i})_{i=1}^{m}}\to_{k\to\infty}H^{\mu}_{(u_{i})_{i=1}^{m}}$ uniformly on $\Delta_{m}$, and (ii) $P_{k}\to P$ as $k\to\infty$ in the Hausdorff metric. ###### Proof. Note first that each $\mu_{k}$ is non-degenerate, so by Proposition 5.7 $P_{k}$ is full dimensional. Furthermore, we may apply Theorem 5.14, and get that the mapping $H_{k}:=H^{\mu_{k}}_{(u_{i})_{i=1}^{m}}:{\rm int}(\Delta_{m})\to P_{k}$ covers the set ${\rm int}(P_{k})$. Denote $H=H^{\mu}_{(u_{i})_{i=1}^{m}}$. For (i), let $t=(t_{1},\dots,t_{m})\in{\rm int}(\Delta_{m})$.
It will be convenient to recall the notation $U_{i}=\left\\{x\in X:\ \arg\min_{1\leq j\leq m}c(x,u_{j})-\ln(t_{j})=i\right\\}$. The $i^{th}$ component of the difference vector satisfies $\left|\left(H_{k}(t)-H(t)\right)_{i}\right|=\left|\mu_{k}\left(U_{i}\right)-\mu\left(U_{i}\right)\right|=\frac{1}{k}\left|\sum_{l<j}\eta_{l,j}\left(U_{i}\right)-\mu\left(U_{i}\right)\right|\leq\frac{2}{k}.$ For (ii), let $\varepsilon>0$; we will show that for all $k>k_{0}=k(\varepsilon)$, $P_{k}\subset P+\varepsilon B_{2}^{m}$ and $P\subset P_{k}+\varepsilon B_{2}^{m}$. For the first inclusion, let $\alpha\in{\rm int}(P_{k})$; then, since $P_{k}$ is a Hall polytope of a non-degenerate measure, we apply Theorem 5.14 and get a point $t\in{\rm int}(\Delta_{m})$ for which $H_{k}(t)=\alpha$. As $H(t)\in P$, the previous assertion (i) gives $\|\alpha-H(t)\|_{2}\leq\frac{2\sqrt{m}}{k}$, so ${\rm int}(P_{k})\subset P+\frac{2\sqrt{m}}{k}B_{2}^{m}$ and therefore $P_{k}$ itself is also included in the same extension of $P$. For the second inclusion, let $\alpha\in P$ and let $\alpha^{(k)}$ be given by $\alpha_{i}^{(k)}:=\left(1-\frac{1}{k}\right)\alpha_{i}+\frac{1}{k}\sum_{\\{j:i<j\\}}\eta_{i,j}(X)$. We claim $\alpha^{(k)}\in P_{k}$; we check that it satisfies all of the necessary inequalities. Clearly $\sum_{i=1}^{m}\alpha_{i}^{(k)}=1$, and as the support of $\eta_{i,j}$ is a subset of $A_{\\{i\\}}\cap A_{\\{j\\}}\subset A_{I}$ for all $i\in I$, we have $\sum_{i\in I}\alpha_{i}^{(k)}=\left(1-\frac{1}{k}\right)\sum_{i\in I}\alpha_{i}+\frac{1}{k}\sum_{i\in I}\sum_{\\{j:\,i<j\\}}\eta_{i,j}(X)\leq\left(1-\frac{1}{k}\right)\mu(A_{I})+\frac{1}{k}\sum_{i<j}\eta_{i,j}(A_{I})=\mu_{k}(A_{I}).$ We compute $\|\alpha-\alpha^{(k)}\|_{2}=\left(\sum_{i=1}^{m}\frac{1}{k^{2}}\Big{(}\alpha_{i}-\sum_{\\{j:\,i<j\\}}\eta_{i,j}(X)\Big{)}^{2}\right)^{1/2}\leq\frac{2\sqrt{m}}{k}.$ Taking $k_{0}=\frac{2\sqrt{m}}{\varepsilon}$ we see that both inclusions hold. ∎ We are now set up to prove the existence of a transport map between strongly $c$-compatible measures, one of which is discrete and the other $c$-regular. ###### Proof of Theorem 1.3. Let $\mu$ be a $c$-regular measure on $X$ and $\nu=\sum_{i=1}^{m}\alpha_{i}\mathbbm{1}_{u_{i}}$ a discrete measure on $Y$ which satisfy the assumptions of the theorem, and denote by $P=P((u_{i})_{i=1}^{m},\mu)$ the associated Hall polytope. The condition of strong $c$-compatibility means precisely that for $I\neq\emptyset,[m]$ we have $\sum_{i\in I}\alpha_{i}<\mu(A_{I})$, or, in other words, that $\alpha=(\alpha_{i})_{i=1}^{m}\in{\rm int}(P)$. In particular $P$ is non-empty and in fact full dimensional. The conditions of Lemma 5.15 are satisfied, so we may use it to define $P_{k}$ and $H_{k}$ and find a sequence of points $\alpha^{(k)}\in{\rm int}(P_{k})$ such that $\alpha^{(k)}\to\alpha$ as $k\to\infty$. Take $t^{(k)}\in{\rm int}(\Delta_{m})$ such that $H_{k}(t^{(k)})=\alpha^{(k)}$. By the compactness of $\Delta_{m}$, there exists a subsequence of $t^{(k)}$ converging to some $t\in\Delta_{m}$, and we denote $t^{(k)}\to t$, abusing notation slightly. We claim that $t\in{\rm int}(\Delta_{m})$ and that $H(t)=\alpha$.
Indeed, $\left|H(t)-\alpha\right|\leq\left|H(t)-H(t^{(k)})\right|+\left|H(t^{(k)})-H_{k}(t^{(k)})\right|+\left|H_{k}(t^{(k)})-\alpha^{(k)}\right|+\left|\alpha^{(k)}-\alpha\right|.$ Each of the terms tends to $0$ as $k\to\infty$: the leftmost by continuity of $H$ (Proposition 5.12), the second by uniform convergence of $H_{k}$ to $H$, the third vanishes for every $k$ by the choice of $t^{(k)}$, and the rightmost by the choice of the sequence $\alpha^{(k)}$. Therefore $H(t)=\alpha$. Since $\alpha\in{\rm int}(P)$ we may use Proposition 5.13 to conclude that $t\in{\rm int}(\Delta_{m})$. We have thus established that $H$ is onto the interior of $P$. Recalling the construction in Lemma 5.2, we have shown that the function $\varphi(x)=\min_{1\leq i\leq m}(c(x,u_{i})-\ln(t_{i}))$ is such that its $c$-subgradient supports a transport map from $\mu$ to $\nu$. The function $\varphi$ is therefore our desired potential. Indeed, the map $T:X\to Y$ which maps the set $U_{i}=\\{x:\ \arg\min_{1\leq j\leq m}(c(x,u_{j})-\ln(t_{j}))=i\\}$ to $u_{i}$ for all $i\in[m]$ is a transport map (we define $T$ on the boundaries of these sets arbitrarily, as they are $\mu$-negligible), and $(x,u_{i})\in\partial^{c}\varphi$ for $x\in U_{i}$ by the first (and easy) part of Lemma A.2 from the appendix. ∎ ## 6\. For the Polar cost: Maps versus Plans Throughout the paper, we were careful to discuss transport plans, and not just maps. Indeed, even in the simplest cases of discrete measures, there is no reason for a transport map to exist, as it may require “atom splitting”, a dangerous endeavor. Nevertheless, in the classical case, for example, when a transport plan from some absolutely continuous measure $\mu$ to a measure $\nu$ is concentrated on the usual subgradient of a convex function $\varphi\in{\rm Cvx}(\mathbb{R}^{n})$, it is easy to see that in fact one obtains a map, not just a plan. Indeed, a convex function has a unique subgradient almost everywhere. For a general cost $c$ this is no longer the case, but for our main motivating example, the polar cost $p(x,y)=-\ln(\langle x,y\rangle-1)$, a similar argument works. Recall that for this cost the $p$-class consists of the functions $-\ln(\varphi)$ where $\varphi\in{\rm Cvx}_{0}(\mathbb{R}^{n})$ is a geometric convex function, that is, a lower semi-continuous non-negative convex function with $\varphi(0)=0$. The $p$-subgradient of the function $-\ln(\varphi)$ coincides with the polar subgradient $\partial^{\circ}$, introduced in [4], of the function $\varphi\in{\rm Cvx}_{0}(\mathbb{R}^{n})$, and we have that $\partial^{p}(-\ln(\varphi))=\partial^{\circ}\varphi=\\{(x,y):\varphi(x){\mathcal{A}}\varphi(y)=\langle x,y\rangle-1>0\\},$ where ${\mathcal{A}}\varphi(y)=\sup_{\\{x:\,\langle x,y\rangle>1\\}}\frac{\langle x,y\rangle-1}{\varphi(x)}$ is the polarity transform defined in [3]. More details are provided in Appendix A, together with the proof of the following lemma. ###### Lemma A.4. Let $\varphi\in{\rm Cvx}_{0}(\mathbb{R}^{n})$ and let $x$ satisfy $\varphi(x)\in(0,\infty)$. Then (i) for any $z\in\partial\varphi(x)$ such that $\langle x,z\rangle\neq\varphi(x)$, we have that $y=\frac{z}{\langle x,z\rangle-\varphi(x)}\in\partial^{\circ}\varphi(x)$; (ii) for any $y\in\partial^{\circ}\varphi(x)$ there exists some $z\in\partial\varphi(x)$ such that $\langle x,z\rangle\neq\varphi(x)$ and such that $y=\frac{z}{\langle x,z\rangle-\varphi(x)}$. When $\varphi(x)=0$ or $\varphi(x)=\infty$, then by definition $\partial^{\circ}\varphi(x)=\emptyset$.
When $\varphi(x)\in(0,\infty)$, the lemma implies that at a differentiability point of $\varphi$, the set $\partial^{\circ}\varphi(x)$ is either a singleton or empty, and the latter may happen only if the function $\varphi$ is linear on $[0,x]$. Our main theorem thus implies the following. ###### Theorem 1.2. Let $X=Y=\mathbb{R}^{n}$ and let $\mu,\,\nu\in{\mathcal{P}}(\mathbb{R}^{n})$ be probability measures with finite second moment, which are strongly $p$-compatible with respect to the polar cost, that is, $\mu(K)+\nu(K^{\circ})<1$ for any convex set $K$ with $\mu(K)\neq 0,1$. Assume further that $\mu$ is absolutely continuous and that there exists some finite cost plan mapping $\mu$ to $\nu$. Then there exists $\varphi\in{\rm Cvx}_{0}(\mathbb{R}^{n})$ such that $\partial^{\circ}\varphi$ is an optimal transport map between $\mu$ and $\nu$, where $\partial^{\circ}\varphi(x)=\\{y:\varphi(x){\mathcal{A}}\varphi(y)=\langle x,y\rangle-1>0\\}.$ In particular, for $\mu$-almost every $x$, the set $\partial^{\circ}\varphi(x)$ is a singleton. ###### Proof. By Theorem 1.1, we find a function $\varphi\in{\rm Cvx}_{0}(\mathbb{R}^{n})$ such that there is an optimal plan $\pi$ concentrated on the graph of $\partial^{p}(-\ln\varphi)=\partial^{\circ}\varphi$. We claim that $\mu$-almost everywhere, the set $\partial^{\circ}\varphi(x)$ is a singleton, implying that $\partial^{\circ}\varphi$ is indeed a transport map. Since $\pi$ is concentrated on $\partial^{\circ}\varphi$, the measure $\mu$ is concentrated on the projection of $\partial^{\circ}\varphi$, so in particular on the set of $x\in X$ with $\varphi(x)\neq 0,\infty$. We may also restrict to points in the interior of the domain of $\varphi$, as $\varphi$ is convex and points on the boundary of its domain have $\mu$-measure zero (using again that $\mu$ is absolutely continuous). We have that $\mu$-almost every point $x$ in the interior of the domain of $\varphi$ is a differentiability point of $\varphi$, and further that $\varphi$ does not vanish at $\mu$-almost every such point. Hence, by Lemma A.4, $\partial^{\circ}\varphi(x)$ is either a singleton or the empty set (in which case $x$ does not belong to the projection of $\partial^{\circ}\varphi$). We conclude that indeed $\partial^{\circ}\varphi(x)$ must be a singleton $\mu$-almost everywhere, as required. ∎ ## 7\. Decomposable pairs We discussed in Section 3 that when considering the transport problem of a measure $\mu\in{\mathcal{P}}(X)$ to $\nu\in{\mathcal{P}}(Y)$, with respect to a cost function $c:X\times Y\to(-\infty,\infty]$, where $\mu$ and $\nu$ are $c$-compatible but not strongly $c$-compatible, the transport problem splits into two transport problems of disjointly supported measures. Let us make this observation more formal. ###### Proposition 7.1. Let $c:X\times Y\to(-\infty,\infty]$, and assume $\mu\in{\mathcal{P}}(X),\,\nu\in{\mathcal{P}}(Y)$ are $c$-compatible measures which are not strongly $c$-compatible. Then there exist a $c$-class set $A\subset X$ and a set $B\subset Y$ such that $Y\setminus B$ is a $c$-class set, with $\mu(A)=\nu(B)\in(0,1)$, such that, letting $\mu|_{A}$ and $\nu|_{B}$ denote the restricted measures, normalized, the pair $\mu|_{A}$ and $\nu|_{B}$ is $c$-compatible, as is the pair $\mu|_{X\setminus A}$ and $\nu|_{Y\setminus B}$.
Moreover, any $\pi\in\Pi(\mu,\nu)$ which is concentrated on the set $S=\\{(x,y):c(x,y)<\infty\\}$ can be written as $\pi=\mu(A)\pi_{1}+(1-\mu(A))\pi_{2}$, where $\pi_{1}\in\Pi(\mu|_{A},\nu|_{B})$ and $\pi_{2}\in\Pi(\mu|_{X\setminus A},\nu|_{Y\setminus B})$, and $C(\mu,\nu)=\mu(A)C(\mu|_{A},\nu|_{B})+(1-\mu(A))C(\mu|_{X\setminus A},\nu|_{Y\setminus B})$. ###### Proof. Indeed, by Lemma 3.12, the fact that the measures are not strongly $c$-compatible implies that there exists some set $A\subset X$, which is a $c$-class set (this means there is some $D\subset Y$, which can also be assumed to be a $c$-class set, such that $A=\\{x:\forall y\in D,\,c(x,y)=\infty\\}$), such that $\mu(A)\in(0,1)$ and $\mu(A)+\nu(\\{y:\forall x\in A,\,\,c(x,y)=\infty\\})=1.$ Rearranging, this means that $\mu(A)=\nu(\\{y:\exists x\in A,\,\,c(x,y)<\infty\\})\quad{\rm and}\quad\mu(X\setminus A)=\nu(\\{y:\forall x\in A,\,\,c(x,y)=\infty\\}).$ Let $B=\\{y:\exists x\in A,\,\,c(x,y)<\infty\\}=Y\setminus D$. To see that $\mu|_{A}$ and $\nu|_{B}$ are $c$-compatible, let $\mu(A)=a$, say, and fix some set $A^{\prime}\subset A$; then $\mu(A^{\prime})+\nu(\\{y\in B:\forall x\in A^{\prime},\,\,c(x,y)=\infty\\})=\mu(A^{\prime})+\nu(\\{y\in Y:\forall x\in A^{\prime},\,\,c(x,y)=\infty\\})-\nu(Y\setminus B)\leq 1-(1-a)=a,$ as required. Similarly for the complementary measures. If a transport plan $\pi\in\Pi(\mu,\nu)$ is concentrated on $S$, then $\pi$ cannot have non-zero measure in $A\times(Y\setminus B)$ or in $(X\setminus A)\times B$. Indeed, $\mu(A)=\pi((A\times Y)\cap S)=\pi(A\times B)\leq\nu(B)=\mu(A)$, implying that we have equalities all along, and so $\pi(A\times(Y\setminus B))=0$. Similarly, as $Y\setminus B=D$ is a $c$-class set with $D=\\{y:\forall x\in A,\,c(x,y)=\infty\\}$, $D$ must be mapped to $X\setminus A$, and as these sets have the same measure, $\nu(D)=\pi((X\times D)\cap S)=\pi((X\setminus A)\times D)\leq\mu(X\setminus A)=\nu(D),$ and by the same reasoning, $\pi((X\setminus A)\times B)=0$. (Figure 1 is a good illustration of this event.) In other words, such a transport plan can be split into its components, $\pi_{1}=\pi|_{A\times B}$ and $\pi_{2}=\pi|_{(X\setminus A)\times(Y\setminus B)}$ (where, as above, restriction means to restrict and renormalize to a probability measure). This completes the proof. ∎ Of course, the fact that the problem splits into two sub-problems does not necessarily imply we may solve it in a satisfactory way. Indeed, it may be the case that each sub-problem has an associated potential function, but these two functions cannot be “glued” so as to form a potential for the original problem. This is the case, for example, for the polar cost in the following example. ###### Example 7.2. Consider the set $A=\\{(x,y):\ x\in(\tfrac{1}{2},1),\ y=3-2x\\}\cup\\{(x,y):\ x\in(1,2),\ y=\tfrac{3}{2}-\tfrac{1}{2}x\\}\subset\mathbb{R}^{+}\times\mathbb{R}^{+}.$ The set $A$ is $p$-cyclically monotone with respect to the polar cost $p(x,y)=-\ln(\langle x,y\rangle-1)$: for every point $(x,y)\in A$ we have $\langle x,y\rangle>1$ (indeed, on the first branch $\langle x,y\rangle-1=3x-2x^{2}-1=-(2x-1)(x-1)>0$ for $x\in(\tfrac{1}{2},1)$, and similarly on the second branch), and $A$ is the graph of a non-increasing function on its domain, which characterizes $p$-cyclically monotone sets on the ray $\mathbb{R}^{+}$, see [5]. However, the set is not $p$-path-bounded, and thus admits no potential. Next, consider the measure $\mu=\nu$ on $[1/2,2]$ with density $1$ on $[1/2,1]$ and density $1/2$ on $[1,2]$. This is a probability measure.
In fact, $\mu$ and $\nu$ are $p$-compatible, as the normalized uniform measure on the set $A$ constitutes a plan $\pi\in\Pi(\mu,\nu)$. However, they are not strongly $p$-compatible, since the set $[1/2,1]$ must be mapped to $[1,2]$ and vice versa. In this case we see the splitting very clearly: indeed, $A$ is written as the union of two sets, each of which admits a potential (so, in particular, each is $p$-path-bounded, and the corresponding restriction of $\pi$ is an optimal plan between the restricted measures). However, there is no potential for the full set $A$, as it is not $p$-path-bounded, and in particular no “gluing” of the two potentials is possible. ## Appendix A $c$-subgradients and polar subgradients Since $c$-subgradients play such an important role in this theory, we gather here some relevant information regarding them which we did not include in the main text so as not to disturb its flow. Let us recall that given a function $\varphi$ in the $c$-class, its $c$-subgradient is defined by $\partial^{c}\varphi=\\{(x,y):\,\varphi(x)+\varphi^{c}(y)=c(x,y)\,\text{ and }\,c(x,y)<\infty\\}.$ Denoting by $\partial^{c}\varphi(x)$ the set of points $y\in Y$ for which $(x,y)\in\partial^{c}\varphi$, we have by definition that $x\in\partial^{c}\varphi^{c}(y)\;\;\Leftrightarrow\;\;y\in\partial^{c}\varphi(x).$ Notice that $y\in\partial^{c}\varphi(x)$ if and only if the function $c(\cdot,y)-\varphi^{c}(y)$ is above $\varphi$ and coincides with it at $x$. This provides the first simple but useful way to think about $c$-subgradients, summarized in Lemma A.1. Given a function $\varphi$ in the $c$-class, it is the image, under the $c$-transform, of another $c$-class function $\psi=\varphi^{c}$, and therefore it can be written as an infimum over basic functions as follows: $\varphi(x)=\inf_{y}\left(c(x,y)-\varphi^{c}(y)\right).$ All the functions on the right hand side lie above $\varphi$. If any one of the basic functions (indexed by $y$) on the right hand side touches $\varphi$ at the point $x$, then the pair $(x,y)$ belongs to $\partial^{c}\varphi$, and $y\in\partial^{c}\varphi(x)$. In other words: ###### Lemma A.1. Let $\varphi$ be a $c$-class function and $x\in X$, and assume that $\varphi(x)<\infty$. Then $y_{0}\in\partial^{c}\varphi(x)$ if and only if $c(x,y_{0})<\infty$ and the function $\ell(z)=c(z,y_{0})-c(x,y_{0})+\varphi(x)$ satisfies $\ell(z)\geq\varphi(z)\ \text{ for all }\ z\in X.$ ###### Proof. By the definition we have that $y_{0}\in\partial^{c}\varphi(x)$ if and only if $\varphi(x)+\varphi^{c}(y_{0})=c(x,y_{0})<\infty.$ Using the definition of the $c$-transform we see that $\varphi(x)=c(x,y_{0})-\varphi^{c}(y_{0})=\sup_{z}(c(x,y_{0})-c(z,y_{0})+\varphi(z)),$ which holds if and only if for all $z$ we have $c(z,y_{0})-c(x,y_{0})+\varphi(x)\geq\varphi(z)$. ∎ It is useful to understand the structure of the $c$-subgradient of the basic functions. In parallel to the classical case, where the linear functions have constant subgradient, we show that under mild assumptions the same is true for $c$-subgradients of basic functions. This was, of course, our motivation for using the specific candidates for the potential functions in Section 5. ###### Lemma A.2. Let $X,\,Y$ be measure spaces and let $c:X\times Y\to(-\infty,\infty]$ be a measurable cost function. Consider a basic function $\varphi(x)=c(x,y_{0})+t$ for some $y_{0}\in Y$ and $t\in\mathbb{R}$. If $c(x,y_{0})<\infty$, then $y_{0}\in\partial^{c}\varphi(x)$.
If, in addition, for any $y_{1}\neq y_{0}$ the infimum $\inf_{z}\left(c(z,y_{1})-c(z,y_{0})\right)$ is not attained at $x$ (for example, if the infimum is $-\infty$, or finite but not attained at all), then $\\{y_{0}\\}=\partial^{c}\varphi(x)$. ###### Proof. Indeed, let $\varphi$ be as in the statement. From the definition it follows that $y\in\partial^{c}\varphi(x)$ if and only if $c(x,y)<\infty$ and $c(x,y)-\varphi(x)=\varphi^{c}(y)=\inf_{z}(c(z,y)-\varphi(z)),$ which can be reformulated as $c(x,y)-\varphi(x)\leq c(z,y)-\varphi(z)\ \ \ \text{ for all }z\in X.$ Plugging in the definition of $\varphi$ we get $c(x,y)-c(x,y_{0})\leq c(z,y)-c(z,y_{0})\ \ \ \text{ for all }z\in X.$ We see that $y=y_{0}$ always satisfies this inequality (with equality), so that $y_{0}\in\partial^{c}\varphi(x)$. Clearly for $y_{1}\neq y_{0}$, such an inequality means precisely that the infimum is attained at $x$. ∎ An important and motivating first example is the one coming from the classical cost function $c(x,y)=-\langle x,y\rangle$. ###### Example A.3. For the cost function $c(x,y)=-\langle x,y\rangle$, whose transport plans and maps coincide with those associated to the quadratic cost, the $c$-subgradient coincides, up to a minus sign, with the well known subgradient. More formally, a function $\varphi$ is in the $c$-class if and only if $-\varphi\in{\rm Cvx}(\mathbb{R}^{n})$, namely is convex and lower semi-continuous. Denoting $\psi=-\varphi$ and using the definition of the $c$-transform, we see that $(x,y)\in\partial^{c}\varphi$ if and only if for all $z\in X$ we have $\psi(z)-\psi(x)=\varphi(x)-\varphi(z)\geq c(x,y)-c(z,y).$ Plugging in this cost, we indeed get that $y\in\partial^{c}\varphi(x)$ if and only if for all $z$ it holds that $\psi(x)+\langle z-x,y\rangle\leq\psi(z)$, namely $y\in\partial\psi(x)$. The second motivating example, which is our main point of interest, is that of the polar cost $p:\mathbb{R}^{n}\times\mathbb{R}^{n}\to(-\infty,\infty]$, which we once again recall: $p(x,y)=-\ln(\langle x,y\rangle-1)_{+}=\begin{cases}-\ln(\langle x,y\rangle-1),&\ \ \text{if }\langle x,y\rangle>1\\\ +\infty,&\ \ \text{otherwise.}\end{cases}$ It was shown in [7] that for the polar cost the $p$-class consists of all functions of the form $-\ln(\varphi)$, where $\varphi$ is a geometric convex function, that is, a lower semi-continuous non-negative convex function with $\varphi(0)=0$. The associated cost transform is linked with the ${\mathcal{A}}$-transform defined in [3] and given by (14) ${\mathcal{A}}\varphi(y)=\sup_{\\{x:\,\langle x,y\rangle>1\\}}\frac{\langle x,y\rangle-1}{\varphi(x)}.$ More precisely, one may easily verify that $-\ln({\mathcal{A}}\varphi)=(-\ln(\varphi))^{p}$. Further, the $p$-subgradient of the function $-\ln(\varphi)$ can be rewritten as the polar subgradient $\partial^{\circ}$, introduced in [4], of the function $\varphi\in{\rm Cvx}_{0}(\mathbb{R}^{n})$. Indeed, we have that (15) $\partial^{p}(-\ln(\varphi))=\partial^{\circ}\varphi=\\{(x,y):\varphi(x){\mathcal{A}}\varphi(y)=\langle x,y\rangle-1>0\\}.$ This convenient form is a reason for us to sometimes consider a “multiplicative” setting, where the basic functions are of the form $\varphi_{u,t}(x)=t(\langle x,u\rangle-1)_{+}.$ The next lemma, which is a version of [4, Lemma 3.3], describes the connection between the polar subgradient and the classical subgradient.
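Before turning to it, here is a small computation illustrating the transform (14) (our example, included only for illustration): for the Euclidean norm $\varphi(x)=|x|$ and any $y\neq 0$, the Cauchy-Schwarz inequality gives $\frac{\langle x,y\rangle-1}{|x|}<\frac{\langle x,y\rangle}{|x|}\leq|y|$ whenever $\langle x,y\rangle>1$, while along $x=sy/|y|$ the ratio equals $|y|-1/s$, which tends to $|y|$ as $s\to\infty$; hence ${\mathcal{A}}\varphi=\varphi$, the supremum in (14) is never attained, and by (15) $\partial^{\circ}\varphi(x)=\emptyset$ for every $x$. This is consistent with the remark made in Section 6 that $\partial^{\circ}\varphi(x)$ can be empty only when $\varphi$ is linear on $[0,x]$.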
We will use the following notation: $Z_{\varphi}=\\{x:\varphi(x)=0\\}$ for the zero set, and ${\rm dom}(\varphi)=\\{x:\varphi(x)<\infty\\}$ for the domain where $\varphi$ is finite. ###### Lemma A.4. Let $\varphi\in{\rm Cvx}_{0}(\mathbb{R}^{n})$ and let $x\in{\rm dom}(\varphi)\setminus Z_{\varphi}$. Then (i) for any $z\in\partial\varphi(x)$ such that $\langle x,z\rangle\neq\varphi(x)$, we have that $y=\frac{z}{\langle x,z\rangle-\varphi(x)}\in\partial^{\circ}\varphi(x)$; (ii) for any $y\in\partial^{\circ}\varphi(x)$ there exists some $z\in\partial\varphi(x)$ such that $\langle x,z\rangle\neq\varphi(x)$ and such that $y=\frac{z}{\langle x,z\rangle-\varphi(x)}$. ###### Proof. (i) Let $z\in\partial\varphi(x)$ with $\langle x,z\rangle\neq\varphi(x)$, which means that for every $w$ we have $\langle w,z\rangle-\varphi(w)\leq\langle x,z\rangle-\varphi(x)$. In particular, taking $w=0$ and recalling $\varphi(0)=0$, we get $\langle x,z\rangle-\varphi(x)\geq 0$, and combined with the assumption $\langle x,z\rangle\neq\varphi(x)$, in fact $\langle x,z\rangle-\varphi(x)>0$. Hence, letting $y=\frac{z}{\langle x,z\rangle-\varphi(x)}$, we have that $\langle x,y\rangle>1$. To show that $y\in\partial^{\circ}\varphi(x)$, it remains to show that $\varphi(x){\mathcal{A}}\varphi(y)=\langle x,y\rangle-1$. According to the definition of ${\mathcal{A}}$, this holds if for every $w$ with $\langle w,y\rangle>1$ and $\varphi(w)>0$, we have $\frac{\langle w,y\rangle-1}{\varphi(w)}\leq\frac{\langle x,y\rangle-1}{\varphi(x)}.$ Plugging in $y$ and rearranging gives $\frac{\langle w,\frac{z}{\langle x,z\rangle-\varphi(x)}\rangle-1}{\varphi(w)}\leq\frac{\langle x,\frac{z}{\langle x,z\rangle-\varphi(x)}\rangle-1}{\varphi(x)}={\frac{1}{\langle x,z\rangle-\varphi(x)}}.$ Using that $\langle x,z\rangle-\varphi(x)>0$, the above inequality is equivalent to our initial assumption $\langle w,z\rangle-\varphi(w)\leq\langle x,z\rangle-\varphi(x)$. (ii) Given $y\in\partial^{\circ}\varphi(x)$, it follows from the definition that $\langle x,y\rangle>1$. Consider $z=\frac{y\varphi(x)}{\langle y,x\rangle-1},$ which is well defined, and for which $y=\frac{z}{\langle x,z\rangle-\varphi(x)}$. We need to show that $z\in\partial\varphi(x)$ and $\langle z,x\rangle\neq\varphi(x)$. The latter follows easily since $\langle z,x\rangle=\varphi(x)\big{(}1+\tfrac{1}{\langle x,y\rangle-1}\big{)}$ and, once again, $\langle x,y\rangle>1$. For the former, we use as before that if $y\in\partial^{\circ}\varphi(x)$ then for any $w$ with $\langle w,y\rangle>1$ and $\varphi(w)>0$ we have $\frac{\langle w,y\rangle-1}{\varphi(w)}\leq\frac{\langle x,y\rangle-1}{\varphi(x)}$. Plugging in $y$ and rearranging, we get that $\varphi(x)+\langle w-x,z\rangle\leq\varphi(w)$ holds for any $w$ such that $\langle w,z\rangle>\langle x,z\rangle-\varphi(x)$ and $\varphi(w)>0$. In the case when $w$ is such that $\langle w,z\rangle\leq\langle x,z\rangle-\varphi(x)$, this actually means that $\varphi(x)+\langle w-x,z\rangle\leq 0$, and since geometric convex functions are non-negative, the desired inequality trivially follows. It remains to consider the case when $w\in Z_{\varphi}$, i.e. when $\varphi(w)=0$. Then, plugging in the previously defined $z$, the inequality defining the subgradient of $\varphi$ at $x$ becomes simply $\langle w,y\rangle\leq 1.$ That is, we need to show that $\partial^{\circ}\varphi(x)$ is contained in the polar set of $Z_{\varphi}$.
Indeed, $y\in\partial^{\circ}\varphi(x)$ implies in particular that $y\in\text{dom}({\mathcal{A}}\varphi)$ (since the value ${\mathcal{A}}\varphi(y)=\frac{\langle x,y\rangle-1}{\varphi(x)}<\infty$), and it follows from the definition of ${\mathcal{A}}$ that $\text{dom}({\mathcal{A}}\varphi)\subset Z_{\varphi}^{\circ}$, which completes the proof. ∎ We end this appendix with one explicit example of a function and its $p$-subgradient. More examples and applications can be found in [7, 24] and in the forthcoming [2]. ###### Example A.5. Let $\varphi(x)=|x|^{2}/2$, in which case ${{\mathcal{A}}}\varphi(y)=|y|^{2}/2$ and the supremum in the definition of ${\mathcal{A}}\varphi$ is attained at $x=2y/|y|^{2}$. Hence, $\partial^{\circ}\varphi(x)=\\{2x/|x|^{2}\\}$; indeed, for $y=2x/|x|^{2}$ one checks directly that $\langle x,y\rangle-1=1=\frac{|x|^{2}}{2}\cdot\frac{|y|^{2}}{2}=\varphi(x){\mathcal{A}}\varphi(y)$. Note that the mapping $x\mapsto\partial^{\circ}\varphi(x)$ in this case is a (rescaled) spherical inversion. ## References * [1] L. Ambrosio and A. Pratelli, _Existence and stability results in the ${L}^{1}$ theory of optimal transportation_, Optimal transportation and applications, Springer, 2003, pp. 123–160. * [2] S. Artstein-Avidan, H. Barel, Y. Rubinstein, S. Sadovsky, and K. Wyczesany, _Transportation induced by the polarity transform_ , In preparation. * [3] S. Artstein-Avidan and V. Milman, _Hidden structures in the class of convex functions and a new duality transform_ , Journal of the European Mathematical Society 13 (2011), no. 4, 975–1004. * [4] S. Artstein-Avidan and Y. A. Rubinstein, _Differential analysis of polarity: Polar Hamilton-Jacobi, conservation laws, and Monge-Ampère equations_ , Journal d’Analyse Mathématique 132 (2017), no. 1, 133–156. * [5] S. Artstein-Avidan, S. Sadovsky, and K. Wyczesany, _A Rockafellar-type theorem for non-traditional costs_ , arXiv:2011.13263. * [6] K. Ball, _An elementary introduction to monotone transportation_ , Geometric aspects of functional analysis, Springer, 2004, pp. 41–52. * [7] H. Barel, _Optimal transportation problem for polar cost_ , Master’s thesis, Tel Aviv University, 2019. * [8] M. Beiglböck, M. Goldstern, G. Maresch, and W. Schachermayer, _Optimal and better transport plans_ , Journal of Functional Analysis 256 (2009), no. 6, 1907–1927. * [9] S. Bianchini and L. Caravenna, _On optimality of c-cyclically monotone transference plans_ , Comptes Rendus Mathematique 348 (2010), no. 11-12, 613–618. * [10] Y. Brenier, _Polar factorization and monotone rearrangement of vector-valued functions_ , Communications on Pure and Applied Mathematics 44 (1991), no. 4, 375–417. * [11] L. A. Caffarelli, _The regularity of mappings with a convex potential_ , Journal of the American Mathematical Society 5 (1992), no. 1, 99–104. * [12] W. Gangbo and R. J. McCann, _The geometry of optimal transportation_ , Acta Mathematica 177 (1996), no. 2, 113–161. * [13] L. V. Kantorovich, _On the transfer of masses_ , Dokl. Acad. Sci. SSSR 37 (1942), 227–229. * [14] L. V. Kantorovich, _On a problem of Monge_ , Uspekhi Mat. Nauk. 3 (1948), 225–226. * [15] R. J. McCann, _A convexity theory for interacting gases and equilibrium crystals_ , Ph.D. thesis, Princeton University, 1994. * [16] J. Rochet, _A necessary and sufficient condition for rationalizability in a quasi-linear context_ , Journal of Mathematical Economics 16 (1987), no. 2, 191–200. * [17] R. T. Rockafellar, _Characterization of the subdifferentials of convex functions_ , Pacific Journal of Mathematics 17 (1966), no. 3, 497–510. * [18] L. Rüschendorf, _On c-optimal random variables_ , Statistics & Probability Letters 27 (1996), no. 3, 267–270.
* [19] C. Smith and M. Knott, _On Hoeffding-Fréchet bounds and cyclic monotone relations_ , Journal of Multivariate Analysis 40 (1992), no. 2, 328–334. * [20] V. Strassen, _The existence of probability measures with given marginals_ , The Annals of Mathematical Statistics 36 (1965), no. 2, 423–439. * [21] N. S. Trudinger and X. Wang, _On strict convexity and continuous differentiability of potential functions in optimal transportation_ , Archive for Rational Mechanics and Analysis 192 (2009), no. 3, 403–418. * [22] C. Villani, _Topics in optimal transportation_ , no. 58, American Mathematical Soc., 2003. * [23] C. Villani, _Optimal transport: old and new_ , vol. 338, Springer Science & Business Media, 2008. * [24] K. Wyczesany, _Topics in high-dimensional geometry and optimal transport_ , Ph.D. thesis, University of Cambridge, 2020. School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel e-mail<EMAIL_ADDRESS> e-mail<EMAIL_ADDRESS> e-mail<EMAIL_ADDRESS>
# Analysis of Library Dependency Networks of Package Managers Used in iOS Development ††thanks: This research has been funded by grant PRG1226 of the Estonian Research Council, the European Social Fund via the IT Academy program, and the Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK), the Federal Ministry for Digital and Economic Affairs (BMDW), and the State of Upper Austria in the frame of the SCCH competence center INTEGRATE (FFG grant no. 892418), part of the COMET Competence Centers for Excellent Technologies Programme managed by the Austrian Research Promotion Agency FFG. 1st Kristiina Rahkema Institute of Computer Science University of Tartu Tartu, Estonia <EMAIL_ADDRESS>2nd Dietmar Pfahl Institute of Computer Science University of Tartu Tartu, Estonia <EMAIL_ADDRESS>3rd Rudolf Ramler Software Competence Center Hagenberg (SCCH) GmbH Hagenberg, Austria <EMAIL_ADDRESS> ###### Abstract Reusing existing solutions in the form of third-party libraries is common practice when writing software. Package managers are used to manage dependencies on third-party libraries by automating the process of installing and updating the libraries. Library dependencies themselves can have dependencies on other libraries, creating a dependency network with several levels of indirection. The library dependency network in the Swift ecosystem encompasses libraries from CocoaPods, Carthage and Swift Package Manager (PM). These package managers are used when developing, for example, iOS or Mac OS applications in Swift and Objective-C. We provide the first analysis of the library dependency network evolution in the Swift ecosystem. Although CocoaPods is the package manager with the biggest set of libraries, the difference to the other package managers is not as big as expected. The youngest package manager and official package manager for Swift, Swift PM, is becoming more and more popular, resulting in a gradual slow-down of the growth of the other two package managers. When analyzing direct and transitive dependencies, we found that the mean total number of dependencies is lower in the Swift ecosystem compared to many other ecosystems. Still, the total number of dependencies shows a clear growing trend over the last five years. ###### Index Terms: iOS, package manager, library dependency network ## I Introduction Reusing existing solutions in the form of third-party libraries is common practice when writing software. This makes the development process faster and easier, and third-party solutions are often better vetted than custom solutions. Using a package manager allows developers to declare and keep track of a project’s dependencies on third-party libraries. The library dependencies themselves can, again, have dependencies on other libraries, creating a network of library dependencies. The collection of all libraries that are available through a package manager, together with their library dependencies, creates a potentially large and complex library dependency network (LDN). The structure and evolution of such LDNs of various package managers have been studied. For example, Kikas et al. [1] created a dependency dataset and analyzed the LDNs of JavaScript, Ruby and Rust. Decan et al. [2] studied the growth of LDNs of seven package managers. They found that the number of libraries, versions and dependencies grew for each package manager linearly or even exponentially, increasing the risk of dependency conflicts and incompatibilities.
Furthermore, when the number of direct and transitive dependencies (i.e. indirect dependencies of any level of indirection) grows, it also increases the risk of a library depending on a vulnerable library version. As seen in the example of Log4J, even a vulnerability in a seemingly harmless logging library can have a severe impact on a significant part of a software ecosystem [3]. Although several studies analyzed LDNs, especially for npm and Maven, no studies exist for the LDNs of CocoaPods, Carthage and Swift PM. These three package managers are used when developing applications in Swift, such as iOS, Mac OS or Watch OS applications. In the following, we refer to the combined ecosystems of CocoaPods, Carthage and Swift PM as the Swift ecosystem. It is important to note that this ecosystem contains libraries written in other languages as well (e.g., Objective-C, C, C++). Additionally, CocoaPods and Carthage are also used in many Objective-C projects. A further interesting aspect of the Swift ecosystem is that the LDNs of the three package managers are partially overlapping. In this paper, we present our analysis of the evolution of the LDNs of CocoaPods, Carthage and Swift PM. We analyse the growth of the whole Swift ecosystem in terms of the number of libraries and the number of library versions, the growth of the LDNs for each of the three package managers, and the evolution of the number of dependencies over time. By comparing our results and observations to studies of LDNs of other ecosystems, we also share insights related to the characteristics of iOS development. We previously briefly discussed these research plans in a doctoral symposium paper [4]. ## II Background ### II-A Package Managers Our focus is on libraries that can be used in applications written in Swift, such as iOS or Mac OS applications. Package managers used in Swift development are CocoaPods, Carthage, and Swift Package Manager (Swift PM). CocoaPods (https://cocoapods.org) was released in September 2011 and is the oldest of the three package managers, offering around 88,000 libraries. It is a centralized package manager. Dependencies are declared in a Podfile; when CocoaPods is executed, it downloads and compiles the declared libraries and generates a new Xcode workspace with all libraries included. This makes CocoaPods easy to use, as no additional manual work is needed. Carthage (https://github.com/Carthage/Carthage) was released in November 2014. According to Libraries.io, it includes 4.5 thousand libraries [5]. This number, however, is an estimate, as Carthage is a decentralized package manager and no official central repository of libraries exists. Carthage was created as a counterweight to the more heavyweight CocoaPods. Libraries can be included through Carthage by simply adding the repository address of a library to the Cartfile. Carthage downloads and compiles these libraries but does not automatically include them in the app project. Swift Package Manager (Swift PM, https://www.swift.org/package-manager/) was released in December 2017. It is the official package manager created by Apple and, like Carthage, it is decentralized. In contrast to the other two package managers, the Package.swift file is also used as a build file. Support for iOS applications was not added to Swift PM until 2019 [6]. Since 2019 it is also possible to use Swift PM directly through Xcode, the main IDE for iOS and Mac OS development.
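To illustrate how dependencies are declared, the following is a minimal Package.swift manifest for a hypothetical app; the package name, library URL and version constraint are illustrative examples only, not taken from our dataset:

```swift
// swift-tools-version:5.5
// Minimal example manifest for Swift PM (illustrative only).
import PackageDescription

let package = Package(
    name: "ExampleApp",
    dependencies: [
        // The manifest states a version constraint ("5.6.0 or later"),
        // not the exact version that will be installed.
        .package(url: "https://github.com/Alamofire/Alamofire.git", from: "5.6.0")
    ],
    targets: [
        // Each target declares which of the package dependencies it uses.
        .target(name: "ExampleApp", dependencies: ["Alamofire"])
    ]
)
```

When the dependencies are resolved, Swift PM records the exact library versions it installed in a Package.resolved file; this distinction between manifest and resolution files is discussed next.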
### II-B Dependency Data For each package manager there exist configuration files that specify which libraries the developers wish to include in their project. There are two types of configuration files: manifest files and resolution files. Developers specify a library and its version constraints in a manifest file. After installing the dependencies through the package manager, a resolution file is generated by the package manager that specifies the exact library versions installed. Each of the package managers has a slightly different way of declaring dependencies; the underlying idea, however, is the same. ### II-C LDN Dataset Rahkema and Pfahl [7] created a dataset for the LDNs of the Swift ecosystem. The dataset contains information on library versions, their dependencies, and publicly reported vulnerabilities. The dataset contains data on 60,533 libraries, 572,131 library versions, and 23,419 dependencies between libraries. We use this data on library versions and dependencies to analyse the evolution of the LDNs. ## III Related Work In related work, the evolution of LDNs for various package managers has been analyzed, most often for Maven, npm and RubyGems. So far, no studies exist on package managers in the Swift ecosystem (CocoaPods, Carthage and Swift PM). Kikas et al. [1] analyzed the evolution of the LDNs of three languages: JavaScript, Ruby and Rust. They found that for each package manager the number of libraries is growing. Similarly, the number of direct dependencies and total dependencies per project is increasing. The increase was extreme for JavaScript, where the average number of total dependencies grew from one per project to almost 60 between 2011 and 2016. Decan et al. analyzed the evolution of seven package manager LDNs [2]. They used the libraries.io dataset to analyze how these package managers’ LDNs change over time. They found that the growth of the number of libraries and dependencies differs between package managers. Some LDNs grow linearly while others grow exponentially. They showed that the number of transitive dependencies is significantly higher than the number of direct dependencies. For some of the package managers, the ratio between transitive and direct dependencies is growing. They also pointed out that the average dependency depth is between three and six, depending on the package manager. The libraries.io data set includes data about CocoaPods, Carthage and Swift Package Manager (the three package managers used in iOS development), but according to Decan et al. this data was incomplete, i.e. there was no data on dependencies between library versions for these package managers. Therefore, these package managers were excluded from the analysis. Kula et al. analyzed dependency updates in 4,600 Java projects [8]. They found that 81.5% of the studied projects did not update their outdated dependencies. They plotted library usage curves and discovered that new library versions are mostly used by new dependent projects only. ## IV Method Scripts used in our analyses can be found on GitHub (https://github.com/kristiinara/LibraryDependencyEvolution). In this paper, we analyse the following research questions: * • RQ1: How has the combined LDN of the Swift ecosystem evolved? * • RQ2: How have the LDNs of each of the package managers evolved? * • RQ3: How has the number of dependencies evolved in the LDNs?
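At their core, the analyses behind these research questions are cumulative, per-month counts over the dataset [7]. The following minimal Python sketch illustrates the kind of computation used for RQ1 and RQ2, as detailed in the next subsections; it assumes a hypothetical flat export of the dataset (a file library_versions.csv with columns library and commit_timestamp), which is not part of the original tooling.

```python
import pandas as pd

# Hypothetical flat export of the dataset [7]: one row per library version,
# with columns 'library' and 'commit_timestamp' (assumed for illustration).
versions = pd.read_csv("library_versions.csv", parse_dates=["commit_timestamp"])
versions["month"] = versions["commit_timestamp"].dt.to_period("M")

# Cumulative number of library versions per month.
versions_cumulative = versions.groupby("month").size().sort_index().cumsum()

# Cumulative number of libraries: each library is dated by its first version.
first_release = versions.groupby("library")["month"].min()
libraries_cumulative = first_release.value_counts().sort_index().cumsum()

print(libraries_cumulative.tail())
print(versions_cumulative.tail())
```

The per-package-manager curves of RQ2 follow the same pattern after filtering the rows by package manager.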
### IV-A RQ1: Evolution of the combined LDN For RQ1, we plot the cumulative number of all libraries and library versions, including libraries that have no dependencies and no dependents. This might include unused libraries. ### IV-B RQ2: Evolution of the LDNs For RQ2, we only consider connected libraries, i.e., libraries with at least one dependent or dependency. We first look at how the number of libraries has grown for each package manager. For this, we find the first version of each library, group by the month of its commit timestamp, and count the number of unique libraries cumulatively. We plot the cumulative curve for each package manager. We then calculate how the number of library versions has grown for each package manager. Again, we group the library versions by the months of their commit timestamps. We count the number of library versions released each month and take the cumulative sum. The cumulative curve is plotted for each package manager. ### IV-C RQ3: Evolution of Dependencies For RQ3, we plot the mean number of direct and transitive dependencies for each month, as a monthly snapshot. The monthly snapshot is calculated by taking, for each library, the last library version released in the respective month. The number of direct and transitive dependencies is found by querying LIBRARY_DEPENDS_ON chains with a length of up to 10. A maximum threshold for the dependency chain needs to be set for performance reasons; we did, however, confirm that very few dependencies existed beyond that level. The ratio of the total number of dependencies to the number of libraries is then calculated and plotted. ## V Results and Discussion ### V-A RQ1: Evolution of the combined LDN We analyzed 60,533 libraries in total. Figure 1 shows the cumulative number of libraries (red) and library versions (blue) over time. The subset of connected libraries and their versions is shown as dotted lines. The total number of libraries grew very fast after the release of the Swift programming language in 2014. From 2019 onward, the addition of new libraries has slowed down slightly. A similar pattern can be observed for library versions. Moreover, we see similar trends for the subset of connected libraries, i.e., libraries that either use a package manager or are used through a package manager. From Figure 1 it is evident that some of the libraries in the Swift ecosystem were created before the introduction of the first package manager, CocoaPods, or before the release of the language Swift. First, it is important to note that, although we call it the Swift ecosystem, it also encompasses Objective-C libraries. Objective-C, the predecessor of Swift, was introduced in 1984 and is interoperable with Swift. Additionally, libraries written in C and C++ can be used in both Swift and Objective-C projects. Some of these libraries are also available through these package managers. Some of the C/C++/Objective-C libraries written before the release of CocoaPods were later added to the package managers. In our analysis we use git tags and commit timestamps to date library versions. Therefore, as we have no information on when a library or library version was added to a package manager, we simply assume that it was added when the library version was released. In the following, we will ignore any library versions added before September 2011, i.e., prior to when CocoaPods was introduced. Figure 1: Cumulative number of libraries and library versions.
Solid lines show numbers for all libraries, dotted lines for connected libraries. ### V-B RQ2: Evolution of the LDNs In the following analysis, we restrict ourselves to libraries that have at least one dependency or dependent. This means that these libraries are indeed part of at least one package manager LDN. These libraries are called connected libraries. We analyzed the cumulative number of connected libraries for each package manager. In total there are 9,755 connected libraries. Of these libraries, 6,600 belonged to the CocoaPods LDN, 2,856 belonged to Carthage and 2,150 belonged to Swift PM. A library can belong to multiple package manager LDNs. The change in the number of libraries can be seen in Figure 2. The number of libraries is growing fastest for the newest and smallest package manager, Swift PM. The number of libraries for CocoaPods is still growing, but the growth has slowed after 2019. The growth of the number of libraries for Carthage has almost completely halted. Figure 2: Cumulative number of libraries. When Decan et al. [2] analyzed the LDNs of seven package managers, they found that the number of libraries grew for each package manager either linearly or exponentially. Given these results, we expected to see a similar trend in the number of libraries and library versions for CocoaPods, Carthage and Swift PM. However, the growth we observed has slowed in recent years. The decline in growth for CocoaPods and Carthage is also clear when looking at the number of new library versions added per month. To check whether the number of updates per month for CocoaPods is really declining, we conducted an additional analysis of the git history of the CocoaPods Specs repository. In this analysis, we searched for the latest git commit for each month and then queried the difference between commits of consecutive months. Additions and deletions were recorded for each file. We discarded all file names that were not Podspec files and then counted the number of additions for each month. This analysis confirmed that the number of updates (i.e., file additions) was indeed falling. Our hypothesis is that more and more developers are moving to Swift PM and that, as a consequence, Carthage has lost most of its appeal. When Apple first introduced Swift PM, it was a standalone terminal application that could be used to create Mac OS applications and packages. Furthermore, in 2019, Apple added support for iOS and for Xcode, the official IDE for iOS and Mac OS development. Now a dependency can be declared through Swift PM by simply searching for a library in Xcode. This makes Swift PM the easiest-to-use package manager in the Swift ecosystem. ### V-C RQ3: Evolution of Library Dependencies Figure 3 shows the mean number of direct dependencies for each monthly snapshot. For CocoaPods, the mean number of direct dependencies fluctuated strongly until 2016. After 2016 the mean number leveled off at around three, which is slightly higher than the mean number of direct dependencies for Carthage and Swift PM, each averaging around 2.5. The mean number of direct dependencies has a slight upward trend for all three package managers. In addition, we calculated the mean number of direct and transitive dependencies for all connected libraries. The data shown in Figure 4 does not differentiate between package managers, as calculating transitive dependency chains per package manager was difficult.
We did, however, count the number of unique library names as the total number of dependencies, so as not to accidentally count the same library twice if it was referenced through multiple package managers. The mean number of dependencies in Figure 4 shows a clear upward trend. As with the mean number of direct dependencies, the number of all dependencies fluctuates considerably until 2016. Between 2016 and 2022, however, there is a clear upward trend, with the mean number of direct and transitive dependencies rising from around 3 to 5.5. Figure 3: Number of direct dependencies for each monthly snapshot. Figure 4: Total number of dependencies for each monthly snapshot. We expected the number of direct dependencies and total dependencies to grow over time, as observed by Kikas et al. [1]. We observed that the number of direct dependencies was very volatile and growing rapidly between 2012 and 2016. After 2016 the number of direct dependencies stabilized and started growing slowly. Although the mean number of direct dependencies is consistently higher for CocoaPods than for Carthage, the trend is similar for all three package managers. Our hypothesis is that this change was a consequence of the introduction of Swift in 2014 and developers migrating from Objective-C to Swift in the following years. The growth of the mean number of dependencies after 2016 for all three studied package managers is lower than for JavaScript, Ruby and Rust [1]. The mean number of direct dependencies is comparable to other package managers [2], but the mean number of transitive dependencies is significantly lower. For example, the median number of transitive dependencies for a library available through the Cargo, NuGet and npm package managers is 41, 27 and 21, respectively. In comparison, the median number of transitive dependencies in the LDNs of the Swift ecosystem is only two. ## VI Conclusion We analysed the combined LDN of the three package managers used in the Swift ecosystem: CocoaPods, Carthage and Swift PM. We saw that the cumulative number of libraries and library versions is growing, but has slowed down in recent years. Swift PM, the newest and only official package manager for Swift, seems to be on the rise, as it has been growing faster than the other package managers over the last couple of years. In comparison to the LDNs of the Cargo, NuGet or npm ecosystems, the Swift ecosystem shows a smaller number of library dependencies. The resulting benefit may be a smaller probability of depending on a vulnerable library. Overall, the number of direct dependencies shows a slight upward trend, while the number of total dependencies is clearly growing, reminiscent of Lehman’s law of ever-increasing complexity [9]. ## References * [1] R. Kikas, G. Gousios, M. Dumas, and D. Pfahl, “Structure and evolution of package dependency networks,” _2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR)_ , pp. 102–112, 2017. * [2] A. Decan, T. Mens, and P. Grosjean, “An empirical comparison of dependency network evolution in seven software packaging ecosystems,” _Empirical Software Engineering_ , vol. 24, no. 1, pp. 381–416, 2019. * [3] R. Hiesgen, M. Nawrocki, T. C. Schmidt, and M. Waehlisch, “The race to the vulnerable: Measuring the Log4j Shell incident,” _Proc. of Network Traffic Measurement and Analysis Conference (TMA22)_ , 2022.
* [4] K. Rahkema and D. Pfahl, “Quality analysis of iOS applications with focus on maintainability and security,” in _2022 IEEE International Conference on Software Maintenance and Evolution (ICSME)_. IEEE, 2022, pp. 602–606. * [5] Libraries.io, “Supported package managers,” 2022, https://libraries.io (accessed: Feb. 17, 2022). * [6] T. Elliott, “Swift Package Manager for iOS,” 2020, https://www.raywenderlich.com/7242045-swift-package-manager-for-ios (accessed: Jan. 21, 2022). * [7] K. Rahkema and D. Pfahl, “Dataset: Dependency networks of open source libraries available through CocoaPods, Carthage and Swift PM,” _2022 IEEE/ACM 19th International Conference on Mining Software Repositories (MSR)_ , p. 1, 2022. [Online]. Available: https://doi.org/10.48550/arXiv.2206.06083 * [8] R. G. Kula, D. M. German, A. Ouni, T. Ishio, and K. Inoue, “Do developers update their library dependencies?” _Empirical Software Engineering_ , vol. 23, no. 1, pp. 384–417, 2018. * [9] M. Lehman, “Programs, life cycles, and laws of software evolution,” _Proceedings of the IEEE_ , vol. 68, no. 9, pp. 1060–1076, 1980.
Across various scientific and engineering domains, a growing interest in flexible and deployable structures is becoming evident. These structures facilitate seamless transitions between distinct shape states and find broad application in areas ranging from robotics and solar cells to metamaterials and architecture. In this contribution, we study a class of mechanisms known as Kokotsakis polyhedra with a quadrangular base. These are $3\times3$ quadrilateral meshes whose faces are rigid bodies joined by hinges at the common edges. In contrast to prior work, the quadrilateral faces do not have to be planar. In general, such meshes are not flexible, and the problem of finding and classifying the flexible ones is old, but until now largely unsolved. It turns out that the tangent values of the dihedral angles between different faces are algebraically related through polynomials. Specifically, by fixing one angle as a parameter, the others can be parameterized algebraically and hence belong to an extension of the rational function field of the parameter. We use this approach to characterize shape restrictions resulting in flexible polyhedra. § INTRODUCTION Flexible geometric structures play a crucial role in various fields, including engineering, architecture, and material science, owing to their adaptability and versatility [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. These structures can deform and change shape in response to external forces, enabling them to navigate complex environments or absorb impacts. In engineering, flexible geometric structures are employed in robotics and machinery, enhancing the efficiency and agility of machines. In architecture, they offer innovative solutions for dynamic and responsive structures that can adapt to changing environmental conditions. Additionally, in material science, the development of flexible materials with geometric patterns opens up new possibilities for creating lightweight and durable materials with applications in aerospace, transportation, and beyond. The relevance of flexible geometric structures lies in their ability to provide adaptable solutions to diverse challenges, fostering advancements in technology and design. Flexible and deployable structures may contain non-rigid parts [11, 12, 13, 14, 15, 16, 17, 18]. In this paper, however, we deal only with mechanisms in the classical sense: they are formed by rigid bodies connected by hinges. Our interest is in flexible quadrilateral meshes, whose faces are rigid and joined by hinges at the common edges. It is well known that such a quadrilateral mesh is a mechanism if and only if any $3 \times 3$ sub-mesh is flexible. This has been shown by Schief et al. [19], who formulate the problem for planar faces, but planarity does not enter their proof. Therefore, the first key step in the determination of all flexible quadrilateral meshes is the classification of all flexible $3 \times 3$ quadrilateral meshes, which is the main contribution of the present paper. The problem of flexible polyhedra can be traced back to the last century. Mechanisms very similar to the ones studied in our paper were introduced by Kokotsakis [20]. His mechanisms possess a central planar, but not necessarily quadrilateral, face surrounded by a belt of planar faces. The work of Kokotsakis inspired further research on such structures [21], in particular with a focus on origami [22, 23, 24].
A breakthrough in this area was made by Izmestiev [25], who derived a complete classification of flexible $3 \times 3$ quadrilateral meshes with planar faces. It is, however, still unknown which of these types can be parts of larger flexible $n \times m$ quadrilateral meshes. Larger flexible quadrilateral meshes with planar faces have been studied by Schief et al. [19] as discrete integrable systems and counterparts to isometric deformations of conjugate nets on smooth surfaces. Their results are largely rooted in second-order infinitesimal flexibility. A wealth of results on first-order infinitesimal flexibility is found in the book by Sauer [26]. There, we also find a detailed study of special quadrilateral meshes with planar faces that are mechanisms. They come in two types. The first type, the so-called Voss meshes, consists of discrete counterparts to surfaces with a conjugate net of geodesics, first described by Voss [27] (see also [19]). The other type, the T-nets, first described by Graf and Sauer [28] (for a recent description in English see [29]), consists of discrete counterparts to an affine generalization of moulding surfaces. A very special case of T-nets is given by the famous Miura origami structures [30]. T-nets are also capable of forming tubular flexible structures and flat-foldable metamaterials [31]. The isometric deformations of T-nets and their smooth counterparts have recently been studied by Izmestiev et al. [32]. The first non-trivial example of a flexible Kokotsakis polyhedron with skew quadrilateral faces was constructed by Nawratil in 2022 [33]. The question of existence had already been posed by Sauer [26], but remained unsolved for a long time. In this paper, we systematically study Kokotsakis polyhedra with a quadrangular base and arbitrary, not necessarily planar faces. We refer to these geometric objects as skew-quadrilateral Kokotsakis meshes, but point out that planarity of faces is included as a special case. We utilize the classical approach proposed by Bricard [34] to reformulate the flexibility problem in Euclidean space as one on the sphere. We can then apply powerful methods from algebra to the polynomial system that arises after turning to new variables, namely tangents of adjacent dihedral half-angles. Algebraically, the condition for continuous flexion implies the existence of a one-parameter real-valued solution set of the polynomial system. §.§ Main approach and contributions Our main object is the mesh[In this article, 'mesh' always means a $3\times3$ quadrilateral mesh by default.] described in Fig. <ref> left, which contains nine quadrilaterals linked by hinges that allow the linked units to rotate. In general, the whole mesh is still rigid; our goal is to classify all flexible ones. During flexion, the values of the dihedral angles along the hinges change in mutual dependence. The motion can be described by zeros of polynomials; therefore, each mesh uniquely determines a polynomial ideal M=(\Tilde{g}^{(1)}(x_1,x_2), \Tilde{g}^{(2)}(x_2,x_3), \Tilde{g}^{(3)}(x_3,x_4), \Tilde{g}^{(4)}(x_4,x_1))\subset\cc[x_1,x_2,x_3,x_4] coming with a zero set Z(M):=\{(x_1,x_2,x_3,x_4)\in (\cp)^4: f(x_1,x_2,x_3,x_4)=0, \forall f \in M\}.\footnote{Note that $(\cp)^n\neq \mathbb{C} P^n$ in general. We consider projective varieties because the $x_i$ stand for tangents of angles.} The flexibility of the mesh corresponds to $Z(M)$ being infinite: a continuous trajectory of the motion.
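To make the algebraic reformulation concrete, the following minimal sympy sketch uses toy generators (bilinear, rather than the biquadratic polynomials an actual mesh produces); it detects an infinite $Z(M)$ by eliminating $x_2$ and $x_4$ and looking for a common factor of the two resultants, anticipating the criterion formalized later in the paper.

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1:5')

# Toy generators for illustration only; an actual mesh yields polynomials
# that are quadratic in each of their two variables.
g1, g2 = x1*x2 - 1, x2*x3 - 1
g3, g4 = x3*x4 - 1, x4*x1 - 1

r12 = sp.resultant(g1, g2, x2)   # relation between x1 and x3 via the chain g1, g2
r34 = sp.resultant(g3, g4, x4)   # relation between x1 and x3 via the chain g3, g4
print(r12, r34, sp.gcd(r12, r34))
# The two resultants agree up to sign (x3 - x1), so their gcd is non-constant:
# Z(M) contains the curve x3 = x1, x2 = x4 = 1/x1, i.e. this toy "mesh" flexes.
```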
Notice that $Z(M)$ can also be obtained by the fiber product \begin{equation}\label{fiber} Z(M)=Z(S_1)\times_{\{x_1, x_3\}} Z(S_2):=\{(x_i)_{i=1}^4: (x_1,x_2,x_3)\in Z(S_1), (x_1,x_4,x_3)\in Z(S_2)\} \end{equation} where the ideals $S_1=(\Tilde{g}^{(1)}, \Tilde{g}^{(2)})$ and $S_2=(\Tilde{g}^{(3)}, \Tilde{g}^{(4)})$. The classification of flexible meshes (i.e. of the ideal $M$) proceeds in three main steps: $\bullet$ Study the components of $Z(S_i)$ in the Zariski topology; $\bullet$ Classify $Z(S_i)$ according to its components; $\bullet$ Investigate how to choose $S_i$ such that the fiber product in equation (<ref>) is an infinite set, so that the classification of flexible meshes can be conducted on $M$. We only focus on non-singular meshes, in which $Z(S_i)$ has no singular component. That is, regarding $Z(S_i)$ as a union of irreducible curves, none of them has a constant entry; e.g. Z(x^2-xy)=Z(x)\cup Z(x-y) is singular since $Z(x)=\{(0,y): y\in \cc\}$ is singular. Non-singular meshes form the majority, and each polynomial $\Tilde{g}^{(i)}$ of a non-singular mesh has a very specific form: it is quadratic in both $x_i$ and $x_{i+1}$ and has no factors in $\cc[x_i]$ or $\cc[x_{i+1}]$. This means that $Z(S_1)$ is a finite collection of irreducible (1-dimensional) curves, and each curve can be locally parameterized by one of the three coordinates, say $x_1$, so that $x_2, x_3$ are algebraic functions of $x_1$ derived from $\Tilde{g}^{(1)}=\Tilde{g}^{(2)}=0$ and hence can be treated as algebraic elements of some field extension of the rational function field $K_i:=\cc(x_i)$. Given that the $\Tilde{g}^{(i)}$ are quadratic, the two-step extension \xymatrix{ K_1 \ar@{-}[r]^{\Tilde{g}^{(1)}} & K_1(x_2)\ar@{-}[r]^{\Tilde{g}^{(2)}} & K_1(x_2,x_3) & \text{or} & K_3 \ar@{-}[r]^{\Tilde{g}^{(2)}} & K_3(x_2)\ar@{-}[r]^{\Tilde{g}^{(1)}} & K_3(x_1,x_2) } only allows the following extension degrees [K_1(x_2):K_1]\in \{1,2\},\,\,[K_1(x_3):K_1]\in\{1,2,4\}, which leads to 6 situations: * purely-rational: if $[K_1(x_2):K_1]=[K_1(x_3):K_1]=1$, i.e. both $\Tilde{g}^{(i)}$ are reducible; * half-quadratic: if $[K_1(x_2):K_1]=1$ and $[K_1(x_3):K_1]=2$, or, $[K_3(x_2):K_3]=1$ and $[K_3(x_1):K_3]=2$, i.e. only $\Tilde{g}^{(1)}$ is reducible or only $\Tilde{g}^{(2)}$ is reducible; Further, when both $\Tilde{g}^{(1)}, \Tilde{g}^{(2)}$ are irreducible, * involutive-rational: if $[K_1(x_2):K_1]=2$ and $[K_1(x_3):K_1]=1$; * purely-quadratic: if $[K_1(x_2):K_1]=[K_1(x_3):K_1]=2$ and $K_1(x_2)=K_1(x_3)$; * involutive-quadratic: if $[K_1(x_2):K_1]=[K_1(x_3):K_1]=2$ but $K_1(x_2)\neq K_1(x_3)$; * quartic: if $[K_1(x_3):K_1]=4$. Meanwhile, each component of $Z(S_1)$, being an irreducible algebraic curve, is determined by its local behavior alone. The above information is therefore a global invariant of the component, and thus all components fall into one of 6 cases. \begin{equation}\label{c-table} \begin{tabular}{ |c|c|c|c|c|c| } \hline \thead{purely- \\ rational} & \thead{half- \\ quadratic} & \thead{involutive- \\ rational} & \thead{purely- \\ quadratic} & \thead{involutive- \\ quadratic} & \thead{quartic} \\ \hline \textbf{Case 1} & \textbf{Case 2} & \textbf{Case 3} & \textbf{Case 4} & \textbf{Case 5} & \textbf{Case 6}\\ \hline \end{tabular} \end{equation}
\begin{equation}\label{table} \begin{NiceTabular}{|c|c|c|c|c|c|c|c|}[hvlines] \Block{2-1}{$S_i$} & \Block{2-1}{\thead{purely- \\ rational}} & \Block{2-1}{\thead{half- \\ quadratic}} & \Block{1-3}{equimodular} & & & \Block{2-1}{\thead{involutive- \\ quadratic}} & \Block{2-1}{\thead{quartic}} \\ & & & \thead{involutive- \\ rational} & \thead{rational- \\ quadratic} & \thead{purely- \\ quadratic} & & \\ \thead{comp. \\ types} & \textbf{Case 1} & \textbf{Case 2} & \textbf{Case 3} & \textbf{Case 3}, \textbf{4} & \textbf{Case 4} & \textbf{Case 5} & \textbf{Case 6}\\ \end{NiceTabular} \end{equation} The term 'equimodular' was introduced by Izmestiev in [25]; we adopt it and generalize it to non-planar meshes. It covers three of our classes. However, the classification of the mesh (or of $M$) is not obtained by simply considering all combinations from table (<ref>). This is because the fiber product of $Z(S_1)$ and $Z(S_2)$ is simply the union of the fiber products of their components, and for $Z(M)$ to be an infinite set at least one of these products must be infinite; the finite products are of no interest to us. So the classification of the flexible mesh depends on the components $W_i$ of $Z(S_i)$ that we can choose to make $W_1\times_{\{x_1, x_3\}} W_2$ an infinite set, e.g. Z(x^2-y^2)\times_{\{x,y\}}Z(x^2-3xy+2y^2)=(Z(x-y)\cup Z(x+y))\times_{\{x,y\}}(Z(x-y)\cup Z(x-2y)) is infinite only because $Z(x-y)\times_{\{x,y\}}Z(x-y)$ is infinite. In a nutshell, all flexible meshes are classified by the 'contributing' components of $Z(S_i)$. Every $3\times3$ non-singular flexible mesh must belong to one of the following classes: \begin{equation}\label{classes} \textbf{simple classes}\left\{\begin{array}{l} \textbf{PR}: \text{purely-rational;} \\ \textbf{HQ}: \text{half-quadratic;} \\ \textbf{IR}: \text{involutive-rational;} \\ \textbf{RQ}: \text{rational-quadratic;} \\ \textbf{PQ}: \text{purely-quadratic;} \\ \textbf{IQ}: \text{involutive-quadratic;} \\ \textbf{Q}: \text{quartic;} \\ \end{array}\right. \textbf{hybrid classes}\left\{\begin{array}{l} \textbf{PR + IR};\\ \textbf{HQ + IQ};\\ \textbf{HQ + PQ};\\ \textbf{PQ + IQ}.\\ \end{array}\right. \end{equation} By simple classes we mean that $W_1, W_2$ belong to the same case of table (<ref>), and by hybrid classes we mean the opposite. To avoid misunderstanding, we will use abbreviations such as PR, PQ + IQ, etc. only for meshes. §.§ Organization of the paper In Section <ref> we introduce non-singular couplings. Roughly speaking, a coupling is just a polynomial ideal $S_1=(\Tilde{g}^{(1)}(x_1,x_2), \Tilde{g}^{(2)}(x_2,x_3))$ and the building block of a mesh. We show how to convert a mesh to an ideal $M\subset \cc[x_1,x_2,x_3,x_4]$ such that M=(\Tilde{g}^{(1)}(x_1,x_2), \Tilde{g}^{(2)}(x_2,x_3))+(\Tilde{g}^{(3)}(x_3,x_4), \Tilde{g}^{(4)}(x_4,x_1)) is the sum of two couplings. To study non-singular couplings, we propose a new method based on the decomposition of the zero set $Z(S_1)$. In Section <ref>, the components are classified into 6 cases. We also provide a construction theorem for the simplest class of flexible meshes as a tutorial for our new classification method. We further split the non-singular couplings into two types, which are studied in Sections <ref> and <ref>, respectively. This will explain how $S_i$ can be classified by table (<ref>). In Section <ref>, we use the previous results to factorize the resultant $\text{Res}(\Tilde{g}^{(1)},\Tilde{g}^{(2)}; x_2)$ in preparation for the classification of flexible meshes.
Finally, in Section <ref>, we consider the construction of flexible meshes. Starting from a given half $S_1=(\Tilde{g}^{(1)}, \Tilde{g}^{(2)})$, we find restrictions on the other half $S_2=(\Tilde{g}^{(3)}, \Tilde{g}^{(4)})$ in order to form a flexible mesh, i.e. to ensure that $M=S_1+S_2$ has an infinite zero set $Z(M)$. This explains why Theorem <ref> covers all possibilities. We defer most of the proofs to Appendix <ref> so that readers have quick access to the theory; the remaining proofs are important for understanding the main ideas. One can also download a Maple™ script[<https://drive.google.com/file/d/10yn_3bVrJNWD-RsXa-ltJqJvaDwgghD5/view?usp=sharing>] to verify the calculations in the proofs. Left: Sketch of a $3\times3$ quadrilateral mesh, or equivalently, a Kokotsakis polyhedron with a quadrangular base. The flexibility depends only on the smaller area marked by dashed lines; Right: A flexible mesh with fixed angles $\lambda_i', \gamma_i', \mu_i', \delta_i'$, which are well-defined via vector products and the $\arccos$ function. The flexible angles $\alpha_i'$ and their complements (not shown) $\alpha_i=\pi-\alpha_i'$ are dihedral angles between the central planar panel and the surrounding planar panels; they are rigorously defined in Appendix <ref>. § BASIC SETUP §.§ From $\rr^3$ to sphere In this subsection, we recall the classical approach, which has been used extensively in the literature. §.§.§ Bricard equation A non-planar flexible mesh (left) and its decomposition (middle and right). $(\lambda_i', \gamma_i', \mu_i', \delta_i')$ are angles between corresponding oriented edges; $\tau_i, \zeta_i$ are fixed dihedral angles along hinges of the central tetrahedron and the attached tetrahedron, respectively; $\alpha_i', \beta_i'$ are flexible dihedral angles (in the planar case $\alpha_{i+1}'=\beta_i'$). All angles can be defined in a similar way (see Appendix <ref>).
Left: Partial and whole spherical linkage of the mesh in Fig. <ref>; $(\lambda_i, \gamma_i, \mu_i, \delta_i,\alpha_i,\beta_i)$ and $(\lambda_i', \gamma_i', \mu_i', \delta_i',\alpha_i',\beta_i')$ are complementary to $\pi$ respectively; the gap between $\beta_1$ and $\alpha_2$ is caused by $\tau_1$ and $\zeta_1$. From Fig. <ref> left, it is obvious that the flexibility of a mesh is determined only by the structure of a neighborhood of the central quadrilateral. So, without loss of generality, we can trim the mesh such that its corner panels are just triangles. Fig. <ref> right is an illustration of a flexible mesh with 9 planar quadrilaterals, where $(\lambda_i', \gamma_i', \mu_i', \delta_i')$ are fixed angles determined by the rigid quadrilaterals and $\alpha_i'$ are flexible (dihedral) angles between the quadrilaterals while the mesh is deforming. Similarly, if the mesh contains 9 non-planar quadrilaterals, each quadrilateral can be treated as a rigid tetrahedron (consider a diagonally folded quadrilateral vs. a tetrahedron with a missing edge). Hence Fig. <ref> right can be generalized to Fig. <ref> left with a $3\times 3$ tetrahedron mesh. The classical approach to analyzing the flexibility is to consider a spherical quadrilateral at each vertex of the central face, defined by the oriented edges: in Fig. <ref>, collecting all vectors at the origin, their directions intersect the unit sphere and determine the spherical linkage of the mesh (see Fig. <ref> right), which contains 4 spherical quads.[To distinguish them from planar quadrilaterals, we call spherical quadrilaterals simply quads.] Every 'edge' of each quad is an arc of a great circle. In particular, each quad of the linkage, say $(\lambda_1, \gamma_1, \mu_1, \delta_1)$, is associated with the corner surrounded by $(\lambda_1', \gamma_1', \mu_1', \delta_1')$ in Fig. <ref>.
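In coordinates, the arcs of such a quad are simply angles between unit edge directions, computable with dot products and $\arccos$. A minimal numpy sketch with hypothetical direction vectors (the pairing of arcs with the labels $\lambda,\gamma,\mu,\delta$ is illustrative only; in practice the directions come from the mesh geometry):

```python
import numpy as np

def arc(u, v):
    """Arc length between two unit directions = the angle between them."""
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

# Hypothetical unit directions of the four oriented edges meeting at one
# vertex of the central face.
dirs = [np.array(d, float) for d in
        ([1, 0, 0], [0.2, 1.0, 0.1], [-1.0, 0.3, 0.2], [0.1, -1.0, 0.3])]
dirs = [d / np.linalg.norm(d) for d in dirs]

# The four consecutive arc lengths of the resulting spherical quad.
arcs = [arc(dirs[i], dirs[(i + 1) % 4]) for i in range(4)]
print(arcs)
```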
It is easy to see that $(\lambda_i, \gamma_i, \mu_i, \delta_i)$ and $(\lambda_i', \gamma_i', \mu_i', \delta_i')$ are complementary to $\pi$ respectively. The angles $\tau_i,\zeta_i$ are (interior) dihedral angles in the corresponding tetrahedra. All these angles will be rigorously defined in Appendix <ref>. In summary, passing from the mesh to its spherical linkage, angles between edges become arcs of the quads, angles between planes become angles between arcs, and flexible hinges between adjacent quadrilaterals become flexible joints between adjacent arcs. Hence the flexibility of the mesh is equivalent to the flexibility of the spherical linkage with the deformation rules: $\bullet$ All arc lengths of each quad are fixed; $\bullet$ All angles between $\lambda_i$ and $\lambda_{i+1}$ are fixed; $\bullet$ All angles between $\delta_i$ and $\gamma_{i+1}$ are fixed. According to Fig. <ref> left, by setting x_i=\tan\left(\frac{\alpha_i}{2}\right),\,\, y_i=\tan\left(\frac{\beta_i}{2}\right),\,\,F_i=\tan\left(\frac{\zeta_{i}+\tau_{i}}{2}\right) we have an invertible relation between $y_i$ and $x_{i+1}$: $y_i=\frac{F_i+x_{i+1}}{1-F_ix_{i+1}}$, or equivalently $H^{(i)}(y_i,x_{i+1}):=y_i(1-F_ix_{i+1})-(F_i+x_{i+1})=0$, where $i=1, 2, 3, 4$ and $x_5=x_1$. Since we are dealing with angles, it may occur that $\zeta_{i}+\tau_{i}=\pi$ and hence $y_i=-\frac{1}{x_{i+1}}$. In this case, we regard $F_i=\infty$ and set H^{(i)}(y_i,x_{i+1})=y_i x_{i+1} + 1=0. When $\lambda_i, \gamma_i, \mu_i, \delta_i\in(0,\pi)$, the arc lengths and angles of the quad in Fig. <ref> left satisfy a polynomial equation during its deformation. That is, when the quad changes its shape while all arcs remain on the sphere and keep their lengths, the values of the interior angles $\alpha_i, \beta_i$ must obey the so-called Bricard equation [34] (a proof can also be found in [21]). \begin{equation}\label{bricard} \left\{\begin{array}{l} A_ix_i^2 y_i^2 + B_i x_i^2 + C_iy_i^2 + D_ix_iy_i + E_i = 0, \text{ or} \\ g^{(i)}(x_i,y_i)=a_ix_i^2 y_i^2 + b_i x_i^2 + c_i y_i^2 + x_iy_i + e_i=0 \\ \end{array}\right. \text{ where } \left\{\begin{array}{l} A_i=\cos(\lambda_i+ \gamma_i + \delta_i ) - \cos(\mu_i), \\ B_i=\cos(\lambda_i+ \gamma_i - \delta_i ) - \cos(\mu_i), \\ C_i=\cos(\lambda_i- \gamma_i + \delta_i ) - \cos(\mu_i), \\ D_i=4\sin(\gamma_i)\sin(\delta_i), \\ E_i=\cos(\lambda_i- \gamma_i - \delta_i ) - \cos(\mu_i), \\ (a_i, b_i, c_i, e_i)=\left(\frac{A_i}{D_i},\frac{B_i}{D_i},\frac{C_i}{D_i},\frac{E_i}{D_i}\right).\\ \end{array}\right. \end{equation} Now all flexible angles $\alpha_i, \beta_i$ (also known as $x_i,y_i$) are related by eight polynomial equations $\{H^{(i)}=g^{(i)}=0\}_{i=1}^4$. However, $y_i$ becomes redundant after the elimination \Tilde{g}^{(i)}(x_i,x_{i+1})=\text{Res}(g^{(i)},H^{(i)}; y_i)=0. The coefficients $(a_i, b_i, c_i, e_i)$ can be mapped back to the angles. Since b_i+c_i-a_i-e_i=\cos(\lambda_i),\,\,b_i-c_i-a_i+e_i=\sin(\lambda_i) \cot(\gamma_i),\,\,c_i+e_i-a_i-b_i=\sin(\lambda_i) \cot(\delta_i), we can recover $\lambda_i, \gamma_i, \delta_i$ immediately, and $\mu_i$ can be recovered from the definition of $A_i$. In general, there is no guarantee that the angles exist for arbitrary coefficients. Thus, we will develop our theory disregarding realness and embeddability into $\rr^3$. Before presenting a physical model of a non-planar flexible mesh, we sketch the elimination step computationally.
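A minimal sympy sketch with generic coefficients (the variable names are ours; $X$ stands for $x_{i+1}$):

```python
import sympy as sp

x, y, X = sp.symbols('x y X')        # stand-ins for x_i, y_i, x_{i+1}
a, b, c, e, F = sp.symbols('a b c e F')

g = a*x**2*y**2 + b*x**2 + c*y**2 + x*y + e   # Bricard polynomial g^(i)
H = y*(1 - F*X) - (F + X)                     # hinge relation H^(i), finite F_i

g_tilde = sp.expand(sp.resultant(g, H, y))    # eliminate y_i
print(sp.degree(g_tilde, x), sp.degree(g_tilde, X))   # 2 2: biquadratic in x_i, x_{i+1}
```

The same computation with H = y*X + 1 covers the case $F_i=\infty$.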
Consider the following system of coefficients \left\{\begin{array}{l} (a_1, b_1, c_1, e_1,F_1)=\left(\frac{3}{2},1,n_0,-\frac{2}{3},0\right), \\ (a_2, b_2, c_2, e_2,F_2)=\left(\frac{3}{2},n_0,1,-\frac{2}{3},0\right), \\ (a_3, b_3, c_3, e_3,F_3)=\left(\frac{1}{2},-\frac{1}{2},\frac{(n_0+1)^2}{3},-\frac{(n_0+1)^2}{3},1\right), \\ (a_4, b_4, c_4, e_4,F_4)=\left(\frac{3}{8n_0^2+12n_0+12},\frac{(n_0+1)^2}{4n_0^2+6n_0+6},-\frac{3}{8n_0^2+12n_0+12},-\frac{(n_0+1)^2}{4n_0^2+6n_0+6},0\right), \\ \end{array}\right. where $n_0$ is the only root of $(n+1)^4+n=0$ lying in $(-1,0)$. A numerical solution for the angles could be \left\{\begin{array}{l} (\lambda_1, \gamma_1, \mu_1, \delta_1,\tau_1,\zeta_1)=(1.679854, 2.301666, 1.973198, 2.860453, 1.558808, -1.558808), \\ (\lambda_2, \gamma_2, \mu_2, \delta_2,\tau_2,\zeta_2)=(1.679854, 2.860453, 1.973198, 2.301666, 0.694319, -0.694319), \\ (\lambda_3, \gamma_3, \mu_3, \delta_3,\tau_3,\zeta_3)=(2.278478, 2.628901, 1.570796, 1.570796, 1.164528, 0.406268), \\ (\lambda_4, \gamma_4, \mu_4, \delta_4,\tau_4,\zeta_4)=(2.003527, 1.570796, 1.570796, 2.335389, 0.907881, -0.907881), \\ \end{array}\right. where the values of $\mu_3,\delta_3,(\tau_3+\zeta_3),\gamma_4,\mu_4$ are actually $\frac{\pi}{2}$. A visualization is given in Fig. <ref> or through the link.[<https://www.geogebra.org/calculator/xaurstfu>] [Please note that a numerical model is not exactly flexible: we had to set an extra free parameter $\zeta_3$ in the simulation; however, its value is almost fixed during the animation.] One can verify later that the above mesh is indeed flexible (Theorem <ref>) and belongs to our most complicated PQ + IQ class (Section <ref>); a numerical closure check is sketched below. A flexible mesh with central panel $V_1V_2V_3V_4$ and attached panels $V_iY_iX_{i+1}V_{i+1}$. Since the corner panels are not uniquely determined, one can always choose a properly folded corner panel, containing the polyline $X_iV_iY_i$, to avoid self-intersection. This image is generated by rotating $X_1$ along the arc $\rho$; dashed lines mean the corresponding panel lies beneath the central one. §.§.§ Shape restrictions of the quads Several types of spherical quads play an important role in flexibility problems. Here we follow the definitions from [25]. Quad $Q_i$ with arc lengths $(\lambda_i, \delta_i, \mu_i,\gamma_i)$ is called (anti)isogonal or an (anti)isogram if it satisfies one of the corresponding conditions: (i) $\lambda_i= \mu_i, \gamma_i=\delta_i$ (isogram); (ii) $\lambda_i+\mu_i=\gamma_i+\delta_i=\pi$ (antiisogram). Geometrically speaking, an isogram is a quad whose opposite arc lengths are equal, and an antiisogram is a quad whose opposite arc lengths are complementary to $\pi$. When similar conditions hold for adjacent arc lengths, we have the following: Quad $Q_i$ with arc lengths $(\lambda_i, \delta_i, \mu_i,\gamma_i)$ is called an (anti)deltoid if it satisfies one of the corresponding conditions: (iii) $\lambda_i= \gamma_i, \mu_i=\delta_i$ (deltoid); (iv) $\lambda_i+\gamma_i=\mu_i+\delta_i=\pi$ (antideltoid); (v) $\lambda_i= \delta_i, \gamma_i=\mu_i$ (deltoid); (vi) $\lambda_i+\delta_i=\gamma_i+\mu_i=\pi$ (antideltoid). Obviously, a single quad $Q_i$ is in general flexible: its shape can change continuously while all arc lengths $\lambda_i, \gamma_i, \mu_i, \delta_i$ stay fixed. Here we do not consider the case where $0$ or $\pi$ occurs as one of the lengths, in which case the corresponding quadrilateral in Fig. <ref> degenerates into a triangle.
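Returning to the numerical model above, its flexibility can be probed by chain-solving the four quadratics: for each sampled $x_1$, some candidate for $x_3$ obtained through $g^{(1)}, g^{(2)}$ should coincide with a candidate obtained through $g^{(4)}, g^{(3)}$. A minimal numpy sketch (our own code, not part of the referenced tooling; complex roots are allowed, in line with disregarding realness, and the printed residual should be close to zero if the coefficients are transcribed correctly):

```python
import numpy as np

# n0: the root of (n+1)^4 + n = n^4 + 4n^3 + 6n^2 + 5n + 1 = 0 inside (-1, 0)
n0 = next(r.real for r in np.roots([1, 4, 6, 5, 1])
          if abs(r.imag) < 1e-9 and -1 < r.real < 0)

q1 = (1.5, 1.0, n0, -2/3)                        # (a_i, b_i, c_i, e_i); F_1 = 0
q2 = (1.5, n0, 1.0, -2/3)                        # F_2 = 0
q3 = (0.5, -0.5, (n0 + 1)**2/3, -(n0 + 1)**2/3)  # F_3 = 1
d1, d2 = 8*n0**2 + 12*n0 + 12, 4*n0**2 + 6*n0 + 6
q4 = (3/d1, (n0 + 1)**2/d2, -3/d1, -(n0 + 1)**2/d2)   # F_4 = 0

def solve_y(q, x):   # roots y of g(x, y) = a x^2 y^2 + b x^2 + c y^2 + x y + e
    a, b, c, e = q
    return np.roots([a*x**2 + c, x, b*x**2 + e])

def solve_x(q, y):   # roots x of the same g, viewed as a quadratic in x
    a, b, c, e = q
    return np.roots([a*y**2 + b, y, c*y**2 + e])

gaps = []
for x1 in np.linspace(-1.45, 1.45, 30):
    # forward chain: x1 -> x2 = y1 (F_1 = 0) -> x3 = y2 (F_2 = 0)
    fwd = [x3 for x2 in solve_y(q1, x1) for x3 in solve_y(q2, x2)]
    # backward chain: y4 = x1 (F_4 = 0) -> x4 -> y3 = (1+x4)/(1-x4) (F_3 = 1) -> x3
    bwd = [x3 for x4 in solve_x(q4, x1)
              for x3 in solve_x(q3, (1 + x4) / (1 - x4))]
    gaps.append(min(abs(u - v) for u in fwd for v in bwd))
print(max(gaps))   # a residual near zero on every sample indicates a 1-parameter motion
```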
Excluding such degenerate cases, it is reasonable to assume: $\bullet$ General assumption: $\boxed{\lambda_i, \delta_i, \mu_i,\gamma_i \in (0,\pi), \forall i=1,2,3,4}$ since these lengths are defined by the $\arccos$ function; the Bricard equation (<ref>) is then applicable to all spherical linkages. §.§ From sphere to polynomial ideals We have seen that each quad uniquely determines a polynomial (<ref>) which reflects the pattern of its deformation on the sphere. We want to describe the constraints on $(\lambda_i, \delta_i, \mu_i,\gamma_i)$ in terms of $(a_i, b_i, c_i, e_i)$, since we are not interested in polynomials with arbitrary coefficients but in those with a geometric background. §.§.§ Couplings and matchings Since all of $\lambda_i, \gamma_i, \mu_i, \delta_i$ lie in $(0,\pi)$, we have \begin{equation}\label{n0d} \sin(\lambda_i)\sin(\delta_i)\sin(\gamma_i)\sin(\mu_i)> 0. \end{equation} A simple calculation shows the following. The conditions for (anti)isograms and (anti)deltoids are equivalent to: (i) $\Leftrightarrow b_i=c_i=0$; (ii) $\Leftrightarrow a_i=e_i=0$; (iii) $\Leftrightarrow c_i=e_i=0$; (iv) $\Leftrightarrow a_i=b_i=0$; (v) $\Leftrightarrow b_i=e_i=0$; (vi) $\Leftrightarrow a_i=c_i=0$. Clearly, a given mesh uniquely determines an ideal $M=(\Tilde{g}^{(1)},\Tilde{g}^{(2)},\Tilde{g}^{(3)},\Tilde{g}^{(4)})$ and, when the mesh deforms, the 'trajectory' lies in $Z(M)\subset (\cp)^4$. Now we are ready to convert all geometric concepts to their algebraic versions. The following notations are the most important ones; they are defined here and keep their meanings throughout the article. $g^{(i)}\in \rr[x_i,y_i]$ is a polynomial of the form g^{(i)}(x_i,y_i):=a_ix_i^2 y_i^2 + b_i x_i^2 + c_i y_i^2 + x_iy_i + e_i where the coefficients satisfy[Recall inequality (<ref>): $(1-4a_ie_i-4b_ic_i)^2> 64a_ib_ic_ie_i \Leftrightarrow (\sin(\lambda_i)\sin(\delta_i)\sin(\gamma_i)\sin(\mu_i))^2>0$, while $(a_i-b_i)(e_i-c_i)< \frac{1}{4}$ and $(a_i-c_i)(e_i-b_i)< \frac{1}{4}$ both correspond to $(\sin(\lambda_i))^2 > 0$.] (1-4a_ie_i-4b_ic_i)^2> 64a_ib_ic_ie_i,\,\, (a_i-b_i)(e_i-c_i)< \frac{1}{4},\,\, (a_i-c_i)(e_i-b_i)< \frac{1}{4}. $H^{(i)}\in \rr[y_i,x_{i+1}]$ is a polynomial determined by a constant $F_i\in \rp$ such that H^{(i)}(y_i,x_{i+1}):=\left\{\begin{array}{l} y_i(1-F_ix_{i+1})-(F_i+x_{i+1}) \text{ if } F_i\in \rr, \\ y_ix_{i+1}+1 \text{ if } F_i=\infty. \\ \end{array}\right. Finally, $\Tilde{g}^{(i)}\in \rr[x_i,x_{i+1}]$ and $R_i\in \rr[x_i,y_{i+1}]$ are the polynomials \Tilde{g}^{(i)}(x_i,x_{i+1}):=\text{Res}(g^{(i)},H^{(i)}; y_i),\,\, R_i(x_i,y_{i+1}):=\text{Res}(\Tilde{g}^{(i)},g^{(i+1)}; x_{i+1}) where $\text{Res}(\cdot,\cdot;\ast)$ stands for the resultant with respect to $'\ast'$. The index is always considered modulo 4, e.g. $x_5=x_1$, $H^{(8)}=H^{(4)}\in\rr[y_4,x_1]$. An ideal of the form S=(\Tilde{g}^{(i)}, g^{(i+1)}, H^{(i)}, H^{(i+1)}) \subset \cc[x_i,x_{i+1},x_{i+2},y_i,y_{i+1}] is called a coupling. For a coupling $S$, the irreducible components of $Z(S)$ in the Zariski topology are called the components of $Z(S)$ (or of $S$); the components are unique up to permutation. An ideal of the form M=(\Tilde{g}^{(1)}(x_1,x_2), \Tilde{g}^{(2)}(x_2,x_3), \Tilde{g}^{(3)}(x_3,x_4), \Tilde{g}^{(4)}(x_4,x_1))\subset \cc[x_1,x_2,x_3,x_4] is called a matching if the zero set Z(M):=\{(x_1,x_2,x_3,x_4)\in (\cp)^4: f(x_1,x_2,x_3,x_4)=0, \forall f \in M\} is an infinite set.[We apply the notation $Z(\cdot)$ to various ideals; the ambient dimension depends on the number of variables of the ideal, e.g.
$Z(S)\subset (\cp)^5$, $Z(g^{(i)})\subset (\cp)^2$, etc.] Through the Bricard equation (<ref>), every mesh uniquely determines an ideal of the same form as a matching; a matching, however, represents not just a mesh but a flexible one. Geometrically speaking, a coupling $S=(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$ is a pair of quads $(Q_1, F_1, Q_2, F_2)$ with two constant gaps $(\zeta_{i}+\tau_{i})=2\arctan(F_i)$ inserted at the angles opposite to $\beta_i$ for $i=1,2$ respectively, as shown in Fig. <ref> left; such a pair is always flexible, i.e. $Z(S)$ is an infinite set. A matching is a proper pair of couplings $(Q_1,F_1,Q_2,F_2)$ and $(Q_3,F_3,Q_4,F_4)$ such that the spherical linkage $(Q_i,F_i)_{i=1}^{4}$ (Fig. <ref> right) is flexible, i.e. $Z(\Tilde{g}^{(1)}, \Tilde{g}^{(2)}, \Tilde{g}^{(3)}, \Tilde{g}^{(4)})$ is an infinite set. §.§.§ Non-singular properties $g^{(i)}$ (or $\Tilde{g}^{(i)}$) is said to be singular if $a_ie_i=b_ic_i=0$, i.e. both products vanish; it is non-singular otherwise. Similarly, a coupling (or matching) is said to be singular if it contains a singular $g^{(i)}$ (or $\Tilde{g}^{(i)}$); it is non-singular otherwise. In this article, we are only interested in non-singular couplings and matchings. It is not hard to check that when $g^{(i)}$ is non-singular, it is quadratic in both $x_i, y_i$ and has no factors in $\cc[x_i]$ or $\cc[y_i]$. In addition, we have the following. A non-singular polynomial $g^{(i)}$ is reducible if and only if $a_i=e_i=0$ or $b_i=c_i=0$. Moreover, when $g^{(i)}$ is reducible, the factorization must be of the form \left\{\begin{array}{l} c_i(kx_i-y_i)(k'x_i-y_i) \text{ if } a_i=e_i=0, \\ a_i(x_iy_i-k)(x_iy_i-k') \text{ if } b_i=c_i=0, \\ \end{array}\right. where $kk'\neq 0$. Combining Lemmas <ref> and <ref>, we immediately obtain the following. Given a quad $Q_i=(\lambda_i, \gamma_i, \mu_i, \delta_i)$ which determines a polynomial $g^{(i)}$ through equation (<ref>), $Q_i$ is an (anti)deltoid if and only if $g^{(i)}$ is singular. Further, supposing $g^{(i)}$ is non-singular, $g^{(i)}$ is reducible if and only if $Q_i$ is (anti)isogonal if and only if $\{a_i,b_i,c_i,e_i\}$ contains more than one zero. A non-singular polynomial $\Tilde{g}^{(i)}$ shares the same reducibility with $g^{(i)}$. In particular, • $\Tilde{g}^{(i)}$ is quadratic in both $x_i, x_{i+1}$ and has no factors in $\cc[x_i]$ or $\cc[x_{i+1}]$; • $f^{(i)}$ is a factor of $g^{(i)}$ if and only if $\Tilde{f}^{(i)}=\text{Res}(f^{(i)},H^{(i)}; y_i)$ is a factor of $\Tilde{g}^{(i)}$. Since $H^{(i)}=0$ provides a rational relation between $y_i$ and $x_{i+1}$, one should note that a factorization of $g^{(i)}$ induces a factorization of $\Tilde{g}^{(i)}$ and vice versa. In fact, it is easy to check that $\exists c\neq 0$ such that $g^{(i)}=c \cdot\text{Res}(\Tilde{g}^{(i)},H^{(i)}; x_{i+1})$. Given $\Tilde{g}^{(i)}, g^{(i+1)}$ non-singular with at least one of them reducible, for any irreducible factors $\Tilde{f}^{(i)},f^{(i+1)}$ of $\Tilde{g}^{(i)}, g^{(i+1)}$ respectively, $\text{Res}(\Tilde{f}^{(i)},f^{(i+1)};x_{i+1})$ is irreducible. Given $\Tilde{g}^{(i)}, g^{(i+1)}$ non-singular, $R_i(x_i,y_{i+1})$ has no factor in $\cc[x_i]$ or $\cc[y_{i+1}]$. A non-singular coupling $S$ has the following equivalent representations: S=(\Tilde{g}^{(i)}, g^{(i+1)}, H^{(i)}, H^{(i+1)})=(g^{(i)}, g^{(i+1)}, H^{(i)}, H^{(i+1)})=(\Tilde{g}^{(i)}, \Tilde{g}^{(i+1)}, H^{(i)}, H^{(i+1)}).
The components have the following equivalent representations: $\bullet$ If at least one of $\Tilde{g}^{(i)}, g^{(i+1)}$ is reducible, \begin{equation}\label{rr} Z(S)=\bigcup_{j,k}W_{jk}=\bigcup_{j,k}Z(\Tilde{f}^{(i)}_j, f^{(i+1)}_k, H^{(i)}, H^{(i+1)})=\bigcup_{j,k}Z(\Tilde{f}^{(i)}_j, f^{(i+1)}_k, r_{jk}, H^{(i)}, H^{(i+1)}) \end{equation} where $\Tilde{f}^{(i)}_j, f^{(i+1)}_k$ range over all distinct irreducible factors of $\Tilde{g}^{(i)}, g^{(i+1)}$ respectively, and $r_{jk}:=\text{Res}(\Tilde{f}^{(i)}_j,$ $f^{(i+1)}_k;x_{i+1})$ is irreducible with $r_{jk} | R_i(x_i,y_{i+1})$; $\bullet$ If none of $\Tilde{g}^{(i)}, g^{(i+1)}$ is reducible, \begin{equation}\label{irr} Z(S)=\bigcup_{k}W_{k}=\bigcup_{k}Z(\Tilde{g}^{(i)}, g^{(i+1)}, r_k, H^{(i)}, H^{(i+1)})=\bigcup_{k}Z(g^{(i)}, g^{(i+1)}, r_k, H^{(i)}, H^{(i+1)}) \end{equation} where $r_k$ ranges over all distinct irreducible factors of $R_i(x_i,y_{i+1})$ (notice that $R_i\in S$). Moreover, a matching $M=(\Tilde{g}^{(1)}, \Tilde{g}^{(2)}, \Tilde{g}^{(3)}, \Tilde{g}^{(4)})$ can be regarded as the sum of two couplings since $Z(S)\cong Z(\Tilde{g}^{(i)}, \Tilde{g}^{(i+1)})$. The proof of Proposition <ref> shows that $(g^{(i)},H^{(i)})$ and $(\Tilde{g}^{(i)},H^{(i)})$ are identical ideals. Moreover, it is easy to check that $Z(g^{(i)})\cong Z(g^{(i)}, H^{(i)})=Z(\Tilde{g}^{(i)}, H^{(i)})\cong Z(\Tilde{g}^{(i)})$, hence a matching can be regarded as the sum of two couplings. As for the components, suppose at least one of $\Tilde{g}^{(i)}, g^{(i+1)}$ is reducible. In equation (<ref>) we have $W_{jk}\cong Z(\Tilde{f}^{(i)}_j, f^{(i+1)}_k, r_{jk})\subset (\cp)^3$, the projections of which in $(\cp)^2$ are the irreducible curves $\Tilde{f}^{(i)}_j=0, f^{(i+1)}_k=0, r_{jk}=0$ (see Corollary <ref>); thus $W_{jk}$ is an (irreducible) component. Finally, notice that $\forall x_1, y_2 \in \cc$, (r_{jk}=0) \Rightarrow (\exists x_2, \Tilde{f}^{(i)}_j=f^{(i+1)}_k=0) \Rightarrow (\exists x_2, \Tilde{g}^{(i)}=g^{(i+1)}=0) \Rightarrow (R_i=0). Due to irreducibility, $r_{jk}\,|\,R_i$ must hold. The same logic applies to $W_k$ of equation (<ref>). The equivalent representations make it more convenient to analyze the differential conditions on $Z(S)$. Given a non-singular coupling $S=(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$, for every component $W$ of $Z(S)$ there is a finite set $W_0$ such that $\forall p \in W-W_0$, in a neighborhood of $p$, $W$ is a curve that is locally a function of any one of the coordinates, i.e. $W\subset (\cp)^5$ is 1-dimensional and can be locally parameterized by any variable of $\{x_1,x_2,x_3,y_1,y_2\}$. §.§ Main problem (reformulated) It is worthwhile to restate the main problem of this article in terms of our new definitions. The ultimate goal is to classify all flexible meshes without (anti)deltoids, known as non-singular matchings. Since every matching is just two properly paired couplings, we first need to study the couplings. One will see in Section <ref> that for any non-singular coupling, every component of its zero set must lie in one of 6 cases. In Sections <ref> and <ref>, we will demonstrate that all non-singular couplings, according to their components, fall into 7 classes. Along the way, examples are given to help the reader understand how to construct a matching from two couplings. Later, in Section <ref>, we will describe the classification of non-singular couplings in a different but equivalent way, to serve the construction of non-singular matchings.
Finally, in Section <ref>, we will derive further restrictions on two given couplings so that they can form a matching. This completes the classification of flexible meshes. We remind the reader that we admit the flexibility of a mesh even if the corresponding matching has an infinite zero set in $\cc$ but not necessarily in $\rr$. § CLASSIFICATION OF THE COMPONENTS AND COUPLINGS §.§ Extension diagrams Here we introduce a new algebraic perspective for studying the components of non-singular couplings. According to Proposition <ref>, each component $W$ of the coupling $(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$ is obtained by solving a system of irreducible polynomials \begin{equation}\label{compo} W=\{(x_1,y_1,x_2,y_2,x_3)\in (\cp)^5 : \Tilde{f}^{(1)}=f^{(2)}=r=H^{(1)}=H^{(2)}=0\} \end{equation} where $\Tilde{f}^{(1)},f^{(2)}, r$ are irreducible factors of $\Tilde{g}^{(1)}, g^{(2)}, R_1$ respectively. If we fix $x_1$ as a parameter of system (<ref>) and adopt the notations from Proposition <ref>, then locally on $W-W_0$ the remaining variables are functions of $x_1$ and hence can be regarded as algebraic elements of field extensions of $K_1:=\cc(x_1)$. Similarly, if we switch the parameter to $x_2$, then (locally on $W-W_0$) $x_1$ and $y_2$ can be regarded as algebraic elements over $K_2$. A formal definition goes as follows. Given a non-singular coupling $(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$ and its component in equation (<ref>), at some $p\in W$ there exist locally defined algebraic functions $y_i=Y_{i,p}(x_1), x_{i+1}=X_{i+1,p}(x_1)$ for $i=1,2$ such that \Tilde{f}^{(1)}(x_1,X_{2,p})=f^{(2)}(X_{2,p},Y_{2,p})=r(x_1,Y_{2,p})=H^{(1)}(Y_{1,p},X_{2,p})=H^{(2)}(Y_{2,p},X_{3,p})\equiv0. The extension diagram of $W$ with respect to $x_1$ at $p$ is given by \scalebox{0.75}{\xymatrix{ & K_1(X_{2,p},Y_{2,p}) & \\ K_1(Y_{1,p})=K_1(X_{2,p}) \ar@{-}[ur]^{f^{(2)}} & & K_1(Y_{2,p})=K_1(X_{3,p})\ar@{-}[ul]\\ & K_1\ar@{-}[ul]^{\Tilde{f}^{(1)}} \ar@{-}[ur]_{r}& \\ }} Similarly, we can define the extension diagrams with respect to $x_2, x_3$, etc. Although the functions are locally defined, they suffice to determine the component, since an irreducible algebraic curve is uniquely determined by its local behavior. So, if we do not distinguish isomorphic field extensions, the extension diagram in Definition <ref> is globally well-defined (on $W-W_0$) and hence uniquely characterizes $W$. In this regard, we slightly abuse notation and simplify Definition <ref> to the following: Given a non-singular coupling $(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$ and its component in equation (<ref>), the extension diagram of $W$ (with respect to $x_1$) is given by \begin{equation}\label{diagram} \scalebox{0.75}{\xymatrix{ & K_1(x_2,y_2) & \\ K_1(y_1)=K_1(x_2) \ar@{-}[ur]^{f^{(2)}} & & K_1(y_2)=K_1(x_3)\ar@{-}[ul]\\ & K_1\ar@{-}[ul]^{\Tilde{f}^{(1)}} \ar@{-}[ur]_{r}& \\ }} \end{equation} In particular, if we want to emphasize a trivial extension, e.g. $K_1(x_2)=K_1$, we will change the line type to \scalebox{0.75}{\xymatrix{K_1(x_2)& K_1\ar@{=}[l]\\}} This simplification is very convenient when changing parameters, since locally on $W-W_0$ one can use any $x_i$ or $y_j$ to express the remaining variables.
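To see the extension degrees in action, here is a small sympy sketch with illustrative coefficients: the first generator is a factor of isogram type (linear in each variable), so $[K_1(x_2):K_1]=1$, while factoring the resultant reveals $[K_1(x_3):K_1]=2$, i.e. a half-quadratic (Case 2) component.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1:4')

f1 = x1*x2 - 1   # isogram-type factor of g~(1): x2 = 1/x1 is rational over K1
g2 = x2**2*x3**2 + 2*x2**2 + 3*x3**2 + x2*x3 + 5   # an illustrative biquadratic

r = sp.factor(sp.resultant(f1, g2, x2))   # minimal relation between x1 and x3
print(r, sp.degree(r, x3))
# (3*x1**2 + 1)*x3**2 + x1*x3 + 5*x1**2 + 2: degree 2 in x3 and irreducible,
# so [K1(x3):K1] = 2 for this component.
```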
Since $H^{(i)}=0$ determines a rational relation between $y_i$ and $x_{i+1}$, we formally have the following. Given an ideal $M=(\Tilde{g}^{(1)}, \Tilde{g}^{(2)}, \Tilde{g}^{(3)}, \Tilde{g}^{(4)})\subset \cc[x_1,x_2,x_3,x_4]$ with all $\Tilde{g}^{(i)}$ non-singular, $M$ is a matching if and only if \gcd(\text{Res}(\Tilde{g}^{(1)},\Tilde{g}^{(2)};x_2),\text{Res}(\Tilde{g}^{(3)},\Tilde{g}^{(4)};x_4))\neq 1 if and only if, for the couplings $S_1=(\Tilde{g}^{(1)}, \Tilde{g}^{(2)}, H^{(1)}, H^{(2)})$ and $S_2=(\Tilde{g}^{(3)}, \Tilde{g}^{(4)}, H^{(3)}, H^{(4)})$, there exist components $W_1, W_2$ of $S_1, S_2$ respectively such that in the extension diagrams of $W_1$ and $W_2$ the minimal polynomials of $x_3$ over $K_1$ are identical. Each class in Theorem <ref> actually stands for a particular case of \gcd(\text{Res}(\Tilde{g}^{(1)},\Tilde{g}^{(2)};x_2),\text{Res}(\Tilde{g}^{(3)},\Tilde{g}^{(4)};x_4)), which means the classification of flexible meshes is essentially the classification of matchings by the greatest common divisor of the resultants. Following Definition <ref>, we further define [K_i(x_j):K_i]:=\text{extension degree of } K_i(x_j)/K_i,\quad [K_i(y_j):K_i]:=\text{extension degree of } K_i(y_j)/K_i. Clearly, $[K_i(x_{j+1}):K_i]=[K_i(y_j):K_i]$ by the previous discussion. For each component, [K_1(x_2):K_1]=[K_1(y_1):K_1] \in \{1,2\},\,\, [K_1(x_2,y_2):K_1] \in \{1,2,4\} due to the quadratic nature of $g^{(i)}$ and $\Tilde{g}^{(i)}$, which also implies [K_i(x_{i+1}):K_i]=2 \Leftrightarrow [K_{i+1}(x_i):K_{i+1}]=2 \Leftrightarrow \Tilde{g}^{(i)} \text{ is irreducible.} For $K_1(y_2)$, as a subfield of $K_1(x_2,y_2)$, we must have [K_1(y_2):K_1] \in \{1,2,4\}. The above argument, together with Definition <ref>, separates the components of non-singular couplings into 6 cases, all of which are equivalent to those listed above table (<ref>). Given $W$ a component of a non-singular coupling, according to its extension diagram we say $W$ is of Case 1: purely-rational if $[K_1(x_2):K_1]=[K_1(y_2):K_1]=1$; Case 2: half-quadratic if $[K_1(x_2):K_1]=1$ and $[K_1(y_2):K_1]=2$, or if $[K_3(x_2):K_3]=1$ and $[K_3(x_1):K_3]=2$; Case 3: involutive-rational if $[K_1(y_2):K_1]=1$ and $[K_1(x_2):K_1]=2$; Case 4: purely-quadratic if $[K_1(x_2,y_2):K_1]=[K_1(y_2):K_1]=[K_2(y_2):K_2]=[K_1(x_2):K_1]=2$; Case 5: involutive-quadratic if $[K_1(y_2):K_1]=[K_1(x_2):K_1]=2$ and $[K_1(x_2,y_2):K_1]=4$; Case 6: quartic if $[K_1(y_2):K_1]=4$. This definition shows that, depending on the reducibility of $g^{(i)}$ and $R_i$, the components from Proposition <ref> are not just represented in different ways but are essentially different in their algebraic natures. Table (<ref>) shows that a given non-singular coupling cannot have components from different cases, except for the rational-quadratic class: only components of Case 3 and Case 4 can coexist in one coupling. Non-singular couplings $S$ are characterized as in table (<ref>).
More precisely, we say $S$ is
$\bullet$ purely-rational if $S$ only admits components of Case 1;
$\bullet$ half-quadratic if $S$ only admits components of Case 2;
$\bullet$ equimodular if $S$ only admits components of Case 3 or Case 4; moreover, an equimodular $S$ is
$\bullet$ involutive-rational if $S$ only admits components of Case 3;
$\bullet$ rational-quadratic if $S$ only admits components of Case 3 and Case 4;
$\bullet$ purely-quadratic if $S$ only admits components of Case 4;
$\bullet$ involutive-quadratic if $S$ only admits components of Case 5;
$\bullet$ quartic if $S$ only admits components of Case 6.
The definition only yields a classification once we prove its completeness, which will be a direct consequence of Theorems <ref> and <ref>.
§.§ First application
Now we are ready to deal with the classification problem. As a starter, we tackle the simplest class of flexible meshes, those whose spherical linkages only contain (anti)isograms. It requires no advanced methods but is a good opportunity to show how the new algebraic definitions work in practice. This kind of mesh has a very distinctive algebraic structure and is called the PR (purely-rational) class. An equivalent definition of this class can be found later in Definition <ref>. Given a mesh with the notations of Fig. <ref> and Fig. <ref> such that $\forall i\in \{1,2,3,4\}$, $(\lambda_i, \delta_i, \mu_i,\gamma_i)$ form an (anti)isogram but not an (anti)deltoid, set
k_i=\frac{\sin(\gamma_i-\mu_i)}{\sin(\gamma_i)\pm \sin(\mu_i)}\in \rr-\{0\},\,\,F_i=\tan\left(\frac{\zeta_{i}+\tau_{i}}{2}\right) \in \rr\cup\{\infty\},
N_i=\left\{\begin{array}{ll}
\left(\begin{array}{cc} -F_i & k_i \\ 1 & k_iF_i \\ \end{array}\right) & \text{ if } \lambda_i=\mu_i, \delta_i=\gamma_i, F_i\in \rr,\\
\left(\begin{array}{cc} -1 & 0 \\ 0 & k_i \\ \end{array}\right) & \text{ if } \lambda_i=\mu_i, \delta_i=\gamma_i, F_i=\infty,\\
\left(\begin{array}{cc} k_i & -F_i \\ k_iF_i & 1 \\ \end{array}\right) & \text{ if } \lambda_i+\mu_i=\delta_i+\gamma_i=\pi, F_i\in \rr,\\
\left(\begin{array}{cc} 0 & -1 \\ k_i & 0 \\ \end{array}\right) & \text{ if } \lambda_i+\mu_i=\delta_i+\gamma_i=\pi, F_i=\infty.\\
\end{array}\right.
Considering all combinations of the signs $\pm$ appearing in $k_i$, the mesh is flexible if and only if there exists a combination such that $N_4N_3N_2N_1$ is a scalar matrix. We refer to [33] for visualizations of such flexible meshes. We only prove the case $\lambda_i=\mu_i, \delta_i=\gamma_i, F_i\in\rr$ for all $1\leq i \leq 4$, since the other cases are quite similar. First, it is easy to verify that $k_i$ is well-defined and nonzero, since none of the conditions of Definition <ref> is satisfied. On the other hand, a given mesh uniquely determines a spherical linkage $(Q_i,F_i)_{i=1}^4$ (e.g. Fig. <ref> right) and each $Q_i$ determines a $g^{(i)}$; according to Lemmas <ref> and <ref> we have
g^{(i)}=a_i(x_iy_i-k_i)(x_iy_i-k_i')
where $\{k_i,k_i'\}=\left\{\frac{-1\pm\sqrt{1-4a_ie_i}}{2a_i}\right\}=\left\{\frac{\sin(\gamma_i-\mu_i)}{\sin(\gamma_i)\pm \sin(\mu_i)}\right\}$. Consider the coupling $S_1=(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$. By Proposition <ref>, each component of $S_1$ is of the form
W_1=Z(\Tilde{f}^{(1)}, x_2y_2-k_2, H^{(1)}, H^{(2)})
where $\Tilde{f}^{(1)}$ is an irreducible factor of $\Tilde{g}^{(1)}$, which can be replaced by a factor of $g^{(1)}$ due to Proposition <ref>, so
\begin{equation}\label{1dm} W_1=Z(x_1y_1-k_1, x_2y_2-k_2, y_1-F_1y_1x_2-F_1-x_2, y_2-F_2y_2x_3-F_2-x_3), \end{equation}
hence in the extension diagram, for $i=1,2$ we have
x_{i+1}=\frac{y_i-F_i}{F_iy_i+1},\,\, y_i=\frac{0x_i+k_i}{x_i+0}.
If we formally express the above relations in the matrix form
N=\left (\begin{array}{cc} a & b \\ c & d \\ \end{array}\right):\,\, \cp \rightarrow \cp;\,\, x \mapsto N(x):=\frac{ax+b}{cx+d},\,\,ad-bc\neq 0,
this is the so-called Möbius transformation on $\cp$, which is compatible with matrix multiplication, e.g.
x_{i+1}=\left (\begin{array}{cc} 1 & -F_i \\ F_i & 1 \\ \end{array}\right)(y_i)=\left (\begin{array}{cc} 1 & -F_i \\ F_i & 1 \\ \end{array}\right)\left (\begin{array}{cc} 0 & k_i \\ 1 & 0 \\ \end{array}\right)(x_i)= \left(\begin{array}{cc} -F_i & k_i \\ 1 & k_iF_i \\ \end{array}\right)(x_i).
Letting $N_i=\left(\begin{array}{cc} -F_i & k_i \\ 1 & k_iF_i \\ \end{array}\right)$, we have $x_3=N_2N_1(x_1)$. Likewise, in a component $W_2$ of the coupling $S_2=(\Tilde{g}^{(3)}, g^{(4)}, H^{(3)}, H^{(4)})$ we have $x_1=N_4N_3(x_3)$. Since the mesh is flexible, the corresponding matching $M=(\Tilde{g}^{(1)},\Tilde{g}^{(2)},\Tilde{g}^{(3)},\Tilde{g}^{(4)})$ must have an infinite zero set $Z(M)\cong Z(S_1)\times_{\{x_1,x_3\}}Z(S_2)$ (Proposition <ref>), thus there exist components $W_1, W_2$ of $S_1, S_2$ respectively such that $W_1\times_{\{x_1,x_3\}}W_2$ is an infinite set. From equation (<ref>) we know $W_i$ is a 1-dimensional curve that can be parameterized by $x_1$. Therefore $\{x_3=N_2N_1(x_1),x_1=N_4N_3(x_3)\}$ must have infinitely many solutions in order for $W_1\times_{\{x_1,x_3\}}W_2$ to be infinite. In other words,
x_1=N_4N_3N_2N_1(x_1)=\left (\begin{array}{cc} n & 0 \\ 0 & n \\ \end{array}\right)(x_1)=x_1.
[Figure: a (conceptual) spherical image; every curve lies on a great circle. Left: a coupling $(Q_1,F_1,Q_2,0)$ and its spherical reflection $(Q_2',-F_1,Q_1',0)$. Right: according to Fig. <ref>, the positions of adjacent quads should be upside down, so in order to obtain a spherical linkage we flip $Q_2'$ with respect to $\lambda_2'$ and $Q_1'$ with respect to $\lambda_1'$.]
Here is another example, which shows the existence of a flexible mesh whose spherical linkage contains two arbitrarily coupled quads as in Fig. <ref> left, i.e. a matching that contains a coupling of an arbitrary class. The main idea is to extend a coupling of an arbitrary class to a matching. Notice that in table (<ref>) the class of a coupling is defined by its components, and the cases of components, given in Definition <ref>, are independent of $H^{(2)}$; this means the class of a coupling $S=(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})=(g^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$ only depends on its first three polynomials. Now suppose we have a coupling in which $g^{(1)}, H^{(1)}, g^{(2)}$ are arbitrary, and choose the coefficients of $g^{(3)}, g^{(4)}$ and $F_2, F_3, F_4$ by reflection, so that $Z(\Tilde{g}^{(3)},\Tilde{g}^{(4)})$ is just a 'copy' of $Z(\Tilde{g}^{(1)},\Tilde{g}^{(2)})$; thus, by Proposition <ref>, the fiber product $Z(S_1)\times_{\{x_1,x_3\}} Z(S_2)\cong Z(\Tilde{g}^{(1)},\Tilde{g}^{(2)},\Tilde{g}^{(3)},\Tilde{g}^{(4)})$ is always an infinite set. Recalling the paragraph above Section <ref>, the geometric illustration of this example is simply the (spherical) reflection of a given coupling $(Q_1,F_1,Q_2,F_2=0)$, see Fig. <ref>. This trick actually works in all situations (including (anti)deltoids).
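The scalar-matrix criterion above is straightforward to test mechanically. Below is a minimal Python/SymPy sketch for the case $\lambda_i=\mu_i$, $\delta_i=\gamma_i$, $F_i\in\rr$; the angle pairs and offsets $F_i$ are hypothetical sample values (they do not come from a concrete flexible mesh, so the test will typically report non-flexibility). It enumerates the sign combinations in $k_i$ and checks whether $N_4N_3N_2N_1$ is scalar.
\begin{verbatim}
# Sketch of the flexibility test: is N4*N3*N2*N1 scalar for some signs?
# The angle pairs (gamma_i, mu_i) and the F_i are hypothetical samples.
import itertools
import sympy as sp

def N(k, F):
    # Moebius matrix for the case lambda_i = mu_i, delta_i = gamma_i
    return sp.Matrix([[-F, k], [1, k*F]])

def is_scalar(M):
    M = sp.simplify(M)
    return M[0, 1] == 0 and M[1, 0] == 0 and \
        sp.simplify(M[0, 0] - M[1, 1]) == 0

angles = [(sp.pi/3, sp.pi/4), (sp.pi/5, sp.pi/6),
          (sp.pi/3, sp.pi/4), (sp.pi/5, sp.pi/6)]
Fs = [sp.Rational(1, 2), 0, sp.Rational(1, 2), 0]

flexible = False
for signs in itertools.product([1, -1], repeat=4):
    ks = [sp.sin(g - m) / (sp.sin(g) + s * sp.sin(m))
          for (g, m), s in zip(angles, signs)]
    P = sp.eye(2)
    for k, F in zip(ks, Fs):
        P = N(k, F) * P        # accumulate N4*N3*N2*N1
    if is_scalar(P):
        flexible = True
print('flexible:', flexible)
\end{verbatim}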
§ PSEUDO-PLANAR TYPE WITHOUT (ANTI)DELTOIDS
§.§ Section outline and classification theorem
We start with the simplest meshes, in which all $F_i\in\{0,\infty\}$. In such a type we have
\left\{\begin{array}{l} \Tilde{g}^{(i)}(x_i,x_{i+1})=g^{(i)}(x_i,x_{i+1})=a_ix_i^2 x_{i+1}^2 + b_i x_i^2 + c_ix_{i+1}^2 + x_ix_{i+1} + e_i \text{ if $F_i=0$,} \\ \Tilde{g}^{(i)}(x_i,x_{i+1})=-b_ix_i^2 x_{i+1}^2 - a_i x_i^2 - e_ix_{i+1}^2 + x_ix_{i+1} - c_i \text{ if $F_i=\infty$.} \\ \end{array}\right.
All equations have exactly the same format as in the planar type; the only difference may be a switch of the coefficients, and that is why we call it the pseudo-planar type.[In Fig. <ref> left, planar type requires $\zeta_1=\tau_1=0$ but pseudo-planar type only requires $\sin(\zeta_1+\tau_1)=0$.] It is easy to check that the coefficients, when $F_i=\infty$, also satisfy the inequalities of Definition <ref>, and the non-singularity of $\Tilde{g}^{(i)}$ is preserved. In this regard, we only need to develop the theory for $F_i=0$ and can apply it directly to $F_i=\infty$ after switching the coefficients. Hence in this section we regard $y_i=x_{i+1}$ and simply write a coupling as $S=(g^{(1)}(x_1,x_2), g^{(2)}(x_2,x_3))$. Given a non-singular coupling $S=(g^{(1)}, g^{(2)})$, each class of $S$ from Definition <ref> is equivalent to the corresponding condition below; i.e. $S$ is
$\bullet$ purely-rational if both $g^{(1)}, g^{(2)}$ are reducible;
$\bullet$ half-quadratic if only one of $g^{(1)}, g^{(2)}$ is reducible;
further, when both $g^{(1)}, g^{(2)}$ are irreducible, $S$ is
$\bullet$ involutive-rational if
\begin{equation}\label{eq-c3} \left\{\frac{a_1}{a_2}=\frac{b_1}{c_2}=\frac{b_2}{c_1}=\frac{e_2}{e_1}\right\} \text{ and } \left\{\frac{a_1}{b_2}=\frac{b_1}{e_2}=\frac{a_2}{c_1}=\frac{c_2}{e_1}\right\};\footnote{We will frequently use such proportional chains throughout the paper to represent the linear dependence between two sets of coefficients, so 0 is allowed among the denominators.} \end{equation}
$\bullet$ rational-quadratic if only one of systems (<ref>) holds;
$\bullet$ purely-quadratic if one of the following systems holds
\begin{equation}\label{eq-c4} \left\{\begin{array}{l} \left\{\frac{a_1 c_1}{a_2 b_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1}{1 - 4 a_2 e_2- 4 b_2 c_2}=\frac{b_1 e_1}{c_2 e_2}\neq 1\right\} \text{ or} \\ \left\{\frac{4a_1 c_1}{1- 4 a_2 e_2- 4 b_2 c_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1}{4c_2 e_2}, b_1 e_1 = a_2 b_2 =0\right\} \text{ or} \\ \left\{\frac{4b_1e_1}{1- 4 a_2 e_2- 4 b_2 c_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1}{4a_2b_2}, a_1 c_1 = c_2 e_2 =0\right\}; \end{array} \right. \end{equation}
$\bullet$ involutive-quadratic if
\begin{equation}\label{eq-c5} \left\{\frac{a_1}{b_1}=\frac{c_1}{e_1}=\frac{a_2}{c_2}=\frac{b_2}{e_2}, \frac{a_1}{a_2}\neq \frac{b_2}{c_1}\right\}; \end{equation}
$\bullet$ quartic otherwise.
The proof is split into cases, which are studied in detail below.
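The conditions of this theorem are directly machine-checkable. The sketch below is hypothetical and simplified: it assumes the non-singularity inequalities of Definition <ref> hold, omits the degenerate zero-pattern variants of systems (<ref>), and factors over $\mathbb{Q}$ rather than $\cc$; nevertheless it shows how a pseudo-planar coupling is classified from its eight coefficients.
\begin{verbatim}
# Sketch: classify a pseudo-planar coupling S = (g1, g2) following the
# classification theorem above (main branches only; hypothetical helper).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

def quad(co, x, y):
    a, b, c, e = co
    return a*x**2*y**2 + b*x**2 + c*y**2 + x*y + e

def irreducible(p):
    _, fs = sp.factor_list(p)
    return len(fs) == 1 and fs[0][1] == 1

def dep(u, v):   # linear dependence: all 2x2 minors vanish
    n = len(u)
    return all(sp.simplify(u[i]*v[j] - u[j]*v[i]) == 0
               for i in range(n) for j in range(i + 1, n))

def classify(C1, C2):
    a1, b1, c1, e1 = C1
    a2, b2, c2, e2 = C2
    i1 = irreducible(quad(C1, x1, x2))
    i2 = irreducible(quad(C2, x2, x3))
    if not i1 and not i2:
        return 'purely-rational'
    if i1 != i2:
        return 'half-quadratic'
    A = dep((a1, b1, b2, e2), (a2, c2, c1, e1))  # chain giving x3 = k*x1
    B = dep((a1, b1, a2, c2), (b2, e2, c1, e1))  # chain giving x1*x3 = k'
    if A and B:
        return 'involutive-rational'
    if A or B:
        return 'rational-quadratic'
    D1 = 1 - 4*a1*e1 - 4*b1*c1
    D2 = 1 - 4*a2*e2 - 4*b2*c2
    if dep((a1*c1, D1, b1*e1), (a2*b2, D2, c2*e2)):  # first system of (eq-c4)
        return 'purely-quadratic'
    if dep((a1, c1, a2, b2), (b1, e1, c2, e2)) \
            and sp.simplify(a1*c1 - a2*b2) != 0:     # (eq-c5)
        return 'involutive-quadratic'
    return 'quartic'

print(classify((0, 2, -1, 0), (0, 3, -4, 0)))  # both reducible -> purely-rational
\end{verbatim}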
§.§ Case-by-case study
§.§.§ Purely-rational
(Theorem <ref>) Since all $g^{(i)}$ are reducible, by Proposition <ref> each component has the form
W=Z(f^{(1)}, f^{(2)}, r_{12}, H^{(1)}, H^{(2)})
where $f^{(1)}, f^{(2)}$ are irreducible factors of $g^{(1)},g^{(2)}$ respectively and $r_{12}=\text{Res}(f^{(1)},f^{(2)};x_2)$. Further, by Lemma <ref> all extensions in the extension diagram of $W$ are trivial, which implies $[K_1(x_3):K_1]=[K_1(x_2):K_1]=1$. Finally, according to Definition <ref>, $S$ is purely-rational.
§.§.§ Half-quadratic
(Theorem <ref>) Consider the extension diagram of a component of $S$. If $g^{(1)}$ is reducible but $g^{(2)}$ is not, we have the left-hand side of the following diagrams
\scalebox{0.75}{\xymatrix{ & K_1(x_2,x_3) & & & & K_3(x_1,x_2) &\\ K_1(x_2) \ar@{-}[ur]^{g^{(2)}} & & K_1(x_3)\ar@{=}[ul] & & K_3(x_2) \ar@{-}[ur]^{g^{(1)}} & & K_3(x_1)\ar@{=}[ul]\\ & K_1\ar@{=}[ul]^{f^{(1)}} \ar@{-}[ur] & & & & K_3\ar@{=}[ul]^{f^{(2)}} \ar@{-}[ur] & \\ }}
where $f^{(1)}$ is an irreducible factor of $g^{(1)}$; if instead $g^{(2)}$ is reducible but $g^{(1)}$ is not, changing the parameter to $x_3$ we obtain the right-hand side of the above diagrams, where $f^{(2)}$ is an irreducible factor of $g^{(2)}$. Each of the diagrams yields a component of Case 2, and these are the only ones possessed by half-quadratic couplings according to Definition <ref>. Let us construct a matching in which the couplings $(g^{(1)}, g^{(2)}), (g^{(3)}, g^{(4)})$ admit components $W_1, W_2$ of Case 2 respectively. First, according to Lemma <ref> we may set $a_1=e_1=0$ and $b_3=c_3=0$, and then take $g^{(2)}$ irreducible. To build a matching we need $W_1\times_{\{x_1, x_3\}} W_2$ to be an infinite set. On the other hand, in the extension diagrams of $W_1, W_2$ we have $x_2 = k x_1$ and $x_4=\frac{k'}{x_3}$. Plugging these into $g^{(2)}$ and $g^{(4)}$ respectively ($g^{(4)}$ is yet undetermined) should give two multiples of the same minimal polynomial of $x_3$ on $K_1$, according to Theorem <ref>:
\left\{\begin{array}{l} (k^2 a_2 x_1^2 + c_2 ) x_3^2 + k x_1 x_3 + (k^2 b_2 x_1^2+ e_2), \\ (c_4 x_1^2 + e_4 ) x_3^2 + k' x_1 x_3 + (k'^2 a_4 x_1^2 + k'^2 b_4), \\ \end{array}\right.
which should be equal up to a constant, i.e. the coefficients of $g^{(4)}$ are determined by
\frac{k^2 a_2}{c_4}=\frac{c_2}{e_4}=\frac{k}{k'}=\frac{k^2 b_2}{k'^2 a_4}=\frac{e_2}{k'^2 b_4}.
§.§.§ Involutive-rational
Given a non-singular coupling $S=(g^{(1)}, g^{(2)})$ where all $g^{(i)}$ are irreducible, and setting $R(x_1,x_3)=\text{Res}(g^{(1)},g^{(2)};x_2)$, we have
\begin{array}{ll} (1) & (x_3-kx_1)^2|R(x_1,x_3)\Leftrightarrow(x_3-kx_1)|R(x_1,x_3)\Leftrightarrow\frac{a_1}{a_2}=\frac{b_1}{c_2}=\frac{b_2}{c_1}=\frac{e_2}{e_1}=k; \\ (2) & (x_1x_3-k')^2|R(x_1,x_3)\Leftrightarrow(x_1x_3-k')|R(x_1,x_3)\Leftrightarrow\frac{a_1}{b_2}=\frac{b_1}{e_2}=\frac{a_2}{c_1}=\frac{c_2}{e_1}=\frac{1}{k'}.\\ \end{array}
Given a non-singular coupling $S=(g^{(1)}, g^{(2)})$ where all $g^{(i)}$ are irreducible, $S$ admits a component of Case 3 if and only if one of systems (<ref>) holds, if and only if
\begin{equation}\label{===1} \frac{a_1 c_1}{a_2 b_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1}{1 - 4 a_2 e_2- 4 b_2 c_2}=\frac{b_1 e_1}{c_2 e_2}=1. \end{equation}
We first prove the equivalence of the last two statements. It is easy to check that either of systems (<ref>) implies equation (<ref>). Conversely, since $S$ is non-singular, we may assume $a_2e_2\neq 0$ (or $b_2c_2\neq 0$ otherwise). By the substitution $\left\{b_2=\frac{a_1 c_1}{a_2}, c_2=\frac{b_1 e_1}{e_2}\right\}$, equation (<ref>) implies (one of) systems (<ref>), i.e.
\{a_1 c_1=a_2 b_2, b_1 e_1=c_2 e_2, (a_2e_2-a_1e_1)(a_2e_2-b_1c_1)=0\}.
Now suppose $S$ admits a component $W$ of Case 3. In the extension diagram of $W$, $g^{(1)}(x_1,t)\in K_1[t]$ determines the minimal polynomial of $x_2$ on $K_1$, since $g^{(1)}$ is irreducible. By Definition <ref>, $x_3$ lies in $K_1$, hence $x_3=f/g$ where $f,g\in \cc[x_1]$ and $\gcd(f,g)=1$. Likewise, $g^{(2)}(t,x_3)$ determines the minimal polynomial of $x_2$ on $K_1$, so the ratios of the corresponding coefficients of $g^{(1)}(x_1,t)$ and $g^{(2)}(t,x_3)$ must coincide, i.e.
\begin{equation}\label{eq-coeff} \frac{a_1 x_1^2 + c_1}{a_2 x_3^2 + b_2}=\frac{x_1}{x_3}=\frac{b_1 x_1^2 + e_1}{c_2 x_3^2 + e_2} \Leftrightarrow \frac{a_1 x_1^2 + c_1}{a_2 f^2 + b_2 g^2}=\frac{x_1}{f g}=\frac{b_1 x_1^2 + e_1}{c_2 f^2 + e_2 g^2}. \end{equation}
According to Corollary <ref>, for $i=1, 2$, $\{a_i,...,e_i\}$ contains at most one zero, hence $\gcd(a_1 x_1^2 + c_1,x_1,b_1 x_1^2 + e_1)=1=\gcd(a_2 f^2 + b_2 g^2,f g,c_2 f^2 + e_2 g^2)$. This implies that the second ratio chain in equation (<ref>) is a constant in $\cc$ (i.e. $f g/x_1 \in \cc$), so either $x_3=k x_1$ or $x_3=k'/x_1$ for some nonzero constant $k$ or $k'$. Plugging these into equation (<ref>) we obtain
\frac{a_1}{a_2}=\frac{b_1}{c_2}=\frac{b_2}{c_1}=\frac{e_2}{e_1}=k \text{ or } \frac{a_1}{b_2}=\frac{b_1}{e_2}=\frac{a_2}{c_1}=\frac{c_2}{e_1}=\frac{1}{k'}.
The converse follows directly from Lemma <ref>. (Theorem <ref>) Given $g^{(i)}$ non-singular and
\left\{\frac{a_1}{a_2}=\frac{b_1}{c_2}=\frac{b_2}{c_1}=\frac{e_2}{e_1}=k, \frac{a_1}{b_2}=\frac{b_1}{e_2}=\frac{a_2}{c_1}=\frac{c_2}{e_1}=\frac{1}{k'}\right\}
we know $kk'\neq 0$, hence $\gcd(x_3-kx_1,x_1x_3-k')=1$. Applying Lemma <ref>, we have $(x_3-kx_1)^2(x_1x_3-k')^2|R(x_1,x_3)=\text{Res}(g^{(1)},g^{(2)};x_2)$. Notice that $R(x_1,x_3)$ is originally a $4\times 4$ determinant whose degree in $x_1$ is at most 4, so $R(x_1,x_3)$ has only the two distinct factors $(x_3-kx_1)$ and $(x_1x_3-k')$. Consequently, the extension diagram of each component always implies $x_3\in K_1$ (i.e. Case 3), hence $S$ is involutive-rational. Let us consider an ideal $M=(g^{(1)},g^{(2)},g^{(3)},g^{(4)})$ with all $g^{(i)}$ irreducible and coefficients satisfying
\left\{\frac{a_1}{b_2}=\frac{b_1}{e_2}=\frac{a_2}{c_1}=\frac{c_2}{e_1}=\frac{1}{k'}=\frac{a_4}{c_3}=\frac{c_4}{e_3}=\frac{a_3}{b_4}=\frac{b_3}{e_4}\right\}
where $k'\neq 0$ is a given parameter. Clearly, according to Lemma <ref>, both couplings $(g^{(1)},g^{(2)})$ and $(g^{(3)},g^{(4)})$ admit components of Case 3 whose extension diagrams contain $x_3x_1-k'=0$. Thus $M$ is a matching according to Theorem <ref>.
§.§.§ Rational-quadratic
The next lemma characterizes equimodular couplings in two ways; Proposition <ref> is crucial for its proof. Given a coupling $S=(g^{(1)}, g^{(2)})$ where all $g^{(i)}$ are irreducible, $S$ is equimodular if and only if in every extension diagram of its components, one of the following equivalent conditions holds: (1) $x_3 \in K_1(x_2)$; (2) $x_3 \in K_2(x_1)$; (3) $x_1 \in K_2(x_3)$; (4) $x_1 \in K_3(x_2)$. On the other hand, $S$ is equimodular if and only if one of the following systems holds.
\begin{equation}\label{eq-c3or4} \left\{\begin{array}{l} \left\{\frac{a_1 c_1}{a_2 b_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1}{1 - 4 a_2 e_2- 4 b_2 c_2}=\frac{b_1 e_1}{c_2 e_2}\right\} \text{ or} \\ \left\{\frac{4a_1 c_1}{1- 4 a_2 e_2- 4 b_2 c_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1}{4c_2 e_2}, b_1 e_1 = a_2 b_2 =0\right\} \text{ or} \\ \left\{\frac{4b_1e_1}{1- 4 a_2 e_2- 4 b_2 c_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1}{4a_2b_2}, a_1 c_1 = c_2 e_2 =0\right\}. \end{array} \right. \end{equation}
In the extension diagram, one should realize that $K_i(x_{i+1})$ and $K_{i+1}(x_i)$ are just two representations of the same quotient field of the domain $\cc[x_i,x_{i+1}]/(g^{(i)})$, given that $g^{(i)}$ is prime. This immediately shows $(1)\Leftrightarrow(2)$ and $(3)\Leftrightarrow(4)$. Now suppose $x_3 \in K_2(x_1)$, so there exist $f_1, g_1 \in K_2$ such that $x_3=f_1 + g_1 x_1$.
Due to the irreducibility of $g^{(2)}$, $x_3 \notin K_2$, hence $g_1\neq0$, which implies $x_1=(x_3-f_1)/g_1\in K_2(x_3)$. The same trick shows $(2)\Leftrightarrow(3)$. Next, if we treat $g^{(1)}(t,x_2)$ and $g^{(2)}(x_2,t)$ as polynomials in $K_2[t]$, condition (2) means their discriminants are equal up to a square in $K_2$. Let $\Delta_1$ and $\Delta_2$ be the corresponding discriminants of $g^{(1)}$ and $g^{(2)}$ respectively. In particular, $\Delta_1$ is not a square, since $g^{(1)}$ is irreducible. Hence
\Delta_1=-4 a_1 c_1 x_2^4 + (1 - 4 a_1 e_1- 4 b_1 c_1) x_2^2 - 4 b_1 e_1=\left\{ \begin{array}{c} -4 a_1 c_1(x_2^2-\xi)(x_2^2-\eta) \text{ if } a_1 c_1\neq 0, \\ (1 - 4 a_1 e_1- 4 b_1 c_1)(x_2^2-\rho) \text{ if } a_1 c_1=0 \\ \end{array} \right.
where $\xi,\eta,\rho \in \cc$ and $\xi \neq \eta$. So the only square factor that $\Delta_1$ might have is $x_2^2$ (i.e. $b_1 e_1=0$). All in all, systems (<ref>) say exactly that $\frac{\Delta_1}{\Delta_2}$ is a square in $K_2$, so formally $K_2(x_1)=K_2(\sqrt{\Delta_1})=K_2(\sqrt{\Delta_2})=K_2(x_3)$; hence systems (<ref>) $\Leftrightarrow$ condition (2). Finally, when all $g^{(i)}$ are irreducible, by Definition <ref> it is easy to see that components of Case 3 and Case 4 are characterized by their common property $K_1(x_2,x_3)=K_1(x_2)$, i.e. condition (1) $\Leftrightarrow S$ is equimodular. (Theorem <ref>) Since we have already proved Theorem <ref> for the involutive-rational class, together with Corollary <ref> we can conclude that if $S$ admits components of Case 3 and Case 4, then exactly one of systems (<ref>) holds. Conversely, suppose $\left\{\frac{a_1}{a_2}=\frac{b_1}{c_2}=\frac{b_2}{c_1}=\frac{e_2}{e_1}(=k)\right\}$ holds but $\left\{\frac{a_1}{b_2}=\frac{b_1}{e_2}=\frac{a_2}{c_1}=\frac{c_2}{e_1}\right\}$ fails. According to Corollary <ref>, $S$ admits a component of Case 3. Symbolic computation shows
(a_2,b_2,c_2,e_2)=\left(\frac{a_1}{k},kc_1,\frac{b_1}{k},ke_1\right) \Rightarrow \frac{\text{Res}(g^{(1)},g^{(2)};x_2)}{(x_3-kx_1)^2}=\left(\frac{a_1b_1}{k^2}x_1^2x_3^2+...+c_1e_1\right).
We claim that $r=\frac{a_1b_1}{k^2}x_1^2x_3^2+...+c_1e_1$ is irreducible and quadratic in $x_3$. Otherwise, according to Corollary <ref>, there must be an irreducible factor $r'$ of $r$ which is linear in $x_3$, and hence a component $Z(g^{(1)}, g^{(2)}, r', H^{(1)}, H^{(2)})$ of Case 3 by Definition <ref>. Considering the proof of Corollary <ref>, we know $r'=x_3-k'x_1$ or $x_1x_3-k'$ for some $k'\neq 0$. By Lemma <ref>, it must be $r'=x_3-k'x_1=x_3-kx_1$, since we cannot have $\left\{\frac{a_1}{b_2}=\frac{b_1}{e_2}=\frac{a_2}{c_1}=\frac{c_2}{e_1}\right\}$. Meanwhile, the degrees of $x_1, x_3$ in $r$ are at most 2. Given $r'|r$, this leads to $a_1b_1=c_1e_1=0$, hence $g^{(1)}$ is either singular or reducible by Lemma <ref>, a contradiction. So finally we have another component
W=Z(g^{(1)}, g^{(2)}, r, H^{(1)}, H^{(2)})
whose extension diagram satisfies $[K_1(x_3):K_1]=2$ (i.e. $x_3\notin K_1$). Notice that
\left\{\frac{a_1}{a_2}=\frac{b_1}{c_2}=\frac{b_2}{c_1}=\frac{e_2}{e_1}\right\} \Rightarrow \left\{\frac{a_1 c_1}{a_2 b_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1}{1 - 4 a_2 e_2- 4 b_2 c_2}=\frac{b_1 e_1}{c_2 e_2}\right\}.
Hence by Lemma <ref> we know $W$ is of Case 4. The proof is similar when $\left\{\frac{a_1}{b_2}=\frac{b_1}{e_2}=\frac{a_2}{c_1}=\frac{c_2}{e_1}\right\}$ holds but $\left\{\frac{a_1}{a_2}=\frac{b_1}{c_2}=\frac{b_2}{c_1}=\frac{e_2}{e_1}\right\}$ fails.
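The symbolic computation in this proof is easy to reproduce. A Python/SymPy sketch (purely symbolic; the substitution is exactly the one displayed above, after clearing denominators by the harmless factor $k$, which only rescales the resultant by $k^2$):
\begin{verbatim}
# Sketch: with (a2,b2,c2,e2) = (a1/k, k*c1, b1/k, k*e1), the resultant
# is exactly divisible by (x3 - k*x1)^2 and the cofactor r is the
# quadratic factor discussed in the proof (up to the factor k^2).
import sympy as sp

x1, x2, x3, k = sp.symbols('x1 x2 x3 k')
a1, b1, c1, e1 = sp.symbols('a1 b1 c1 e1')

g = lambda a, b, c, e, x, y: a*x**2*y**2 + b*x**2 + c*y**2 + x*y + e
g1 = g(a1, b1, c1, e1, x1, x2)
g2 = sp.expand(k * g(a1/k, k*c1, b1/k, k*e1, x2, x3))  # denominators cleared

R = sp.resultant(g1, g2, x2)        # equals k^2 times the resultant above
r = sp.cancel(R / (x3 - k*x1)**2)   # exact division: a polynomial cofactor
print(sp.expand(r))
\end{verbatim}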
§.§.§ Purely-quadratic
The proof of Theorem <ref> is a direct consequence of Corollary <ref> and Lemma <ref>. This class is currently ``floating in the air'', since so far we have only proved its existence. In Theorem <ref>, when the condition for purely-quadratic couplings holds, we expect a component of Case 4 with an explicit representation from equation (<ref>)
W=Z(g^{(1)}, g^{(2)}, r_k, H^{(1)}, H^{(2)})
where $r_k(x_1,x_3)$ is an irreducible factor of $\text{Res}(g^{(1)},g^{(2)};x_2)$. In fact, for every component of Case 4, $r_k$ has the form
r_k(x_1,x_3)=(\alpha_{22} x_1^2 + \alpha_{02} ) x_3^2 + \alpha_{11}x_1 x_3 + (\alpha_{20} x_1^2+ \alpha_{00}).
The proof can be found later in Corollary <ref>, where all $\alpha_{ij}$ are given explicitly.
§.§.§ Involutive-quadratic
The proof of Theorem <ref> is a consequence of the following lemma. Given a non-singular coupling $S=(g^{(1)},g^{(2)})$, $S$ only admits components of Case 5 if and only if system (<ref>) holds. Moreover, the factorization of the resultant is
\text{Res}(g^{(1)},g^{(2)};x_2)=\frac{b_1}{a_1}(a_2 x_1 x_3^2 - (a_1 x_1^2 + c_1) x_3 + b_2 x_1)^2.
Under the condition of system (<ref>), we have $a_ie_i=b_ic_i$ for both $i=1,2$. In a real flexible linkage this implies the so-called orthodiagonal property, namely $\cos(\lambda_i)\cos(\mu_i)=\cos(\gamma_i)\cos(\delta_i)$.
§.§.§ Quartic
(Theorem <ref>) Suppose $S$ does not satisfy the requirements of the other classes. In particular, both $g^{(1)}, g^{(2)}$ are irreducible. Firstly, $S$ has no components of Case 1 or Case 2, which require reducibility of some $g^{(i)}$. Secondly, $S$ has no components of Case 3 or Case 4, by Corollary <ref> and Lemma <ref>. Finally, $S$ has no components of Case 5, by Lemma <ref>. Thus $S$ only admits components of Case 6.
§ GENERAL TYPE WITHOUT (ANTI)DELTOIDS
§.§ Section outline and classification theorem
In this section we study the couplings arising from matchings of general type, in which not all $F_i\in \{0, \infty \}$. So, up to a rotation, we may always assume $F_1\notin \{0, \infty \}$. We follow our approach from the pseudo-planar type and make a case-by-case study. It is interesting to mention that the condition $F_1\notin \{0, \infty \}$ imposes even more restrictions on the coefficients compared to the pseudo-planar type. One will see that involutive-rational, rational-quadratic, and purely-quadratic couplings of general type are completely useless for the construction of flexible meshes. So, to avoid unnecessary discussion, we treat them all together as the equimodular class (see Definition <ref>). Given a non-singular coupling $S=(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$ where $F_1\notin\{0,\infty\}$, each class of $S$ from Definition <ref> is equivalent to the corresponding condition below; i.e.
$S$ is
$\bullet$ purely-rational if both $\Tilde{g}^{(1)}, g^{(2)}$ are reducible;
$\bullet$ half-quadratic if only one of $\Tilde{g}^{(1)}, g^{(2)}$ is reducible;
further, when both $\Tilde{g}^{(1)}, g^{(2)}$ are irreducible, $S$ is
$\bullet$ equimodular if
\begin{equation}\label{eq-c4-g} \left\{\frac{a_1}{e_1}=\frac{b_1}{c_1}, \frac{a_2}{e_2}=\frac{c_2}{b_2}, F_1=\pm 1, \frac{1-4a_1e_1-4b_1c_1-8a_1c_1}{16a_2b_2}=\frac{16a_1c_1}{1-4a_2e_2-4b_2c_2-8a_2b_2}\right\}; \end{equation}
$\bullet$ involutive-quadratic if
\begin{equation}\label{eq-c5-g} \left\{\begin{array}{l} \left\{\frac{a_1}{b_1}=\frac{c_1}{e_1}=\frac{a_2}{c_2}=\frac{b_2}{e_2}=-1, F_1\neq\pm 1\right\} \text{ or} \\ \left\{\frac{a_1}{b_1}=\frac{c_1}{e_1}=\frac{a_2}{c_2}=\frac{b_2}{e_2}=-1, F_1=\pm 1, 256a_1c_1a_2b_2\neq 1\right\}; \end{array} \right. \end{equation}
$\bullet$ quartic otherwise.
The proof is split into cases, which are studied in detail below.
§.§ Case-by-case study
§.§.§ Purely-rational
The proof of Theorem <ref> is left to the reader. In fact, the proofs for the purely-rational and half-quadratic classes can be copied directly from their counterparts in the pseudo-planar type, Section <ref>. This is because in the corresponding extension diagram, $[K_i(y_j):K_i]=[K_i(x_{j+1}):K_i]$ for all $i,j$. Hence by Definition <ref>, the component remains in the same Case whether or not $F_1\in \{0, \infty\}$.
§.§.§ Half-quadratic
The proof of Theorem <ref> was mentioned above. Nevertheless, given a coupling $(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$ which admits a component $W$ of Case 2, we know that either $\Tilde{g}^{(1)}$ or $g^{(2)}$ is reducible (but not both). So when $\Tilde{g}^{(1)}$ is reducible, in the extension diagram of $W$ we have $x_2\in K_1$, and the minimal polynomial of $y_2$ on $K_1$ can be obtained from $g^{(2)}$ through the substitution $x_2=x_2(x_1)$; similarly, when $g^{(2)}$ is reducible we have $x_2\in K_3=\cc(y_2)$, and the minimal polynomial of $y_2$ on $K_1$ can be obtained from $\Tilde{g}^{(1)}$ through the substitution $x_2=x_2(y_2)$.
§.§.§ Equimodular
As in the pseudo-planar type, the problem can be reduced to identical discriminants up to a square in $K_2$. To start, fix an extension diagram and change the base field to $K_2=\cc(x_2)=\cc(y_1)$, so that $x_1, y_2$ are algebraic elements over $K_2$. From $g^{(1)}, \Tilde{g}^{(1)}$, and $g^{(2)}$ respectively, we have the discriminants
\begin{equation}\label{d2} \left\{\begin{array}{l} \Delta_1'=-4 a_1 c_1 y_1^4 + (1 - 4 a_1 e_1- 4 b_1 c_1) y_1^2 - 4 b_1 e_1,\\ \Delta_1=-4 a_1 c_1 (x_2+F_1)^4 + (1 - 4 a_1 e_1- 4 b_1 c_1) (x_2+F_1)^2(1-F_1x_2)^2 - 4 b_1 e_1 (1-F_1x_2)^4,\\ \Delta_2=-4 a_2 b_2 x_2^4 + (1 - 4 a_2 e_2- 4 b_2 c_2) x_2^2 - 4 c_2 e_2.\\ \end{array}\right. \end{equation}
Knowing that $H^{(1)}=0$, $\Delta_1$ is in fact obtained from $\Delta_1'$ by the substitution $y_1=\frac{x_2+F_1}{1-F_1x_2}$. We skip the proof of the following lemma since it is a complete analog of Lemma <ref> from the pseudo-planar type; note, however, that it applies to both types. Given a non-singular coupling $S=(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$ where $\Tilde{g}^{(1)}, g^{(2)}$ are irreducible, $S$ is equimodular if and only if in every extension diagram of its components, one of the following equivalent conditions holds: (1) $y_2 \in K_1(x_2)$; (2) $y_2 \in K_2(x_1)$; (3) $x_1 \in K_2(y_2)$; (4) $x_1 \in K_3(x_2)$. On the other hand, regarding $\Tilde{g}^{(1)}(t,x_2)$ and $g^{(2)}(x_2,t)$ as polynomials in $K_2[t]$ with discriminants $\Delta_1, \Delta_2$ respectively, $S$ is equimodular if and only if there exists $f\in K_2$ such that $f^2=\frac{\Delta_1}{\Delta_2}$.
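This square criterion is easy to test symbolically. A Python/SymPy sketch (the numeric sample below is hypothetical; it must additionally satisfy the irreducibility and non-singularity assumptions of the lemma, which the sketch does not verify):
\begin{verbatim}
# Sketch of the discriminant test: S is equimodular iff Delta1/Delta2
# is a square in K2 = C(x2); equivalently, Delta1*Delta2 is a square
# in C[x2] (factor multiplicities all even).
import sympy as sp

x2, y1, F1 = sp.symbols('x2 y1 F1')
a1, b1, c1, e1, a2, b2, c2, e2 = sp.symbols('a1 b1 c1 e1 a2 b2 c2 e2')

D1p = -4*a1*c1*y1**4 + (1 - 4*a1*e1 - 4*b1*c1)*y1**2 - 4*b1*e1
D1 = sp.expand(D1p.subs(y1, (x2 + F1)/(1 - F1*x2)) * (1 - F1*x2)**4)
D2 = -4*a2*b2*x2**4 + (1 - 4*a2*e2 - 4*b2*c2)*x2**2 - 4*c2*e2

def is_square(p, x):
    _, fs = sp.factor_list(sp.expand(p), x)
    return all(m % 2 == 0 for _, m in fs)

def equimodular(vals):
    return is_square((D1 * D2).subs(vals), x2)

# a pseudo-planar sample satisfying the chain a1/a2=b1/c2=b2/c1=e2/e1=1
sample = {a1: 1, b1: 2, c1: 3, e1: 5, a2: 1, b2: 3, c2: 2, e2: 5, F1: 0}
print(equimodular(sample))   # True
\end{verbatim}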
The proof of Theorem <ref> is a consequence of the following lemma. Given a non-singular coupling $S=(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$ where $\Tilde{g}^{(1)}, g^{(2)}$ are irreducible and $F_1\notin \{0, \infty\}$, $S$ is equimodular if and only if system (<ref>) holds. When $\Tilde{g}^{(1)}$ is irreducible, it determines the minimal polynomial of $x_2$ on $K_1$ in every component. It is convenient to use the expression
\Tilde{g}^{(1)}=h_2(x_1) x_2^2 + h_1(x_1) x_2 + h_0(x_1)
where
\begin{equation}\label{h012} \left\{\begin{array}{l} h_2(x_1)=(b_1 F_1^2 +a_1) x_1^2 - F_1 x_1 + e_1 F_1^2 +c_1, \\ h_1(x_1)=2 F_1 (a_1 - b_1) x_1^2 + (1 - F_1^2) x_1 + 2 F_1 (c_1 - e_1), \\ h_0(x_1)=(a_1 F_1^2 +b_1) x_1^2 + F_1 x_1 + c_1 F_1^2 +e_1. \\ \end{array}\right. \end{equation}
Now suppose the component is of Case 3, with the extension diagram in which $y_2\in K_1$; we have the following. If a non-singular coupling $(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$ admits a component of Case 1 or Case 3, then the relation between $x_1$ and $y_2$ in the extension diagram must be of the form $y_2=\frac{px_1+q}{rx_1+s}$ with $ps-qr\neq 0$. For a component $W$ of a coupling, we say $W$ is real if $W\cap (\rp)^5$ is an infinite set. A surprising fact about equimodular couplings of general type is that they have no real components. An equimodular coupling $S=(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$ admits a real component only if $F_1\in \{0, \infty\}$ and
\begin{equation}\label{+ratio} \left\{\begin{array}{l} \frac{a_1 c_1}{a_2 b_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1}{1 - 4 a_2 e_2- 4 b_2 c_2}=\frac{b_1 e_1}{c_2 e_2}>0 \text{ if } F_1=0,\\ \frac{b_1 e_1}{a_2 b_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1}{1 - 4 a_2 e_2- 4 b_2 c_2}=\frac{a_1 c_1}{c_2 e_2}>0 \text{ if } F_1=\infty.\\ \end{array}\right. \end{equation}
The above proposition tells us that, in reality, two spherical quads coupled as in Fig. <ref> left can never determine an equimodular coupling of general type.
§.§.§ Involutive-quadratic
The proof of Theorem <ref> is a consequence of a stronger version of Lemma <ref>. Given a non-singular coupling $S=(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$ where $F_1\notin \{0,\infty \}$, $S$ only admits components of Case 5 if and only if one of systems (<ref>) holds. Moreover, the factorization of the resultant is
\text{Res}(\Tilde{g}^{(1)},g^{(2)};x_2)=-(a_2 h_1(x_1) y_2^2 - h_2(x_1) y_2 + b_2 h_1(x_1))^2
where the $h_i(x_1)$ are given in equation (<ref>). The conditions in Lemma <ref> impose restrictions on the shape of the quads, i.e.
\frac{a_1}{b_1}=\frac{c_1}{e_1}=\frac{a_2}{c_2}=\frac{b_2}{e_2}=-1 \Leftrightarrow \delta_1=\mu_1=\gamma_2=\mu_2=\frac{\pi}{2}.
§.§.§ Quartic
Given that $\Tilde{g}^{(1)},g^{(2)}$ are irreducible, the proof of Theorem <ref> is a consequence of Lemmas <ref> and <ref>.
§ RESULTANT ANALYSIS
§.§ Section outline and factorization theorem
For a given coupling, the following theorem describes its class in a different way. This new perspective is very important for the construction of matchings (i.e. flexible meshes) via Theorem <ref>.
Depending on the class of a non-singular coupling $S=(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$, the factorization of $R_1:=\text{Res}(\Tilde{g}^{(1)},g^{(2)}; x_2)$ is as follows:
$\bullet$ purely-rational: $R_1(x_1,y_2)=\prod_{i=1}^4(r_i x_1 y_2 - p_i x_1 + s_i y_2 - q_i)$;
$\bullet$ half-quadratic: $R_1(x_1,y_2)=r(x_1,y_2)\cdot r'(x_1,y_2)$ where $r, r'$ are irreducible and quadratic in both $x_1$ and $y_2$;
$\bullet$ involutive-rational: $R_1(x_1,y_2)=\prod_{i=1}^2(r_i x_1 y_2 - p_i x_1 + s_i y_2 - q_i)^2$;
$\bullet$ rational-quadratic: $R_1(x_1,y_2)=(r_1 x_1 y_2 - p_1 x_1 + s_1 y_2 - q_1)^2\cdot r(x_1,y_2)$ where $r$ is irreducible and quadratic in both $x_1$ and $y_2$;
$\bullet$ purely-quadratic: $R_1(x_1,y_2)=r(x_1,y_2)\cdot r'(x_1,y_2)$ where $r, r'$ are irreducible and quadratic in both $x_1$ and $y_2$;
$\bullet$ involutive-quadratic: $R_1(x_1,y_2)=\frac{b_1}{a_1}(a_2 h_1(x_1) y_2^2 - h_2(x_1) y_2 + b_2 h_1(x_1))^2$ where $h_i(x_1)$ are the coefficients of $x_2^i$ in $\Tilde{g}^{(1)}$ for $i=0,1,2$ respectively;
$\bullet$ quartic: $R_1(x_1,y_2)$ is irreducible.
The proof will be given in the next section, where $R_1$ is factorized explicitly.
§.§ Symmetric Extension Theorem
In the extension diagram of a component of a non-singular coupling, consider switching the parameter between $x_1$ and $x_3$: $x_3$ can be rationally expressed by $x_1$ if and only if $x_1$ can be rationally expressed by $x_3$. One direction has already been proved in Lemma <ref>. For the other direction, carefully checking the proof of Lemma <ref>, one finds that if we swap $(\Tilde{g}^{(1)},\Tilde{g}^{(2)})$ to $(\Tilde{g}^{(2)},\Tilde{g}^{(1)})$ and consider the problem over $K_3$, all arguments remain valid. Thus $x_3$ is rational in $x_1$ if and only if $x_1$ is rational in $x_3$. In the extension diagram of a component of a non-singular coupling $(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$, $[K_i(x_j):K_i]=[K_j(x_i):K_j]$ for all $1\leq i, j \leq 3$. In addition, the extension diagrams with respect to $x_1$ and $x_3$ always belong to the same Case of Definition <ref>. The last statement is a bit subtle and can be regarded as a generalization of Lemma <ref>: no matter how we interchange the parameters between $x_1$ and $x_3$, so that the extension diagram may be turned upside down, the extension degrees always fit the same Case of Definition <ref>. For a non-singular coupling $(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$, every factor of $R_1(x_1,y_2)$ (or $\text{Res}(\Tilde{g}^{(1)},\Tilde{g}^{(2)}; x_2)$) has the same degree in $x_1$ and $y_2$ (or $x_1$ and $x_3$). For a non-singular coupling $(\Tilde{g}^{(1)}, g^{(2)}, H^{(1)}, H^{(2)})$, $R_1(x_1,y_2)$ is always of degree 4 in both $x_1$ and $y_2$. Recall from Proposition <ref> that each component corresponds to a factor $r$ of $R_1$. The degree of $y_2$ in $r$ reflects the extension degree $[K_1(y_2):K_1]$ of Definition <ref>. Given the component types that a coupling contains, the factorization of $R_1$ therefore becomes very predictable, thanks to Corollary <ref> and Proposition <ref>. (Theorem <ref>) Firstly, according to table (<ref>) and Corollary <ref>, the component types determine the degrees of $x_1$ and $y_2$ in each irreducible factor of $R_1$. In the light of Proposition <ref>, the irreducibility of the factors given in Theorem <ref> becomes obvious. The only thing left is to verify that $R_1$ factorizes as the theorem claims.
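In practice the predicted patterns can be observed directly with a computer algebra system. A sketch for the involutive-rational pattern in pseudo-planar data (the sample coefficients $(a_i,b_i,c_i,e_i)=(1,-1,-1,1)$ for both quads are hypothetical; they satisfy both chains of (<ref>) with $k=1$, $k'=-1$, and we assume without verifying that the pair is non-singular and irreducible):
\begin{verbatim}
# Sketch: observe the 'two squared linear factors' pattern predicted
# for involutive-rational couplings (hypothetical sample coefficients).
import sympy as sp

x1, x2, y2 = sp.symbols('x1 x2 y2')
g = lambda a, b, c, e, x, y: a*x**2*y**2 + b*x**2 + c*y**2 + x*y + e

g1 = g(1, -1, -1, 1, x1, x2)
g2 = g(1, -1, -1, 1, x2, y2)
R1 = sp.resultant(g1, g2, x2)
print(sp.factor(R1))   # expected shape: const*(y2 - x1)**2*(x1*y2 + 1)**2
\end{verbatim}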
For efficiency, we only provide suitable reparametrizations so that symbolic computation can factorize $R_1$ explicitly.
$\bullet$ purely-rational or half-quadratic: Keep the original expression of $g^{(i)}$ if irreducible; otherwise use the expressions in Lemma <ref>.
$\bullet$ involutive-rational or rational-quadratic: Set $F_1=0$, check the proof of Theorem <ref> for these two classes and replace $x_3$ by $y_2$. The proof is similar for $F_1=\infty$. It is unnecessary to consider the general type $F_1\notin \{0,\infty\}$ due to Proposition <ref>. However, the factorizations stated in Theorem <ref> still hold for the general type (see Appendix <ref>).
$\bullet$ purely-quadratic: Given table (<ref>) and Proposition <ref>, it is clear that $R_1=r r'$ where $r, r'$ are irreducible and quadratic in both $x_1$ and $y_2$. However, an explicit factorization is only useful under the restrictions of Proposition <ref>. Without loss of generality, we may assume $F_1=0$ and
\begin{equation}\label{repara} \frac{a_1 c_1}{a_2 b_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1}{1 - 4 a_2 e_2- 4 b_2 c_2}=\frac{b_1 e_1}{c_2 e_2}=m^2>0 \Rightarrow \left\{\begin{array}{l} a_1=\frac{b_1b_2(4b_2c_2m^2-4b_1c_1-m^2+1)}{4e_2(b_2c_2m^2-b_1c_1)}, \\ e_1=\frac{c_2e_2m^2}{b_1}, \\ a_2=\frac{b_1c_1(4b_2c_2m^2-4b_1c_1-m^2+1)}{4e_2m^2(b_2c_2m^2-b_1c_1)}. \\ \end{array}\right. \end{equation}
According to Corollary <ref>, $\{a_i, b_i, c_i, e_i\}$ includes at most one zero for $i=1, 2$. In addition, given $\frac{a_1 c_1}{a_2 b_2}=\frac{b_1 e_1}{c_2 e_2}>0$, the zeros, if they exist, must gather in one fraction. Hence it is harmless to assume $b_1c_1e_1b_2c_2e_2\neq 0$, so $a_1, e_1, a_2$ can be parametrized as in equation (<ref>). Nevertheless, we should mention that $b_2c_2m^2-b_1c_1\neq 0$. Otherwise
m^2=\frac{b_1c_1}{b_2c_2}\Rightarrow \left\{\frac{a_1}{a_2}=\frac{b_1}{c_2},\frac{c_1}{b_2}=\frac{e_1}{e_2} \right\}\Rightarrow m^2=\frac{a_1 e_1}{a_2 e_2}=\frac{b_1 c_1}{b_2 c_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1}{1 - 4 a_2 e_2- 4 b_2 c_2}=1,
contradicting systems (<ref>).
$\bullet$ involutive-quadratic: Check the counterparts in the proofs of Theorems <ref> and <ref>.
$\bullet$ quartic: Trivial.
Suppose the coupling $S$ is rational-quadratic or purely-quadratic. If $S$ admits a real component (Definition <ref>) of Case 4, the quadratic factors $r, r'$ of $R_1$ in Theorem <ref> have the form
(\alpha_{22} x_1^2 + \alpha_{02} ) y_2^2 + \alpha_{11}x_1 y_2 + (\alpha_{20} x_1^2+ \alpha_{00})
where the $\alpha_{ij}$ are given in table (<ref>). Without loss of generality, we may assume $F_1=0$ due to Proposition <ref>. Using the parametrizations suggested in the proof of Theorem <ref>, symbolic computation immediately yields table (<ref>) for rational-quadratic and purely-quadratic couplings respectively.
\begin{equation}\label{aij} \begin{tabular}{ |c|c|c|c| } \hline & rational-quadratic & rational-quadratic & purely-quadratic \\ \hline & $\frac{a_1}{a_2}=\frac{b_1}{c_2}=\frac{b_2}{c_1}=\frac{e_2}{e_1}=k$ & $\frac{a_1}{b_2}=\frac{b_1}{e_2}=\frac{a_2}{c_1}=\frac{c_2}{e_1}=\frac{1}{k}$ & $\frac{a_1 c_1}{a_2 b_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1}{1 - 4 a_2 e_2- 4 b_2 c_2}=\frac{b_1 e_1}{c_2 e_2}=m^2$\\ \hline $\alpha_{22}$ & $\frac{a_1b_1}{k^2}$ & $\frac{(a_1e_1 - b_1c_1)^2}{k^2}$ & $\frac{b_1(1-m^2-4b_1c_1+4b_2c_2m^2)}{4e_2m^2}$\\ \hline $\alpha_{20}$ & $(a_1e_1 - b_1c_1)^2$ & $a_1b_1$ & $\frac{b_1b_2(1\pm m)^2}{4(b_2c_2m^2-b_1c_1)}$\\ \hline $\alpha_{02}$ & $\frac{(a_1e_1 - b_1c_1)^2}{k^2}$ & $\frac{c_1e_1}{k^2}$ & $\frac{c_1c_2(1\pm m)^2}{4(b_2c_2m^2-b_1c_1)}$\\ \hline $\alpha_{11}$ & $\frac{2(a_1e_1 - b_1c_1)^2 - a_1e_1 - b_1c_1}{k}$ & $\frac{2(a_1e_1 - b_1c_1)^2 - a_1e_1 - b_1c_1}{k}$ & $\frac{(1\pm m)(b_1c_1 \pm b_2c_2m^3)-4(b_2c_2m^2-b_1c_1)^2}{2m(b_2c_2m^2-b_1c_1)}$\\ \hline $\alpha_{00}$ & $c_1e_1$ & $(a_1e_1 - b_1c_1)^2$ & $\frac{e_2(b_2c_2m^2-b_1c_1)}{b_1}$\\ \hline \end{tabular} \end{equation}
§ BACK TO THE MESH
§.§ Section outline and the classification of flexible meshes
As we have already seen, each flexible mesh uniquely determines a matching $M=(\Tilde{g}^{(1)},\Tilde{g}^{(2)},\Tilde{g}^{(3)},\Tilde{g}^{(4)})$, hence the mesh can be classified by its corresponding matching. According to Proposition <ref>, $M$ can be regarded as the sum of two couplings
S_1=(\Tilde{g}^{(1)},\Tilde{g}^{(2)},H^{(1)},H^{(2)}),\,\, S_2=(\Tilde{g}^{(3)},\Tilde{g}^{(4)},H^{(3)},H^{(4)}).
And by Theorem <ref>, we must have $\gcd(\text{Res}(\Tilde{g}^{(1)},\Tilde{g}^{(2)};x_2),\text{Res}(\Tilde{g}^{(3)},\Tilde{g}^{(4)};x_4))\neq 1$. Hence a very straightforward classification is given by the factors of this gcd. To achieve it, Theorem <ref> is a powerful tool. Notice that in the extension diagram of any component, the minimal polynomial of $x_3$ on $K_1$ can be induced from the minimal polynomial of $y_2$ on $K_1$ through the substitution $y_2=\frac{x_3+F_2}{1-F_2x_3}$. Consequently, the factorization of $\text{Res}(\Tilde{g}^{(1)},\Tilde{g}^{(2)}; x_2)$ can be induced from the factorization of $R_1(x_1,y_2)$ through the same substitution. Thus, by Theorem <ref> and Corollary <ref>, the classification of flexible meshes is naturally given as below. Given a flexible mesh that determines a non-singular matching $M$, the corresponding couplings are
S_1:=(\Tilde{g}^{(1)},\Tilde{g}^{(2)},H^{(1)},H^{(2)}),\,\, S_2:=(\Tilde{g}^{(3)},\Tilde{g}^{(4)},H^{(3)},H^{(4)}),
and we set
\Tilde{R}^{(1)}(x_1,x_3):=\text{Res}(\Tilde{g}^{(1)},\Tilde{g}^{(2)};x_2),\,\, \Tilde{R}^{(2)}(x_1,x_3):=\text{Res}(\Tilde{g}^{(3)},\Tilde{g}^{(4)};x_4),\,\, d_M:=\gcd(\Tilde{R}^{(1)},\Tilde{R}^{(2)}).
We say $M$ (or the mesh) is
$\bullet$ PR if both $S_1, S_2$ are purely-rational, so $d_M$ has linear factors only;
$\bullet$ HQ if both $S_1, S_2$ are half-quadratic, so $d_M$ has quadratic factors only;
$\bullet$ IR if for $i=1, 2$, $S_i$ is either involutive-rational or rational-quadratic and $d_M$ has linear factors only;
$\bullet$ RQ if both $S_1, S_2$ are rational-quadratic and $d_M$ has linear and quadratic factors;
$\bullet$ PQ if for $i=1, 2$, $S_i$ is either rational-quadratic or purely-quadratic and $d_M$ has quadratic factors only;
$\bullet$ IQ if both $S_1, S_2$ are involutive-quadratic, so $d_M$ has quadratic factors only;
$\bullet$ Q if both $S_1, S_2$ are quartic, so $d_M=\Tilde{R}^{(1)}$ is irreducible;
$\bullet$ PR + IR if one of the $S_i$ is purely-rational and the other is either involutive-rational or rational-quadratic, so $d_M$ has linear factors only;
$\bullet$ HQ + IQ if one of the $S_i$ is half-quadratic and the other is involutive-quadratic, so $d_M$ has quadratic factors only;
$\bullet$ HQ + PQ if one of the $S_i$ is half-quadratic and the other is either rational-quadratic or purely-quadratic, so $d_M$ has quadratic factors only;
$\bullet$ PQ + IQ if one of the $S_i$ is involutive-quadratic and the other is either rational-quadratic or purely-quadratic, so $d_M$ has quadratic factors only.
The above definition proves Theorem <ref>, and in the rest of this section $M, S_1, S_2, \Tilde{R}^{(1)},\Tilde{R}^{(2)}$, and $d_M$ retain their meanings from Definition <ref>.
§.§ General construction of matchings
The construction of matchings is equivalent to solving a polynomial system in
\begin{equation}\label{as-f4} \{a_3, b_3, c_3, e_3, a_4, b_4, c_4, e_4, F_2, F_3, F_4\}. \end{equation}
For example, we arbitrarily fix $\{\Tilde{g}^{(1)}, g^{(2)}\}$, that is,
\{a_1, b_1, c_1, e_1, a_2, b_2, c_2, e_2, F_1\};
then we have an explicit factorization of $R_1(x_1,y_2)$ by Theorem <ref>, and therefore the factorization of $\Tilde{R}^{(1)}$ with $F_2$ as a parameter in it. On the other hand, to construct a specific matching we need to settle the class of $S_2$, so that $S_1$ and $S_2$ form a valid combination in Definition <ref>. And once we know the class of $S_2$, more restrictions from Theorem <ref> or <ref> are imposed on
\{a_3, b_3, c_3, e_3, a_4, b_4, c_4, e_4, F_3, F_4\}.
Meanwhile, Theorem <ref> also allows us to factorize $\Tilde{R}^{(2)}$ symbolically. Finally, given $d_M\neq 1$, there must be two irreducible factors of $\Tilde{R}^{(1)}, \Tilde{R}^{(2)}$ respectively such that their coefficients of $x_1^ix_3^j$ are proportional; hence we obtain the desired polynomial system in (<ref>). A matching $M$ always comes with an infinite zero set $Z(M) \cong Z(S_1)\times_{\{x_1, x_3\}} Z(S_2)$, and the essential reason for $Z(M)$ being infinite is that there are components $W_1, W_2$ of $Z(S_1), Z(S_2)$ respectively such that $W_1\times_{\{x_1, x_3\}} W_2$ is infinite. It is easy to see that the first seven classes, from PR to Q in Definition <ref>, require $W_1, W_2$ to belong to the same Case of Definition <ref>; hence we call them simple classes. The rest, from PR + IR to PQ + IQ, are called hybrid classes, since there $W_1, W_2$ are required to be of different Cases. In this regard, given a matching $M$ with corresponding couplings $S_1, S_2$, a component $W_i$ of $S_i$ is said to be valid if for $j\neq i$ there exists a component $W_j$ of $S_j$ such that $W_i\times_{\{x_1, x_3\}} W_j$ is an infinite set.
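The recipe above can be prototyped directly. The Python/SymPy sketch below (hypothetical coefficients; it only runs the gcd test of Theorem <ref>, not the geometric admissibility checks of Definition <ref>) builds the second pair as a mirrored copy of the first, in the spirit of the reflection example, and confirms $d_M\neq 1$:
\begin{verbatim}
# Sketch: the gcd-of-resultants test for a matching.  The second pair
# is a mirrored copy of the first (reflection), so the two resultants
# agree and the gcd is non-trivial.  Coefficients are hypothetical.
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
g = lambda a, b, c, e, x, y: a*x**2*y**2 + b*x**2 + c*y**2 + x*y + e

g1 = g(1, 2, 3, 5, x1, x2)
g2 = g(2, 1, 1, 3, x2, x3)
g3 = g(2, 1, 1, 3, x4, x3)   # = g2 with swapped arguments
g4 = g(1, 2, 3, 5, x1, x4)   # = g1 with swapped arguments

R12 = sp.resultant(g1, g2, x2)   # polynomial in x1, x3
R34 = sp.resultant(g3, g4, x4)   # polynomial in x1, x3
d = sp.gcd(R12, R34)
print('matching:', len(d.free_symbols) > 0)  # non-constant gcd <=> matching
\end{verbatim}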
§.§ Simple classes
§.§.§ PR: $S_1, S_2$ only admit valid components of Case 1.
In such a class both $S_1, S_2$ are purely-rational, so all $\Tilde{g}^{(i)}$ and $g^{(i)}$ are reducible. The name 'purely-rational' is based on the fact that in the extension diagram of every component of $S_i$, all field extensions are trivial. Geometrically speaking, all forming quads in the spherical linkage of a PR mesh are (anti)isograms (Corollary <ref>). This means the PR meshes are exactly those described in Theorem <ref>. For illustration, we provide an algebraic construction when all quads are antiisograms; in other words, all PR matchings $M$ in which $a_i=e_i=0$, $\forall i\in\{1,2,3,4\}$ (Lemma <ref>), can be constructed as below (a code sketch of the procedure follows the example). Here we construct matchings $(\Tilde{g}^{(1)},\Tilde{g}^{(2)},\Tilde{g}^{(3)},\Tilde{g}^{(4)})$ in which $a_i=e_i=0$ for all $i\in\{1,2,3,4\}$. First, set
\{b_1\neq 0,c_1\neq 0,b_2\neq 0,c_2\neq 0,b_3\neq 0,c_3\neq 0,c_4\neq 0,F_1,F_2\}
as free parameters and $\{b_4\neq 0,F_3,F_4\}$ undetermined. Following the proof of Theorem <ref> we have
N_i=\left(\begin{array}{cc} k_i & -F_i \\ k_iF_i & 1 \\ \end{array}\right) \text{ or } N_i=\left(\begin{array}{cc} 0 & -1 \\ k_i & 0 \\ \end{array}\right)
where $k_i=\frac{-1\pm\sqrt{1-4b_ic_i}}{2c_i}\neq 0$. Since any scalar matrix is the identity as a Möbius transformation,
\left (\begin{array}{cc} n & 0 \\ 0 & n \\ \end{array}\right)(x)=\frac{nx+0}{0x+n}=x, \forall n\neq 0,
we may regard $N_i\in PGL(2,\rr)$; hence the inverse of a matrix is, up to a scalar, simply its adjugate. Simple observation shows
\begin{array}{l} N_i\in T:=\left\{ \left (\begin{array}{cc} a & b \\ c & d \\ \end{array}\right) \in \text{PGL}(2,\rr): ab+cd=0 \right\}, \\ N_i^{-1}\in T':=\left\{ \left (\begin{array}{cc} a & b \\ c & d \\ \end{array}\right) \in \text{PGL}(2,\rr): ac+bd=0 \right\}, \\ \end{array}
which implies $N_3N_2N_1\in T'$, since $N_4N_3N_2N_1$ is a scalar. Notice that the map
f:\,\,\rp \rightarrow \rp;\,\, x \mapsto \frac{2x}{1-x^2}
is onto, since $f(\tan(\alpha))=\tan(2\alpha)$. Hence $\exists F_3 \in \rp$ such that
\frac{2F_3}{1-F_3^2}=\frac{2(ac+bd)}{c^2+d^2-a^2-b^2} \text{ where } \left (\begin{array}{cc} a & b \\ c & d \\ \end{array}\right):= \left (\begin{array}{cc} k_3 & 0 \\ 0 & 1 \\ \end{array}\right)N_2N_1.
This is equivalent to saying that $\exists F_3 \in \rp$ such that
\left(\begin{array}{cc} a' & b' \\ c' & d' \\ \end{array}\right):= \left (\begin{array}{cc} 1 & -F_3 \\ F_3 & 1 \\ \end{array}\right) \left (\begin{array}{cc} a & b \\ c & d \\ \end{array}\right) \in T', \text{ i.e. } a'c'+b'd'=0,
so we must have $a'\neq 0$ or $b'\neq 0$. Thus in $PGL(2,\rr)$ we have
N_4=\left (\begin{array}{cc} d' & -b' \\ -c' & a' \\ \end{array}\right)=\left (\begin{array}{cc} \frac{d'}{a'} & -\frac{b'}{a'} \\ -\frac{c'}{a'} & 1 \\ \end{array}\right) \text{ or } N_4=\left (\begin{array}{cc} 0 & -1 \\ -\frac{c'}{b'} & 0 \\ \end{array}\right),
i.e. $(k_4,F_4)=(\frac{d'}{a'},\frac{b'}{a'})$ or $(-\frac{c'}{b'},\infty)$. Finally, although $c_4$ was said to be free, we need to make sure $c_4k_4\neq-1$, so that $b_4=-c_4k_4^2-k_4\neq 0$ (recall $k_i=\frac{-1\pm\sqrt{1-4b_ic_i}}{2c_i}$). For each $i\in\{1,2,3,4\}$, a PR mesh requires either $a_i=e_i=0$ or $b_i=c_i=0$; however, the above example is typical, and the approach can be adapted to any other case after minor changes.
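The double-angle trick above can be carried out mechanically. A Python/SymPy sketch with hypothetical sample values $k_1,F_1,k_2,F_2,k_3$ (the remaining data $F_3$ and $N_4$ are computed, and the final product is verified to be scalar):
\begin{verbatim}
# Sketch of the PR construction: choose F3 by the double-angle trick,
# then read off N4 as the adjugate.  Sample k_i, F_i are hypothetical.
import sympy as sp

def N(k, F):
    return sp.Matrix([[k, -F], [k*F, 1]])

k1, F1 = sp.Rational(2), sp.Rational(1, 3)
k2, F2 = sp.Rational(-1, 2), sp.Rational(1, 5)
k3 = sp.Rational(3)

M = sp.diag(k3, 1) * N(k2, F2) * N(k1, F1)
a, b, c, d = M
F3 = sp.symbols('F3')
sols = sp.solve(sp.Eq(2*F3/(1 - F3**2),
                      2*(a*c + b*d)/(c**2 + d**2 - a**2 - b**2)), F3)
F3v = sols[0]

P = sp.Matrix([[1, -F3v], [F3v, 1]]) * M    # = N3*N2*N1
ap, bp, cp, dp = P
assert sp.simplify(ap*cp + bp*dp) == 0      # P lies in T'
N4 = sp.Matrix([[dp, -bp], [-cp, ap]])      # adjugate of P, up to scale
print(sp.simplify(N4 * P))                  # a scalar matrix
\end{verbatim}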
§.§.§ HQ: $S_1, S_2$ only admit valid components of Case 2.
In such a class both $S_1, S_2$ are half-quadratic, so each of them contains exactly one reducible $\Tilde{g}^{(i)}$. The extension diagram of a valid component of $S_1$ leads to either $x_2\in K_1$ or $x_2\in K_3$, but not both; that is why we call it 'half-quadratic'. Note that we only need to consider the matchings in which $\Tilde{g}^{(1)}, \Tilde{g}^{(3)}$ are reducible or $\Tilde{g}^{(2)}, \Tilde{g}^{(4)}$ are reducible. Indeed, when the reducible polynomials are adjacent, say $\Tilde{g}^{(1)}, \Tilde{g}^{(4)}$, writing the matching as $(\Tilde{g}^{(4)},\Tilde{g}^{(1)},\Tilde{g}^{(2)},\Tilde{g}^{(3)})$ places it in the PR + IR class. Example <ref> gives a way to construct pseudo-planar HQ matchings in which $\Tilde{g}^{(1)}, \Tilde{g}^{(3)}$ are reducible; clearly, after a rotation, it also works for those in which $\Tilde{g}^{(2)}, \Tilde{g}^{(4)}$ are reducible.
§.§.§ IR: $S_1, S_2$ only admit valid components of Case 3.
In such a class all $\Tilde{g}^{(i)}$ are irreducible and $d_M$ has linear factors only. This means that in the extension diagram of any valid component we have $x_3\in K_1$. So $\Tilde{g}^{(1)}(x_1,t),\Tilde{g}^{(2)}(t,x_3)\in K_1[t]$ determine the same minimal polynomial of $x_2$ on $K_1$, which means they have two common roots in $K_1(x_2)$. The same happens for $(\Tilde{g}^{(3)},\Tilde{g}^{(4)})$ and $x_4$; the name 'involutive-rational' comes from here. Example <ref> shows how to construct pseudo-planar matchings $M$ in which $S_1, S_2$ admit valid components of Case 3. If both $S_1, S_2$ are rational-quadratic, we only need to adjust a few coefficients so that $\Tilde{R}^{(1)},\Tilde{R}^{(2)}$ coincide only on their linear factors and $S_i$ has no valid components of Case 4. As for matchings of general type, notice that real components of Case 3 only exist when $F_1, F_3\in\{0, \infty\}$ (see Proposition <ref>). Suppose $W_1, W_2$ are real valid components of $S_1, S_2$ respectively such that $W_1\times_{\{x_1, x_3\}} W_2$ is an infinite set. According to Theorem <ref>, we are allowed to merge the extension diagrams
\scalebox{0.75}{\xymatrix{ K_1(x_2,x_3) & & K_1(x_4,x_3)\\ K_1(x_2) \ar@{=}[u]^{\Tilde{g}^{(2)}} & K_1(y_2)=K_1(x_3)\ar@{-}[ul] \ar@{-}[ur] & K_1(x_4) \ar@{=}[u]_{\Tilde{g}^{(3)}}\\ & K_1\ar@{-}[ul]^{\Tilde{g}^{(1)}} \ar@{=}[u] \ar@{-}[ur]_{\Tilde{g}^{(4)}}& \\ }}
According to the proof of Corollary <ref>, we have $y_2=kx_1$ or $y_2=k/x_1$. After the substitutions $y_4=\frac{F_4+x_{1}}{1-F_4x_{1}}$ and $x_3=\frac{y_2-F_2}{1+F_2y_2}$, the relation between $(y_4,x_3)$ should keep the same form as that between $(y_2,x_1)$. This immediately yields the following restrictions.
\begin{equation}\label{IR-g} \{F_2F_4=k=\pm 1\}\cup\{-F_2/F_4=k=\pm 1\}\cup\{-F_2F_4=k=\pm 1\}\cup\{F_2/F_4=k=\pm 1\}. \end{equation}
§.§.§ RQ: $S_1, S_2$ admit valid components of Case 3 and Case 4.
In such a class $\Tilde{R}^{(1)},\Tilde{R}^{(2)}$ have the same factorization: an irreducible quadratic factor times a squared linear factor. This explains the name 'rational-quadratic'. We only need to consider the pseudo-planar type, because of the following proposition. A matching $M$ must be pseudo-planar if, for $i=1,2$, the coupling $S_i$ admits a valid component $W_i$ of Case 4 such that
(W_1\cap (\rp)^5)\times_{\{x_1, x_3\}} (W_2\cap (\rp)^5)
is an infinite set. Clearly, $W_1, W_2$ are real components of $S_1, S_2$ respectively, hence $F_1, F_3\in \{0, \infty\}$ by Proposition <ref>.
Moreover, due to Theorem <ref>, we can merge the extension diagrams
\begin{equation}\label{PQmerge} \scalebox{0.75}{\xymatrix{ K_1(x_2) \ar@{=}[r] & K_1(x_3) & K_1(x_4) \ar@{=}[l]\\ & K_1\ar@{-}[ul]^{\Tilde{g}^{(1)}} \ar@{-}[u] \ar@{-}[ur]_{\Tilde{g}^{(4)}}& \\ }} \end{equation}
where $K_1(x_2)=K_1(x_3)=K_1(x_4)$ is based on Lemma <ref>. The same lemma also tells us that the coupling $(\Tilde{g}^{(4)},\Tilde{g}^{(1)},H^{(4)},H^{(1)})$ admits a real component of Case 3 or Case 4. And again, by Proposition <ref> we have $F_4\in \{0, \infty\}$. Finally, thanks to Theorem <ref>, the same argument yields $F_2\in \{0, \infty\}$. Thus the whole matching must be pseudo-planar. The construction is easy: first use Example <ref> to get candidates such that $\Tilde{R}^{(1)},\Tilde{R}^{(2)}$ share at least a linear factor; then use Corollary <ref> to get the quadratic factors and make the corresponding coefficients proportional.
§.§.§ PQ: $S_1, S_2$ only admit valid components of Case 4.
In such a class we always get a merged diagram (<ref>) in which
K_1(x_2)=K_1(x_3)=K_1(x_4)=K_1(x_2,x_3,x_4)\text{ and } [K_1(x_2):K_1]=2.
The name 'purely-quadratic' is quite straightforward. The construction is basically as suggested in Section <ref>. However, given that $M$ is pseudo-planar (Proposition <ref>), and according to Corollary <ref>, the polynomial system reduces to four equations derived from the proportional coefficients $\alpha_{ij}$. Do not forget, however, to remove the solutions of the RQ class when both $S_1, S_2$ are taken to be rational-quadratic.
§.§.§ IQ: $S_1, S_2$ only admit valid components of Case 5.
Similar to IR matchings, $\Tilde{g}^{(1)}, \Tilde{g}^{(2)}$ determine the same minimal polynomial of $x_2$ on $K_1(x_3)$, and $[K_1(x_3):K_1]=2$; so it is called 'involutive-quadratic'. This class belongs to the so-called orthodiagonal involutive type, which is well studied by Aikyn in [35]. They gave a systematic construction for such matchings and found the equivalent condition for the valid components being real. However, we have an important result for pseudo-planar IQ matchings. In a pseudo-planar matching $M$ with couplings $S_1$ and $S_2$, $S_1$ is involutive-quadratic if and only if $S_2$ is involutive-quadratic. Suppose $S_i$ is involutive-quadratic, so its valid component must be of Case 5. From Lemma <ref> we know that the irreducible factor of $\Tilde{R}^{(i)}$ is quadratic in both $x_1, x_3$ and contains odd-degree terms only. On the contrary, going through the pseudo-planar couplings of the other classes, the irreducible factors of the resultant, if quadratic, contain even-degree terms only (see Example <ref> and Corollary <ref>). Hence the other coupling must be involutive-quadratic as well.
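The parity criterion used in this proof is easy to observe computationally. A Python/SymPy sketch (the sample coefficients are hypothetical; they satisfy the chain of (<ref>) with ratio 2 and $a_1/a_2\neq b_2/c_1$, while non-singularity is assumed rather than verified):
\begin{verbatim}
# Sketch: for an involutive-quadratic pseudo-planar coupling, the
# squared factor of the resultant contains odd-degree terms only.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
g = lambda a, b, c, e, x, y: a*x**2*y**2 + b*x**2 + c*y**2 + x*y + e

g1 = g(2, 1, 4, 2, x1, x2)   # a1/b1 = c1/e1 = 2
g2 = g(2, 6, 1, 3, x2, x3)   # a2/c2 = b2/e2 = 2, and a1*c1 != a2*b2

def odd_terms_only(p):
    return all((sp.degree(t, x1) + sp.degree(t, x3)) % 2 == 1
               for t in sp.expand(p).as_ordered_terms())

_, factors = sp.factor_list(sp.resultant(g1, g2, x2))
for f, mult in factors:
    print(mult, odd_terms_only(f), sp.expand(f))
\end{verbatim}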
§.§.§ Q: $S_1, S_2$ only admit valid components of Case 6.
This is the irreducible class for $\Tilde{R}^{(1)},\Tilde{R}^{(2)}$: to form a matching we must have $\Tilde{R}^{(1)}=c\Tilde{R}^{(2)}$ for a constant $c$. We call it 'quartic' simply because all components of the couplings lead to $[K_1(x_3):K_1]=4$. Again, the construction is as suggested in Section <ref>, and Example <ref> (reflection) shows that a solution exists. In addition, it is easy to verify that by relabeling the quads in Example <ref>, e.g. considering the couplings $(\Tilde{g}^{(2)},\Tilde{g}^{(3)},H^{(2)},H^{(3)})$ and $(\Tilde{g}^{(4)},\Tilde{g}^{(1)},H^{(4)},H^{(1)})$, one may obtain a non-reflection Q mesh. Hellmuth Stachel conjectured a long time ago that irreducible matchings do not exist. In our language: in a flexible mesh determining a matching $M$, $\text{Res}(\Tilde{g}^{(1)},\Tilde{g}^{(2)}; x_2)$ is irreducible only if $\text{Res}(\Tilde{g}^{(2)},\Tilde{g}^{(3)}; x_3)$ is reducible; or, for short, $[K_1(x_3):K_1]=4\Rightarrow[K_2(x_4):K_2]\leq 2$. The planar version[All forming panels in Fig. <ref> left are planar.] of the conjecture has been proved by Erofeev in [36]. If the conjecture is also true for the non-planar type, the construction of Q matchings can be reduced to that of the other classes after a rotation.
§.§ Hybrid classes
§.§.§ PR + IR: $S_1, S_2$ admit valid components of Case 1 and Case 3 respectively, or vice versa.
The construction for this class is a byproduct of Example <ref>. By symmetry, we assume $S_1$ admits a valid component of Case 3, hence $S_2$ is purely-rational. In addition, we may assume $F_1\in \{0, \infty\}$ due to Proposition <ref>. We construct matchings $M$ in which $F_1=0$ and the corresponding couplings $S_1, S_2$ admit valid components $W_1, W_2$ of Case 3 and Case 1 respectively, i.e. $\Tilde{g}^{(1)},\Tilde{g}^{(2)}$ are irreducible and $\Tilde{g}^{(3)},\Tilde{g}^{(4)}$ are reducible. According to Theorem <ref>, the merged extension diagram of $W_1$ and $W_2$ is
\scalebox{0.75}{\xymatrix{ & K_1(x_2,y_2) &\\ K_1(x_2) \ar@{=}[ur]^{g^{(2)}} & K_1(x_3)\ar@{-}[u] & K_1(x_4) \ar@{=}[l]\\ & K_1\ar@{-}[ul]^{\Tilde{g}^{(1)}} \ar@{=}[u] \ar@{=}[ur]& \\ }}
Since $F_1=0$, from Corollary <ref> and its proof we know that in the diagram either $y_2=kx_1$ or $y_2=k/x_1$, depending on whether the coefficients of $g^{(1)}, g^{(2)}$ satisfy
\left\{\frac{a_1}{a_2}=\frac{b_1}{c_2}=\frac{b_2}{c_1}=\frac{e_2}{e_1}=k\right\} \text{ or } \left\{\frac{a_1}{b_2}=\frac{b_1}{e_2}=\frac{a_2}{c_1}=\frac{c_2}{e_1}=\frac{1}{k}\right\}
where $k\neq 0$ is a constant. In either case, the relation between $(x_1,x_3)$ is a Möbius transformation $x_3=\frac{px_1+q}{rx_1+s}$ where $p,q,r,s$ are parameterized by $k$ and $F_2$. The rest of the construction of $\Tilde{g}^{(3)},\Tilde{g}^{(4)}$ goes the same way as in Theorem <ref> or Example <ref>, i.e.
N_4N_3\left (\begin{array}{cc} p & q \\ r & s \\ \end{array}\right)
must be a scalar matrix.
§.§.§ HQ + IQ: $S_1, S_2$ admit valid components of Case 2 and Case 5 respectively, or vice versa.
The construction is as suggested in Section <ref>. Nevertheless, we mention that exactly one of the $S_i$ is half-quadratic and the other involutive-quadratic, hence only one $\Tilde{g}^{(i)}$ is reducible. We may temporarily assume it is $\Tilde{g}^{(2)}$, so there are valid components $W_1, W_2$ of $S_1, S_2$ respectively such that $W_1\times_{\{x_1, x_3\}} W_2$ is an infinite set. The following diagrams come from the same merged diagram of $W_1$ and $W_2$, with respect to the parameters $x_1, x_4$ respectively.
\scalebox{0.75}{\xymatrix{ K_1(x_2,x_3) & & K_1(x_4,x_3)\\ K_1(x_2) \ar@{=}[u] & K_1(x_3)\ar@{=}[ul] \ar@{=}[l] \ar@{-}[ur] & K_1(x_4) \ar@{-}[u]_{\Tilde{g}^{(3)}}\\ & K_1\ar@{-}[ul]^{\Tilde{g}^{(1)}} \ar@{-}[u] \ar@{-}[ur]_{\Tilde{g}^{(4)}}& \\ }} \,\,\,\,\,\,\,\,\, \scalebox{0.75}{\xymatrix{ K_4(x_1,x_2) & & K_4(x_2,x_3)\\ K_4(x_1) \ar@{-}[u]^{\Tilde{g}^{(1)}} & K_4(x_2)\ar@{=}[ur] \ar@{=}[r] \ar@{-}[ul] & K_4(x_3) \ar@{=}[u]\\ & K_4\ar@{-}[ul]^{\Tilde{g}^{(4)}} \ar@{-}[u] \ar@{-}[ur]_{\Tilde{g}^{(3)}}& \\ }}
According to Definition <ref> we have $K_1(x_2)=K_1(x_3)\neq K_1(x_4)$ and $[K_4(x_2):K_4]=[K_4(x_3):K_4]=2$.
So by Lemma <ref>, the coupling $(\Tilde{g}^{(4)},\Tilde{g}^{(1)},H^{(4)},H^{(1)})$, like $S_2$, also admits a valid component of Case 5. This means that no matter how we label the polynomials $\Tilde{g}^{(i)}$, an HQ + IQ matching is always an HQ + IQ matching. Therefore, without loss of generality, we can always assume $\Tilde{g}^{(2)}$ is reducible and both couplings $S_2$ and $(\Tilde{g}^{(4)},\Tilde{g}^{(1)},H^{(4)},H^{(1)})$ are involutive-quadratic. Hence by Theorem <ref> or <ref> we have
\left\{\frac{a_4}{b_4}=\frac{c_4}{e_4}=\frac{a_1}{c_1}=\frac{b_1}{e_1}(=-1), \frac{a_3}{b_3}=\frac{c_3}{e_3}=\frac{a_4}{c_4}=\frac{b_4}{e_4}(=-1)\right\}.
Adding this to the construction process considerably reduces the number of parameters. A solution exists according to the following example. We first set
b_1=-a_1\neq 0, e_1=-c_1\neq 0, a_2=e_2=0, b_2=-c_2-1, b_2c_2\neq 0, F_1=1, F_2=0
plus the inequalities in Definition <ref>. A simple calculation shows that $\Tilde{R}^{(1)}$ has an irreducible factor, which we also want to be a factor of $\Tilde{R}^{(2)}$. So finally, let $F_3=F_4=0$ and use Lemma <ref> to construct $g^{(3)}$ and $g^{(4)}$.
§.§.§ HQ + PQ: $S_1, S_2$ admit valid components of Case 2 and Case 4 respectively, or vice versa.
As for the last class, a similar argument shows that no matter how we label the polynomials $\Tilde{g}^{(i)}$, an HQ + PQ matching is always an HQ + PQ matching. Without loss of generality, we can always assume $\Tilde{g}^{(2)}$ is reducible and both couplings $S_2$ and $(\Tilde{g}^{(4)},\Tilde{g}^{(1)},H^{(4)},H^{(1)})$ admit valid components of Case 4. And due to Proposition <ref>, we may further assume $F_3, F_4\in \{0,\infty\}$. We first see an example of pseudo-planar type. We construct HQ + PQ matchings in which all $F_i=0$ and $g^{(2)}$ is reducible. Since $S_2$ admits a valid component of Case 4, by Theorem <ref> $g^{(3)}$ and $g^{(4)}$ can be constructed via the first of systems (<ref>). Hence by Corollary <ref> we get an explicit irreducible factor of $\Tilde{R}^{(2)}$:
(\alpha_{22} x_1^2 + \alpha_{02}) x_3^2 + \alpha_{11} x_1 x_3 + (\alpha_{20} x_1^2+ \alpha_{00}).
If necessary, we adjust a few coefficients to make sure $\alpha_{11} \neq 0$. On the other hand, by Lemma <ref>, we arbitrarily set $g^{(2)}$ reducible by letting $a_2=e_2=0$ or $b_2=c_2=0$, so that $g^{(2)}$ has a factor of the form $y_2-kx_2$ or $x_2y_2-k$. We first consider the case $(y_2-kx_2)|g^{(2)}$. In the merged extension diagram of the valid components of $S_1, S_2$, the minimal polynomial of $x_3$ on $K_1$ can be induced by substitutions, so $\Tilde{R}^{(1)}$ has an irreducible factor
(a_1 x_1^2 + c_1 ) x_3^2 + k x_1 x_3 + k^2(b_1 x_1^2+ e_1)
which should be the same factor of $\Tilde{R}^{(2)}$, i.e. $g^{(1)}$ is determined by
\frac{a_1}{\alpha_{22}}=\frac{c_1}{\alpha_{02}}=\frac{k}{\alpha_{11}}=\frac{k^2 b_1}{\alpha_{20}}=\frac{k^2e_1}{\alpha_{00}}
where $\alpha_{11}\neq 0$, as required, is mandatory. The case $(x_2y_2-k)|g^{(2)}$ is similar. Clearly, there is no difficulty in applying the above method to construct all pseudo-planar HQ + PQ matchings, since all polynomials remain in the same form. In fact, Example <ref> is even typical for the general type. Notice that we have restricted $F_3, F_4\in \{0,\infty\}$, so for the general type we only allow $F_1, F_2\notin \{0,\infty\}$.
Likewise, after $g^{(1)}(x_1,y_1)$ goes through the three steps of substitutions

\left(y_1=\frac{x_2+F_1}{1-F_1x_2}\right) \rightarrow \left(x_2=\frac{y_2}{k} \text{ or } x_2=\frac{k}{y_2} \right) \rightarrow \left(y_2=\frac{x_3+F_2}{1-F_2x_3}\right),

we still get an irreducible polynomial of the form

(\alpha_{22} x_1^2 + \alpha_{02}) x_3^2 + \alpha_{11} x_1 x_3 + (\alpha_{20} x_1^2+ \alpha_{00})

that can match the irreducible factor of $\Tilde{R}^{(2)}$ without odd-degree terms. This leads to the same consequence (<ref>) as in the IR class (replace $F_4$ by $F_1$). The rest of the work goes the same as in Example <ref>.

§.§.§ PQ + IQ: $S_1, S_2$ admit valid components of Case 4 and Case 5 respectively, or vice versa.

The construction is as suggested in Section <ref>. The solution exists according to the following example.

Suppose $S_1, S_2$ admit valid components of Case 4 and Case 5 respectively; hence we may assume $F_1=0$ due to Proposition <ref>. Let $F_2=0$; according to Corollary <ref> we have an irreducible factor of $\Tilde{R}^{(1)}$

(\alpha_{22} x_1^2 + \alpha_{02}) x_3^2 + \alpha_{11} x_1 x_3 + (\alpha_{20} x_1^2+ \alpha_{00}).

On the other hand, setting $F_3=\pm 1, F_4=0$, we get an irreducible factor of $\Tilde{R}^{(2)}$ by Lemma <ref>

4a_4(a_3x_3^2 + c_3) x_1^2 +x_3 x_1 + 4b_4(a_3x_3^2+c_3).

Thus the coefficients of $g^{(1)},g^{(2)}$ can be determined by equation (<ref>) and

\frac{4a_3a_4}{\alpha_{22}}=\frac{4a_3b_4}{\alpha_{02}}=\frac{1}{\alpha_{11}}=\frac{4c_3a_4}{\alpha_{20}}=\frac{4c_3b_4}{\alpha_{00}}.

We refer back to Example <ref> for a physical model.

§ CONCLUSION AND FUTURE WORK

In this article, we developed an algebraic approach to classify the flexible Kokotsakis polyhedra without (anti)deltoids, also known as non-singular $3\times 3$ flexible meshes. Among all 11 different classes, most of them are provided with systematic constructions, and for the rest, partial examples are given to demonstrate that no class is empty. The natural future work is to continue with the singular meshes, which contain (anti)deltoids. Based on the results we have obtained so far, our algebraic method still works, but in an asymmetric way, unlike in the non-singular cases. On the other hand, the singularity may occur in many different ways, hence a substantial reduction effort is needed to provide a classification as concise as possible.

§ ACKNOWLEDGMENTS

The authors are grateful to Helmut Pottmann for his comprehensive introduction to the peer works and to Dmitry A. Lyakhov for his feedback on this manuscript. This work has been supported by KAUST baseline funding.

[1] Chuck Hoberman. Transformation in architecture and design. In Transportable Environments 3, pages 70–79. Taylor & Francis, 2006.
[2] Peter Dieleman, Niek Vasmel, Scott Waitukaitis, and Martin van Hecke. Jigsaw puzzle design of pluripotent origami. Nature Physics, 16(1):63–68, 2020.
[3] Sebastien Callens and Amir Zadpoor. From flat sheets to curved geometries: Origami and kirigami approaches. Materials Today, 21(3):241–264, 2018.
[4] Zirui Zhai, Lingling Wu, and Hanqing Jiang. Mechanical metamaterials based on origami and kirigami. Appl. Phys. Rev., 8, 2021.
[5] Caigui Jiang, Klara Mundilova, Florian Rist, Johannes Wallner, and Helmut Pottmann. Curve-pleated structures. ACM Transactions on Graphics, 38(6):169:1–13, 2019.
[6] Levi H Dudte, Etienne Vouga, Tomohiro Tachi, and L Mahadevan. Programming curvature using origami tessellations. Nature Materials, 15(5):583, 2016.
[7] Mina Konaković, Keenan Crane, Bailin Deng, Sofien Bouaziz, Daniel Piker, and Mark Pauly.
Beyond developable: Computational design and fabrication with auxetic materials. ACM Transactions on Graphics, 35(4):89:1–11, 2016.
[8] Mina Konaković-Luković, Julian Panetta, Keenan Crane, and Mark Pauly. Rapid deployment of curved surfaces via programmable auxetics. ACM Transactions on Graphics, 37(4):106:1–13, 2018.
[9] Amin Jamalimehr, Morad Mirzajanzadeh, Abdolhamid Akbarzadeh, and Damiano Pasini. Rigidly flat-foldable class of lockable origami-inspired metamaterials with topological stiff states. Nature Commun., 13, 2022.
[10] Kai Xiao, Zihe Liang, Bihui Zou, Xiang Zhou, and Jaehyung Ju. Inverse design of 3D reconfigurable curvilinear modular origami structures using geometric and topological reconstructions. Nature Commun., 13, 2022.
[11] Julian Lienhard, Simon Schleicher, Simon Poppinga, Tom Masselter, Markus Milwich, Thomas Speck, and Jan Knippers. Flectofin: a hingeless flapping mechanism inspired by nature. Bioinspir. Biomim., 6, 2011.
[12] Tom Masselter, Simon Poppinga, Julian Lienhard, Simon Schleicher, and Thomas Speck. The flower of Strelitzia reginae as concept generator for the development of a technical deformation system for architectural purposes. In Proc. 7th Plant Biomechanics Int. Conf., pages 389–392. INRIA, 2012.
[13] Stefan Pillwein, Kurt Leimer, Michael Birsak, and Przemyslaw Musialski. On elastic geodesic grids and their planar to spatial deployment. ACM Transactions on Graphics, 39(4):12, 2020.
[14] Stefan Pillwein, Johanna Kübert, Florian Rist, and Przemyslaw Musialski. Design and fabrication of elastic geodesic grid structures. In SCF '20: Proceedings of the ACM Symposium on Computational Fabrication, page 11, 2020.
[15] Enrique Soriano, Ramon Sastre, and Dionis Boixader. G-shells: Flat collapsible geodesic mechanisms for gridshells. In Proceedings of IASS Annual Symposia, volume 2019, pages 1–8. International Association for Shell and Spatial Structures (IASS), 2019.
[16] Julian Panetta, Mina Konaković-Luković, Florin Isvoranu, Etienne Bouleau, and Mark Pauly. X-shells: A new class of deployable beam structures. ACM Transactions on Graphics (TOG), 38(4):1–15, 2019.
[17] Ruslan Guseinov, Eder Miguel, and Bernd Bickel. CurveUps: Shaping objects from flat plates with tension-actuated curvature. ACM Transactions on Graphics, 36(4):64:1–12, 2017.
[18] Luigi Malomo, Jesús Pérez, Emmanuel Iarussi, Nico Pietroni, Eder Miguel, Paolo Cignoni, and Bernd Bickel. FlexMaps: Computational design of flat flexible shells for shaping 3D objects. ACM Transactions on Graphics, 37(6):231:1–14, 2018.
[19] Wolfgang Schief, Alexander Bobenko, and Tim Hoffmann. On the integrability of infinitesimal and finite deformations of polyhedral surfaces. In A. Bobenko et al., editors, Discrete Differential Geometry, volume 38 of Oberwolfach Seminars, pages 67–93. Springer, 2008.
[20] Antonios Kokotsakis. Über bewegliche Polyeder. Math. Ann., 107:627–647, 1933.
[21] Hellmuth Stachel. A kinematic approach to Kokotsakis meshes. Comp. Aided Geom. Design, 27:428–437, 2010.
[22] Tomohiro Tachi. Generalization of rigid foldable quadrilateral mesh origami. J. Int. Ass. Shell & Spatial Structures, 50:173–179, 2009.
[23] Tomohiro Tachi and Gregory Epps. Designing one-DOF mechanisms for architecture by rationalizing curved folding. In Y. Ikeda, editor, Proc. ALGODE Symposium, 14 pp. Arch. Institute Japan, Tokyo, 2011. CD-ROM.
[24] Tomohiro Tachi. Composite rigid-foldable curved origami structure. In F. Escrig and J. Sanchez, editors, Proc. 1st Transformables Conf., 6 pp. Starbooks, Sevilla, 2013.
[25] Ivan Izmestiev.
Classification of flexible Kokotsakis polyhedra with quadrangular base. Int. Math. Res. Not., (3):715–808, 2017.
[26] Robert Sauer. Differenzengeometrie. Springer, 1970.
[27] Aurel Voss. Über diejenigen Flächen, auf denen zwei Scharen geodätischer Linien ein conjugirtes System bilden. Sitzungsber. Bayer. Akad. Wiss., math.-naturw. Klasse, pages 95–102, 1888.
[28] Robert Sauer and Heinrich Graf. Über Flächenverbiegung in Analogie zur Verknickung offener Facettenflache. Math. Ann., 105:499–535, 1931.
[29] Kiumars Sharifmoghaddam, Georg Nawratil, Arvin Rasoulzadeh, and Jonas Tervooren. Using flexible trapezoidal quad-surfaces for transformable design. In Proceedings of IASS Annual Symposia, 2020/21.
[30] Koryo Miura. Method of packaging and deployment of large membranes in space. In 31st Congress Int'l Astronautical Federation (Tokyo 1980).
[31] Kiumars Sharifmoghaddam, Rupert Maleczek, and Georg Nawratil. Generalizing rigid-foldable tubular structures of T-hedral type. Mechanics Research Communications, 2023. To appear.
[32] Ivan Izmestiev, Arvin Rasoulzadeh, and Jonas Tervooren. Isometric deformations of discrete and smooth T-surfaces. arXiv:2302.08925, 2023.
[33] Georg Nawratil. Generalizing continuous flexible Kokotsakis belts of the isogonal type. In International Conference on Geometry and Graphics, pages 115–126, 2022.
[34] Raoul Bricard. Mémoire sur la théorie de l'octaèdre articulé. Journal de Mathématiques pures et appliquées, 3:113–148, 1897.
[35] Alisher Aikyn, Yang Liu, Dmitry A. Lyakhov, Florian Rist, Helmut Pottmann, and Dominik L. Michels. Flexible Kokotsakis meshes with skew faces: Generalization of the orthodiagonal involutive type polyhedra. Computer-Aided Design, 168, 2024.
[36] Ivan Erofeev and Grigory Ivanov. Orthodiagonal anti-involutive Kokotsakis polyhedra. Mechanism and Machine Theory, 146, 2020.

§ DEFINITION OF THE DIHEDRAL ANGLES

Following Fig. <ref> right, we need to give a proper definition of the dihedral angles $\alpha_i'$. To do that, we relocate the starting points of all vectors to the center of a unit sphere. Let us consider a planar mesh that is partially depicted in Fig. <ref> left. Notice that $(\lambda_i, \gamma_i, \mu_i, \delta_i)$ and $(\lambda_i', \gamma_i', \mu_i', \delta_i')$ are pairwise complementary to $\pi$. (Figure: the decomposition of Fig. <ref> right; $S$ is the south pole.)

In Fig. <ref> right, we take the common orientation of the sphere where the surface normals point outwards. And in Fig. <ref> left, the oriented spherical quad $Q_1: (\lambda_1, \delta_1, \mu_1, \gamma_1)$ determines its interior, namely the area which does not contain the south pole $S$; hence the dihedral angles $\alpha_1, \beta_1 \in [0, 2\pi)$ are well defined by the interior angles of $Q_1$. However, if we adopt the same orientation on Fig. <ref> right for $Q_2$, its interior would look strange: it is again the area which does not contain $S$. Therefore, in order to easily represent the relation between angles, we reverse the orientation and set $Q_2: (\lambda_2, \gamma_2, \mu_2, \delta_2)$; the dihedral angles $\alpha_2, \beta_2 \in [0, 2\pi)$ are defined in the same way through the interior angles of $Q_2$. In general, once we have relocated all vectors to the sphere, the spherical quads are oriented as follows:

\left\{\begin{array}{l}
(\lambda_i, \delta_i, \mu_i, \gamma_i),\,\, i\in \{1,3\}, \\
(\lambda_i, \gamma_i, \mu_i, \delta_i),\,\, i\in \{2,4\}. \\
\end{array}\right.
The dihedral angles $\alpha_i, \beta_i$ are hence defined by the corresponding interior angles of $Q_i$ (notice that $\beta_i=\alpha_{i+1}$). When it comes to non-planar meshes (Fig. <ref>), we keep the same orientation of the sphere and set $\boldsymbol{v_i}$ to be the surface normal at the common vertex of $Q_i$ and $Q_{i+1}$. Take Fig. <ref> as an example: a neighborhood of $Q_1 \cap Q_2$ can be stereographically projected (angle preserving) to its tangent plane (with the same orientation, see Fig. <ref>). (Figure: stereographic projection of the neighborhood near the '$\ast$' in Fig. <ref> left. $\boldsymbol{v_1}$ at the intersecting point is a vector perpendicular to the paper and pointing towards the reader. $-\lambda_1$ and $-\delta_1$, respectively, are the spherical reflections of $\lambda_1$ and $\delta_1$ with respect to $\boldsymbol{v_1}$.)

Generally, we set $-\lambda_i$ and $-\delta_i$, respectively, to be the spherical reflections of $\lambda_i$ and $\delta_i$ with respect to $\boldsymbol{v_i}$. Let $T(\boldsymbol{v_i})$ be the oriented tangent plane with surface normal $\boldsymbol{v_i}$ and introduce the operator

\left\{\begin{array}{l}
<\cdot,\cdot>_{\boldsymbol{v_i}}: T(\boldsymbol{v_i})^2-\{\boldsymbol{0}\} \rightarrow [0,2\pi); \\
<\boldsymbol{x},\boldsymbol{y}>_{\boldsymbol{v_i}}=\text{The counterclockwise rotating angle from $\boldsymbol{x}$ to $\boldsymbol{y}$.}
\end{array}\right.

The new operator is compatible with the existing angles through the following:

\alpha_i= \left\{\begin{array}{l} <\gamma_i,\lambda_i>_{\boldsymbol{-v_{i-1}}},\,\, i\in \{1,3\}, \\ <\gamma_i,\lambda_i>_{\boldsymbol{v_{i-1}}},\,\, i\in \{2,4\}, \\ \end{array}\right.
\beta_i= \left\{\begin{array}{l} <\delta_i,\lambda_i>_{\boldsymbol{v_i}},\,\, i\in \{1,3\}, \\ <\delta_i,\lambda_i>_{\boldsymbol{-v_i}},\,\, i\in \{2,4\}. \\ \end{array}\right.

Thereafter, by setting

\tau_i= \left\{\begin{array}{l} <\lambda_{i+1},-\lambda_i>_{\boldsymbol{v_i}},\,\, i\in \{1,3\}, \\ <\lambda_{i+1},-\lambda_i>_{\boldsymbol{-v_i}},\,\, i\in \{2,4\}, \\ \end{array}\right.
\zeta_i= \left\{\begin{array}{l} <-\delta_i,\gamma_{i+1}>_{\boldsymbol{v_i}},\,\, i\in \{1,3\}, \\ <-\delta_i,\gamma_{i+1}>_{\boldsymbol{-v_i}},\,\, i\in \{2,4\}, \\ \end{array}\right.

we have $\beta_i\equiv \alpha_{i+1}+\tau_i+\zeta_i \mod 2\pi$ for all $i\in \{1,2,3,4\}$. For instance,

\beta_2=<\delta_2,\lambda_2>_{\boldsymbol{-v_2}}=<-\delta_2,-\lambda_2>_{\boldsymbol{-v_2}}\equiv\zeta_2+\alpha_3+\tau_2 \mod 2\pi.

§ TECHNICAL PROOFS

(Lemma <ref>) Firstly, it is easy to check that when $a_i=e_i=0$ or $b_i=c_i=0$, $g^{(i)}$ can be factorized as stated, and $kk'\neq 0$ follows directly from the non-singularity of $g^{(i)}$. Conversely, if neither $a_i=e_i=0$ nor $b_i=c_i=0$, $g^{(i)}$ must be irreducible. We prove this by contradiction. Suppose $g^{(i)}$ is reducible. Definition <ref> implies $g^{(i)}$ has no factors in $\cc[x_i]$ or $\cc[y_i]$. So any irreducible factor of $g^{(i)}$ must be of the form

px_iy_i+qx_i+ry_i+s, \quad p\neq 0 \text{ or } q\neq 0.

This means, by regarding $x_i$ as the unknown and $y_i$ as a parameter, that $g^{(i)}=0$ admits a rational solution $x_i=x_i(y_i)\in \cc(y_i)$. In other words, the discriminant is a square in $\cc[y_i]$. With this information, we can claim that $a_ic_i\neq 0$. Otherwise, we must have $(1-4a_ie_i-4b_ic_i)=0$ or $b_ie_i=0$. The first fails due to the inequalities of Definition <ref>, and the second is also impossible given that $g^{(i)}$ is non-singular and neither $a_i=e_i=0$ nor $b_i=c_i=0$.
So the discriminant can be factorized into $-4a_ic_i(y_i^2-k)(y_i^2-k')$, where $k=k'$ since the discriminant is a square. Consequently, the discriminant $-4a_ic_i(y_i^2-k)^2$ has a repeated root, i.e. its own discriminant is zero, which contradicts the inequalities in Definition <ref>.

(Corollary <ref>) We first assume $\Tilde{g}^{(i)}$ is reducible. By Lemma <ref> and Proposition <ref> we know $\Tilde{f}^{(i)}$ is of first degree in $x_i$ and $x_{i+1}$, so $\Tilde{f}^{(i)}=0$ gives a rational relation between $x_i$ and $x_{i+1}$. In other words, a factorization of $\text{Res}(\Tilde{f}^{(i)},f^{(i+1)}; x_{i+1})$ would induce a factorization of $f^{(i+1)}$; hence the resultant must be irreducible. The rest of the proof, when $g^{(i+1)}$ is reducible, is the same.

(Corollary <ref>) Suppose there is an $r(y_{i+1})\in \cc[y_{i+1}]-\cc$ which divides $R_i(x_i,y_{i+1})$. So we have $R_i(x_i,c)=0$ for some constant $c\in \cc$. According to the algebraic meaning of the resultant, $\Tilde{g}^{(i)}(x_i,x_{i+1})$ and $g^{(i+1)}(x_{i+1},c)$, as polynomials in $x_{i+1}$, share a common root almost everywhere for $x_i\in \cc$.[We need to exclude the values for $x_i$ such that the degree of $x_{i+1}$ in $\Tilde{g}^{(i)}$ degenerates.] Since $\Tilde{g}^{(i)}(x_i,x_{i+1})$ has no factors in $\cc[x_{i+1}]$ (Proposition <ref>), by solving $\Tilde{g}^{(i)}=0$, $x_{i+1}$ takes infinitely many values while $x_i$ changes continuously. So $g^{(i+1)}(x_{i+1},c)$ must be the zero polynomial, which contradicts its non-singularity. Similarly, $R_i(x_i,y_{i+1})$ has no factors in $\cc[x_i]$ either.

(Proposition <ref>) Consider the equivalent forms of $S$ in Proposition <ref>. We first assume none of the $g^{(i)}$ is reducible. In $Z(S)$, the values of $y_i$ and $x_{i+1}$ uniquely determine each other through $H^{(i)}=0$. Given that $g^{(i)}$ is non-singular, for all $c\in \cp$, $g^{(i)}(x_i,c)=g^{(i)}(c,y_i)=0$ always have finitely many solutions in $x_i$ and $y_i$ respectively. This immediately shows that $Z(S)$ is locally a 1-dimensional curve except on

W_0:=\bigcup_{i\in \{1,2\} \atop t\in \{x,y\}} Z\left(S+\left(\frac{\partial g^{(i)}}{\partial t_i}\right)\right)

which is a finite set given that $g^{(i)}, \frac{\partial g^{(i)}}{\partial t_i}$ are coprime. In particular, $Z(S)-W_0$ is locally a function of any variable of $\{x_1,y_1,x_2,y_2,x_3\}$ since none of the partial derivatives vanishes. As for the component in Proposition <ref>, $W=Z(S+(r_k))$ where $r_k$ is an irreducible factor of $R_1$ and $Z(r_k)\subset (\cp)^2$ is a curve in $(x_1,y_2)$. Since $r_k|R_1$, $R_1$ vanishes on $Z(r_k)$ and therefore $\{\Tilde{g}^{(1)}=g^{(2)}=0\}$ always has a solution in $x_2$ for all $(x_1,y_2)\in Z(r_k)$. This means the projection of $W$ in $(\cp)^2$ is a curve; hence, as a subset of $Z(S)-W_0$, $W-W_0$ is locally a 1-dimensional curve and a function of any variable of $\{x_1,x_2,x_3,y_1,y_2\}$. Finally, if one of the $g^{(i)}$ is reducible, apply the above argument directly to $W$ (rather than $Z(S)$) to get the same result.

(Theorem <ref>) By Proposition <ref> we know $Z(M)\cong Z(S_1)\times_{\{x_1, x_3\}} Z(S_2)$. Clearly, if we project $Z(S_1)$ onto the $(x_1,x_3)$-plane we get $Z(\text{Res}(\Tilde{g}^{(1)},\Tilde{g}^{(2)};x_2))$. For each $(x_1,x_3)\in Z(\text{Res}(\Tilde{g}^{(1)},\Tilde{g}^{(2)};x_2))$, there are at most two pre-images in $Z(S_1)$ according to Proposition <ref>.
So $Z(M)$ is an infinite set if and only if $Z(\text{Res}(\Tilde{g}^{(1)},\Tilde{g}^{(2)};x_2))\cap Z(\text{Res}(\Tilde{g}^{(3)},\Tilde{g}^{(4)};x_4))$ is an infinite set, if and only if

\gcd(\text{Res}(\Tilde{g}^{(1)},\Tilde{g}^{(2)};x_2),\text{Res}(\Tilde{g}^{(3)},\Tilde{g}^{(4)};x_4))\neq 1.

On the other hand, $Z(M)$ is an infinite set if and only if there exist components $W_1, W_2$ of $S_1, S_2$ respectively such that $W_1\times_{\{x_1, x_3\}} W_2$ is an infinite set. In the language of field extensions, this is equivalent to saying that $x_3$ is the same algebraic element over $K_1$ in the extension diagrams of $W_1$ and $W_2$, i.e. the minimal polynomials of $x_3$ over $K_1$ are identical.

(Lemma <ref>) We only prove (1) since (2) is quite similar. Symbolic computation can show

\frac{a_1}{a_2}=\frac{b_1}{c_2}=\frac{b_2}{c_1}=\frac{e_2}{e_1}=k \Rightarrow (x_3-kx_1)^2|R(x_1,x_3) \Rightarrow (x_3-kx_1)|R(x_1,x_3).

Conversely, when $(x_3-kx_1)|R(x_1,x_3)$, $S$ admits a component

W=Z(g^{(1)}, g^{(2)}, x_3-kx_1, H^{(1)}, H^{(2)})

with the extension diagram

\scalebox{0.75}{\xymatrix{
& K_1(x_2,x_3) & \\
K_1(x_2) \ar@{=}[ur]^{g^{(2)}} & & K_1(x_3)\ar@{-}[ul]\\
& K_1\ar@{-}[ul]^{g^{(1)}} \ar@{=}[ur]_{x_3-kx_1}& \\
}}

in which $x_3=kx_1$ and $x_2\notin K_1$ due to the irreducibility of $g^{(1)}$. Thus

g^{(1)}(x_1,t)=(a_1 x_1^2 + c_1 ) t^2 + x_1 t + (b_1 x_1^2+ e_1), \quad g^{(2)}(t,x_3)=(a_2 k^2 x_1^2 + b_2 ) t^2 + k x_1 t + (c_2 k^2 x_1^2+ e_2)

both determine the minimal polynomial of $x_2$ over $K_1$, so the coefficients must be proportional:

\frac{a_1}{a_2}=\frac{b_1}{c_2}=\frac{b_2}{c_1}=\frac{e_2}{e_1}=k.

(Lemma <ref>) First, we assume system (<ref>) holds, so we can claim that all coefficients are nonzero. This is because zeros always come in pairs in the system, which contradicts the non-singularity of $g^{(i)}$. Hence by Lemma <ref>, $g^{(1)}, g^{(2)}$ must be irreducible. Setting

b_1=ka_1,\,\, e_1=kc_1,\,\,c_2=ka_2,\,\, e_2=kb_2,

symbolic computation can show

\text{Res}(g^{(1)},g^{(2)};x_2)=k(a_2 x_1 x_3^2 - (a_1 x_1^2 + c_1) x_3 + b_2 x_1)^2.

In addition, $a_2 x_1 x_3^2 - (a_1 x_1^2 + c_1) x_3 + b_2 x_1$ must be irreducible. Otherwise, according to Corollary <ref>, there would be an irreducible factor $r_k$ which is linear in $x_3$ and further leads to a component of Case 3 in Definition <ref>. Applying Corollary <ref>, one of the systems (<ref>) holds, and either of them contradicts the inequality of system (<ref>). Therefore $S$ only has one component

W=Z(g^{(1)}, g^{(2)}, a_2 x_1 x_3^2 - (a_1 x_1^2 + c_1) x_3 + b_2 x_1, H^{(1)}, H^{(2)})

which belongs to Case 4 or Case 5 but not Case 3. Similarly, $W$ must be of Case 5. Otherwise, by Lemma <ref>, we have

\frac{a_1 c_1}{a_2 b_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1}{1 - 4 a_2 e_2- 4 b_2 c_2}=\frac{b_1 e_1}{c_2 e_2}=\frac{1- 8k a_1 c_1}{1 - 8k a_2 b_2}=\frac{1- 8k a_1 c_1+8k a_1 c_1}{1 - 8k a_2 b_2+8k a_2 b_2}=1,

which contradicts our previous discussion concerning Corollary <ref>. Conversely, suppose $S$ has a component $W$ of Case 5 and consider its extension diagram. By Definition <ref>, we have $x_3\notin K_1(x_2)$, which implies $x_2\notin K_1(x_3)$ given $[K_1(x_3):K_1]=[K_1(x_2):K_1]=2$. This means both $g^{(1)}$ and $g^{(2)}$ determine the minimal polynomial of $x_2$ over $K_1(x_3)$, hence the coefficients should be proportional, which brings us back to the left one of equation (<ref>), i.e.

\begin{equation}\label{eqsamemini}
\left\{\begin{array}{l}
a_2 x_1 x_3^2 - (a_1 x_1^2 + c_1) x_3 + b_2 x_1 = 0, \\
c_2 x_1 x_3^2 - (b_1 x_1^2 + e_1) x_3 + e_2 x_1 = 0. \\
\end{array}\right.
\end{equation}
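The resultant identity quoted in the proof above lends itself to a mechanical check. The following is a minimal SymPy sketch of that verification (our own illustrative snippet, not part of the original computation), using the explicit form of $g^{(1)}, g^{(2)}$ stated in the proof, with the substitutions $b_1=ka_1$, $e_1=kc_1$, $c_2=ka_2$, $e_2=kb_2$ already applied:

```python
import sympy as sp

x1, x2, x3, k = sp.symbols('x1 x2 x3 k')
a1, c1, a2, b2 = sp.symbols('a1 c1 a2 b2')

# g^(1)(x1, x2) and g^(2)(x2, x3) after substituting
# b1 = k*a1, e1 = k*c1, c2 = k*a2, e2 = k*b2
g1 = (a1*x1**2 + c1)*x2**2 + x1*x2 + k*(a1*x1**2 + c1)
g2 = (a2*x3**2 + b2)*x2**2 + x3*x2 + k*(a2*x3**2 + b2)

res = sp.resultant(g1, g2, x2)
claim = k*(a2*x1*x3**2 - (a1*x1**2 + c1)*x3 + b2*x1)**2
assert sp.expand(res - claim) == 0  # the resultant is k times a perfect square
```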
Since $[K_1(x_3):K_1]=2$, the two equations of (<ref>) both determine the minimal polynomial of $x_3$ over $K_1$, so

\frac{a_1}{b_1}=\frac{c_1}{e_1}=\frac{a_2}{c_2}=\frac{b_2}{e_2}.

Finally, the irreducibility of equation (<ref>) implies $\frac{a_1}{a_2}\neq\frac{b_2}{c_1}$ since

\frac{a_1}{a_2}=\frac{b_2}{c_1}\Rightarrow a_2 x_1 x_3^2 - (a_1 x_1^2 + c_1) x_3 + b_2 x_1=\frac{1}{a_2}(a_2x_1x_3-c_1)(a_2x_3-a_1x_1).

(Lemma <ref>) We first assume $S$ is equimodular. By Lemma <ref>, we only need to focus on the equivalent condition for '$\exists f\in K_2$ such that $f^2=\frac{\Delta_1}{\Delta_2}$', where the $\Delta_i$ are given by equation (<ref>). We claim that $a_1b_1c_1e_1a_2b_2c_2e_2\neq 0$. According to $f^2=\frac{\Delta_1}{\Delta_2}$, $\Delta_1$ and $\Delta_2$ are equal after removing their square factors. Meanwhile, we know from the proof of Lemma <ref> that $\Delta_2$ has 4 distinct roots if and only if $a_2b_2c_2e_2\neq 0$. Thus $a_1b_1c_1e_1=0\Rightarrow a_2b_2c_2e_2=0$. When $a_1b_1c_1e_1\neq 0$, for the same reason,

\Delta_1'(y_1)=-4 a_1 c_1 y_1^4 + (1 - 4 a_1 e_1- 4 b_1 c_1) y_1^2 - 4 b_1 e_1=-4 a_1 c_1(y_1-\xi)(y_1+\xi)(y_1-\eta)(y_1+\eta)

has 4 distinct roots. Notice that $\Delta_1(x_2)=(1-F_1 x_2)^4\Delta_1'\left(\frac{x_2+F_1}{1-F_1 x_2}\right)$. Therefore $\Delta_1$ has at least 3 distinct roots,[The number 3 occurs only if $\Delta_1'(\frac{-1}{F_1})=0$.] which implies the degree of $x_2$ in $\Delta_2$ is at least 3, i.e. $a_2b_2c_2e_2\neq 0$, and hence $a_1b_1c_1e_1=0 \Leftrightarrow a_2b_2c_2e_2=0$.

Next we need to show that $a_1b_1c_1e_1=0$ cannot happen; otherwise $a_2b_2c_2e_2=0$ and

\Delta_2=x_2^2(-4 a_2 b_2 x_2^2 + (1 - 4 a_2 e_2- 4 b_2 c_2)) \text{ or } (1 - 4 a_2 e_2- 4 b_2 c_2) x_2^2 - 4 c_2 e_2.

Clearly, if we ignore the square factor, there is no linear term in $\Delta_2$. On the other hand,

\Delta_1=\left\{\begin{array}{l} (1-F_1x_2)^2[(1 - 4 a_1 e_1- 4 b_1 c_1) (x_2+F_1)^2 - 4 b_1 e_1 (1-F_1x_2)^2] \text{ or}\\ (x_2+F_1)^2[-4 a_1 c_1 (x_2+F_1)^2 + (1 - 4 a_1 e_1- 4 b_1 c_1) (1-F_1x_2)^2]. \\ \end{array}\right.

Likewise, there should be no linear term in

\left\{\begin{array}{l} (1 - 4 a_1 e_1- 4 b_1 c_1) (x_2+F_1)^2 - 4 b_1 e_1 (1-F_1x_2)^2 \text{ or}\\ -4 a_1 c_1 (x_2+F_1)^2 + (1 - 4 a_1 e_1- 4 b_1 c_1) (1-F_1x_2)^2 \\ \end{array}\right.

since the non-square factors should vanish in the fraction $\frac{\Delta_1}{\Delta_2}=f^2$. In either of the cases, we have $(a_1-b_1)(e_1-c_1)=\frac{1}{4}$, which contradicts Definition <ref>. So we can conclude $a_1b_1c_1e_1a_2b_2c_2e_2\neq 0$.

From $a_1b_1c_1e_1a_2b_2c_2e_2\neq 0$ we know that both $\Delta_1$ and $\Delta_2$ are of degree 4. Since $\Delta_2$ has distinct roots, $f^2=\frac{\Delta_1}{\Delta_2}$ implies $\frac{\Delta_1}{\Delta_2}\in \cc$. So, like $\Delta_2$, there should be no linear or cubic term in $\Delta_1$, which brings us to

\begin{equation}\label{a=bb=a}
\left\{\begin{array}{l}
(1-4a_1e_1-4b_1c_1+8a_1c_1)=F_1^2(1-4a_1e_1-4b_1c_1+8b_1e_1), \\
(1-4a_1e_1-4b_1c_1+8a_1c_1)F_1^2=(1-4a_1e_1-4b_1c_1+8b_1e_1). \\
\end{array}\right.
\end{equation}

Given $F_1^2\neq -1$, we must have $a_1c_1=b_1e_1$, i.e. $\frac{a_1}{e_1}=\frac{b_1}{c_1}$. Therefore equation (<ref>) is equivalent to

(4(a_1-b_1) (e_1-c_1)-1)=(4(a_1-b_1) (e_1-c_1)-1)F_1^2

where $F_1=\pm 1$ follows (see the inequalities in Definition <ref>).
Besides, the coefficients of the even-degree terms in $\Delta_1$ and $\Delta_2$ should be proportional:

\begin{equation}\label{-ratio}
\frac{1- 4 a_1 e_1- 4 b_1 c_1-8 a_1 c_1}{-4a_2 b_2}=\frac{8 a_1 e_1+8 b_1 c_1-48 a_1 c_1-2}{1-4a_2e_2-4b_2 c_2}=\frac{1- 4 a_1 e_1- 4 b_1 c_1-8 a_1 c_1}{-4c_2 e_2}
\end{equation}
# Zeptonewton and Attotesla per Centimeter Metrology With Coupled Oscillators

Ian Bouche<EMAIL_ADDRESS>, Josh Javor, Abhishek Som, David K. Campbell, David J. Bishop. Bishop Lab, Department of Physics, Boston University

###### Abstract

We present the coupled oscillator: a new mechanism for signal amplification with widespread application in metrology. We introduce the mechanical theory of this framework and support it by way of simulations. We present a particular implementation of coupled oscillators: a microelectromechanical system (MEMS) that uses one large ($\sim 100\,\mathrm{mm}$) N52 magnet coupled magnetically to a small ($\sim 0.25\,\mathrm{mm}$), oscillating N52 magnet, providing a force resolution of $200\,\mathrm{zN}$ measured over $1\,\mathrm{s}$ in a noiseless environment. We show that the same system is able to resolve magnetic gradients of $130\,\mathrm{aT/cm}$ at a single point (within $500\,\mathrm{\mu m}$). This technology therefore has the potential to revolutionize force and magnetic gradient sensing, including high-impact areas such as cardiac and brain imaging.

## I Introduction

Precision metrology is one of the most widespread applications of microelectromechanical systems (MEMS) because powerful nanofabrication techniques and beneficial scaling laws create high sensitivity to small inputs. In recent years, new MEMS technologies have emerged that use a distance-dependent force between an oscillator and another object for precise metrology ([2], [3], [4]). This technology, termed here the coupled oscillator, has shown extraordinary promise in force and magnetic gradient metrology. Rather than measuring deflection of a component under a force (as in some traditional cantilever-based force metrology frameworks), a coupled oscillator transduces the input force into a shift in oscillation frequency. This is key for high-sensitivity measurement: by working in frequency space, we can take advantage of enhanced resolution to small changes thanks to high-precision frequency counters and atomic clock standards. Further, transducing the signal to an oscillation allows for the use of powerful noise-reduction schemes such as phase-locking amplification.

In what follows, we will describe the most general form of a coupled oscillator and show how we can use it to measure small forces. Then, we will propose a special case of this framework that uses magnetic attraction as the coupling force and show using simulations that it can theoretically provide sensitivities to $200\,\mathrm{zN}$ and $460\,\mathrm{yN}$ forces with $1\,\mathrm{s}$ and $100\,\mathrm{s}$ time gates, respectively. These numbers are achieved using commercially available frequency counters and rubidium clocks, but recent developments in atomic clock technology could grant relative frequency measurement uncertainties of $10^{-18}$ [5], which would theoretically grant 6 to 8 orders of magnitude of additional sensitivity over the reported numbers at similar gate times. We will see that this framework can achieve higher sensitivity with oscillators that have lower masses and softer springs. Therefore, this technology will lend itself to MEMS platforms due to favorable scaling laws [6].
In this way, coupled oscillator technology naturally integrates with nanometer-range interactions such as the Casimir effect ([2], [3], [4]), and offers a method to directly and precisely measure these quantum interactions.

We will also present this system as a highly competitive platform for the measurement of small magnetic gradients. The motion of charged ions throughout the heart during pumping produces magnetic fields near the chest of $\sim 100\,\mathrm{pT}$. This signal, however, is drowned in noise from the Earth's magnetic field, whose fluctuations are of order $\sim 200\,\mathrm{pT}$ [7]. For this reason, technologies sensitive enough to image the heart's magnetic fields, such as atomic magnetometers or superconducting quantum interference devices (SQUID), generally require rooms dedicated to magnetic shielding, which significantly limits the accessibility and raises the price point of those technologies. However, the gradient of Earth's magnetic field and its noise are both significantly below the gradients present near the human heart [8]. In this way, technology that is sensitive only to gradients of the magnetic field (such as the design proposed in this work) may offer an inexpensive, accessible and uncomplicated cardiac imaging technique. The coupled oscillator system described in this paper will be shown to be sensitive to magnetic gradients of $130\,\mathrm{aT/cm}$ and $310\,\mathrm{zT/cm}$ at a single point (within $500\,\mathrm{\mu m}$) for respective gate times of $1\,\mathrm{s}$ and $100\,\mathrm{s}$ under noiseless conditions. These resolutions are well under the threshold required for biomedical sensing of the heart.

## II Coupled Oscillator Framework

Consider a damped harmonic oscillator with the spring equilibrium position at $x=0$ and various added forces (fig. 1):

* Coupling force $F_{C}$: Force felt by the oscillator due to its proximity to a special point called the coupling wall, located at $x=x_{C}$. Examples of coupling forces could be magnetic forces (if, for example, the oscillator and coupling wall are magnets) or the Casimir force ([2], [3], [4]). Positive values of $F_{C}$ are taken to represent an attractive force while negative values represent repulsion. In this work, this force is always written as a function of the distance between the oscillator and the coupling wall, and never as a function of absolute position in our coordinate system.

* Driving force $F_{D}$: Time-dependent sinusoidal driving force with constant amplitude. This force is phase-locked to lead the motion of the oscillator by $90^{\circ}$ such that, as the oscillator's resonant frequency changes during measurement, the driving frequency will adjust to match it. More precisely, the system will always approach the undamped resonant frequency of the system $\omega_{0}=\sqrt{\frac{k}{m}}$ rather than the true resonant frequency, which depends on damping. This critical feedback loop is modeled and demonstrated in the appendix. Further, this behavior of phase-locked loops has been previously shown in references [9], [10].

* Input force $F_{\text{in}}$: The force that we want to measure. This force will change the equilibrium position of the oscillator.

* Offset force $F_{\text{off}}$: Constant force to cancel the coupling force at $x=0$. Therefore, $F_{\text{off}}=-F_{C}(x_{C})$ always.
This force ensures that the system's equilibrium is at $x=0$ when there is no input force, regardless of the baseline distance to the coupling wall $x_{C}$. This allows us to pick the value of $x_{C}$ freely to optimize performance.

Figure 1: Coupled oscillator framework. A damped harmonic oscillator (blue) is driven by a phase-locked sinusoidal force $F_{D}(x)$. This force is linked to the position of the oscillator through phase locking. The oscillator also experiences a coupling force $F_{C}$, a constant input force $F_{\text{in}}$ which we want to measure, and a constant offset force $F_{\text{off}}$.

### Measurement mechanism

In the following derivation, we will denote locations in space in two ways:

* Position $x$, where $x=0$ represents the equilibrium position of the spring, and $x$ increases as we approach the coupling wall located at $x=x_{C}$.

* Coupling wall distance $d$, where $d=0$ represents the position of the coupling wall ($x_{C}$) and distance increases as we move from the coupling wall towards the equilibrium position of the spring.

We can therefore convert between any position $x_{\text{label}}$ and distance $d_{\text{label}}$ with a simple transformation:

$d_{\text{label}}=x_{C}-x_{\text{label}}$

The equation of motion of this system is then

$m\ddot{x}+b\dot{x}+kx=F_{D}(t)+F_{C}(x_{C}-x)+F_{\text{in}}+F_{\text{off}}$

We write the driving force as being time-dependent for clarity, but it is fully determined by the position of the oscillator through phase-locking rather than by the time. This system has a stable equilibrium position $x_{\text{eq}}$ and distance $d_{\text{eq}}$, defined implicitly by setting $\dot{x}$, $\ddot{x}=0$ and neglecting the driving force:

$kx_{\text{eq}}=F_{C}(d_{\text{eq}})+F_{\text{in}}+F_{\text{off}}$ (1)

We introduce the displacement from equilibrium $\Delta x$:

$\Delta x\equiv x-x_{\text{eq}}$

And we rewrite the equation of motion:

$m\ddot{\Delta x}+b\dot{\Delta x}+k\Delta x+kx_{\text{eq}}=F_{D}(t)+F_{C}(d_{\text{eq}}-\Delta x)+F_{\text{in}}+F_{\text{off}}$

We Taylor-expand the coupling force about the equilibrium distance:

$m\ddot{\Delta x}+b\dot{\Delta x}+k\Delta x+kx_{\text{eq}}=F_{D}(t)+F_{C}(d_{\text{eq}})-\Delta xF_{C}^{\prime}(d_{\text{eq}})+\mathcal{O}(\Delta x^{2})+F_{\text{in}}+F_{\text{off}}$

By rearranging terms and using eq. 1, we obtain

$m\ddot{\Delta x}+b\dot{\Delta x}+(k+F_{C}^{\prime}(d_{\text{eq}}))\Delta x+\mathcal{O}(\Delta x^{2})=F_{D}(t)$

In order to ignore the $\mathcal{O}(\Delta x^{2})$ anharmonic terms, the linear order term $(k+F_{C}^{\prime}(d_{\text{eq}}))\Delta x$ must exceed them significantly. By comparing the $n$th term of the series with the linear term, this condition can be written as

$\Delta x\ll\sqrt[n-1]{n!\frac{k+F_{C}^{\prime}(d_{\text{eq}})}{F_{C}^{(n)}(d_{\text{eq}})}}\ \ \ \ \forall n=2,3,4,...$ (2)

We can see that the expression shown here is impossible to satisfy if $F_{C}^{\prime}(d_{\text{eq}})=-k$, which may occur for attractive coupling forces ($F_{C}>0$).
We call the value of $d_{\text{eq}}$ at which this happens the stiction distance $d_{\text{stic}}$, defined implicitly by

$F_{C}^{\prime}(d_{\text{stic}})=-k$

We can then show that condition 2 is met if and only if

$\Delta x\ll d_{\text{eq}}-d_{\text{stic}}$ (3)

So, by ensuring small oscillations to meet condition 3, we can ignore the higher-order terms, leading to the equation of motion of a driven, damped harmonic oscillator with a modified spring constant $k_{\text{shifted}}=k+F_{C}^{\prime}(d_{\text{eq}})$:

$m\ddot{\Delta x}+b\dot{\Delta x}+k_{\text{shifted}}\Delta x=F_{D}(t)$

The resonant frequency of this system is known to be

$f_{\text{shifted}}=\frac{1}{2\pi}\sqrt{\frac{k_{\text{shifted}}}{m}-\frac{1}{2}\left(\frac{b}{m}\right)^{2}}=\frac{1}{2\pi}\sqrt{\frac{k+F_{C}^{\prime}(d_{\text{eq}})}{m}-\frac{1}{2}\left(\frac{b}{m}\right)^{2}}=\frac{1}{2\pi}\sqrt{\frac{(1-1/2Q^{2})k+F_{C}^{\prime}(d_{\text{eq}})}{m}}$

where we rewrote $b$ in terms of the quality factor $Q=\sqrt{km}/b$. In the high-quality limit ($Q\rightarrow\infty$), the shifted frequency is

$f_{\text{shifted}}=\sqrt{1+\frac{F_{C}^{\prime}(d_{\text{eq}})}{k}}f_{0}$ (4)

where $f_{0}=\frac{1}{2\pi}\sqrt{\frac{k}{m}}$ is the undamped harmonic oscillator resonant frequency. We should note that even for larger damping and lower quality factors, our phase-locking feedback loop will always approach the undamped resonant frequency rather than the true resonant frequency. This is shown and explored in the appendix. Equation 4 then shows how the coupling force manifests as a change in the system: if the oscillator equilibrium is moved towards the coupling wall, its resonance frequency will either decrease (if the coupling is attractive) or increase (if it is repulsive). The agent that causes this approach is $F_{\text{in}}$, meaning the force we want to measure induces a shift in the system's resonant frequency. This is the mechanism by which forces we wish to measure are turned into frequency shift signals. It is important to note that $d_{\text{eq}}$ is related to $F_{\text{in}}$ in a nonlinear way that depends on the form of $F_{C}(d)$. The exact relationship can be found by solving equation 1.

We can express the measurement process as a sequence of steps, visualized in figure 2:

1. The input force acts on the oscillator.
2. This creates a shift in the equilibrium position of the oscillator, moving it closer to or further from the coupling wall.
3. The new distance to the coupling wall changes the effective spring constant, which in turn creates a change in the resonant frequency as per equation 4. Due to phase locking of the driving force, this eventually leads the driving frequency to match the undamped resonant frequency.
4. This change in frequency is measured using frequency counters, and this is read as the signal.

Figure 2: Overview of the four steps of the force detection mechanism of a coupled oscillator.

### Stiction

If at any point $F_{C}^{\prime}(d)\leq-k$, the system will experience a breakdown in oscillation and a runaway attraction towards the coupling wall. This event is called stiction, and it can only occur when condition 3 is violated. Stiction renders the system unusable and can be catastrophically damaging for some fragile implementations. The stiction position $x_{\text{stic}}=x_{C}-d_{\text{stic}}$ can be seen in figure 3 in the potential experienced by the oscillator.
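To make the stiction threshold concrete, here is a small numerical sketch (toy values of our own choosing, not the parameters of the device studied later) that locates $d_{\text{stic}}$ for an inverse-square attractive coupling and shows the shifted frequency of equation 4 collapsing towards zero there:

```python
import numpy as np

# Toy parameters (illustrative only)
k, m, A = 1.0e-3, 1.0e-8, 1.0e-9           # N/m, kg, N m^2
F_Cp = lambda d: -2 * A / d**3              # derivative of F_C(d) = A/d^2

d_stic = (2 * A / k) ** (1 / 3)             # solves F_C'(d_stic) = -k
f0 = np.sqrt(k / m) / (2 * np.pi)           # undamped resonant frequency

for d_eq in (2.0 * d_stic, 1.2 * d_stic, 1.01 * d_stic):
    f = np.sqrt(1 + F_Cp(d_eq) / k) * f0    # eq. (4)
    print(f"d_eq = {d_eq:.4e} m  ->  f_shifted = {f:.2f} Hz")
```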
It is important to distinguish the stiction position $x_{\text{stic}}$ from the point of no return of the oscillator $x_{\text{no ret}}$. $x_{\text{stic}}$ is the highest value of $x_{\text{eq}}$ that avoids stiction, while $x_{\text{no ret}}$ is the highest value of $x$ that avoids stiction. Although it violates the linearity condition 3, it is possible to have $x>x_{\text{stic}}$ without stiction occurring, but not $x>x_{\text{no ret}}$. Further, as the oscillator approaches the coupling wall during a measurement, $x_{\text{stic}}$ will not move, but $x_{\text{no ret}}$ will. In figure 3, $x_{\text{stic}}$ corresponds to the inflection point of the potential energy while $x_{\text{no ret}}$ corresponds to a local maximum.

Figure 3: Quasistatic potential experienced by the oscillator with an attractive coupling force. The potentials $U(x)$ for the spring and coupling force add to create the landscape shown. If the oscillator passes the point of no return $x_{\text{no ret}}$, it will necessarily collide with the coupling wall. If the spring potential were moved towards the coupling wall, the curvature at equilibrium would decrease until it is zero when $x_{\text{eq}}=x_{\text{stic}}$.

While stiction is generally dangerous, operating near it can also grant extraordinary sensitivity, as we will see in the analysis of the gain.

### Gain of coupled oscillators

Next we will study the amplification granted by the coupled oscillator framework. Consider the measurement gain $G$:

$G\equiv\frac{1}{\Delta F_{\text{in}}}$

where $\Delta F_{\text{in}}$ represents the smallest change in input force we can measure. By considering the steps of the measurement, we can write the gain as:

$G=\frac{1}{\Delta f_{\text{shifted}}}\frac{\Delta f_{\text{shifted}}}{\Delta d_{\text{eq}}}\frac{\Delta d_{\text{eq}}}{\Delta F_{\text{in}}}$

The first factor is the system's ability to detect frequency shifts. This is determined by the precision of the frequency counters and the time gate settings used to detect frequency signals. We will write this as $\frac{1}{\Delta f_{\text{min}}}$, where the frequency resolution $\Delta f_{\text{min}}$ represents the smallest frequency shift we can detect.
The second factor corresponds to the shift in frequency caused by the change in oscillation position, and can be determined by differentiating equation 4 with respect to $d_{\text{eq}}$:

$\frac{\partial f_{\text{shifted}}}{\partial d_{\text{eq}}}=\frac{\partial}{\partial d_{\text{eq}}}\left(\sqrt{1+\frac{F_{C}^{\prime}(d_{\text{eq}})}{k}}\frac{1}{2\pi}\sqrt{\frac{k}{m}}\right)=\frac{1}{4\pi\sqrt{km}}\frac{F_{C}^{\prime\prime}(d_{\text{eq}})}{\sqrt{1+\frac{F_{C}^{\prime}(d_{\text{eq}})}{k}}}$

The third factor corresponds to the change in oscillation position caused by the input force, and can be calculated by differentiating equation 1 (rewritten in terms of $d_{\text{eq}}=x_{C}-x_{\text{eq}}$) with respect to $F_{\text{in}}$:

$\frac{\partial}{\partial F_{\text{in}}}\left(k(x_{C}-d_{\text{eq}})\right)=\frac{\partial}{\partial F_{\text{in}}}\left(F_{C}(d_{\text{eq}})+F_{\text{in}}+F_{\text{off}}\right)\implies\frac{\partial d_{\text{eq}}}{\partial F_{\text{in}}}=-\frac{1}{k}\frac{1}{(1+\frac{F_{C}^{\prime}(d_{\text{eq}})}{k})}$

Putting these together, the final gain of the coupled oscillator framework is then given by

$G\equiv\frac{1}{\Delta F_{\text{in}}}=-\frac{1}{4\pi}\frac{F_{C}^{\prime\prime}(d_{\text{eq}})}{\Delta f_{\text{min}}k^{3/2}m^{1/2}}\frac{1}{\left(1+F_{C}^{\prime}(d_{\text{eq}})/k\right)^{3/2}}$ (5)

This expression suggests ways in which we can achieve high input force resolution by tweaking parameters of the system. The first way to increase gain is to decrease $\Delta f_{\text{min}}$ by being more sensitive to frequency shifts. Another way is to decrease $k$ or $m$; in this sense, coupled oscillators lend themselves to microelectromechanical systems (MEMS), since both the mass and the stiffness of materials decrease with system size [6]. Finally, we can increase gain by finding a coupling force profile with certain properties. According to the expression for gain, there are two approaches to selecting coupling forces $F_{C}$ that minimize $\Delta F_{\text{in}}$ (fig. 4).

Figure 4: Comparison of different high-gain profiles for the coupling force. (a) Short-range force profiles can be sensitive where the curvature is high but will have a small operating region. (b) Long-range force profiles can be sensitive where the slope of the force is near the stiction value $-k$, which can provide a large operating region if the curvature is low.

The first is to increase the curvature $F_{C}^{\prime\prime}(d_{\text{eq}})$, while staying far away from the point of stiction (so $d_{\text{eq}}\gg d_{\text{stic}}$). This can be done by finding a short-range coupling force that diverges suddenly, so that the curvature is very high at the divergence. This approach lends itself to MEMS applications, because these naturally deal with short-range forces and their oscillators will only displace by a small amount. For instance, the Casimir force between a sphere and a plate, which has a very sudden onset around $d\approx 100\,\mathrm{nm}$, is one such short-range candidate that has been analyzed previously [2]. MEMS systems that utilize or measure the Casimir force have been built previously ([3], [11], [12]) and present excellent candidate platforms for a coupled oscillator system.
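Whichever approach is chosen, equation 5 is cheap to evaluate for any candidate coupling force. The sketch below is our own illustration (all names are hypothetical); the coupling force derivatives are passed in as callables so that any profile can be tried:

```python
import numpy as np

def gain(d_eq, Fc_p, Fc_pp, k, m, df_min):
    """Eq. (5). Fc_p and Fc_pp are callables returning F_C' (N/m) and
    F_C'' (N/m^2). Only valid above stiction, i.e. 1 + Fc_p(d_eq)/k > 0.
    The sign is negative for attractive couplings: the frequency drops
    as the input force pushes the oscillator towards the wall."""
    return (-Fc_pp(d_eq)
            / (4 * np.pi * df_min * k**1.5 * np.sqrt(m)
               * (1 + Fc_p(d_eq) / k) ** 1.5))
```

The smallest resolvable input force is then $1/|G|$, and the $(1+F_{C}^{\prime}/k)^{-3/2}$ factor makes the divergence near stiction explicit.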
One drawback of this approach is the increased difficulty of manufacturing due to the small distances necessarily involved: the oscillator must be placed at a very precise baseline distance from the coupling wall, and there will be little spatial tolerance to avoid stiction. Such small systems are also more difficult to fabricate and assemble.

The second method is to approach stiction and maximize the diverging term $(1+F_{C}^{\prime}(d_{\text{eq}})/k)^{-3/2}$. This requires that we have an attractive coupling force (so that $F_{C}^{\prime}<0$) and that we bring $F_{C}^{\prime}(d_{\text{eq}})$ as close as possible to (while remaining greater than) the negative spring constant $-k$. This does not universally mean having less room for error: proximity to stiction means $F_{C}^{\prime}(d_{\text{eq}})\gtrsim-k$, which does not necessarily restrict $x_{\text{stic}}-x_{\text{eq}}$ to be below any certain number. If $F_{C}^{\prime\prime}(d_{\text{eq}})$ is very low, $F_{C}^{\prime}(d_{\text{eq}})\gtrsim-k$ can be true over a large range of equilibrium distances $d_{\text{eq}}$. Further, since the term we are maximizing has a divergence, we can achieve extraordinary gain even with long-range forces. For these reasons, the system that we explore in the simulation is an example of a coupled oscillator that leverages a coupling force profile of this long-range type.

One downside of this method is the difficulty of finding such a long-range interaction, together with the fact that these forces tend to require large components as the coupling walls (e.g. large magnets or large metal plates) in order to have the desired properties, so the total size of the instrument increases. The oscillator itself, however, should remain as small as possible regardless of the chosen coupling force. Further, it can be difficult to find force profiles with steep enough curves to overpower a spring and cause stiction at any point. Indeed, in the system we select, the required spring constant is $1.78\times{10}^{-3}\,\mathrm{N/m}$, which is much softer than even most MEMS springs.

## III Coupled Oscillator Simulation

We programmed a discrete-timestep simulation of the coupled oscillator system in MATLAB R2023b. There are 3 key variables that the simulation records over time: oscillator position $x(t)$, oscillator frequency $f(t)$, and the driving argument $\theta_{D}(t)$. The driving argument is used to calculate the driving force:

$F_{D}(t)\equiv A_{D}\cos(\theta_{D}(t))$

where $A_{D}$ is a constant corresponding to the driving force amplitude. We make $\theta_{D}$ a dynamic variable (i.e. updated in each timestep along with position and velocity) because the driving frequency is continuously changing, and we therefore need to update this driving argument in small timesteps to maintain continuity in the driving force.

### Main loop

In this section, we use square brackets (e.g. $x[t]$) to denote accessing a discrete array rather than evaluating a continuous function. In each timestep of the simulation, we perform the following tasks:

1. Record system state. Save variable values $x[t]$ and $f[t]$ to memory.

2. Measure $f$.
First, we check if we are on a peak by looking at the positions recorded in the current and previous two steps ($x[t]$, $x[t-\Delta t]$, $x[t-2\Delta t]$):

$\left(x[t-\Delta t]>x[t-2\Delta t]\right)\ \text{AND}\ \left(x[t-\Delta t]>x[t]\right)\implies\text{Peak at } t-\Delta t$

If we are on a peak, we use the time difference between the current peak and the $N_{\text{cycles}}$th previous peak to obtain an averaged estimate of the frequency of the oscillator. $N_{\text{cycles}}$ is the number of oscillation cycles we average over to better estimate the frequency:

$f[t]=\frac{N_{\text{cycles}}}{t_{\text{peaks}}[n]-t_{\text{peaks}}[n-N_{\text{cycles}}]}$

where $t_{\text{peaks}}$ is an array containing the times at which each peak occurs. The variable $n$ represents the current cumulative cycle count. Otherwise, if we do not detect a peak in this step, we retain the same frequency value as in the previous step.

One of the challenges of this implementation is frequency quantization: because of the discrete timestep, the period of the oscillation is resolvable only to limited precision, and therefore so is the frequency. For a timestep $\Delta t$ and frequency $f$, the frequency grain (smallest resolvable change in frequency) $\Delta f$ is

$\Delta f=\frac{1}{2N_{\text{cycles}}}\left(\frac{1}{1/f-\Delta t}-\frac{1}{1/f+\Delta t}\right)\approx\frac{\Delta tf^{2}}{N_{\text{cycles}}}$

where we assumed that $\Delta t^{2}\ll 1/f^{2}$ for the last expression. For low-frequency systems ($1$–$100\,\mathrm{Hz}$) such as the one we study in the next section, this problem is only slightly noticeable, but when dealing with higher-frequency systems the frequency grain manifests as significant quantization of the measured frequency values. It should be noted that this is strictly a measurement problem and does not indicate that the simulation has underlying quantization: if we used a different method to measure frequency (e.g. Fourier transforms), this artifact would not exist. (A consolidated code sketch of this step and of steps 4–6 appears just after this list.)

3. Calculate the driving frequency $f_{D}$. In order to enforce phase locking, the driving force can be in one of two states: matching or adjusting (fig. 5).

Figure 5: When the matching state is activated, the frequency of the driving force is set equal to the frequency of the oscillator at that time. The program will stay in this state and the driving frequency will remain unchanged until the oscillator reaches a peak in its motion, at which point it will set the driving frequency equal to the adjustment frequency $f_{\text{adj}}$ and switch to the adjustment state.

$f_{\text{adj}}$ is calculated so that the next peak in the driving force occurs at the desired phase shift relative to the last peak in the motion of the oscillator:

$f_{\text{adj}}=\frac{360^{\circ}-\theta_{D}}{\phi_{D}}f[t]$ (6)

where $\phi_{D}$ is the driving phase, which in our case is $270^{\circ}$ (i.e. leading the motion by $90^{\circ}$). In this way, we try to correct for any deviation from the desired phase by speeding up or slowing down the driving frequency and therefore enforce phase locking in the system. Once the system reaches the next peak in the driving force, we go back to the matching state (fig. 6).
Figure 6: Example oscillator motion and how the driving force algorithm adjusts to maintain a phase of $90^{\circ}$ behind the oscillator. When the oscillator reaches a peak in its motion, the program calculates $f_{\text{adj}}$, sets $f_{D}=f_{\text{adj}}$, and switches to the adjustment state. $f_{\text{adj}}$ is calculated such that the next peak in the driving force is at the correct place, given the current frequency of the oscillator $f[t]$. Once the adjustment period ends (at the next peak of the driving force curve), the program sets $f_{D}=f[t]$ and switches to the matching state. Note that for the coupled oscillator framework, we require a driving phase that leads by $90^{\circ}$. Here we instead lag by $90^{\circ}$ for visualization clarity, but the algorithm is the same for any phase.

This algorithm will approach the ideal driving force over time as long as the changes in frequency occur on a timescale much slower than the period of oscillation. The ideal driving force that it approaches would always have the same frequency as the motion, but would be shifted by exactly the desired phase at all times. Figure 7 shows how this algorithm produces a driving force that converges to the ideal for a slowly-varying frequency.

Figure 7: Driving force calculated from the algorithm vs. the ideal driving force. Over time, errors in phase are corrected and the two curves become the same.

4. Update the driving argument and get the driving force. We use simple Euler integration to update the driving argument according to the current frequency (with $\theta_{D}$ in radians), and get the driving force from it:

$\theta_{D}[t]=\theta_{D}[t-\Delta t]+2\pi f_{D}\Delta t$
$F_{D}=A_{D}\cos(\theta_{D}[t])$

5. Calculate the sum of forces and integrate the position and velocity of the system. We use Newton's second law to calculate the acceleration, and a semi-implicit (symplectic) Euler update to calculate the current velocity and position:

$\begin{array}{rl}\Sigma F=&-kx[t]-bv[t]+F_{D}+F_{C}(x_{C}-x[t])+F_{\text{in}}+F_{\text{off}}\\ v[t]=&v[t-\Delta t]+\frac{1}{m}\Sigma F\Delta t\\ x[t]=&x[t-\Delta t]+v[t]\Delta t\end{array}$

6. Check if stiction has occurred. We check if the oscillator has experienced stiction, defined by

$x[t]\geq x_{C}\implies\text{Stiction occurred}$ (7)

If so, the simulation immediately terminates.
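The following is a compact Python sketch of the loop above (our own illustration, not the authors' MATLAB code; all names are hypothetical). It covers steps 2 and 4–6, with the step-3 state machine omitted for brevity; the phase advance is written in radians, hence the $2\pi$ factor:

```python
import numpy as np

def update_frequency(x3, t, dt, peak_times, n_cycles, f_prev):
    """Step 2: x3 holds [x(t-2dt), x(t-dt), x(t)]; peak-detection estimate."""
    if x3[1] > x3[0] and x3[1] > x3[2]:           # peak occurred at t - dt
        peak_times.append(t - dt)
        if len(peak_times) > n_cycles:             # average over n_cycles periods
            return n_cycles / (peak_times[-1] - peak_times[-1 - n_cycles])
    return f_prev                                  # no new peak: keep old estimate

def step(state, p, F_in):
    """Steps 4-6: advance the driving argument, sum forces, integrate."""
    state["theta_D"] += 2 * np.pi * state["f_D"] * p["dt"]        # step 4
    F_D = p["A_D"] * np.cos(state["theta_D"])
    F_sum = (-p["k"] * state["x"] - p["b"] * state["v"]           # step 5
             + F_D + p["F_C"](p["x_C"] - state["x"]) + F_in + p["F_off"])
    state["v"] += F_sum / p["m"] * p["dt"]         # semi-implicit Euler update
    state["x"] += state["v"] * p["dt"]
    if state["x"] >= p["x_C"]:                     # step 6, eq. (7): stiction
        raise RuntimeError("stiction: oscillator reached the coupling wall")
    return state
```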
### Input force application

When the input force is applied, the exact equilibrium position of the oscillator (determined by equation 1) is generally unknown. Therefore, if we began the simulation with an input force, the oscillator might initially be far from equilibrium, and the ensuing high-amplitude transients could cross the point of no return $x_{\text{no ret}}$ and cause stiction. In order to avoid this issue, the system is always started with no input force so that the oscillator can be placed near the initial equilibrium position, which is well known. After the transients have subsided, the input force begins to be increased at a constant rate, until the desired value is reached. The system is allowed to settle again before measurements are recorded. Although the input force is constant in the theoretical model, the system responds in a controlled manner to sufficiently slow changes in the input force. Figure 8 illustrates the onset of the input force over time.

Figure 8: Input force onset. We slowly turn on the input force $F_{\text{in}}$ and then wait for the system to settle before we record the frequency. This graphic is a visual aid and is not a simulation output. The timescales selected here are for demonstration; in the simulation there is much more time for transients to vanish compared to the oscillation period.

There are then three important times in the simulation: the input force start time $t_{\text{start}}$, the input force end time $t_{\text{end}}$ and the measurement time $t_{\text{measure}}$. All should be much larger than the oscillation period to allow for a smooth transition into the input force application.

### Simulation parameters

The system we simulated is outlined in figure 9.

Figure 9: The coupled oscillator implemented in the simulation to demonstrate the framework. A large N52 magnet is used as the coupling wall, and a small one makes up the oscillator. Both have their magnetization directions aligned horizontally so that they experience attraction. The exact distance between the magnets changes as an input force is applied, but it is always close to $13\,\mathrm{mm}$.

Table 1 contains detailed values of the parameters in this setup.

Table 1: Parameter values in simulation

| Parameter | Value (SI units) | Notes |
|---|---|---|
| Timestep $\Delta t$ | $3\times{10}^{-4}\,\mathrm{s}$ | Set by convergence tests |
| Input force start time $t_{\text{start}}$ | $25\,\mathrm{s}$ | Ample time for oscillations to settle |
| Input force end time $t_{\text{end}}$ | $75\,\mathrm{s}$ | Slow enough to avoid stiction due to transients |
| Measurement time $t_{\text{measure}}$ | $100\,\mathrm{s}$ | Ample time for oscillations to settle |
| Stiction distance $d_{\text{stic}}$ | $12.7\,\mathrm{mm}$ | Set to 0.5" |
| Starting distance $x_{C}$ | $13.335\,\mathrm{mm}$ | $x_{C}=1.05d_{\text{stic}}$ |
| Mass $m$ | $11.875\,\mathrm{\mu g}$ | Volume times density of $250\,\mathrm{\mu m}$ cubical N52 magnet [13] |
| Spring stiffness $k$ | $1.78\times{10}^{-3}\,\mathrm{N/m}$ | $k=-F_{C}^{\prime}(d_{\text{stic}})$ |
| Damping coefficient $b$ | $4.5976\times{10}^{-9}\,\mathrm{kg/s}$ | Makes $Q=1000$ |
| Oscillation frequency at starting distance | $11.7\,\mathrm{Hz}$ | Uncoupled frequency of $f_{0}=61.6\,\mathrm{Hz}$ |
| Driving force amplitude $A_{D}$ | $30\,\mathrm{pN}$ | Ensures condition 3 |
| Offset force $F_{\text{off}}$ | $-51.3\,\mathrm{\mu N}$ | $F_{\text{off}}=-F_{C}(x_{C})$ |
| Peaks averaged to get frequency $N_{\text{cycles}}$ | 20 | Creates small frequency grain |

The oscillator is composed of a $250\,\mathrm{\mu m}$-side-length cubical N52 magnet, which is the smallest commercially available N52-grade magnet, and is sold by SM Magnetics Co. A 4.5"-diameter, 3"-thick cylindrical N52 magnet acts as the coupling wall. The only relevant detail of the coupling wall is the force it applies on the oscillator, which we numerically calculated using the Finite Element Method Magnetics software by building a model of the two-magnet system and sampling force data near the desired operating region. We then fit this data to a horizontally-shifted inverse-square function to obtain the coupling force:

$F_{C}(d)=\frac{1.72\times{10}^{-7}\,\mathrm{N\,m^{2}}}{\left(d+1.27\times{10}^{-2}\,\mathrm{m}\right)^{2}}$
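Given this fit, the quasistatic prediction used in the next section (solve equation 1 for $d_{\text{eq}}$, then apply equation 4) can be sketched as follows. This is our own illustrative snippet, not the simulation itself; note that we derive the operating point self-consistently from $k$ and the fit, rather than taking the distances of Table 1 at face value, since the distance convention of the fit is not spelled out here:

```python
import numpy as np
from scipy.optimize import brentq

A, d0 = 1.72e-7, 1.27e-2              # fitted F_C(d) = A / (d + d0)^2
k, m = 1.78e-3, 11.875e-9             # N/m, kg (Table 1)

F_C = lambda d: A / (d + d0) ** 2
F_Cp = lambda d: -2 * A / (d + d0) ** 3

d_stic = (2 * A / k) ** (1 / 3) - d0  # from F_C'(d_stic) = -k
x_C = 1.05 * d_stic                   # starting distance, using Table 1's ratio
F_off = -F_C(x_C)

def f_shifted(F_in):
    """Solve eq. (1) for d_eq on (d_stic, x_C], then apply eq. (4)."""
    g = lambda d: k * (x_C - d) - F_C(d) - F_in - F_off
    d_eq = brentq(g, d_stic * (1 + 1e-9), x_C)
    return np.sqrt(1 + F_Cp(d_eq) / k) * np.sqrt(k / m) / (2 * np.pi)

print(f_shifted(20e-12))              # predicted frequency at F_in = 20 pN
```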
## IV Simulation Results and Analysis

We ran the simulation with the settings outlined in table 1. Figure 10 demonstrates a sample run with a final input force of $20\text{\,}\mathrm{pN}$. The oscillator starts at rest, with a driving frequency set near resonance. After a few oscillations, phase-locking is engaged and the driving frequency quickly shifts to the exact resonance value. At $t_{\text{start}}=25\text{\,}\mathrm{s}$, the input force begins to be applied. The input force continues increasing linearly until $t_{\text{end}}=75\text{\,}\mathrm{s}$, at which point the input force stays at a fixed value and the oscillator is allowed to settle again before the simulation ends at $t_{\text{measure}}=100\text{\,}\mathrm{s}$. By the end of the simulation, the frequency has decreased significantly, by an amount that corresponds to the input force applied.

Figure 10: Distance from the oscillator to the coupling wall vs simulation time. The input force is gradually increased in the input force onset region, which moves the oscillator closer to the stiction distance and changes its oscillation frequency. The maximum input force applied is $20\text{\,}\mathrm{pN}$. Note that by the end, the oscillation amplitude is comparable to the distance to stiction, so the linearity condition 3 is being violated.

Figure 11 shows the measured and predicted frequencies (equation 4) as the system evolves in time. For the latter, the equilibrium distance is used as an input, which is calculated by averaging the positions of the last two extrema. At $t=20\text{\,}\mathrm{s}$, the predicted frequency is within $0.1\%$ of the measured value. At $t=95\text{\,}\mathrm{s}$, it is within $1\%$. The decreased quality of the prediction can be attributed to the violation of the linearity condition 3, which was assumed in deriving the expressions.

Figure 11: Measured and predicted frequencies of the oscillator vs time in a single run of the simulation. Note that there is a brief time at the start where there have not been enough periods of oscillation to average over, so there is no measured frequency. The predicted frequency is calculated using equation 4 with the equilibrium distance of the oscillator measured in the simulation as an input.

We then ran the simulation multiple times with final input forces ranging from $0\text{\,}\mathrm{pN}$ to $22\text{\,}\mathrm{pN}$. Figure 12 shows the final frequency vs final input force of the system. The frequency decreases nonlinearly as the input force causes the system to approach stiction, and would reach zero at the point of stiction.

Figure 12: Final oscillator frequency vs final input force over multiple simulation runs. Because we are near stiction, this relationship is highly nonlinear, and tends towards a vertical asymptote as stiction is reached.

For the purposes of measurement sensitivity, we are interested in how rapidly the frequency changes per input force increment. We can study this by taking the derivative of the data in figure 12, which is shown in figure 13.
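The "measured frequency" reported above comes from averaging peak-to-peak intervals over $N_{\text{cycles}}=20$ cycles (table 1). The following is one plausible implementation of such a measurement on a sampled position trace; the function name and the simple three-point peak test are our own choices, not the paper's code.

```python
import numpy as np

def measured_frequency(x, dt, n_cycles=20):
    """Estimate the oscillation frequency (Hz) from a sampled position
    trace x by averaging the last n_cycles peak-to-peak intervals."""
    x = np.asarray(x)
    # Indices of local maxima (simple three-point peak test).
    peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    if len(peaks) < n_cycles + 1:
        return None  # not enough full periods yet, cf. figure 11
    span = (peaks[-1] - peaks[-1 - n_cycles]) * dt  # duration of n_cycles periods
    return n_cycles / span
```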
We overlay the theoretical frequency sensitivity $\left(\frac{\partial f_{\text{shifted}}}{\partial F_{\text{in}}}\right)_{\text{theory}}$, calculated using the gain expression 5: $\displaystyle\left(\frac{\partial f_{\text{shifted}}}{\partial F_{\text{in}}}\right)_{\text{theory}}$ $\displaystyle=\Delta f_{\text{min}}G$ $\displaystyle=-\frac{1}{4\pi}\frac{F_{C}^{\prime\prime}(d_{\text{eq}})}{k^{3/2}m^{1/2}}\frac{1}{\left(1+F_{C}^{\prime}(d_{\text{eq}})/k\right)^{3/2}}$ (8) We multiply by $\Delta f_{\text{min}}$ because the simulation does not include the final step of measuring the frequency.

Figure 13: Measured and predicted frequency sensitivities. The prediction is made using equation 8 and the final equilibrium distance of the oscillator.

The gain expression matches the simulated output within $3\%$ for $F_{\text{in}}=5\text{\,}\mathrm{pN}$, and within $20\%$ for $F_{\text{in}}=21\text{\,}\mathrm{pN}$. Once again, proximity to stiction violates the assumptions on which we built this expression, so this is not surprising. We note that, due to frequency quantization, the frequency sensitivity appears noisy for small input forces, so the $3\%$ figure was calculated by averaging nearby points. This effect is visible as small ripples in the low-input-force region.

## V Analysis and Application to Gradiometry

### Identification of Frequency Grain

With this system we achieved simulated sensitivities of $\sim 5\text{\times}{10}^{8}\text{\,}\mathrm{Hz/N}$. In order to convert this to a force resolution, we must know the smallest frequency shift we can measure, $\Delta f_{\text{min}}$. This number depends on the quality of the instrumentation used to make the measurement, the noise characteristics of the system, and tuned parameters like the averaging/gate time of the measurement. To find an adequate value for this number, we fed a $5\text{\,}\mathrm{Hz}$ square pulse wave with a 0.25 duty cycle to a Keysight 53220A frequency counter. The input wave was generated using a Stanford Research Systems DG645 Digital Delay/Pulse Generator. To further increase frequency resolution, we used the $10\text{\,}\mathrm{MHz}$ reference signal from the DG645 as an external timebase reference for the frequency counter. All signals coming from the DG645 are rubidium-standard. Figure 14 shows this system.

Figure 14: System used to determine the frequency resolution. The DG645 provides a rubidium-standard reference signal to the 53220A frequency counter to improve resolution. The same DG645 is used to generate 5 Hz pulses with a duty cycle of 0.25 which act as the test signal.

Using this system, we average 50 measurements (where one measurement corresponds to the completion of one gate cycle) and obtain the Allan variance of the measured frequency for a range of gate times using the built-in statistics feature of the frequency counter. The Allan variance serves as a measure of the frequency resolution at that gate time. See figure 15 for this relationship.

Figure 15: Allan deviation (frequency resolution) vs gate time for measurement. As the gate time increases, our resolution improves due to a longer averaging time. On the right axis, we show the implied force resolution (red) and gradient resolution (blue).
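The right-axis conversions in figure 15 are plain arithmetic, and can be checked directly. A minimal sketch, assuming the quoted sensitivity of $5\times10^{8}$ Hz/N and the oscillator magnetic moment of $1.5\times10^{-5}$ N/(T/m) used in the next section:

```python
SENSITIVITY = 5e8  # Hz/N, simulated coupled-oscillator sensitivity
MOMENT = 1.5e-5    # N/(T/m), oscillator magnetic moment

def resolutions(delta_f_min):
    """Convert a frequency resolution (Hz) into the implied force (N)
    and magnetic-gradient (T/m) resolutions."""
    dF = delta_f_min / SENSITIVITY  # smallest resolvable force
    dG = dF / MOMENT                # smallest resolvable field gradient
    return dF, dG

print(resolutions(1e-10))    # 1 s gate:   2e-19 N (200 zN), ~1.3e-14 T/m (13 fT/m)
print(resolutions(2.3e-13))  # 100 s gate: 4.6e-22 N (460 yN), ~3.1e-17 T/m (31 aT/m)
```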
The force resolution can be found by dividing the frequency resolution by the sensitivity of the coupled oscillator ($5\text{\times}{10}^{8}\text{\,}\mathrm{Hz/N}$), and the gradient resolution can be found by dividing the force resolution by the magnetic moment of the oscillator ($1.5\text{\times}{10}^{-3}\text{\,}\mathrm{N/(T/cm)}$). Note that we cannot observe any “bottoming out” of the Allan deviation due to frequency drift as we went to higher gate times, as is generally expected of such a study. $1000\text{\,}\mathrm{s}$ is the maximum gate time achievable on the Keysight 53220A, so if such drift effects exist, we cannot detect them with this setup. Regardless, due to the shared reference signal between the source and counter, we expect any contributions from drift to be exceedingly low. From this study, we see that at $5\text{\,}\mathrm{Hz}$ and using a $1\text{\,}\mathrm{s}$ gate time, the measurement system described above is capable of resolving frequency shifts of just below $\Delta f_{\text{min}}={10}^{-10}\text{\,}\mathrm{Hz}$.

### Force resolution

Combined with the coupled oscillator sensitivity of $5\text{\times}{10}^{8}\text{\,}\mathrm{Hz/N}$, our frequency resolution of $\Delta f_{\text{min}}={10}^{-10}\text{\,}\mathrm{Hz}$ would imply a force resolution of $\Delta F_{\text{min}}=200\text{\,}\mathrm{zN}$. If instead we use a $100\text{\,}\mathrm{s}$ gate time, the frequency resolution becomes $\Delta f_{\text{min}}=2.3\text{\times}{10}^{-13}\text{\,}\mathrm{Hz}$, implying a force resolution of $\Delta F_{\text{min}}=460\text{\,}\mathrm{yN}$. See figure 15 for a more detailed view of force resolution versus gate time.

### Application to magnetic gradiometry

We can readily apply this system to the measurement of magnetic gradients. Because the oscillator is itself a magnet, magnetic gradients in the oscillation direction will induce a force that we can treat as the input force. The system is therefore sensitive to magnetic gradients at a single point (within the oscillator magnet dimensions of $250\text{\,}\mathrm{\SIUnitSymbolMicro m}$). The oscillator magnet has a magnetic moment of $1.5\text{\times}{10}^{-5}\text{\,}\mathrm{N/(T/m)}$, so the smallest detectable gradient is $13\text{\,}\mathrm{fT/m}=130\text{\,}\mathrm{aT/cm}$ at a $1\text{\,}\mathrm{s}$ gate time and $31\text{\,}\mathrm{aT/m}=310\text{\,}\mathrm{zT/cm}$ at a $100\text{\,}\mathrm{s}$ gate time. Previous instances of $\mathrm{aT/cm}$-scale coupled oscillator gradiometry have been described [2]. However, these have had $\mathrm{nm}$-scale size constraints due to a reliance on the Casimir effect as a coupling force, which is exceedingly short-range. This work presents comparable sensitivity using a micro-scale rather than nano-scale construction.

### Theory to practice

We have described and simulated an example of coupled oscillators that is based on realistic magnetic force curves. The technology and micro-assembly techniques required to build this device are standard MEMS processes that have been demonstrated extensively in the past (e.g. [11], [12], [14]), and we believe that the next step is construction of the device.
To this end, there are technical obstacles that must be overcome and whose effects may degrade the sensitivity of a realization of the proposed device. We foresee the most difficult of these to be the following:

1. Noise: The system we described assumes a noiseless environment. However, in any laboratory implementation of this device, there are several sources of noise that limit sensitivity if not sufficiently suppressed [6]. Thermal and mechanical noise may require cooling of the system and geometric design that avoids unwanted resonances. An analysis of electrical noise requires defining how the position of the oscillator is measured and converted into an electrical signal. Particularly, the low-frequency nature of the system may make it more vulnerable to $1/f$ noise sources. Phase-lock amplifiers and emerging artificial intelligence noise-reduction schemes [15] would help lower the electrical noise significantly; however, care should be taken in adding steps to the signal pipeline, as they may create a bottleneck in frequency precision and ultimately hinder sensitivity. Finally, in the above implementation using magnets, fluctuations in the gradient of the Earth’s magnetic field may be a significant source of noise, as we are operating close to and below the geomagnetic gradient noise floor of $500\text{\,}\mathrm{fT/cm}$ [2] (regardless of whether the system is being applied to gradiometry). Overcoming this source of noise would require magnetic shielding of the apparatus, but it should be noted that even without shielding, this theoretical framework can detect much smaller signals than competing technologies that measure magnetic fields directly, since, relative to sources of interest (such as the heart), the noise floor for geomagnetic fields is much larger than that for geomagnetic field gradients.

2. Damping: In order to achieve a quality factor of $Q=1000$, the oscillator will likely need to be placed in vacuum, especially since the spring constant and mass are particularly low. It may be possible to build this device with a lower quality factor and retain the desired sensitivity, but such analysis is beyond the scope of this work.

3. Weakness of driving force: The driving force used was $30\text{\,}\mathrm{pN}$ in amplitude, which may be difficult to achieve with traditional driving mechanisms such as parallel electrodes. Once again, this is in large part because the spring constant is so small that the system is easily excitable, and in order to maintain small oscillation amplitudes so close to stiction we require a small driving force.

4. Softness of spring: The spring in the proposed model has a low stiffness of $k=1.78\text{\times}{10}^{-3}\text{\,}\mathrm{N/m}$, which is difficult to fabricate without increasing susceptibility to undesired vibrational modes. Particularly, long spring or cantilever designs would be sensitive to torsion, which would make the oscillator sensitive to magnetic fields and introduce the high noise floors associated with them.

5. Reduced dynamic range due to stiction: A strong enough input force has the potential to induce stiction. In the system described, an input force of $23\text{\,}\mathrm{pN}$ would be sufficient to cause stiction. This may be avoided by making the offset force dynamic using a proportional-integral-derivative (PID) controller, so that it matches the coupling force even as the system is offset from the origin.
Three of these points (damping, weakness of driving force, and softness of spring) can be aided by increasing the coupling force strength or using a larger magnet as the oscillator. This would lead to a stiffer force potential and therefore a stiffer spring, which would demand less damping and a stronger driving force. Despite these potential difficulties, the system described above is made of simple parts and shows a potential gradient sensitivity that is well in excess of the top gradiometry approaches [14].

## VI Acknowledgments

We would like to recognize the great value of discussions with Nicholas Fuhr and Zhancheng (Ryan) Yao during the making and tuning of this simulation. This work was supported in part by the SONY Chip-scale Magnetocardiography grant and Boston University.

## VII References

* [2] Javor, J. et al. Analysis of a Casimir-driven parametric amplifier with resilience to Casimir pull-in for MEMS single-point magnetic gradiometry. Microsystems & Nanoengineering (2021) 7:73
* [3] Javor, J. et al. Zeptometer Metrology Using the Casimir Effect, Journal of Low Temperature Physics 208:147–159 (2022)
* [4] Imboden, M. et al. Design of a Casimir-driven parametric amplifier. Journal of Applied Physics (2014); 116 (13): 134504
* [5] Zhang, X. et al. Precision measurement and frequency metrology with ultracold atoms, National Science Review 3:189–200 (2016)
* [6] Chang, L. Foundations of MEMS (Second Edition), Pearson Education Asia (2012)
* [7] Yao, X. et al. Background noise estimation of the geomagnetic signal, Geosci. Instrum. Method. Data Syst., 7, 189–193 (2018)
* [8] Swain, P. P. et al. A feasibility study to measure magnetocardiography (MCG) in unshielded environment using first order gradiometer, Biomed. Signal Process. Control 55, 101664 (2020)
* [9] Albrecht, T. R. et al. Frequency modulation detection using high‐Q cantilevers for enhanced force microscope sensitivity, Journal of Applied Physics 69(2): 668-673 (1991)
* [10] Nony, L. et al. A nc-AFM simulator with Phase Locked Loop-controlled frequency detection and excitation, arXiv preprint https://arxiv.org/abs/physics/0701343 (2007)
* [11] Stange et al. Building a Casimir metrology platform using a commercial MEMS sensor, Microsystems & Nanoengineering (2019)
* [12] H. B. Chan, et al. Quantum Mechanical Actuation of Microelectromechanical Systems by the Casimir Force, Science 291:1941-1944 (2001)
* [13] Advanced Magnets Co. Typical Physical and Chemical Properties of Some Magnetic Materials. https://www.advancedmagnets.com/custom-magnets/ (2022) [Accessed Jan. 29, 2024].
* [14] Javor, J. et al. 100 pT/cm single-point MEMS magnetic gradiometer from a commercial accelerometer. Microsyst. Nanoeng. 6, 1–13 (2020)
* [15] Porr, B. et al. Real-time noise cancellation with deep learning, PLoS One (2022)
# Energy dispersive X-ray spectroscopy of atomically thin semiconductors and heterostructures

Anna Rupp Fakultät für Physik, Munich Quantum Center (MQC), and Center for NanoScience (CeNS), Ludwig-Maximilians-Universität München, Geschwister-Scholl-Platz 1, 80539 München, Germany Jonas Göser Fakultät für Physik, Munich Quantum Center (MQC), and Center for NanoScience (CeNS), Ludwig-Maximilians-Universität München, Geschwister-Scholl-Platz 1, 80539 München, Germany Zhijie Li Fakultät für Physik, Munich Quantum Center (MQC), and Center for NanoScience (CeNS), Ludwig-Maximilians-Universität München, Geschwister-Scholl-Platz 1, 80539 München, Germany Philipp Altpeter Fakultät für Physik, Munich Quantum Center (MQC), and Center for NanoScience (CeNS), Ludwig-Maximilians-Universität München, Geschwister-Scholl-Platz 1, 80539 München, Germany Ismail Bilgin Fakultät für Physik, Munich Quantum Center (MQC), and Center for NanoScience (CeNS), Ludwig-Maximilians-Universität München, Geschwister-Scholl-Platz 1, 80539 München, Germany Alexander Högele Fakultät für Physik, Munich Quantum Center (MQC), and Center for NanoScience (CeNS), Ludwig-Maximilians-Universität München, Geschwister-Scholl-Platz 1, 80539 München, Germany Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, 80799 München, Germany

###### Abstract

We report the implementation of energy dispersive X-ray spectroscopy for high-resolution inspection of layered semiconductors in the form of atomically thin transition metal dichalcogenides down to the monolayer limit. The technique is based on a scanning electron microscope equipped with a silicon drift detector for energy dispersive X-ray analysis. By optimizing operational parameters in numerical simulations and experiments, we achieve layer-resolving sensitivity for few-layer crystals down to the monolayer, and demonstrate elemental composition profiling in vertical and lateral heterobilayers of transition metal dichalcogenides. The technique can be straightforwardly applied to other layered two-dimensional materials and van der Waals heterostructures, thus expanding the experimental toolbox for quantitative characterization of layer number, atomic composition, or alloy gradients for atomically thin materials and devices.

## I Introduction

The realm of two-dimensional materials with capacity for band engineering through elemental composition at the monolayer level and emergent hybridization phenomena in layered van der Waals heterostructures [1] represents a new paradigm in fundamental condensed matter research with applications in electronics [2, 3, 4] and optoelectronics [5, 6, 7, 8]. In semiconducting transition metal dichalcogenides (TMDs), the band gap can be tuned by layer number [9, 10] or the alloy composition of the respective crystal constituents [11, 12, 13, 14, 15], which also makes it possible to engineer the conduction band spin-orbit splitting [16] and thus the valley polarization [17] in monolayers, or conduction and valence band offsets in respective heterostructures [18]. The alloy composition can be adjusted in different TMD synthesis methods, including chemical vapor deposition (CVD) [19, 20, 21] which, under optimized conditions, yields laterally extended monolayer crystals [22, 23], homobilayers and few-layer crystals [24], or lateral [25, 26] and vertical [27] heterostructures. The resulting crystals often exhibit characteristic triangular shapes [28, 29], allowing for simple identification of single-crystal monolayers with standard optical microscopy.
More quantitative inspection of the layer number and composition in few-layer crystals can be performed by optical spectroscopy means including photoluminescence (PL) [9, 10, 24, 23, 30, 26] and Raman mapping [31, 32, 33, 27, 19]. These techniques, bound in lateral resolution by the optical diffraction limit to a few hundred nanometers, are complemented by electron spectroscopy techniques such as X-ray photoelectron spectroscopy [24, 33, 22, 19] or Auger electron spectroscopy [14]. Energy dispersive X-ray (EDX) spectroscopy features a similarly high spatial resolution and additionally provides quantitative elemental analysis [23]. Implemented in transmission electron microscopes (TEM), it has been successfully applied to two-dimensional materials [34, 35, 25, 27, 36, 37], with the drawback of the involved sample preparation required for TEM experiments.

Figure 1: (a) Illustration of energy dispersive X-ray spectroscopy in a scanning electron microscope (not to scale): upon excitation with primary electrons (red incoming beam at normal incidence for zero inclination angle $\theta$), X-rays (black arrow) reach the EDX silicon drift detector from the interaction volume (highlighted in red by primary electron trajectories) bound by the penetration range $R$. (b) EDX spectra of bulk transition metal dichalcogenide crystals MoSe2 (left panel) and MoS2 (right panel). For each spectrum, the intensity was normalized to the respective maximum; characteristic peaks of transition metal Molybdenum (Mo: L3M1, L3M5, and L2M4 at $2.020$, $2.293$ and $2.395$ keV) and chalcogens Selenium (Se: L2M1 and L3M45 at $1.245$ and $1.379$ keV) and Sulfur (S: KL3 at $2.308$ keV) are labelled explicitly.

In the following, we demonstrate how to adopt EDX analysis for TMD crystals in a standard scanning electron microscope (SEM). For layered TMD materials, sample preparation methods by exfoliation stamping [38] or CVD synthesis on Si/SiO2 and other substrates are well established, without the need of modification for EDX spectroscopy. To date, however, the application of SEM-EDX analysis to few-layer TMD crystals has been impeded by the small interaction volume bound by the monolayer thickness to below one nanometer [39]. By optimizing the operation parameters in numerical simulations and calibration experiments, we establish EDX spectroscopy in SEM as a layer-resolving technique for TMD semiconductors, with sensitivity to alloy composition down to the monolayer limit. This feature, particularly beneficial for the characterization of CVD-grown TMD crystals with spatially varying alloy gradients, is confirmed by EDX profiling of a monolayer-thin lateral heterostructure.

## II Experimental methods

With access to efficient solid-state drift detectors in the 1960s, EDX spectroscopy became increasingly available for basic characterization of bulk materials with applications in material science, physics, chemistry, biology and medicine [40, 41, 42, 43, 44, 45]. In brief, EDX is based on X-ray detection in electron microscopes, as illustrated in Fig. 1(a). The primary electron beam impinging on the sample undergoes different interactions upon propagation through the sample. The associated inelastic processes give rise to emission of secondary rays from the interaction volume, including element-specific X-rays relevant for EDX. X-ray radiation is generated when a vacancy created by primary electrons in the atomic inner-shells is refilled by outer-shell electrons.
The energy of such a transition is characteristic of the element and shells involved, and is dissipated either via Auger electrons (predominantly for light elements) or by X-ray radiation (more likely for heavier elements), which can be recorded with an EDX detector [46, 47, 48]. The respective transitions give rise to peaks in the EDX spectra, classified by a capital letter (e.g. K, L, M) corresponding to the core level to which the de-excitation occurs with the subshell as a subscript (e.g. 1, 2, 3), followed by the letter and subscript of the original state. Using tables of element-specific transition energies and probabilities, EDX spectroscopy provides quantitative means for material composition analysis [46].

To adopt EDX spectroscopy in our SEM (Zeiss, LEO DSM 982) equipped with an EDX detector (Oxford Instruments, X-MaxN 50Standard with 50 mm$^2$ detector area and an angle of 35° between the detector axis and the horizontally oriented sample) for a quantitative characterization of TMD crystals down to the monolayer limit, we first performed EDX signal calibration with MoSe2 and MoS2 bulk crystals. Bulk crystals were placed on thermal silicon oxide substrate ($285$ nm SiO2 on Si) and mounted in the SEM together with a copper tape above the substrate in close proximity to the sample to reduce beam drifts during the measurements. EDX spectra of bulk MoSe2 and MoS2, recorded with an acquisition time of $5$ min for an aperture of $30$ µm and $10$ keV electron beam energy, yielding a 95 pA sample current and a deadtime below $20\%$ (acquisition software was used to correct for the deadtime and window transmission), are shown in the left and right panels of Fig. 1(b) with characteristic peaks of Molybdenum (Mo), Selenium (Se) and Sulfur (S). The respective EDX peaks, on top of a weak but finite background of continuous X-ray bremsstrahlung, are proportional to the concentration of elements present in the probe volume of the sample. For element-selective analysis, the characteristic peaks were fitted by Gaussians to yield the total element-specific EDX signal as the sum over all peaks. This procedure resulted in a composition detection accuracy of $\pm\,1\>\%$ and $\pm\,2\>\%$ for bulk crystals of MoSe2 and MoS2 with spectrally distinct and overlapping peaks, respectively.

As opposed to TMD bulk crystals, EDX analysis of monolayers is much more challenging because of much smaller interaction volumes limited in one dimension to the few-atom layer. As highlighted in the schematic illustration in Fig. 1(a), X-rays are generated along trajectories of primary electrons with energies sufficient for ionization of inner-shell atomic electrons. The corresponding electron trajectory range $R$ (in nm), as indicated in Fig. 1(a), contains more than 95$\%$ of such trajectories and can be described by the Kanaya-Okayama equation [49]: $R=27.6\,E_{0}^{1.67}A/(Z^{0.89}\rho),$ (1) where $E_{0}$ is the energy of the incoming electrons (in keV), $A$ the atomic mass, $Z$ the atomic number, and $\rho$ the material density (in g/cm$^3$). For TMD bulk crystals and an electron energy of $10$ keV, this depth is about $650$ nm, which compares unfavorably with the TMD monolayer thickness below $1$ nm. To increase the EDX signal for monolayers above the noise floor, we performed optimization of operation parameters both in Monte Carlo simulations and experiments.
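Equation 1 is easy to evaluate directly as a sanity check on the $\sim 650$ nm figure. In the sketch below, the atom-averaged mass and atomic number and the density assumed for MoSe2 are our own rough estimates, not values from the paper:

```python
def kanaya_okayama_range(E0, A, Z, rho):
    """Electron penetration range R in nm (equation 1);
    E0 in keV, A in atomic mass units, rho in g/cm^3."""
    return 27.6 * E0 ** 1.67 * A / (Z ** 0.89 * rho)

# Rough check for bulk MoSe2 at 10 keV, using our assumed atom-averaged
# effective values A ~ 84.6, Z ~ 36.7, and rho ~ 6.9 g/cm^3:
print(kanaya_okayama_range(10, 84.6, 36.7, 6.9))  # ~640 nm, cf. ~650 nm above
```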
Numerical simulations of different operation conditions were carried out with the software package for quantitative X-ray microanalysis NIST DTSA-II [50] for a MoSe2 monolayer on 285 nm SiO2 on Si substrate. In our simulations, we identified two key factors for signal enhancement: increased interaction volume with the monolayer and optimization of the signal intensity according to the non-linear ionization cross-section. The former can be achieved by tilting the sample away from the inclination angle of $\theta=0\degree$ at normal incidence, while the latter effect can be accounted for by adjusting the energy of the incoming electrons, keeping in mind that lower energies would effectively increase the interaction volume with the monolayer at the sample surface by decreasing the penetration range according to Eq. 1, while still maintaining sufficiently high energies for ionization. Guided by the simulations, we optimized both key parameters in experiments, with results shown in Fig. 2.

Figure 2: (a) EDX intensity of Mo and Se peaks for monolayer MoSe2 as a function of the sample tilt angle $\theta$ at electron beam energy of $5$ keV. The insets show Monte Carlo simulations of the interaction volumes for $0\degree$ and $80\degree$ tilt. (b) Simulated EDX intensity (open circles) of Mo and Se peaks for monolayer MoSe2 as a function of electron beam energy at zero tilt and calculated ionization cross section $\sigma$ (solid lines) for Se L3M45 and Mo L2,3M4,5 transitions. (c) EDX intensity of Mo and Se peaks for monolayer MoSe2 as a function of electron beam energy at a tilt angle of $80\degree$. The insets show Monte Carlo simulations of the interaction volumes at $5$ and $10$ keV beam energy for a tilt angle of $80\degree$. All scale bars are $300$ nm; all data were recorded with an aperture of $30$ µm and $10$ min acquisition time. The data in (a) and (c) were normalized to the KL3 line of Oxygen in the underlying SiO2 substrate.

The insets of Fig. 2(a) illustrate the interaction volumes obtained from simulations for tilt angles of $0\degree$ and $80\degree$ at $5$ keV electron beam energy. Obviously, the interaction volume in a tilted geometry samples a larger area of the TMD monolayer on the sample surface. Consistently, the experimentally detected EDX intensity increases upon sample tilt from vanishingly small values at small angles by roughly two orders of magnitude for an inclination angle of $80\degree$, as evidenced for both Mo and Se elements by experimental and numerical results in Fig. 2(a). In addition to the increase of the trajectory length through the monolayer as the inverse cosine of the tilt angle, large tilt angles favor a reentrance of scattered electrons into the monolayer for successive X-ray generation. For the data in Fig. 2(a), the working distance between the cathode and the sample was optimized at each angle of inclination for maximum EDX signal. It is worth noting that the optimal working distance is specific to the configuration of the cathode and the EDX detector in the SEM and thus should be optimized consistently. For our SEM, a working distance of $7.5$ mm proved optimal.

The results shown in Fig. 2(b) and (c) highlight the dependence of the X-ray intensity on the beam energy. For a tilt angle of $0\degree$, our simulations shown by open circles in Fig. 2(b) predict maxima in the X-ray signal as a function of the incoming electron beam energy: the element-specific EDX intensities exhibit onsets at the ionization energy $E_{n}$ of the respective element shell $n$ (at $1.436$ keV for the Se L3M45 line, and $2.520$ and $2.625$ keV for the Mo L3M5 and L2M4 lines [51]) and peak around twice to three times the ionization energy. The functional form of this behavior is dictated by the ionization cross section, shown for both elements as solid lines in Fig. 2(b) and obtained (in cm$^2$) from the equation [46]: $\sigma_{n}=6.51\cdot 10^{-20}\frac{z_{n}b_{n}}{E_{0}E_{n}}\ln\left(\frac{c_{n}E_{0}}{E_{n}}\right),$ (2) where $n$ is the shell number, $z_{n}$ the number of shell electrons, $E_{0}$ and $E_{n}$ (both in keV) are the beam and ionization energies of the shell, respectively, and $b_{n}$ and $c_{n}$ are effective Bethe parameters for a given element and shell [52]. The calculated cross-sections for Mo and Se transitions exhibit maxima around $7$ and $4$ keV in very good agreement with the functional form of the simulated EDX intensities.
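Equation 2 is likewise straightforward to evaluate. The following is a direct transcription into code; the Bethe parameters $b_n$ and $c_n$ must be supplied from tabulations such as [52], and the below-threshold guard is our own sketch choice:

```python
import math

def sigma_n(E0, En, z_n, b_n, c_n):
    """Inner-shell ionization cross section in cm^2 (equation 2).
    E0, En in keV; z_n = number of shell electrons;
    b_n, c_n = effective Bethe parameters for the element and shell."""
    if c_n * E0 <= En:
        return 0.0  # below the effective ionization threshold, no signal
    return 6.51e-20 * z_n * b_n / (E0 * En) * math.log(c_n * E0 / En)
```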
Figure 3: (a) Top panel: Optical micrograph of a few-layer MoSe2 crystal on SiO2 with regions from one to five layer thickness. Bottom panel: Corresponding AFM topography scans along the dashed lines in the optical image. Note that the height of the first monolayer terrace is larger than the equidistant height steps for succeeding layers due to TMD-substrate interactions [53]. (b) EDX intensity of Mo and Se peaks as a function of layer number down to the monolayer limit, shown together with a CVD-grown monolayer. All data were normalized to the peak intensity of the exfoliated monolayer. (c) and (d) Same as (a) and (b) but for MoS2. All data were recorded with an aperture of $30$ µm and $10$ min acquisition time at $75\degree$ tilt and $5$ keV electron beam energy.

The data in Fig. 2(c) provide experimental proof for the enhancement in X-ray intensity anticipated from simulations. The signal was recorded on a MoSe2 monolayer for a tilt angle of $80\degree$ and normalized to the K$\alpha$ line of oxygen in the underlying thermal oxide of the Si/SiO2 substrate. Enhancement maxima were identified as a function of the electron beam energy in Fig. 2(c) around $3$ and $5$ keV for Se and Mo signals, respectively. Although the energies of maximum enhancement are not identical to the maxima in the scattering cross-section of Fig. 2(b) due to the normalization procedure, the overall behavior of the enhancement in intensity is clearly confirmed. For simultaneous enhancement of Mo and Se signals, we chose an electron beam energy of $5$ keV as an optimal tradeoff, which also favorably increases the interaction volume near the surface as shown by the insets of Fig. 2(c) for $5$ and $10$ keV electron beam energies. We note that an electron beam energy around $5$ keV was also found to be optimal for other TMD materials including MoS2, WSe2 and WS2.

## III Results

With the optimized operation parameters for EDX analysis of TMD monolayers at hand, we demonstrate in the following the layer-resolving performance of EDX spectroscopy on exfoliated few-layer MoSe2 and MoS2 crystals and CVD-grown monolayers. Further, we demonstrate the lateral analysis of atomic composition in extended vertical heterostructures, homobilayers and lateral heterobilayers. Optical images of few-layer MoSe2 and MoS2 on Si/SiO2 substrate are shown in the top panels of Fig. 3(a) and (c), respectively.
The layer number was determined with PL in the monolayer limit and with AFM for multilayers. The AFM scans in the bottom panels of Fig. 3(a) and (c) consistently identify extended crystal terraces of one to five layers.

Figure 4: (a) EDX profile of an exfoliation-assembled MoSe2-MoS2 vertical heterostructure on SiO2 along the dashed line in the optical micrograph (inset). (b) and (c) Same for CVD-grown MoSe2 homobilayer and MoSe2-MoS2 lateral heterobilayer. The arrows in (c) indicate representative positions where the alloy concentration was determined for the inner and outer regions of the MoSe2-MoS2 heterobilayer. All data were normalized to the KL3 line of Oxygen in the underlying SiO2 substrate.

At each terrace, EDX signal acquisition was performed for $10$ min with an electron beam energy of $5$ keV (with 85 pA sample current through a 30 µm aperture and a deadtime below $5\%$) for a sample tilt angle of $75\degree$, with results shown in Fig. 3(b) and (d) for exfoliated few-layer crystals with a consecutively decreasing number of layers. Additionally, we recorded single-crystal CVD-grown monolayers of MoSe2 and MoS2 on complementary samples (rightmost data points). All data were normalized to the respective EDX intensity of exfoliated monolayers with a detection accuracy of $\pm 5\>\%$ and $\pm 8\>\%$ for MoSe2 and MoS2 monolayers, respectively. The observation of equidistant steps in the EDX intensity as a function of the number of layers unambiguously confirms the single-layer sensitivity of our measurements as well as identical EDX signals (within error bars) for exfoliated and CVD-grown monolayers.

This quantitative calibration of material-specific EDX signals down to monolayers of exfoliated and CVD-synthesized TMDs provides means for layer number and composition analysis in laterally extended homo- and heterobilayer crystals. To this end, we fabricated on Si/SiO2 substrates a vertically stacked MoSe2-MoS2 heterobilayer by standard exfoliation stamping of individual monolayers, as well as a vertical MoSe2 homobilayer and a lateral MoSe2-MoS2 heterobilayer by CVD synthesis. The optical micrographs of each sample are shown as insets in Fig. 4(a), (b) and (c). The respective EDX profiles were recorded along the dashed lines in the insets as consecutive spectra upon lateral displacement of the electron beam with respect to the sample. For all raster-step EDX measurements, the inclination angle was reduced to $45\degree$ for higher spatial resolution, below $300$ nm as estimated from numerical simulations for an electron beam energy of $5$ keV. With this beam energy, EDX data were recorded in discrete steps with an acquisition time of $30$ min at each spot.

The optical micrograph in Fig. 4(a) shows the vertical MoSe2-MoS2 heterobilayer region and indicates by the dashed line the lateral trajectory of the EDX profile over $15$ µm recorded in consecutive steps of $1$ µm from the lower left to the upper right point of the line. It starts out with Mo and Se intensities characteristic of monolayer MoSe2 and jumps after $5$ µm in just one lateral step by the excess contributions of Mo and S of monolayer MoS2. After four additional lateral steps, the EDX signal drops to zero within one step away from the heterostructure. Corresponding levels of discrete change in the EDX intensity profile were detected for the CVD-grown MoSe2 homobilayer of Fig. 4(b) with a total distance of $25$ µm in $0.8$ µm steps.
The transverse passage of the homobilayer nearly orthogonal to the left triangle edge resulted in sharp jumps of the detected EDX intensity, doubling the characteristic signals of Mo and Se in two consecutive steps and thus unambiguously identifying the transition from monolayer to bilayer. As expected, the EDX profile shows the reverse behavior upon further transition away from the flake with a simultaneous drop of characteristic Mo and Se signals to zero at the bare substrate. It is worth noting the vanishing contamination of the CVD-grown terraces and the underlying substrate by other elements.

Finally, we demonstrate that optimized EDX spectroscopy is powerful in quantifying alloyed layer composition. To this end, we inspected a CVD-grown lateral MoSe2-MoS2 heterobilayer shown in the optical micrograph of Fig. 4(c) with the inner MoSe2 monolayer triangle grown first (dark triangle) and the outer lateral MoS2 boundary (lighter regions) added in a subsequent growth step. The corresponding EDX profile was performed over $45$ µm in steps of $2$ µm along the dashed line. Contrary to well-defined boundaries between MoS2 and MoSe2 regions expected from sequential growth, the EDX profile reveals cross-contamination of the adjacent regions by S and Se chalcogens. For points in the outer and inner regions of the lateral heterobilayer indicated by arrows in Fig. 4(c), we determined the composition fraction $x$ of the MoSe2xS2(1-x) alloy as $0.15\pm 0.05$ and $0.82\pm 0.05$, respectively. At the boundary between the inner and outer regions, the EDX profile clearly reflects a gradient in the S and Se concentrations in the presence of a constant Mo concentration. Given the spatial resolution of $300$ nm and a step size of $2$ µm, EDX profiling thus unambiguously detects a varying alloy concentration that is not obvious in the optical micrograph, which shows sharp delimiting boundaries between the inner triangle and the outer monolayer region with conformal geometry. Similar observations were made on a lateral CVD-grown WSe2-WS2 heterobilayer (data not shown), confirming cross-contamination of the chalcogen atoms in the growth process [24, 36]. These results highlight the generic analytic power of layer- and element-sensitive EDX profiling of TMD heterostructures down to the monolayer limit.
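For orientation, one simple way to turn calibrated chalcogen intensities into an alloy fraction is to normalize each signal to its pure-monolayer reference and take the Se share. This is our own assumed quantification scheme for illustration; the exact procedure used for the values above may differ.

```python
def alloy_fraction(I_Se, I_S, I_Se_ref, I_S_ref):
    """Estimate x in a MoSe2xS2(1-x) alloy from chalcogen EDX intensities,
    normalizing each to its pure-monolayer calibration value (assumed scheme)."""
    se = I_Se / I_Se_ref  # Se signal relative to pure MoSe2 monolayer
    s = I_S / I_S_ref     # S signal relative to pure MoS2 monolayer
    return se / (se + s)
```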
## IV Conclusions

In conclusion, we reported an optimized implementation of EDX spectroscopy integrated in a SEM for elemental profiling of semiconducting TMD crystals down to the monolayer limit. The layer-resolving sensitivity was achieved by optimizing operational parameters in both simulations and experiments. Based on quantitative calibration experiments of element-specific EDX intensities on bulk and few-layer TMD crystals, we demonstrated the applicability of the technique to layer number, elemental composition and alloy gradient detection by mapping out EDX profiles of vertical and lateral TMD heterostructures synthesized by CVD or fabricated by exfoliation stacking. Since EDX spectroscopy is not limited to the specific materials used in our study, we anticipate that SEM-based EDX analysis of varying element and alloy compositions in layered crystals will become a valuable characterization method for the entire class of two-dimensional materials and their van der Waals heterostructures.

## V Acknowledgements

This research was funded by the European Research Council (ERC) under the Grant Agreement No. 772195 as well as the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the Priority Programme SPP 2244 2DMP and Germany’s Excellence Strategy EXC-2111-390814868. A. R. acknowledges funding by the Munich Quantum Valley doctoral fellowship program within the Bavarian initiative “Hightech Agenda Bayern Plus”. Z. Li was supported by the China Scholarship Council (CSC), No. 201808140196. I. B. acknowledges support from the Alexander von Humboldt Foundation, and A. H. from the Center for NanoScience (CeNS) and the LMUinnovativ project Functional Nanosystems (FuNS).

## REFERENCES

* Geim and Grigorieva [2013] A. K. Geim and I. V. Grigorieva, Nature (London) 499, 419 (2013).
* Radisavljevic _et al._ [2011] B. Radisavljevic, A. Radenovic, J. Brivio, V. Giacometti, and A. Kis, Nat. Nanotechnol. 6, 147 (2011).
* Wang _et al._ [2012] H. Wang, L. Yu, Y.-H. Lee, Y. Shi, A. Hsu, M. L. Chin, L.-J. Li, M. Dubey, J. Kong, and T. Palacios, Nano Lett. 12, 4674 (2012).
* Kang _et al._ [2015] K. Kang, S. Xie, L. Huang, Y. Han, P. Y. Huang, K. F. Mak, C.-J. Kim, D. Muller, and J. Park, Nature (London) 520, 656 (2015).
* Lopez-Sanchez _et al._ [2013] O. Lopez-Sanchez, D. Lembke, M. Kayci, A. Radenovic, and A. Kis, Nat. Nanotechnol. 8, 497 (2013).
* Britnell _et al._ [2013] L. Britnell, R. M. Ribeiro, A. Eckmann, R. Jalil, B. D. Belle, A. Mishchenko, Y.-J. Kim, R. V. Gorbachev, T. Georgiou, S. V. Morozov, _et al._, Science 340, 1311 (2013).
* Bernardi _et al._ [2013] M. Bernardi, M. Palummo, and J. C. Grossman, Nano Lett. 13, 3664 (2013).
* Wu _et al._ [2015] S. Wu, S. Buckley, J. R. Schaibley, L. Feng, J. Yan, D. G. Mandrus, F. Hatami, W. Yao, J. Vučković, A. Majumdar, and X. Xu, Nature (London) 520, 69 (2015).
* Mak _et al._ [2010] K. F. Mak, C. Lee, J. Hone, J. Shan, and T. Heinz, Phys. Rev. Lett. 105, 136805 (2010).
* Splendiani _et al._ [2010] A. Splendiani, L. Sun, Y. Zhang, T. Li, J. Kim, C.-Y. Chim, G. Galli, and F. Wang, Nano Lett. 10, 1271 (2010).
* Komsa and Krasheninnikov [2012] H.-P. Komsa and A. V. Krasheninnikov, J. Phys. Chem. Lett. 3, 3652 (2012).
* Chen _et al._ [2013] Y. Chen, J. Xi, D. O. Dumcenco, Z. Liu, K. Suenaga, D. Wang, Z. Shuai, Y.-S. Huang, and L. Xie, ACS Nano 7, 4610 (2013).
* Zhang _et al._ [2014] M. Zhang, J. Wu, Y. Zhu, D. O. Dumcenco, J. Hong, N. Mao, S. Deng, Y. Chen, Y. Yang, C. Jin, _et al._, ACS Nano 8, 7130 (2014).
* Tongay _et al._ [2014] S. Tongay, D. S. Narang, J. Kang, W. Fan, C. Ko, A. V. Luce, K. X. Wang, J. Suh, K. Patel, V. Pathak, _et al._, Appl. Phys. Lett. 104, 012101 (2014).
* Xie [2015] L. Xie, Nanoscale 7, 18392 (2015).
* Wang _et al._ [2015] G. Wang, C. Robert, A. Suslu, B. Chen, S. Yang, S. Alamdari, I. C. Gerber, T. Amand, X. Marie, S. Tongay, and B. Urbaszek, Nat. Commun. 6, 10110 (2015).
* Liu _et al._ [2020] S. Liu, A. Granados del Águila, X. Liu, Y. Zhu, Y. Han, A. Chaturvedi, P. Gong, H. Yu, H. Zhang, W. Yao, and Q. Xiong, ACS Nano 14, 9873 (2020).
* Zi _et al._ [2019] Y. Zi, C. Li, C. Niu, F. Wang, J.-H. Cho, and Y. Jia, J. Phys. Condens. Matter 31, 435503 (2019).
* Fu _et al._ [2015] Q. Fu, L. Yang, W. Wang, A. Han, J. Huang, P. Du, Z. Fan, J. Zhang, and B. Xiang, Adv. Mater. 27, 4732 (2015).
* Apte _et al._ [2018] A. Apte, V. Kochat, P. Rajak, A. Krishnamoorthy, P. Manimunda, J. A. Hachtel, J. C. Idrobo, S. A. Syed Amanulla, P. Vashishta, A. Nakano, _et al._, ACS Nano 12, 3468 (2018).
* Zhang _et al._ [2019] Y. Zhang, Y. Yao, M. G. Sendeku, L. Yin, X. Zhan, F. Wang, Z. Wang, and J. He, Adv. Mater. 31, 1901694 (2019).
* Feng _et al._ [2014] Q. Feng, Y. Zhu, J. Hong, M. Zhang, W. Duan, N. Mao, J. Wu, H. Xu, F. Dong, F. Lin, _et al._ , Adv. Mater. 26, 2648 (2014). * Li _et al._ [2014] H. Li, X. Duan, X. Wu, X. Zhuang, H. Zhou, Q. Zhang, X. Zhu, W. Hu, P. Ren, P. Guo, _et al._ , J. Am. Chem. Soc. 136, 3756 (2014). * Gong _et al._ [2014] Y. Gong, Z. Liu, A. R. Lupini, G. Shi, J. Lin, S. Najmaei, Z. Lin, A. L. Elías, A. Berkdemir, G. You, _et al._ , Nano Lett. 14, 442 (2014). * Duan _et al._ [2014] X. Duan, C. Wang, J. C. Shaw, R. Cheng, Y. Chen, H. Li, X. Wu, Y. Tang, Q. Zhang, A. Pan, _et al._ , Nat. Nanotechnol. 9, 1024 (2014). * Li _et al._ [2015] H. Li, Q. Zhang, X. Duan, X. Wu, X. Fan, X. Zhu, X. Zhuang, W. Hu, H. Zhou, A. Pan, and X. Duan, J. Am. Chem. Soc. 137, 5284 (2015). * Song _et al._ [2015] J.-G. Song, G. H. Ryu, S. J. Lee, S. Sim, C. W. Lee, T. Choi, H. Jung, Y. Kim, Z. Lee, J.-M. Myoung, _et al._ , Nat. Commun. 6, 1 (2015). * van der Zande _et al._ [2013] A. M. van der Zande, P. Y. Huang, D. A. Chenet, T. C. Berkelbach, Y. You, G.-H. Lee, T. F. Heinz, D. R. Reichman, D. A. Muller, and J. C. Hone, Nat. Mater. 12, 554 (2013). * Najmaei _et al._ [2013] S. Najmaei, Z. Liu, W. Zhou, X. Zou, G. Shi, S. Lei, B. I. Yakobson, J.-C. Idrobo, P. M. Ajayan, and J. Lou, Nat. Mater. 12, 754 (2013). * Zhang _et al._ [2015] W. Zhang, X. Li, T. Jiang, J. Song, Y. Lin, L. Zhu, and X. Xu, Nanoscale 7, 13554 (2015). * Wang _et al._ [2013] X. Wang, H. Feng, Y. Wu, and L. Jiao, J. Am. Chem. Soc. 135, 5304 (2013). * Chen _et al._ [2014] Y. Chen, D. O. Dumcenco, Y. Zhu, X. Zhang, N. Mao, Q. Feng, M. Zhang, J. Zhang, P.-H. Tan, Y.-S. Huang, and L. Xie, Nanoscale 6, 2833 (2014). * Liu _et al._ [2014] H. Liu, K. A. Antwi, S. Chua, and D. Chi, Nanoscale 6, 624 (2014). * Shaw _et al._ [2014] J. C. Shaw, H. Zhou, Y. Chen, N. O. Weiss, Y. Liu, Y. Huang, and X. Duan, Nano Res. 7, 511 (2014). * Huang _et al._ [2014] C. Huang, S. Wu, A. M. Sanchez, J. J. Peters, R. Beanland, J. S. Ross, P. Rivera, W. Yao, D. H. Cobden, and X. Xu, Nat. Mater. 13, 1096 (2014). * Bogaert _et al._ [2016] K. Bogaert, S. Liu, J. Chesin, D. Titow, S. Gradecak, and S. Garaj, Nano Lett. 16, 5129 (2016). * Wang _et al._ [2017] H. Wang, X. Huang, J. Lin, J. Cui, Y. Chen, C. Zhu, F. Liu, Q. Zeng, J. Zhou, P. Yu, _et al._ , Nat. Commun. 8, 1 (2017). * Novoselov _et al._ [2005] K. S. Novoselov, D. Jiang, F. Schedin, T. J. Booth, V. V. Khotkevich, S. V. Morozov, and A. K. Geim, Proc. Natl. Acad. Sci. U.S.A. 102, 10451 (2005). * Sheng _et al._ [2017] Y. Sheng, X. Wang, K. Fujisawa, S. Ying, A. L. Elias, Z. Lin, W. Xu, Y. Zhou, A. M. Korsunsky, H. Bhaskaran, _et al._ , ACS Appl. Mater. Interfaces 9, 15005 (2017). * Campbell [1979] W. Campbell, Analyst 104, 177 (1979). * Chen _et al._ [2004] Q. Chen, C. Thomas, and D. M. Knowles, Mater. Sci. Eng. A 374, 398 (2004). * Yao _et al._ [2016] J. Yao, Z. Zheng, and G. Yang, ACS Appl. Mater. Inter. 8, 12915 (2016). * Meiron _et al._ [2017] O. E. Meiron, V. Kuraganti, I. Hod, R. Bar-Ziv, and M. Bar-Sadan, Nanoscale 9, 13998 (2017). * Fonseca _et al._ [2018] J. J. Fonseca, M. K. Horton, K. Tom, J. Yao, W. Walukiewicz, and O. D. Dubon, Chem. Mater. 30, 4226 (2018). * Fox _et al._ [2020] J. J. Fox, S. Bachu, R. L. Cavalero, R. M. Lavelle, S. M. Oliver, S. Yee, P. M. Vora, N. Alem, and D. W. Snyder, J. Cryst. Growth 542, 125609 (2020). * Goldstein _et al._ [2017] J. I. Goldstein, D. E. Newbury, J. R. Michael, N. W. Ritchie, J. H. J. Scott, and D. C. 
Joy, _Scanning electron microscopy and X-ray microanalysis_ (Springer, 2017). * Reimer [1984] L. Reimer, _Scanning electron microscopy: physics of image formation and microanalysis_ (Springer, 1984). * Bell and Garratt-Reed [2003] D. Bell and A. Garratt-Reed, _Energy dispersive X-ray analysis in the electron microscope_ (Garland Science, 2003). * Kanaya and Okayama [1972] K. Kanaya and S. Okayama, J. Phys. D Appl. Phys. 5, 43 (1972). * Ritchie [2021] N. Ritchie, NIST DTSA-II software, available at: http://www.cstl.nist.gov/div837/837.02/epq/dtsa2/index.html (2021). * Bearden and Burr [1967] J. A. Bearden and A. Burr, Rev. Mod. Phys. 39, 125 (1967). * Powell [1976] C. J. Powell, Rev. Mod. Phys. 48, 33 (1976). * Man _et al._ [2016] M. K. Man, S. Deckoff-Jones, A. Winchester, G. Shi, G. Gupta, A. D. Mohite, S. Kar, E. Kioupakis, S. Talapatra, and K. M. Dani, Sci. Rep. 6, 1 (2016).
Covers in the Canonical Grothendieck Topology

C. Lester

We explore the canonical Grothendieck topology in some specific circumstances. First we use a description of the canonical topology to get a variant of Giraud's Theorem. Then we explore the canonical Grothendieck topology on the categories of sets and topological spaces; here we get a nice basis for the topology. Lastly, we look at the canonical Grothendieck topology on the category of $R$-modules.

§ INTRODUCTION

In SGA 4.2.2 Verdier defined the canonical Grothendieck topology as the largest Grothendieck topology where all representable presheaves are sheaves. This paper grew out of an attempt to obtain a precise description of the covers in this Grothendieck topology in the cases of some familiar categories; we investigate the question for sets, abelian groups, $R$-modules, topological spaces and compactly generated Hausdorff spaces. The category of sets is simple enough that we can give a complete answer, and in the two categories of topological spaces we give a fairly precise description. The question for abelian groups and $R$-modules seems to be very subtle, though, and we have only been able to obtain partial results. Along the way we prove that the canonical topology has a natural appearance in Giraud's Theorem, which is the source for some of our interest in it.

Sieves will be of particular importance in this paper and so we start with a reminder of their definition; we follow the notation and terminology used by Mac Lane and Moerdijk in [3]. For any object $X$ of a category $\EuScript{C}$, we call $S$ a sieve on $X$ if $S$ is a collection of morphisms, all of whose codomains are $X$, that is closed under precomposition, i.e. if $f\in S$ and $f\circ g$ makes sense, then $f\circ g\in S$. In particular, we can view a sieve $S$ on $X$ as a full subcategory of the overcategory $(\EuScript{C}\downarrow X)$.

By work from [2], the canonical Grothendieck topology can be characterized in terms of colimits. Specifically, the canonical Grothendieck topology can be described as the collection of all universal colim sieves where: For a category $\EuScript{C}$, an object $X$ of $\EuScript{C}$ and sieve $S$ on $X$, we call $S$ a colim sieve if $\colim_{S}{U}$ exists and the canonical map $\colim_{S}{U}\to X$ is an isomorphism. (Alternatively, $S$ is a colim sieve if $X$ is the universal cocone under the diagram $U\colon S\to \EuScript{C}$.) Moreover, we call $S$ a universal colim sieve if for all arrows $\alpha\colon Y\to X$ in $\EuScript{C}$, $\alpha^\ast S$ is a colim sieve on $Y$.

One use of this presentation is the following variant of Giraud's Theorem:

Proposition (Giraud corollary). If $\EuScript{E}$ is a `nice' category, then $\EuScript{E}$ is equivalent to the category of sheaves on $\EuScript{E}$ under the canonical topology.

The universal-colim-sieve presentation also affords us an explicit description of the canonical Grothendieck topology's covers on the category of topological spaces:

Proposition (Top ucs characterization). In the category of all topological spaces, $\{A_\alpha\to X\}_{\alpha\in\EuScript{A}}$ is part of a basis for the canonical topology if and only if $\alpha\colon\coprod_{\alpha\in\EuScript{A}} A_\alpha \to X$ is a universal quotient map (i.e. $\alpha$ and every pullback of $\alpha$ are quotient maps).
Additionally, a sieve $S$ on $X$ is a (universal) colim sieve if and only if there exists some collection $\{A_\alpha\to X\}_{\alpha\in\EuScript{A}} \subset S$ such that $\coprod_{\alpha\in\EuScript{A}} A_\alpha \to X$ is a (universal) quotient map. In particular, $T = \langle\{f\colon Y\to X\}\rangle$ is a (universal) colim sieve if and only if $f$ is a (universal) quotient map.

Proposition (CGWH ucs characterization). In the category of compactly generated weakly Hausdorff spaces, $\{A_\alpha\to X\}_{\alpha\in\EuScript{A}}$ is part of the basis for the canonical topology if and only if $\coprod_{\alpha\in\EuScript{A}} A_\alpha \to X$ is a quotient map. In particular, a sieve $S=\langle \{A_\alpha\to X\}_{\alpha\in \EuScript{A}}\rangle$ on $X$ is in the canonical topology if and only if $\coprod_{\alpha\in\EuScript{A}}A_\alpha\to X$ is a quotient map. Moreover, every colim sieve is universal.

Furthermore, this presentation allows us to more easily compute examples and non-examples in the category of topological spaces; for instance:

Example <ref> (direct limit top example for CGWH). Take $\mathbb{R}^n\to\mathbb{R}^{n+1}$ to be the closed inclusion map $(x_1,\dots,x_n)\mapsto(x_1,\dots,x_n,0)$ and use $\mathbb{R}^\infty$ to denote the direct limit $\colim_{n\in\mathbb{N}} \mathbb{R}^n$ with maps $\iota_n\colon \mathbb{R}^n\to\mathbb{R}^\infty$. Then the cover generated by $\{\iota_n\}_{n\in\mathbb{N}}$ is not in the canonical topology for the category of all topological spaces but is in the canonical topology for the category of compactly generated weakly Hausdorff spaces.

Additionally, we can use the universal-colim-sieve presentation to get a better idea of the canonical Grothendieck topology's covers on the category of $R$-modules. For example:

Proposition (good example pf). Let $S$ be the cover generated by $ \{ f_1\colon M_1\to R, f_2\colon M_2\to R\}$ such that $im(f_i) = a_i R$ for $i = 1,2$. Then $S$ is in the canonical topology on $R$-Mod if and only if $(a_1,a_2) = R$.

Proposition (hope?). Let $R$ be an infinite principal ideal domain. Let $S$ be the cover generated by $\{g_i\colon R^n \hookrightarrow R^n\}_{i=1}^M \cup \{f_i\colon R^{m_i}\hookrightarrow R^n\ |\ m_i<n \}_{i=1}^N$. If $S$ is a cover in the canonical topology on $R$-Mod, then $g_1\oplus\dots\oplus g_M\colon R^{nM}\to R^n$ is a surjection.

Proposition (sieves on Z). Let $S$ be the cover generated by $\{ \mathbb{Z}\xrightarrow{\times a_i} \mathbb{Z} \}_{i=1}^N$. Then $S$ is in the canonical topology on $\mathbb{Z}\textbf{-Mod}$ if and only if $\text{gcd}(a_1,\dots,a_N) = 1$.

Proposition (diag matrices). Let $S$ be the cover generated by $\{ \mathbb{Z}^n \xrightarrow{A_i} \mathbb{Z}^n \}_{i=1}^N$ where $A_i$ is a diagonal matrix with $\det(A_i)\neq 0$. Then there exists a map $\beta\colon \mathbb{Z}\to \mathbb{Z}^n$ such that $\beta^\ast S$ is not a colim sieve in $\mathbb{Z}\textbf{-Mod}$ if and only if $\text{gcd}(\det(A_1),\dots,\det(A_N))$ does not equal $ 1$.
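As a quick worked instance of the sieves-on-$\mathbb{Z}$ criterion (our own illustration, not taken from the results above):
$$S=\left\langle\, \mathbb{Z}\xrightarrow{\times 2}\mathbb{Z},\ \mathbb{Z}\xrightarrow{\times 3}\mathbb{Z}\,\right\rangle\colon\quad \text{gcd}(2,3)=1\ \text{(indeed } 1=3-2\in 2\mathbb{Z}+3\mathbb{Z}\text{)},\ \text{so } S \text{ is a cover;}$$
$$T=\left\langle\, \mathbb{Z}\xrightarrow{\times 2}\mathbb{Z},\ \mathbb{Z}\xrightarrow{\times 4}\mathbb{Z}\,\right\rangle\colon\quad \text{gcd}(2,4)=2\neq 1,\ \text{so } T \text{ is not: every map in } T \text{ has image inside } 2\mathbb{Z}.$$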
To start this paper we recall some results from [2] in Section <ref>. Then in Section <ref> we review Giraud's theorem and prove our Corollary to Giraud's Theorem, i.e. we prove that every category $\EuScript{C}$, which satisfies some hypotheses, is equivalent to the category of sheaves on $\EuScript{C}$ with the canonical topology. In Section <ref> we briefly discuss the canonical topology on the category of sets before exploring the canonical topology on the category of topological spaces. Specifically, we look at the category of all topological spaces and the category of compactly generated weakly Hausdorff spaces. We are able to refine our description and obtain a basis for the canonical topology; this result reduces the question “Is this in the canonical topology?” to the question “Is a specific map a universal quotient map?” Since universal quotient maps have been studied in-depth (for example by Day and Kelly in [1]), this reduction becomes our most computationally agreeable description of the canonical topology and hence we use it to find some specific examples and non-examples. Lastly, in Section <ref> we investigate the canonical topology on the category of $R$-modules and the category of abelian groups, where we work towards refining our description by making some reductions and obtaining some exclusionary results. While these reductions and results lead us to some specific examples and non-examples, a basis for the canonical topology remains elusive.

General Notation. For any subcategory $S$ of $(\EuScript{C}\downarrow X)$, we will use $U$ to represent the forgetful functor $S\to\EuScript{C}$. For example, for a sieve $S$ on $X$, $U(f)=\text{domain}\ f$. We say that a sieve $S$ on $X$ is generated by the morphisms $ \{f_\alpha\colon A_\alpha\to X\}_{\alpha\in\mathcal{A}}$ and write $S = \langle \{f_\alpha\colon A_\alpha\to X\}_{\alpha\in\mathcal{A}}\rangle$ if each $f\in S$ factors through one of the $f_\alpha$, i.e. if $f\in S$ then there exists an $\alpha\in\mathcal{A}$ and morphism $g$ such that $f = f_\alpha\circ g$.

This work is part of the author's doctoral dissertation at the University of Oregon. The author is extremely grateful to their advisor, Dan Dugger, for all of his guidance, wisdom and patience.

§ BACKGROUND

This section contains a review of the results from [2] that will be used in this paper.

Suppose $\EuScript{C}$ is a category with all pullbacks. Let $S = \langle \{g_\alpha\colon A_\alpha\to X \}_{\alpha\in\mathfrak{A}} \rangle$ be a sieve on object $X$ of $\EuScript{C}$ and $f\colon Y\to X$ be a morphism in $\EuScript{C}$. Then $f^\ast S = \langle \{A_\alpha\times_X Y \overset{\pi_2}{\longrightarrow} Y\}_{\alpha\in\mathfrak{A}} \rangle$.

Let $\EuScript{C}$ be a cocomplete category. For a sieve in $\EuScript{C}$ on $X$ of the form $S = \langle \{f_\alpha\colon A_\alpha\to X \}_{\alpha\in\mathfrak{A}} \rangle$ such that $A_i\times_X A_j$ exists for all $i,j\in\mathfrak{A}$, $$\colim_{S}{U} \cong \Coeq\left( \coprod_{(i,j)\in\mathfrak{A}\times\mathfrak{A}} A_i\times_X A_j\ \substack{\longrightarrow\\ \longrightarrow}\ \coprod_{k\in\mathfrak{A}} A_k \right)$$ where the two parallel maps are induced from the projection morphisms $\pi_1\colon A_i\times_X A_j \to A_i$ and $\pi_2\colon A_i\times_X A_j \to A_j$.

Let $\EuScript{C}$ be a category. Then $S$ is a colim sieve on $X$ if and only if $f^\ast S$ is a colim sieve for any isomorphism $f\colon Y\to X$.

Recall that a morphism $f\colon Y\to X$ is called an effective epimorphism provided $Y\times_X Y$ exists, $f$ is an epimorphism and $c\colon \Coeq\left(Y\times_X Y\ \substack{\longrightarrow\\ \longrightarrow}\ Y\right)\to X$ is an isomorphism. Note that this third condition actually implies the second because $f=c\circ g$ where $g\colon Y\to \Coeq\left(Y\times_X Y\ \substack{\longrightarrow\\ \longrightarrow}\ Y\right)$ is the canonical map. Indeed, $g$ is an epimorphism by an easy exercise and $c$ is an epimorphism since it is an isomorphism.
Indeed, $g$ is an epimorphism by an easy exercise and $c$ is an epimorphism since it is an isomorphism. Additionally, $f\colon Y\to X$ is called a universal effective epimorphism if $f$ is an effective epimorphism with the additional property that for every pullback diagram $$\begin{array}{ccc} W & \longrightarrow & Y \\ {\scriptstyle\pi_g}\big\downarrow & & \big\downarrow{\scriptstyle f} \\ Z & \underset{g}{\longrightarrow} & X \end{array}$$ $\pi_g$ is also an effective epimorphism. Let $\EuScript{C}$ be a cocomplete category with pullbacks. If $$S = \langle \{f\colon Y\to X\}\rangle$$ is a sieve on $X$, then $S$ is a colim sieve if and only if $f$ is an effective epimorphism. Moreover, $S$ is a universal colim sieve if and only if $f$ is a universal effective epimorphism. Let $\EuScript{C}$ be any category. The collection of all universal colim sieves on $\EuScript{C}$ forms a Grothendieck topology. For any (locally small) category $\EuScript{C}$, the collection of all universal colim sieves on $\EuScript{C}$ is the canonical topology. Let $\EuScript{C}$ be a cocomplete category with pullbacks. Further assume that coproducts and pullbacks commute in $\EuScript{C}$. Then a sieve of the form $S = \langle\{f_\alpha\colon A_\alpha\to X\}_{\alpha\in\EuScript{A}}\rangle$ is a (universal) colim sieve if and only if the sieve $T = \langle\{\coprod f_\alpha\colon \coprod_{\alpha\in\EuScript{A}} A_\alpha\to X\}\rangle$ is a (universal) colim sieve. Let $\EuScript{C}$ be a cocomplete category with pullbacks whose coproducts and pullbacks commute. A sieve $S$ on $X$ is a (universal) colim sieve of $\EuScript{C}$ if and only if there exists some $\{A_\alpha\to X\}_{\alpha\in\EuScript{A}} \subset S$ where $\ds\coprod_{\alpha\in\EuScript{A}} A_\alpha \to X$ is a (universal) effective epimorphism. Let $\EuScript{C}$ be a cocomplete category with stable and disjoint coproducts and all pullbacks. For each $X$ in $\EuScript{C}$, define $K(X)$ by $$\{A_\alpha\to X\}_{\alpha\in\EuScript{A}}\in K(X)\iff \coprod_{\alpha\in\EuScript{A}} A_\alpha \to X \text{ is a universal effective epimorphism.}$$ Then $K$ is a Grothendieck basis and generates the canonical topology on $\EuScript{C}$. § GIRAUD'S THEOREM AND THE CANONICAL TOPOLOGY Giraud's Theorem shows that categories with certain nice properties can be written as sheaves on a Grothendieck site. We show that in fact, modulo universe considerations, one may take this site to be the original category with the canonical topology. We will specifically use the version of Giraud's Theorem stated in [3]. In fact, the appendix of [3] has a thorough discussion of Giraud's theorem and all of the terminology used in it; we will include the basics of this discussion for completeness. We will begin by recalling the definitions used in Mac Lane and Moerdijk's version of Giraud's Theorem. Throughout this section, let $\EuScript{E}$ be a category with small hom-sets and all finite limits. Disjoint and Stable Coproducts Let $E_\alpha$ be a family of objects in $\EuScript{E}$ and $E = \amalg_\alpha E_\alpha$. The coproduct $E$ is called disjoint if every coproduct inclusion $i_\alpha\colon E_\alpha\to E$ is a monomorphism and, whenever $\alpha\neq\beta$, $E_\alpha\times_E E_\beta$ is the initial object in $\EuScript{E}$. The coproduct $E$ is called stable (under pullback) if for every $f\colon D\to E$ in $\EuScript{E}$, the morphisms $j_\alpha$ obtained from the pullback diagrams $$\begin{array}{ccc} D\times_E E_\alpha & \longrightarrow & E_\alpha \\ {\scriptstyle j_\alpha}\big\downarrow & & \big\downarrow{\scriptstyle i_\alpha} \\ D & \underset{f}{\longrightarrow} & E \end{array}$$ induce an isomorphism $\coprod_\alpha(D\times_E E_\alpha)\cong D$. If every coproduct in $\EuScript{E}$ is stable, then the pullback operation $-\times_E D$ “commutes” with coproducts, i.e.
$(\coprod_\alpha B_\alpha)\times_E D \cong \coprod_\alpha (B_\alpha\times_E D)$. Coequalizer Morphisms and Kernel Pairs We call a morphism $f\colon Y\to Z$ in $\EuScript{E}$ a coequalizer if there exists some object $X$ and morphisms $\partial_0,\partial_1\colon X\to Y$ such that $$X\, \substack{\overset{\partial_0}{\longrightarrow} \\ \underset{\partial_1}{\longrightarrow}}\, Y \overset{f}{\longrightarrow} Z$$ is a coequalizer diagram. We remark that every coequalizing morphism is an epimorphism but the converse of this statement is not guaranteed. The pair of morphisms $\partial_0,\partial_1\colon X\to Y$ are called a kernel pair for $f\colon Y\to Z$ if the following is a pullback diagram $$\begin{array}{ccc} X & \overset{\partial_1}{\longrightarrow} & Y \\ {\scriptstyle\partial_0}\big\downarrow & & \big\downarrow{\scriptstyle f} \\ Y & \underset{f}{\longrightarrow} & Z \end{array}$$ Equivalence Relations and Quotients An equivalence relation on the object $E$ of $\EuScript{E}$ is a subobject $R$ of $E\times E$, represented by the monomorphism $(\partial_0,\partial_1)\colon R\to E\times E$, satisfying the following axioms * (reflexive) the diagonal $\Delta\colon E\to E\times E$ factors through $(\partial_0,\partial_1)$, * (symmetric) the map $(\partial_1,\partial_0)\colon R\to E\times E$ factors through $(\partial_0,\partial_1)$, * (transitivity) if $R\times_E R$ is the pullback $$\begin{array}{ccc} R\times_E R & \overset{\pi_1}{\longrightarrow} & R \\ {\scriptstyle\pi_0}\big\downarrow & & \big\downarrow{\scriptstyle\partial_0} \\ R & \underset{\partial_1}{\longrightarrow} & E \end{array}$$ then $(\partial_1\pi_1,\partial_0\pi_0)\colon R\times_E R \to E\times E$ factors through $R$. If $E$ is an object of $\EuScript{E}$ with equivalence relation $R$, then the quotient is denoted $E/R$ and is defined to be $$\Coeq\left( R\, \substack{\overset{\partial_0}{\longrightarrow} \\ \underset{\partial_1}{\longrightarrow}}\, E \right)$$ provided that this coequalizer exists. Stably Exact Forks A diagram is called a fork if it is of the form \begin{equation}\label{fork} X\, \substack{\overset{\partial_0}{\longrightarrow} \\ \underset{\partial_1}{\longrightarrow}}\, Y \overset{q}{\longrightarrow} Z. \end{equation} The fork (<ref>) is called exact if $\partial_0$ and $\partial_1$ are the kernel pair for $q$, and $q$ is the coequalizer of $\partial_0$ and $\partial_1$. The fork (<ref>) is called stably exact if the pullback of (<ref>) along any morphism in $\EuScript{E}$ yields an exact fork, i.e. if for any $Z' \to Z$ in $\EuScript{E}$, $$X\times_Z Z'\, \substack{\longrightarrow \\ \longrightarrow}\, Y\times_Z Z' \overset{q\times 1}{\longrightarrow} Z\times_Z Z'$$ is an exact fork. Generating Sets A set of objects $\{A_i\,|\,i\in I\}$ of $\EuScript{E}$ is said to generate $\EuScript{E}$ if for every object $E$ of $\EuScript{E}$, $W = \{A_i\to E\,|\, i\in I\}$ is an epimorphic family (in the sense that for any two parallel arrows $u,v\colon E\to E'$, if every $w\in W$ yields the identity $uw=vw$, then $u=v$). Giraud's Theorem A category $\EuScript{E}$ with small hom-sets and all finite limits is a Grothendieck topos if and only if it has the following properties (which we will refer to as Giraud's axioms): (i) $\EuScript{E}$ has small coproducts which are disjoint and stable under pullback, (ii) every epimorphism in $\EuScript{E}$ is a coequalizer, (iii) every equivalence relation $R\ \substack{\to \\ \to}\ E$ in $\EuScript{E}$ is a kernel pair and has a quotient, (iv) every exact fork $R\ \substack{\to\\\to}\ E \to Q$ is stably exact, (v) there is a small set of objects of $\EuScript{E}$ which generate $\EuScript{E}$. Taken together, Giraud's axioms (ii) and (iv) imply that for each epimorphism $B\xrightarrow{f} A$, the fork $B\times_A B\ \substack{\to \\ \to}\ B\to A$ is stably exact.
The exactness implies $f$ is an effective epimorphism and the stability implies $f$ is a universal effective epimorphism. We use $Sh(\EuScript{E},J)$ to represent the category of sheaves on the category $\EuScript{E}$ under the topology $J$. Suppose the category $\EuScript{E}$ has small hom-sets and all finite limits, satisfies Giraud's axioms, and has $\EuScript{C}$ as its small set of generators (axiom v). In [3] Mac Lane and Moerdijk specifically prove $\EuScript{E}\cong Sh(\EuScript{C},J)$ where $J$ is the Grothendieck topology on $\EuScript{C}$ defined by: $S\in J(X)$ if and only if $\displaystyle \coprod_{(g\colon D\to X)\in S} D\to X$ is an epimorphism in $\EuScript{E}$. (In particular, Mac Lane and Moerdijk prove that $J$ is a Grothendieck topology.) Suppose the category $\EuScript{E}$ has small hom-sets and all finite limits, satisfies Giraud's axioms, and has $\EuScript{C}$ as its small set of generators (axiom v). Then $\EuScript{E}$ is equivalent to $Sh(\EuScript{C},C)$ where $C$ is the canonical topology on $\EuScript{C}$. Let $J$ be the topology defined above. Additionally, the above discussion implies that it suffices to show that $J$ is the canonical topology. By Theorem <ref>, we will instead show that every universal colim sieve is in $J$ and that every sieve in $J$ is a universal colim sieve. By Remark <ref>, coproducts and pullbacks commute and hence for any collection of morphisms $\{A_i\to X\}_{i\in I}$ in $\EuScript{E}$, the diagrams $$\coprod_{I^2} (A_i\times_X A_j)\ \substack{\longrightarrow\\ \longrightarrow}\ \coprod_{I} A_k \qquad\text{and}\qquad \left(\coprod_I A_i \right)\times_X\left(\coprod_I A_j \right)\ \substack{\longrightarrow\\ \longrightarrow}\ \coprod_{I} A_k$$ are isomorphic; in both diagrams, the two parallel maps are the obvious ones induced from the pullback projections. Hence $$\Coeq\left( \coprod_{I^2} (A_i\times_X A_j)\ \substack{\longrightarrow\\ \longrightarrow}\ \coprod_{I} A_k \right) \cong \Coeq\left( \left(\coprod_I A_i \right)\times_X\left(\coprod_I A_j \right)\ \substack{\longrightarrow\\ \longrightarrow}\ \coprod_{I} A_k \right).$$ But by Proposition <ref> (which is usable since $\EuScript{E}$ is cocomplete), $$\Coeq\left( \coprod_{I^2} (A_i\times_X A_j)\ \substack{\longrightarrow\\ \longrightarrow}\ \coprod_{I} A_k \right)\cong \colim_{S}{U} \quad \text{where } S = \left<\{A_i\to X\}_{i\in I}\right>$$ and $$\Coeq\left( \left(\coprod_I A_i \right)\times_X\left(\coprod_I A_j \right)\ \substack{\longrightarrow\\ \longrightarrow}\ \coprod_{I} A_k \right)\cong \colim_{T_S}{U} \quad \text{where } T_S = \left< \left\{ \left(\coprod_I A_i\right)\to X \right\}\right>.$$ Therefore \begin{equation}\label{fact} \begin{split} \colim_{S}{U}&\cong\colim_{T_S}{U} \\ \text{where } S = \left<\{A_i\to X\}_{i\in I}\right> \quad &\text{and} \quad T_S = \left< \left\{ \left(\coprod_I A_i\right)\to X \right\}\right> \\ \text{for any generating set } & \{A_i\to X\}_{i\in I} \text{ of $S$.} \end{split} \end{equation} Suppose $S$ is a universal colim sieve. Since $S$ has some generating set (e.g. $S$ itself), then by the definition of colim sieve and (<ref>), $$X\cong \colim_{S}{U}\cong \colim_{T_S}{U}.$$ This implies that $T_S$ is a colim sieve. Hence $\left(\coprod_{(g\colon D\to X)\in S} D\right) \to X$ is an effective epimorphism by Corollary <ref> and so $S\in J(X)$. For the converse, suppose that $S\in J(X)$. Thus $p_s\colon \left(\coprod_{(g\colon D\to X)\in S} D\right) \to X$ is an epimorphism, which by Discussion <ref> is a universal effective epimorphism.
Hence by Corollary <ref>, $p_s$ generates a universal colim sieve called $T_S$. Then by the definition of colim sieve and (<ref>), $$X\cong \colim_{T_S}{U} \cong \colim_{S}{U}.$$ Therefore $S$ is a colim sieve. Similar to the last paragraph, we can use (<ref>) to show that $f^\ast S$ is a colim sieve for any morphism $f$ in $\EuScript{E}$ if we know that $T_{f^\ast S}$ is a colim sieve. So to finish the proof we will use the fact that $T_S$ is a universal colim sieve to show that $T_{f^\ast S}$ is a colim sieve. Let $f\colon Y\to X$ be any morphism in $\EuScript{E}$. Then by using $S$ as a generating collection for itself and Lemma <ref>, $f^\ast S = \left< \{ A\times_X Y\to Y\ |\ A\to X\in S \} \right>$. Similarly, using Lemma <ref>, $f^\ast T_S = \left<\left\{ \left(\coprod_{(A\to X\in S)} A\right) \times_X Y\to Y\right\}\right>$. Then by Remark <ref> $$\displaystyle \coprod_{(A\to X)\in S} (A\times_X Y) \cong \left(\coprod_{(A\to X) \in S} A\right) \times_X Y$$ over $Y$. Hence $$\colim_{T_{f^\ast S}}{U}\cong \colim_{f^\ast T_S}{U} \cong Y$$ where the first isomorphism is due to the previous few sentences and the second isomorphism is due to the fact that $T_S$ is a universal colim sieve. Thus $T_{f^\ast S}$ is a colim sieve. § UNIVERSAL COLIM SIEVES IN THE CATEGORIES OF SETS AND TOPOLOGICAL SPACES In this section we examine the canonical topology on the categories of sets, all topological spaces and compactly generated weakly Hausdorff spaces. We will use Sets to denote the category of sets. We will use Top to denote the category of all topological spaces, CG to denote the category of compactly generated spaces, and CGWH to denote the category of compactly generated weakly Hausdorff spaces. When we want to talk about the category of topological spaces without differentiating between Top and CGWH, then we will use Spaces; all results about Spaces will hold for both Top and CGWH. We will begin with a few reminders about the category of compactly generated weakly Hausdorff spaces based on the references [6] and [4]. Specifically, there are functors $k\colon\textbf{Top}\to\textbf{CG}$ and $h\colon\textbf{CG}\to\mathbf{CGWH}$ such that * For a topological space $X$ with topology $\tau$, a subset $Y$ of $X$ is called $k$-closed if $u^{-1}(Y)$ is closed in $K$ for every continuous map $u\colon K\to X$ and compact Hausdorff space $K$. The collection of all $k$-closed subsets, called $k(\tau)$, is a topology. * The functor $k$ takes $X$ with topology $\tau$ to the set $X$ with topology $k(\tau)$. * $k$ is right adjoint to the inclusion functor $\iota\colon\textbf{CG}\to\textbf{Top}$. * $h(X)$ is $X/E$ where $E$ is the smallest equivalence relation on $X$ closed in $X\times X$. * $h$ is left adjoint to the inclusion functor $\iota'\colon\textbf{CGWH}\to\textbf{CG}$. * A limit in $\textbf{CGWH}$ is $k$ applied to the limit taken in $\textbf{Top}$, i.e. for a diagram $F\colon I\to \textbf{CGWH}$, the limit of $F$ is $k(\lim_{I} \iota \iota' F)$. * A colimit in $\textbf{CGWH}$ is $h$ applied to the colimit taken in $\textbf{Top}$, i.e. for a diagram $F\colon I\to \textbf{CGWH}$, the colimit of $F$ is $h(\colim_{I} \iota \iota' F)$. Let $S$ be a sieve on $X$ in either Sets or Top. Let $C$ be $\ds\colim_{S}{U}$. Then the natural map $\varphi\colon C \to X$ is an injection. Suppose $\tilde{y},\tilde{z}\in C$ and $\varphi(\tilde{y}) = x = \varphi(\tilde{z})$. We can pick a $(Y\to X)\in S$ and a $y\in Y$ that represents $\tilde{y}$, i.e.
where $y\mapsto \tilde{y}$ under the natural map $Y\to C$; similarly, we can pick a $(Z\to X)\in S$ and a $z\in Z$ representing $\tilde{z}$. Then the inclusion $i\colon \{ x \} \hookrightarrow X$ factors through both $Y$ and $Z$ by $x\mapsto y$ and $x\mapsto z$ respectively. Thus $i\in S$, and both $\tilde{y}$ and $\tilde{z}$ are the image in $C$ of the single point of $U(i)$. Hence $\tilde{y}=\tilde{z}$ in $C$. Let $S$ be a sieve on $X$ in CGWH. Then the colimit over $S$ taken in Top is in CGWH, i.e. $h(\colim_{S} \iota \iota' U) = \colim_{S} \iota \iota' U$. Moreover, the natural map $\varphi\colon \colim_{S}{U} \to X$ is an injection. We will make use of the following Proposition from [6]: if $Z$ is in CG, then $Z$ is weakly Hausdorff if and only if the diagonal subspace $\Delta_Z$ is closed in $Z\times Z$. Additionally, we remark that colimits of compactly generated spaces computed in Top are automatically compactly generated. Let $C = \colim_{S} \iota\iota' U$, i.e. $C$ is the colimit over $S$ taken in Top. By Proposition <ref>, the natural map $\varphi\colon C\to X$ is an injection. We remark that it is not the statement of Proposition <ref> that gives this observation, since $S$ is not a sieve in Top; instead, the proof of Proposition <ref> holds in this situation since $\{x\}$ is in CGWH. Since $X$ is CGWH, then $\Delta_X$ is closed in $X\times X$. Since $\varphi$ is a continuous injection, then $(\varphi\times\varphi)^{-1}(\Delta_X) = \Delta_C$ is closed in $C\times C$. §.§ Basis and Presentation The categories Sets, Top and CGWH all satisfy the hypotheses of Theorems <ref> and <ref>. Thus we have the following corollaries of Theorems <ref> and <ref> based on what the universal effective epimorphisms are in each category. In Sets, $\{A_\alpha\to X\}_{\alpha\in\EuScript{A}}$ is part of a basis for the canonical topology if and only if $\coprod_{\alpha\in\EuScript{A}} A_\alpha \to X$ is a surjection. In particular, a sieve of the form $S=\langle \{A_\alpha\to X\}_{\alpha\in \EuScript{A}}\rangle$ on $X$ is in the canonical topology if and only if $\ds\coprod_{\alpha\in\EuScript{A}}A_\alpha\to X$ is a surjection. Moreover, every colim sieve is universal. It is easy to see in Sets that the effective epimorphisms are precisely the surjections. Since pulling back a surjection yields a surjection, then the universal effective epimorphisms in the category of sets are also the surjections. Lastly, this implies, by Theorem <ref>, that every colim sieve is universal. Since Sets is a Grothendieck topos, we can compare Proposition <ref> to the proof of Proposition <ref>. Specifically, Proposition <ref> allows us to determine if a sieve is in the canonical topology by looking only at the sieve's generating set whereas the proof of Proposition <ref> along with the Grothendieck topology $J$ require us to look at the entire sieve. Recall that a quotient map $f$ is called universal if every pullback of $f$ along a map yields a quotient map. In Top, $\{A_\alpha\to X\}_{\alpha\in\EuScript{A}}$ is part of a basis for the canonical topology if and only if $\coprod_{\alpha\in\EuScript{A}} A_\alpha \to X$ is a universal quotient map. Additionally, a sieve $S$ on $X$ is a (universal) colim sieve if and only if there exists some collection $\{A_\alpha\to X\}_{\alpha\in\EuScript{A}} \subset S$ such that $\ds\coprod_{\alpha\in\EuScript{A}} A_\alpha \to X$ is a (universal) quotient map. In particular, $T = \langle\{f\colon Y\to X\}\rangle$ is a (universal) colim sieve if and only if $f$ is a (universal) quotient map.
It is a well-known fact that in Top the effective epimorphisms are precisely the quotient maps. In CGWH, $\{A_\alpha\to X\}_{\alpha\in\EuScript{A}}$ is part of the basis for the canonical topology if and only if $\coprod_{\alpha\in\EuScript{A}} A_\alpha \to X$ is a quotient map. In particular, a sieve $S=\langle \{A_\alpha\to X\}_{\alpha\in \EuScript{A}}\rangle$ on $X$ is in the canonical topology if and only if $\ds\coprod_{\alpha\in\EuScript{A}}A_\alpha\to X$ is a quotient map. Moreover, every colim sieve is universal. This is a consequence of Corollary <ref>, Corollary <ref>, the fact that the universal effective epimorphisms in Top are precisely the universal quotient maps, and <cit.>, which states that every quotient map in CGWH is universal. §.§ Examples in the category of Spaces In this section we will use our basis to talk about some specific examples, including a special circumstance (when a sieve is generated by one function) and how the canonical topology on the categories CGWH and Top can differ in this situation. For a category $D$, we call $\mathfrak{A}\subset \mathrm{ob}(D)$ a weakly terminal set of $D$ if for every object $X$ in $D$, there exists some $A\in\mathfrak{A}$ and morphism $X\to A$ in $D$. Additionally, if $F\colon D\to C$ is a functor and $D$ has a weakly terminal set $\mathfrak{A}$, then we call $\{F(A)\}_{A\in\mathfrak{A}}$ a weakly terminal set of $F$. For example, if $S = \langle \{A_\alpha\to X \}_{\alpha\in\mathfrak{A}} \rangle$ is a sieve on $X$ then $\{A_\alpha\}_{\alpha\in\mathfrak{A}}$ is a weakly terminal set of $U$. Or as another example, $\{Y\}$ is a weakly terminal set of the diagram $Y \times_X Y\ \substack{\longrightarrow\\ \longrightarrow}\ Y$. One easy consequence of this in Top is a reduction of the colimit topology: $V$ is open in the colimit if and only if the preimage of $V$ is open in each member of the weakly terminal set. Let $F\colon D \to \textbf{Spaces}$ be a functor where $D$ has a weakly terminal set $\mathfrak{A}$. If $f_A\colon F(A)\to X$ is an open map for all $A\in\mathfrak{A}$, then the induced map $\varphi\colon \colim_{D} F\to X$ is an open map. Similarly, if the $f_A$ are all closed and $\mathfrak{A}$ is a finite set, then $\varphi$ is a closed map. Let $C= \colim F$ and $i_A\colon F(A)\to C$ be the natural maps. Both results follow from the easy set equality below for $B\subset C$ $$\varphi(B) = \bigcup_{A\in\mathfrak{A}} f_A(i_A^{-1}(B))$$ since $i_A^{-1}$, $f_A$ and unions respect open/closed sets in their respective scenarios. Let $S=\langle \{f_\alpha\colon A_\alpha\to X\}_{\alpha \in \EuScript{A}} \rangle$ be a sieve on $X$ in Spaces with the induced map $\eta\colon \ds\coprod_{\alpha\in\EuScript{A}} A_\alpha \to X$ a surjection. If all of the $f_\alpha$ are open maps or if $\EuScript{A}$ is a finite collection and all of the $f_\alpha$ are closed maps, then $S$ is a colim sieve. Let $\varphi\colon \ds\colim_{S}{U}\to X$ be the natural map. By Proposition <ref>, Corollary <ref>, and the surjectivity of $\eta$, $\varphi$ is a continuous bijection. Then Proposition <ref> implies that $\varphi$ is open or closed, depending on the case, and hence an isomorphism. This corollary leads us to some nice examples of sieves we would hope are in the canonical topology and actually are! Let $X$ be any space and let $\{U_i\}_{i\in I}$ be an open cover of $X$. Then the inclusion maps $U_i\hookrightarrow X$ generate a universal colim sieve, call it $S$. Indeed, by Corollary <ref>, $S$ is a colim sieve.
Universality is obvious, as the preimage of an open cover is an open cover. Let $X$ be any space and let $K_1, \dots, K_n$ be a closed cover of $X$. For the exact same reasons as the previous example, the inclusions $K_i\hookrightarrow X$ generate a sieve in the canonical topology. Before we give our next example, we rephrase <cit.>, which completely characterizes universal quotient maps in Top: Let $f\colon Y\to X$ be a quotient map. Then $f$ is a universal quotient map if and only if for every $x\in X$ and cover $\{G_\alpha\}_{\alpha\in\Lambda}$ of $f^{-1}(x)$ by opens in $Y$, there is a finite set $\{\alpha_1, \dots, \alpha_n\}\subset\Lambda$ such that $fG_{\alpha_1}\cup\dots\cup fG_{\alpha_n}$ is a neighborhood of $x$. Consider the diagram $B_1\to B_2\to B_3\to \dots$ and the direct limit $B=\colim B_n$ in Top. Let $S = \langle\{\iota_n\colon B_n\to B\,|\, n\in\mathbb{N}\}\rangle$ where $\iota_n$ are the natural maps into the colimit. By Proposition <ref>, $S$ is a colim sieve because $\coprod_{n\in\mathbb{N}}B_n\to B$ is obviously a quotient map. However, $S$ is not necessarily in the canonical topology; we can use Proposition <ref> on specific examples to see when $S$ is and is not in the canonical topology. For example, suppose there exists an $N$ such that $B_m = B_N$ whenever $m>N$. Then $B = B_N$. Hence it is easy to see by Day and Kelly's condition that the map $\coprod_{n\in\mathbb{N}}B_n\to B$ is a universal quotient map. Therefore, the $S$ from this example is in the canonical topology. As another example, take $B_n = \mathbb{R}^n$ and let $B_n\to B_{n+1}$ be the closed inclusion map $(x_1,\dots,x_n)\mapsto(x_1,\dots,x_n,0)$. Use $\mathbb{R}^\infty$ to denote the direct limit. We claim that $\coprod_{n\in\mathbb{N}}\mathbb{R}^n\to\mathbb{R}^\infty$ is not a universal quotient map. Indeed, consider Day and Kelly's condition; take $x = 0\in\mathbb{R}^\infty$ and the open cover in $\coprod_{n\in\mathbb{N}}\mathbb{R}^n$ consisting of open disks $D^n\subset\mathbb{R}^n$ centered at the origin with fixed radius $\epsilon>0$. Pick any finite collection $D^{n_1},\dots,D^{n_k}$ with $n_1<\dots<n_k$. Then for $i=1,\dots,k$ we can view $D^{n_i}$ as a subset of $\mathbb{R}^{n_k}$. Hence $\cup_{i=1}^k \iota_{n_i}(D^{n_i})$ is $\cup_{i=1}^k \iota_{n_k}(D^{n_i})\subset \iota_{n_k}(\mathbb{R}^{n_k})$. However, by dimensional considerations, we can see that for all $b\in\mathbb{N}$, $\iota_b(\mathbb{R}^b)$ contains no open sets of $\mathbb{R}^\infty$ and hence $\cup_{i=1}^k \iota_{n_i}(D^{n_i})$ cannot be a neighborhood of $x$ in $\mathbb{R}^\infty$. Remark: To see that $\iota_b(\mathbb{R}^b)$ contains no open sets, suppose to the contrary and call the open set $V$. Then $\iota_{b+1}^{-1}(V)$ is open in $\mathbb{R}^{b+1}$ and in particular, contains an open ball of dimension $b+1$. Thus dimensional considerations imply that $\iota_{b+1}^{-1}(V)$ is not contained in the image of $\mathbb{R}^b$ in $\mathbb{R}^{b+1}$. Since each $\iota_n$ is an inclusion map, then $\iota_{b+1}\iota_{b+1}^{-1}(V)\not\subset \iota_{b+1}(\mathbb{R}^b)$ and so $V$ is not contained in $\iota_b(\mathbb{R}^b)$, which is our contradiction. Therefore, the $S$ from this example is not in the canonical topology. Consider the diagram $B_1\to B_2\to B_3\to \dots$ and the direct limit $B=\colim B_n$ in CGWH. Let $S = \langle\{\iota_n\colon B_n\to B\,|\, n\in\mathbb{N}\}\rangle$ where $\iota_n$ are the natural maps into the colimit.
Then by Proposition <ref>, $S$ is a universal colim sieve because $\coprod_{n\in\mathbb{N}}B_n\to B$ is a quotient map. Now we shift our focus to sieves that can be generated by one map, called monogenic sieves. There are many reasons one could focus on these kinds of sieves, however by Proposition <ref>, if we fully comprehend when monogenic sieves are in the canonical topology, then we can (in some sense) completely understand the canonical topology. From this point onward, this section will be about monogenic sieves; in other words, by Proposition <ref> and Proposition <ref>, we will be focusing on (universal) quotient maps. Some examples will talk about the space $\mathbb{R}/\mathbb{Z}$. In this section, this space is not a group quotient but instead is the squashing of the subspace $\mathbb{Z}$ to a point. Consider the quotient maps $f\colon S^n\to \mathbb{R}P^n$ and $g\colon \mathbb{R}\to \mathbb{R}/\mathbb{Z}$. There is some subtlety, depending on the category we are in, in determining whether $f$ or $g$ generates a universal colim sieve. Throughout the rest of this section we will continue to explore this particular example. Monogenic Sieves in CGWH By Proposition <ref>, if $X$ and $Y$ are in CGWH and $h\colon Y\to X$, then $\langle\{h\}\rangle$ is in the canonical topology if and only if $h$ is a quotient map. Therefore, we immediately get the following examples: Topological manifolds are in CGWH. Thus $S^n$ and $\mathbb{R}P^n$ are in CGWH. Hence $\langle\{ f\colon S^n\to \mathbb{R}P^n \}\rangle$ is in the canonical topology. Every CW-complex is in CGWH. Thus $\mathbb{R}$ and $\mathbb{R}/\mathbb{Z}$ are in CGWH. Hence $\langle\{ g\colon \mathbb{R}\to\mathbb{R}/\mathbb{Z}\}\rangle$ is in the canonical topology. Monogenic Sieves in Top This section will heavily rely on Theorem <ref> (the Theorem by Day and Kelly characterizing universal quotient maps in Top) because a monogenic sieve generated by $f$ is in the canonical topology if and only if $f$ is a universal quotient map. Day and Kelly's theorem implies that every open quotient map is a universal quotient map. Therefore, the quotient map $f\colon S^n\to \mathbb{R}P^n$ is a universal quotient map and $\langle \{ f\colon S^n\to \mathbb{R}P^n \}\rangle$ is in the canonical topology. The quotient map $g\colon \mathbb{R}\to \mathbb{R}/\mathbb{Z}$ is not universal. We will demonstrate this in two ways, first by using Day and Kelly's theorem and second by directly showing $g$ is not universal. Note: many sets of $\mathbb{R}/\mathbb{Z}$ will be written as if they are in $\mathbb{R}$ for ease of presentation. (i) We will look at Day and Kelly's condition for $\mathbb{Z}\in \mathbb{R}/\mathbb{Z}$ with the open cover (in $\mathbb{R}$) $\{G_i \coloneqq (i-m,i+m)\}_{i\in\mathbb{Z}}$ for a fixed $m\in\left(0,\frac{1}{2}\right)$. For any open set $U$ of $\mathbb{R}/\mathbb{Z}$ containing $\mathbb{Z}$, the quotient topology tells us that $g^{-1}(U)$ is an open neighborhood of $\mathbb{Z}\subset\mathbb{R}$. But for any $n$, $g^{-1}(\bigcup_{k=1}^n gG_{i_k}) = \mathbb{Z} \cup \left(\bigcup_{k=1}^n (i_k-m, i_k+m)\right)$ is not a neighborhood of $\mathbb{Z}\subset\mathbb{R}$. So there cannot be any open set of $\mathbb{R}/\mathbb{Z}$ containing $\mathbb{Z}$ that is contained in $\bigcup_{k=1}^n gG_{i_k}$ for any finite collection of the cover. (ii) To directly show that $g$ is not universal we need to come up with a space and map to $\mathbb{R}/\mathbb{Z}$ where $g$ pulled back along this map is not a quotient map.
Our candidate is the following: Let $t(\mathbb{R}/\mathbb{Z})$ be the set $\mathbb{R}/\mathbb{Z}$ with the topology where $U$ (written as if it is in $\mathbb{R}$) is said to be open if (a) $\mathbb{Z}\not\subset U$ or (b) $U$ contains $\mathbb{Z}$ and is a neighborhood (in the typical topology) of $(\mathbb{Z}-\{\text{finitely many or no points}\})$. Remark: this topology was used in Day and Kelly's paper (in the proof of their theorem); however, they defined the topology using a filter and we have merely rephrased it for convenience. Define $\kappa\colon t(\mathbb{R}/\mathbb{Z}) \to \mathbb{R}/\mathbb{Z}$ by the set identity map; this is a continuous map. As a set, the pullback of $\text{domain}(g)$ along $\kappa$ is $\mathbb{R}$ but since it now has the limit topology, we denote the pullback as $t(\mathbb{R})$; in particular, $t(\mathbb{R})$ is $\mathbb{R}$ with the discrete topology. Denote the projection maps as $g'\colon t(\mathbb{R})\to t(\mathbb{R}/\mathbb{Z})$ and $\kappa'\colon t(\mathbb{R}) \to \mathbb{R}$. We claim that $g'$ is not a quotient map, i.e. there is some non-open set $B$ in $t(\mathbb{R}/\mathbb{Z})$ with $(g')^{-1}(B)$ open in $t(\mathbb{R})$. Since every $(g')^{-1}(B)$ is open in $t(\mathbb{R})$, then we merely need to find a $B$ that is not open in $t(\mathbb{R}/\mathbb{Z})$; $B = \{\mathbb{Z}\}$ obviously works. The above example shows us that quotient maps of the form $X\to X/A$ may not generate universal colim sieves. So let's understand these special quotient maps a little better. Specifically, using Day and Kelly's theorem, we can completely state what kinds of subspaces $A$ yield universal quotient maps $X\to X/A$: The quotient map $\pi\colon X\to X/A$ is universal if and only if both of the following properties hold: * If $A$ is not open, then for every open cover $\{G_\alpha\}_{\alpha \in \Lambda}$ of $(\partial A)\cap A$ in $X$ there is a finite collection $\{\alpha_1, \dots, \alpha_n\} \subset \Lambda$ with $A\cup G_{\alpha_1}\cup \dots\cup G_{\alpha_n}$ open in $X$. * If $A$ is not closed, then for every open $U$ in $X$ such that $U\cap (\overline{A}-A)\neq \emptyset$, $U\cup A$ is open in $X$. We will be using Theorem <ref> in two ways: first by finding the necessary conditions for $\pi$ to be a universal quotient map (i.e. proving the forward direction) and second by checking the sufficient conditions in the three cases (i) $x = A$, (ii) $x\in X-\overline{A}$, and (iii) $x\in \overline{A}-A$ (i.e. proving the backward direction). First suppose that $\pi$ is a universal quotient map. To see that the first property is necessary, assume that $(\partial A)\cap A\neq \emptyset$, i.e. $A$ is not open, and we have an open cover $\{G_\alpha\}_{\alpha\in\Lambda}$ of $(\partial A)\cap A$. Then we can expand this cover to an open cover of $A$ by adding $Int(A)$ to $\{G_\alpha\}_{\alpha\in\Lambda}$. Now by assumption (using the point $A$ in $X/A$) there is a finite subcollection $G_{\alpha_1}, \dots, G_{\alpha_n}, Int(A)$ such that $\pi G_{\alpha_1}\cup\dots\cup \pi G_{\alpha_n}\cup \pi Int(A)$ is a neighborhood of $A$ in $X/A$. But $\pi Int(A)\subset \pi G_{\alpha}$ since $G_{\alpha}\cap A\neq \emptyset$ and so $Int(A)$ is not necessary in our finite subcollection. Thus $\pi G_{\alpha_1}\cup\dots\cup \pi G_{\alpha_n}$ is a neighborhood of $A$; let $U$ be an open subset of $\pi G_{\alpha_1}\cup\dots\cup \pi G_{\alpha_n}$ containing $A$.
Now by looking at the preimages of $U$ and $\bigcup_{i=1}^n \pi G_{\alpha_i}$ in $X$, we get that $$A\subset \pi^{-1}(U)\subset \pi^{-1}(\bigcup_{i=1}^n \pi G_{\alpha_i}) = G_{\alpha_1}\cup\dots\cup G_{\alpha_n}\cup A.$$ Since $\pi^{-1}(U)$ is open, then the above expression implies $A\subset Int(G_{\alpha_1}\cup\dots\cup G_{\alpha_n}\cup A)$. But since all of the $G_{\alpha}$ are open, then $G_{\alpha_1}\cup\dots\cup G_{\alpha_n}\cup A$ is open. Therefore, the first property is necessary. To see that the second property is necessary, assume that $A$ is not closed and $U$ is any open neighborhood of a fixed $x\in \overline{A}-A$ in $X$. Since $U$ is an open cover of $\pi^{-1}(\pi(x))=x$, then by Theorem <ref>, $\pi U$ is a neighborhood of $x$; let $V$ be an open subset of $\pi U$ that contains $x$. Then by looking at the preimages of $V$ and $\pi U$, we see (using that $U$ intersects $A$ nontrivially) that $$A\subset \pi^{-1}(V) \subset \pi^{-1}(\pi U) = U\cup A.$$ But since $\pi^{-1}(V)$ is open, then $A\subset Int(U\cup A)$, i.e. $U\cup A$ is open. Therefore, the second condition is necessary. Second, let's assume the two conditions hold. We will show $\pi$ is a universal quotient map by checking that the conditions of Theorem <ref> hold in all three locations in $X/A$ (i.e. for (i) $x = A$, (ii) $x\in X-\overline{A}$, and (iii) $x\in \overline{A}-A$). (i) For $A\in X/A$, take any open cover $\{G_\alpha\}_{\alpha\in\Lambda}$ of $A$ in $X$. If $A$ is open in $X$, then $\{A\}$ is open in $X/A$ and hence every $\pi G_\alpha$ is a neighborhood. If $A$ is not open, let $\Gamma$ be the finite portion of $\Lambda$ that property 1 guarantees exists, i.e. $A\cup \left(\bigcup_{i\in\Gamma} G_{\alpha_i}\right)$ is open in $X$ and each $G_{\alpha_i}$ intersects $A$ nontrivially. This implies that $\bigcup_{i\in\Gamma} \pi G_{\alpha_i}$ is an open neighborhood of $A$ in $X/A$ (since its preimage is $A\cup \left(\bigcup_{i\in\Gamma} G_{\alpha_i}\right)$). (ii) Any $x\in X-\overline{A}$ has an open neighborhood $U_x\subset X-\overline{A}$. Notice that $\pi$ is a homeomorphism on $X-\overline{A}$. Thus for any such $x$ and any open cover $W$ of $\pi^{-1}(x) = x$ in $X$, $\pi W$ is a neighborhood of $x$ because $\pi(U_x\cap W)$ is an open neighborhood (in $X/A$) of $x$ contained in $\pi W$. (iii) If $A$ is closed, then this is trivial so assume that $A$ is not closed and let $x\in \overline{A}-A$. For any open cover $W$ of $\pi^{-1}(x) = x$ in $X$, $\pi^{-1}(\pi W) = W\cup A$, which is open in $X$ by condition 2. Thus $\pi W$ is an open neighborhood of $x$ in $X/A$. Therefore, our two conditions ensure that $\pi$ satisfies Day and Kelly's universal quotient map condition. Corollary <ref> now gives us a way to produce more examples of sieves in the canonical topology: Every quotient of a Hausdorff space by a compact subspace is universal. For example, $\pi\colon D^n\to S^n$ (where $S^n = D^n/\partial D^n$) generates a universal colim sieve. If $A$ is closed, then $S=\langle \{X\to X/A\}\rangle$ is always a colim sieve. Moreover, it is universal if and only if $\partial A$ is compact. For example, this tells us $\langle\{ \mathbb{R}\to\mathbb{R}/[0,\infty) \}\rangle$ is in the canonical topology and reaffirms that $\langle\{\mathbb{R}\to\mathbb{R}/\mathbb{Z} \}\rangle$ is not. § UNIVERSAL COLIM SIEVES IN THE CATEGORY OF $R$-MODULES The category of $R$-modules does not satisfy the assumptions of Theorem <ref> or Theorem <ref>.
Indeed, coproducts and pullbacks of $R$-modules do not commute (for example, let $\mathbb{Z}_{(a,b)}$ denote the domain of $\mathbb{Z}\to\mathbb{Z}^2$, $1\mapsto(a,b)$; then we see that $(\mathbb{Z}_{(1,0)}\oplus\mathbb{Z}_{(0,1)})\times_{\mathbb{Z}^2}\mathbb{Z}_{(1,1)}\cong \mathbb{Z}$ but $(\mathbb{Z}_{(1,0)}\times_{\mathbb{Z}^2}\mathbb{Z}_{(1,1)})\oplus (\mathbb{Z}_{(0,1)}\times_{\mathbb{Z}^2}\mathbb{Z}_{(1,1)})\cong 0$). Thus we do not have basis and presentation results. Instead, we have some smaller results, reductions and examples. Let $R$ be a commutative ring with identity. We will use $R$-Mod for the category of $R$-modules and Ab for the category of abelian groups. We start with some basic results. Any sieve containing a universal effective epimorphism (e.g. a surjection in $R$-Mod or in Sets) is a universal colim sieve. This is an immediate consequence of Theorem <ref> and Corollary <ref>. In $R$-Mod, if a sieve $S$ on $X$ can be generated by at most two morphisms, then the canonical map $\ds c\colon \colim_{S}{U} \to X$ is an injection. Suppose $S = \langle \{f\colon Y\to X, g\colon Z\to X\}\rangle$ and $c(x) = 0$. Since every map in $S$ either factors through $f$ or $g$, then $x$, as an element of $\ds\bigoplus_{A\to X\in S} A$, is really an element $(y,z)\in Y\oplus Z$ in the colimit. So $c(x) = 0$ implies that $y+z=0$ in $X$, i.e. $(y,-z)\in Y\times_X Z$. Thus $y\in Y$ gets identified with $-z\in Z$ in the colimit; hence $(y,z) = (0,z-z) = 0$ in the colimit. Therefore, $x=0$ in the colimit and the map $c$ is an injection. Using the fact that $\langle \{A_i\to X\}_\alpha\rangle = \langle \{A_i\to X\}_\alpha\cup \{Z\xrightarrow{0}X \}\rangle$, we can say that any sieve generated by one morphism is also generated by two morphisms. This completes the proof. In $R$-Mod, let $$S = \langle \{f\colon Y\to X\}\rangle \qquad \text{and}\qquad T = \langle \{g\colon U\to X, h\colon V\to X\}\rangle$$ be sieves on $X$. Then * $S$ is a universal colim sieve if and only if $f$ is a surjection. * $T$ is a colim sieve if and only if $g\oplus h\colon U\oplus V\to X$ is a surjection. For part 2, Lemma <ref> tells us that we only need to worry about the surjectivity of $\ds\colim_{T}{U} \to X$ but this is exactly what the above condition is. For part 1, Lemma <ref> and Lemma <ref> tell us that we only need to worry about the surjectivity of $A\times_X Y \overset{\pi_1}{\longrightarrow} A$ (the generator of $k^\ast S$) for every map $k\colon A\to X$. But $A\times_X Y = \{(a,y)\in A\times Y\, |\, k(a) = f(y) \}$. Hence $\pi_1$ is a surjection for every map $k$ if and only if $f$ is a surjection. In $R$-Mod, suppose $S = \langle\{ f_i\colon M_i\to R\}_{i\in I}\rangle$ is a sieve on $R$ such that for every $i\in I$ there exists an $a_i\in R$ with $im(f_i) = a_i R$. If the ideal $(a_i\, |\, i\in I)$ equals $R$, then for every $R$-module homomorphism $g\colon N\to R$, the natural map $\colim_{g^\ast S}{U} \to N$ is a surjection. By Proposition <ref> it suffices to show that $\eta\colon\ds\oplus_{i} M_i\times_R N\to N$ is a surjection. Let $\pi_i\colon M_i\times_R N\to N$ be the natural map. Fix $x\in N$. Then $a_i g(x)\in a_i R = im(f_i)$ and $a_i g(x)\in im(g)$. Thus $a_i\cdot x\in im(\pi_i)\subset N$ for all $i\in I$. Therefore, writing $1_R = \sum_i r_i a_i$ (which is possible since $R$ is a unital ring and $(a_i\, |\, i\in I) = R$), we get $x = 1_R\cdot x = \sum_i r_i(a_i\cdot x)\in \sum_{i} im(\pi_i) = im(\eta)$. Suppose $S = \langle \{ f_1\colon M_1\to R, f_2\colon M_2\to R\} \rangle$ is a sieve on $R$ such that $im(f_i) = a_i R$ for $i = 1,2$.
Then $S$ is in the canonical topology on $R$-Mod if and only if $(a_1,a_2) = R$. If $S$ is in the canonical topology, then $S$ is a colim sieve and hence by Proposition <ref>, $a_1R + a_2R = R$. If $(a_1,a_2) = R$, then by Proposition <ref>, $S$ is a colim sieve. The universality of $S$ follows immediately from Lemma <ref>, Proposition <ref> and Lemma <ref>. Next we include two results that can help us identify when a sieve is not in the canonical topology. Let $R$ be any nonzero ring. Let $S = \langle\{ f_i\colon A_i\to X\}_{i\in I}\rangle$ be any sieve on $X$ for any nonzero $R$-module $X$. If there exists a nonzero $b\in X$ such that $span_R(b) \subset (X-\cup_I Im(f_i))\cup \{0\}$, then $S$ is not a universal colim sieve. Suppose such a $b\in X$ exists. Define $g\colon R\to X$ by $1\to b$. Then $Im(g)\cap Im(f_i) = \{0\}$ for all $i$. Thus for all $i$, the pullback $R\times_X A_i = ker(g)\times ker(f_i)$ and the image of the natural map $R\times_X A_i\to R$ is $ker(g)$. In particular, $Im\left(\oplus_i R\times_X A_i \to R\right) = ker(g)$, which by construction is not $R$. Therefore, $\colim_{g^\ast S}U \to R$ is not surjective and so $g^\ast S$ is not a colim sieve on $R$. Let $R$ be an infinite principal ideal domain and let $$S = \langle \{g_i\colon R^n \hookrightarrow R^n\}_{i=1}^M \cup \{f_i\colon R^{m_i}\hookrightarrow R^n\ |\ m_i<n \}_{i=1}^N \rangle$$ be a sieve on $R^n$. If $S$ is a universal colim sieve, then $g_1\oplus\dots\oplus g_M\colon R^{nM}\to R^n$ is a surjection. Let $G = g_1\oplus\dots\oplus g_M$. Suppose that $G$ is not a surjection. We will produce a map $\phi$ that shows $S$ is not universal. By a change of basis (which is allowable by Lemma <ref>) we may assume that $G = diag(d_1,d_2,\dots,d_n)$ with $d_i|d_{i+1}$. Because $G$ is not surjective, then $d_n$ is not a unit. Indeed, if $d_n$ were a unit, then all of the $d_i$'s would also be units and thus $G$ would be surjective. By Lemma <ref> below, there exists an $x\in R^{n-1}$ so that $span_R\{(x,1)\}\cap Im(f_i) = \{0\}$ for all $i=1,\dots,N$. Additionally, since $d_n$ is not a unit, then $(x,1)\not\in Im(G)$. Define $\phi\colon R\to R^n$ by $1\mapsto (x,1)$. We will show that $\phi^\ast S$ is not a colim sieve. First we will simplify the generating set of $\phi^\ast S$. By the choice of $x$, the pullback module of $R^{m_i}$ along $\phi$ is $\{0\}$ for all $i=1,\dots,N$. Therefore, we can write $\phi^\ast S$ as $\phi^\ast S = \langle\{\pi_i\colon R^n\times_{R^n} R \to R\}_{i=1}^M\rangle$ where the $\pi_i$ are the pullbacks of the $g_i$ along $\phi$. Since $(x,1)\not\in Im(G)$ and we have the following commutative diagram $$\begin{array}{ccc} \bigoplus_{i=1}^M \left(R^n_i\times_{R^n} R\right) & \longrightarrow & R \\ \big\downarrow & & \big\downarrow{\scriptstyle\phi} \\ \bigoplus_{i=1}^M R^n_i & \underset{G}{\longrightarrow} & R^n \end{array}$$ then $1\not\in Im(\pi_1 \oplus \dots \oplus \pi_M)$. Therefore, $\ds\eta\colon \colim_{\phi^\ast S}{U} \to R$ is not surjective; hence $\phi^\ast S$ is not a colim sieve. Lastly, for completeness we include the linear algebra result referenced in Proposition <ref>. Let $R$ be an infinite principal ideal domain. For any finite collection $V_1,\dots,V_N$ of submodules of $R^n$ with $\text{dim}(V_i)<n$, there exists an $x\in R^{n-1}$ such that $span_R\{(x,1)\} \cap V_i = \{0\}$ for all $i$. Let $F$ be the quotient field of $R$ and set $$W_i = V_i \otimes_R F.$$ We will use $F^{n-1}$ to refer to the subspace $\{(a_1,\dots,a_{n-1},0)\ |\ a_i\in F\}$ in $F^n$. For each $V_i\not\subset F^{n-1}$, fix an element $\nu_i\in V_i$ such that $\nu_i\not\in F^{n-1}$ and write $\nu_i = (v_{i1},\dots,v_{in})$. Let $\nu_i^0 = (v_{i1},\dots,v_{i(n-1)}, 0)$.
Lastly, for each $V_i\not\subset F^{n-1}$, define a vector space map $\phi_i\colon W_i\to F^{n-1}$ by $w = (w_1,\dots,w_n)\mapsto w - \frac{w_n}{v_{in}}\nu_i$. Ideally, we will find an $x$ such that $(x,1)\not\in W_i$ for all $i$. So first, let's see what kinds of $(z,1)$ are in $W_i$ by computing $\phi_i(z,1)$. \begin{align*} \phi_i(z,1) &= (z,1) - \frac{1}{v_{in}}\nu_i \\ &= z - \frac{1}{v_{in}}\nu_i^0 \end{align*} so that $$z = \phi_i(z,1) + \frac{1}{v_{in}}\nu_i^0.$$ Therefore, if $(z,1)\in W_i$, then $z = \phi_i(z,1) + \frac{1}{v_{in}}\nu_i^0$. Based on this result, define $\Gamma_i = im(\phi_i)\oplus span_F\{\nu_i^0\}$. So $(z,1)\in W_i$ implies $z\in \Gamma_i$. For each index $i$ exactly one of the following is true: * $W_i\subset F^{n-1}$, * $W_i\not\subset F^{n-1}$ and $\text{dim}_F(\Gamma_i) < n-1$, * $W_i\not\subset F^{n-1}$ and $\Gamma_i = F^{n-1}$. For every index $j$ in collection 1, every $x\in R^{n-1}$ satisfies the equation $span_R\{(x,1)\} \cap V_j = \{0\}$. Thus when picking our $x$, we only need to consider the indices in collections 2 and 3. For each index $i$ in collection 2, $\Gamma_i$ is a proper subspace of $F^{n-1}$. Since there are only finitely many $\Gamma_i$ and $F$ is an infinite field, then there exists a $y = (y_1, \dots, y_{n-1})$ such that $y\neq 0$ and $span_F\{(y,0)\}\cap \Gamma_i = \{0\}$ for all $i$ in collection 2. By multiplying $y$ by an appropriate $s\in F$ we can clear denominators and so we may assume that $y\in R^{n-1}$. In particular, for all $r\in R$, $ry\not\in\Gamma_i$, which implies that $(ry,1)\not\in W_i$. Therefore, for all $r\in R$, $span_R\{(ry,1)\}\cap V_i = \{0\}$ for all indices in collection 2. Continuing with the $y$ from the previous paragraph, we now consider the indices $k$ in collection 3 and their corresponding $\Gamma_k$. In this situation, $(y,0)\in \Gamma_k$, i.e. $y = \phi_k(z) + u_k\nu_k^0$ for some $z\in W_k$ and $u_k\in F$. Since $R$ is an infinite ring and collection 3 contains finitely many indices $k$, we can pick a nonzero $\rho\in R$ such that for all $k$, $\rho u_k\in R$ and $\rho u_k \neq \frac{1}{v_{kn}}$. Thus $\rho y \neq \phi_k(a) + \frac{1}{v_{kn}}\nu_k^0$ for any $a\in W_k$, which implies that $(\rho y,1)\not\in W_k$. Therefore, $span_R\{(\rho y,1)\}\cap V_k = \{0\}$ for all indices in collection 3. We can take $x = \rho y$. Here we include a few examples and non-examples of sieves in the canonical topology for various rings $R$. In the category of $R$-modules every surjective map generates a universal colim sieve (see Proposition <ref>). As more specific examples, the sieve $\langle\{ \mathbb{Z}\overset{\pi}{\longrightarrow} \mathbb{Z}/n\mathbb{Z}\ |\ 1\mapsto 1\}\rangle$ is in the canonical topology on Ab and in $R$-Mod, the sieve $\langle\{R^n\to R\ |\ (a_1,\dots,a_n)\mapsto a_1\}\rangle$ is in the canonical topology. By Proposition <ref>, $\langle\{R\overset{a}{\longrightarrow} R, R\overset{b}{\longrightarrow} R\}\rangle$ is in the canonical topology if and only if $(a,b) = R$.
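Over $R=\mathbb{Z}$ this criterion is mechanical to check: $(a,b)=\mathbb{Z}$ exactly when $\text{gcd}(a,b)=1$, and the extended Euclidean algorithm produces the coefficients witnessing that $(x,y)\mapsto ax+by$ is onto. The following is a minimal computational sketch (an illustration only, written in standard-library Python; the helper bezout is ours, not part of the paper's machinery): \begin{verbatim}
def bezout(a, b):
    # Extended Euclid: returns (g, u, v) with u*a + v*b == g == gcd(a, b).
    old_r, r = a, b
    old_u, u = 1, 0
    old_v, v = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_u, u = u, old_u - q * u
        old_v, v = v, old_v - q * v
    return old_r, old_u, old_v

g, u, v = bezout(2, 3)
assert (g, 2 * u + 3 * v) == (1, 1)  # (2, 3) = Z: a canonical cover
g, u, v = bezout(4, 6)
assert g == 2                        # (4, 6) = 2Z != Z: not a canonical cover
\end{verbatim} The Bezout coefficients here are exactly the data exhibiting the joint surjectivity required by Proposition <ref>.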
As more specific examples, in Ab the sieve $\langle\{\mathbb{Z}\overset{2}{\longrightarrow} \mathbb{Z}, \mathbb{Z}\overset{3}{\longrightarrow} \mathbb{Z}\}\rangle$ is in the canonical topology; and when the function $\cdot g(x)\colon C^\infty(\mathbb{R})\to C^\infty(\mathbb{R})$ is the map $f(x)\mapsto (g\cdot f)(x)$, then the sieve $\langle\{C^\infty(\mathbb{R})\overset{\cdot x}{\longrightarrow} C^\infty(\mathbb{R}), C^\infty(\mathbb{R})\overset{\cdot \sin(x)}{\longrightarrow} C^\infty(\mathbb{R})\}\rangle$ is not in the canonical topology on $C^\infty(\mathbb{R})$-modules. The sieve $S = \langle\{R\overset{i_1}{\to} R^2, R\overset{i_2}{\to} R^2 \}\rangle$ where $i_1(1) = (1,0)$ and $i_2(1) = (0,1)$ (in the category of $R$-modules for nontrivial $R$) is not in the canonical topology. By Proposition <ref>, $S$ is clearly a colim sieve so to see that $S$ is not universal consider the map $\Delta\colon R\to R^2$, $1\mapsto (1,1)$. Then for $k=1,2$, $i_k$ pulled back along $\Delta$ yields the zero map $z\colon 0\to R$. Hence Lemma <ref> says $\Delta^\ast S = \langle\{z\colon 0\to R\}\rangle$, which is clearly not a colim sieve. Similarly $\langle\{R\overset{i_k}{\to} R^n\ |\ k = 1,\dots,n\}\rangle$ is a colim sieve but is not in the canonical topology. (This is also a consequence of Proposition <ref>.) Let $S = \langle\{ f_k\colon \mathbb{Q}\to\mathbb{Q}[t]\ |\ f_k(1) = 1+t+\dots+t^k \}_{k=1}^\infty\rangle$ in the category of rational vector spaces. This $S$ is not in the canonical topology. (This is a direct consequence of Proposition <ref> using $b = t$.) Let $F$ be an infinite field. In the category of $F$ vector spaces, a sieve of the form $S = \langle\{ F^{m_i}\hookrightarrow F^n\ |\ m_i\leq n \}_{i=1}^M\rangle$ is in the canonical topology if and only if $m_i = n$ for some $i$ if and only if $S$ contains an isomorphism. (This is a consequence of Proposition <ref>.) Consider the diagram $B_1\hookrightarrow B_2\hookrightarrow B_3\hookrightarrow \dots$ made with only injective maps and the direct limit $B\coloneqq \colim B_n$ in $R$-Mod. Let the maps $\iota_n\colon B_n\to B$ be the natural maps into the colimit. Then the sieve $\langle\{\iota_n \,|\, n\in\mathbb{N}\}\rangle$ is a universal colim sieve. Define $\Gamma\colon\mathbb{N}\to S$ by $n\mapsto\iota_n$. Notice that $\Gamma$ is a final functor; this is easy to see since the injectivity of $\iota_n$ and the maps in our diagram imply that $B_i\times_B B_j\cong B_{min(i,j)}$. Thus $\colim_{S}U$ exists and $\colim_{S}U\cong\colim_{\mathbb{N}}U\Gamma \cong B$. Therefore, $S$ is a colim sieve. To see that $S$ is universal, let $f\colon X\to B$ and set $X_i \coloneqq X\times_B B_i$. For each $n\in\mathbb{N}$, $\iota_n$ and $B_n\to B_{n+1}$ are both injective maps; this implies that the natural maps $X_n\to X_{n+1}$ and $X_n\to X$ are also injective maps since the pullback of an injection in $R$-Mod is an injection and $X_i\cong X_{i+1}\times_{B_{i+1}}B_i$. Additionally, it is an easy exercise to see that the direct limit $\colim X_i$ is isomorphic to $X$. In other words, $f^\ast S$ is the type of sieve described in the assumptions of this proposition and proved to be a colim sieve in the previous paragraph. Take $B_n = \mathbb{R}^n$ and let $B_n\to B_{n+1}$ be the inclusion map $(x_1,\dots,x_n)\mapsto(x_1,\dots,x_n,0)$. Use $\mathbb{R}^\infty$ to denote the direct limit.
Then the above proposition shows that $\langle\{\mathbb{R}^n\hookrightarrow\mathbb{R}^\infty\}_{n\in\mathbb{N}} \rangle$ is in the canonical topology on the category of $\mathbb{R}$ vector spaces. (Compare this to Example <ref>.) In this part we prove some reductions that allow us to limit our view (of sieve generating sets and the maps universality must be checked over) to the non-full subcategory of free modules with injective maps when $R$ is `nice.' The first reduction will be reducing the types of sieves we need to look at: In $R$-Mod, let $S$ be a sieve on $X$. Then the following are equivalent * $S$ is a universal colim sieve * $f^\ast S$ is a universal colim sieve for every surjection $f\colon Y\to X$ * $f^\ast S$ is a universal colim sieve for some surjection $f\colon Y\to X$ It is obvious that 1 implies 2 and 2 implies 3, so it suffices to show 3 implies 1. Assume $f^\ast S$ is a universal colim sieve for some fixed surjection $f\colon Y\to X$. Set $T = \langle \{f\colon Y\to X\}\rangle$. By Proposition <ref>, $T$ is a universal colim sieve since $f$ is a surjection. We will now use $T$ together with the Grothendieck topology's transitivity axiom to show that $S$ is a universal colim sieve. Notice that $S$ satisfies the hypotheses of this axiom with respect to $T$. Indeed, since every $g\in T$ factors as $f\circ k$ for some $k$, then $g^\ast S = (fk)^\ast S = k^\ast(f^\ast S)$, which implies that $g^\ast S$ is a universal colim sieve (as $f^\ast S$ is universal) for every $g\in T$. Therefore, by the transitivity axiom of a Grothendieck topology, $S$ is a universal colim sieve. To rephrase our first reduction: $S$ is a universal colim sieve on $X$ if and only if $f^\ast S$ is a universal colim on $R^n$ where $f\colon R^n\to X$ is a surjection (note that $n$ is not necessarily assumed to be finite). This reduction means that we can restrict our view to free modules (not necessarily finitely generated). Specifically, we only need to look at sieves on free modules and check the universality condition on free modules. Indeed, $S$ is a universal colim sieve on $X$ if and only if for all $g\colon Y\to X$, $g^\ast S$ is a universal colim sieve on $Y$ if and only if for all $g\colon Y\to X$, $(gf)^\ast S$ is a universal colim sieve on $R^n$ for some surjection $f\colon R^n\to Y$. In $R$-Mod when $R$ is a principal ideal domain, every sieve on $R^n$ equals a sieve of the form \begin{equation*} \langle \{g_i\colon R^{m_i}\hookrightarrow R^n\colon m_i\leq n\}_{i\in I}\rangle \end{equation*} where the $g_i$ are injections. Let $S = \langle\{f_i\colon A_i\to R^n\}_{i\in I}\rangle$ be a sieve on $R^n$. Set $$T = \langle\{g_i\colon Im(f_i)\to R^n\}_{i\in I}\rangle$$ where the $g_i$'s are inclusion maps. Since $R$ is a PID and $Im(f_i)$ is a submodule of $R^n$, then $Im(f_i)\cong R^{m_i}$ for some $m_i\leq n$. Thus $T$ is of the desired form and we will show that $S=T$. First notice that $S\subset T$. To get that $T$ is a subcollection of $S$, notice that $\tilde{f}_i\colon A_i\to Im(f_i)$ (i.e. $f_i$ with a different codomain) is split because $\tilde{f}_i$ is a surjective map onto a projective module; call the splitting $\chi_i$. Hence $g_i = g_i\circ \tilde{f}_i\circ\chi_i = f_i\circ\chi_i$ implies that $T\subset S$ and completes the proof. To rephrase our second reduction: when talking about sieves on $R^n$, we only need to talk about sieves generated by injections of free modules. 
Thus we can restrict our view of sieve generating sets to the non-full subcategory of free modules with injective morphisms. Our next reduction will also assume $R$ is a principal ideal domain. In particular, fix $n$ and a map $f\colon X\to R^n$ for some $R$-module $X$. Then since $R$ is a PID, we may write $X\cong R^m\oplus K$ for some $m\leq n$, where $R^m\cong Im(f)$, $K = ker(f)$, and $f = g+z$ with $g\colon R^m\to R^n$ an injection and $z\colon K\to R^n$ the zero map. Let $R$ be a principal ideal domain, $S$ be a sieve on $R^n$ in $R$-Mod and $f\colon X\to R^n$. Then, using the set-up described in the previous paragraph, $$\colim_{f^\ast S}{U} \cong \left(\colim_{g^\ast S}{U}\right) \oplus \left(\colim_{z^\ast S}{U}\right).$$ Moreover, $z^\ast S$ is a universal colim sieve; hence $f^\ast S$ is a colim sieve if and only if $g^\ast S$ is a colim sieve. By Proposition <ref>, we may assume that $S$ can be written in the form $S = \langle \{\eta_i\colon R^{p_i}\hookrightarrow R^n\colon p_i\leq n\}_{i\in I}\rangle$. Consider the diagrams $\EuScript{X}$, $\EuScript{R}$ and $\EuScript{K}$ defined as: $\EuScript{X} = \left(\begin{tikzcd} \bigoplus_{i\in I}(R^{p_i} \times_{R^n} X)\times_X (R^{p_i} \times_{R^n} X) \arrow[d, shift left = 2] \arrow[d, shift right = 2] \\ \bigoplus_{i\in I}(R^{p_i} \times_{R^n} X) \end{tikzcd}\right)$, $\EuScript{R} = \left(\begin{tikzcd} \bigoplus_{i\in I}(R^{p_i} \times_{R^n} R^m)\times_{R^m} (R^{p_i} \times_{R^n} R^m) \arrow[d, shift left = 2] \arrow[d, shift right = 2] \\ \bigoplus_{i\in I}(R^{p_i} \times_{R^n} R^m) \end{tikzcd}\right), \text{ and }$ $\EuScript{K} = \left(\begin{tikzcd} \bigoplus_{i\in I}(R^{p_i} \times_{R^n} K)\times_K (R^{p_i} \times_{R^n} K) \arrow[d, shift left = 2] \arrow[d, shift right = 2]\\ \bigoplus_{i\in I}(R^{p_i} \times_{R^n} K) \end{tikzcd}\right)$ First we look at the objects of $\EuScript{X}$. Since each $\eta_i$ is injective, then for all $i$ $$R^{p_i} \times_{R^n} X \cong (R^{p_i} \times_{R^n} R^m)\oplus (R^{p_i} \times_{R^n} K)$$ and for all $i$, $q$ \begin{align*} (R^{p_i} &\times_{R^n} X) \times_X (R^{p_q}\times_{R^n} X) \\ &\cong ((R^{p_i}\times_{R^n} R^m)\times_{R^m}(R^{p_q}\times_{R^n} R^m))\oplus ((R^{p_i}\times_{R^n} K)\times_K(R^{p_q}\times_{R^n} K)). \end{align*} In other words, $\EuScript{X} \cong \EuScript{R}\oplus \EuScript{K}$. But since colimits “commute” with colimits, then $\Coeq(\EuScript{X}) \cong \Coeq(\EuScript{R})\oplus \Coeq(\EuScript{K})$. Now by Lemma <ref> and Proposition <ref>, the first part has been proven, i.e. $$\colim_{f^\ast S}{U} \cong \left(\colim_{g^\ast S}{U}\right) \oplus \left(\colim_{z^\ast S}{U}\right).$$ Next we notice that $z^\ast S$ is a universal colim sieve. Indeed, since $\eta_i$ is an injection and $z$ is the zero map, it easily follows that $z^\ast S = \langle \{id\colon K\to K\}\rangle$. To complete the proof, notice that we have the following commutative diagram $$\begin{array}{ccc} \Coeq(\EuScript{X}) & \cong & \Coeq(\EuScript{R})\oplus\Coeq(\EuScript{K}) \\ {\scriptstyle\chi}\big\downarrow & & \big\downarrow{\scriptstyle\rho\oplus\kappa} \\ X & \cong & R^m\oplus K \end{array}$$ where the vertical maps are the obvious canonical maps. Under these identifications $\chi = \rho\oplus\kappa$, so $\chi$ is an isomorphism if and only if both $\rho$ and $\kappa$ are isomorphisms. We have already shown that $\kappa$ is an isomorphism (as $z^\ast S$ is a universal colim sieve), thus this diagram implies that $\chi$ is an isomorphism if and only if $\rho$ is; hence $f^\ast S$ is a colim sieve if and only if $g^\ast S$ is a colim sieve.
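For a concrete feel for the splitting $X\cong R^m\oplus K$ over $R=\mathbb{Z}$, here is a minimal sketch (an illustration only, assuming SymPy is available; the particular matrix is ours, chosen so the rational kernel basis is already integral): \begin{verbatim}
from sympy import Matrix

# f : Z^3 -> Z^2 given by an integer matrix acting on column vectors.
F = Matrix([[1, 2, 0],
            [0, 0, 0]])

m = F.rank()        # over a PID the image is free, so Im(f) ~ Z^m; here m = 1
ker = F.nullspace() # a Q-basis of ker(f); for this F it is already a Z-basis
print(m)            # 1
print(ker)          # [Matrix([-2, 1, 0]), Matrix([0, 0, 1])], so K ~ Z^2
# The domain splits as Z^3 ~ Z^m (+) K because Im(f) is free, hence projective.
\end{verbatim} Lastly, we rephrase our third reduction below.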
Lastly, we rephrase our third reduction: when $R$ is a PID, a sieve $S$ on $R^n$ is a universal colim sieve if and only if $f^\ast S$ is a colim sieve for every injection $f\colon R^m\to R^n$. Altogether, our reductions basically allow us to work in the subcategory of free modules with injective morphisms instead of in $R$-Mod. §.§ The Category of Abelian Groups This section will be primarily made up of examples. Additionally, we include a characterization of sieves on $\mathbb{Z}$ and one result for sieves on larger free abelian groups. By Corollary <ref>, $\langle\{ \mathbb{Z} \xrightarrow{\times a} \mathbb{Z}, \mathbb{Z}\xrightarrow{\times b} \mathbb{Z} \}\rangle$ is a universal colim sieve if and only if $a$ and $b$ are relatively prime. The sieve $S = \langle \{\mathbb{Z}\xrightarrow{\times 1} \mathbb{Z}/4\mathbb{Z}, \mathbb{Z}/2\mathbb{Z}\xrightarrow{\times 2}\mathbb{Z}/4\mathbb{Z}\}\rangle$ is a universal colim sieve on $\mathbb{Z}/4\mathbb{Z}$ by Corollary <ref>. Additionally, $S$ is not monogenic, i.e. it cannot be written as a sieve generated by one morphism. Let $S = \langle \{g\colon \mathbb{Z}^n \hookrightarrow \mathbb{Z}^n\} \cup \{f_i\colon \mathbb{Z}^{m_i}\hookrightarrow \mathbb{Z}^n\ |\ m_i<n \}_{i=1}^N \rangle$ be a sieve on $\mathbb{Z}^n$. Then $S$ is a universal colim sieve if and only if $g$ is a surjection, i.e. $g$ is an isomorphism. (This is a direct corollary of Proposition <ref> and Corollary <ref>.) Ideally, we would like to know a `nice' basis for the canonical topology on Ab, like the bases in Section <ref>; to start moving towards this ideal, we look at the simplest free abelian group, $\mathbb{Z}$. In Example <ref> we see that a relatively prime pair of numbers will generate a universal colim sieve; this is actually true in general, specifically: Let $S = \langle \{ \mathbb{Z}\xrightarrow{\times a_i} \mathbb{Z} \}_{i=1}^N \rangle$ be a sieve on $\mathbb{Z}$. Then $S$ is a universal colim sieve if and only if $\text{gcd}(a_1,\dots,a_N) = 1$. First assume that $S$ is a universal colim sieve. In particular, the map $\colim_{S}{U}\to\mathbb{Z}$ is a surjection, i.e. $\mathbb{Z}^N\to\mathbb{Z}$, $(x_1,\dots,x_N) \mapsto a_1x_1 + \dots + a_Nx_N$ is a surjection. Therefore, $(a_1,\dots,a_N) = \mathbb{Z}$ and this proves the forward direction. Now assume that $\text{gcd}(a_1,\dots,a_N) = 1$. We will break the proof that $S$ is a universal colim sieve up into several pieces. First we will reduce the proof to showing that $S$ is a colim sieve. By the reductions (Propositions <ref>, <ref> and <ref>), universality only needs to be checked along maps of the form $f\colon \mathbb{Z}\xrightarrow{\times k}\mathbb{Z}$ where $k\neq 0$. Fix $k\neq 0$, i.e. fix $f$, and write $\mathbb{Z}_b$ for the domain of $\mathbb{Z}\xrightarrow{\times b}\mathbb{Z}$. By Lemma <ref>, $f^\ast S = \langle \{ \pi_i\colon \mathbb{Z}_{a_i}\times_\mathbb{Z} \mathbb{Z}_k \to \mathbb{Z}_k \}_{i=1}^N \rangle$. Moreover, it is easy to see that the pullback $\mathbb{Z}_{a_i}\times_\mathbb{Z} \mathbb{Z}_k\cong \mathbb{Z}$ and $\pi_i$ must be multiplication by $\frac{a_i}{\text{gcd}(a_i,k)}$. Since $\text{gcd}(a_1,\dots,a_N)$ equals $1$, then $\text{gcd}\left(\frac{a_1}{\text{gcd}(a_1,k)},\dots,\frac{a_N}{\text{gcd}(a_N,k)}\right) = 1$ and hence $f^\ast S$ has the same form as $S$. Specifically, any argument showing that $S$ is a colim sieve will similarly show that $f^\ast S$ is a colim sieve. Therefore, it suffices to show that $S$ is a colim sieve. To see that $S$ is a colim sieve, i.e.
to see that the map $\colim_{S}{U}\to \mathbb{Z}$ induced by $a_1,\dots,a_N$ is an isomorphism, let $\alpha = \frac{N(N-1)}{2}$ and notice that \begin{equation*} \begin{split} \colim_{S}{U} & \cong \Coeq\left( \begin{tikzcd} \oplus_{i=1}^\alpha \mathbb{Z} \arrow[d, shift left = 2] \arrow[d, shift right = 2] \\ \oplus_{i=1}^{N} \mathbb{Z} \end{tikzcd}\right) \\ & \cong \text{Cokernel}\left(\phi\colon \mathbb{Z}^\alpha \to \mathbb{Z}^N\right) \end{split} \end{equation*} for some map $\phi$ where the first isomorphism comes from Lemma <ref> and the last isomorphism comes from the fact that we are working in an abelian category.

Now this map $\phi$ happens to be the third map in the Taylor resolution of $\mathbb{Z}$, i.e. $\phi_1$ in [5]. We make two remarks about the previous sentence: (1) we will not prove that our $\phi$ is the $\phi_1$ of [5], although this is easy to observe, and (2) the Taylor resolution in [5] is specifically for polynomial rings, not $\mathbb{Z}$; however, both the definition of the Taylor resolution and the proof that it is in fact a free resolution are analogous. Here is the end of the Taylor resolution: $$\dots \to \mathbb{Z}^\alpha \xrightarrow{\phi} \mathbb{Z}^N \xrightarrow{(a_1\ \dots\ a_N)} \mathbb{Z}\to \mathbb{Z}/(a_1,\dots,a_N)\mathbb{Z} \to 0$$ Since $\text{gcd}(a_1,\dots,a_N) = 1$, it follows that $(a_1\ \dots\ a_N)$ is a surjection and $\mathbb{Z}/(a_1,\dots,a_N)\mathbb{Z} \cong 0$. Thus we obtain the short exact sequence $0\to Im(\phi) \to \mathbb{Z}^N \to \mathbb{Z}\to 0$, which implies that the cokernel of $\phi$ is $\mathbb{Z}$. Additionally, since $(a_1\ \dots\ a_N)$ induced our map $\colim_{S}{U}\to \mathbb{Z}$, this short exact sequence also says that $S$ is a colim sieve.

Because of Proposition <ref>, we can now easily determine when a sieve on $\mathbb{Z}$ is in the canonical topology and we can easily come up with examples; for example, $\langle \{ \mathbb{Z}\xrightarrow{\times 15}\mathbb{Z}, \mathbb{Z} \xrightarrow{\times 10} \mathbb{Z}, \mathbb{Z} \xrightarrow{\times 12} \mathbb{Z} \} \rangle$ is in the canonical topology whereas the sieve $\langle \{ \mathbb{Z} \xrightarrow{\times 15} \mathbb{Z}, \mathbb{Z} \xrightarrow{\times 50}\mathbb{Z}, \mathbb{Z} \xrightarrow{\times 20} \mathbb{Z} \}\rangle$ is not. One may hope for a similar outcome for sieves on $\mathbb{Z}^n$ when $n\geq 2$; however, the Taylor resolution used in the proof of Proposition <ref> does not seem to generalize in a suitable manner. Instead, we have a proposition that may tell us when a potential sieve is not in the canonical topology.

Let $S = \langle \{ \mathbb{Z}^n \xrightarrow{A_i} \mathbb{Z}^n \}_{i=1}^N \rangle$ where $A_i$ is a diagonal matrix with $\det(A_i)\neq 0$. Then there exists a map $\beta\colon \mathbb{Z}\to \mathbb{Z}^n$ such that $\beta^\ast S$ is not a colim sieve if and only if $\text{gcd}(\det(A_1),\dots,\det(A_N)) \neq 1$.

First we set up some notation: Let $A_i = diag(a_{1i},\dots,a_{ni})$ and $\mathbb{Z}^n_i$ be the domain of $A_i$. To prove the backward direction, suppose that $\text{gcd}(\det(A_1),\dots,\det(A_N))$ does not equal $1$. We can rephrase these assumptions as: $a_{ki}\neq 0$ for all $k$ and $i$, and there exists a prime $q$ such that $q$ divides the product $a_{1i}\cdots a_{ni}$ for all $i$. Set $\beta$ equal to the diagonal embedding, i.e. $1\mapsto (1,\dots,1)$. Then by Lemma <ref>, $\beta^\ast S = \langle \{ f_i\colon \mathbb{Z}^n_i \times_{\mathbb{Z}^n} \mathbb{Z} \to \mathbb{Z}\}_{i=1}^N \rangle$. 
Let $k_i = \text{lcm}(a_{1i},\dots, a_{ni})$ and $\chi_i\colon \mathbb{Z}\to \mathbb{Z}^n$, $1\mapsto \left( \frac{k_i}{a_{1i}}, \dots, \frac{k_i}{a_{ni}} \right)$. Then
$$\begin{tikzcd} \mathbb{Z} \arrow[d, "k_i"'] \arrow[r, "\chi_i"] & \mathbb{Z}^n \arrow[d, "A_i"] \\ \mathbb{Z} \arrow[r, "\beta"'] & \mathbb{Z}^n \end{tikzcd}$$
is a pullback diagram. Moreover, the prime $q$ divides $k_i$ for all $i$ since it divides $a_{1i}\cdots a_{ni}$ for all $i$. Thus $\text{gcd}(k_1,\dots,k_N)\neq 1$. Now by Proposition <ref>, we can see that $\beta^\ast S = \langle \{\mathbb{Z} \xrightarrow{\times k_i} \mathbb{Z}\}_{i=1}^N \rangle$ is not a universal colim sieve. In particular, the first part of the proof of Proposition <ref> shows that $\beta^\ast S$ is not a colim sieve.

To prove the forward direction, we will prove the contrapositive statement. So suppose that $\text{gcd}(\det(A_1),\dots,\det(A_N)) = 1$. Let $\beta\colon \mathbb{Z}\to \mathbb{Z}^n$ be given as the matrix $\begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$. To see that $\beta^\ast S = \langle \{ f_i\colon \mathbb{Z}^n_i \times_{\mathbb{Z}^n} \mathbb{Z} \to \mathbb{Z}\}_{i=1}^N \rangle$ is a colim sieve, notice that we have the pullback diagram
$$\begin{tikzcd} \mathbb{Z} \arrow[d, "k_i"'] \arrow[r] & \mathbb{Z}^n \arrow[d, "A_i"] \\ \mathbb{Z} \arrow[r, "\beta"'] & \mathbb{Z}^n \end{tikzcd}$$
where $k_i = \text{lcm} \left(\frac{a_{1i}}{\text{gcd}(a_{1i},b_1)},\dots,\frac{a_{ni}}{\text{gcd}(a_{ni},b_n)}\right)$. Hence, $k_i$ divides $\det(A_i)$. This implies that $\text{gcd}(k_1,\dots,k_N)$ divides $\text{gcd}(\det(A_1),\dots,\det(A_N))$ and hence equals 1. Now by Proposition <ref>, we can see that $\beta^\ast S = \langle \{\mathbb{Z} \xrightarrow{\times k_i} \mathbb{Z}\}_{i=1}^N \rangle$ is a universal colim sieve.

Based on Proposition <ref> we can automatically say that the sieve $\displaystyle \left\langle \left\{ \begin{pmatrix} 4 & 0 \\ 0 & 14 \end{pmatrix}, \begin{pmatrix} 21 & 0 \\ 0 & 2 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 49 \end{pmatrix} \right\} \right\rangle$ on $\mathbb{Z}^2$ is not in the canonical topology, because each matrix has a multiple of 7 somewhere on its diagonal and hence 7 divides every determinant.

Suppose, as in Proposition <ref>, $S = \langle \{ \mathbb{Z}^n \xrightarrow{A_i} \mathbb{Z}^n \}_{i=1}^N \rangle$ where each $A_i$ is a diagonal matrix and $\text{gcd}(\det(A_1),\dots,\det(A_N)) = 1$. In order to determine if $S$ is a universal colim sieve, we (only) need to check if $f^\ast S$ is a colim sieve for all $f\colon \mathbb{Z}^m \hookrightarrow \mathbb{Z}^n$, $2\leq m\leq n$. However, this is still a fair amount of work and it would be nice if this process could be simplified further.

Now we finish this section with a few more examples. Note: we will not prove the assertions in these examples; they are all basic computations that can be checked using undergraduate linear algebra. The sieve $S_1 = \displaystyle \left\langle \left\{ \begin{pmatrix} 7 & 0 \\ 1 & 4 \end{pmatrix}, \begin{pmatrix} 21 & 0 \\ 1 & 18 \end{pmatrix}, \begin{pmatrix} 24 & 0 \\ 6 & 5 \end{pmatrix} \right\} \right\rangle$ on $\mathbb{Z}^2$ is not in the canonical topology although it is a colim sieve. In particular, $S_1$ is not universal because $f^\ast S_1$ is not a colim sieve for $f\colon \mathbb{Z} \to \mathbb{Z}^2$, $f(1) = (1,0)$. 
If we take the generating set of $S_1$ and change the 1 in the first matrix to a 0, then we get the following example: The sieve $S_2 = \displaystyle \left\langle \left\{ \begin{pmatrix} 7 & 0 \\ 0 & 4 \end{pmatrix}, \begin{pmatrix} 21 & 0 \\ 1 & 18 \end{pmatrix}, \begin{pmatrix} 24 & 0 \\ 6 & 5 \end{pmatrix} \right\} \right\rangle$ on $\mathbb{Z}^2$ is not a colim sieve since $\colim_{S_2}{U}\cong \mathbb{Z}^2\oplus \mathbb{Z}/2\mathbb{Z}$. Therefore, $S_2$ is also not in the canonical topology. Finally, if we take the generating set of $S_2$ and change the 18 in the second matrix to a 9, then we get: The sieve $S_3 = \displaystyle \left\langle \left\{ \begin{pmatrix} 7 & 0 \\ 0 & 4 \end{pmatrix}, \begin{pmatrix} 21 & 0 \\ 1 & 9 \end{pmatrix}, \begin{pmatrix} 24 & 0 \\ 6 & 5 \end{pmatrix} \right\} \right\rangle$ on $\mathbb{Z}^2$ is a colim sieve; however, whether or not this sieve is in the canonical topology is unknown.

[1] Brian J. Day and G. Max Kelly. On topological quotient maps preserved by pullbacks or products. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 67, pages 553–558. Cambridge University Press, 1970.
[2] C. Lester. The canonical Grothendieck topology and a homotopical analog. Preprint, 2019.
[3] Saunders Mac Lane and Ieke Moerdijk. Sheaves in Geometry and Logic: A First Introduction to Topos Theory. Springer Science & Business Media, 2012.
[4] J. Peter May. A Concise Course in Algebraic Topology. University of Chicago Press, 1999.
[5] Jeffrey Mermin. Three simplicial resolutions. Progress in Commutative Algebra, 1:127–141, 2012.
[6] Neil P. Strickland. The category of CGWH spaces. Preprint, 2009.
# Shape-guided Conditional Latent Diffusion Models for Synthesising Brain Vasculature

Yash Deo^{1}, Haoran Dou^{1}, Nishant Ravikumar^{1,2}, Alejandro F. Frangi^{1,2,3,4,5}, Toni Lassila^{1,2}

^{1} Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing and School of Medicine, University of Leeds, Leeds, UK
^{2} NIHR Leeds Biomedical Research Centre (BRC), Leeds, UK
^{3} Alan Turing Institute, London, UK
^{4} Medical Imaging Research Center (MIRC), Electrical Engineering and Cardiovascular Sciences Departments, KU Leuven, Leuven, Belgium
^{5} Division of Informatics, Imaging and Data Science, Schools of Computer Science and Health Sciences, University of Manchester, Manchester, UK

###### Abstract The Circle of Willis (CoW) is the part of cerebral vasculature responsible for delivering blood to the brain. Understanding the diverse anatomical variations and configurations of the CoW is paramount to advance research on cerebrovascular diseases and refine clinical interventions. However, comprehensive investigation of less prevalent CoW variations remains challenging because of the dominance of a few commonly occurring configurations. We propose a novel generative approach utilising a conditional latent diffusion model with shape and anatomical guidance to generate realistic 3D CoW segmentations, including different phenotypical variations. Our conditional latent diffusion model incorporates shape guidance to better preserve vessel continuity and demonstrates superior performance when compared to alternative generative models, including conditional variants of 3D GAN and 3D VAE. We observed that our model generated CoW variants that are more realistic and demonstrate higher visual fidelity than competing approaches with an FID score 53% better than the best-performing GAN-based model.

###### Keywords: Image Synthesis · Deep Learning · Brain Vasculature · Vessel Synthesis · Diffusion · Latent Diffusion

## 1 Introduction The Circle of Willis (CoW) comprises a complex network of cerebral arteries that plays a critical role in the supply of blood to the brain. The constituent arteries and their branches provide a redundant route for blood flow in the event of occlusion or stenosis of the major vessels, ensuring continuous cerebral perfusion and mitigating the risk of ischaemic events [16]. However, the structure of the CoW is not consistent between individuals and dozens of anatomical variants exist in the general population [6, 17]. Understanding the differences between these variants is essential to study cerebrovascular diseases, predict disease progression, and improve clinical interventions. Previous studies have attempted to classify and describe the anatomical variations of CoW using categorisations such as the Lippert and Pabst system [6, 17]. However, more than 80% of the general population has one of the three most common CoW configurations [2]. The study of anatomical heterogeneity in CoW is limited by the size of available angiographic research data sets, which may only contain a handful of examples of all but the most common phenotypes. The goal of this study is to develop a generative model for CoW segmentations conditioned on anatomical phenotype. Such a model could be used to generate large anatomically realistic virtual cohorts of brain vasculature, and the less common CoW phenotypes can be augmented and explored in greater numbers. 
Synthesised virtual cohorts of brain vasculature may subsequently be used for training deep learning algorithms on related tasks (e.g. segmenting brain vasculature, classification of CoW phenotype, etc.), or performing in-silico trials. Generative adversarial networks (GANs) [4] and other generative models have demonstrated success in the synthesis of medical images, including the synthesis of blood vessels and other anatomical structures. However, to the best of our knowledge, no previous study has explored these generative models for synthesising different CoW configurations, nor for the controllable synthesis of CoW configurations conditioned on desired phenotypes. The synthesis of narrow tubular structures such as blood vessels using conventional generative models is a challenge. Our study builds upon the foundations of generative models in medical imaging and focusses on utilising a conditional latent diffusion model to generate visually realistic CoW configurations with controlled anatomical variations (i.e., by conditioning on relevant anatomical information such as CoW phenotypes).

Medical images like brain magnetic resonance angiograms (MRAs) tend to be high-dimensional and, as a result, are prohibitively memory-intensive for generative models. Diffusion models and latent diffusion models (LDMs) have recently been used for medical image generation [11] and have been shown to outperform GANs in medical image synthesis [18]. Diffusion models have also been successfully used to generate synthetic MRIs [9, 19, 20], but to the best of our knowledge there are no studies that use latent diffusion models or diffusion models to generate synthetic brain vasculature. We propose a conditional latent diffusion model that learns latent embeddings of brain vasculature and, during inference, samples from the learnt latent space to synthesise realistic brain vasculature. We incorporate class, shape, and anatomical guidance as conditioning factors in our latent diffusion model, allowing the vessels to retain their shape and allowing precise control over the generated CoW variations. The diffusion model is conditioned to generate different anatomical variants of the posterior cerebral circulation. We evaluate the performance of our model using quantitative metrics such as the multiscale structural similarity index (MS-SSIM) and the Fréchet inception distance (FID). Comparative analyses are conducted against alternative generative architectures, including a 3D GAN and a 3D variational auto-encoder (VAE), to assess the superiority of our proposed method in reproducing CoW variations.

## 2 Methodology Data and Pre-processing. We trained our model on the publicly available IXI dataset [8], using the 181 3T MRA scans acquired at the Hammersmith Hospital, London. Images were centred, cropped from $512\times 512\times 100$ to $256\times 256\times 100$, and intensity-normalised. We then used a Residual U-net [10] to extract vessel segmentations from the MRA. The authors manually labelled each case with the presence/absence of one or both posterior communicating arteries (PComA) in the CoW. Class 1 includes cases where both PComAs are present, Class 2 includes cases with only one PComA, while Class 3 includes cases where both PComAs are absent. Latent Diffusion Model. Recent advances in diffusion models for medical image generation have achieved remarkable success. 
Diffusion models define a Markov chain of diffusion steps to add random Gaussian noise to the observed data sequentially and then learn to reverse the diffusion process to construct new samples from the noise. Although effective, vanilla diffusion models can be computationally expensive when the input data is of high dimensionality in image space ($256\times 256\times 100$ in our study). Hence, we employ the latent diffusion model (LDM), comprising a pretrained autoencoder and a diffusion model. The autoencoder learns a lower-dimensional latent embedding of the brain vasculature, while the diffusion model focusses on modelling the high-level semantic representations in the latent space efficiently. Following [18], the diffusion process can be defined as forward and reverse Markov chains, where the forward process iteratively transforms the data $x_{0}$ (i.e. the latent features from the autoencoder in our approach) into a standard Gaussian $X_{T}$ as follows: $q\left(\mathbf{x}_{1:T}|\mathbf{x}_{0}\right)=\prod_{t=1}^{T}q\left(\mathbf{x}_{t}|\mathbf{x}_{t-1}\right),q\left(\mathbf{x}_{t}|\mathbf{x}_{t-1}\right):=\mathcal{N}\left(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\mathbf{x}_{t-1},\beta_{t}\mathbf{I}\right)$ where $q\left(\mathbf{x}_{t}|\mathbf{x}_{t-1}\right)$ is the transition probability at the time step $t$ based on the noise schedule $\beta_{t}$. Therefore, the noisy data $\mathbf{x}_{t}$ can be formulated as $q\left(\mathbf{x}_{t}|\mathbf{x}_{0}\right)=\mathcal{N}\left(\mathbf{x}_{t};\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I}\right)$, where $\alpha_{t}:=1-\beta_{t},\bar{\alpha}_{t}:=\prod_{s=1}^{t}\alpha_{s}$. The reverse process, achieved via a deep neural network parameterised by $\theta$, can then be defined as: $p_{\theta}\left(\mathbf{x}_{0}|\mathbf{x}_{T}\right)=p\left(\mathbf{x}_{T}\right)\prod_{t=1}^{T}p_{\theta}\left(\mathbf{x}_{t-1}|\mathbf{x}_{t}\right),p_{\theta}\left(\mathbf{x}_{t-1}|\mathbf{x}_{t}\right):=\mathcal{N}\left(\mathbf{x}_{t-1};\mathbf{\mu}_{\theta}\left(\mathbf{x}_{t},t\right),\mathbf{\Sigma}_{\theta}\left(\mathbf{x}_{t},t\right)\right)$ The simplified evidence lower bound (ELBO) loss of Ho et al. [5] used to optimise the diffusion model can be formulated as a score-matching task in which the neural network predicts the actual noise $\epsilon$ added to the observed data. The resulting loss function is $\mathcal{L}_{\theta}:=\mathbb{E}_{\textbf{x}_{0},t,C,\epsilon\sim\mathcal{N}\left(0,1\right)}\left[\left\|\epsilon-\epsilon_{\theta}\left(x_{t},t,C\right)\right\|^{2}\right]$ where $C$ is the condition in conditional generation. We pretrained a multitask attention-based autoencoder using a combination of L1 loss and Dice loss. The encoder transforms the brain image $K_{0}$ into a compact latent representation $x_{0}$ with dimensions of $256\times 256\times 1$. Once the compression model is trained, the latent representations from the training set serve as inputs to the diffusion model for further analysis and generation. We employ a model with a U-net-based architecture as the diffusion model. Our model has 5 encoding blocks and 5 decoding blocks with skip connections between the corresponding encoding and decoding blocks. We replace the simple convolution layers in the encoding and decoding blocks with a residual block followed by a multi-head attention layer to limit information loss in the latent space. A minimal sketch of the forward process and training loss is given below.
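The sketch below uses TensorFlow, the framework stated in the implementation details; the linear schedule with 1000 steps matches the paper, but the $\beta$ endpoints, the helper names, and the `eps_model` call signature are our assumptions rather than details from the paper.

```python
import tensorflow as tf

T = 1000                                   # diffusion steps; linear schedule as in the paper
betas = tf.linspace(1e-4, 0.02, T)         # assumed schedule endpoints (common DDPM choice)
alphas = 1.0 - betas
alpha_bars = tf.math.cumprod(alphas)       # \bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(x0, t, eps):
    """Closed-form forward step: x_t ~ N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    ab = tf.reshape(tf.gather(alpha_bars, t), [-1, 1, 1, 1])  # broadcast over 256x256x1 latents
    return tf.sqrt(ab) * x0 + tf.sqrt(1.0 - ab) * eps

def diffusion_loss(eps_model, x0, cond):
    """Simplified ELBO: the conditional network predicts the injected noise epsilon."""
    b = tf.shape(x0)[0]
    t = tf.random.uniform([b], 0, T, dtype=tf.int32)          # random time step per sample
    eps = tf.random.normal(tf.shape(x0))
    x_t = q_sample(x0, t, eps)
    eps_hat = eps_model(x_t, t, cond)                         # epsilon_theta(x_t, t, C)
    return tf.reduce_mean(tf.square(eps - eps_hat))
```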
Each encoding and decoding block takes the class category (based on CoW phenotypes) as an additional conditional input, while only the decoding blocks take shape and anatomy features as additional conditional inputs. Figure 1: Overview of the latent diffusion process.

Shape and Anatomy Guidance. Angiographic medical images exhibit intricate anatomical structures, particularly the small vessels in the peripheral cerebral vasculature. Preserving anatomical integrity becomes crucial in the generation of realistic and accurately depicted vessels. However, diffusion models often face challenges in faithfully representing the anatomical structure, which can be attributed to their learning and sampling processes being heavily based on probability density functions [5]. Previous studies have demonstrated that the inclusion of geometric and shape priors can improve performance in medical image synthesis [1, 22]. Additionally, latent space models are susceptible to noise and information loss within the latent space. To this end, we incorporate shape and anatomy guidance to improve the performance of our CoW generation. The shape guidance component involves incorporating class-wise Hu and Zernike moments as conditions during model training [7, 12]. This choice stems from the nature of our image dataset, which comprises both vessel and background regions. By including these shape-related moments as conditions, we aim to better preserve vascular structures within the synthesised images. Hu and Zernike moments are a set of seven invariant moments and a set of orthogonal moments, respectively, commonly used for shape analysis. These moments are typically computed on greyscale or binary images. To incorporate the Hu and Zernike moments as conditions, we first calculate and concatenate these moments for each class (see the sketch at the end of this subsection). An embedding layer comprising a dense layer with a SiLU activation function [3] and a reshape layer is then introduced to ensure that the data are reshaped into a suitable format for integration as a condition within the decoding branches.

Figure 2: Row 1: Comparison of the output of the latent diffusion network with and without shape guidance as conditional input. In each column, the image on the left shows the output of our latent diffusion model and the image on the right shows the result of passing the output through the pretrained decoder and obtaining the Maximum Intensity Projection (MIP). Row 2 compares the output of the network with and without anatomy guidance as conditional input. The generated images displayed on the right, produced without anatomy guidance, consistently exhibit a similar variation of the circle of Willis. Conversely, the images presented on the left, generated with anatomy guidance, demonstrate a greater degree of realism and variability in the synthesised circle of Willis variations.

To further enhance the performance of our model, we incorporate anatomy guidance using principal component analysis (PCA) on images from each class. As the majority of branches within the CoW exhibit a consistent configuration, with minor variations attributed to the presence or absence of specific branches, the model tends to capture an average or mean representation of the CoW and generates synthetic images with very little variation between them. This characteristic becomes significant due to the limited number of images available per class. 
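Here is the promised sketch of the shape-guidance condition. The paper states only that class-wise Hu and Zernike moments are concatenated and embedded; the use of OpenCV and mahotas, the binary MIP inputs, and the `radius`/`degree` values are our illustrative assumptions.

```python
import cv2
import mahotas
import numpy as np

def class_shape_condition(vessel_mips, radius=128, degree=8):
    """Concatenated Hu + Zernike moments, averaged over one class's vessel images.

    vessel_mips: iterable of 2D binary vessel masks (e.g. maximum intensity projections).
    radius/degree are illustrative choices, not values taken from the paper.
    """
    feats = []
    for img in vessel_mips:
        mask = (img > 0).astype(np.uint8)
        hu = cv2.HuMoments(cv2.moments(mask)).ravel()                    # 7 invariant moments
        zern = mahotas.features.zernike_moments(mask, radius, degree)    # orthogonal moments
        feats.append(np.concatenate([hu, zern]))
    # One fixed-length vector per class, fed to the embedding (dense + SiLU) layer.
    return np.mean(feats, axis=0).astype(np.float32)
```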
To address this tendency towards a mean representation, we use PCA components as conditions to enable the model to discern distinctive features specific to each class. We extract seven principal components along with the mean component for each class, concatenate them, and reshape the data. The resulting features are then passed through a multi-head attention block, followed by a dense layer and another reshape operation for integration into the decoding branches. Fig. 2 shows the effect of incorporating shape moments and PCA as conditions in our diffusion process. By incorporating shape and anatomy guidance conditions during the training of our diffusion model, we leverage specific features and knowledge related to the vessel structures and the general anatomy of the images. This approach promotes the generation of more realistic images, contributing to improved anatomical fidelity.

## 3 Experiments and results Implementation Details. All models were implemented in TensorFlow 2.8 and Python 3. For the forward diffusion process we use a linear noise schedule with 1000 time steps. The model was trained for 2000 epochs with a learning rate of 0.0005 on an NVIDIA Tesla T4 GPU and 38 GB of RAM with the Adam optimiser.

Results and Discussion. To assess the performance of our model, we compared it against two established conditional generative models, a 3D C-VAE [13] and a 3D-$\alpha$-WGAN [14], along with a vanilla LDM and an LDM with shape guidance. We use the FID score to measure the realism of the generated vasculature. To calculate FID we used a pre-trained InceptionV3 as a feature extractor. A lower FID score indicates higher perceptual image quality. In addition, we used MS-SSIM and 4-G-R SSIM to measure the quality of the generated images [15, 21]. MS-SSIM and 4-G-R SSIM are commonly used to assess the quality of synthesised images. Typically, a higher score is indicative of better image quality, implying a closer resemblance between the synthesised CoW and the ground truth reference. MS-SSIM and 4-G-R SSIM were calculated over 60 synthesised CoW cases for each model. Table 1 presents the evaluation scores achieved by our model, the 3D CVAE, and the 3D-$\alpha$-WGAN under the above metrics. As seen in Table 1, our model demonstrates a better FID score, suggesting that the distribution of CoW variants synthesised by our model is closer to that observed in real CoW data, compared to the other models. Additionally, our model achieves higher MS-SSIM and 4-G-R SSIM scores compared to the other methods. These higher scores indicate better image quality, implying that the generated CoW samples resemble the real CoW images more closely. Fig. 3 provides a qualitative comparison among the generated samples from the three models, offering additional context to the quantitative results presented in Table 1. As the output of each model is a 3D vascular structure, maximum intensity projections (MIPs) over the Z-axis, which condense the volumetric representation into a 2D plane, are used to visually compare the synthesised images. 
Table 1: Quantitative evaluation of synthetic CoW vasculature

Model | FID $\downarrow$ | MS-SSIM $\uparrow$ | 4-G-R SSIM $\uparrow$
---|---|---|---
3D CVAE | $52.78$ | $0.411$ | $0.24$
3D-$\alpha$-WGAN | $12.11$ | $0.53$ | $0.41$
LDM | $176.41$ | $0.22$ | $0.13$
LDM + Shape Guidance | $8.86$ | $0.58$ | $0.47$
Ours (LDM + Shape & Anatomy Guidance) | $5.644$ | $0.61$ | $0.51$

Figure 3: Comparison between the maximum intensity projections (MIPs) of a real Circle of Willis (CoW) against those synthesised with 3D CVAE, 3D-$\alpha$-WGAN, and our model.

Fig. 3 reveals that the 3D CVAE model can only generate a limited number of major vessels with limited details. On the other hand, although the 3D-$\alpha$-WGAN model produces the overall structure of the CoW, it exhibits significant anatomical discrepancies with the presence of numerous phantom vessels. In contrast, our model demonstrates a faithful synthesis of the majority of the CoW, with most vessels identifiable. To generate variations of the CoW based on the presence or absence of the posterior communicating artery, our latent diffusion model uses class-conditional inputs where the classes represent different CoW phenotypes. Consequently, to demonstrate the class-conditional fidelity of the proposed approach, we also evaluate the model’s performance in a class-wise manner. The qualitative performance of our model for different classes, compared to real images belonging to those classes, is shown in Fig. 4.

Figure 4: Comparison between the real and synthesised maximum intensity projections (MIPs) for each of the three classes

Table 2: Quantitative class-wise evaluation of generated CoW vasculature

Class | FID Score $\downarrow$ | MS-SSIM $\uparrow$ | 4-G-R SSIM $\uparrow$
---|---|---|---
Class 1 | $4.41$ | $0.65$ | $0.65$
Class 2 | $3.88$ | $0.52$ | $0.52$
Class 3 | $7.63$ | $0.41$ | $0.41$
Overall | $5.64$ | $0.61$ | $0.51$

The results presented in Fig. 4 demonstrate the performance of our model in generating realistic variations of the Circle of Willis. Particularly notable is the model’s proficiency in producing accurate representations for classes 1 and 2, surpassing its performance in class 3 due to the limited sample size of the latter. Our model excels in synthesising the posterior circulation and the middle cerebral arteries, showing remarkable fidelity to anatomical structures. However, it faces challenges in effectively generating continuous representations of the anterior circulation. Further investigation and refinement may be required to enhance the model’s ability in this specific aspect. In addition to the visual assessment, we also compute class-wise FID scores, along with the MS-SSIM and 4-G-R SSIM scores. These quantitative evaluations serve to provide a more comprehensive understanding of the model performance with respect to each class. The class-wise performance scores shown in Table 2 are consistent with our observations from Fig. 4, namely that the model’s performance for class 3 is worse than its performance on classes 1 and 2.

## 4 Conclusion We proposed a latent diffusion model that used shape and anatomy guidance to generate realistic CoW configurations. Quantitative and qualitative results showed that our model outperformed existing generative models based on a conditional 3D GAN and a 3D VAE. Future work will look to enhance the model to capture wider anatomical variability and improve synthetic image quality. 
## Acknowledgement This research was partially supported by the National Institute for Health and Care Research (NIHR) Leeds Biomedical Research Centre (BRC) and the Royal Academy of Engineering Chair in Emerging Technologies (CiET1919/19).

## References

* [1] Brooksby, B., Dehghani, H., Pogue, B., Paulsen, K.: Near-infrared (NIR) tomography breast image reconstruction with a priori structural information from MRI: algorithm development for reconstructing heterogeneities. IEEE J. Sel. Top. Quantum. Electron. 9(2), 199–209 (2003) * [2] Eftekhar, B., Dadmehr, M., Ansari, S.: Are the distributions of variations of circle of Willis different in different populations? BMC Neurol. 6(1), 1–9 (2006) * [3] Elfwing, S., Uchibe, E., Doya, K.: Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Netw. 107, 3–11 (2018) * [4] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Commun. ACM 63(11), 139–144 (2020) * [5] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Adv. Neural Inf. Process. Syst. 33, 6840–6851 (2020) * [6] Hoang, T.M., Huynh, T.V., Ly, A.V.H., Pham, M.V.: The variations in the circle of Willis on 64-multislice spiral computed tomography. Trends Med. Sci. 2(3) (2022) * [7] Hu, M.: Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory 8(2), 179–187 (1962) * [8] Information eXtraction from Images Consortium: IXI dataset – brain development. https://brain-development.org/ixi-dataset/, accessed: 2023-02-14 * [9] Jiang, L., Mao, Y., Chen, X., Wang, X., Li, C.: CoLa-Diff: Conditional latent diffusion model for multi-modal MRI synthesis. arXiv preprint arXiv:2303.14081 (2023) * [10] Kerfoot, E., Clough, J., Oksuz, I., Lee, J., King, A.P., Schnabel, J.A.: Left-ventricle quantification using residual U-Net. In: Statistical Atlases and Computational Models of the Heart. Atrial Segmentation and LV Quantification Challenges: 9th International Workshop, STACOM 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Revised Selected Papers 9. pp. 371–380. Springer (2019) * [11] Khader, F., Mueller-Franzes, G., Arasteh, S., Han, T., Haarburger, C., Schulze-Hagen, M., Schad, P., Engelhardt, S., Baessler, B., Foersch, S., Stegmaier, J.: Medical diffusion–denoising diffusion probabilistic models for 3D medical image generation. arXiv preprint arXiv:2211.03364 (2022) * [12] Khotanzad, A., Hong, Y.: Invariant image recognition by Zernike moments. IEEE Trans. Pattern Anal. Mach. Intell. 12(5), 199–209 (1990) * [13] Kingma, D., Welling, M.: Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 (2013) * [14] Kwon, G., Han, C., Kim, D.: Generation of 3D brain MRI using auto-encoding generative adversarial networks. Medical Image Computing and Computer Assisted Intervention–MICCAI 2019 22(3), 118–126 (2019) * [15] Li, C., Bovik, A.: Content-partitioned structural similarity index for image quality assessment. Signal Processing: Image Communication (2010) * [16] Lin, E., Kamel, H., Gupta, A., RoyChoudhury, A., Girgis, P., Glodzik, L.: Incomplete circle of Willis variants and stroke outcome. Eur. J. Radiol. 153, 110383 (2022) * [17] Lippert, H., Pabst, R.: Arterial Variations in Man: Classification and Frequency. J.F. 
Bergmann Verlag, Munich (1985) * [18] Müller-Franzes, G., Niehues, J., Khader, F., Arasteh, S., Haarburger, C., Kuhl, C., Wang, T., Han, T., Nebelung, S., Kather, J., Truhn, D.: Diffusion probabilistic models beat GANs on medical images. arXiv preprint arXiv:2212.07501 (2022) * [19] Peng, W., Adeli, E., Zhao, Q., Pohl, K.: Generating realistic 3D brain MRIs using a conditional diffusion probabilistic model. arXiv preprint arXiv:2212.08034 (2022) * [20] Pinaya, W., Tudosiu, P., Dafflon, J., Da Costa, P., Fernandez, V., Nachev, P., Ourselin, S., Cardoso, M.: Brain imaging generation with latent diffusion models. Deep Generative Models: Second MICCAI Workshop, DGM4MICCAI 2022, Held in Conjunction with MICCAI 2022, pp. 117–126 (2022) * [21] Rouse, D.M., Hemami, S.S.: Analyzing the role of visual structure in the recognition of natural image content with multi-scale SSIM. In: Human Vision and Electronic Imaging XIII, vol. 6806, pp. 410–423. SPIE (2008) * [22] Yu, B., Zhou, L., Wang, L., Shi, Y., Fripp, J., Bourgeat, P.: Ea-GANs: edge-aware generative adversarial networks for cross-modality MR image synthesis. IEEE Trans. Med. Imaging 38(7), 1750–1762 (2019)
# Arbitrary rotation invariant random matrix ensembles and supersymmetry: orthogonal and unitary–symplectic case

Mario Kieburg^{a)†}, Johan Grönqvist^{b)} and Thomas Guhr^{a)}

^{a)} Universität Duisburg-Essen, Lotharstraße 1, 47048 Duisburg, Germany
^{b)} Matematisk Fysik, LTH, Lunds Universitet, Box 118, 22100 Lund, Sweden
<EMAIL_ADDRESS>

###### Abstract Recently, the supersymmetry method was extended from Gaussian ensembles to arbitrary unitarily invariant matrix ensembles by generalizing the Hubbard–Stratonovich transformation. Here, we complete this extension by including arbitrary orthogonally and unitary–symplectically invariant matrix ensembles. The results are equivalent to, but the approach is different from, the superbosonization formula. We express our results in a unifying way. We also give explicit expressions for all one–point functions and discuss features of the higher order correlations. ###### pacs: 02.30.Px, 05.30.Ch, 05.30.-d, 05.45.Mt ††: J. Phys. A: Math. Gen.

## 1 Introduction In random matrix theory, supersymmetry is an indispensable tool [1, 2, 3, 4]. Recently, this method was extended from Gaussian probability densities to arbitrary rotation invariant ones. Presently, there are two approaches referred to as superbosonization. The first approach is a generalization of the Hubbard–Stratonovich transformation for rotation invariant random matrix ensembles [5]. The basic idea is the introduction of a proper Dirac–distribution in superspace, extending earlier work in the context of scattering theory [6], universality considerations [7], field theory [8, 9] and quantum chromodynamics [10]. The second approach is the superbosonization formula developed in Refs. [11, 12]. It is an identity for integrals over superfunctions on rectangular supermatrices which are rotation invariant under an ordinary group. Here, we further extend the generalized Hubbard–Stratonovich transformation to the orthogonal and the unitary–symplectic symmetry class in a unifying way. To this end, we use an analog of the Sekiguchi differential operator for ordinary matrix Bessel–functions. We also aim at a presentation which is mathematically more sound than the one in Ref. [5]. The article is organized as follows. The problem is posed in Sec. 2. We give an outline of the calculation in Sec. 3. In Sec. 4, we present the generalized Hubbard–Stratonovich transformation. In Sec. 5, we carry out the calculation for arbitrary ensembles as far as possible. Then, we restrict the computation to the three classical symmetry classes. We, thereby, extend the supersymmetric Ingham–Siegel integral [5]. In Sec. 6, we give a more compact expression of the generating function in terms of supermatrix Bessel–functions. We show that the generating function is independent of the chosen representation for the characteristic function. The one–point and higher correlation functions are expressed as eigenvalue integrals in Sec. 7. In the appendices, we present details of the calculations.

## 2 Posing the problem We consider a sub-vector space $\mathfrak{M}_{N}$ of the hermitian $N\times N$–matrices ${\rm Herm\,}(2,N)$. ${\rm Herm\,}(\beta,N)$ is the set of real symmetric ($\beta=1$), hermitian ($\beta=2$) and quaternionic self-adjoint ($\beta=4$) matrices, and $\beta$ is the Dyson index. We use the complex $2\times 2$ dimensional matrix representation for quaternionic numbers $\mathbb{H}$; a small numerical sketch of this representation is given below. The results can easily be extended to other representations of the quaternionic field. 
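The following sketch shows one common sign convention for the $2\times 2$ complex representation of $\mathbb{H}$ and verifies its multiplicativity numerically; the convention itself is our assumption, since the paper does not spell one out. Note that the image of $j$ under this map has the form of the symplectic unit $Y_{\rm s}$ appearing later in Eq. (4.9).

```python
import numpy as np

def quat_to_c2(a, b, c, d):
    """2x2 complex representation of q = a + b*i + c*j + d*k (one common convention)."""
    return np.array([[ a + 1j * b,  c + 1j * d],
                     [-c + 1j * d,  a - 1j * b]])

# Multiplicativity check: the map turns Hamilton products into matrix products.
q1, q2 = (1.0, 2.0, -0.5, 0.3), (0.2, -1.0, 0.7, 1.5)
a1, b1, c1, d1 = q1
a2, b2, c2, d2 = q2
hamilton = (a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real part
            a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i component
            a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j component
            a1*d2 + b1*c2 - c1*b2 + d1*a2)   # k component
assert np.allclose(quat_to_c2(*q1) @ quat_to_c2(*q2), quat_to_c2(*hamilton))
```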
For the relation between the single representations, we refer to a work by Jiang [13]. The object of interest is an arbitrary sufficiently integrable probability density $P$ on $\mathfrak{M}_{N}$. Later, we assume that $P$ is an invariant function under the action of the group ${\rm U\,}^{(\beta)}(N)=\left\\{\begin{array}[]{ll}{\rm O}(N)&,\ \beta=1\\\ {\rm U\,}(N)&,\ \beta=2\\\ {\rm USp}(2N)&,\ \beta=4\end{array}\right.$ (2.1) and $\mathfrak{M}_{\gamma_{2}N}={\rm Herm\,}(\beta,N)$. Here, we introduce $\gamma_{2}=1$ for $\beta\in\\{1,2\\}$ and $\gamma_{2}=2$ for $\beta=4$ and, furthermore, $\gamma_{1}=2\gamma_{2}/\beta$ and $\tilde{\gamma}=\gamma_{1}\gamma_{2}$. These constants will play an important role in the sequel. We are interested in the $k$–point correlation functions $R_{k}(x)=\mathbf{d}^{k}\int\limits_{\mathfrak{M}_{N}}P(H)\prod\limits_{p=1}^{k}\tr\delta(x_{p}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}-H)d[H]$ (2.2) with the $k$ energies $x={\rm diag\,}(x_{1},\ldots,x_{k})$. Here, $\mathbf{d}$ is the inverse averaged eigenvalue degeneracy of an arbitrary matrix $H\in\mathfrak{M}_{N}$. The measure $d[H]$ is defined as in Ref. [14], it is the product of all real and imaginary parts of the matrix entries. For example, we have $\mathbf{d}=1/2$ for $\mathfrak{M}_{2N}={\rm Herm\,}(4,N)$ and $\mathbf{d}=1$ for no eigenvalue degeneracy as for $\mathfrak{M}_{N}={\rm Herm\,}(\beta,N)$ with $\beta\in\\{1,2\\}$. We use in Eq. (2.2) the $\delta$–distribution which is defined by the matrix Green’s function. The definition of the $k$–point correlation function (2.2) differs from Mehta’s [15]. The two definitions can always be mapped onto each other as explained for example in Ref. [4]. We recall that it is convenient to consider the more general function $\widehat{R}_{k}\left(x^{(L)}\right)=\mathbf{d}^{k}\int\limits_{\mathfrak{M}_{N}}P(H)\prod\limits_{p=1}^{k}\tr[(x_{p}+L_{p}\imath\varepsilon)\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}-H]^{-1}d[H]$ (2.3) where we have suppressed the normalization constant. The quantities $L_{j}$ in $x^{(L)}={\rm diag\,}(x_{1}+L_{1}\imath\varepsilon,\ldots,x_{k}+L_{k}\imath\varepsilon)$ are elements in $\\{\pm 1\\}$. We define $x^{\pm}={\rm diag\,}(x_{1}\pm\imath\varepsilon,\ldots,x_{k}\pm\imath\varepsilon)$. Considering the Fourier transformation of (2.2) we have $\displaystyle r_{k}(t)$ $\displaystyle=$ $\displaystyle(2\pi)^{-k/2}\int\limits_{\mathbb{R}^{k}}R_{k}(x)\prod\limits_{p=1}^{k}\exp\left(\imath x_{p}t_{p}\right)d[x]=$ (2.4) $\displaystyle=$ $\displaystyle\left(\frac{\mathbf{d}}{\sqrt{2\pi}}\right)^{k}\int\limits_{\mathfrak{M}_{N}}P(H)\prod\limits_{p=1}^{k}\tr\exp\left(\imath Ht_{p}\right)d[H]\ .$ The Fourier transformation of (2.3) yields $\displaystyle\widehat{r}_{k}(t)$ $\displaystyle=$ $\displaystyle(2\pi)^{-k/2}\int\limits_{\mathbb{R}^{k}}\widehat{R}_{k}\left(x^{(L)}\right)\prod\limits_{p=1}^{k}\exp\left(\imath x_{p}t_{p}\right)d[x]=$ (2.5) $\displaystyle=$ $\displaystyle\prod\limits_{p=1}^{k}\left[-L_{p}\ 2\pi\imath\Theta(-L_{p}t_{p})\exp\left(\varepsilon L_{p}t_{p}\right)\right]r_{k}(t)$ where $\Theta$ is the Heavyside–distribution. As in Ref. [5], the $k$–point correlation function is completely determined by Eq. (2.3) with $L_{p}=-1$ for all $p$ if the Fourier transform (2.4) is entire in all entries, i.e. analytic in all entries with infinite radius of convergence. 
We obtain such a Fourier transform if the $k$–point correlation function $R_{k}$ is a Schwartz–function on $\mathbb{R}^{k}$ with the property $\int\limits_{\mathbb{R}^{k}}|R_{k}(x)|\prod\limits_{p=1}^{k}\exp\left(\tilde{\delta}x_{p}\right)d[x]<\infty\quad,\quad\forall\tilde{\delta}\in\mathbb{R}\ .$ (2.6) This set of functions is dense in the set of Schwartz–functions on $\mathbb{R}^{k}$ without this property. The notion dense refers to uniform convergence. This is true since every Schwartz–function times a Gaussian distribution $\exp\left(-\epsilon\sum\limits_{p=1}^{k}x_{p}^{2}\right)$, $\epsilon>0$, is a Schwartz–function and fulfils Eq. (2.6). We prove that $r_{k}$, see Eq. (2.4), is indeed entire in all entries for such $k$–point correlation functions. To this end, we consider the function $r_{k\delta}(t)=\int\limits_{\mathfrak{B}_{\delta}}R_{k}(x)\prod\limits_{p=1}^{k}\exp\left(\imath x_{p}t_{p}\right)d[x],$ (2.7) where $\mathfrak{B}_{\delta}$ is the closed $k$-dimensional real ball with radius $\delta\in\mathbb{R}^{+}$. Due to the Paley–Wiener theorem [16], $r_{k\delta}$ is entire analytic for all $\delta\in\mathbb{R}^{+}$. Let $\mathfrak{B}_{\tilde{\delta}}^{\mathbb{C}}$ be another $k$-dimensional complex ball with radius $\tilde{\delta}\in\mathbb{R}^{+}$. Then, we have $\underset{\delta\to\infty}{\lim}\underset{t\in\mathfrak{B}_{\tilde{\delta}}^{\mathbb{C}}}{\sup}|r_{k\delta}(t)-r_{k}(t)|\leq\underset{\delta\to\infty}{\lim}\int\limits_{\mathbb{R}^{k}\setminus\mathfrak{B}_{\delta}}|R_{k}(x)|\prod\limits_{p=1}^{k}\exp\left(\tilde{\delta}x_{p}\right)d[x]=0\ .$ (2.8) The limit of $r_{k\delta}$ to $r_{k}$ is uniform on every compact subset of $\mathbb{C}^{k}$. Thus, $r_{k}$ is entire analytic. The modified correlation function $\widehat{R}_{k}$ for all choices of the $L_{p}$ can be reconstructed by Eq. (2.5). In Sec. 7, we extend the results by a limit–value–process in a locally convex way to non-analytic functions.

We derive $\widehat{R}_{k}\left(x^{-}\right)$ from the generating function $Z_{k}\left(x^{-}+J\right)=\int\limits_{\mathfrak{M}_{N}}P(H)\prod\limits_{p=1}^{k}\frac{\det[H-(x_{p}^{-}+J_{p})\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}]}{\det[H-(x_{p}^{-}-J_{p})\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}]}d[H]$ (2.9) by differentiation with respect to the source variables [17] $\widehat{R}_{k}\left(x^{-}\right)=\left(\frac{\mathbf{d}}{2}\right)^{k}\left.\frac{\partial^{k}}{\prod_{p=1}^{k}\partial J_{p}}Z_{k}\left(x^{-}+J\right)\right|_{J=0}$ (2.10) where $x^{-}+J=x^{-}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{4}+{\rm diag\,}(J_{1},\ldots,J_{k})\otimes{\rm diag\,}(-\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2},\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2})$. By definition, $Z_{k}$ is normalized to unity at $J=0$.

## 3 Sketch of our approach To provide a guideline through the detailed presentation to follow in the ensuing sections, we briefly sketch the main ideas as in Ref. [5] and as further extended in the present contribution. To express the generating function (2.9) as an integral in superspace, we write the determinants as Gaussian integrals over vectors of ordinary and Grassmann variables. We then perform the ensemble average, which is equivalent to calculating the characteristic function $\Phi(K)=\int P(H)\exp(\imath\tr HK)d[H]$ (3.1) of the probability density. The rotation invariance of $P(H)$ carries over to $\Phi(K)$. 
The ordinary matrix $K$ contains the abovementioned vectors of ordinary and Grassmann variables as dyadic matrices. It has a dual matrix $B$ in superspace whose entries are all scalarproducts of these vectors. The reduction in the degrees of freedom is fully encoded in this duality, as the dimensions of $K$ and $B$ scale with $N$ and $k$, respectively. The crucial identity $\tr K^{m}={\rm Str\,}B^{m},\quad\forall m\in\mathbb{N},$ (3.2) yields the supersymmetric extension of the rotation invariant characteristic function, $\Phi(K)=\Phi(\tr K,\tr K^{2},...)=\Phi({\rm Str\,}B,{\rm Str\,}B^{2},...)=\Phi(B)\ ,$ (3.3) which is now viewed as a function in ordinary and superspace. We rewrite it by inserting a proper Dirac–distribution in superspace, $\displaystyle\Phi(B)$ $\displaystyle=$ $\displaystyle\int\Phi(\rho)\delta(\rho-B)d[\rho]$ (3.4) $\displaystyle\sim$ $\displaystyle\int\int\Phi(\rho)\exp[\imath{\rm Str\,}(\rho-B)\sigma]d[\rho]d[\sigma]\ ,$ (3.5) where the supermatrix $\rho$ and $\sigma$ are introduced as integration variables. The vectors of ordinary and Grassmann variables now appear as in the conventional Hubbard–Stratonovich transformation and can hence be integrated out in the same way. We are left with the integrals over $\rho$ and $\sigma$. If we do the integral over $\rho$ we arrive at the result $Z_{k}\left(x^{-}+J\right)\sim\int Q(\sigma){\rm Sdet\,}^{-N/\gamma_{1}}(\sigma-x^{-}-J)d[\sigma].$ (3.6) for the generating function. The superfunction $Q$ is the superspace Fourier transform of $\Phi$ and plays the role of a probability density in superspace, $Q(\sigma)=\int\Phi(\rho)\exp(\imath{\rm Str\,}\rho\sigma)d[\rho]\ .$ (3.7) If we choose to integrate over $\sigma$ instead, we obtain another representation of the generating function $Z_{k}\left(x^{-}+J\right)\sim\int\Phi(\rho)I(\rho)\exp[-\imath{\rm Str\,}\rho(x^{-}+J)]d[\rho]\ ,$ (3.8) which still contains the characteristic function. The distribution $I(\rho)$ appears. It is the supersymmetric version of the Ingham–Siegel integral. It is a rotation invariant function resulting from the Fourier transformation of the superdeterminant in Eq. (3.6). One way to proceed further is to diagonalize the supermatrix $\rho$ and to integrate over the angles. We may omit Efetov–Wegner terms and have $Z_{k}\left(x^{-}+J\right)\sim\int\Phi(r)I(r)\varphi(-\imath r,x^{-}+J)d[r],$ (3.9) where $\varphi$ is a supermatrix Bessel–function. The differentiation with respect to $J$ gives $\widehat{R}_{k}$. We can introduce other signatures of $L$ by Fourier transformation of Eq. (3.8) and identification with Eq. (2.5). Eventually, we find the correlation functions $R_{k}$. ## 4 Generalized Hubbard–Stratonovich transformation In Sec. 4.1, we express the determinants in Eq. (2.9) as Gaussian integrals and introduce the characteristic function of the matrix ensemble. In Sec. 4.2, we qualitatively present the duality between ordinary and superspace which is quantitatively discussed in Sec. 4.3. Then, we restrict the matrix ensembles to the classical symmetry classes. In Sec. 4.4, we investigate the diagonalization of the dyadic matrix $K$ appearing from the Gaussian integrals. The ambiguity of the supersymmetric extension of the characteristic function is discussed in Sec. 4.5. In Sec. 4.6, we present the symmetries of the appearing supermatrices. In Sec. 4.7, we replace the dyadic supermatrix in the supersymmetric extended characteristic function with a symmetric supermatrix discussed in the section before. 
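The commuting sector of the key identity (3.2) is elementary linear algebra: for an ordinary rectangular matrix $V$ one has $\tr(V^{\dagger}V)^{m}=\tr(VV^{\dagger})^{m}$, since the two dyadic products share their nonzero spectrum. A quick numerical check of this bosonic part might look as follows; the Grassmann sector has no faithful numerical model and is omitted, and the dimensions are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 8, 3  # arbitrary illustrative dimensions
V = rng.standard_normal((N, k)) + 1j * rng.standard_normal((N, k))

for m in range(1, 6):
    # small side: plays the role of the dual (super)matrix, cf. B ~ V V^dagger
    lhs = np.trace(np.linalg.matrix_power(V.conj().T @ V, m))
    # large side: plays the role of the ordinary dyadic matrix, cf. K ~ V^dagger V
    rhs = np.trace(np.linalg.matrix_power(V @ V.conj().T, m))
    assert np.isclose(lhs, rhs)  # tr(V^dag V)^m == tr(V V^dag)^m for all m
```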
### 4.1 Average over the ensemble and the characteristic function To formulate the generating function as a supersymmetric integral, we consider a complex Grassmann algebra $\Lambda=\bigoplus\limits_{j=0}^{2Nk}\Lambda_{j}$ with $Nk$-pairs $\\{\zeta_{jp},\zeta_{jp}^{*}\\}_{j,p}$ of Grassmann variables [18]. We define the $k$ anticommuting vectors and their adjoint $\zeta_{p}=(\zeta_{1p},\ldots,\zeta_{Np})^{T}\ \ \ {\rm and}\ \ \ \zeta_{p}^{\dagger}=(\zeta_{1p}^{*},\ldots,\zeta_{Np}^{*})\ ,$ (4.1) respectively. For integrations over Grassmann variables, we use the conventions of Ref. [14]. We also consider $k$ $N$–dimensional complex vectors $\\{z_{p},z_{p}^{\dagger}\\}_{1\leq p\leq k}$. In the usual way, we write the determinants as Gaussian integrals and find for Eq. (2.9) $\displaystyle Z_{k}(x^{-}+J)$ $\displaystyle=$ $\displaystyle(-\imath)^{Nk}\int\limits_{\mathfrak{M}_{N}}\int\limits_{\mathfrak{C}_{kN}}d[\zeta]d[z]d[H]P(H)\times$ (4.2) $\displaystyle\times$ $\displaystyle{\rm exp}\left(\imath\sum\limits_{p=1}^{k}\left\\{\zeta_{p}^{\dagger}[H-(x_{p}^{-}+J_{p})\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}]\zeta_{p}+z_{p}^{\dagger}[H-(x_{p}^{-}-J_{p})\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}]z_{p}\right\\}\right)$ where $d[\zeta]=\prod\limits_{p=1}^{k}\prod\limits_{j=1}^{N}d\zeta_{jp}d\zeta_{jp}^{*}$, $d[z]=\prod\limits_{p=1}^{k}\prod\limits_{j=1}^{N}dz_{jp}dz_{jp}^{*}$ and $\mathfrak{C}_{kN}=\mathbb{C}^{kN}\times\Lambda_{2Nk}$. Using $\sum\limits_{p=1}^{k}\left(\zeta_{p}^{\dagger}H\zeta_{p}+z_{p}^{\dagger}Hz_{p}\right)=\tr H\widetilde{K}$ (4.3) with $\widetilde{K}=\sum\limits_{p=1}^{k}\left(z_{p}z_{p}^{\dagger}-\zeta_{p}\zeta_{p}^{\dagger}\right)$ (4.4) leads to $\displaystyle Z_{k}(x^{-}+J)$ $\displaystyle=$ $\displaystyle(-\imath)^{Nk}\int\limits_{\mathfrak{C}_{kN}}\mathcal{F}P\left(\hat{\pi}(\mathfrak{M}_{N};\widetilde{K})\right)\times$ (4.5) $\displaystyle\times$ $\displaystyle{\rm exp}\left(-\imath\sum\limits_{p=1}^{k}\left[(x_{p}^{-}+J_{p})\zeta_{p}^{\dagger}\zeta_{p}+(x_{p}^{-}-J_{p})z_{p}^{\dagger}z_{p}\right]\right)d[\zeta]d[z]\ .$ where the integration over $H$ is the Fourier transformation of the probability density $P$, $\mathcal{F}P\left(\hat{\pi}(\mathfrak{M}_{N};\widetilde{K})\right)=\int\limits_{\mathfrak{M}_{N}}P(H)\exp\left(\imath\tr H\widetilde{K}\right)d[H]\ .$ (4.6) This Fourier transform is called characteristic function and is denoted by $\Phi$ in Ref. [5] and in Eq. (3.1). The projection operator $\hat{\pi}(\mathfrak{M}_{N})$ onto the space $\mathfrak{M}_{N}$ is crucial. For $\mathfrak{M}_{\gamma_{2}N}={\rm Herm\,}(\beta,N)$ the projection operator is $\hat{\pi}\left({\rm Herm\,}(\beta,N);\widetilde{K}\right)=\frac{1}{2}\left[\widetilde{K}+\widehat{Y}(\widetilde{K})\right]$ (4.7) with $\widehat{Y}(\widetilde{K})=\left\\{\begin{array}[]{ll}\widetilde{K}^{T}&,\ \beta=1\\\ \widetilde{K}&,\ \beta=2\\\ \left(Y_{{\rm s}}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}\right)\widetilde{K}^{T}\left(Y_{{\rm s}}^{T}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}\right)&,\ \beta=4\end{array}\right.$ (4.8) and the symplectic unit $Y_{{\rm s}}=\left[\begin{array}[]{cc}0&1\\\ -1&0\end{array}\right]\ ,$ (4.9) where $\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}$ is the $N\times N$–unit matrix. The transposition in Eq. (4.8) can also be replaced by the complex conjugation due to $\widetilde{K}^{\dagger}=\widetilde{K}$. 
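To make the projections of Eqs. (4.7)–(4.9) concrete, here is a small numerical sketch for the three symmetry classes. Treating the quaternionic case in its complex $2N\times 2N$ representation with the Kronecker ordering $Y_{\rm s}\otimes 1_N$ is our (standard) reading of the formulas; since $\widehat{Y}$ is an involution, $\hat{\pi}$ is idempotent, which the sketch checks.

```python
import numpy as np

def Y_hat(K, beta, N):
    """The map in Eq. (4.8); for beta = 4, K is a 2N x 2N complex matrix."""
    if beta == 1:
        return K.T
    if beta == 2:
        return K
    Ys = np.kron(np.array([[0.0, 1.0], [-1.0, 0.0]]), np.eye(N))  # Y_s (x) 1_N
    return Ys @ K.T @ Ys.T

def project(K, beta, N):
    """pi_hat(Herm(beta, N); K) = (K + Y_hat(K)) / 2, cf. Eq. (4.7)."""
    return 0.5 * (K + Y_hat(K, beta, N))

rng = np.random.default_rng(1)
# beta = 1: a Hermitian K projects onto its real symmetric part.
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
K = 0.5 * (A + A.conj().T)
P = project(K, 1, 5)
assert np.allclose(P, project(P, 1, 5))                 # idempotent
assert np.allclose(P, P.T) and np.allclose(P.imag, 0.0)  # real symmetric image

# beta = 4: idempotency follows from Y_hat being an involution (Y^2 = -1).
B4 = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
K4 = 0.5 * (B4 + B4.conj().T)
P4 = project(K4, 4, 3)
assert np.allclose(P4, project(P4, 4, 3))
```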
The projection onto the set of diagonal matrices $\bigoplus\limits_{j=1}^{N}\mathbb{R}$ is $\hat{\pi}\left(\bigoplus_{j=1}^{N}\mathbb{R};\widetilde{K}\right)={\rm diag\,}\left(\widetilde{K}_{11},\widetilde{K}_{22},\ldots,\widetilde{K}_{NN}\right)\ .$ (4.10) ### 4.2 Duality between ordinary and superspace Is it always possible to find a supermatrix representation for the characteristic function $\mathcal{F}P$ such that Eq. (4.5) has an integral representation over supermatrices as it is known [5, 12] for rotation invariant $P$ on $\mathfrak{M}_{\gamma_{2}N}={\rm Herm\,}(\beta,N)$? The integral (4.5) is an integral over the supervectors $v_{j}=(z_{j1}^{*},\ldots,z_{jk}^{*},-\zeta_{j1}^{*},\ldots,-\zeta_{jk}^{*})^{T}$ and their adjoint $v_{j}^{\dagger}=(z_{j1},\ldots,z_{jk},\zeta_{j1},\ldots,\zeta_{jk})$. The adjoint “$\dagger$” is the complex conjugation with the supersymmetric transposition and “$T$” is the ordinary transposition. The entries of the matrix $\widetilde{K}$ are $v_{n}^{\dagger}v_{m}$. If we do not use any symmetry of the matrix ensemble, we can write these scalar products of supervectors as supertraces $v_{n}^{\dagger}v_{m}={\rm Str\,}v_{m}v_{n}^{\dagger}\ .$ (4.11) Then, we can transform each of these supertraces with a Dirac–distribution to an integral over a $(k+k)\times(k+k)$–supermatrix. We defined the Dirac–distribution in superspace as in Refs. [19, 10]. The ambiguity discussed in Ref. [20] occurring by such a transformation is discussed in the subsections 4.5 and 6.3. The procedure above is tedious. Using the symmetries of the ensemble ($\mathcal{F}P,\mathfrak{M}_{N}$), we can reduce the number of integrals in superspace. We will see that the number of commuting real integrals and of Grassmannian integrals is $2k^{2}+2k^{2}$ ($\beta=2$) or $4k^{2}+4k^{2}$ ($\beta\in\\{1,4\\}$) for a rotation invariant matrix ensembles on ${\rm Herm\,}(\beta,N)$. If there is not a symmetry the number of integrals has not been reduced. One has to integrate over $N(N+1)$ ordinary hermitian $k\times k$–matrices and their corresponding anticommuting parameters if the transformation above is used. ### 4.3 Analysis of the duality between ordinary and superspace We consider an orthonormal basis $\\{A_{n}\\}_{1\leq n\leq d}$ of $\mathfrak{M}_{N}$ where $d$ is the dimension of $\mathfrak{M}_{N}$. We use the trace $\tr{A_{n}A_{m}}=\delta_{nm}$ as the scalar product and recall that $\mathfrak{M}_{N}$ is a real vector space. Every element of this basis is represented as $A_{n}=\sum\limits_{j=1}^{N}\lambda_{jn}e_{jn}e_{jn}^{\dagger}\ \ \ {\rm with}\ \ \ \sum\limits_{j=1}^{N}\lambda_{jn}^{2}=1\ .$ (4.12) Here, $e_{jn}$ are the normalized eigenvectors of $A_{n}$ to the eigenvalues $\lambda_{jn}$. Then, we construct every matrix $H\in\mathfrak{M}_{N}$ in this basis $H=\sum\limits_{n=1}^{d}h_{n}A_{n}\ .$ (4.13) We find for the characteristic function $\displaystyle\mathcal{F}P\left(\hat{\pi}(\mathfrak{M}_{N};\widetilde{K})\right)$ $\displaystyle=$ $\displaystyle\int\limits_{\mathfrak{M}_{N}}P\left(\sum\limits_{n=1}^{d}h_{n}A_{n}\right){\rm exp}\left(\imath\sum\limits_{n=1}^{d}h_{n}\tr A_{n}\widetilde{K}\right)d[H]=$ (4.14) $\displaystyle=$ $\displaystyle\mathcal{F}P\left(\sum\limits_{n=1}^{d}\tr\left(\widetilde{K}A_{n}\right)A_{n}\right)\ .$ With help of Eq. 
(4.12) and an equation analogous to (4.11), the characteristic function is $\mathcal{F}P\left(\hat{\pi}(\mathfrak{M}_{N};\widetilde{K})\right)=\mathcal{F}P\left(\sum\limits_{n=1}^{d}{\rm Str\,}\left(\sum\limits_{j=1}^{N}\lambda_{jn}Ve_{jn}e_{jn}^{\dagger}V^{\dagger}\right)A_{n}\right)$ (4.15) with $V=(v_{1},\ldots,v_{N})$. We see that the matrix $\widetilde{K}$ is projected onto $K=\hat{\pi}(\mathfrak{M}_{N};\widetilde{K})$ (4.16) where the projection is the argument of the characteristic function in Eq. (4.14). The matrices in the supertraces of (4.15) can be replaced by $(k+k)\times(k+k)$–supermatrices with the Delta–distributions described above. If the ensemble has no symmetry then we have reduced the number of supermatrices to the dimension of $\mathfrak{M}_{N}$. Nevertheless, we can find a more compact supersymmetric expression of the matrix $K$ such that the number of the resulting integrals only depends on $k$ but not on $N$. This is possible if $K$ is a dyadic matrix of vectors where the number of vectors is independent of $N$ and the probability distribution only depends on invariants of $H$. The ensembles with $\mathfrak{M}_{\gamma_{2}N}={\rm Herm\,}(\beta,N)$ and a probability density $P$ invariant under the action of ${\rm U\,}^{(\beta)}(N)$ fulfil these properties. It is known [5, 12] that these cases have a very compact supersymmetric expression. Furthermore, these ensembles are well analyzed for Gaussian–distributions with help of the Hubbard–Stratonovich transformation [1, 3, 2]. In the present context, the cases of interest are $\mathfrak{M}_{\gamma_{2}N}={\rm Herm\,}(\beta,N)$ with a probability density $P$ invariant under the action ${\rm U\,}^{(\beta)}(N)$. We need this symmetry to simplify Eq. (4.15). Let $N\geq\gamma_{1}k$. This restriction also appears in the superbosonization formula [12]. If $N<\gamma_{1}k$, one has to modify the calculations below. For the superbosonization formula, Bunder, Efetov, Kravtsov, Yevtushenko, and Zirnbauer [20] presented such a modification. The symmetries of a function $f$ carry over to its Fourier transform $\mathcal{F}f$. Thus, the characteristic function $\mathcal{F}P$ is invariant under the action of ${\rm U\,}^{(\beta)}(N)$. Let $\widetilde{K}_{0}$ be an arbitrary ordinary hermitian matrix in the Fourier transformation (4.6) of the probability density. We assume that the characteristic function is analytic in the eigenvalues of $\widetilde{K}_{0}$. Then, we expand $\mathcal{F}P$ as a power series in these eigenvalues. Since the characteristic function is rotation invariant, every polynomial of homogeneous degree in this power series is permutation invariant. With help of the fundamental theorem of symmetric functions [21] we rewrite these polynomials in the basis of elementary symmetric polynomials. This is equivalent to writing these polynomials in the basis of the traces $\tr\left[\hat{\pi}\left({\rm Herm\,}(\beta,N),\widetilde{K}_{0}\right)\right]^{m}$, $m\in\mathbb{N}$. The analytic continuation of $\mathcal{F}P$ from $\widetilde{K}_{0}$ to $\widetilde{K}$ yields that the characteristic function in (4.6) only depends on $\tr\left[\hat{\pi}\left({\rm Herm\,}(\beta,N),\widetilde{K}\right)\right]^{m}$, $m\in\mathbb{N}$. 
Defining the matrix $V^{\dagger}=(z_{1},\ldots,z_{k},Yz_{1}^{*},\ldots,Yz_{k}^{*},\zeta_{1},\ldots,\zeta_{k},Y\zeta_{1}^{*},\ldots,Y\zeta_{k}^{*})$ (4.17) and its adjoint $V=(z_{1}^{*},\ldots,z_{k}^{*},Yz_{1},\ldots,Yz_{k},-\zeta_{1}^{*},\ldots,-\zeta_{k}^{*},Y\zeta_{1},\ldots,Y\zeta_{k})^{T}$ (4.18) with $Y=\left\\{\begin{array}[]{ll}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}&,\ \beta=1\\\ 0&,\ \beta=2\\\ Y_{{\rm s}}^{T}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}&,\ \beta=4\end{array}\right.,$ (4.19) we find $K=\hat{\pi}\left({\rm Herm\,}(\beta,N);\widetilde{K}\right)=\frac{1}{\tilde{\gamma}}V^{\dagger}V\ .$ (4.20) The crucial identity $\tr(V^{\dagger}V)^{m}={\rm Str\,}(VV^{\dagger})^{m}$ (4.21) holds for all $\beta$. It connects ordinary and superspace. For $\beta=2$, a proof can be found in Ref. [5]. In A, we show that the equation ${\rm Str\,}V_{1}V_{2}={\rm Str\,}V_{2}V_{1}$ (4.22) holds for all rectangular matrices of the form $V_{1}=\left[\begin{array}[]{cc}\overbrace{A_{1}}^{a}&\overbrace{B_{1}}^{b}\hskip 0.85358pt\\}c\\\ C_{1}&D_{1}\hskip 5.97508pt\\}d\end{array}\right]\ \ \ {\rm and}\ \ \ V_{2}=\left[\begin{array}[]{cc}\overbrace{A_{2}}^{c}&\overbrace{B_{2}}^{d}\hskip 0.85358pt\\}a\\\ C_{2}&D_{2}\hskip 5.69054pt\\}b\end{array}\right]$ (4.23) where $A_{j}$ and $D_{j}$ have commuting entries and $B_{j}$ and $C_{j}$ anticommuting ones. This implies in particular that Eq. (4.21) holds for all $\beta$. Hence, we have reduced the number of supermatrices corresponding to $\widetilde{K}$ in Eq. (4.15) to one $(2k+2k)\times(2k+2k)$–supermatrix. In Ref. [5], the characteristic function $\Phi$ was, with help of Eq. (4.21), extended to superspace. We follow this idea and then proceed with the Dirac–distribution mentioned above. ### 4.4 Problems when diagonalizing $K$ In Ref. [5], two approaches to the duality relation between ordinary and superspace were presented. The first approach is the duality equation (4.21) for $\beta=2$. In our article, we follow this idea. In the second approach, the matrix $K$ was diagonalized. With the eigenvalues of $K$, a projection operator was constructed to define a reduced probability density from the probability density $P$. The latter approach fails because $K$ is only diagonalizable if it has no degeneracy larger than $\gamma_{2}$. Moreover, for diagonalizable $K$, one can not find an eigenvalue $\lambda=0$. This is included in the following statement which we derive in E. ###### Statement 4.1 Let $N,\widetilde{N}\in\mathbb{N}$, $H^{(0)}\in{\rm Herm\,}(\beta,N)$, $l\in\mathbb{R}^{\widetilde{N}}$ and let $\\{\tau_{q}\\}_{1\leq q\leq\widetilde{N}}$ be $\gamma_{2}N$–dimensional vectors consisting of Grassmann variables $\tau_{q}=(\tau_{q}^{(1)},\ldots,\tau_{q}^{(\gamma_{2}N)})^{T}$. Then, the matrix $H=H^{(0)}+\sum\limits_{q=1}^{\widetilde{N}}l_{q}\left[\tau_{q}\tau_{q}^{\dagger}+\widehat{Y}\left(\tau_{q}^{*}\tau_{q}^{T}\right)\right]$ (4.24) can not be diagonalized, $H=U{\rm diag\,}(\lambda_{1},\ldots,\lambda_{N})U^{\dagger}$, by a matrix $U$ with the properties $U^{\dagger}U=UU^{\dagger}=\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N}\ ,\ \ U^{*}=\widehat{Y}(U)$ (4.25) whose body lies in ${\rm U\,}^{(\beta)}(N)$ iff $H^{(0)}$ has a degeneracy larger than $\gamma_{2}$. Moreover, $H$ has no eigenvalue $\lambda\in\mathbb{R}$. In our particular case, $K$ can not be diagonalized for $k<N-1$. Hence, we do not follow the second approach of Ref. [5]. We emphasize that none of the other results in Ref.
[5] is affected, as they are proven with the correct first approach, which we pursue here. ### 4.5 Ambiguity of the characteristic function in the supersymmetric extension In this section, we discuss the problem that the extension of the characteristic function $\mathcal{F}P$ from ordinary matrices to supermatrices is not unique. This results from the fact that symmetric supermatrices comprise two kinds of eigenvalues, i.e. bosonic and fermionic eigenvalues, whereas ordinary symmetric matrices have only one kind. In the supertraces, these two kinds are weighted with a relative minus sign. To illustrate this problem, we also give a simple example. The rotation invariance of $\mathcal{F}P$ enables us to choose a representation $\mathcal{F}P_{0}$ of $\mathcal{F}P$ acting on an arbitrary number of matrix invariants $\mathcal{F}P_{0}\left(\tr K^{m}|m\in\mathbb{N}\right)=\mathcal{F}P(K)\ .$ (4.26) For this representation, a unique superfunction exists, defined by $\Phi_{0}(\sigma)=\mathcal{F}P_{0}\left({\rm Str\,}\sigma^{m}|m\in\mathbb{N}\right)$ (4.27) where $\mathcal{F}P_{0}\left({\rm Str\,}B^{m}|m\in\mathbb{N}\right)=\mathcal{F}P_{0}\left(\tr K^{m}|m\in\mathbb{N}\right)$ (4.28) with $B=\tilde{\gamma}^{-1}VV^{\dagger}$. However, the choice of the representation $\mathcal{F}P_{0}$ is not unique. The question arises whether it is a well-defined object. It is clear that two representations $\mathcal{F}P_{0}$ and $\mathcal{F}P_{1}$ are equal on ${\rm Herm\,}(\beta,N)$ due to the Cayley–Hamilton theorem, $\mathcal{F}P_{0}(H)=\mathcal{F}P_{1}(H)\ ,\ H\in{\rm Herm\,}(\beta,N).$ (4.29) The Cayley–Hamilton theorem states that every matrix satisfies its own characteristic equation. Thus, $H^{M}$ with $M>N$ is a polynomial in $\\{H^{n}\\}_{1\leq n\leq N}$. Plugging an arbitrary symmetric supermatrix $\sigma$ into the corresponding superfunctions $\Phi_{0}$ and $\Phi_{1}$, we realize that the two extensions need not coincide, i.e. $\Phi_{0}(\sigma)\neq\Phi_{1}(\sigma)$ (4.30) holds for some $\sigma$. For example, with $N=2$, $k=1$ and $\beta=2$, let the characteristic function be $\mathcal{F}P(H)=\mathcal{F}P_{0}\left(\tr H^{3}\right)$.
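For the reader's convenience, we spell out the elementary step used next. The Cayley–Hamilton theorem for $N=2$ reads $H^{2}=\tr H\,H-\det H\,\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2}$; multiplying by $H$ and taking the trace gives $\tr H^{3}=\tr H\tr H^{2}-\det H\tr H$ and, inserting $\det H=\frac{1}{2}\left(\tr^{2}H-\tr H^{2}\right)$, we arrive at $\tr H^{3}=\frac{1}{2}\left(3\tr H\tr H^{2}-\tr^{3}H\right)\ .$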
With help of the Cayley–Hamilton theorem we get $\mathcal{F}P_{1}\left(\tr H^{2},\tr H\right)=\mathcal{F}P_{0}\left(\frac{1}{2}\left(3\tr H\tr H^{2}-\tr^{3}H\right)\right)=\mathcal{F}P_{0}\left(\tr H^{3}\right)=\mathcal{F}P(H)\ .$ (4.31) Let the set of ${\rm U\,}^{(\beta)}(p/q)$–symmetric supermatrices be $\displaystyle\left\\{\sigma\in{\rm Mat}(\tilde{\gamma}p/\tilde{\gamma}q)\left|\sigma^{\dagger}=\sigma,\ \sigma^{*}=\widehat{Y}_{{\rm S}}(\sigma)\right.\right\\}{\rm\ and}$ (4.32) $\displaystyle\widehat{Y}_{{\rm S}}(\sigma)=\left\\{\begin{array}[]{ll}\left[\begin{array}[]{cc}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2p}&0\\\ 0&Y_{{\rm s}}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{q}\end{array}\right]\sigma\left[\begin{array}[]{cc}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2p}&0\\\ 0&Y_{{\rm s}}^{T}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{q}\end{array}\right]&,\ \beta=1,\\\ \sigma^{*}&,\ \beta=2,\\\ \left[\begin{array}[]{cc}Y_{{\rm s}}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{p}&0\\\ 0&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2q}\end{array}\right]\sigma\left[\begin{array}[]{cc}Y_{{\rm s}}^{T}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{p}&0\\\ 0&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2q}\end{array}\right]&,\ \beta=4,\end{array}\right.$ (4.44) with respect to the supergroups ${\rm U\,}^{(\beta)}(p/q)=\left\\{\begin{array}[]{ll}{\rm UOSp\,}^{(+)}(p/2q)&,\ \beta=1\\\ {\rm U\,}(p/q)&,\ \beta=2\\\ {\rm UOSp\,}^{(-)}(2p/q)&,\ \beta=4\end{array}\right.\ .$ (4.45) ${\rm Mat}(\tilde{\gamma}p/\tilde{\gamma}q)$ is the set of $(\tilde{\gamma}p+\tilde{\gamma}q)\times(\tilde{\gamma}p+\tilde{\gamma}q)$–supermatrices with the complex Grassmann algebra $\bigoplus\limits_{j=0}^{8k^{2}}\Lambda_{j}$. The definition of the two representations ${\rm UOSp\,}^{(\pm)}$ of the supergroup ${\rm UOSp\,}$ can be found in Refs. [22, 14]. We refer to the classification of Riemannian symmetric superspaces by Zirnbauer [23]. We consider a ${\rm U\,}(1/1)$–symmetric supermatrix $\sigma$. This yields for the supersymmetric extension of Eq. (4.31) $\mathcal{F}P_{0}\left(\frac{1}{2}\left(3{\rm Str\,}\sigma{\rm Str\,}\sigma^{2}-{\rm Str\,}^{3}\sigma\right)\right)\neq\mathcal{F}P_{0}\left({\rm Str\,}\sigma^{3}\right)=\mathcal{F}P_{0}\left(\frac{1}{4}\left(3\frac{{\rm Str\,}^{2}\sigma^{2}}{{\rm Str\,}\sigma}+{\rm Str\,}^{3}\sigma\right)\right)\ .$ (4.46) One obtains the last equation with a theorem similar to the Cayley–Hamilton theorem. More specifically, there exists a unique polynomial equation of order two, $\sigma^{2}-\frac{{\rm Str\,}\sigma^{2}}{{\rm Str\,}\sigma}\sigma-\frac{1}{4}\left({\rm Str\,}^{2}\sigma-\frac{{\rm Str\,}^{2}\sigma^{2}}{{\rm Str\,}^{2}\sigma}\right)=0\ ,$ (4.47) for a ${\rm U\,}(1/1)$–symmetric supermatrix $\sigma$. The resulting integral in Sec. 5 for the generating function $Z_{k}|_{\mathfrak{M}_{N}={\rm Herm\,}(\beta,N)}$ is invariant under the choice of $\Phi_{0}$. This is proven in Sec. 6.3. Such an ambiguity of the supersymmetric extension of the characteristic function was also investigated by the authors of Ref. [20]. They avoided the question of the definition of a Dirac–distribution on superspace by using the superbosonization formula. For the supersymmetric extension from Eq. (4.28) to Eq. (4.27), we introduce a Dirac–distribution which depends on the representation of the superfunction.
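A quick check with a diagonal supermatrix illustrates both relations. For $\sigma={\rm diag\,}(a,b)$ with bosonic eigenvalue $a$ and fermionic eigenvalue $b$ we have ${\rm Str\,}\sigma=a-b$, ${\rm Str\,}\sigma^{2}=a^{2}-b^{2}$ and ${\rm Str\,}\sigma^{3}=a^{3}-b^{3}$. The argument on the left hand side of Eq. (4.46) becomes $\frac{1}{2}\left(3(a-b)(a^{2}-b^{2})-(a-b)^{3}\right)=(a-b)^{2}(a+2b)$, which takes the value $4$ at $(a,b)=(2,1)$, whereas ${\rm Str\,}\sigma^{3}=7$ there; the argument on the right hand side of Eq. (4.46) gives $\frac{1}{4}\left(3\cdot 9+1\right)=7$, in accordance with Eq. (4.47).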
### 4.6 Symmetries of the supermatrices We find for a chosen representation $\mathcal{F}P_{0}$ $Z_{k}(x^{-}+J)=(-\imath)^{k_{2}N}\int\limits_{\mathfrak{C}^{k_{2}N}}\Phi_{0}(B)\exp\left[-\imath{\rm Str\,}(x^{-}+J)B\right]d[\zeta]d[z]\ .$ (4.48) Here, we introduce $k_{2}=\gamma_{2}k$, $k_{1}=\gamma_{1}k$ and $\tilde{k}=\tilde{\gamma}k$. We will simplify the integral (4.48) to integrals over $k_{1}$ eigenvalues in the Boson–Boson block and over $k_{2}$ eigenvalues in the Fermion–Fermion block. For every $\beta$, we have $B^{\dagger}=B\ ,$ (4.49) i.e. $B$ is self-adjoint. The complex conjugation yields $B^{*}=\left\\{\begin{array}[]{ll}\widetilde{Y}B\widetilde{Y}^{T}\qquad,\ \beta\in\\{1,4\\}\\\ \widetilde{Y}B^{*}\widetilde{Y}^{T}\qquad,\ \beta=2\end{array}\right.$ (4.50) with the $(2k+2k)\times(2k+2k)$–supermatrices $\left.\widetilde{Y}\right|_{\beta=1}=\left[\begin{array}[]{ccc}0&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}&0\\\ \leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}&0&0\\\ 0&0&Y_{{\rm s}}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}\end{array}\right]\qquad,\qquad\left.\widetilde{Y}\right|_{\beta=4}=\left[\begin{array}[]{ccc}Y_{{\rm s}}\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}&0&0\\\ 0&0&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}\\\ 0&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}&0\end{array}\right]$ (4.51) and $\left.\widetilde{Y}\right|_{\beta=2}={\rm diag\,}(1,0,1,0)\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}$. We notice that for the unitary case $B$ is effectively a $(k+k)\times(k+k)$–supermatrix, i.e. half the dimension. With help of the properties (4.49) and (4.50) we construct the supermatrix sets $\widetilde{\Sigma}_{0}(\beta,k)=\left\\{\sigma\in{\rm Mat}(2k/2k)\left|\sigma^{\dagger}=\sigma,\ \sigma^{*}=\left\\{\begin{array}[]{ll}\widetilde{Y}\sigma\widetilde{Y}^{T}&,\ \beta\in\\{1,4\\}\\\ \widetilde{Y}\sigma^{*}\widetilde{Y}^{T}&,\ \beta=2\end{array}\right\\}\right.\right\\}\ .$ (4.52) A matrix in $\widetilde{\Sigma}_{0}(\beta,k)$ fulfils the odd symmetry (4.50). We transform this symmetry with the unitary transformations $U|_{\beta=1}=\frac{1}{\sqrt{2}}\left[\begin{array}[]{ccc}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}&0\\\ -\imath\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}&\imath\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}&0\\\ 0&0&\sqrt{2}\ \leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2k}\end{array}\right]\ \ ,\ \ U|_{\beta=4}=\frac{1}{\sqrt{2}}\left[\begin{array}[]{ccc}\sqrt{2}\ \leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2k}&0&0\\\ 0&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}&\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}\\\ 0&-\imath\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}&\imath\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}\end{array}\right],$ (4.53) $U|_{\beta=2}=\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{4k}$, according to the Dyson–index, arriving at the well–known symmetries of symmetric supermatrices [23], see also Eq. (4.32). Defining the sets $\Sigma_{0}(\beta,k)=U\widetilde{\Sigma}_{0}(\beta,k)U^{\dagger}$, we remark that the body of the Boson–Boson block of any element in these sets is a matrix in ${\rm Herm\,}(\beta,k_{1})$. The body of the Fermion–Fermion block of any matrix in $\Sigma_{0}(\beta,k)$ lies in ${\rm Herm\,}(4/\beta,k_{2})$. 
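The halving in the unitary case can also be seen directly from the definitions (4.17)–(4.19): for $\beta=2$ we have $Y=0$, so the column blocks $Yz_{j}^{*}$ and $Y\zeta_{j}^{*}$ of $V^{\dagger}$ and the corresponding row blocks of $V$ vanish. Hence $B=\tilde{\gamma}^{-1}VV^{\dagger}$ has nonvanishing entries only in the rows and columns selected by $\left.\widetilde{Y}\right|_{\beta=2}={\rm diag\,}(1,0,1,0)\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{k}$, i.e. in a $(k+k)\times(k+k)$ block.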
We introduce a generalized Wick–rotation $e^{\imath\psi}$ to guarantee the convergence of the supermatrix integrals. The usual choice of a Wick–rotation is $e^{\imath\psi}=\imath$ for investigations of Gaussian probability densities [5, 1, 2]. Here, general Wick–rotations [14] are also of interest. Probability densities which lead to superfunctions such as $\exp\left(-{\rm Str\,}\sigma^{4}\right)$ do not yield convergent integrals with the choice $\imath$. Thus, we consider the modified sets $\Sigma_{\psi}(\beta,k)=\widehat{\Psi}_{\psi}\Sigma_{0}(\beta,k)\widehat{\Psi}_{\psi}$ (4.54) with $\widehat{\Psi}_{\psi}={\rm diag\,}(\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2k},e^{\imath\psi/2}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2k})$. Let $\Sigma_{\psi}^{0}(\beta,k)$ be the set of supermatrices which contain only zeroth and first order terms in the Grassmann variables. In the sequel, we restrict our calculations to superfunctions which possess a Wick–rotation such that the integrals below are convergent. We have not further explored the set of superfunctions with this property, but we know that this set is very large and sufficient for our purposes. For example, superfunctions of the form $\Phi_{0}(\sigma)=\widetilde{\Phi}(\sigma)\exp\left(-{\rm Str\,}\sigma^{2n}\right),\quad n\in\mathbb{N},$ (4.55) fulfil this property if ${\rm ln}\widetilde{\Phi}(\sigma)$ does not increase as fast as ${\rm Str\,}\sigma^{2n}$ at infinity. ### 4.7 Transformation to supermatrices by a Dirac–distribution Following Refs. [6, 5, 10], $\Phi_{0}(B)$ can be written as a convolution in the space of supermatrices $\Sigma_{\psi}^{0}(\beta,k)$ with a Dirac–distribution. We have $\displaystyle Z_{k}(x^{-}+J)$ $\displaystyle=$ $\displaystyle(-\imath)^{k_{2}N}\int\limits_{\mathfrak{C}_{k_{2}N}}\int\limits_{\Sigma_{\psi}^{0}(\beta,k)}\Phi_{0}(\rho)\delta\left(\rho- UBU^{\dagger}\right)d[\rho]\times$ (4.56) $\displaystyle\times$ $\displaystyle\exp\left[-\imath{\rm Str\,}(x^{-}+J)B\right]d[\zeta]d[z]$ where the measure is defined as $d[\rho]=d[\rho_{1}]d[\rho_{2}]\underset{1\leq n\leq k_{1}}{\prod\limits_{1\leq m\leq k_{2}}}d\eta_{nm}d\eta_{nm}^{*}\ .$ (4.57) Here, $\\{\eta_{nm},\eta_{nm}^{*}\\}$ are pairs of generators of a Grassmann algebra, while $\rho_{1}$ is the Boson–Boson and $\rho_{2}$ is the Fermion–Fermion block without the phase of the Wick–rotation. Since $\rho_{1}$ and $\rho_{2}$ are in ${\rm Herm\,}(\beta,k_{1})$ and ${\rm Herm\,}(4/\beta,k_{2})$, respectively, we use the real measures for $d[\rho_{1}]$ and $d[\rho_{2}]$ which are defined in Ref. [14]. We replace the Dirac–distribution by two Fourier transformations as in Refs. [5, 10]. Then, Eq. (4.56) becomes $\displaystyle Z_{k}(x^{-}+J)$ $\displaystyle=$ $\displaystyle(-\imath)^{k_{2}N}2^{2k(k-\tilde{\gamma})}\int\limits_{\mathfrak{C}_{k_{2}N}}\int\limits_{\Sigma_{-\psi}^{0}(\beta,k)}\mathcal{F}\Phi_{0}(\sigma)\times$ (4.58) $\displaystyle\times$ $\displaystyle\exp\left[\imath{\rm Str\,}B\left(U^{\dagger}\sigma U-x^{-}-J\right)\right]d[\sigma]d[\zeta]d[z]$ where the Fourier transform of $\Phi_{0}$ is $\mathcal{F}\Phi_{0}(\sigma)=\int\limits_{\Sigma_{\psi}^{0}(\beta,k)}\Phi_{0}(\rho)\exp\left(-\imath{\rm Str\,}\rho\sigma\right)d[\rho]\ .$ (4.59) We write the supertrace in the exponent in Eq.
(4.58) as a sum over expectation values ${\rm Str\,}B\left(U^{\dagger}\sigma U-x^{-}-J\right)=\frac{1}{\tilde{\gamma}}\sum\limits_{j=1}^{N}\tr\Psi_{j}^{\dagger}\left(U^{\dagger}\sigma U-x^{-}-J\right)\Psi_{j}$ (4.60) with respect to the real, complex or quaternionic supervectors $\Psi_{j}^{\dagger}=\left\\{\begin{array}[]{ll}\left\\{z_{jn},z_{jn}^{*},\zeta_{jn},\zeta^{*}_{jn}\right\\}_{1\leq n\leq k}&,\ \beta=1\\\ \left\\{z_{jn},0,\zeta_{jn},0\right\\}_{1\leq n\leq k}&,\ \beta=2\\\ \left\\{\left[\begin{array}[]{c}z_{jn}\\\ z_{j+N,n}\end{array}\right],\left[\begin{array}[]{c}-z_{j+N,n}^{*}\\\ z_{jn}^{*}\end{array}\right],\left[\begin{array}[]{c}\zeta_{jn}\\\ \zeta_{j+N,n}\end{array}\right],\left[\begin{array}[]{c}-\zeta_{j+N,n}^{*}\\\ \zeta_{jn}^{*}\end{array}\right]\right\\}_{1\leq n\leq k}&,\ \beta=4\end{array}\right.$ (4.61) The integration over one of these supervectors yields $\int\limits_{\mathfrak{C}_{k_{2}}}{\rm exp}\left[\frac{\imath}{\tilde{\gamma}}\tr\Psi_{j}^{\dagger}\left(U^{\dagger}\sigma U-x^{-}-J\right)\Psi_{j}\right]d[\Psi_{j}]=\imath^{k_{2}}{\rm Sdet\,}^{-1/\gamma_{1}}\mathfrak{p}\left(\sigma-x^{-}-J\right)\ .$ (4.62) $\mathfrak{p}$ projects onto the non-zero matrix blocks of $\Sigma_{-\psi}(\beta,k)$, which are only $(k+k)\times(k+k)$–supermatrices for $\beta=2$; for $\beta\in\\{1,4\\}$, $\mathfrak{p}$ is the identity. Eq. (4.62) holds because $U$ commutes with $x^{-}+J$. Then, Eq. (4.58) reads $Z_{k}(x^{-}+J)=2^{2k(k-\tilde{\gamma})}\int\limits_{\Sigma_{-\psi}^{0}(\beta,k)}\mathcal{F}\Phi_{0}(\sigma){\rm Sdet\,}^{-N/\gamma_{1}}\mathfrak{p}\left(\sigma-x^{-}-J\right)d[\sigma]\ .$ (4.63) Indeed, this result coincides with Ref. [5] for $\beta=2$ where the Fourier transform $\mathcal{F}\Phi_{0}(\sigma)$ was denoted by $Q(\sigma)$. Eq. (4.63) reduces for Gaussian ensembles with arbitrary $\beta$ to expressions as in Refs. [3] and [2]. The integral is well-defined because $\varepsilon$ is greater than zero and the body of the eigenvalues of the Boson–Boson block is real. The representation (4.63) for the generating function can also be considered as a random matrix ensemble in superspace. Eq. (4.63) is one reason why we call this integral transformation from the space of ordinary matrices to superspace the generalized Hubbard–Stratonovich transformation. If the probability density $P$ is Gaussian, then we can choose $\Phi_{0}$ also as a Gaussian. Thus, the transformation above reduces to the ordinary Hubbard–Stratonovich transformation and to the well-known result (4.63). ## 5 The supersymmetric Ingham–Siegel integral We perform a Fourier transformation in superspace for the convolution integral (4.63) and find $Z_{k}(x^{-}+J)=2^{2k(k-\tilde{\gamma})}\int\limits_{\Sigma_{\psi}^{0}(\beta,k)}\Phi_{0}(\rho)I_{k}^{(\beta,N)}(\rho)\exp\left[-\imath{\rm Str\,}\rho\left(x^{-}+J\right)\right]d[\rho]\ .$ (5.1) Here, we have to calculate the supersymmetric Ingham–Siegel integral $I_{k}^{(\beta,N)}(\rho)=\int\limits_{\Sigma_{-\psi}^{0}(\beta,k)}\exp\left(-\imath{\rm Str\,}\rho\sigma^{+}\right){\rm Sdet\,}^{-N/\gamma_{1}}\mathfrak{p}\sigma^{+}d[\sigma]$ (5.2) with $\sigma^{+}=\sigma+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{4k}$. Ingham [24] and Siegel [25] independently calculated a version of (5.2) for ordinary real symmetric matrices. The case of hermitian matrices was discussed in Ref. [26]. Since we were unable to find the ordinary Ingham–Siegel integral for the quaternionic case in the literature, we give the result here.
It is related to Selberg's integral [27]. Let $R\in{\rm Herm\,}(\beta,m)$, $\varepsilon>0$, and let $n\geq m-1+2/\beta$ be a real number; then we have $\displaystyle\int\limits_{{\rm Herm\,}(\beta,m)}\exp\left(-\imath\tr RS^{+}\right){\det}^{-n/\gamma_{1}}S^{+}d[S]=\imath^{-\beta mn/2}G_{n-m,m}^{(\beta)}\displaystyle{\det}^{\lambda}R\ \Theta(R)$ (5.3) where $S^{+}=S+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\gamma_{2}m}$, the exponent is $\lambda=\frac{n-m}{\gamma_{1}}-\frac{\gamma_{1}-\gamma_{2}}{2}$ (5.4) and the constant is $G_{n-m,m}^{(\beta)}=\left(\frac{\gamma_{2}}{\pi}\right)^{\beta m(n-m+1)/2-m}\prod\limits_{j=n-m+1}^{n}\frac{2\pi^{\beta j/2}}{\Gamma\left(\beta j/2\right)}\ .$ (5.5) $\Gamma(.)$ is the Euler gamma–function and $\Theta(.)$ is the Heaviside–function for matrices which is defined as $\Theta(R)=\left\\{\begin{array}[]{ll}1&,\ R{\rm\ is\ positive\ definite}\\\ 0&,\ {\rm else}\end{array}\right.\ .$ (5.6) The ordinary Ingham–Siegel integral was recently used in the context of supersymmetry by Fyodorov [26]. The integral was extended to the superspace $\Sigma_{\pi/2}^{0}(2,k)$ in Ref. [5]. In this article, we need a generalization to all $\Sigma_{-\psi}^{0}(\beta,k)$, in particular for $\beta\in\\{1,4\\}$. The integral (5.2) is invariant under the action of ${\rm U\,}^{(\beta)}(k_{1}/k_{2})$. Thus, it is convenient to consider $I(r,\varepsilon)$, where $r={\rm diag\,}(r_{11},\ldots,r_{\tilde{k}1},r_{12},\ldots,r_{\tilde{k}2})$ is the diagonal matrix of eigenvalues of $\rho$ and contains nilpotent terms. The authors of Ref. [10] claimed in their proof of Theorem 1 in Chapter 6 that the diagonalization at this point of the calculation yields Efetov–Wegner terms. These terms do not appear in the $\rho_{2}$ integration because we do not change the integration variables, i.e. the integration measure $d[\rho]$ remains the same. For the unitary case, see Ref. [5]. We consider the eigenvalues of $\rho$ as functions of the Cartesian variables. We may certainly differentiate a function with respect to the eigenvalues if we keep track of how these differential operators are defined in the Cartesian representation. As worked out in C.1, the supersymmetric Ingham–Siegel integral (5.2) reads $I_{k}^{(\beta,N)}(\rho)=\displaystyle C{\det}^{\kappa}r_{1}\Theta(r_{1}){\det}^{k}r_{2}\exp\left(-e^{\imath\psi}\varepsilon\tr r_{2}\right)\left[D_{k_{2}r_{2}}^{(4/\beta)}\left(\imath e^{\imath\psi}\gamma_{1}\varepsilon\right)\right]^{N}\frac{\delta(r_{2})}{|\Delta_{k_{2}}(r_{2})|^{4/\beta}}\ .$ (5.7) The constant is $C=\displaystyle\left(-\frac{e^{-\imath\psi}}{\gamma_{1}}\right)^{k_{2}N}\left(-\frac{\tilde{\gamma}}{2\pi}\right)^{k_{1}k_{2}}\left(\frac{2\pi}{\gamma_{1}}\right)^{k_{2}}\left(\frac{\pi}{\gamma_{1}}\right)^{2k_{2}(k_{2}-1)/\beta}\frac{G_{Nk_{1}}^{(\beta)}}{g_{k_{2}}^{(4/\beta)}}$ (5.8) with $\displaystyle g_{k_{2}}^{(4/\beta)}=\frac{1}{k_{2}!}\prod\limits_{j=1}^{k_{2}}\frac{\pi^{2(j-1)/\beta}\Gamma\left(2/\beta\right)}{\Gamma\left(2j/\beta\right)}\ .$ (5.9) The exponent is given by $\kappa=\frac{N}{\gamma_{1}}+\frac{\gamma_{2}-\gamma_{1}}{2}$ (5.10) and the differential operator $D_{k_{2}r_{2}}^{(4/\beta)}\left(\imath e^{\imath\psi}\gamma_{1}\varepsilon\right)=\frac{1}{\Delta_{k_{2}}(r_{2})}\det\left[r_{a2}^{N-b}\left(\frac{\partial}{\partial r_{a2}}+(k_{2}-b)\frac{2}{\beta}\frac{1}{r_{a2}}-e^{\imath\psi}\gamma_{1}\varepsilon\right)\right]_{1\leq a,b\leq k_{2}}$ (5.11) is the analog of the Sekiguchi differential operator [28]. We derived it in B.
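As an elementary consistency check of Eq. (5.3), consider $\beta=2$ and $m=1$, where $\gamma_{1}=\gamma_{2}=1$ and the integral runs over a single real variable. Closing the contour around the pole of order $n$ at $S=-\imath\varepsilon$ in the lower half plane yields, for $r>0$, $\int_{\mathbb{R}}\exp\left(-\imath rS^{+}\right)\left(S^{+}\right)^{-n}dS=-2\pi\imath\,\frac{(-\imath r)^{n-1}}{(n-1)!}\,e^{-\varepsilon r}=\imath^{-n}\frac{2\pi}{(n-1)!}\,r^{n-1}e^{-\varepsilon r}\ ,$ while the integral vanishes for $r<0$. Up to the regularizing factor $e^{-\varepsilon r}\to 1$, this is exactly $\imath^{-n}G_{n-1,1}^{(2)}{\det}^{\lambda}R\,\Theta(R)$ with $\lambda=n-1$ and $G_{n-1,1}^{(2)}=2\pi/(n-1)!$.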
The complexity of $D_{k_{2}r_{2}}^{(4/\beta)}(\imath e^{\imath\psi}\varepsilon)$ makes Eq. (5.7) cumbersome; a better representation is desirable. To simplify Eq. (5.7), we need the following statement, which is shown in C.2. ###### Statement 5.1 We consider two functions $F,f:{\rm Herm\,}(4/\beta,k_{2})\rightarrow\mathbb{C}$ invariant under the action of ${\rm U\,}^{(4/\beta)}(k_{2})$ and Schwartz–functions of the matrix eigenvalues. Let $F$ and $f$ have the relation $F(\rho_{2})=f(\rho_{2})\det\rho_{2}^{N/\gamma_{1}-k}{\rm\ \ for\ all\ }\rho_{2}\in{\rm Herm\,}(4/\beta,k_{2})\ .$ (5.12) Then, we have $\displaystyle\int\limits_{\mathbb{R}^{k_{2}}}\int\limits_{{\rm Herm\,}(4/\beta,k_{2})}F(r_{2}){\det}^{k}r_{2}|\Delta_{k_{2}}(r_{2})|^{4/\beta}\exp\left(\imath\tr r_{2}\sigma_{2}\right){\det}^{N/\gamma_{1}}\left(e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)d[\sigma_{2}]d[r_{2}]=$ $\displaystyle=w_{1}f(0)=\int\limits_{\mathbb{R}^{k_{2}}}F(r_{2})|\Delta_{k_{2}}(r_{2})|^{4/\beta}\left[\frac{w_{2}\exp\left(\varepsilon e^{\imath\psi}\tr r_{2}\right)}{|\Delta_{k_{2}}(r_{2})|^{4/\beta}}\prod\limits_{j=1}^{k_{2}}\left(\frac{\partial}{\partial r_{j2}}\right)^{N-k_{1}}\delta(r_{j2})\right]d[r_{2}]$ (5.13) where the constants are $\displaystyle w_{1}=\left(\frac{2\pi}{\gamma_{1}}\right)^{k_{2}}\left(\frac{\pi}{\gamma_{1}}\right)^{2k_{2}(k_{2}-1)/\beta}\frac{\left(\imath^{N}e^{-\imath\psi N}\right)^{k_{2}}}{g_{k_{2}}^{(4/\beta)}}\prod_{b=1}^{k_{2}}\prod\limits_{a=1}^{N}\left(\frac{a}{\gamma_{1}}+\frac{b-1}{\gamma_{2}}\right)$ (5.14) $\displaystyle w_{2}=\frac{(-1)^{k_{1}k_{2}}}{g_{k_{2}}^{(4/\beta)}}\left(\frac{2\pi}{\gamma_{1}}\right)^{k_{2}}\left(\frac{\pi}{\gamma_{1}}\right)^{2k_{2}(k_{2}-1)/\beta}\left[\frac{(-\imath)^{N}e^{-\imath\psi N}}{\left(N-k_{1}\right)!\gamma_{1}^{N}}\right]^{k_{2}}\prod_{j=0}^{k_{2}-1}\frac{\Gamma\left(N+1+2j/\beta\right)}{\Gamma\left(1+2j/\beta\right)}\ .$ (5.15) This statement yields for the supersymmetric Ingham–Siegel integral $I_{k}^{(\beta,N)}(\rho)=\displaystyle W\Theta(r_{1})\frac{{\det}^{\kappa}r_{1}}{|\Delta_{k_{2}}(r_{2})|^{4/\beta}}\prod\limits_{j=1}^{k_{2}}\left(\frac{\partial}{\partial r_{j2}}\right)^{N-k_{1}}\delta(r_{j2})$ (5.16) where the constant reads $\displaystyle W$ $\displaystyle=$ $\displaystyle\left(\frac{\tilde{\gamma}}{2\pi}\right)^{k_{1}k_{2}}\left(\frac{2\pi}{\gamma_{1}}\right)^{k_{2}}\left(\frac{\pi}{\gamma_{1}}\right)^{2k_{2}(k_{2}-1)/\beta}\left[\frac{\left(-e^{-\imath\psi}\right)^{N}}{\left(N-k_{1}\right)!\gamma_{1}^{N}}\right]^{k_{2}}\times$ (5.17) $\displaystyle\times$ $\displaystyle\frac{G_{Nk_{1}}^{(\beta)}}{g_{k_{2}}^{(4/\beta)}}\prod_{j=0}^{k_{2}-1}\frac{\Gamma\left(N+1+2j/\beta\right)}{\Gamma\left(1+2j/\beta\right)}\ .$ We further simplify this formula for $\beta=1$ and $\beta=2$. The power $\Delta_{k_{2}}^{4/\beta}(r_{2})$ of the Vandermonde–determinant is a polynomial of degree $k_{2}\times 2(k_{2}-1)/\beta$. If we substitute these terms in Eq. (5.16) by partial derivatives with respect to the eigenvalues, the power of each single eigenvalue derivative must be $2(k_{2}-1)/\beta$; for details see C.2. Hence, this power is a half-integer for $\beta=4$. Also, $\Delta_{k_{2}}(r_{2})$ has no symmetric term where all eigenvalues have the same power. Therefore, we can not simplify the quaternionic case in the same manner.
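To see how the distribution (5.16) acts, recall the elementary one–dimensional rule $\int_{\mathbb{R}}f(r)\left(\frac{\partial}{\partial r}\right)^{n}\delta(r)\,dr=(-1)^{n}\,\frac{\partial^{n}f}{\partial r^{n}}(0)\ ,$ obtained by $n$–fold partial integration. Hence each fermionic eigenvalue integral in Eq. (5.16) produces, up to sign, the $(N-k_{1})$–th derivative of the test–function at the origin, in accordance with the right hand side of Statement 5.1.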
We use the identities $\displaystyle\prod\limits_{j=1}^{n}\frac{\partial^{n-1}}{\partial x_{j}^{n-1}}\Delta_{n}^{2}(x)$ $\displaystyle=$ $\displaystyle(-1)^{n(n-1)/2}n!\left[(n-1)!\right]^{n}\ ,$ (5.18) $\displaystyle\prod\limits_{j=1}^{n}\frac{\partial^{2(n-1)}}{\partial x_{j}^{2(n-1)}}\Delta_{n}^{4}(x)$ $\displaystyle=$ $\displaystyle n!\left[(2n-2)!\right]^{n}\prod\limits_{j=0}^{n-1}(2j+1)$ (5.19) and find $\displaystyle I_{k}^{(1,N)}(\rho)$ $\displaystyle=$ $\displaystyle 2^{-k(k-2)}\left[\frac{2\pi e^{-\imath\psi N}}{(N-2)!}\right]^{k}\times$ (5.20) $\displaystyle\times$ $\displaystyle\Theta(r_{1})\det r_{1}^{(N-1)/2}\prod\limits_{j=1}^{k}\left(-\frac{\partial}{\partial r_{j2}}\right)^{N-2}\delta(r_{j2})$ and $\displaystyle I_{k}^{(2,N)}(\rho)$ $\displaystyle=$ $\displaystyle(-1)^{k(k+1)/2}2^{-k(k-1)}\left[\frac{2\pi e^{-\imath\psi N}}{(N-1)!}\right]^{k}\times$ (5.21) $\displaystyle\times$ $\displaystyle\Theta(r_{1})\det r_{1}^{N}\prod\limits_{j=1}^{k}\left(-\frac{\partial}{\partial r_{j2}}\right)^{N-1}\delta(r_{j2})\ .$ For $\beta=4$, we summarize the constants and have $\displaystyle I_{k}^{(4,N)}(\rho)$ $\displaystyle=$ $\displaystyle 2^{-k(k-2)}\left[\frac{2\pi e^{-\imath\psi N}}{(N-k)!}\right]^{2k}\times$ (5.22) $\displaystyle\times$ $\displaystyle\Theta(r_{1})\det r_{1}^{N+1/2}\frac{4^{k}k!}{\pi^{k}|\Delta_{2k}(r_{2})|}\prod\limits_{j=1}^{2k}\left(-\frac{\partial}{\partial r_{j2}}\right)^{N-k}\delta(r_{j2})\ .$ These distributional identities hold for superfunctions whose Fermion–Fermion block dependence is as in Eq. (5.12). Eqs. (5.20) and (5.21) can be extended to distributions on arbitrary Schwartz–functions, which is not the case for Eq. (5.22). The constants in Eqs. (5.20) and (5.21) must be the same due to the independence of the test–function. ###### Statement 5.2 Equations (5.20) and (5.21) hold for rotation invariant superfunctions $\Phi_{0}$ which are Schwartz–functions in the Fermion–Fermion block entries along the Wick–rotated real axis. We derive this statement in C.3. Indeed, Eq. (5.21) is the same as the formula for the supersymmetric Ingham–Siegel integral for $\beta=2$ in Ref. [5]. Comparing both results, the different definitions of the measures have to be taken into account. We also see the similarity to the superbosonization formula [9, 8, 12, 11, 20, 10] for $\beta\in\\{1,2\\}$. One can replace the partial derivatives in Eqs. (5.20) and (5.21) by contour integrals if the characteristic function $\Phi_{0}$ is analytic. However, for $\beta=4$ more effort is needed. For our purposes, Eqs. (5.7) and (5.22) are sufficient for the quaternionic case. In the unitary case, the equivalence of Eq. (5.21) with the superbosonization formula was confirmed with help of Cauchy integrals by Basile and Akemann [10]. ## 6 Final representation of the generating function and its independence of the choice for $\Phi_{0}$ In Sec. 6.1, we present the generating function as a supersymmetric integral over eigenvalues and introduce the supersymmetric Bessel–functions. In Sec. 6.2, we revisit the unitary case and point out certain properties of the generating function. Some of these properties, namely the independence of the Wick–rotation and of the choice of $\Phi_{0}$, are also proven for the orthogonal and unitary–symplectic cases in Sec. 6.3. ### 6.1 Eigenvalue integral representation The next step of the calculation of the generating function $Z_{k}(x^{-}+J)$ is the integration over the supergroup.
The function $\Phi_{0}(\rho)I_{k}^{(\beta,N)}(\rho)$ is invariant under the action of ${\rm U\,}^{(\beta)}(k_{1}/k_{2})$. We define the supermatrix Bessel–function $\varphi_{k_{1}k_{2}}^{(\beta)}(s,r)=\int\limits_{{\rm U\,}^{(\beta)}(k_{1}/k_{2})}\exp\left({\rm Str\,}sUrU^{\dagger}\right)d\mu(U)$ (6.1) as in Refs. [29, 14]. We choose the normalization $\displaystyle\int\limits_{\Sigma^{0}_{\psi}(\beta,k)}f(\sigma)\exp\left({\rm Str\,}\sigma x\right)d[e^{-\imath\psi/2}\eta]d[e^{\imath\psi}\sigma_{2}]d[\sigma_{1}]=$ (6.2) $\displaystyle=$ $\displaystyle\int\limits_{\mathbb{R}^{k_{1}}}\int\limits_{\mathbb{R}^{k_{2}}}f(s)\varphi_{k_{1}k_{2}}^{(\beta)}(s,x)\left|B_{k}^{(\beta)}(s_{1},e^{\imath\psi}s_{2})\right|d[e^{\imath\psi}s_{2}]d[s_{1}]+{\rm b.t.}$ which holds for every rotation invariant function $f$. This normalization agrees with Refs. [30, 31, 29, 5, 14]. The boundary terms (${\rm b.t.}$), referred to as Efetov–Wegner terms [32, 33, 10], appear upon changing the integration variables [34] or, equivalently, upon partial integration [14]. The Berezinian is $B_{k}^{(\beta)}(s_{1},e^{\imath\psi}s_{2})=\displaystyle\frac{\Delta_{k_{1}}^{\beta}(s_{1})\Delta_{k_{2}}^{4/\beta}(e^{\imath\psi}s_{2})}{V_{k}^{2}(s_{1},e^{\imath\psi}s_{2})}$ (6.3) where $V_{k}(s_{1},e^{\imath\psi}s_{2})=\prod\limits_{n=1}^{k_{1}}\prod\limits_{m=1}^{k_{2}}\left(s_{n1}-e^{\imath\psi}s_{m2}\right)$ mixes bosonic and fermionic eigenvalues. These Berezinians have a determinantal structure $B_{k}^{(\beta)}(s_{1},e^{\imath\psi}s_{2})=\left\\{\begin{array}[]{ll}\displaystyle\det\left[\frac{1}{s_{a1}-e^{\imath\psi}s_{b2}}\ ,\ \frac{1}{(s_{a1}-e^{\imath\psi}s_{b2})^{2}}\right]\underset{1\leq b\leq k}{\underset{1\leq a\leq 2k}{}}&,\ \beta=1\\\ \displaystyle{\det}^{2}\left[\frac{1}{s_{a1}-e^{\imath\psi}s_{b2}}\right]_{1\leq a,b\leq k}&,\ \beta=2\\\ \displaystyle B_{k}^{(1)}(e^{\imath\psi}s_{2},s_{1})&,\ \beta=4\end{array}\right.\ .$ (6.4) For $\beta=2$ this formula was derived in Ref. [32]. The other cases are derived in D. We notice that this determinantal structure is similar to the determinantal structure of the ordinary Vandermonde–determinant raised to the powers $2$ and $4$. This structure was explicitly used [15] to calculate the $k$–point correlation function of the GUE and the GSE. We find for the generating function $\displaystyle Z_{k}(x^{-}+J)$ $\displaystyle=$ $\displaystyle 2^{2k(k-\tilde{\gamma})}e^{\imath\psi k_{1}}\int\limits_{\mathbb{R}^{k_{1}}}\int\limits_{\mathbb{R}^{k_{2}}}\Phi_{0}(r)I_{k}^{(\beta,N)}(r)\times$ (6.5) $\displaystyle\times$ $\displaystyle\varphi_{k_{1}k_{2}}^{(\beta)}(-\imath r,x^{-}+J)\left|B_{k}^{(\beta)}(r_{1},e^{\imath\psi}r_{2})\right|d[r_{2}]d[r_{1}]+{\rm b.t.}\ .$ The normalization of $Z_{k}$ is guaranteed by the Efetov–Wegner terms. When the source variables $J_{l},\ldots,J_{k}$, $l<k$, are set to zero, we have $\left.Z_{k}(x^{-}+J)\right|_{J_{l}=\ldots=J_{k}=0}=Z_{l-1}(\tilde{x}^{-}+\widetilde{J})\ ,$ (6.6) $\tilde{x}={\rm diag\,}(x_{1},\ldots,x_{l-1}),\ \widetilde{J}={\rm diag\,}(J_{1},\ldots,J_{l-1})$, by the integration theorems in Refs. [1, 35, 36, 37, 3, 14]. This agrees with the definition (2.9). ### 6.2 The unitary case revisited To make contact with the discussion in Ref. [5], we revisit the unitary case using the insight developed here. For further calculations we need the explicit structure of the supersymmetric matrix Bessel–functions. However, the knowledge of these functions is limited: only for certain $\beta$ and $k$ is the exact structure known.
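The $\beta=2$ entry of Eq. (6.4) is the square of the classical Cauchy determinant. For $k=2$, abbreviating $x_{a}=s_{a1}$ and $y_{b}=e^{\imath\psi}s_{b2}$, one verifies directly $\det\left[\begin{array}[]{cc}\frac{1}{x_{1}-y_{1}}&\frac{1}{x_{1}-y_{2}}\\\ \frac{1}{x_{2}-y_{1}}&\frac{1}{x_{2}-y_{2}}\end{array}\right]=-\frac{(x_{1}-x_{2})(y_{1}-y_{2})}{\prod_{a,b=1}^{2}(x_{a}-y_{b})}\ ,$ whose square is $\Delta_{2}^{2}(x)\Delta_{2}^{2}(y)/V_{2}^{2}(x,y)$, as required by Eq. (6.3).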
In particular, for $\beta=2$ the supermatrix Bessel–function was first calculated in Refs. [32, 30] with help of the heat equation. Recently, this function was re-derived by integrating the Grassmann variables in Cartesian coordinates [14], $\displaystyle\varphi_{kk}^{(2)}(-\imath r,x^{-}+J)=\displaystyle\frac{\imath^{k}\exp\left(-\varepsilon{\rm Str\,}r\right)}{2^{k^{2}}\pi^{k}}\times$ $\displaystyle\times\frac{\det\left[\exp\left(-\imath r_{m1}(x_{n}-J_{n})\right)\right]_{1\leq m,n\leq k}\det\left[\exp\left(\imath e^{\imath\psi}r_{m2}(x_{n}+J_{n})\right)\right]_{1\leq m,n\leq k}}{\sqrt{B_{k}^{(2)}(r_{1},e^{\imath\psi}r_{2})B_{k}^{(2)}\left(x-J,x+J\right)}}$ (6.7) with $x\pm J={\rm diag\,}(x_{1}\pm J_{1},\ldots,x_{k}\pm J_{k})$ and the positive square root of the Berezinian $\displaystyle\sqrt{B_{k}^{(2)}(r_{1},e^{\imath\psi}r_{2})}=\displaystyle\det\left[\frac{1}{r_{a1}-e^{\imath\psi}r_{b2}}\right]_{1\leq a,b\leq k}=(-1)^{k(k-1)/2}\frac{\Delta_{k}(r_{1})\Delta_{k}(e^{\imath\psi}r_{2})}{V_{k}(r_{1},e^{\imath\psi}r_{2})}\ .$ (6.8) Due to the structure of $\varphi_{kk}^{(2)}$ and $B_{k}^{(2)}$, we write the generating function for $\beta=2$ as an integral over $\Phi_{0}$ times a determinant [5] $\displaystyle Z_{k}(x^{-}+J)$ $\displaystyle=$ $\displaystyle(-1)^{k(k+1)/2}\displaystyle{\det}^{-1}\left[\frac{1}{x_{a}-x_{b}-J_{a}-J_{b}}\right]_{1\leq a,b\leq k}\int\limits_{\mathbb{R}^{k}}\int\limits_{\mathbb{R}^{k}}\Phi_{0}(r)\times$ (6.9) $\displaystyle\times$ $\displaystyle\det\left[\mathfrak{F}_{N}(\tilde{r}_{mn},\tilde{x}_{mn})\Theta(r_{m1})\exp\left(-\varepsilon{\rm Str\,}\tilde{r}_{mn}\right)\right]_{1\leq m,n\leq k}d[r_{2}]d[r_{1}]+{\rm b.t.}$ where $\tilde{r}_{mn}={\rm diag\,}\left(r_{m1},e^{\imath\psi}r_{n2}\right)$, $\tilde{x}_{mn}={\rm diag\,}\left(x_{m}-J_{m},x_{n}+J_{n}\right)$ and $\mathfrak{F}_{N}(\tilde{r}_{mn},\tilde{x}_{mn})=\frac{\imath r_{m1}^{N}\exp\left(-\imath{\rm Str\,}\tilde{r}_{mn}\tilde{x}_{mn}\right)}{(N-1)!(r_{m1}-e^{\imath\psi}r_{n2})}\left(-e^{-\imath\psi}\frac{\partial}{\partial r_{n2}}\right)^{N-1}\delta(r_{n2})\ .$ (6.10) Then, the modified $k$–point correlation function is $\displaystyle\qquad\quad\widehat{R}_{k}(x^{-})$ $\displaystyle=$ $\displaystyle\int\limits_{\mathbb{R}^{k}}\int\limits_{\mathbb{R}^{k}}\Phi_{0}(r)\times$ (6.11) $\displaystyle\times$ $\displaystyle\det\left[\mathfrak{F}_{N}(\tilde{r}_{mn},x_{mn})\Theta(r_{m1})\exp\left(-\varepsilon{\rm Str\,}\tilde{r}_{mn}\right)\right]_{1\leq m,n\leq k}d[r_{2}]d[r_{1}]+{\rm b.t.}$ and the $k$–point correlation function is $R_{k}(x)=\int\limits_{\mathbb{R}^{k}}\int\limits_{\mathbb{R}^{k}}\Phi_{0}(r)\det\left[\frac{\mathfrak{F}_{N}(\tilde{r}_{mn},x_{mn})}{2\pi\imath}\right]_{1\leq m,n\leq k}d[r_{2}]d[r_{1}]+{\rm b.t.}\ .$ (6.12) We defined $x_{mn}={\rm diag\,}(x_{m},x_{n})$. The boundary terms comprise the lower correlation functions. The $k$–point correlation function for $\beta=2$ is a determinant of the fundamental function $R^{({\rm fund})}(x_{m},x_{n})=\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\Phi_{0}(r)\frac{\mathfrak{F}_{N}(r,x_{mn})}{2\pi\imath}dr_{2}dr_{1}$ (6.13) if there is a representation $\mathcal{F}P_{0}$ of the characteristic function whose supersymmetric extension $\Phi_{0}$ factorizes for diagonal supermatrices, $\Phi_{0}(r)={\rm Sdet\,}{\rm diag\,}\left[\widehat{\Phi}_{0}(r_{11}),\ldots,\widehat{\Phi}_{0}(r_{k1}),\widehat{\Phi}_{0}\left(e^{\imath\psi}r_{12}\right),\ldots,\widehat{\Phi}_{0}\left(e^{\imath\psi}r_{k2}\right)\right]\ ,$ (6.14) with $\widehat{\Phi}_{0}:\mathbb{C}\rightarrow\mathbb{C}$.
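A simple instance of such a factorizing extension, suppressing all normalization constants, is the Gaussian case: for $P(H)\propto\exp\left(-\tr H^{2}\right)$ the Fourier transform (4.6) gives $\mathcal{F}P(K)\propto\exp\left(-\tr K^{2}/4\right)$ and hence $\Phi_{0}(\sigma)=\exp\left(-{\rm Str\,}\sigma^{2}/4\right)$. Since the superdeterminant of a diagonal supermatrix is the ratio of the product of its bosonic entries and the product of its fermionic entries, this $\Phi_{0}$ is of the form (6.14) with $\widehat{\Phi}_{0}(z)=\exp\left(-z^{2}/4\right)$.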
Also the shifted Gaussian ensemble in App. F of Ref. [5] is of this type. We notice that the expression in Eq. (6.13) is independent of the generalized Wick–rotation. Every derivative with respect to the fermionic eigenvalue $r_{2}$ carries the inverse Wick–rotation as a prefactor. Moreover, within the functions the Wick–rotation appears only as a prefactor of $r_{2}$. Thus, the integration over the fermionic eigenvalues $r_{2}$ in Eq. (6.11), carried out with the Dirac–distribution, cancels the Wick–rotation. Also, this integration shows that every representation of the characteristic function gives the same result, see Statement 6.1 in the next subsection. However, the determinantal structure with the fundamental function in Eq. (6.13) depends on a special choice of $\Phi_{0}$. ### 6.3 Independence statement For $\beta=1$ and $\beta=4$ we do not know the ordinary matrix Bessel–function explicitly. Hence, we can not give such a compact expression as in the case $\beta=2$. On the other hand, we can derive the independence of the generating function of the Wick–rotation and of the choice of $\Phi_{0}$. ###### Statement 6.1 The generating function $Z_{k}$ is independent of the Wick–rotation and of the choice of the characteristic function's supersymmetric extension $\Phi_{0}$ corresponding to a certain matrix ensemble $(P,{\rm Herm\,}(\beta,N))$. Derivation: We split the derivation into two parts. The first part regards the Wick–rotation and the second part yields the independence of the choice of $\Phi_{0}$. Due to the normalization of the supermatrix Bessel–function (6.2), $\varphi_{k_{1}k_{2}}^{(\beta)}(-\imath r,x^{-}+J)$ only depends on $e^{\imath\psi}r_{2}$. The same is true for $\Phi_{0}$. Due to the property $D_{k_{2}r_{2}}^{(4/\beta)}\left(\imath e^{\imath\psi}\gamma_{1}\varepsilon\right)=e^{\imath k_{2}\psi}D_{k_{2},e^{\imath\psi}r_{2}}^{(4/\beta)}\left(\imath\gamma_{1}\varepsilon\right)\ ,$ (6.15) the Ingham–Siegel integral in the form (5.7) times the phase $e^{\imath(k_{1}-k_{2})\psi}$ only depends on $e^{\imath\psi}r_{2}$ and $e^{-\imath\psi}\partial/\partial r_{2}$. The additional phase comes from the $\rho$–integration. Thus, we see the independence of the Wick–rotation for the same reason as in the $\beta=2$ case. Let $\Phi_{0}$ and $\Phi_{1}$ be two different supersymmetric extensions of the characteristic function $\mathcal{F}P$. Then these two superfunctions only depend on the invariants $\\{{\rm Str\,}\sigma^{m_{j}}\\}_{1\leq j\leq l_{0}}$ and $\\{{\rm Str\,}\sigma^{n_{j}}\\}_{1\leq j\leq l_{1}}$, $m_{j},n_{j},l_{0},l_{1}\in\mathbb{N}$. We consider $\Phi_{0}$ and $\Phi_{1}$ as functions of $\mathbb{C}^{l_{0}}\rightarrow\mathbb{C}$ and $\mathbb{C}^{l_{1}}\rightarrow\mathbb{C}$, respectively. Defining the function $\Delta\Phi(x_{1},\ldots,x_{M})=\Phi_{0}(x_{m_{1}},\ldots,x_{m_{l_{0}}})-\Phi_{1}(x_{n_{1}},\ldots,x_{n_{l_{1}}}),$ (6.16) where $M={\rm max}\\{m_{a},n_{b}\\}$, we notice, recalling the discussion in Sec. 4.5, that $\Delta\Phi(x_{1},\ldots,x_{M})|_{x_{j}=\tr H^{j}}=0$ (6.17) for every hermitian matrix $H$.
However, there could be a symmetric supermatrix $\sigma$ with $\Delta\Phi(x_{1},\ldots,x_{M})|_{x_{j}={\rm Str\,}\sigma^{j}}\neq 0.$ (6.18) With the differential operator $\mathfrak{D}_{r}=\left[D_{k_{2}r_{2}}^{(4/\beta)}\left(\imath e^{\imath\psi}\gamma_{1}\varepsilon\right)\right]^{N-k_{1}}\frac{\varphi_{k_{1}k_{2}}^{(\beta)}(-\imath r,x^{-}+J)}{V_{k}(r_{1},e^{\imath\psi}r_{2})},$ (6.19) we consider the difference of the generating functions $\displaystyle\Delta Z_{k}(x^{-}+J)$ $\displaystyle=$ $\displaystyle Z_{k}(x^{-}+J)|_{\Phi_{0}}-Z_{k}(x^{-}+J)|_{\Phi_{1}}=$ (6.20) $\displaystyle=$ $\displaystyle\int_{\mathbb{R}^{k_{1}}}|\Delta_{k_{1}}(r_{1})|^{\beta}{\det}^{\kappa}r_{1}\Theta(r_{1})\left.\mathfrak{D}_{r}\Delta\Phi(x)|_{x_{j}={\rm Str\,}r^{j}}\right|_{r_{2}=0}d[r_{1}]$ Here, we omit the Efetov–Wegner terms. The differential operator is invariant under the action of the permutation group $S(k_{2})$ on the fermionic block ${\rm Herm\,}(4/\beta,k_{2})$. Hence, we find $\displaystyle\left.\mathfrak{D}_{r}\Delta\Phi(x)|_{x_{j}={\rm Str\,}r^{j}}\right|_{r_{2}=0}$ $\displaystyle=$ $\displaystyle\left.\underset{|a|\leq k_{2}(N-k_{1})}{\sum\limits_{a\in\\{0,\ldots,N-k_{1}\\}^{M}}}d_{a}(r)\prod\limits_{j=1}^{M}\frac{\partial^{a_{j}}}{\partial x_{j}^{a_{j}}}\Delta\Phi(x)|_{x_{j}={\rm Str\,}r^{j}}\right|_{r_{2}=0}=$ (6.21) $\displaystyle=$ $\displaystyle\underset{|a|\leq k_{2}(N-k_{1})}{\sum\limits_{a\in\\{0,\ldots,N-k_{1}\\}^{M}}}d_{a}(r_{1})\prod\limits_{j=1}^{M}\frac{\partial^{a_{j}}}{\partial x_{j}^{a_{j}}}\Delta\Phi(x)|_{x_{j}=\tr r^{j}}=$ $\displaystyle=$ $\displaystyle 0,$ where $d_{a}$ are certain symmetric functions depending on the eigenvalues $r$. At $r_{2}=0$ these functions are well-defined since the supermatrix Bessel–functions and the term $V_{k}^{-1}(r_{1},e^{\imath\psi}r_{2})$ are $C^{\infty}$ at this point. Thus, we find that $\Delta Z_{k}(x^{-}+J)=0.$ (6.22) This means that the generating function is independent of the supersymmetric extension of the characteristic function. $\square$ ## 7 One–point and higher order correlation functions We need an explicit expression or some properties of the supermatrix Bessel–function to simplify the integral for the generating function. For $k=1$ we know the supermatrix Bessel–functions for all $\beta$. The simplest case is $\beta=2$, where we take the formula (6.12) with $k=1$ and obtain $R_{1}(x)=R^{({\rm fund})}(x,x)=\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\Phi_{0}(r)\frac{\mathfrak{F}_{N}\left(r,x\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2}\right)}{2\pi\imath}dr_{2}dr_{1}\ .$ (7.1) Since the Efetov–Wegner term in the generating function is just unity, there are no boundary terms in the level density.
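As a sanity check of Eq. (7.1), read as printed with the $r_{1}$ integral running over all of $\mathbb{R}$, take $N=1$, which is allowed since $N\geq\gamma_{1}k=1$. Then $\left(-e^{-\imath\psi}\partial/\partial r_{2}\right)^{N-1}$ is the identity, the Dirac–distribution in (6.10) sets $r_{2}=0$, and $\mathfrak{F}_{1}\left(r,x\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2}\right)$ reduces to $\imath\exp\left(-\imath r_{1}x\right)\delta(r_{2})$. Since ${\rm Str\,}\left[{\rm diag\,}(r_{1},0)\right]^{m}=r_{1}^{m}$, we have $\Phi_{0}(r_{1},0)=\mathcal{F}P(r_{1})$ for the $1\times 1$ matrix case and thus $R_{1}(x)=\frac{1}{2\pi}\int_{\mathbb{R}}\mathcal{F}P(r_{1})\exp\left(-\imath r_{1}x\right)dr_{1}=P(x)$, i.e. for a single level the level density is the probability density itself, as it must be.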
For $\beta\in\\{1,4\\}$ we use the supermatrix Bessel–function [29, 38, 14] $\displaystyle\varphi_{21}^{(1)}(-\imath r,x^{-}+J)$ $\displaystyle=$ $\displaystyle\frac{-2J}{\pi}\exp\left[-\imath{\rm Str\,}r(x^{-}+J)\right]\times$ (7.2) $\displaystyle\times$ $\displaystyle\left[\imath{\rm Str\,}r+J\left(r_{11}-e^{\imath\psi}r_{2}\right)\left(r_{21}-e^{\imath\psi}r_{2}\right)\right]\ .$ We find $\displaystyle\widehat{R}_{1}(x^{-})=\displaystyle-\imath\int\limits_{\mathbb{R}^{2}}\int\limits_{\mathbb{R}}\Phi_{0}(r)\det r_{1}^{(N-1)/2}{\rm Str\,}r\frac{|r_{11}-r_{21}|}{(r_{11}-e^{\imath\psi}r_{2})^{2}(r_{21}-e^{\imath\psi}r_{2})^{2}}\times$ $\displaystyle\times\displaystyle\exp\left(-\imath x^{-}{\rm Str\,}r\right)\Theta(r_{1})\frac{1}{(N-2)!}\left(-e^{-\imath\psi}\frac{\partial}{\partial r_{2}}\right)^{N-2}\delta(r_{2})d[r_{1}]dr_{2}$ (7.3) for $\beta=1$ and $\displaystyle\widehat{R}_{1}(x^{-})=\displaystyle-4\imath\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{2}}\Phi_{0}(r)r_{1}^{2N+1}{\rm Str\,}r\frac{e^{\imath\psi}r_{12}-e^{\imath\psi}r_{22}}{(r_{1}-e^{\imath\psi}r_{12})^{2}(r_{1}-e^{\imath\psi}r_{22})^{2}}\times$ $\displaystyle\times\exp\left(-\imath x^{-}{\rm Str\,}r\right)\Theta(r_{1})\frac{\det e^{\imath\psi}r_{2}}{(2N+1)!}\left(4e^{-2\imath\psi}D_{2,r_{2}}^{(1)}\right)^{N}\frac{\delta(r_{12})\delta(r_{22})}{e^{\imath\psi}r_{12}-e^{\imath\psi}r_{22}}d[r_{2}]dr_{1}$ (7.4) for $\beta=4$. The differential operator has the explicit form $D_{2,r_{2}}^{(1)}=\frac{\partial^{2}}{\partial r_{12}\partial r_{22}}-\frac{1}{2}\frac{1}{r_{12}-r_{22}}\left(\frac{\partial}{\partial r_{12}}-\frac{\partial}{\partial r_{22}}\right)\ .$ (7.5) For the level density we have $\displaystyle R_{1}(x)=\displaystyle-\frac{1}{2\pi}\int\limits_{\mathbb{R}^{2}}\int\limits_{\mathbb{R}}\Phi_{0}(r)\det r_{1}^{(N-1)/2}\exp\left(-\imath x{\rm Str\,}r\right){\rm Str\,}r\frac{|r_{11}-r_{21}|}{(r_{11}-e^{\imath\psi}r_{2})^{2}(r_{21}-e^{\imath\psi}r_{2})^{2}}\times$ $\displaystyle\times\displaystyle\left(\Theta(r_{1})+\Theta(-r_{1})\right)\frac{1}{(N-2)!}\left(-e^{-\imath\psi}\frac{\partial}{\partial r_{2}}\right)^{N-2}\delta(r_{2})d[r_{1}]dr_{2}$ (7.6) for $\beta=1$ and $\displaystyle R_{1}(x)=\displaystyle-\frac{2}{\pi}\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{2}}\Phi_{0}(r)r_{1}^{2N+1}\exp\left(-\imath x{\rm Str\,}r\right){\rm Str\,}r\frac{e^{\imath\psi}r_{12}-e^{\imath\psi}r_{22}}{(r_{1}-e^{\imath\psi}r_{12})^{2}(r_{1}-e^{\imath\psi}r_{22})^{2}}\times$ $\displaystyle\times\frac{\det e^{\imath\psi}r_{2}}{(2N+1)!}\left(4e^{-2\imath\psi}D_{2,r_{2}}^{(1)}\right)^{N}\frac{\delta(r_{12})\delta(r_{22})}{e^{\imath\psi}r_{12}-e^{\imath\psi}r_{22}}d[r_{2}]dr_{1}$ (7.7) for $\beta=4$. The equations (7.4) to (7.7) comprise all level–densities for arbitrary matrix ensembles invariant under orthogonal and unitary–symplectic rotations. As probability densities which do not factorize are included, these results considerably extend those obtained by orthogonal polynomials. For higher order correlation functions we use the definition (2.3) and the definition of the matrix Green’s function. 
With help of the quantities $L={\rm diag\,}(L_{1},\ldots,L_{k})\in\\{\pm 1\\}^{k}$ and $\widehat{L}=L\otimes\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{2\tilde{\gamma}}$, this yields $\displaystyle R_{k}(x)=\displaystyle 2^{2k(k-\tilde{\gamma})}\int\limits_{\mathbb{R}^{k_{1}}}\int\limits_{\mathbb{R}^{k_{2}}}\Phi_{0}(r)\underset{\epsilon\searrow 0}{\lim}\sum\limits_{L\in\\{\pm 1\\}^{k}}\prod\limits_{j=1}^{k}L_{j}\ \frac{I_{k}^{(\beta,N)}\left(\widehat{L}r\right)\exp\left(-\varepsilon{\rm Str\,}\widehat{L}r\right)}{\left(2\pi\imath e^{-\imath\psi\gamma_{1}}\right)^{k}}\times$ $\displaystyle\times\displaystyle\left.\left(\prod\limits_{j=1}^{k}-\frac{1}{2}\frac{\partial}{\partial J_{j}}\right)\varphi_{k_{1}k_{2}}^{(\beta)}(-\imath r,x^{(0)}+J)\right|_{J=0}\left|B_{k}^{(\beta)}(r_{1},e^{\imath\psi}r_{2})\right|d[r_{2}]d[r_{1}]+{\rm b.t.}$ (7.8) for analytic correlation functions. We extend this formula to all rotation invariant ensembles by the universality of the integral kernel. First, we approximate an arbitrary Schwartz–function describing a matrix ensemble by a uniformly convergent sequence of Schwartz–functions which are analytic in the real components of their entries. The Schwartz–functions are dense in a weak sense in the sets of Lebesgue–integrable functions $L^{p}$ and in the tempered distributions. Thus, we integrate Eq. (7.8) with an arbitrary Schwartz–function on $\mathbb{R}^{k}$ and take the limit of a sequence of Schwartz–functions describing the ensembles towards a tempered distribution, which completes the extension. ## 8 Remarks and conclusions We extended the method of the generalized Hubbard–Stratonovich transformation to arbitrary orthogonally and unitary–symplectically invariant random matrix ensembles. Due to a duality between ordinary and supersymmetric matrix spaces, the integral for the $k$–point correlation function is over a superspace. This integral was reduced to an eigenvalue integral for all probability densities, including those which do not factorize. The results are in terms of the characteristic function. Thus, the characteristic function has to be calculated for the ensemble in question. Since the matrix Bessel–functions of the ordinary orthogonal and unitary–symplectic group [39, 29, 40] and, thus, the supermatrix Bessel–functions of ${\rm UOSp\,}(2k/2k)$ are not known explicitly beyond $k=1$, we can not further simplify our results. However, we found the previously unknown determinantal structure of the Berezinian of ${\rm UOSp\,}(2k/2k)$. Up to the restriction $N\geq k_{1}$, formula (7.8) is exact for every $k$, $N$ and rotation invariant ensemble. Thus, it can serve not only as a starting point for universality considerations [7] but also for other studies. The expressions for the supersymmetric Ingham–Siegel integrals (5.20), (5.21) and (5.22) confirm the equivalence of the superbosonization formula [20, 11, 12] with our derivation. A proof of this equivalence for all $\beta$ is in progress. The comparison of the superbosonization formula [12, 11] with Eq. (5.1) shows that the crucial difference lies in the integration domain. However, the Dirac–distribution and the partial derivatives in the fermionic part imply a representation as a contour integral which is equivalent to the compact space used in the superbosonization formula. ## Acknowledgements We thank H. Kohler for clarifying remarks on the relation between the ordinary matrix Bessel–functions and the Jack–polynomials as well as on the Sekiguchi differential operators. We are also grateful to S.
Mandt, H.-J. Sommers and M.R. Zirnbauer for fruitful discussions. A big thank you goes to P. Heinzner and E. Vishnyakova for helpful advice on the Paley–Wiener theorem. We thank the referee for helpful remarks. We acknowledge financial support from the Deutsche Forschungsgemeinschaft within Sonderforschungsbereich Transregio 12 “Symmetries and Universality in Mesoscopic Systems” (M.K. and T.G.) and from Det Svenska Vetenskapsrådet (J.G.). ## Appendix A Circularity of the supertrace for rectangular supermatrices The circularity for rectangular matrices of pure commuting entries or anticommuting entries was derived by Berezin [18]. Since we have not found the general theorem for arbitrary rectangular supermatrices, we give this elementary statement here. ###### Statement A.1 Let the matrices $V_{1}$ and $V_{2}$ be the same as in Eq. (4.23). Then, we have ${\rm Str\,}V_{1}V_{2}={\rm Str\,}V_{2}V_{1}\ .$ (1.1) Derivation: We recall the circularity of the trace for rectangular matrices of commuting elements $\tr A_{1}A_{2}=\tr A_{2}A_{1}$ and its anticommuting analogue $\tr B_{1}B_{2}=-\tr B_{2}B_{1}$ which have been proven by Berezin [18]. We make the simple calculation $\displaystyle{\rm Str\,}V_{1}V_{2}$ $\displaystyle=$ $\displaystyle\tr A_{1}A_{2}+\tr B_{1}C_{2}-\tr C_{1}B_{2}-\tr D_{1}D_{2}$ (1.2) $\displaystyle=$ $\displaystyle\tr A_{2}A_{1}-\tr C_{2}B_{1}+\tr B_{2}C_{1}-\tr D_{2}D_{1}$ $\displaystyle=$ $\displaystyle{\rm Str\,}V_{2}V_{1}$ $\square$ For our purposes we must prove $\tr(V^{\dagger}V)^{m}={\rm Str\,}(VV^{\dagger})^{m}\ .$ (1.3) We define $V_{1}=V^{\dagger}$ and $V_{2}=(VV^{\dagger})^{m-1}V$ and get $a=2k$, $b=2k$, $c=\gamma_{2}N$ and $d=0$. Applying Statement A.1 and recalling that $\tr A={\rm Str\,}A$ for a matrix of commuting elements, identified with the Boson–Boson block, we obtain the desired result (1.3). ## Appendix B A matrix–Bessel version of the Sekiguchi differential operator We derive a version of the Sekiguchi differential operator for the ordinary matrix Bessel–functions $\varphi_{N}^{(\beta)}(y,x)$, based on the connection between the Jack–polynomials and the ordinary matrix Bessel–functions. The Sekiguchi differential operator is defined as [28] $\displaystyle D_{Nz}(u,\beta)=\Delta_{N}^{-1}(z)\det\left[z_{a}^{N-b}\left(z_{a}\frac{\partial}{\partial z_{a}}+(N-b)\frac{\beta}{2}+u\right)\right]_{1\leq a,b\leq N}=$ $\displaystyle=\Delta_{N}^{-1}(z)\det\left[\frac{\beta}{2}\left(z_{a}\frac{\partial}{\partial z_{a}}+u\right)z_{a}^{N-b}+\left(1-\frac{\beta}{2}\right)z_{a}^{N-b}\left(z_{a}\frac{\partial}{\partial z_{a}}+u\right)\right]_{1\leq a,b\leq N}\ .$ (2.1) Here, $u$ is a boost and the expansion parameter to generate the elementary polynomials in the Cherednik operators; for more explicit information see Ref. [41]. Let $J_{N}^{(\beta)}(n,z)$ be the Jack–polynomial with the partition $n_{1}\geq\ldots\geq n_{N}$ and the standard parameter $\alpha=\frac{2}{\beta}$ in Macdonald’s [42] notation.
The Jack–polynomials are eigenfunctions of $D_{Nz}(u,\beta)$, $D_{Nz}(u,\beta)J_{N}^{(\beta)}(n,z)=\prod\limits_{a=1}^{N}\left[n_{a}+(N-a)\frac{\beta}{2}+u\right]J_{N}^{(\beta)}(n,z)\ .$ (2.2) The aim is to find a similar differential operator for the ordinary matrix Bessel–function $\varphi_{N}^{(\beta)}(y,x)$ such that $\displaystyle D_{Nx}^{(\beta)}(B)\varphi_{N}^{(\beta)}\left(\frac{y}{\gamma_{2}},x\right)$ $\displaystyle=$ $\displaystyle\prod\limits_{a=1}^{N}\imath\left(y_{a}+B\right)\varphi_{N}^{(\beta)}\left(\frac{y}{\gamma_{2}},x\right)=$ (2.3) $\displaystyle=$ $\displaystyle{\det}^{1/\gamma_{2}}\imath(y+B\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\gamma_{2}N})\varphi_{N}^{(\beta)}\left(\frac{y}{\gamma_{2}},x\right).$ ###### Statement B.1 The differential operator which fulfils Eq. (2.3) is $D_{Nx}^{(\beta)}(B)=\Delta_{N}^{-1}(x)\det\left[x_{a}^{N-b}\left(\frac{\partial}{\partial x_{a}}+(N-b)\frac{\beta}{2}\frac{1}{x_{a}}+\imath B\right)\right]_{1\leq a,b\leq N}\ .$ (2.4) Derivation: Kohler [43] has presented a connection between the Jack–polynomials and the matrix Bessel–functions. Let $z_{a}=e^{\imath\frac{2\pi}{L}x_{a}}\ \ \ {\rm and}\ \ \ n_{a}=\frac{L}{2\pi}y_{a}-\left(\frac{N+1}{2}-a\right)\frac{\beta}{2}\ .$ (2.5) Then, it holds that $\varphi_{N}^{(\beta)}\left(\frac{y}{\gamma_{2}},x\right)=\underset{L\to\infty}{\rm lim}\left(\frac{\Delta_{N}(z)}{\Delta_{N}(x)\Delta_{N}(y)}\right)^{\beta/2}\prod\limits_{a=1}^{N}z_{a}^{-\beta(N-1)/4}J_{N}^{(\beta)}(n,z)\ .$ (2.6) We expand the determinant in Eq. (2.1) and have $\displaystyle D_{Nz}(u,\beta)=$ $\displaystyle=\Delta_{N}^{-1}(z)\sum\limits_{m\in\\{0,1\\}^{N}}\prod\limits_{a=1}^{N}\left[\frac{\beta}{2}\left(z_{a}\frac{\partial}{\partial z_{a}}+u\right)\right]^{m_{a}}\Delta_{N}(z)\prod\limits_{a=1}^{N}\left[\left(1-\frac{\beta}{2}\right)\left(z_{a}\frac{\partial}{\partial z_{a}}+u\right)\right]^{1-m_{a}}.$ (2.7) Using the substitution (2.5) and $\widetilde{\Delta}(x)=\prod\limits_{1\leq a<b\leq N}2\imath\sin\left(\frac{\pi}{L}(x_{a}-x_{b})\right)\exp\left(\imath\pi\frac{x_{a}+x_{b}}{L}\right)\ ,$ (2.8) we consider the limit $\displaystyle\underset{L\to\infty}{\lim}\left(\frac{2\pi\imath}{L}\right)^{N}D_{Nz}(u,\beta)=$ $\displaystyle=\underset{L\to\infty}{\lim}\frac{1}{\widetilde{\Delta}(x)}\sum\limits_{m\in\\{0,1\\}^{N}}\prod\limits_{a=1}^{N}\left[\frac{\beta}{2}\left(\frac{\partial}{\partial x_{a}}+\imath\frac{2\pi u}{L}\right)\right]^{m_{a}}\widetilde{\Delta}(x)\times$ $\displaystyle\times\prod\limits_{j=1}^{N}\left[\left(1-\frac{\beta}{2}\right)\left(\frac{\partial}{\partial x_{a}}+\imath\frac{2\pi u}{L}\right)\right]^{1-m_{a}}=$ $\displaystyle=\Delta_{N}^{-1}(x)\sum\limits_{m\in\\{0,1\\}^{N}}\prod\limits_{a=1}^{N}\left[\frac{\beta}{2}\left(\frac{\partial}{\partial x_{a}}+\imath B\right)\right]^{m_{a}}\Delta_{N}(x)\left[\left(1-\frac{\beta}{2}\right)\left(\frac{\partial}{\partial x_{a}}+\imath B\right)\right]^{1-m_{a}}=$ $\displaystyle=\Delta_{N}^{-1}(x)\det\left[\frac{\beta}{2}\left(\frac{\partial}{\partial x_{a}}+\imath B\right)x_{a}^{N-b}+\left(1-\frac{\beta}{2}\right)x_{a}^{N-b}\left(\frac{\partial}{\partial x_{a}}+\imath B\right)\right]_{1\leq a,b\leq N}=$ $\displaystyle=\Delta_{N}^{-1}(x)\det\left[x_{a}^{N-b}\left(\frac{\partial}{\partial x_{a}}+(N-b)\frac{\beta}{2}\frac{1}{x_{a}}+\imath B\right)\right]_{1\leq a,b\leq N}\ .$ (2.9) Here, we defined a boost $B=\underset{L\to\infty}{\lim}2\pi u/L$. The eigenvalue in Eq.
(2.2) is in the limit $\underset{L\to\infty}{\lim}\left(\frac{2\pi\imath}{L}\right)^{N}\prod\limits_{a=1}^{N}\left[n_{a}+(N-a)\frac{\beta}{2}+u\right]=\prod\limits_{a=1}^{N}\imath\left(y_{a}+B\right)={\det}^{1/\gamma_{2}}\imath(y+B\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\gamma_{2}N})\ .$ (2.10) We assume that Eq. (2.6) is a uniformly convergent limit. Thus, we combine (2.6), (2.9) and (2.10) with Eq. (2.2) and find Eq. (2.4). $\square$ Indeed, $D_{Nx}^{(\beta)}(B)$ is for the unitary case, $\beta=2$, $D_{Nx}^{(2)}(B)=\Delta_{N}^{-1}(x)\prod\limits_{a=1}^{N}\left(\frac{\partial}{\partial x_{a}}+\imath B\right)\Delta_{N}(x)\ .$ (2.11) ## Appendix C Calculation of the supersymmetric Ingham–Siegel integral In C.1, we compute the Ingham–Siegel integral. We derive the statements 5.1 and 5.2 in C.2 and C.3, respectively. ### C.1 Decomposition of the Boson–Boson and Fermion–Fermion block integration We split $\sigma$ in its Boson–Fermion block structure $\mathfrak{p}\sigma=\left[\begin{array}[]{cc}\sigma_{1}&e^{-\imath\psi/2}\sigma_{\eta}^{\dagger}\\\ e^{-\imath\psi/2}\sigma_{\eta}&e^{-\imath\psi}\sigma_{2}\end{array}\right]\ .$ (3.1) The following calculation must be understand in a weak sense. We first integrate over a conveniently integrable function and, then, perform the integral transformations. Hence, we understand $I_{k}^{(\beta,N)}$ as a distribution where we must fix the underlying set of test–functions. For our purposes, we need Schwartz–functions analytic in the real independent variables. Since the superdeterminant of $\mathfrak{p}\left(\sigma+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{4k}\right)$ is ${\rm Sdet\,}\mathfrak{p}\sigma^{+}=\frac{\det\left(\sigma_{1}+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)}{\det\left[e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{k}}-e^{-\imath\psi}\sigma_{\eta}\left(\sigma_{1}+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)^{-1}\sigma_{\eta}^{\dagger}\right]}$ (3.2) we shift $\sigma_{2}$ by analytic continuation to $\sigma_{2}+\sigma_{\eta}\left(\sigma_{1}+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)^{-1}\sigma_{\eta}^{\dagger}$ and obtain $\displaystyle I_{k}^{(\beta,N)}(\rho)$ $\displaystyle=$ $\displaystyle\int\limits_{\Sigma_{-\psi}^{0}(\beta,k)}\displaystyle{\rm exp}\left(-\imath\tr r_{1}\sigma_{1}+\imath\tr r_{2}\sigma_{2}+\imath\tr\left[r_{2}\sigma_{\eta}\left(\sigma_{1}+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)^{-1}\sigma_{\eta}^{\dagger}\right]\right)\times$ (3.3) $\displaystyle\times$ $\displaystyle\exp\left(\varepsilon{\rm Str\,}r\right)\left[\frac{{\det}\left(e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)}{{\det}\left(\sigma_{1}+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)}\right]^{N/\gamma_{1}}d[\sigma]\ .$ An integration over the Grassmann variables yields $\displaystyle I_{k}^{(\beta,N)}(\rho)$ $\displaystyle=$ $\displaystyle\left(\frac{-\imath\tilde{\gamma}}{2\pi}\right)^{k_{1}k_{2}}\exp\left(\varepsilon{\rm Str\,}r\right){\det}^{k}r_{2}\times$ (3.4) $\displaystyle\times$ $\displaystyle\int\limits_{{\rm Herm\,}(\beta,k_{1})}\exp\left(-\imath\tr r_{1}\sigma_{1}\right){\det}\left(\sigma_{1}+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 
1}_{\tilde{k}}\right)^{-N/\gamma_{1}-k}d[\sigma_{1}]\times$ $\displaystyle\times$ $\displaystyle\int\limits_{{\rm Herm\,}(4/\beta,k_{2})}\exp\left(\imath\tr r_{2}\sigma_{2}\right){\det}\left(e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)^{N/\gamma_{1}}d[\sigma_{2}]\ .$ With the help of Eq. (5.3) we have $\displaystyle I_{k}^{(\beta,N)}(\rho)$ $\displaystyle=$ $\displaystyle\imath^{-k_{2}N}G_{Nk_{1}}^{(\beta)}\left(-\frac{\tilde{\gamma}}{2\pi}\right)^{k_{1}k_{2}}\displaystyle{\det}^{\kappa}r_{1}\Theta(r_{1})\exp\left(-e^{\imath\psi}\varepsilon\tr r_{2}\right)\times$ (3.5) $\displaystyle\times$ $\displaystyle{\det}^{k}r_{2}\int\limits_{{\rm Herm\,}(4/\beta,k_{2})}\exp\left(\imath\tr r_{2}\sigma_{2}\right){\det}^{N/\gamma_{1}}\left(e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)d[\sigma_{2}]\ .$ The remaining integral over the Fermion–Fermion block $\sigma_{2}$, $\displaystyle\mathfrak{I}(r_{2})=\exp\left(-e^{\imath\psi}\varepsilon\tr r_{2}\right)\int\limits_{{\rm Herm\,}(4/\beta,k_{2})}\exp\left(\imath\tr r_{2}\sigma_{2}\right){\det}^{N/\gamma_{1}}\left(\sigma_{2}+\imath e^{\imath\psi}\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)d[\sigma_{2}]\ ,$ (3.6) is, up to a constant, a differential operator with respect to $r_{2}$ acting on the Dirac–distribution of $r_{2}$: for $\beta\in\\{1,2\\}$ the determinant term is a polynomial in $\sigma_{2}$, and for $\beta=4$ we use Kramers degeneracy. We give several representations of this distribution. We start with an eigenvalue–angle decomposition of $\sigma_{2}=Us_{2}U^{\dagger}$ where $s_{2}$ is diagonal and $U\in{\rm U\,}^{(4/\beta)}(k_{2})$. Integrating over the group ${\rm U\,}^{(4/\beta)}(k_{2})$, Eq. (3.6) becomes $\displaystyle\mathfrak{I}(r_{2})$ $\displaystyle=$ $\displaystyle\displaystyle\exp\left(-e^{\imath\psi}\varepsilon\tr r_{2}\right)g_{k_{2}}^{(4/\beta)}\times$ (3.7) $\displaystyle\times$ $\displaystyle\int\limits_{\mathbb{R}^{k_{2}}}\varphi_{k_{2}}^{(4/\beta)}(r_{2},s_{2}){\det}^{N/\gamma_{1}}\left(s_{2}+\imath e^{\imath\psi}\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)|\Delta_{k_{2}}(s_{2})|^{4/\beta}d[s_{2}].$ For more information about the ordinary matrix Bessel–function $\varphi_{k_{2}}^{(4/\beta)}(r_{2},s_{2})=\int\limits_{{\rm U\,}^{(4/\beta)}(k_{2})}\exp\left(\imath\tr r_{2}Us_{2}U^{\dagger}\right)d\mu(U)$ (3.8) with normalized Haar–measure $d\mu(U)$, see Refs. [39, 40]. The constant $g_{n}^{(\beta)}$ is defined by $\int\limits_{{\rm Herm\,}(\beta,n)}f(H)d[H]=g_{n}^{(\beta)}\int\limits_{\mathbb{R}^{n}}f(E)|\Delta_{n}(E)|^{\beta}d[E]$ (3.9) independently of the sufficiently integrable function $f$, which must be invariant under the action of ${\rm U\,}^{(\beta)}(n)$. The Gaussian distribution is such a function.
For the left-hand side we obtain $\int\limits_{{\rm Herm\,}(\beta,n)}\exp\left(-\tr H^{2}\right)d[H]=\gamma_{2}^{-n(2n-1)/2}2^{-\beta n(n-1)/4}\pi^{n/2+\beta n(n-1)/4}\ .$ (3.10) The integral on the right-hand side is equal to $\int\limits_{\mathbb{R}^{n}}\exp\left(-\gamma_{2}\sum\limits_{j=1}^{n}E_{j}^{2}\right)|\Delta_{n}(E)|^{\beta}d[E]=\left\\{\begin{array}[]{ll}2^{-n(n-5)/4}\prod\limits_{j=1}^{n}\Gamma\left(\frac{j}{2}+1\right)&,\ \beta=1,\\\ 2^{-n(n-1)/2}\pi^{n/2}\prod\limits_{j=1}^{n}\Gamma\left(j+1\right)&,\ \beta=2,\\\ 2^{-n(2n-1/2)}\pi^{n/2}\prod\limits_{j=1}^{n}\Gamma\left(2j+1\right)&,\ \beta=4,\end{array}\right.$ (3.11) see Mehta’s book [15]. Thus, we have $g_{n}^{(\beta)}=\frac{1}{n!}\prod\limits_{j=1}^{n}\frac{\pi^{\beta(j-1)/2}\Gamma\left(\beta/2\right)}{\Gamma\left(\beta j/2\right)}\ .$ (3.12) This constant is the quotient of the volumes of the permutation group $S(n)$ and of the flag manifold ${\rm U\,}^{(\beta)}(n)/[{\rm U\,}^{(\beta)}(1)]^{n}$, with the volume element defined as in Ref. [44] and denoted by ${\rm Vol}_{B}$. We plug the differential operator of App. B, Eq. (2.3), into Eq. (3.7) and obtain $\displaystyle\mathfrak{I}(r_{2})$ $\displaystyle=$ $\displaystyle g_{k_{2}}^{(4/\beta)}\exp\left(-e^{\imath\psi}\varepsilon\tr r_{2}\right)(\imath\gamma_{1})^{-k_{2}N}\times$ (3.13) $\displaystyle\times$ $\displaystyle\displaystyle\left[D_{k_{2}r_{2}}^{(4/\beta)}\left(\imath e^{\imath\psi}\gamma_{1}\varepsilon\right)\right]^{N}\int\limits_{\mathbb{R}^{k_{2}}}\phi_{k_{2}}^{(4/\beta)}(r_{2},s_{2})|\Delta_{k_{2}}(s_{2})|^{4/\beta}d[s_{2}]\ .$ The integration over the eigenvalues leads to the Dirac–distribution $\displaystyle\mathfrak{I}(r_{2})$ $\displaystyle=$ $\displaystyle\displaystyle\left(\frac{2\pi}{\gamma_{1}}\right)^{k_{2}}\left(\frac{\pi}{\gamma_{1}}\right)^{2k_{2}(k_{2}-1)/\beta}\frac{\exp\left(-e^{\imath\psi}\varepsilon\tr r_{2}\right)}{g_{k_{2}}^{(4/\beta)}}(\imath\gamma_{1})^{-k_{2}}\times$ (3.14) $\displaystyle\times$ $\displaystyle\displaystyle\left[D_{k_{2}r_{2}}^{(4/\beta)}\left(\imath e^{\imath\psi}\gamma_{1}\varepsilon\right)\right]^{N}\frac{\delta(r_{2})}{|\Delta_{k_{2}}(r_{2})|^{4/\beta}}$ and we find the representation (5.7) for the supersymmetric Ingham–Siegel integral. ### C.2 Derivation of statement 5.1 The boost $\imath e^{\imath\psi}\varepsilon$ in the determinant can simply be shifted away because of $D_{k_{2}r_{2}}^{(4/\beta)}\left(\imath e^{\imath\psi}\gamma_{1}\varepsilon\right)\exp\left(\varepsilon e^{\imath\psi}\tr r_{2}\right)=\exp\left(\varepsilon e^{\imath\psi}\tr r_{2}\right)D_{k_{2}r_{2}}^{(4/\beta)}(0)=\exp\left(\varepsilon e^{\imath\psi}\tr r_{2}\right)D_{k_{2}r_{2}}^{(4/\beta)}$ (3.15) and Eq. (3.14). Let $\mathfrak{S}$ be the set of ${\rm U\,}^{(4/\beta)}(k_{2})$–invariant Schwartz–functions ${\rm Herm\,}(4/\beta,k_{2})\rightarrow\mathbb{C}$.
The ordinary matrix Bessel–functions are complete and orthogonal in $\mathfrak{S}$ with respect to the sesquilinear scalar product $\langle f|f^{\prime}\rangle=\int\limits_{\mathbb{R}^{k_{2}}}f^{*}(x)f^{\prime}(x)|\Delta_{k_{2}}(x)|^{4/\beta}d[x]\ .$ (3.16) The completeness and orthogonality relations read $\displaystyle\langle\phi_{k_{2}}^{(4/\beta)}(x)|\phi_{k_{2}}^{(4/\beta)}(x^{\prime})\rangle$ $\displaystyle=$ $\displaystyle\int\limits_{\mathbb{R}^{k_{2}}}|\phi_{k_{2}}^{(4/\beta)}(y)\rangle\langle\phi_{k_{2}}^{(4/\beta)}(y)|\ |\Delta_{k_{2}}(y)|^{4/\beta}d[y]=$ (3.17) $\displaystyle=$ $\displaystyle\int\limits_{\mathbb{R}^{k_{2}}}\phi_{k_{2}}^{(4/\beta)}(y,x)\phi_{k_{2}}^{(4/\beta)*}(y,x^{\prime})|\Delta_{k_{2}}(y)|^{4/\beta}d[y]=$ $\displaystyle=$ $\displaystyle C_{k}^{(\beta)}\frac{1}{k_{2}!}\sum_{p\in S(k_{2})}\frac{\prod\limits_{j=1}^{k_{2}}\delta(x_{j}-x_{p(j)}^{\prime})}{|\Delta_{k_{2}}(x)|^{2/\beta}|\Delta_{k_{2}}(x^{\prime})|^{2/\beta}}$ where $S(n)$ is the permutation group of $n$ elements. We defined the constant $C_{k}^{(\beta)}=\left(\frac{2\pi}{\gamma_{1}}\right)^{k_{2}}\left(\frac{\pi}{\gamma_{1}}\right)^{2k_{2}(k_{2}-1)/\beta}\left(g_{k_{2}}^{(4/\beta)}\right)^{-2}\ .$ (3.18) Thus, we write $D_{k_{2}r_{2}}^{(4/\beta)}$ in the Bessel–function basis $\displaystyle\qquad\qquad D_{k_{2}}^{(4/\beta)}$ $\displaystyle=$ $\displaystyle{C_{k}^{(\beta)}\ }^{-2}\int\limits_{\mathbb{R}^{k_{2}}}|\phi_{k_{2}}^{(4/\beta)}(y)\rangle\langle\phi_{k_{2}}^{(4/\beta)}(y)|\ |\Delta_{k_{2}}(y)|^{4/\beta}d[y]\times$ (3.19) $\displaystyle\times$ $\displaystyle D_{k_{2}x}^{(4/\beta)}\int\limits_{\mathbb{R}^{k_{2}}}|\phi_{k_{2}}^{(4/\beta)}(y^{\prime})\rangle\langle\phi_{k_{2}}^{(4/\beta)}(y^{\prime})|\ |\Delta_{k_{2}}(y^{\prime})|^{4/\beta}d[y^{\prime}]=$ $\displaystyle=$ $\displaystyle{C_{k}^{(\beta)}\ }^{-1}\int\limits_{\mathbb{R}^{k_{2}}}{\det}(\imath\gamma_{1}y)^{1/\gamma_{1}}\phi_{k_{2}}^{(4/\beta)}(y,x)\phi_{k_{2}}^{(4/\beta)*}(y,x^{\prime})|\Delta_{k_{2}}(y)|^{4/\beta}d[y]$ with the action on a function $f\in\mathfrak{S}$ $\displaystyle D_{k_{2}}^{(4/\beta)}|f\rangle$ $\displaystyle=$ $\displaystyle{C_{k}^{(\beta)}\ }^{-1}\int\limits_{\mathbb{R}^{k_{2}}}\int\limits_{\mathbb{R}^{k_{2}}}{\det}(\imath\gamma_{1}y)^{1/\gamma_{1}}\phi_{k_{2}}^{(4/\beta)}(y,x)\phi_{k_{2}}^{(4/\beta)*}(y,x^{\prime})f(x^{\prime})\times$ (3.20) $\displaystyle\times$ $\displaystyle|\Delta_{k_{2}}(x^{\prime})|^{4/\beta}|\Delta_{k_{2}}(y)|^{4/\beta}d[x^{\prime}]d[y]\ .$ Due to this representation of the Sekiguchi differential operator analog, $\imath^{k_{2}}D_{k_{2}}^{(4/\beta)}$ is symmetric with respect to the scalar product (3.16), $\langle f|\imath^{k_{2}}D_{k_{2}}^{(4/\beta)}|f^{\prime}\rangle=\langle\imath^{k_{2}}D_{k_{2}}^{(4/\beta)}f|f^{\prime}\rangle\ .$ (3.21) Let $L$ be a real number. Then, we easily see with the help of Eq.
(2.4) $D_{k_{2}x}^{(4/\beta)}\det x^{L/\gamma_{1}}=\prod\limits_{b=1}^{k_{2}}\left(L+\frac{2}{\beta}b-\frac{2}{\beta}\right)\det x^{(L-1)/\gamma_{1}}\ .$ (3.22) Using the property (3.21), we obtain for a function $f\in\mathfrak{S}$ $\displaystyle\int\limits_{\mathbb{R}^{k_{2}}}\det x^{L/\gamma_{1}}|\Delta_{k_{2}}(x)|^{4/\beta}D_{k_{2}x}^{(4/\beta)}f(x)d[x]=$ (3.23) $\displaystyle=$ $\displaystyle(-1)^{k_{2}}\int\limits_{\mathbb{R}^{k_{2}}}f(x)|\Delta_{k_{2}}(x)|^{4/\beta}D_{k_{2}x}^{(4/\beta)}\det x^{L/\gamma_{1}}d[x]=$ $\displaystyle=$ $\displaystyle(-1)^{k_{2}}\prod\limits_{b=1}^{k_{2}}\left(L+\frac{2}{\beta}b-\frac{2}{\beta}\right)\int\limits_{\mathbb{R}^{k_{2}}}f(x)|\Delta_{k_{2}}(x)|^{4/\beta}\det x^{(L-1)/\gamma_{1}}d[x]\ .$ The boundary terms of the partial integration do not appear because $f$ is a Schwartz–function and $D_{k_{2}x}^{(4/\beta)}$ has the representation (3.19). Let $F$ and $f$ be the functions of statement 5.1. Then, we calculate $\displaystyle\int\limits_{\mathbb{R}^{k_{2}}}\int\limits_{{\rm Herm\,}(4/\beta,k_{2})}F(r_{2}){\det}^{k}r_{2}|\Delta_{k_{2}}(r_{2})|^{4/\beta}\exp\left(\imath\tr r_{2}\sigma_{2}\right){\det}^{N/\gamma_{1}}\left(e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)d[\sigma_{2}]d[r_{2}]=$ $\displaystyle=\int\limits_{\mathbb{R}^{k_{2}}}\int\limits_{{\rm Herm\,}(4/\beta,k_{2})}f(r_{2}){\det}^{N/\gamma_{1}}r_{2}|\Delta_{k_{2}}(r_{2})|^{4/\beta}\exp\left(\imath\tr r_{2}\sigma_{2}\right)\times$ $\displaystyle\times{\det}^{N/\gamma_{1}}\left(e^{-\imath\psi}\sigma_{2}+\imath\varepsilon\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{k}}\right)d[\sigma_{2}]d[r_{2}]=$ $\displaystyle=\left(\frac{-\imath e^{-\imath\psi}}{\gamma_{1}}\right)^{k_{2}N}g_{k_{2}}^{(4/\beta)}\int\limits_{\mathbb{R}^{k_{2}}}\int\limits_{\mathbb{R}^{k_{2}}}f(r_{2})\exp\left(\varepsilon e^{\imath\psi}\tr r_{2}\right)|\Delta_{k_{2}}(r_{2})|^{4/\beta}\times$ $\displaystyle\times{\det}^{N/\gamma_{1}}s_{2}|\Delta_{k_{2}}(s_{2})|^{4/\beta}\left(D_{k_{2}s_{2}}^{(4/\beta)}\right)^{N}\phi_{k_{2}}^{(4/\beta)}(r_{2},s_{2})d[s_{2}]d[r_{2}]=$ $\displaystyle=(\imath e^{-\imath\psi})^{k_{2}N}g_{k_{2}}^{(4/\beta)}\prod\limits_{a=1}^{N}\prod\limits_{b=1}^{k_{2}}\left(\frac{a}{\gamma_{1}}+\frac{b-1}{\gamma_{2}}\right)\times$ $\displaystyle\times\int\limits_{\mathbb{R}^{k_{2}}}\int\limits_{\mathbb{R}^{k_{2}}}f(r_{2})\exp\left(\varepsilon e^{\imath\psi}\tr r_{2}\right)|\Delta_{k_{2}}(r_{2})|^{4/\beta}|\Delta_{k_{2}}(s_{2})|^{4/\beta}\phi_{k_{2}}^{(4/\beta)}(r_{2},s_{2})d[s_{2}]d[r_{2}]=$ $\displaystyle=\left(\frac{2\pi}{\gamma_{1}}\right)^{k_{2}}\left(\frac{\pi}{\gamma_{1}}\right)^{2k_{2}(k_{2}-1)/\beta}\frac{\left(\imath e^{-\imath\psi}\right)^{k_{2}N}}{g_{k_{2}}^{(4/\beta)}\gamma_{1}^{k_{2}N}}\prod_{j=0}^{k_{2}-1}\frac{\Gamma\left(N+1+2j/\beta\right)}{\Gamma\left(1+2j/\beta\right)}f(0)\ .$ (3.24) The second equality in Eq. (5.13) holds because of $f(0)=\left.\prod\limits_{j=1}^{k_{2}}\frac{1}{\left(N-k_{1}\right)!}\left(\frac{\partial}{\partial r_{j2}}\right)^{N-k_{1}}\left[f(r_{2})\exp\left(\varepsilon e^{\imath\psi}\tr r_{2}\right)\det r_{2}^{N/\gamma_{1}-k}\right]\right|_{r_{2}=0}.$ (3.25) The function in the bracket is $F$ times the exponential term ${\rm exp}\left(\varepsilon e^{\imath\psi}\tr r_{2}\right)$.
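As a quick numerical cross-check — an illustration we add here, not part of the derivation — the combinatorial factor produced by applying Eq. (3.22) $N$ times in Eq. (3.24) amounts to the Pochhammer-type identity $\prod_{a=1}^{N}\prod_{b=1}^{k_{2}}\left(a+\tfrac{2}{\beta}(b-1)\right)=\prod_{j=0}^{k_{2}-1}\Gamma(N+1+2j/\beta)/\Gamma(1+2j/\beta)$. The short Python sketch below verifies this for a few sample values of $(\beta,N,k_{2})$; the helper names are ours.

```python
# Cross-check of the Gamma-function identity underlying the last step of
# Eq. (3.24): the double product over the eigenvalues of Eq. (3.22)
# telescopes, for each column b, into a ratio of Gamma functions.
import math

def double_product(N, k2, beta):
    out = 1.0
    for a in range(1, N + 1):
        for b in range(1, k2 + 1):
            out *= a + 2.0 * (b - 1) / beta
    return out

def gamma_ratio(N, k2, beta):
    # prod_{a=1}^{N} (a + c) = Gamma(N + 1 + c) / Gamma(1 + c), with c = 2j/beta
    out = 1.0
    for j in range(k2):
        c = 2.0 * j / beta
        out *= math.exp(math.lgamma(N + 1 + c) - math.lgamma(1 + c))
    return out

for beta in (1, 2, 4):
    for N, k2 in [(3, 2), (5, 3), (8, 4)]:
        assert math.isclose(double_product(N, k2, beta),
                            gamma_ratio(N, k2, beta), rel_tol=1e-10)
print("Gamma-ratio identity in Eq. (3.24) confirmed for the sampled cases")
```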
### C.3 Derivation of statement 5.2 We have to show $\displaystyle\int\limits_{{\rm Herm\,}(4/\beta,k_{2})}\int\limits_{{\rm Herm\,}(4/\beta,k_{2})}F(\rho_{2}){\det}^{k}\rho_{2}\exp\left(\imath\tr\rho_{2}\sigma_{2}\right){\det}^{N/\gamma_{1}}\sigma_{2}d[\sigma_{2}]d[\rho_{2}]\sim$ (3.26) $\displaystyle\sim$ $\displaystyle\int\limits_{\mathbb{R}^{k_{2}}}F(r_{2})\prod\limits_{j=1}^{k}\left(-\frac{\partial}{\partial r_{j2}}\right)^{N-2/\beta}\delta(r_{j2})d[r_{2}]$ for every rotation-invariant Schwartz–function $F:{\rm Herm\,}(4/\beta,k_{2})\rightarrow\mathbb{C}$ and $\beta\in\\{1,2\\}$. Due to $\displaystyle\int\limits_{{\rm Herm\,}(4/\beta,k_{2})}\exp\left(\imath\tr r_{2}\sigma_{2}\right){\det}\sigma_{2}^{N/\gamma_{1}}d[\sigma_{2}]$ $\displaystyle\sim$ $\displaystyle\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{4(k_{2}-1)/\beta}}y^{N}{\rm exp}\left[\imath r_{k_{2}2}\tr(y\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{\gamma}}+v^{\dagger}v)\right]d[v]dy\times$ (3.27) $\displaystyle\times$ $\displaystyle\int\limits_{{\rm Herm\,}(4/\beta,k_{2}-1)}\exp\left(\imath\tr\tilde{r}_{2}\tilde{\sigma}_{2}\right){\det}\tilde{\sigma}_{2}^{(N+2/\beta)/\gamma_{1}}d[\tilde{\sigma}_{2}]$ with the decompositions $r_{2}={\rm diag\,}\left(\tilde{r}_{2},r_{k_{2}2}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{\gamma}}\right)$ and $\sigma_{2}=\left[\begin{array}[]{cc}\tilde{\sigma}_{2}&v\\\ v^{\dagger}&y\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\tilde{\gamma}}\end{array}\right]\ ,$ (3.28) we can proceed by complete induction. Thus, we reduce the derivation to $\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{4(k_{2}-1)/\beta}}f(x)x^{k_{1}}y^{N}{\rm exp}\left[\imath x\tr(y+v^{\dagger}v)\right]d[v]dydx\sim\int\limits_{\mathbb{R}}f(x)\frac{\partial^{N-2/\beta}}{\partial x^{N-2/\beta}}\delta(x)d[x]$ (3.29) where $f:\mathbb{R}\rightarrow\mathbb{C}$ is a Schwartz–function. The function $\tilde{f}(y)=\int\limits_{\mathbb{R}}f(x)x^{k_{1}}\exp\left(\imath xy\right)dx$ (3.30) is also a Schwartz–function. Hence, we compute $\displaystyle\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{4(k_{2}-1)/\beta}}f(x)x^{k_{1}}y^{N}{\rm exp}\left[\imath x\tr(y+v^{\dagger}v)\right]d[v]dydx=$ (3.31) $\displaystyle=$ $\displaystyle\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{4(k_{2}-1)/\beta}}\tilde{f}\left[\tr(y+v^{\dagger}v)\right]y^{N}d[v]dy=$ $\displaystyle=$ $\displaystyle\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{4(k_{2}-1)/\beta}}y^{N-2(k_{2}-1)/\beta}\left(-\frac{\partial}{\partial y}\right)^{2(k_{2}-1)/\beta}\tilde{f}\left(\tr(y+v^{\dagger}v)\right)d[v]dy\sim$ $\displaystyle\sim$ $\displaystyle\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^{+}}\tilde{v}^{2(k_{2}-1)/\beta-1}\left(-\frac{\partial}{\partial\tilde{v}}\right)^{2(k_{2}-1)/\beta}\tilde{f}\Bigl{(}\tr(y+\tilde{v})\Bigr{)}y^{N-2(k_{2}-1)/\beta}d\tilde{v}dy\sim$ $\displaystyle\sim$ $\displaystyle\int\limits_{\mathbb{R}}\tilde{f}\left(\tr y\right)y^{N-2(k_{2}-1)/\beta}dy\sim$ $\displaystyle\sim$ $\displaystyle\int\limits_{\mathbb{R}}f(x)x^{k_{1}}\left(-\frac{\partial}{\partial x}\right)^{N-2(k_{2}-1)/\beta}\delta(x)dx\sim$ $\displaystyle\sim$ $\displaystyle\int\limits_{\mathbb{R}}f(x)\frac{\partial^{N-2/\beta}}{\partial x^{N-2/\beta}}\delta(x)d[x]\ ,$ which is well defined for $\beta\in\\{1,2\\}$. ## Appendix D Determinantal structure of the ${\rm UOSp\,}(2k/2k)$–Berezinian ###### Statement D.1 Let $k\in\mathbb{N}$, $x_{1}\in\mathbb{C}^{2k}$ and $x_{2}\in\mathbb{C}^{k}$.
$x_{1}$ and $x_{2}$ satisfy the condition $x_{a1}-x_{b2}\neq 0\ \ ,\ \forall a\in\\{1,\ldots,2k\\}\ \wedge\ b\in\\{1,\ldots,k\\}\ .$ (4.1) Then, we have $\frac{\Delta_{2k}(x_{1})\Delta_{k}^{4}(x_{2})}{V_{k}^{2}(x_{1},x_{2})}=(-1)^{k(k-1)/2}\det\left[\left\\{\frac{1}{x_{a1}-x_{b2}}\right\\}\underset{{1\leq b\leq k}}{\underset{1\leq a\leq 2k}{}},\left\\{\frac{1}{(x_{a1}-x_{b2})^{2}}\right\\}\underset{{1\leq b\leq k}}{\underset{1\leq a\leq 2k}{}}\right].$ (4.2) We prove this statement by complete induction. Derivation: We rearrange the determinant by exchanging the columns $\displaystyle\det\left[\left\\{\frac{1}{x_{a1}-x_{b2}}\right\\}\underset{{1\leq b\leq k}}{\underset{1\leq a\leq 2k}{}}\ ,\ \left\\{\frac{1}{(x_{a1}-x_{b2})^{2}}\right\\}\underset{{1\leq b\leq k}}{\underset{1\leq a\leq 2k}{}}\right]=$ $\displaystyle=(-1)^{k(k-1)/2}\det\left[\frac{1}{x_{a1}-x_{b2}}\ ,\ \frac{1}{(x_{a1}-x_{b2})^{2}}\right]\underset{{1\leq b\leq k}}{\underset{1\leq a\leq 2k}{}}\ .$ (4.3) Thus, the minus sign in Eq. (4.2) cancels out. We find for $k=1$ $\det\left[\begin{array}[]{cc}\displaystyle\frac{1}{x_{11}-x_{2}}&\displaystyle\frac{1}{(x_{11}-x_{2})^{2}}\\\ \displaystyle\frac{1}{x_{21}-x_{2}}&\displaystyle\frac{1}{(x_{21}-x_{2})^{2}}\end{array}\right]=\frac{(x_{11}-x_{21})}{(x_{11}-x_{2})^{2}(x_{21}-x_{2})^{2}}\ .$ (4.4) We assume that the statement holds for $k-1$. Let $\displaystyle s$ $\displaystyle=$ $\displaystyle\left[\frac{1}{x_{a1}-x_{b2}}\ ,\ \frac{1}{(x_{a1}-x_{b2})^{2}}\right]\underset{{1\leq b\leq k}}{\underset{1\leq a\leq 2k}{}}=\left[\begin{array}[]{cc}s_{1}&w\\\ v&s_{2}\end{array}\right]\ ,$ (4.7) $\displaystyle s_{1}$ $\displaystyle=$ $\displaystyle\left[\begin{array}[]{cc}\displaystyle\frac{1}{x_{11}-x_{12}}&\displaystyle\frac{1}{(x_{11}-x_{12})^{2}}\\\ \displaystyle\frac{1}{x_{21}-x_{12}}&\displaystyle\frac{1}{(x_{21}-x_{12})^{2}}\end{array}\right]\ ,$ (4.10) $\displaystyle s_{2}$ $\displaystyle=$ $\displaystyle\left[\frac{1}{x_{a1}-x_{b2}}\ ,\ \frac{1}{(x_{a1}-x_{b2})^{2}}\right]\underset{{2\leq b\leq k}}{\underset{3\leq a\leq 2k}{}}\ ,$ (4.11) $\displaystyle v$ $\displaystyle=$ $\displaystyle\left[\frac{1}{x_{a1}-x_{12}}\ ,\ \frac{1}{(x_{a1}-x_{12})^{2}}\right]_{3\leq a\leq 2k}\ {\rm and}$ (4.12) $\displaystyle w$ $\displaystyle=$ $\displaystyle\left[\begin{array}[]{cc}\displaystyle\frac{1}{x_{11}-x_{b2}}&\displaystyle\frac{1}{(x_{11}-x_{b2})^{2}}\\\ \displaystyle\frac{1}{x_{21}-x_{b2}}&\displaystyle\frac{1}{(x_{21}-x_{b2})^{2}}\end{array}\right]_{2\leq b\leq k}\ .$ (4.15) Then, we have $\det s=\det s_{1}\det(s_{2}-vs_{1}^{-1}w)\overset{(D.4)}{=}\frac{(x_{11}-x_{21})}{(x_{11}-x_{12})^{2}(x_{21}-x_{12})^{2}}\det(s_{2}-vs_{1}^{-1}w)\ .$ (4.16) The matrix in the determinant is equal to $(s_{2}-vs_{1}^{-1}w)^{T}=\left[\begin{array}[]{c}\displaystyle\frac{(x_{11}-x_{a1})(x_{21}-x_{a1})(x_{12}-x_{b2})^{2}}{(x_{a1}-x_{12})^{2}(x_{11}-x_{b2})(x_{21}-x_{b2})}\frac{1}{x_{a1}-x_{b2}}\\\ \\\ \displaystyle\frac{(x_{11}-x_{a1})(x_{21}-x_{a1})(x_{12}-x_{b2})}{(x_{a1}-x_{12})^{2}(x_{11}-x_{b2})^{2}(x_{21}-x_{b2})^{2}}\frac{P_{ab}}{(x_{a1}-x_{b2})^{2}}\end{array}\right]\underset{{2\leq b\leq k}}{\underset{3\leq a\leq 2k}{\underset{}{\underset{}{\underset{}{}}}}}$ (4.17) where $P_{ab}$ is a polynomial $\displaystyle P_{ab}=(x_{a1}-x_{b2})(x_{11}-x_{b2})(x_{12}-x_{b2})-(x_{a1}-x_{12})(x_{11}-x_{b2})(x_{21}-x_{b2})-$ $\displaystyle-(x_{21}-x_{b2})(x_{a1}-x_{b2})(x_{11}-x_{12})=$ $\displaystyle=(x_{11}-x_{b2})(x_{21}-x_{b2})(x_{12}-x_{b2})+$
$\displaystyle+(x_{a1}-x_{b2})\left[(x_{11}+x_{21})(x_{12}+x_{b2})-2x_{11}x_{21}-2x_{12}x_{b2}\right]=$ $\displaystyle=A_{b}^{(1)}+(x_{a1}-x_{b2})A_{b}^{(2)}\ .$ (4.18) The polynomials $A_{b}^{(1)}$ and $A_{b}^{(2)}$ are independent of the index $a$. Due to the multilinearity and the skew symmetry of the determinant, the result is $\det s=\frac{(x_{11}-x_{21})}{(x_{11}-x_{12})^{2}(x_{21}-x_{12})^{2}}\frac{\prod\limits_{a=3}^{2k}(x_{11}-x_{a1})(x_{21}-x_{a1})\prod\limits_{b=2}^{k}(x_{12}-x_{b2})^{4}}{\prod\limits_{a=3}^{2k}(x_{a1}-x_{12})^{2}\prod\limits_{b=2}^{k}(x_{11}-x_{b2})^{2}(x_{21}-x_{b2})^{2}}\det s_{2}$ (4.19) which completes the induction. $\square$ ## Appendix E Derivation of statement 4.1 Let $\lambda$ be the sought eigenvalue; it is a commuting variable of the Grassmann algebra constructed from the $\\{\tau_{q}^{(p)},\tau_{q}^{(p)*}\\}_{p,q}$. Then, we split this eigenvalue into its body $\lambda^{(0)}$ and its soul $\lambda^{(1)}$, i.e. $\lambda=\lambda^{(0)}+\lambda^{(1)}$. Let $v$ be the $\gamma_{2}N$–dimensional eigenvector of $H$ such that $Hv=\lambda v{\rm\ \ and\ \ }v^{\dagger}v=1\ .$ (5.1) In this equation, we recognize at lowest order in the Grassmann variables that $\lambda^{(0)}$ is an eigenvalue of $H^{(0)}$. Then, let $\lambda^{(0)}$ be an eigenvalue of $H^{(0)}$ with the highest degeneracy $\delta$, i.e. $\delta={\rm dim\ ker}(H^{(0)}-\lambda^{(0)}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N})$. Without loss of generality, we assume that $H^{(0)}$ is diagonal and that the eigenvalue $\lambda^{(0)}$ only appears in the upper left $\delta\times\delta$–matrix block, $H^{(0)}=\left[\begin{array}[]{cc}\lambda^{(0)}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\delta}&0\\\ 0&\widetilde{H}^{(0)}\end{array}\right]\ .$ (5.2) We also split the vectors into $\delta$– and $(N-\delta)$–dimensional parts, $v^{(0)}=\left[\begin{array}[]{c}v_{1}\\\ v_{2}\end{array}\right]{\rm\ \ and\ \ }\tau_{q}=\left[\begin{array}[]{c}\tau_{q1}\\\ \tau_{q2}\end{array}\right]\ .$ (5.3) Thus, we find from (5.1) the two equations $\displaystyle T_{11}v_{1}-\lambda^{(1)}v_{1}+T_{12}v_{2}$ $\displaystyle=$ $\displaystyle 0\ ,$ (5.4) $\displaystyle T_{21}v_{1}+\left[\widetilde{H}^{(0)}-\lambda\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N-\delta}+T_{22}\right]v_{2}$ $\displaystyle=$ $\displaystyle 0$ (5.5) where $T_{nm}=\sum\limits_{q=1}^{\widetilde{N}}l_{q}\left[\tau_{qn}\tau_{qm}^{\dagger}+\widetilde{Y}\left(\tau_{qn}^{*}\tau_{qm}^{T}\right)\right]$. Eq. (5.5) yields $v_{2}=-\left[\widetilde{H}^{(0)}-\lambda\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N-\delta}+T_{22}\right]^{-1}T_{21}v_{1}\ .$ (5.6) Hence, the body of $v_{2}$ is zero, and Eq. (5.4) becomes $T_{11}v_{1}-\lambda^{(1)}v_{1}-T_{12}\left[\widetilde{H}^{(0)}-\lambda\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N-\delta}+T_{22}\right]^{-1}T_{21}v_{1}=0\ .$ (5.7) If the degeneracy is $\delta>\gamma_{2}$, we consider a $\delta$–dimensional real vector $w\neq 0$ such that $w^{\dagger}v_{1}=0$. Multiplying Eq. (5.7) by $w^{\dagger}$ and keeping the lowest order in the Grassmann variables, we get $w^{\dagger}T_{11}v_{1}^{(0)}=0$ (5.8) where $v_{1}^{(0)}$ is the body of $v_{1}$. The entries of $w^{\dagger}T_{11}$ are linearly independent. Thus, the body of $v_{1}$ is also zero. This violates the normalization condition in (5.1). Now let the degeneracy be $\delta=\gamma_{2}$. Then, $v_{1}$ is $\gamma_{2}$–dimensional and normalizable. For $\beta=4$, we have the quaternionic case and the matrix acting on $v_{1}$ in Eq.
(5.7) is a diagonal quaternion. Hence, we must have $\lambda^{(1)}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\gamma_{2}}=T_{11}-T_{12}\left[\widetilde{H}^{(0)}-\lambda\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N-\delta}+T_{22}\right]^{-1}T_{21}\ .$ (5.9) Considering the second order term in the Grassmann variables of Eq. (5.9), $\lambda$’s second order term is $T_{11}$ for $\beta\in\\{1,2\\}$ and $\tr T_{11}/2$ for $\beta=4$. Eq. (5.9) is uniquely solvable by a recursive calculation. We plug the right-hand side of Eq. (5.9) into the $\lambda^{(1)}$ on the same side and repeat this procedure. Hence, we define the operator $\displaystyle O(\mu)$ $\displaystyle=$ $\displaystyle\frac{1}{\gamma_{2}}\tr\left\\{T_{11}-T_{12}\left[\widetilde{H}^{(0)}-(\lambda^{(0)}+\mu)\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{N-\delta}+T_{22}\right]^{-1}T_{21}\right\\}{\rm\ and}$ (5.10) $\displaystyle O^{n+1}(\mu)$ $\displaystyle=$ $\displaystyle O\left[O^{n}(\mu)\right]\ .$ (5.11) Then, $\lambda^{(1)}=O^{n}(\lambda^{(1)})$ holds for arbitrary $n\in\mathbb{N}$. The recursion is finished for $n_{0}\in\mathbb{N}$ if $\lambda^{(1)}=O^{n_{0}}(\lambda^{(1)})=O^{n_{0}}(0)$. Due to the nilpotency of the Grassmann variables, this recursion terminates after at most $\gamma_{2}N\widetilde{N}/2$ steps. Thus, the eigenvalue $\lambda$ depends on the Grassmann variables and is not a real number. ## References * [1] K.B. Efetov. Adv. Phys., 32:53, 1983. * [2] J.J.M. Verbaarschot, H.A. Weidenmüller, and M.R. Zirnbauer. Phys. Rep., 129:367, 1985. * [3] K.B. Efetov. Supersymmetry in Disorder and Chaos. Cambridge University Press, Cambridge, 1st edition, 1997. * [4] T. Guhr, A. Müller-Groeling, and H.A. Weidenmüller. Phys. Rep., 299:189, 1998. * [5] T. Guhr. J. Phys., A 39:13191, 2006. * [6] N. Lehmann, D. Saher, V.V. Sokolov, and H.-J. Sommers. Nucl. Phys., A 582:223, 1995. * [7] G. Hackenbroich and H.A. Weidenmüller. Phys. Rev. Lett., 74:4118, 1995. * [8] K.B. Efetov, G. Schwiete, and K. Takahashi. Phys. Rev. Lett., 92:026807, 2004. * [9] K.B. Efetov and V.R. Kogan. Phys. Rev., B 70:195326, 2004. * [10] F. Basile and G. Akemann. JHEP, 0712:043, 2007. * [11] H.-J. Sommers. Acta Phys. Pol., B 38:1001, 2007. * [12] P. Littelmann, H.-J. Sommers, and M.R. Zirnbauer. Commun. Math. Phys., 283:343, 2008. * [13] T. Jiang. J. Math. Phys., 46:052106, 2005. * [14] M. Kieburg, H. Kohler, and T. Guhr. J. Math. Phys., 50:013528, 2009. * [15] M.L. Mehta. Random Matrices and the Statistical Theory of Energy Levels. Academic Press Inc., New York, 1st edition, 1967. * [16] L. Hörmander. The Analysis of Linear Partial Differential Operators. Springer, Berlin, Heidelberg, New York, 1976. * [17] M.R. Zirnbauer. The Supersymmetry Method of Random Matrix Theory, Encyclopedia of Mathematical Physics, eds. J.-P. Françoise, G.L. Naber and Tsou S.T., Elsevier, Oxford, 5:151, 2006. * [18] F.A. Berezin. Introduction to Superanalysis. D. Reidel Publishing Company, Dordrecht, 1st edition, 1987. * [19] B. DeWitt. Supermanifolds. Cambridge University Press, Cambridge, 1st edition, 1984. * [20] J.E. Bunder, K.B. Efetov, K.B. Kravtsov, O.M. Yevtushenko, and M.R. Zirnbauer. J. Stat. Phys., 129:809, 2007. * [21] B.L. van der Waerden. Algebra I. Springer, Berlin, Heidelberg, New York, 8th edition, 1971. * [22] H. Kohler and T. Guhr. J. Phys., A 38:9891, 2005. * [23] M.R. Zirnbauer. J. Math. Phys., 37:4986, 1996. * [24] A.E. Ingham. Proc. Camb. Phil. Soc., 29:271, 1933. * [25] C.L. Siegel. Ann. Math., 36:527, 1935.
* [26] Y.V. Fyodorov. Nucl. Phys., B 621:643, 2002. * [27] M.L. Mehta. Random Matrices. Academic Press Inc., New York, 2nd edition, 1991. * [28] A. Okounkov and G. Olshanski. Math. Res. Letters, 4:69, 1997. * [29] T. Guhr and H. Kohler. J. Math. Phys., 43:2741, 2002. * [30] T. Guhr. Commun. Math. Phys., 176:555, 1996. * [31] T. Guhr. Ann. Phys. (N.Y.), 250:145, 1996. * [32] T. Guhr. J. Math. Phys., 32:336, 1991. * [33] T. Guhr. J. Math. Phys., 34:2523, 1993. * [34] M.J. Rothstein. Trans. Am. Math. Soc., 299:387, 1987. * [35] F. Wegner, 1983. * [36] F. Constantinescu. J. Stat. Phys., 50:1167, 1988. * [37] F. Constantinescu and H.F. de Groote. J. Math. Phys., 30:981, 1989. * [38] E. Brezin and S. Hikami. J. Phys. A: Math. Gen., 36:711, 2003. * [39] T. Guhr and H. Kohler. J. Math. Phys., 43:2707, 2002. * [40] M. Bergère and B. Eynard, 2008. arxiv:0805.4482v1 [math-ph]. * [41] B. Feigin, M. Jimbo, T. Miwa, and E. Mukhin. Internat. Math. Res. Notices, 23:1223, 2002. * [42] I.G. Macdonald. Symmetric Functions and Hall polynomials. Oxford University Press, Oxford, 2nd edition, 1995. * [43] H. Kohler, 2007. arxiv:0801.0132v1 [math-ph]. * [44] K. Życzkowski and H.-J. Sommers. J. Phys., A 36:10115, 2003.
IFIC/23-45 FTUV-23-1005.0503 UMN-TH-4226/23 FTPI-MINN-23-18 CERN-TH-2023-186

Perturbatively including inhomogeneities in axion inflation

Valerie Domckea, Yohei Emab,c, and Stefan Sandnerd

a _Theoretical Physics Department, CERN, 1211 Geneva 23, Switzerland_
b _William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA_
c _School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA_
d _Instituto de Física Corpuscular, Universitat de València and CSIC, Carrer del Catedrátic José Beltrán Martinez 2, 46980 Paterna, Spain_

Axion inflation, i.e. an axion-like inflaton coupled to an Abelian gauge field through a Chern-Simons interaction, comes with a rich and testable phenomenology. This is particularly true in the strong backreaction regime, where the gauge field production heavily impacts the axion dynamics. Lattice simulations have recently demonstrated the importance of accounting for inhomogeneities of the axion field in this regime. We propose a perturbative scheme to account for these inhomogeneities while maintaining high computational efficiency. Our goal is to accurately capture deviations from the homogeneous axion field approximation within the perturbative regime as well as self-consistently determine the onset of the non-perturbative regime.

###### Contents
1 Introduction
2 The gradient expansion formalism including axion inhomogeneities
 2.1 Equations of motion
 2.2 Boundary terms and truncation
 2.3 Validity of the axion gradient expansion
3 Numerical results
4 Discussion and outlook
A Equations of motion for $3$-point functions
 A.1 Equations of motion up to one spatial derivative
 A.2 Equations of motion with two spatial derivatives
 A.3 Numerical implementation
B Truncation relation

## 1 Introduction

Cosmic inflation remains the most attractive theory to explain the precise observations of the cosmic microwave background (CMB) by the Planck satellite [1, 2]. Among the particle physics models which can lead to a quasi-exponential expansion, considerable attention has been paid to axion-like particles as the driving field of inflation. The axion’s angular symmetry is only broken by non-perturbative effects and thus the observationally required flatness of the potential can be ensured naturally [3]. Furthermore, the shift symmetry allows for derivative couplings of the axion to the Chern-Simons density $F_{\mu\nu}\tilde{F}^{\mu\nu}$ of a (dark) gauge field $A_{\mu}$. Such couplings can lead to the exponential production of $A_{\mu}$ due to a tachyonic instability of one of the two helicity modes in the equations of motion (EOM), which is solely controlled by the axion velocity and hence generically impacts the later stages of inflation, corresponding to length scales much smaller than those accessible in CMB observations. The consequences of such a large non-thermal $A_{\mu}$ population are diverse and include: i) an effective friction term in the axion EOM [4], ii) a strong enhancement of the scalar and tensor perturbations with possible observational consequences, such as the production of primordial black holes [5, 6, 7, 8, 9, 10, 11] and (chiral) stochastic gravitational waves [12, 13, 14, 15, 16, 17, 18] and iii) a mechanism for magnetogenesis [19, 20, 21, 22, 23, 24] and baryogenesis [25, 26, 27, 23] if the gauge field is taken to be the Standard Model (SM) hypercharge.
Obtaining accurate predictions for any of these processes requires evolving the highly non-linear system containing the axion and gauge fields, as well as, when present, any light fermions. In this paper we shall focus on the case where the gauge field is a dark photon, with no couplings to the SM (or beyond) other than the Chern-Simons coupling to the axion (for a discussion of the SM case including light fermions see [28, 29, 30]). In this case, the most important backreaction to consider is the effective friction induced by the gauge fields on the axion [4], and our interest will be in the regime where this backreaction is strong, i.e. typically towards the end of inflation. Changes in the axion velocity impact gauge field modes within the tachyonic instability window, which contribute to the friction force. As a result, the friction term reacts with some time delay to the changes in the axion velocity, leading to a resonantly coupled system with distinct peaks in the axion velocity [31, 32]. These results have been confirmed in perturbative stability analyses [33, 34] as well as using the gradient expansion formalism (GEF) [30, 34]. The latter provides a remarkably efficient tool for numerical simulations based on expressing the non-linear EOMs in position space as a tower of ordinary differential equations (ODEs) for the $2$-point functions of the axion, the gauge fields, and the gradients of the gauge fields, reducing the computation time by orders of magnitude compared to e.g. the iterative procedure used in [32]. All these methods, however, make a crucial assumption: they take the axion field to be homogeneous. Given the rapid growth of the axion perturbations, the significant departure from the standard slow-roll regime and the strong non-linearities involved, it is not surprising that this approximation breaks down in the strong backreaction regime. This has recently been explicitly demonstrated in a lattice simulation [35] using CosmoLattice [36, 37], reproducing earlier results when switching off axion gradients but finding significant departures when the axion gradients are taken into account consistently. While these simulations accurately deal with the full non-linear problem including the strong backreaction, their downside is that they are extremely costly. Moreover, as can be seen from the results obtained in [35], the highly non-linear dynamics implies that observable quantities (such as the magnitude of the gauge field contribution or the scalar perturbations at a given scale) are not monotonic functions of the axion–gauge field coupling. A full exploration of the phenomenology of axion inflation throughout the parameter space thus seems infeasible based on lattice simulations only. Our goal in this paper is to leverage the benefits of the highly efficient GEF to perturbatively include axion gradients. To obtain a closed set of ODEs, this requires evolving not only the $2$-point functions but also higher $p$-point functions under the GEF scheme. We are particularly interested in the regime where axion gradients are relevant and impact the evolution of the axion vacuum expectation value and the gauge field distribution, while still allowing for a perturbative treatment. As time evolves, this will typically give way to a regime in which the axion gradients become too large to be treated perturbatively, calling for a lattice simulation.
Within our formalism, we self-consistently determine the breakdown of perturbativity, providing a tool to compute initial conditions for lattice simulations, focusing their computational power on the truly non-linear regimes. Our work should be seen as a first step in developing this methodology, and we discuss possible extensions and scalability. In the process, we gain new insights into the application of the GEF to axion inflation, notably regarding the need to go to rather high order in the GEF tower to obtain convergence as well as an improved truncation relation. The paper is organized as follows. In Sec. 2 we briefly review the GEF and derive an extension including axion gradients. Some technical details, in particular the lengthier equations for the $3$-point functions, are given in App. A, while App. B focuses on the limitations of the GEF and in particular the truncation relation. Our results are presented in Sec. 3, and contrasted with results obtained in lattice computations as well as under the assumption of a homogeneous axion field. The final Sec. 4 summarizes and discusses the results. ## 2 The gradient expansion formalism including axion inhomogeneities The gradient expansion formalism developed in Refs. [30, 38] for axion inflation (see [39, 40] for earlier work in related contexts) provides a computationally efficient way of accounting for the backreaction of the gauge fields on the axion dynamics. The system of interest here is an unbroken, dark $U(1)$ gauge group coupled to the axion via a Chern-Simons interaction, $\displaystyle S$ $\displaystyle=\int d^{4}x\sqrt{-g}\left[\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V(\phi)-\frac{1}{4}g^{\mu\rho}g^{\nu\sigma}F_{\mu\nu}F_{\rho\sigma}-\frac{\alpha\phi}{4\pi f_{a}}\frac{1}{\sqrt{-g}}F_{\mu\nu}\tilde{F}^{\mu\nu}\right]\,,$ (2.1) where $\tilde{F}^{\mu\nu}=\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}/2$ with $\epsilon^{0123}=1$ and for simplicity we will set $V(\phi)=m_{\phi}^{2}\phi^{2}/2$. The GEF re-arranges the resulting coupled partial differential equations governing the dynamics of the electric and magnetic field (i.e. Maxwell’s equations in the presence of an axion–photon coupling) into a tower of linear ODEs for the $2$-point functions $\displaystyle\mathcal{P}_{X}^{(n)}$ $\displaystyle=\frac{1}{a^{n}}\left\langle\vec{X}\cdot(\vec{\nabla}\times)^{n}\vec{X}\right\rangle\,,\quad\mathcal{P}_{XY}^{(n)}=-\frac{1}{a^{n}}\left\langle\vec{X}\cdot(\vec{\nabla}\times)^{n}\vec{Y}\right\rangle\,.$ (2.2) with $X,Y=\\{E,B\\}$, the bracket indicating the spatial average, and $a$ indicating the scale factor of the expanding universe. Under the assumptions that the axion field is homogeneous and that its velocity varies only slowly, the EOMs of the $2$-point functions form a closed system, and the infinite set of equations can be truncated at finite power $n$ of the curl, yielding an efficient and accurate evaluation of the axion and gauge field dynamics [30]. However, as we will discuss in more detail below, typically neither of these assumptions is fulfilled once the gauge field backreaction becomes significant. In the strong backreaction regime, the axion field enters an oscillatory regime, understood as resonances resulting from the time-delayed friction force exerted by the gauge fields [31, 32, 33, 30].
More recently, lattice studies have qualitatively confirmed this result, finding however quantitatively significant differences (notably a sizable damping of these oscillations) when including the inhomogeneities in the axion field [35]. The goal of the present paper is to extend the GEF beyond these two assumptions. Below, we derive an extended version of the GEF including axion inhomogeneities (i.e. axion gradient terms) in a perturbative manner. The resulting non-linearities in the EOMs of the $2$-point functions prompt us to include higher $p$-point functions in order to reduce our equations to ODEs, in a similar spirit to the original GEF. Similarly, we will need to find a suitable procedure to truncate this second expansion series at finite (and low) $p$. In the process, we will shed light on the role of rapid changes in the axion velocity and the resulting limitations of the GEF. ### 2.1 Equations of motion We start from the exact EOMs as derived from Eq. (2.1), separating the homogeneous component of the axion $\phi(t)$ from its inhomogeneous component $\chi(t,\vec{x})$, $\displaystyle 0$ $\displaystyle=\ddot{\phi}+3H\dot{\phi}+m_{\phi}^{2}\phi-\frac{\beta}{M_{P}}\left\langle{\vec{E}\cdot\vec{B}}\right\rangle\,,$ (2.3) $\displaystyle 0$ $\displaystyle=\ddot{\chi}+3H\dot{\chi}-\frac{\nabla^{2}\chi}{a^{2}}+m_{\phi}^{2}\chi-\frac{\beta}{M_{P}}\left(\vec{E}\cdot\vec{B}-\left\langle{\vec{E}\cdot\vec{B}}\right\rangle\right)\,,$ (2.4) $\displaystyle 0$ $\displaystyle=\dot{\vec{E}}+2H\vec{E}-\frac{1}{a}\vec{\nabla}\times\vec{B}+\frac{\beta}{M_{P}}\left(\dot{\phi}+\dot{\chi}\right)\vec{B}+\frac{\beta}{M_{P}}\frac{1}{a}\vec{\nabla}\chi\times\vec{E}\,,$ (2.5) $\displaystyle 0$ $\displaystyle=\dot{\vec{B}}+2H\vec{B}+\frac{1}{a}\vec{\nabla}\times\vec{{E}}\,,$ (2.6) $\displaystyle 0$ $\displaystyle=\vec{\nabla}\cdot\vec{E}+\frac{\beta}{M_{P}}\vec{\nabla}\chi\cdot\vec{B}\,,\quad 0=\vec{\nabla}\cdot\vec{B}\,,$ (2.7) for the matter sector with $\beta=\alpha M_{P}/(\pi f_{a})$, and $\displaystyle H^{2}$ $\displaystyle=\frac{1}{3M_{P}^{2}}\left\langle\frac{1}{2}\left(\dot{\phi}^{2}+\dot{\chi}^{2}\right)+\frac{(\partial_{i}\chi)^{2}}{2a^{2}}+\frac{m_{\phi}^{2}}{2}\left(\phi^{2}+\chi^{2}\right)+\frac{1}{2}\left(|{\vec{E}}|^{2}+|{\vec{B}}|^{2}\right)\right\rangle\,,$ (2.8) $\displaystyle\dot{H}$ $\displaystyle=-\frac{1}{6M_{P}^{2}}\left\langle 3\left(\dot{\phi}^{2}+\dot{\chi}^{2}\right)+\frac{(\partial_{i}\chi)^{2}}{a^{2}}+2\left(|{\vec{E}}|^{2}+|{\vec{B}}|^{2}\right)\right\rangle\,,$ (2.9) for the gravity sector. For $\chi=0$, multiplying Eqs. (2.5) and (2.6) with $\\{(\vec{\nabla}\times)^{n}\vec{E},(\vec{\nabla}\times)^{n}\vec{B}\\}$ yields a tower of ODEs linear in the (scalar) $2$-point functions (2.2). To include the axion gradients, the structure of the last two terms in Eq. (2.5) as well as the last term in the Gauss constraint in Eq. (2.7) suggests the inclusion of $3$-point functions containing one power of either $\dot{\chi}$ or $\vec{\nabla}\chi$. Using the EOMs, the ODEs governing these 3-point functions in turn depend on $4$-point functions, etc. The result is a double expansion in gradients $(\nabla\times)^{n}$ and $p$-point functions. To first order in this expansion, we will only keep $3$-point functions with up to one spatial derivative.
In this case, the EOMs of the $2$-point electromagnetic functions ${\cal P}$ with $n\geq 2$ are the same as in the usual GEF formalism [30], $\displaystyle\dot{\mathcal{P}}_{E}^{(n)}+(n+4)H\mathcal{P}_{E}^{(n)}-\frac{2\beta\dot{\phi}}{M_{P}}\mathcal{P}_{EB}^{(n)}+2\mathcal{P}_{EB}^{(n+1)}=\left[\dot{\mathcal{P}}_{E}^{(n)}\right]_{b}\,,$ (2.10) $\displaystyle\dot{\mathcal{P}}_{B}^{(n)}+(n+4)H\mathcal{P}_{B}^{(n)}-2\mathcal{P}_{EB}^{(n+1)}=\left[\dot{\mathcal{P}}_{B}^{(n)}\right]_{b}\,,$ (2.11) $\displaystyle\dot{\mathcal{P}}_{EB}^{(n)}+(n+4)H\mathcal{P}_{EB}^{(n)}-\mathcal{P}_{E}^{(n+1)}+\mathcal{P}_{B}^{(n+1)}-\frac{\beta\dot{\phi}}{M_{P}}\mathcal{P}_{B}^{(n)}=\left[\dot{\mathcal{P}}_{EB}^{(n)}\right]_{b}\,.$ (2.12) Here, the boundary terms on the right-hand side account for the change in the number of modes which have been excited from the vacuum, see below for a more detailed discussion. The superscript $(n)$ refers to the number of curls, indexing the GEF tower. The EOMs for $2$-point functions for the electromagnetic fields for $n=\\{0,1\\}$ now contain $3$-point functions ${\cal B}$ as anticipated, $\displaystyle\dot{\mathcal{P}}_{E}^{(0)}+4H\mathcal{P}_{E}^{(0)}+2\mathcal{P}_{EB}^{(1)}-\frac{2\beta\dot{\phi}}{M_{P}}\mathcal{P}_{EB}^{(0)}-\frac{2\beta}{M_{P}}\mathcal{B}_{\dot{\chi};EB}^{(0)}=\left[\dot{\mathcal{P}}_{E}^{(0)}\right]_{b}\,,$ (2.13) $\displaystyle\dot{\mathcal{P}}_{B}^{(0)}+4H\mathcal{P}_{B}^{(0)}-2\mathcal{P}_{EB}^{(1)}=\left[\dot{\mathcal{P}}_{B}^{(0)}\right]_{b}\,,$ (2.14) $\displaystyle\dot{\mathcal{P}}_{EB}^{(0)}+4H\mathcal{P}_{EB}^{(0)}-\mathcal{P}_{E}^{(1)}+\mathcal{P}_{B}^{(1)}-\frac{\beta\dot{\phi}}{M_{P}}\mathcal{P}_{B}^{(0)}-\frac{\beta}{M_{P}}\mathcal{B}_{\dot{\chi};B}^{(0)}-\frac{\beta}{M_{P}}\left(\mathcal{B}_{\chi;EB}^{(1,0)}-\mathcal{B}_{\chi;EB}^{(0,1)}\right)=\left[\dot{\mathcal{P}}_{EB}^{(0)}\right]_{b}\,,$ (2.15) and $\displaystyle\dot{\mathcal{P}}_{E}^{(1)}+5H\mathcal{P}_{E}^{(1)}+2\mathcal{P}_{EB}^{(2)}-\frac{2\beta\dot{\phi}}{M_{P}}\mathcal{P}_{EB}^{(1)}-\frac{2\beta}{M_{P}}\mathcal{B}_{\dot{\chi};EB}^{(1,0)}=\left[\dot{\mathcal{P}}_{E}^{(1)}\right]_{b}\,,$ (2.16) $\displaystyle\dot{\mathcal{P}}_{B}^{(1)}+5H\mathcal{P}_{B}^{(1)}-2\mathcal{P}_{EB}^{(2)}=\left[\dot{\mathcal{P}}_{B}^{(1)}\right]_{b}\,,$ (2.17) $\displaystyle\dot{\mathcal{P}}_{EB}^{(1)}+5H\mathcal{P}_{EB}^{(1)}-\mathcal{P}_{E}^{(2)}+\mathcal{P}_{B}^{(2)}-\frac{\beta\dot{\phi}}{M_{P}}\mathcal{P}_{B}^{(1)}-\frac{\beta}{M_{P}}\mathcal{B}_{\dot{\chi};B}^{(1)}=\left[\dot{\mathcal{P}}_{EB}^{(1)}\right]_{b}\,,$ (2.18) where we have defined $\displaystyle\mathcal{B}_{f;E}^{(n)}$ $\displaystyle=\frac{1}{a^{n}}\left\langle{f\left(\left(\vec{\nabla}\times\right)^{n}\vec{E}\right)\cdot\vec{E}}\right\rangle\,,\quad\mathcal{B}_{f;B}^{(n)}=\frac{1}{a^{n}}\left\langle{f\left(\left(\vec{\nabla}\times\right)^{n}\vec{B}\right)\cdot\vec{B}}\right\rangle\,,\quad\mathcal{B}_{f;EB}^{(0)}=-\left\langle{f\vec{E}\cdot\vec{B}}\right\rangle\,,$ $\displaystyle\mathcal{B}_{f;EB}^{(1,0)}$ $\displaystyle=-\frac{1}{a}\left\langle{f\left(\vec{\nabla}\times\vec{E}\right)\cdot\vec{B}}\right\rangle\,,\quad\mathcal{B}_{f;EB}^{(0,1)}=-\frac{1}{a}\left\langle{f\vec{E}\cdot\left(\vec{\nabla}\times\vec{B}\right)}\right\rangle\,,$ (2.19) with $f=\chi,\dot{\chi}$ and $n=\\{0,1\\}$. The two superscripts on the 3-point functions refer to the number of curls acting on $\vec{E}$ and $\vec{B}$, respectively. The EOMs for these 3-point functions can be obtained analogously from Eqs. (2.5) and (2.6) and are given explicitly in App. A. 
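To make the structure of this coupled tower concrete, the following minimal Python sketch integrates the homogeneous part of Eqs. (2.10)–(2.12) alone: the $3$-point functions $\mathcal{B}$ and the boundary terms are dropped, the background is pure de Sitter with a prescribed constant $\xi$, and the tower is closed with the simple truncation discussed in Sec. 2.2 below. All numerical values (the seed amplitude, $\xi$, the depth $n_{\rm max}$) are illustrative stand-ins of ours rather than the actual setup of Refs. [30, 38].

```python
import numpy as np
from scipy.integrate import solve_ivp

n_max = 60                    # depth of the GEF tower; convergence needs large n_max
H = 1.0                       # de Sitter Hubble rate, units H = M_P = 1
xi = lambda t: 4.0            # stand-in for xi = beta*|phidot|/(2 H M_P)

def rhs(t, y):
    # y packs P_E^(0..n_max), P_B^(0..n_max), P_EB^(0..n_max)
    PE, PB, PEB = np.split(y, 3)
    a = np.exp(H * t)
    kh = 2.0 * xi(t) * a * H                  # instability scale, cf. Eq. (2.28)
    r = (kh / a) ** 2
    # close the tower with P^(n_max+1) ~ (k_h/a)^2 P^(n_max-1), Eq. (2.32)
    PE_up = np.append(PE[1:], r * PE[-2])     # arrays holding P^(n+1)
    PB_up = np.append(PB[1:], r * PB[-2])
    PEB_up = np.append(PEB[1:], r * PEB[-2])
    n = np.arange(n_max + 1)
    c = 2.0 * xi(t) * H                       # beta*phidot/M_P (phidot > 0 assumed)
    dPE = -(n + 4) * H * PE + 2.0 * c * PEB - 2.0 * PEB_up    # Eq. (2.10)
    dPB = -(n + 4) * H * PB + 2.0 * PEB_up                    # Eq. (2.11)
    dPEB = -(n + 4) * H * PEB + PE_up - PB_up + c * PB        # Eq. (2.12)
    return np.concatenate([dPE, dPB, dPEB])

# without the boundary source terms nothing grows out of zero, so we seed the
# tower with a small ad hoc initial amplitude purely to exhibit the dynamics
y0 = 1e-12 * np.ones(3 * (n_max + 1))
sol = solve_ivp(rhs, (0.0, 3.0), y0, method="LSODA", rtol=1e-8, atol=1e-40)
print("P_E^(0), P_B^(0), P_EB^(0) after 3 e-folds:",
      sol.y[0, -1], sol.y[n_max + 1, -1], sol.y[2 * (n_max + 1), -1])
```

A faithful implementation additionally evolves the axion and Friedmann equations, includes the boundary terms of Eqs. (2.29)–(2.31) which physically source the tower, and — in our extension — couples in the $3$-point functions of App. A via Eqs. (2.13)–(2.18).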
Importantly, we note here the key approximations which enter the derivation of these $3$-point function EOMs. Firstly, we keep only $3$-point functions containing up to one spatial derivative. This is not a fundamental limitation of the proposed method but should rather be seen as the first order in a gradient expansion series. Including higher order terms will lead to more (and lengthier) equations, with the number of equations scaling at most as $n_{\cal B}^{3}$, with $n_{\cal B}$ the maximal number of derivatives appearing in the $3$-point functions (the scaling will be milder with suitable optimization of the equations, in particular using integration by parts). However, the overall structure of the equations remains unchanged and we expect this to be numerically tractable. The results derived in this way are reliable as long as the subsequent order in this expansion is sufficiently small, and as discussed below, we will use this as a criterion to self-consistently check the validity of our method. Secondly, we factorize the resulting $4$-point functions appearing in the EOMs of the $3$-point functions into products of $2$-point functions, assuming Gaussian distributions for the electromagnetic fields. (Note that in the presence of axion gradients, the electric field is no longer divergence free. Defining $\vec{D}=\vec{E}+(\beta/M_{P})\chi\vec{B}$, we can rewrite our expressions in terms of a divergence-free quantity, $\vec{\nabla}\cdot\vec{D}=0$. The price to pay is the appearance of $5$-point functions in the EOMs of the $3$-point functions, whose factorization requires assumptions not only on the Gaussianity of the gauge fields but also on that of the axion perturbation $\chi$. While for the former this is in good agreement with results found in lattice simulations (we thank Dani Figueroa and Ander Urio Garmendia for providing valuable input and cross-checks on this point), the axion fluctuations are expected to be highly non-Gaussian due to the non-linear source terms. Therefore, under the approximations we employ, we find it more convenient to work with the original electric field, taking into account that it is not divergence free.) We moreover set $4$-point functions that involve one factor of $\vec{\nabla}\chi$ to zero, since such a factor leaves a spatial index uncontracted within the $2$-point function, in violation of statistical isotropy. Finally, the $2$-point functions of the axion fluctuations evolve as $\displaystyle\dot{\mathcal{P}}^{(0)}_{\chi}-2\mathcal{P}^{(0)}_{\chi\dot{\chi}}=0\,,$ (2.20) $\displaystyle\dot{\mathcal{P}}^{(0)}_{\chi\dot{\chi}}+3H\mathcal{P}_{\chi\dot{\chi}}+m_{\phi}^{2}\mathcal{P}_{\chi}^{(0)}+\frac{\beta}{M_{P}}\mathcal{B}_{\chi;EB}^{(0)}-\mathcal{P}_{\dot{\chi}}^{(0)}=0\,,$ (2.21) $\displaystyle\dot{\mathcal{P}}^{(0)}_{\dot{\chi}}+6H\mathcal{P}^{(0)}_{\dot{\chi}}+2m_{\phi}^{2}\mathcal{P}^{(0)}_{\chi\dot{\chi}}+\frac{2\beta}{M_{P}}\mathcal{B}_{\dot{\chi};EB}^{(0)}=0\,,$ (2.22) with $\displaystyle\mathcal{P}_{\chi}^{(2n)}=\frac{1}{a^{2n}}\left\langle{\chi\nabla^{2n}\chi}\right\rangle\,,\quad\mathcal{P}_{\dot{\chi}}^{(2n)}=\frac{1}{a^{2n}}\left\langle{\dot{\chi}\nabla^{2n}\dot{\chi}}\right\rangle\,,\quad\mathcal{P}_{\chi\dot{\chi}}^{(2n)}=\frac{1}{a^{2n}}\left\langle{\chi\nabla^{2n}\dot{\chi}}\right\rangle\,.$ (2.23) ### 2.2 Boundary terms and truncation Two subtleties in the derivation above deserve a more detailed discussion: the boundary terms in Eqs. (2.13) to (2.18) and the truncation of the gradient expansion series for the $2$-point functions at finite $n$.
Both of these are a priori not related to the inclusion of the axion inhomogeneities as they appear only in the EOMs of the $2$-point functions. In fact, for a homogeneous axion field, these are the only two approximations by which the GEF prescription deviates from an exact solution, and this is where the approximation of a slowly varying axion velocity enters. However, while for a homogeneous axion field the gauge field power spectra ${\cal P}^{(0)}_{E,B,EB}$ have been shown to robustly reproduce the results found by solving the mode equations of the gauge field [30] and are moreover in very good agreement with lattice results after setting the axion gradient terms to zero [35], we find the impact of these approximations on the higher orders in the GEF tower ${\cal P}^{(n)}_{E,B,EB}$ to be significant (see also [34]). We will show that these difficulties can be mitigated by an improved truncation relation. Including axion gradients, we observe that this ensures sufficient stability in the algorithm within the perturbative regime of axion inhomogeneities. #### Boundary terms. In terms of the mode functions of the vector potential $A$, the gauge field $2$-point functions are given as $\displaystyle\mathcal{P}_{E}^{(n)}$ $\displaystyle=\frac{1}{a^{n+4}}\int\frac{d^{3}k}{(2\pi)^{3}}\theta(k_{h}(t)-k)\sum_{\sigma}(\sigma k)^{n}\left|{\frac{dA_{\sigma}}{d\tau}}\right|^{2}\,,$ (2.24) $\displaystyle\mathcal{P}_{B}^{(n)}$ $\displaystyle=\frac{1}{a^{n+4}}\int\frac{d^{3}k}{(2\pi)^{3}}\theta(k_{h}(t)-k)\sum_{\sigma}(\sigma k)^{n+2}\left|{A_{\sigma}}\right|^{2}\,,$ (2.25) $\displaystyle\mathcal{P}_{EB}^{(n)}$ $\displaystyle=\frac{1}{2a^{n+4}}\int\frac{d^{3}k}{(2\pi)^{3}}\theta(k_{h}(t)-k)\sum_{\sigma}(\sigma k)^{n+1}\frac{d}{d\tau}\left|{A_{\sigma}}\right|^{2}\,,$ (2.26) where $\sigma$ encodes the two gauge field polarizations and the Heaviside function ensures the vacuum subtraction, i.e. that only modes with $k<k_{h}$ which have been excited out of their vacuum state contribute to the regularized integral. To determine $k_{h}$, we refer to the EOM for $A_{k}(\tau)$, $\displaystyle\frac{d^{2}A_{\sigma}}{d\tau^{2}}+k\left(k-\lambda\sigma 2\xi aH\right)A_{\sigma}=0\,,\quad\lambda=\text{sign}(\dot{\phi})\,,$ (2.27) which encounters a tachyonic instability for the polarization $\sigma=\lambda$ for $\displaystyle k_{h}(t)=\underset{t^{\prime}\leq t}{\mathrm{max}}\left[2\xi(t^{\prime})a(t^{\prime})H(t^{\prime})\right]\quad\text{with }\xi=\beta|\dot{\phi}|/(2HM_{P})\,.$ (2.28) Taking this into account leads to boundary terms in the EOMs for the $2$-point functions, which are explicitly given as $\displaystyle\left[\dot{\mathcal{P}}_{E}^{(n)}\right]_{b}$ $\displaystyle=\frac{\dot{k}_{h}}{a^{n+4}}\frac{k_{h}^{2}}{2\pi^{2}}\sum_{\sigma}(\sigma k_{h})^{n}\left|{\frac{dA_{\sigma}}{d\tau}}\right|_{k=k_{h}}^{2}\,,$ (2.29) $\displaystyle\left[\dot{\mathcal{P}}_{B}^{(n)}\right]_{b}$ $\displaystyle=\frac{\dot{k}_{h}}{a^{n+4}}\frac{k_{h}^{2}}{2\pi^{2}}\sum_{\sigma}(\sigma k_{h})^{n+2}\left|{A_{\sigma}}\right|_{k=k_{h}}^{2}\,,$ (2.30) $\displaystyle\left[\dot{\mathcal{P}}_{EB}^{(n)}\right]_{b}$ $\displaystyle=\frac{\dot{k}_{h}}{a^{n+4}}\frac{k_{h}^{2}}{2\pi^{2}}\sum_{\sigma}(\sigma k_{h})^{n+1}\mathrm{Re}\left[A_{\sigma}^{*}\frac{dA_{\sigma}}{d\tau}\right]_{k=k_{h}}\,.$ (2.31) To evaluate these, we follow the prescription given in [30], which is based on the solutions for the mode functions at constant $\xi$.
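Schematically — and purely as an illustration with hypothetical helper names — the vacuum-subtracted integrals and the running-maximum cutoff can be organized as in the Python sketch below; the mode functions $A_{\sigma}(k)$ and their $\tau$-derivatives are assumed to be supplied externally, e.g. from the constant-$\xi$ solutions of Eq. (2.27).

```python
import numpy as np

def trapezoid(y, x):
    # plain trapezoidal rule, kept explicit to avoid numpy version differences
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def kh_history(xi, a, H):
    # Eq. (2.28): k_h(t) = max_{t' <= t} [2 xi(t') a(t') H(t')] on a time grid
    return np.maximum.accumulate(2.0 * xi * a * H)

def two_point(n, which, a, kh, k, A, dA):
    """Quadrature of Eqs. (2.24)-(2.26) at one instant of time.
    The angular integration d^3k/(2 pi)^3 -> k^2 dk/(2 pi^2) is already done;
    A and dA map sigma = +1/-1 to complex arrays A_sigma(k), dA_sigma/dtau."""
    total = 0.0
    for sigma in (+1, -1):
        if which == "E":
            f = (sigma * k) ** n * np.abs(dA[sigma]) ** 2
        elif which == "B":
            f = (sigma * k) ** (n + 2) * np.abs(A[sigma]) ** 2
        else:  # "EB"; note (1/2) d|A|^2/dtau = Re[A^* dA/dtau]
            f = (sigma * k) ** (n + 1) * np.real(np.conj(A[sigma]) * dA[sigma])
        # theta(k_h - k): only modes excited out of the vacuum contribute
        total += trapezoid(np.where(k < kh, k ** 2 * f, 0.0), k)
    return total / (2.0 * np.pi ** 2 * a ** (n + 4))
```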
The boundary terms thus arise from the need to regularize the vacuum contribution, and in this implementation, rely on an at most slowly varying axion velocity. The final results are not particularly sensitive to the value of the cut-off $k_{h}$ since the change of the integration range in Eqs. (2.24)–(2.26) by changing $k_{h}$ is compensated by the change of the boundary term. For example, reducing $k_{h}$ by $25\%$ induces changes which are smaller than $10\%$ (and mostly smaller than $1\%$) in $\xi$. #### Truncation relation. The infinite tower of equations (2.10) to (2.12) needs to be truncated at some finite $n$ to implement the GEF numerically. The truncation relation employed in [30] is $\displaystyle\mathcal{P}_{X}^{(n_{\mathrm{max}}+1)}$ $\displaystyle\simeq\left(\frac{k_{h}}{a}\right)^{2}\mathcal{P}_{X}^{(n_{\mathrm{max}}-1)}\,,$ (2.32) where $X=E,B$ or $EB$. This can be derived from Eqs. (2.29) to (2.31) assuming a power law spectrum for $|A_{\sigma}(k)|^{2}$ around $k_{h}$, as proposed in Ref. [30]. However, both results from lattice simulations [35] and results obtained by solving the gauge field mode equations show that there can be a lot of structure in the spectrum at this scale, leading to significant deviations in the ratio $\mathcal{P}_{X}^{(n+1)}/\mathcal{P}_{X}^{(n-1)}$, as shown in App. B. At the same time, as we show in that appendix, Eq. (2.32) can also be obtained (without invoking the power law approximation of the spectrum) as the asymptotic large-$n$ limit under the assumption that $\xi$ is constant. As we show there, a value of $n\sim 55$ is necessary for the relation (2.32) to hold up to about $5\%$. To our understanding, this explains why values of $n\sim 100$ are necessary to achieve convergence in the GEF, whereas $n\gg 1$ should have been sufficient based on the assumption of a power law spectrum around $k_{h}$. This in turn indicates that rapid changes in $\xi$ will induce errors in this truncation relation, which propagate through the coupled system of equations down to low $n$. This is supported by our observations in the numerical studies shown in App. B and was also recently observed in Ref. [34]. When the axion velocity drops rapidly, the $\mathcal{P}_{X}^{(n)}$ take (unphysically) large values at large $n$, which over time propagate down to lower $n$ modes. If the phase of rapidly changing (in particular dropping) axion velocity is sufficiently long, this can in principle impact the observables, i.e. the $\mathcal{P}_{X}^{(0)}$ power spectra. This calls for an improvement of the truncation relation (2.32) in the further development of the GEF formalism. We show in App. B results obtained using not just $\mathcal{P}_{X}^{(n_{\text{max}}-1)}$ but a whole series of $\mathcal{P}_{X}^{(n_{\text{max}}+1-2l)}$ to determine $\mathcal{P}_{X}^{(n_{\text{max}}+1)}$, $\displaystyle\bar{\mathcal{P}}_{X}^{(n_{\mathrm{max}}+1)}=\sum_{l=1}^{L}(-1)^{l-1}\begin{pmatrix}L\\\ l\end{pmatrix}\bar{\mathcal{P}}_{X}^{(n_{\mathrm{max}}+1-2l)}\,,$ (2.33) where $\bar{\mathcal{P}}_{X}^{(n)}=\mathcal{P}_{X}^{(n)}/H_{0}^{4}(k_{h}/a)^{n}$. For $L=1$ we trivially recover Eq. (2.32). As we show in App. B for $L=4$ and $L=10$, this improves the stability of the system to a point which is sufficient for the study of the regime of perturbative axion inhomogeneities. ### 2.3 Validity of the axion gradient expansion In the scheme described above we included only terms with up to one power of the axion gradient.
To ensure the consistency of this approach, we monitor the second order axion derivatives sourced in this manner (without including their backreaction on the truncated system described above). More precisely, we monitor the ratio of the gradient to the kinetic energy, $\displaystyle R_{\chi}\equiv\left|\frac{\left\langle{(\nabla\chi)^{2}}\right\rangle}{\dot{\phi}^{2}+\left\langle{\dot{\chi}^{2}}\right\rangle}\right|\,.$ (2.34) As long as this quantity is small, it is justified to drop higher powers of the axion gradients, whereas $R_{\chi}$ above ${\cal O}(0.1\text{--}1)$ indicates that the axion gradients can no longer be treated perturbatively, calling for a full lattice simulation. To compute $\left\langle{(\nabla\chi)^{2}}\right\rangle=-\left\langle{\chi\nabla^{2}\chi}\right\rangle$, the relevant EOMs for the axion $2$-point functions to second order in the gradient expansion are $\displaystyle\dot{\mathcal{P}}_{\chi}^{(2)}+2H\mathcal{P}_{\chi}^{(2)}-2\mathcal{P}_{\chi\dot{\chi}}^{(2)}=0\,,$ (2.35) $\displaystyle\dot{\mathcal{P}}_{\chi\dot{\chi}}^{(2)}+5H\mathcal{P}_{\chi\dot{\chi}}^{(2)}+m_{\phi}^{2}\mathcal{P}_{\chi}^{(2)}+\frac{\beta}{M_{P}}\mathcal{B}_{\chi;EB}^{(2;0,0)}-\mathcal{P}_{\dot{\chi}}^{(2)}=0\,,$ (2.36) $\displaystyle\dot{\mathcal{P}}_{\dot{\chi}}^{(2)}+8H\mathcal{P}_{\dot{\chi}}^{(2)}+2m_{\phi}^{2}\mathcal{P}_{\chi\dot{\chi}}^{(2)}+\frac{2\beta}{M_{P}}\mathcal{B}_{\dot{\chi};EB}^{(2;0,0)}=0\,,$ (2.37) which in turn require the evaluation of the 3-point function $\mathcal{B}_{f;EB}^{(2;0,0)}$, whose definitions and equations are again given in App. A. The breakdown of this perturbative expansion scheme, $R_{\chi}>0.5$, is indicated by the dark gray region in the figures below. ## 3 Numerical results Figure 1: Evolution of $\xi$ assuming a homogeneous axion field (dashed black) and perturbatively including axion gradients (red) for different values of $\beta$. The light (dark) gray region indicates that the gradient energy of the axion exceeds $1\,\%$ ($50\,\%$) of the kinetic energy, while the gray vertical line corresponds to $5\,\%$. Wherever possible we compare to the result of the lattice simulation [35]. Figs. 1 and 2 show the results of the methodology described in Sec. 2 for values of the coupling $\beta$ ranging from 15 to 25. The six panels of Fig. 1 focus on the evolution of the parameter $\xi=\beta|\dot{\phi}|/(2HM_{P})$. In all panels, the dashed black lines show the result for the GEF assuming a homogeneous axion and the solid red lines are our new results, perturbatively including the axion gradients to first order. Where available, we show for comparison the lattice results obtained in Ref. [35] in dashed blue. The gray regions and vertical lines indicate the quality of the perturbative expansion. In the dark gray region the axion gradient energy exceeds 50% of the axion kinetic energy, $R_{\chi}>0.5$, indicating the non-perturbative regime. For all panels, we show only 0.25 e-folds of this non-perturbative regime, though we stress that lattice results have shown that inflation can last several e-folds longer [35]. The light gray region indicates the regime in which the gradient energy is sizeable, i.e. $0.01<R_{\chi}<0.5$, but our perturbative treatment is still valid. The vertical gray line indicates $R_{\chi}=0.05$. We observe that below this value, we recover the full lattice results in good agreement (while the deviation from the homogeneous approximation is already significant).
## 3 Numerical results

Figure 1: Evolution of $\xi$ assuming a homogeneous axion field (dashed black) and perturbatively including axion gradients (red) for different values of $\beta$. The light (dark) gray region indicates that the gradient energy of the axion exceeds $1\,\%$ ($50\,\%$) of the kinetic energy, while the gray vertical line corresponds to $5\,\%$. Wherever possible we compare to the result of the lattice simulation [35].

Figs. 1 and 2 show the results of the methodology described in Sec. 2 for values of the coupling $\beta$ ranging from 15 to 25. The six panels of Fig. 1 focus on the evolution of the parameter $\xi=\beta|\dot{\phi}|/(2HM_{P})$. In all panels, the dashed black lines show the result for the GEF assuming a homogeneous axion and the solid red lines are new results including perturbatively the axion gradients to first order. Where available, we show for comparison the lattice results obtained in Ref. [35] in dashed blue. The gray regions and vertical lines indicate the quality of the perturbative expansion. In the dark gray region the axion gradient energy exceeds 50% of the axion kinetic energy, $R_{\chi}>0.5$, indicating the non-perturbative regime. For all panels, we show only 0.25 e-folds of this non-perturbative regime, though we stress that lattice results have shown that inflation can last several e-folds longer [35]. The light gray region indicates the regime in which the gradient energy is sizeable, i.e. $0.01<R_{\chi}<0.5$, but our perturbative treatment is still valid. The vertical gray line indicates $R_{\chi}=0.05$. We observe that below this value, we recover the full lattice results in good agreement (while the deviation from the homogeneous approximation is already significant). Above this value the leading order correction implemented here is insufficient to fully reproduce the lattice results, but since $R_{\chi}\ll 1$ a systematic expansion to higher orders in the axion gradients might conceivably achieve this (within the light gray region).

Figure 2: Evolution of the energy densities for different values of $\beta$. Gray bands as in Fig. 1. Results from lattice simulations for $\beta=15,18$ and $20$ from Ref. [35] are shown in dashed.

Fig. 2 shows the evolution of the kinetic ($\rho_{\mathrm{K}}$), potential ($\rho_{\mathrm{V}}$), electromagnetic ($\rho_{\mathrm{EM}}$) and axion gradient energy ($\rho_{\mathrm{G}}$) defined as $\displaystyle\rho_{\mathrm{K}}=\frac{1}{2}\left(\dot{\phi}^{2}+\mathcal{P}_{\dot{\chi}}^{(0)}\right),\quad\rho_{\mathrm{V}}=\frac{m_{\phi}^{2}}{2}\left(\phi^{2}+\mathcal{P}_{\chi}^{(0)}\right),\quad\rho_{\mathrm{EM}}=\frac{1}{2}\left(\mathcal{P}_{E}^{(0)}+\mathcal{P}_{B}^{(0)}\right),\quad\rho_{\mathrm{G}}=-\frac{1}{2}\mathcal{P}_{\chi}^{(2)},$ (3.1) for the same values of $\beta$. Here we include $\rho_{\mathrm{G}}$ in the total energy density, $\rho_{\mathrm{tot}}=\rho_{\mathrm{K}}+\rho_{\mathrm{V}}+\rho_{\mathrm{EM}}+\rho_{\mathrm{G}}$, although we do not include $\rho_{\mathrm{G}}$ in the Friedmann equation as we explained above. This figure gives a more detailed view of the importance of the axion gradient energy, showing that in many cases (e.g. $\beta=20,22,25$) there is a prolonged regime in which the axion gradient energy remains at the percent level or below. The opposite is observed for $\beta=15$, where, once it becomes relevant, the axion gradient energy rapidly becomes comparable to the other components. The efficiency of the method discussed in Sec. 2 allows us to study a wide range of couplings $\beta$, confirming the diverse and complex dynamics occurring in the strong backreaction regime. In particular, possible observable signatures related to the different energy components depend on the coupling $\beta$ in a non-monotonic way.
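The decomposition of Eq. (3.1) is equally simple to evaluate from the tracked quantities; a minimal sketch (ours, with hypothetical argument names):

```python
def energy_densities(phi, phidot, m_phi, P_chi_0, P_chidot_0, P_E_0, P_B_0, P_chi_2):
    """Energy components of Eq. (3.1) from the background fields and the
    integrated 2-point functions at a given time (argument names are ours)."""
    rho_K  = 0.5 * (phidot**2 + P_chidot_0)       # kinetic
    rho_V  = 0.5 * m_phi**2 * (phi**2 + P_chi_0)  # potential
    rho_EM = 0.5 * (P_E_0 + P_B_0)                # electromagnetic
    rho_G  = -0.5 * P_chi_2                       # axion gradient
    return rho_K, rho_V, rho_EM, rho_G, rho_K + rho_V + rho_EM + rho_G
```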
Figure 3: Power spectrum of axion perturbations, $\Delta_{\zeta}^{2}=(H/\dot{\phi})^{2}\mathcal{P}_{\chi}^{(0)}$, for $\beta=15$ (left) and $\beta=20$ (right). Large values imply PBH formation, see text for details. Gray bands as in Fig. 1.

As one example of possible observable consequences we show the scalar power spectrum sourced by the axion fluctuations, $\Delta_{\zeta}^{2}=(H/\dot{\phi})^{2}\mathcal{P}_{\chi}^{(0)}$, in Fig. 3. (Strictly speaking, $\mathcal{P}_{\chi}^{(0)}$ is an integrated quantity and we do not have access to the power spectrum for each individual momentum $k$ in our formalism, unless we compute the axion 2-point functions with sufficiently high powers of the spatial derivatives. However, we expect that at each given time the modes with $k\sim k_{h}$ dominate the integral, so that the time evolution of $\mathcal{P}_{X}^{(0)}$ roughly corresponds to the spectrum.) Note that this contains only the contribution from the axion fluctuations, and not the contribution from the gauge field energy density fluctuations or metric fluctuations. We note that even within the perturbative regime of axion gradients we significantly exceed the threshold for primordial black hole (PBH) formation, which is estimated to lie around $\Delta_{\zeta}^{2}\sim 10^{-2}$ for a Gaussian distribution and around $\Delta_{\zeta}^{2}\sim 10^{-4}$ for the non-Gaussian spectrum assumed in Ref. [5] for axion inflation. PBHs generated in the last few e-folds of inflation are very light and decay rapidly through Hawking radiation, leaving no traces in our cosmological history. However, as we see from Fig. 3, even for a moderate coupling of $\beta=15$ the non-perturbative regime of axion gradients is reached before the end of inflation, making a precise prediction of the PBH spectrum impossible with perturbative methods. It is nevertheless interesting to note that even for such small values of $\beta$ (which, in particular, do not produce excessive gravitational waves during preheating [41, 42]) our results indicate a phase of PBH formation. Increasing the coupling $\beta$ will typically extend this phase, leading to more massive PBHs with longer lifetimes. Interestingly, for some parameter choices such as $\beta=20$, displayed in the right panel, we find indications of scalar perturbations above the PBH formation threshold for a significant range of e-folds within the perturbative regime. ## 4 Discussion and outlook The key result of this work is the extension of the gradient expansion formalism (GEF) to perturbatively include axion gradients when evaluating the dynamics of axion inflation. This involves applying the GEF not only to the $2$-point functions but also to higher $p$-point functions, resulting in an expansion in (i) the number of derivatives $n$ acting on the gauge fields in the $2$-point function as in the original GEF, (ii) the order $p$ of the highest correlation function which is not factorized into lower $p$-point functions and (iii) the number $n_{p}$ of derivatives in those $p$-point functions. Here we consider the leading order correction due to axion gradients, which requires $p=3$ and $n_{3}=n_{\cal B}=1$. This captures several important aspects of the dynamics of the system. For very small values of the expansion parameter $R_{\chi}$ we recover the results of the homogeneous GEF approximation (i.e. assuming the axion field to be perfectly homogeneous but allowing for gradients in the gauge fields), which in this regime agree with the lattice results capturing the full non-linear dynamics. For $R_{\chi}\gtrsim 1\%$, the homogeneous GEF results and the lattice results start to diverge, with our perturbative expansion recovering the lattice results. For $R_{\chi}\gtrsim 5\%$ the leading order corrections included here no longer suffice to accurately track the system, though given the smallness of $R_{\chi}$ a systematic inclusion of higher order terms might conceivably achieve this. We see no fundamental obstruction to this in this regime. Finally, for $R_{\chi}\gtrsim 0.5$ perturbativity is violated and a full lattice simulation seems unavoidable. Our method allows us to very efficiently identify these regions and to provide accurate initial conditions for them, thereby enabling future lattice simulations to focus their computational power on these non-perturbative regimes. An accurate calculation of the strong backreaction regime in axion inflation is crucial to exploit its rich phenomenology and conclusively test this inflation model. This includes the remaining duration of inflation once the system enters the strong backreaction regime (see Ref. [35]), the magnitude of peaks in the scalar power spectrum as well as their position with respect to the end of inflation, which are crucial to determine whether a sizable number of primordial black holes is formed and what their mass distribution is (see Ref.
[5]), as well as the anisotropic component of the gauge field energy momentum tensor which will determine the gravitational wave spectrum (see Ref. [18]). Such results will need to be contrasted with model constraints coming from the gravitational wave production in the preheating era [41, 42]. Assuming a simple shape of the scalar potential throughout the inflation and preheating era, these impose stringent bounds on the axion-gauge field coupling, which however still allow for an inflationary phase within the strong backreaction regime. Currently, only costly lattice simulations can give quantitative answers to all these questions. The method proposed here provides a first step towards an efficient way of studying these questions across the parameter space of axion inflation. Several extensions of the scheme proposed here deserve further study, e.g. the inclusion of higher order $p$-point functions and/or derivatives therein, as well as the inclusion of a correction algorithm as proposed in [34]. We hope that this work will trigger some of these developments. #### Acknowledgements We thank Kai Schmitz, Oleksandr Sobol and Richard von Eckardstein for fruitful discussions and the very helpful comparisons of our numerical codes, as well as Dani Figueroa and Ander Urio Garmendia for insightful cross-checks from lattice simulations. We moreover thank Ryo Namba for helpful discussions, and Kyohei Mukaida for collaboration at the initial stages of this project. Y.E. and S.S. would like to thank the CERN Theory Department for their hospitality during crucial parts of this project. The work of Y.E. is supported in part by DOE grant DE-SC0011842. The work of S.S. received the support of a fellowship from “la Caixa” Foundation (ID 100010434) with fellowship code LCF/BQ/DI19/11730034 and by the Generalitat Valenciana grant PROMETEO/2021/083. ## Appendix A Equations of motion for $3$-point functions The main goal of this paper is to extend the GEF and include the axion inhomogeneity $\chi(t,\vec{x})$. Once we include $\chi$, the EOMs become non-linear in terms of the inhomogeneous quantities, $\chi$, $\vec{E}$ and $\vec{B}$. As a result, the EOMs of the electromagnetic $2$-point functions no longer form a closed system and we need to include the evolution of the $3$-point functions. The EOMs of the $3$-point functions then depend on $4$-point functions, and a similar structure persists for higher-point functions. To truncate this tower of $p$-point functions, we factorize the $4$-point functions as products of the $2$-point functions. Moreover, for simplicity, we include the $3$-point functions with only up to one spatial derivative in the full system. The latter approximation is expected to be valid when the axion gradient energy is suppressed, and to check the quality of our approximation, we compute the axion gradient energy by treating it as a perturbation and monitor its size. In the following, we list the time evolution equations of the $2$-point and $3$-point functions that we need in our numerical computation. These are derived by the repeated use of Eqs. (2.4)–(2.6).
In this appendix we use the following notation for 3-point functions in the GEF, $\displaystyle\mathcal{B}_{f;X}^{(2l;n,m)}$ $\displaystyle=\frac{1}{a^{n+m+2l}}\left\langle{\nabla^{2l}f(\vec{\nabla}\times)^{n}\vec{X}\cdot(\vec{\nabla}\times)^{m}\vec{X}}\right\rangle\,,\quad\mathcal{B}_{f;XY}^{(2l;n,m)}=-\frac{1}{a^{n+m+2l}}\left\langle{\nabla^{2l}f(\vec{\nabla}\times)^{n}\vec{X}\cdot(\vec{\nabla}\times)^{m}\vec{Y}}\right\rangle\,,$ (A.1) with $X,Y=\\{E,B\\}$ and $f=\\{\chi,\dot{\chi}\\}$. These are related to the simplified notation used in the main text (which does not include axion gradients) as $\displaystyle\mathcal{B}_{f;E}^{(n)}=\mathcal{B}_{f;E}^{(0;n,0)}\,,\quad\mathcal{B}_{f;B}^{(n)}=\mathcal{B}_{f;B}^{(0;n,0)}\,,\quad\mathcal{B}_{f;EB}^{(0)}=\mathcal{B}_{f;EB}^{(0;0,0)}\,,\quad\mathcal{B}_{f;EB}^{(1,0)}=\mathcal{B}_{f;EB}^{(0;1,0)}\,,\quad\mathcal{B}_{f;EB}^{(0,1)}=\mathcal{B}_{f;EB}^{(0;0,1)}\,.$ (A.2) Notice that $\displaystyle\mathcal{B}_{f;E}^{(2l;n,m)}=\mathcal{B}_{f;E}^{(2l;m,n)}\,,\quad\mathcal{B}_{f;B}^{(2l;n,m)}=\mathcal{B}_{f;B}^{(2l;m,n)}\,,$ (A.3) by definition, but in general $\displaystyle\mathcal{B}_{f;EB}^{(2l;n,m)}\neq\mathcal{B}_{f;EB}^{(2l;m,n)}\,,$ (A.4) for $n\neq m$. ### A.1 Equations of motion up to one spatial derivative As we stated above, we include the $3$-point functions with only up to one spatial derivative. With this in mind, the electromagnetic $2$-point functions evolve as $\displaystyle\dot{\mathcal{P}}_{E}^{(n)}+(n+4)H\mathcal{P}_{E}^{(n)}+2\mathcal{P}_{EB}^{(n+1)}-\frac{2\beta}{M_{P}}\left(\dot{\phi}\mathcal{P}_{EB}^{(n)}+\mathcal{B}_{\dot{\chi};EB}^{(0;n,0)}\right)+\frac{2\beta}{M_{P}}\left(\mathcal{B}_{\chi;E}^{(0;n+1,0)}-\mathcal{B}_{\chi;E}^{(0;n,1)}\right)=\left[\dot{\mathcal{P}}_{E}^{(n)}\right]_{b}\,,$ $\displaystyle\dot{\mathcal{P}}_{B}^{(n)}+(n+4)H\mathcal{P}_{B}^{(n)}-2\mathcal{P}_{EB}^{(n+1)}=\left[\dot{\mathcal{P}}_{B}^{(n)}\right]_{b}\,,$ (A.5) $\displaystyle\dot{\mathcal{P}}_{EB}^{(n)}+(n+4)H\mathcal{P}_{EB}^{(n)}-\mathcal{P}_{E}^{(n+1)}+\mathcal{P}_{B}^{(n+1)}-\frac{\beta}{M_{P}}\left(\dot{\phi}\mathcal{P}_{B}^{(n)}+\mathcal{B}_{\dot{\chi};B}^{(0;n,0)}\right)-\frac{\beta}{M_{P}}\left(\mathcal{B}_{\chi;EB}^{(0;1,n)}-\mathcal{B}_{\chi;EB}^{(0;0,n+1)}\right)=\left[\dot{\mathcal{P}}_{EB}^{(n)}\right]_{b}\,,$ where it is understood that $\displaystyle\mathcal{B}_{f;XY}^{(2l;n,m)}=0,~{}~{}\mathrm{for}~{}~{}n+m+2l>1\,,$ (A.6) in these equations. The axion $2$-point functions with no spatial derivatives are given by $\displaystyle\dot{\mathcal{P}}_{\chi}^{(0)}-2\mathcal{P}_{\chi\dot{\chi}}^{(0)}=0\,,$ $\displaystyle\dot{\mathcal{P}}_{\chi\dot{\chi}}^{(0)}+3H\mathcal{P}_{\chi\dot{\chi}}^{(0)}+m_{\phi}^{2}\mathcal{P}_{\chi}^{(0)}+\frac{\beta}{M_{P}}\mathcal{B}_{\chi;EB}^{(0;0,0)}-\mathcal{P}_{\dot{\chi}}^{(0)}=0\,,$ $\displaystyle\dot{\mathcal{P}}_{\dot{\chi}}^{(0)}+6H\mathcal{P}_{\dot{\chi}}^{(0)}+2m_{\phi}^{2}\mathcal{P}_{\chi\dot{\chi}}^{(0)}+\frac{2\beta}{M_{P}}\mathcal{B}_{\dot{\chi};EB}^{(0;0,0)}=0\,.$ (A.7) The time evolution equations of the $3$-point functions are given as follows. 
For $(2l;n,m)=(0;0,0)$, we obtain $\displaystyle\dot{\mathcal{B}}_{\chi;E}^{(0;0,0)}+4H\mathcal{B}_{\chi;E}^{(0;0,0)}+2\mathcal{B}_{\chi;EB}^{(0;0,1)}-\frac{2\beta}{M_{P}}\left(\dot{\phi}\mathcal{B}_{\chi;EB}^{(0;0,0)}+\mathcal{P}_{\chi\dot{\chi}}^{(0)}\mathcal{P}_{EB}^{(0)}\right)-\mathcal{B}_{\dot{\chi};E}^{(0;0,0)}=0\,,$ $\displaystyle\dot{\mathcal{B}}_{\chi;B}^{(0;0,0)}+4H\mathcal{B}_{\chi;B}^{(0;0,0)}-2\mathcal{B}_{\chi;EB}^{(0;1,0)}-\mathcal{B}_{\dot{\chi};B}^{(0;0,0)}=0\,,$ $\displaystyle\dot{\mathcal{B}}_{\chi;EB}^{(0;0,0)}+4H\mathcal{B}_{\chi;EB}^{(0;0,0)}-\mathcal{B}_{\chi;E}^{(0;1,0)}+\mathcal{B}_{\chi;B}^{(0;1,0)}-\frac{\beta}{M_{P}}\left(\dot{\phi}\mathcal{B}_{\chi;B}^{(0;0,0)}+\mathcal{P}_{\chi\dot{\chi}}^{(0)}\mathcal{P}_{B}^{(0)}\right)-\mathcal{B}_{\dot{\chi};EB}^{(0;0,0)}=0\,,$ (A.8) and $\displaystyle\dot{\mathcal{B}}_{\dot{\chi};E}^{(0;0,0)}+7H\mathcal{B}_{\dot{\chi};E}^{(0;0,0)}+m_{\phi}^{2}\mathcal{B}_{\chi;E}^{(0;0,0)}+2\mathcal{B}_{\dot{\chi};EB}^{(0;0,1)}-\frac{2\beta}{M_{P}}\left(\dot{\phi}\mathcal{B}_{\dot{\chi};EB}^{(0;0,0)}+\mathcal{P}_{\dot{\chi}}^{(0)}\mathcal{P}_{EB}^{(0)}\right)+\frac{2\beta}{3M_{P}}\mathcal{P}_{EB}^{(0)}\mathcal{P}_{E}^{(0)}=0\,,$ $\displaystyle\dot{\mathcal{B}}_{\dot{\chi};B}^{(0;0,0)}+7H\mathcal{B}_{\dot{\chi};B}^{(0;0,0)}+m_{\phi}^{2}\mathcal{B}_{\chi;B}^{(0;0,0)}-2\mathcal{B}_{\dot{\chi};EB}^{(0;1,0)}+\frac{2\beta}{3M_{P}}\mathcal{P}_{EB}^{(0)}\mathcal{P}_{B}^{(0)}=0\,,$ $\displaystyle\dot{\mathcal{B}}_{\dot{\chi};EB}^{(0;0,0)}+7H\mathcal{B}_{\dot{\chi};EB}^{(0;0,0)}+m_{\phi}^{2}\mathcal{B}_{\chi;EB}^{(0;0,0)}-\mathcal{B}_{\dot{\chi};E}^{(0;1,0)}+\mathcal{B}_{\dot{\chi};B}^{(0;1,0)}$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}-\frac{\beta}{M_{P}}\left(\dot{\phi}\mathcal{B}_{\dot{\chi};B}^{(0;0,0)}+\mathcal{P}_{\dot{\chi}}^{(0)}\mathcal{P}_{B}^{(0)}\right)+\frac{\beta}{3M_{P}}\left[\left(\mathcal{P}_{EB}^{(0)}\right)^{2}+\mathcal{P}_{E}^{(0)}\mathcal{P}_{B}^{(0)}\right]=0\,.$ (A.9) Notice, in particular, that the equations contain the products of the electromagnetic $2$-point functions. This is a result of the factorization of the $4$-point functions. Assuming the isotropy and Gaussianity of the electromagnetic fields, we can factorize e.g. $\displaystyle\left\langle{(\vec{E}\cdot\vec{B})^{2}}\right\rangle\simeq\frac{4}{3}\left(\mathcal{P}_{EB}^{(0)}\right)^{2}+\frac{1}{3}\mathcal{P}_{E}^{(0)}\mathcal{P}_{B}^{(0)}\,,$ (A.10) and so on. 
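The factorization (A.10) can be verified directly for isotropic Gaussian fields with a short Monte Carlo check (our own illustration, independent of the GEF code); the sign convention of $\mathcal{P}_{EB}^{(0)}$ is irrelevant here since it enters squared:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 2_000_000
P_E, P_B, C = 3.0, 2.0, 1.2     # target <E.E>, <B.B> and <E.B> = C (test values)

# Correlated isotropic Gaussian 3-vectors, built component by component so that
# <E_i E_j> = (P_E/3) delta_ij, <B_i B_j> = (P_B/3) delta_ij, <E_i B_j> = (C/3) delta_ij.
g1 = rng.standard_normal((n_samples, 3))
g2 = rng.standard_normal((n_samples, 3))
E = np.sqrt(P_E / 3) * g1
B = C / np.sqrt(3 * P_E) * g1 + np.sqrt(P_B / 3 - C**2 / (3 * P_E)) * g2

lhs = (np.einsum('ij,ij->i', E, B) ** 2).mean()   # <(E.B)^2>
rhs = 4 / 3 * C**2 + 1 / 3 * P_E * P_B            # Eq. (A.10)
print(lhs, rhs)   # agreement at the sub-percent level
```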
For $(2l;n,m)=(0;1,0),(0;0,1)$, we obtain $\displaystyle\dot{\mathcal{B}}_{\chi;E}^{(0;1,0)}+5H\mathcal{B}_{\chi;E}^{(0;1,0)}-\frac{\beta\dot{\phi}}{M_{P}}\left(\mathcal{B}_{\chi;EB}^{(0;1,0)}+\mathcal{B}_{\chi;EB}^{(0;0,1)}\right)-\frac{2\beta}{M_{P}}\mathcal{P}_{\chi\dot{\chi}}^{(0)}\mathcal{P}_{EB}^{(1)}-\mathcal{B}_{\dot{\chi};E}^{(0;1,0)}=0\,,$ $\displaystyle\dot{\mathcal{B}}_{\chi;B}^{(0;1,0)}+5H\mathcal{B}_{\chi;B}^{(0;1,0)}-\mathcal{B}_{\dot{\chi};B}^{(0;1,0)}=0\,,$ $\displaystyle\dot{\mathcal{B}}_{\chi;EB}^{(0;1,0)}+5H\mathcal{B}_{\chi;EB}^{(0;1,0)}-\frac{\beta}{M_{P}}\left(\dot{\phi}\mathcal{B}_{\chi;B}^{(0;1,0)}+\mathcal{P}_{\chi\dot{\chi}}^{(0)}\mathcal{P}_{B}^{(1)}\right)-\mathcal{B}_{\dot{\chi};EB}^{(0;1,0)}=0\,,$ $\displaystyle\dot{\mathcal{B}}_{\chi;EB}^{(0;0,1)}+5H\mathcal{B}_{\chi;EB}^{(0;0,1)}-\frac{\beta}{M_{P}}\left(\dot{\phi}\mathcal{B}_{\chi;B}^{(0;1,0)}+\mathcal{P}_{\chi\dot{\chi}}^{(0)}\mathcal{P}_{B}^{(1)}\right)-\mathcal{B}_{\dot{\chi};EB}^{(0;0,1)}=0\,,$ (A.11) and $\displaystyle\dot{\mathcal{B}}_{\dot{\chi};E}^{(0;1,0)}+8H\mathcal{B}_{\dot{\chi};E}^{(0;1,0)}+m_{\phi}^{2}\mathcal{B}_{\chi;E}^{(0;1,0)}$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}-\frac{\beta}{M_{P}}\left(\dot{\phi}\left(\mathcal{B}_{\dot{\chi};EB}^{(0;1,0)}+\mathcal{B}_{\dot{\chi};EB}^{(0;0,1)}\right)+2\mathcal{P}_{\dot{\chi}}^{(0)}\mathcal{P}_{EB}^{(1)}\right)+\frac{\beta}{3M_{P}}\left(\mathcal{P}_{EB}^{(0)}\mathcal{P}_{E}^{(1)}+\mathcal{P}_{E}^{(0)}\mathcal{P}_{EB}^{(1)}\right)=0\,,$ $\displaystyle\dot{\mathcal{B}}_{\dot{\chi};B}^{(0;1,0)}+8H\mathcal{B}_{\dot{\chi};B}^{(0;1,0)}+m_{\phi}^{2}\mathcal{B}_{\chi;B}^{(0;1,0)}+\frac{\beta}{3M_{P}}\left(\mathcal{P}_{B}^{(0)}\mathcal{P}_{EB}^{(1)}+\mathcal{P}_{EB}^{(0)}\mathcal{P}_{B}^{(1)}\right)=0\,,$ $\displaystyle\dot{\mathcal{B}}_{\dot{\chi};EB}^{(0;1,0)}+8H\mathcal{B}_{\dot{\chi};EB}^{(0;1,0)}+m_{\phi}^{2}\mathcal{B}_{\chi;EB}^{(0;1,0)}-\frac{\beta}{M_{P}}\left(\dot{\phi}\mathcal{B}_{\dot{\chi};B}^{(0;1,0)}+\mathcal{P}_{\dot{\chi}}^{(0)}\mathcal{P}_{B}^{(1)}\right)+\frac{\beta}{3M_{P}}\left(\mathcal{P}_{B}^{(0)}\mathcal{P}_{E}^{(1)}+\mathcal{P}_{EB}^{(0)}\mathcal{P}_{EB}^{(1)}\right)=0\,,$ $\displaystyle\dot{\mathcal{B}}_{\dot{\chi};EB}^{(0;0,1)}+8H\mathcal{B}_{\dot{\chi};EB}^{(0;0,1)}+m_{\phi}^{2}\mathcal{B}_{\chi;EB}^{(0;0,1)}-\frac{\beta}{M_{P}}\left(\dot{\phi}\mathcal{B}_{\dot{\chi};B}^{(0;1,0)}+\mathcal{P}_{\dot{\chi}}^{(0)}\mathcal{P}_{B}^{(1)}\right)+\frac{\beta}{3M_{P}}\left(\mathcal{P}_{E}^{(0)}\mathcal{P}_{B}^{(1)}+\mathcal{P}_{EB}^{(0)}\mathcal{P}_{EB}^{(1)}\right)=0\,.$ (A.12) Finally, the background EOM is given by $\displaystyle 0=\ddot{\phi}+3H\dot{\phi}+m_{\phi}^{2}\phi+\frac{\beta}{M_{P}}\mathcal{P}_{EB}^{(0)}\,,$ $\displaystyle H^{2}=\frac{1}{6M_{P}^{2}}\left[\dot{\phi}^{2}+\mathcal{P}_{\dot{\chi}}^{(0)}+m_{\phi}^{2}\left(\phi^{2}+\mathcal{P}_{\chi}^{(0)}\right)+\mathcal{P}_{E}^{(0)}+\mathcal{P}_{B}^{(0)}\right]\,,$ $\displaystyle\dot{H}=-\frac{1}{6M_{P}^{2}}\left[3\dot{\phi}^{2}+3\mathcal{P}_{\dot{\chi}}^{(0)}+2\left(\mathcal{P}_{E}^{(0)}+\mathcal{P}_{B}^{(0)}\right)\right]\,.$ (A.13) These equations are consistent, even after our approximations, in the sense that we can derive the last equation from the others. ### A.2 Equations of motion with two spatial derivatives To monitor the size of the axion inhomogeneity, we follow the time evolution of the axion gradient energy, i.e., $\mathcal{P}_{\chi}^{(2)}$. 
For this purpose, we need to follow some of the $3$-point functions with two spatial derivatives. We treat these quantities as perturbations, by treating the quantities with only up to one spatial derivative as the source term and ignoring the backreaction of the axion gradient energy. The relevant equations are given by $\displaystyle\dot{\mathcal{P}}_{\chi}^{(2)}+2H\mathcal{P}_{\chi}^{(2)}-2\mathcal{P}_{\chi\dot{\chi}}^{(2)}=0\,,$ $\displaystyle\dot{\mathcal{P}}_{\chi\dot{\chi}}^{(2)}+5H\mathcal{P}_{\chi\dot{\chi}}^{(2)}+m_{\phi}^{2}\mathcal{P}_{\chi}^{(2)}+\frac{\beta}{M_{P}}\mathcal{B}_{\chi;EB}^{(2;0,0)}-\mathcal{P}_{\dot{\chi}}^{(2)}=0\,,$ $\displaystyle\dot{\mathcal{P}}_{\dot{\chi}}^{(2)}+8H\mathcal{P}_{\dot{\chi}}^{(2)}+2m_{\phi}^{2}\mathcal{P}_{\chi\dot{\chi}}^{(2)}+\frac{2\beta}{M_{P}}\mathcal{B}_{\dot{\chi};EB}^{(2;0,0)}=0\,.$ (A.14) Obviously we need to compute $\mathcal{B}_{f;EB}^{(2;0,0)}$. For this, it is enough to consider only the following equations: $\displaystyle\dot{\mathcal{B}}_{\chi;B}^{(2;0,0)}+6H\mathcal{B}_{\chi;B}^{(2;0,0)}-\mathcal{B}_{\dot{\chi};B}^{(2;0,0)}=0\,,$ $\displaystyle\dot{\mathcal{B}}_{\chi;EB}^{(2;0,0)}+6H\mathcal{B}_{\chi;EB}^{(2;0,0)}-\frac{\beta}{M_{P}}\left(\dot{\phi}\mathcal{B}_{\chi;B}^{(2;0,0)}+\mathcal{P}_{\chi\dot{\chi}}^{(2)}\mathcal{P}_{B}^{(0)}\right)-\mathcal{B}_{\dot{\chi};EB}^{(2;0,0)}=0\,,$ (A.15) and $\displaystyle\dot{\mathcal{B}}_{\dot{\chi};B}^{(2;0,0)}+9H\mathcal{B}_{\dot{\chi};B}^{(2;0,0)}+m_{\phi}^{2}\mathcal{B}_{\chi;B}^{(2;0,0)}-\frac{2\beta}{3M_{P}}\left(\mathcal{P}_{EB}^{(0)}\mathcal{P}_{B}^{(2)}+\mathcal{P}_{B}^{(0)}\mathcal{P}_{EB}^{(2)}-\mathcal{P}_{EB}^{(1)}\mathcal{P}_{B}^{(1)}\right)=0\,,$ $\displaystyle\dot{\mathcal{B}}_{\dot{\chi};EB}^{(2;0,0)}+9H\mathcal{B}_{\dot{\chi};EB}^{(2;0,0)}+m_{\phi}^{2}\mathcal{B}_{\chi;EB}^{(2;0,0)}-\frac{\beta}{M_{P}}\left(\dot{\phi}\mathcal{B}_{\dot{\chi};B}^{(2;0,0)}+\mathcal{P}_{\dot{\chi}}^{(2)}\mathcal{P}_{B}^{(0)}\right)$ (A.16) $\displaystyle~{}~{}~{}~{}~{}-\frac{\beta}{3M_{P}}\left[\mathcal{P}_{B}^{(0)}\mathcal{P}_{E}^{(2)}+\mathcal{P}_{E}^{(0)}\mathcal{P}_{B}^{(2)}+2\mathcal{P}_{EB}^{(0)}\mathcal{P}_{EB}^{(2)}-\mathcal{P}_{B}^{(1)}\mathcal{P}_{E}^{(1)}-\left(\mathcal{P}_{EB}^{(1)}\right)^{2}-\frac{\beta^{2}}{3M_{P}^{2}}\mathcal{P}_{\chi}^{(2)}\left(\mathcal{P}_{B}^{(0)}\right)^{2}\right]=0\,,$ where we used the factorization $\displaystyle\left\langle{\partial_{i}(\vec{E}\cdot\vec{B})\partial_{i}(\vec{X}\cdot\vec{Y})}\right\rangle$ $\displaystyle\simeq\frac{1}{3}\left(\left\langle{\vec{E}^{(1)}\cdot\vec{X}^{(1)}}\right\rangle+\left\langle{(\vec{\nabla}\cdot\vec{E})(\vec{\nabla}\cdot\vec{X})}\right\rangle\right)\left\langle{\vec{B}\cdot\vec{Y}}\right\rangle-\frac{1}{6}\left\langle{\vec{E}^{(1)}\cdot\vec{Y}}\right\rangle\left\langle{\vec{B}\cdot\vec{X}^{(1)}}\right\rangle$ $\displaystyle+(X\leftrightarrow Y)+(E\leftrightarrow B)+(X\leftrightarrow Y,E\leftrightarrow B)\,.$ (A.17) Here we use the short-hand notation $\vec{X}^{(n)}=(\vec{\nabla}\times)^{n}\vec{X}$. These are closed by themselves, and hence we do not need to compute e.g. $\mathcal{B}_{f;E}^{(2;0,0)}$ nor $\mathcal{B}_{f;EB}^{(0;2,0)}$. ### A.3 Numerical implementation The system of the equations described above has to be solved numerically. 
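Schematically, the full system is a large coupled first-order ODE system. As an illustration of its structure (ours, not the production code, which is written in C++ and evolves the full tower of $2$- and $3$-point functions simultaneously), the homogeneous sector of Eq. (A.13) can be written as:

```python
import numpy as np

def background_rhs(y, m_phi, beta, M_P, P_EB_0, P_chidot_0, P_chi_0, P_E_0, P_B_0):
    """Right-hand side of the homogeneous sector, Eq. (A.13): y = (phi, phidot, ln a).
    The 2-point functions are passed in here; in the full system they are evolved
    simultaneously through the tower of equations listed above."""
    phi, phidot, _ = y
    H = np.sqrt((phidot**2 + P_chidot_0 + m_phi**2 * (phi**2 + P_chi_0)
                 + P_E_0 + P_B_0) / (6 * M_P**2))      # Friedmann constraint
    phiddot = -3 * H * phidot - m_phi**2 * phi - beta / M_P * P_EB_0
    return np.array([phidot, phiddot, H])
```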
It is then convenient to apply the following variable re-definitions $\displaystyle\begin{split}&\bar{\phi}=\frac{\phi}{M_{P}}\,,\,\,\bar{\mathcal{P}}_{X}^{(n)}=\frac{\mathcal{P}_{X}^{(n)}}{H_{0}^{4}\left(k_{h}/a\right)^{n}}\,,\,\,\bar{t}=H_{0}t\,,\,\,\bar{H}=\frac{H}{H_{0}}\,,\,\,\bar{m}=\frac{m}{H_{0}}\,,\,\,\frac{\bar{k}_{h}}{\bar{a}}=\frac{k_{h}}{aH_{0}}\,,\,\,\\\ &\bar{\mathcal{P}}_{\chi}^{(n)}=\frac{\mathcal{P}_{\chi}^{(n)}}{M_{P}^{2}\left(k_{h}/a\right)^{n}}\,,\,\,\bar{\mathcal{P}}_{\dot{\chi}}^{(n)}=\frac{\mathcal{P}_{\dot{\chi}}^{(n)}}{M_{P}^{2}H_{0}^{2}\left(k_{h}/a\right)^{n}}\,,\,\,\bar{\mathcal{P}}_{\chi\dot{\chi}}^{(n)}=\frac{\mathcal{P}_{\chi\dot{\chi}}^{(n)}}{M_{P}^{2}H_{0}\left(k_{h}/a\right)^{n}}\,,\,\,\\\ &\bar{\mathcal{B}}_{\chi;X}^{(2l;n,m)}=\frac{\mathcal{B}_{\chi;X}^{(2l;n,m)}}{M_{P}H_{0}^{4}\left(k_{h}/a\right)^{2l+n+m}}\,,\,\,\bar{\mathcal{B}}_{\dot{\chi};X}^{(2l;n,m)}=\frac{\mathcal{B}_{\dot{\chi};X}^{(2l;n,m)}}{M_{P}H_{0}^{5}\left(k_{h}/a\right)^{2l+n+m}}\,,\end{split}$ (A.18) where $H_{0}$ is the Hubble parameter at the beginning of the simulation. The normalization of the power spectra and bispectra by respective powers of $(k_{h}/a)$ is found to be crucial to numerically evolve the tower of the $2$-point functions of Eqs. (2.29)–(2.31) to high $n$. Indeed, we empirically know that $(k_{h}/aH_{0})\sim\mathcal{O}(0.01)$ at the end of inflation, and hence a different normalization (e.g. by $H_{0}$ instead of $k_{h}/a$) can lead to extremely tiny numerical values for high enough $n$. Our normalization ensures that each term of the gradient expansion of a given $p$-point function is of the same order, resulting in a more numerically stable system. We use the multistep Adams method of the GNU Scientific Library, implemented in C++, which integrates the full system in $\mathcal{O}(1\,\mathrm{s})$. For all plots in this paper we take the initial conditions and parameters following Ref. [30], i.e., $\displaystyle\phi=-15.55\,M_{P}\,,\quad\dot{\phi}=\sqrt{\frac{2}{3}}m_{\phi}M_{P}\,,\quad a=1\,,\quad\mathcal{P}_{X}^{(n)}=0\,,\quad\mathcal{B}_{f;X}^{(2l;n,m)}=0\,,$ (A.19) at the beginning of our simulation, with the reduced Planck mass $M_{P}=2.435\times 10^{18}\,\mathrm{GeV}$. We take the axion mass as $\displaystyle\frac{m_{\phi}}{M_{P}}=6.16\times 10^{-6}\,,$ (A.20) following Ref. [35]. The initial Hubble parameter $H_{0}$ is computed as $\displaystyle H_{0}=\sqrt{\frac{\dot{\phi}^{2}+m_{\phi}^{2}\phi^{2}}{6M_{P}^{2}}}\,.$ (A.21) Throughout this paper, we follow the electromagnetic 2-point functions up to $n=n_{\mathrm{max}}=250$, and express those with $n=n_{\mathrm{max}}+1$ by the truncation relation.
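For concreteness, the parameter choices of Eqs. (A.19)–(A.21) can be summarized in a few lines (a Python sketch with our own variable names; the production code is the C++ implementation described above):

```python
import numpy as np

M_P   = 2.435e18              # reduced Planck mass in GeV
m_phi = 6.16e-6 * M_P         # axion mass, Eq. (A.20)

phi_0    = -15.55 * M_P                      # Eq. (A.19)
phidot_0 = np.sqrt(2.0 / 3.0) * m_phi * M_P  # Eq. (A.19)
a_0      = 1.0                               # all 2- and 3-point functions start at 0

H_0 = np.sqrt((phidot_0**2 + m_phi**2 * phi_0**2) / (6 * M_P**2))  # Eq. (A.21)
print(H_0 / m_phi)            # ~ 6.4, i.e. H_0 is of order m_phi
```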
## Appendix B Truncation relation

Figure 4: For constant values of $\xi$ the truncation relation (2.32) holds asymptotically for large $n$, see Eq. (B.6). The left (right) panel depicts the exemplary case of $\xi=4$ ($\xi=6$), the horizontal line indicates the truncation relation assumed in the GEF in [30].

One of the key approximations in the gradient expansion formalism is the truncation of the tower of equations (2.10)–(2.12) through the relation (2.32). In this appendix we demonstrate that this relation holds asymptotically for large $n$ and approximately constant $\xi$. In contrast to the derivation given in Ref. [30], this does not rely on approximating the gauge field spectra by a power law around $k_{h}$, and explains why values of $n={\cal O}(100)$ are required to obtain accurate results in the GEF. It also illustrates how, when $\xi$ changes rapidly, imposing this truncation relation introduces sizable errors, as observed also in Ref. [34]. Finally, we will introduce an improved truncation relation which mitigates some of these effects. For simplicity, the discussion in this appendix is focused on the application of the GEF assuming a homogeneous axion field, as in Ref. [30]. The arguments can immediately be extended to include axion gradients as described in Sec. 2. Let us begin by studying the truncation relation assuming constant $\xi$. In the slow-roll approximation with constant $\xi$ the gauge field mode function $A_{k}(\tau)$ can be determined analytically by solving $\displaystyle\left(\partial_{x}^{2}+1\pm\frac{2\xi}{x}\right)\omega_{\mp}=0\,,$ (B.1) where we defined $\omega_{\pm}=\sqrt{2k}A_{k}^{\pm}(\tau)$ and $x=-k\tau$, see Eq. (2.27). Matching to the Bunch-Davies vacuum fixes $\displaystyle A_{k}^{+}(\tau)=\frac{1}{\sqrt{2k}}e^{\pi\xi/2}W_{-i\xi,1/2}(2ik\tau)\,.$ (B.2) The mode $A_{k}^{-}$ does not experience a tachyonic enhancement and can thus be safely neglected. A reasonable approximation of the mode function $A_{k}^{+}$ in the IR regime ($x\to 0$), which dominates the contribution to the $n$-point functions, is given by $\displaystyle A_{k}^{+}(\tau)\sim\frac{1}{\sqrt{2k}}\frac{e^{\pi\xi/2}}{\Gamma(1+i\xi)}e^{-\xi\sqrt{-k\tau}}\,.$ (B.3) For the $n$-point function of the $E$-field, see Eq. (2.24), the relevant quantity is $\displaystyle\left|\frac{\mathrm{d}A_{k}^{+}(\tau)}{\mathrm{d}\tau}\right|=\left|\frac{1}{\Gamma(1+i\xi)}\right|\frac{1}{2\sqrt{2}}\frac{\xi e^{\pi\xi/2}}{\sqrt{-\tau}}e^{-\xi\sqrt{-k\tau}}\,.$ (B.4) The integral of Eq. (2.24) can then be done analytically and evaluates to $\displaystyle\mathcal{P}_{E}^{(n)}=\frac{4^{-4-n}H^{4+n}}{2\pi^{2}|\Gamma(1+i\xi)|^{2}}e^{\pi\xi}\xi^{-2(2+n)}\left(\Gamma(6+2n)-\Gamma(6+2n,2\xi\sqrt{-k\tau})\right)\,.$ (B.5) The ratio $\mathcal{P}_{E}^{(n+2)}/\mathcal{P}_{E}^{(n)}$ is thus $\displaystyle\frac{\mathcal{P}_{E}^{(n+2)}}{\mathcal{P}_{E}^{(n)}}\frac{a^{2}}{k_{h}^{2}}=\frac{\bar{\mathcal{P}}_{E}^{(n+2)}}{\bar{\mathcal{P}}_{E}^{(n)}}=\frac{1}{(2\xi)^{6}}\frac{\Gamma(10+2n)-\Gamma(10+2n,2\sqrt{2}\xi^{3/2})}{\Gamma(6+2n)-\Gamma(6+2n,2\sqrt{2}\xi^{3/2})}\,,$ (B.6) where, as in the main text, we have introduced the notation $\bar{\mathcal{P}}_{X}^{(n)}\equiv\mathcal{P}_{X}^{(n)}/H_{0}^{4}(k_{h}/a)^{n}$. For large $n$, and any $\xi$, the ratio of Eq. (B.6) asymptotically reaches unity, $\displaystyle\mathrm{lim}_{n\to\infty}\frac{\mathcal{\bar{P}}_{E}^{(n+2)}}{\mathcal{\bar{P}}_{E}^{(n)}}=1\,,$ (B.7) with a deviation of $5\,\%$ for $n=45$ ($n=55$) for $\xi=4$ ($\xi=6$), see Fig. 4. This, on the one hand, confirms the truncation relation chosen in [30] in the constant $\xi$ limit, and, on the other hand, explains why large values of $n$ are required in order for this truncation relation to become relatively accurate. Similar conclusions can also be drawn for $\mathcal{\bar{P}}_{B}^{(n)}$ and $\mathcal{\bar{P}}_{EB}^{(n)}$.
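Equation (B.6) is easy to evaluate with standard incomplete gamma functions; the following sketch (ours) uses $\Gamma(s)-\Gamma(s,x)=\gamma(s,x)$ and reproduces the quoted $5\,\%$ deviations:

```python
import numpy as np
from scipy.special import gammainc, gammaln   # gammainc = regularized lower P(s, x)

def ratio(n, xi):
    """bar{P}_E^(n+2) / bar{P}_E^(n) of Eq. (B.6), using
    Gamma(s) - Gamma(s, x) = gamma(s, x) = Gamma(s) * P(s, x)."""
    s = 6 + 2 * n
    x = 2 * np.sqrt(2) * xi**1.5
    gamma_ratio = np.exp(gammaln(s + 4) - gammaln(s))
    return gamma_ratio * gammainc(s + 4, x) / gammainc(s, x) / (2 * xi)**6

for xi, n in [(4, 45), (6, 55)]:
    print(xi, n, ratio(n, xi))   # ~0.95 in both cases, i.e. the quoted 5% deviation
```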
The truncation relation remains a good approximation for slowly varying $\xi$, as can be seen in Fig. 5, which displays the result of a fully numerical solution of the GEF. In particular, we show the ratio $\mathcal{\bar{P}}_{E}^{(n+1)}/\mathcal{\bar{P}}_{E}^{(n-1)}$ for fixed e-fold $\mathcal{N}$ and $\beta=15$. The left panel shows an example from the weak backreaction regime, where $\xi$ changes only slowly and hence the truncation relation is relatively accurate at large $n$. On the other hand, the right panel shows the situation when the inflaton field velocity begins to undergo a more rapid change. In this case, we observe that the truncation relation remains a good approximation for a large range of $n\gtrsim 50$, although at $n\sim n_{\mathrm{max}}=250$ the onset of a violation of this relation can be observed.

Figure 5: Truncation relation (2.32) evaluated numerically for fixed $\beta=15$, showing good agreement for $n\gtrsim 100$ when the backreaction is weak (left panel) and mild (right panel). The horizontal line indicates the truncation relation assumed in the GEF.

Figure 6: GEF tower for rapidly changing $\xi$ using the truncation relation (2.32) (dotted black) and the improved truncation relation (2.33) involving the highest four (solid blue) and ten (dashed red) even powers of the GEF tower.

This violation becomes more dramatic when the inflaton velocity changes, and in particular drops, rapidly. This is displayed by the dotted black curve in Fig. 6, which shows ${\cal\bar{P}}_{E}^{(2n)}$ at a slightly later point in time, ${\cal N}=62$, for different values of $n$. For $50\lesssim 2n\lesssim 150$ we observe a plateau in ${\cal\bar{P}}_{E}^{(2n)}$, indicating that the truncation relation (2.32) is well satisfied. However, at larger values of $n$ we see very large deviations from the truncation relation, propagating over time down to lower values of $n$. If left unchecked, this will finally affect the $n=0$ components of the GEF tower. Similar effects were recently observed in Ref. [34], and similarly to those results we find that these difficulties occur when $\xi$ drops rapidly and $k_{h}$ features a plateau, see Eq. (2.28), and the boundary terms are zero. To counter this issue, we propose the improved truncation relation (2.33). Averaging over several $\mathcal{\bar{P}}_{E}^{(n+1-2l)}$ to determine the truncation relation for $\mathcal{\bar{P}}_{E}^{(n+1)}$, $\displaystyle\bar{\mathcal{P}}_{X}^{(n_{\mathrm{max}}+1)}=\sum_{l=1}^{L}(-1)^{l-1}\begin{pmatrix}L\\\ l\end{pmatrix}\bar{\mathcal{P}}_{X}^{(n_{\mathrm{max}}+1-2l)}$ gives a more robust procedure, as shown by the colored curves in Fig. 6. Compared to the original truncation relation (dotted black), we see that the striking and unphysical feature (‘bump’) at large $n$ has largely disappeared, at no additional computational cost. Nevertheless, smaller unphysical features (‘wiggles’) remain, which over time can numerically destabilize the system (see right panel). In particular, $\mathcal{P}_{E}^{(2n)}$ should always be positive, while the ‘wiggles’ contain negative valued $\mathcal{P}_{E}^{(2n)}$. Similar observations hold for the other $2$-point functions $\mathcal{\bar{P}}^{(n)}_{B}$ and $\mathcal{\bar{P}}^{(n)}_{EB}$ as well as for odd powers of gradients (although in the latter two cases positivity is not necessarily required). We were not able to conclusively determine the origin of these ‘wiggles’ nor to find a strategy to remove them. To illustrate their relevance, the grey shaded regions in Fig. 7 indicate the regions in which (even with the use of Eq. (2.33) with $L=10$) we obtain $|\bar{\mathcal{P}}_{X}^{(2n+2)}/\bar{\mathcal{P}}_{X}^{(2n)}-1|>0.1$, i.e. a significant violation of the truncation relation, for some values of $50\leq n\leq 75$ with $X=\\{E,B\\}$. As can be seen, this typically happens after a sharp drop in $\xi$. For a milder evolution of $\xi$, e.g. for $\beta=18$ or $20$, this issue does not arise.
In this context, it may be interesting to further study the proposal given in Ref. [34] based on re-initializing the GEF through the mode-by-mode method (which in turn only requires the input of the $n=0$ mode). Since it takes some time for these unphysical effects to propagate to the $n=0$ mode, this procedure might further improve the situation. To our understanding, this method was initially proposed to remove the ‘bump’ feature, which Eq. (2.33) achieves more efficiently. However, a similar method may prove useful to address the remaining issue of the ‘wiggles’. So far, the discussion in this appendix has focused on the homogeneous axion case. Despite the difficulties mentioned above, in practice, for the quadratic scalar potential studied here, the numerical issues in the higher orders of the GEF tower do not propagate to the lowest orders for any coupling $\beta$ considered here before the end of inflation (see however [34] for different background dynamics). Moreover, including the axion gradients, the remaining difficulties with maintaining a stable truncation relation only occur in the non-perturbative regime, i.e. for $R_{\chi}>0.5$, for all values of $\beta$ considered, and hence do not seem to pose a problem within the region of validity of the perturbative method proposed here. In this sense, the methods employed here ensure sufficient stability of the GEF scheme to study the perturbative regime of axion gradients, which is the main goal of this work. In summary, this appendix clarifies the origin of the truncation relation of the GEF as the asymptotic limit of the GEF tower for large $n$ and approximately constant $\xi$. This sheds some light on the limitations of the GEF formalism for rapidly varying $\xi$, and prompts the introduction of an improved truncation relation. This increases the stability and range of validity of the GEF approach. We point out a remaining instability, which however in practice does not impact the results of this work, as it does not occur within the perturbative regime of axion inhomogeneities.

Figure 7: Evolution of $\xi$ for $\beta=15$ (left) and $\beta=22$ (right), assuming a homogeneous axion field. The gray regions indicate a violation of the truncation relation above $10\%$, see text for details.

## References * [1] Planck Collaboration, Y. Akrami et al., “Planck 2018 results. X. Constraints on inflation,” Astron. Astrophys. 641 (2020) A10, arXiv:1807.06211 [astro-ph.CO]. * [2] Planck Collaboration, N. Aghanim et al., “Planck 2018 results. VI. Cosmological parameters,” Astron. Astrophys. 641 (2020) A6, arXiv:1807.06209 [astro-ph.CO]. [Erratum: Astron.Astrophys. 652, C4 (2021)]. * [3] K. Freese, J. A. Frieman, and A. V. Olinto, “Natural inflation with pseudo-Nambu-Goldstone bosons,” Phys. Rev. Lett. 65 (1990) 3233–3236. * [4] M. M. Anber and L. Sorbo, “Naturally inflating on steep potentials through electromagnetic dissipation,” Phys. Rev. D 81 (2010) 043534, arXiv:0908.4089 [hep-th]. * [5] A. Linde, S. Mooij, and E. Pajer, “Gauge field production in supergravity inflation: Local non-Gaussianity and primordial black holes,” Phys. Rev. D 87 no. 10, (2013) 103506, arXiv:1212.1693 [hep-th]. * [6] E. Bugaev and P. Klimai, “Axion inflation with gauge field production and primordial black holes,” Phys. Rev. D 90 no. 10, (2014) 103501, arXiv:1312.7435 [astro-ph.CO]. * [7] S.-L. Cheng, W. Lee, and K.-W. Ng, “Numerical study of pseudoscalar inflation with an axion-gauge field coupling,” Phys. Rev. D 93 no.
6, (2016) 063510, arXiv:1508.00251 [astro-ph.CO]. * [8] J. Garcia-Bellido, M. Peloso, and C. Unal, “Gravitational waves at interferometer scales and primordial black holes in axion inflation,” JCAP 12 (2016) 031, arXiv:1610.03763 [astro-ph.CO]. * [9] V. Domcke, F. Muia, M. Pieroni, and L. T. Witkowski, “PBH dark matter from axion inflation,” JCAP 07 (2017) 048, arXiv:1704.03464 [astro-ph.CO]. * [10] J. Garcia-Bellido, M. Peloso, and C. Unal, “Gravitational Wave signatures of inflationary models from Primordial Black Hole Dark Matter,” JCAP 09 (2017) 013, arXiv:1707.02441 [astro-ph.CO]. * [11] S.-L. Cheng, W. Lee, and K.-W. Ng, “Primordial black holes and associated gravitational waves in axion monodromy inflation,” JCAP 07 (2018) 001, arXiv:1801.09050 [astro-ph.CO]. * [12] L. Sorbo, “Parity violation in the Cosmic Microwave Background from a pseudoscalar inflaton,” JCAP 06 (2011) 003, arXiv:1101.1525 [astro-ph.CO]. * [13] J. L. Cook and L. Sorbo, “Particle production during inflation and gravitational waves detectable by ground-based interferometers,” Phys. Rev. D85 (2012) 023534, arXiv:1109.0022 [astro-ph.CO]. [Erratum: Phys. Rev.D86,069901(2012)]. * [14] N. Barnaby, E. Pajer, and M. Peloso, “Gauge Field Production in Axion Inflation: Consequences for Monodromy, non-Gaussianity in the CMB, and Gravitational Waves at Interferometers,” Phys. Rev. D85 (2012) 023525, arXiv:1110.3327 [astro-ph.CO]. * [15] N. Barnaby, R. Namba, and M. Peloso, “Phenomenology of a Pseudo-Scalar Inflaton: Naturally Large Nongaussianity,” JCAP 1104 (2011) 009, arXiv:1102.4333 [astro-ph.CO]. * [16] M. M. Anber and L. Sorbo, “Non-Gaussianities and chiral gravitational waves in natural steep inflation,” Phys. Rev. D85 (2012) 123537, arXiv:1203.5849 [astro-ph.CO]. * [17] V. Domcke, M. Pieroni, and P. Binétruy, “Primordial gravitational waves for universality classes of pseudoscalar inflation,” JCAP 1606 (2016) 031, arXiv:1603.01287 [astro-ph.CO]. * [18] J. Garcia-Bellido, A. Papageorgiou, M. Peloso, and L. Sorbo, “A flashing beacon in axion inflation: recurring bursts of gravitational waves in the strong backreaction regime,” arXiv:2303.13425 [astro-ph.CO]. * [19] W. D. Garretson, G. B. Field, and S. M. Carroll, “Primordial magnetic fields from pseudoGoldstone bosons,” Phys. Rev. D 46 (1992) 5346–5351, arXiv:hep-ph/9209238. * [20] M. M. Anber and L. Sorbo, “N-flationary magnetic fields,” JCAP 10 (2006) 018, arXiv:astro-ph/0606534. * [21] C. Caprini and L. Sorbo, “Adding helicity to inflationary magnetogenesis,” JCAP 10 (2014) 056, arXiv:1407.2809 [astro-ph.CO]. * [22] P. Adshead, J. T. Giblin, T. R. Scully, and E. I. Sfakianakis, “Magnetogenesis from axion inflation,” JCAP 10 (2016) 039, arXiv:1606.08474 [astro-ph.CO]. * [23] D. Jiménez, K. Kamada, K. Schmitz, and X.-J. Xu, “Baryon asymmetry and gravitational waves from pseudoscalar inflation,” JCAP 12 (2017) 011, arXiv:1707.07943 [hep-ph]. * [24] R. Durrer, O. Sobol, and S. Vilchinskii, “Backreaction from gauge fields produced during inflation,” Phys. Rev. D 108 no. 4, (2023) 043540, arXiv:2303.04583 [gr-qc]. * [25] M. M. Anber and E. Sabancilar, “Hypermagnetic Fields and Baryon Asymmetry from Pseudoscalar Inflation,” Phys. Rev. D 92 no. 10, (2015) 101501, arXiv:1507.00744 [hep-th]. * [26] V. Domcke, B. von Harling, E. Morgante, and K. Mukaida, “Baryogenesis from axion inflation,” JCAP 10 (2019) 032, arXiv:1905.13318 [hep-ph]. * [27] V. Domcke, K. Kamada, K. Mukaida, K. Schmitz, and M. 
Yamada, “Wash-in leptogenesis after axion inflation,” JHEP 01 (2023) 053, arXiv:2210.06412 [hep-ph]. * [28] V. Domcke and K. Mukaida, “Gauge Field and Fermion Production during Axion Inflation,” JCAP 11 (2018) 020, arXiv:1806.08769 [hep-ph]. * [29] V. Domcke, Y. Ema, and K. Mukaida, “Chiral Anomaly, Schwinger Effect, Euler-Heisenberg Lagrangian, and application to axion inflation,” JHEP 02 (2020) 055, arXiv:1910.01205 [hep-ph]. * [30] E. V. Gorbar, K. Schmitz, O. O. Sobol, and S. I. Vilchinskii, “Gauge-field production during axion inflation in the gradient expansion formalism,” Phys. Rev. D 104 no. 12, (2021) 123504, arXiv:2109.01651 [hep-ph]. * [31] G. Dall’Agata, S. González-Martín, A. Papageorgiou, and M. Peloso, “Warm dark energy,” JCAP 08 (2020) 032, arXiv:1912.09950 [hep-th]. * [32] V. Domcke, V. Guidetti, Y. Welling, and A. Westphal, “Resonant backreaction in axion inflation,” JCAP 09 (2020) 009, arXiv:2002.02952 [astro-ph.CO]. * [33] M. Peloso and L. Sorbo, “Instability in axion inflation with strong backreaction from gauge modes,” JCAP 01 (2023) 038, arXiv:2209.08131 [astro-ph.CO]. * [34] R. von Eckardstein, M. Peloso, K. Schmitz, O. Sobol, and L. Sorbo, “Axion inflation in the strong-backreaction regime: decay of the Anber-Sorbo solution,” arXiv:2309.04254 [hep-ph]. * [35] D. G. Figueroa, J. Lizarraga, A. Urio, and J. Urrestilla, “The strong backreaction regime in axion inflation,” arXiv:2303.17436 [astro-ph.CO]. * [36] D. G. Figueroa, A. Florio, F. Torrenti, and W. Valkenburg, “The art of simulating the early Universe – Part I,” JCAP 04 (2021) 035, arXiv:2006.15122 [astro-ph.CO]. * [37] D. G. Figueroa, A. Florio, F. Torrenti, and W. Valkenburg, “CosmoLattice: A modern code for lattice simulations of scalar and gauge field dynamics in an expanding universe,” Comput. Phys. Commun. 283 (2023) 108586, arXiv:2102.01031 [astro-ph.CO]. * [38] E. V. Gorbar, K. Schmitz, O. O. Sobol, and S. I. Vilchinskii, “Hypermagnetogenesis from axion inflation: Model-independent estimates,” Phys. Rev. D 105 no. 4, (2022) 043530, arXiv:2111.04712 [hep-ph]. * [39] O. O. Sobol, E. V. Gorbar, and S. I. Vilchinskii, “Backreaction of electromagnetic fields and the Schwinger effect in pseudoscalar inflation magnetogenesis,” Phys. Rev. D 100 no. 6, (2019) 063523, arXiv:1907.10443 [astro-ph.CO]. * [40] O. O. Sobol, A. V. Lysenko, E. V. Gorbar, and S. I. Vilchinskii, “Gradient expansion formalism for magnetogenesis in the kinetic coupling model,” Phys. Rev. D 102 no. 12, (2020) 123512, arXiv:2010.13587 [astro-ph.CO]. * [41] P. Adshead, J. T. Giblin, M. Pieroni, and Z. J. Weiner, “Constraining axion inflation with gravitational waves from preheating,” Phys. Rev. D 101 no. 8, (2020) 083534, arXiv:1909.12842 [astro-ph.CO]. * [42] P. Adshead, J. T. Giblin, M. Pieroni, and Z. J. Weiner, “Constraining Axion Inflation with Gravitational Waves across 29 Decades in Frequency,” Phys. Rev. Lett. 124 no. 17, (2020) 171301, arXiv:1909.12843 [astro-ph.CO].
# Performance studies of jet flavor tagging and measurement of $R_{b}(R_{c})$ using ParticleNet at CEPC

Libo Liao¹ <EMAIL_ADDRESS>, Shudong Wang²,⁴, Weimin Song³, Zhaoling Zhang³, Gang Li² <EMAIL_ADDRESS>(corresponding author)

¹ Guangxi Key Laboratory of Machine Vision and Intelligent Control, Wuzhou University, 82 Fumin Third Road, Wanxiu District, Wuzhou, China
² Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Road, Shijingshan District, Beijing, China
³ College of Physics, Jilin University, 2699 Qianjin Street, Changchun, China
⁴ University of Chinese Academy of Sciences, 19A Yuquan Road, Shijingshan District, Beijing, China

(Received: date / Revised version: date)

###### Abstract

Jet flavor tagging plays a crucial role in the measurement of the relative partial decay widths of the $Z$ boson, denoted as $R_{b}$($R_{c}$), which is considered a fundamental test of the Standard Model and a sensitive probe of new physics. In this study, a Deep Learning algorithm, ParticleNet, is employed to enhance the performance of jet flavor tagging. The combined efficiency and purity of $c$-tagging is improved by more than 50% compared to the Circular Electron Positron Collider (CEPC) baseline software. In order to measure $R_{b}$($R_{c}$) with this new flavor tagging approach, we have adopted the double-tagging method. The precision of $R_{b}$($R_{c}$) is improved significantly, in particular for $R_{c}$, whose statistical uncertainty is reduced by 40%.

###### pacs: 07.05.Kf Data analysis: algorithms and implementation; data management and 12.15.−y Electroweak interactions and 14.70.Hp Z bosons

## 1 Introduction

The measurement of the relative partial decay widths of the $Z$ boson, $R_{q}=\Gamma_{q\bar{q}}/\Gamma_{h}$, where $\Gamma_{q\bar{q}}$ and $\Gamma_{h}$ are the partial decay width of $Z\to q\bar{q}$ and the total hadronic decay width respectively, plays a crucial role in testing the Standard Model (SM) Glashow ; Weinberg and searching for new physics. In particular, $R_{b}$ is sensitive to loop corrections to the $Zb\bar{b}$ vertex and is therefore a potential probe of new physics contributions SUSY . The decay width to a quark-antiquark final state can be expressed as Vysotsky:1996he $\displaystyle\Gamma(Z\to q\bar{q})$ $\displaystyle=$ $\displaystyle\frac{G_{F}M_{Z}^{3}}{2\sqrt{2}\pi}(g^{2}_{Aq}R_{Aq}+g^{2}_{Vq}R_{Vq})~{},$ (1) where $g_{Aq}$ and $g_{Vq}$ are the axial and vector coupling constants, respectively, and $R_{Aq}$ and $R_{Vq}$ are radiation factors that account for the final state Quantum Electrodynamics (QED) and Quantum Chromodynamics (QCD) corrections. The electroweak radiative corrections to the propagator and the $Zq\bar{q}$ vertex are effectively accounted for in the $g_{A}$ and $g_{V}$ couplings. The QED and QCD corrections at first order are flavor blind and can be represented as $R_{Aq}\approx R_{Vq}\approx 1+\frac{\alpha_{s}(M_{Z})}{\pi}~{},$ (2) so that these factors largely cancel between the numerator and the denominator of the ratio $\Gamma_{q\bar{q}}/\Gamma_{h}$.
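As a rough numerical illustration of Eqs. (1) and (2), the following back-of-the-envelope sketch (our own; it uses tree-level couplings $g_{Aq}=T^{3}_{q}$ and $g_{Vq}=T^{3}_{q}-2Q_{q}\sin^{2}\theta_{W}$ with nominal input values, and ignores the flavor-dependent electroweak vertex corrections and quark-mass effects that precise predictions include) estimates the hadronic partial widths and the resulting $R_{b}$ and $R_{c}$:

```python
import math

G_F, M_Z = 1.1664e-5, 91.19      # GeV^-2, GeV (nominal values)
sin2_w, alpha_s = 0.2312, 0.118  # effective mixing angle and alpha_s(M_Z)

def gamma_qq(T3, Q):
    """Eq. (1) with tree-level couplings g_Aq = T3, g_Vq = T3 - 2 Q sin^2(theta_W)
    and the flavor-blind first-order factor of Eq. (2); the color factor N_c = 3
    is taken to be absorbed in the prefactor as Eq. (1) is written."""
    g_A, g_V = T3, T3 - 2 * Q * sin2_w
    R = 1 + alpha_s / math.pi
    return G_F * M_Z**3 / (2 * math.sqrt(2) * math.pi) * (g_A**2 + g_V**2) * R

down = gamma_qq(-0.5, -1 / 3)    # d, s, b (quark masses neglected)
up   = gamma_qq(+0.5, +2 / 3)    # u, c
gamma_h = 3 * down + 2 * up
print("R_b ~", down / gamma_h)   # ~0.220 at tree level
print("R_c ~", up / gamma_h)     # ~0.170 at tree level
```

The tree-level $R_{b}$ lands about 0.004 above the precise prediction quoted below; the bulk of the difference comes from the top-quark loop correction to the $Zb\bar{b}$ vertex, which is precisely what makes $R_{b}$ sensitive to new physics.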
The latest world averages of $R_{b}$ and $R_{c}$, which are dominated by the measurements of the experiments at LEP and the SLC L3:1999aer ; OPAL:1998kxc ; DELPHI:1998cnd ; ALEPH:1997xqy ; SLD:2005zyw ; ALEPH:2005ab , together with the combination results of the Gfitter Group Gfitter2018 for $R_{b}$ and $R_{c}$, are shown in Table 1. It is apparent that the theoretical uncertainties given by the Gfitter Group are smaller than the experimental ones by about two orders of magnitude. Therefore, reducing the uncertainties in both the experimental and theoretical domains is a promising way to search for new physics beyond the Standard Model.

Table 1: $R_{b}$ and $R_{c}$ values in experiment and Gfitter.

 | Experiment | Gfitter results
---|---|---
$R_{b}$ | $0.21629\pm 0.00066$ | $0.21582\pm 0.00011$
$R_{c}$ | $0.1721\pm 0.0030$ | $0.17224\pm 0.00008$

Various approaches have been used to measure $R_{b}$($R_{c}$), such as double tagging, multi-tagging, etc. However, the precision was limited by the statistics and the detector performance. Recently, a few electron-positron colliders, such as the CEPC CDR-D and the FCC-ee FCC:2018evy , were proposed to perform precision Higgs and electroweak studies. These facilities are going to deliver huge statistics of data at the $Z$ pole, at the $W$-pair threshold, and at about 240 GeV, where the production cross section of the Higgs-strahlung process is maximal. It is natural that these experiments will adopt both new detector and software technologies to achieve the best performance in the detection and reconstruction of physics objects, especially for jets. To measure $R_{b}$($R_{c}$), jets are essential physics objects. Therefore, good jet reconstruction algorithms are key ingredients, in particular jet flavor tagging. Jets from different quarks have different characteristics. For instance, the final states of $b$-jets usually have a wider energy distribution, and the vertex displacements of tracks in a $b$-jet are larger than those of other jets because of the long lifetimes of $b$-flavored hadrons, and so on. LCFIPlus LCFIPlus , based on the TMVA package TMVA2007 , is used for the International Linear Collider (ILC) ilc1 ; ilc2 , the CEPC, and the FCC-ee physics performance study and detector optimization. The CEPC delivers great $b/c$-tagging performance thanks to its high precision vertex detector. The $b$-jets can be tagged with an efficiency of 80% at a purity of 90%. Compared with $b$-tagging, $c$-jet tagging is particularly challenging, as charm hadrons have relatively shorter lifetimes than bottom ones and suffer from more background. Therefore, an efficiency of 60% and a purity of 60% can be achieved for $c$-jet tagging. The FCC-ee also investigated jet flavor tagging by developing its own deep learning flavor tagging tool, ParticleNetIDEA Bedeschi:2022rnj ; Gautam:2022szi . Its performance is reported to be commensurate with that obtained in the present study. In this article, the performance of jet flavor tagging of the CEPC baseline detector is improved using a new deep learning (DL) algorithm, ParticleNet Qu:2019gqs . In addition, another novel DL algorithm, Particle Flow Network (PFN) Komiske:2018cqr , is used for comparison and cross-checking. The article is organized as follows. The simulation, reconstruction software, and Monte Carlo (MC) samples are introduced in Section 2; the DL algorithms and the results of jet flavor tagging are presented in Section 3; then the measurement of $R_{b}$($R_{c}$) is discussed in Section 4; and a summary is given in Section 5. ## 2 Detector, software, and samples The study is based on the CEPC baseline detector, which is an advanced design evolved from the International Large Detector ild at the ILC and optimized to meet the physics requirements of the CEPC, as shown in Fig. 1.
The baseline detector is designed according to the Particle Flow Algorithm Arbor , which achieves better precision and efficiency for reconstructed objects by using the most suitable sub-detectors. From the inside out, the detector includes a silicon pixel vertex detector, a silicon tracker, a time projection chamber (TPC), a calorimetry system which includes an electromagnetic calorimeter (ECAL) and a hadronic calorimeter (HCAL) of very high granularity, and a muon detector embedded inside the return yoke of a solenoid magnet system which provides a magnetic field of 3 Tesla.

Figure 1: The CEPC baseline detector. The left is the $r-\phi$ view of the detector. In the barrel from inside to outside, the detector is composed of a silicon pixel vertex detector, a silicon inner tracker, a TPC, a silicon external tracker, an ECAL, an HCAL, a solenoid of 3 Tesla, and a muon detector. The right is the silicon pixel vertex detector, which consists of 3 concentric cylindrical double-layers of high spatial resolution.

The vertex detector consists of six layers of silicon pixel sensors at radii between 1.6 and 6.0 cm with excellent spatial resolution of $\sim 5~{}\mu\text{m}$. The resolution in the $r\phi$ plane can be parameterized by $\sigma_{r\phi}=a\oplus\frac{b}{p\text{(GeV)}\sin^{3/2}\theta},$ (3) where $\sigma_{r\phi}$ denotes the impact parameter resolution, $p$ is the track momentum, and $\theta$ is the polar track angle, with $a=5~{}\mu\text{m}$ and $b=10~{}\mu\text{m}\cdot\text{GeV}$. The silicon tracker is made of 4 components, which are the Silicon Inner Tracker, the Silicon External Tracker, the Forward Tracking Detector, and the End-cap Tracking Detector. The Time Projection Chamber is designed within the framework of the LCTPC collaboration LCTPC and provides a large number of hits to enhance the track finding efficiency. The ECAL and HCAL are each composed of 1 barrel and 2 end-cap sections. A detailed description of the CEPC baseline detector is given in Ref. CDR-D . The MC samples for this study are produced with the CEPC full simulation, reconstruction, and analysis framework cepcsoft . The physics processes are generated with WHIZARD 1.9.5 whizard . PYTHIA 6 pythia6 is then used for hadronization. MokkaPlus MokkaPlus , a GEANT4-based geant4 detector simulation tool, is used to model the detector response. Arbor Arbor is used to reconstruct physics objects including tracks, photons, and neutral hadrons, and LCFIPlus LCFIPlus is used to reconstruct (secondary) vertices and jets. Three hadronic decay modes of the $Z$ boson are used for jet flavor tagging in this study: $e^{+}e^{-}\to Z\to b\bar{b}$, $c\bar{c}$, and $q\bar{q}(u\bar{u}/d\bar{d}/s\bar{s})$. For each process, 450,000 events are produced, corresponding to 900,000 jets in total. The jets are reconstructed using the $e^{+}e^{-}$ $k_{t}$ algorithm in LCFIPlus LCFIPlus , where all particles, including the reconstructed primary and secondary vertices, are clustered into two jets.
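As a small aside on the tracking performance quoted above, Eq. (3) is simple to evaluate; a minimal helper (ours) with the CEPC parameters $a=5~\mu\text{m}$ and $b=10~\mu\text{m}\cdot\text{GeV}$:

```python
import math

def sigma_rphi_um(p_gev, theta, a_um=5.0, b_um_gev=10.0):
    """Impact parameter resolution in the r-phi plane, Eq. (3), in micrometers;
    the 'oplus' in Eq. (3) denotes addition in quadrature."""
    return math.hypot(a_um, b_um_gev / (p_gev * math.sin(theta) ** 1.5))

# A 10 GeV track at theta = 90 degrees is dominated by the constant term:
print(sigma_rphi_um(10.0, math.pi / 2))   # ~5.1 um
```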
## 3 Jet flavor tagging with ParticleNet

In this study, ParticleNet is utilized as the nominal algorithm, while PFN is employed as a comparison and cross-check method. Based on the particle cloud representation, which treats a jet as an unordered set of particles, an effective algorithm, ParticleNet, has been developed. It is a customized neural network model using a Dynamic Graph Convolutional Neural Network (DGCNN) DGCNN for jet tagging. ParticleNet has several advantages. First, it can deal with the varying number of particles in an event, which is common in experimental high energy physics. Second, the algorithm is designed to respect particle permutation invariance, i.e., it does not assume any particular ordering of the particles in a jet. Third, ParticleNet makes extensive use of EdgeConv DGCNN operations to update the graph representation dynamically. The study in Ref. DGCNN shows that it is beneficial to recompute the graph using nearest neighbors in the feature space produced by each layer. With dynamic graph updates, the jet (sub-)structure can be probed hierarchically, which leads to better performance than keeping the graph static. Last but not least, ParticleNet can exploit local neighborhood information explicitly, while most of the other DL algorithms can only use global symmetric features. ### 3.1 Visualizing the data sets The jet flavor tagging algorithm is based on features of the data sets. In this study, these features can be categorized into three types. The first type is related to jet kinematics, such as multiplicity, momentum distribution, etc. The second is the impact parameters of the charged tracks, which are very informative for $b$-tagging. The last one is the types of particles in a jet, i.e., particle identification (PID). One expects, for example, that the multiplicity of $b$-jets should be larger than that of the others because of the higher masses of $b$-flavored hadrons, that the tracks should have larger vertex displacements because of the longer lifetimes, etc. For the three types of jets to be studied, some distributions are shown in Fig. 2. The top panel of Fig. 2 shows the charged multiplicity versus the momenta of tracks, where it can be seen that the number of tracks in $b$ jets is slightly larger than those of $c$ jets and light ($q$) jets, which is consistent with the decay properties of the heavier $B$ hadrons. The distribution of impact parameters versus the momenta is shown in the middle panel of Fig. 2. Clear patterns can be observed: $b$ jets have a significant contribution of tracks with large impact parameters and high momenta compared with $c$ and $q$ jets. The bottom panel of Fig. 2 shows the momentum-weighted fractions of different particle types in the three physics processes. It is clear that $b$ quarks produce more energetic leptons, while $c$ quarks produce slightly more energetic kaons. All the above are consistent with our expectations. The inputs for each particle comprise kinematic information, i.e., the four-momentum ($p_{x}$, $p_{y}$, $p_{z}$, $E$), and the vertex displacements when available; the variables are listed in Tab. 2. The ($\cos\theta$, $\phi\sin\theta$) are used as coordinates Li:2020vav to compute the distances between particles in the first EdgeConv block. They are also used together with some other variables, such as $\Delta R$, PID, $E$, $Q$, $\log E$, $\log P$, $D_{0}$, $Z_{0}$, $D_{0}/\sigma_{D_{0}}$, $Z_{0}/\sigma_{Z_{0}}$, and prob, which is defined as $\text{prob}=\int_{\chi^{2}}^{\infty}p(x,N)dx,$ (4) where $\chi^{2}=(D_{0}/\sigma_{D_{0}})^{2}+(Z_{0}/\sigma_{Z_{0}})^{2}$, $p(x,N)$ is the probability density function of the chi-square distribution, and $N(=2)$ is the number of degrees of freedom.
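For two degrees of freedom, the integral in Eq. (4) has the closed form $\text{prob}=e^{-\chi^{2}/2}$, which provides a handy cross-check of any implementation. A minimal sketch (ours, using SciPy's chi-square survival function):

```python
import math
from scipy.stats import chi2

def prob(d0_sig, z0_sig, ndf=2):
    """Eq. (4): upper-tail chi-square probability of the combined
    impact parameter significances."""
    chisq = d0_sig**2 + z0_sig**2
    return chi2.sf(chisq, df=ndf)   # survival function: integral from chisq to infinity

# For ndf = 2 the integral is analytic, prob = exp(-chisq/2):
assert math.isclose(prob(1.0, 2.0), math.exp(-2.5), rel_tol=1e-12)
```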
Figure 2: The feature plots of $b$, $c$, and $q$ at jet level. The 2-dimensional diagram of charged multiplicity versus momentum distribution is shown in the top panel; the 2-dimensional diagrams of momentum versus $D_{0}$ are shown in the middle panel; and the fractions of all particle types in $b\bar{b}$, $c\bar{c}$, and $q\bar{q}$ weighted by momentum are shown in the bottom panel, where the PID is based on the MC truth.

Table 2: Variables used in the DL algorithms.

Variable | Definition
---|---
$\cos\theta$ | cosine of the polar angle of the particle
$\phi\sin\theta$ | azimuthal angle times the sine of the polar angle of the particle
$\Delta R$ | $\sqrt{\delta\theta^{2}+\delta\phi^{2}}$, angular separation between the particle and the jet axis
PID | particle ID
$E$ | energy of the particle
$Q$ | electric charge of the particle
$\log E$ | logarithm of the particle’s energy
$\log P$ | logarithm of the particle’s momentum
$D_{0}$ | impact parameter of the track in the $r$-$\phi$ plane
$Z_{0}$ | impact parameter of the track along the $z$ axis
$D_{0}/\sigma_{D_{0}}$ | significance of the impact parameter in the $r$-$\phi$ plane
$Z_{0}/\sigma_{Z_{0}}$ | significance of the impact parameter along the $z$ axis
prob | the probability for a given chi-squared and number of degrees of freedom

### 3.2 Deep learning algorithms and configuration

The ParticleNet used in this paper consists of three EdgeConv blocks, a global average pooling layer, and two fully connected layers. The number of channels $C$ for each EdgeConv block is (64, 64, 64), (128, 128, 128), and (256, 256, 256), respectively. After the EdgeConv blocks, a channel-wise global average pooling operation is applied to aggregate the learned features over all particles in the cloud. This is followed by a fully connected layer with 256 neurons and the ReLU activation relu . A dropout layer dropout with a drop probability of 0.1 is included to prevent overfitting. A fully connected layer with $N$ neurons, followed by a softmax function, is used to generate the output, where $N$ is the number of categories in the classification task. The number of nearest neighbors $k$, common to all three blocks, is optimized, and $k=12$ is found to be optimal for jet tagging. The configuration of PFN is taken directly from Ref. Komiske:2018cqr , since it is only used for cross-checking.
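A highly simplified PyTorch sketch of the configuration described above is given below (our own illustration; this study used the published ParticleNet implementation, and details such as the 13 input features per particle, inferred from Tab. 2, are our assumptions). Unlike the real model, which builds the first $k$-NN graph in the ($\cos\theta$, $\phi\sin\theta$) coordinates, this sketch uses the current feature space throughout:

```python
import torch
import torch.nn as nn

def knn(x, k):
    """Indices of the k nearest neighbors of each point; distances are taken
    in the current feature space (x: batch x channels x points)."""
    d = torch.cdist(x.transpose(1, 2), x.transpose(1, 2))
    return d.topk(k + 1, largest=False).indices[..., 1:]     # drop self-match

class EdgeConv(nn.Module):
    """Simplified EdgeConv block with a residual shortcut."""
    def __init__(self, c_in, c_outs, k):
        super().__init__()
        self.k = k
        layers, c = [], 2 * c_in
        for c_out in c_outs:
            layers += [nn.Conv2d(c, c_out, 1, bias=False),
                       nn.BatchNorm2d(c_out), nn.ReLU()]
            c = c_out
        self.mlp = nn.Sequential(*layers)
        self.shortcut = nn.Conv1d(c_in, c_outs[-1], 1, bias=False)

    def forward(self, x):                                    # x: (B, C, N)
        B, C, N = x.shape
        idx = knn(x, self.k)                                 # (B, N, k)
        nbrs = torch.gather(x.unsqueeze(-1).expand(B, C, N, self.k), 2,
                            idx.unsqueeze(1).expand(B, C, N, self.k))
        ctr = x.unsqueeze(-1).expand_as(nbrs)
        edge = torch.cat([ctr, nbrs - ctr], dim=1)           # (B, 2C, N, k)
        out = self.mlp(edge).mean(dim=-1)                    # aggregate over the k edges
        return torch.relu(out + self.shortcut(x))

class ParticleNetSketch(nn.Module):
    def __init__(self, n_feat=13, n_classes=3, k=12):
        super().__init__()
        self.blocks = nn.ModuleList([EdgeConv(n_feat, (64, 64, 64), k),
                                     EdgeConv(64, (128, 128, 128), k),
                                     EdgeConv(128, (256, 256, 256), k)])
        self.head = nn.Sequential(nn.Linear(256, 256), nn.ReLU(),
                                  nn.Dropout(0.1), nn.Linear(256, n_classes))

    def forward(self, x):                                    # x: (B, n_feat, N)
        for blk in self.blocks:
            x = blk(x)
        return self.head(x.mean(dim=-1))                     # global average pooling

scores = torch.softmax(ParticleNetSketch()(torch.randn(2, 13, 60)), dim=1)
print(scores.shape)                                          # torch.Size([2, 3]): b, c, q
```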
Scaling the inference up to the full sample of $10^{12}$ $Z$ bosons would, however, be a challenge, one that is expected to be addressed by hardware development in the coming decades.

### 3.4 Performance

Both ParticleNet and PFN outperform LCFIPlus in jet flavor tagging. The accuracies of the two algorithms, defined as the fraction of correctly classified jets, are summarized in Tab. 3, together with those in Ref. yangfan . ParticleNet achieves an accuracy of about 87.6%, which is at least 9% better than the results in Ref. yangfan .

Table 3: The accuracy of different algorithms for jet flavor tagging. In this study, ParticleNet is trained 9 times using randomly initialized weights, and the results from the median-accuracy training are shown, while PFN is trained only once, as the uncertainty from randomly initialized weights is negligible.

Algorithm | ParticleNet | PFN | DNN | BDT | GBDT | gcforest | XGBoost
---|---|---|---|---|---|---|---
Accuracy | 0.876 | 0.850 | 0.788 | 0.776 | 0.794 | 0.785 | 0.801

The numerical results for the efficiencies and the area under the curve (AUC) of both algorithms for the different jet flavors are listed in Tab. 4. The efficiencies, also called recalls, are determined by assigning each jet to the class with the largest classifier score; they equal the corresponding diagonal terms of the confusion matrix shown in Fig. 3. The performance of $b$-tagging is always better than that of $c$- and $q$-tagging, an observation consistent with the results in Ref. yangfan .

Figure 3: The confusion matrix with ParticleNet. The training is repeated 9 times using randomly initialized weights, and the results of the training with median accuracy are adopted.

Figure 4: Efficiencies for selecting jets with the wrong flavor when tagging $b$ jets (left panel) and $c$ jets (right panel). The points and the rectangles are the efficiencies of the CEPC baseline and of this study, respectively. The training is repeated 9 times using randomly initialized weights, and the results of the training with median accuracy are shown.

The performance of ParticleNet and PFN is generally better than that in Ref. yangfan . This can be explained from two sides: first, much richer information about a jet is used, including the four-momenta, impact parameters, and PIDs; second, ParticleNet and PFN have a strong inductive bias for representing high energy events. ParticleNet outperforms PFN, which is consistent with the study in Ref. Qu:2019gqs , whose authors explained that "the Deep Sets (PFN) approach does not explicitly exploit the local spatial structure of particle clouds, but only processes the particle clouds in a global way."

Table 4: The performance of ParticleNet and PFN in jet tagging. ParticleNet is trained 9 times using randomly initialized weights, and the training with median accuracy is taken.

tag | ParticleNet Efficiency | ParticleNet AUC | PFN Efficiency | PFN AUC
---|---|---|---|---
$b$ | 0.908 | 0.986 | 0.870 | 0.979
$c$ | 0.798 | 0.951 | 0.765 | 0.930
$q$ | 0.923 | 0.974 | 0.911 | 0.966

An alternative way to present the flavor tagging performance is the tagging efficiency versus the corresponding wrong-flavor efficiencies, as plotted in Fig. 4. For $b$-tagging, the main background is from $c$ jets. For $c$-tagging, the situation is different: the main background is from light-flavor jets at efficiencies above 80%, while it is dominated by misidentified $b$ jets at lower efficiencies.
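To make the metric definitions concrete, the sketch below computes per-flavor efficiencies (the diagonal of the row-normalized confusion matrix) and one-vs-rest AUCs from classifier scores; the random inputs stand in for the test-sample predictions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def tagging_metrics(y_true, scores, labels=("b", "c", "q")):
    """Per-flavor efficiency (recall) and one-vs-rest AUC from classifier
    scores of shape (n_jets, 3); each jet is assigned its largest score."""
    y_pred = scores.argmax(axis=1)
    cm = confusion_matrix(y_true, y_pred, normalize="true")
    for k, name in enumerate(labels):
        eff = cm[k, k]                                   # diagonal term = recall
        auc = roc_auc_score(y_true == k, scores[:, k])   # one-vs-rest AUC
        print(f"{name}-tag: efficiency={eff:.3f}, AUC={auc:.3f}")
    return cm

# toy usage with random scores (real inputs would be the network outputs)
rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=1000)
s = rng.random((1000, 3))
tagging_metrics(y, s)
```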
To demonstrate the physics impact of jet flavor tagging, a detailed comparison in terms of the product of efficiency and purity, $\epsilon\rho$, is performed. Taking the measurement of $R_{b}$ ($R_{c}$) as an example, Eq. (5) gives the connection between its statistical uncertainty and $\epsilon\rho$; it has been known for decades that maximizing $\epsilon\rho$ is equivalent to minimizing the statistical uncertainty.

$(\Delta R_{i})^{2}\propto\frac{1}{\epsilon_{i}\rho_{i}}.$ (5)

To compare the performance of the various jet flavor tagging methods, some working points are chosen. Table 5 summarizes the numerical results, where LCFIPlus and XGBoost are taken as references. The table shows that the performance of ParticleNet is much better than that of the others, especially in $c$-tagging: ParticleNet is more than 50% better than LCFIPlus at a $c$-tagging efficiency of 60%. As a specific illustration of the impact, ParticleNet could improve the statistical uncertainty in counting $c$ jets by 30% compared with XGBoost. PFN achieves a comparable improvement and confirms the ParticleNet result.

Table 5: The product $\epsilon\times\rho$ of each method at different working points $\epsilon_{S}$, where the results of LCFIPlus are reported in Ref. CDR-D and the results of XGBoost are reported in Ref. yangfan .

tag | $\epsilon_{S}$(%) | LCFIPlus | XGBoost | ParticleNet | PFN
---|---|---|---|---|---
$b$ | 80 | - | 0.747 | 0.786 | 0.763
$b$ | 90 | 0.72 | 0.713 | 0.821 | 0.752
$c$ | 60 | 0.36 | - | 0.554 | 0.485
$c$ | 70 | - | - | 0.605 | 0.497
$c$ | 80 | - | 0.345 | 0.597 | 0.467
$c$ | 90 | - | 0.292 | 0.532 | 0.402

## 4 Measurement of relative decay width

At LEP, $R_{b}$ was measured with various methods based on counting the events with either one or both hemispheres tagged. In this study, a jet is akin to a hemisphere at LEP, and the term jet is used in the rest of this paper. The observed number of jets of flavor $i$ (single tag), $N_{s}^{i,\mathrm{obs}}$, and the observed number of jet pairs (double tag), $N_{d}^{i,\mathrm{obs}}$, are given by:

$\begin{split}N_{s}^{i,\mathrm{obs}}&=2N^{h,\mathrm{pro}}\cdot(R_{b}\varepsilon_{ib}+R_{c}\varepsilon_{ic}+R_{q}\varepsilon_{iq}),\\ N_{d}^{i,\mathrm{obs}}&=N^{h,\mathrm{pro}}\cdot[R_{b}\varepsilon_{ib}^{2}(1+C_{ib})+R_{c}\varepsilon_{ic}^{2}(1+C_{ic})+R_{q}\varepsilon_{iq}^{2}(1+C_{iq})],\end{split}$ (6)

where $i,j=b,c,q$ are jet flavors, $C_{ij}$ is the correlation between a jet pair of flavor $j$ when both are tagged as $i$, $\varepsilon_{ij}$ is the efficiency of a $j$ jet being tagged as an $i$ jet, $N^{h,\mathrm{pro}}$ is the total number of hadronic $Z$ events produced in collisions, and $R_{i}$ is the relative decay width of the $Z$ into a jet pair of flavor $i$. The $R_{c}$ measurement is more challenging than that of $R_{b}$, since $c$-tagging has a lower efficiency and a lower purity than $b$-tagging; therefore, several methods have been employed, such as the double tag measurement, charm counting, etc. In fact, the key ingredient of a relative partial width measurement is classifying signal and background correctly, i.e., jet flavor tagging. To measure $R_{b}$ ($R_{c}$), the double tag method is deployed: once a working point is determined, Eq. (6) is solved for $R_{b}$ ($R_{c}$), with $R_{q}=1-R_{b}-R_{c}$ by definition. All the $\varepsilon_{ij}$ can be determined by MC simulation, and the correlations between jets are neglected for the moment. Signal regions of $b$, $c$, and $q$ candidates are defined by the red lines, i.e., the working point, in Fig. 5. Each region yields one single-tag and one double-tag equation, six equations in total; the resulting over-determined system is solved by the least-squares method. Using the same integrated luminosity as assumed in Ref. Li:2021zlv , a toy MC approach is used to calculate the statistical uncertainties of $R_{b}$ ($R_{c}$): a total number of $10^{11}$ hadronic $Z$ decay events is sampled according to a Poisson distribution; this number is then split into the three categories $b\bar{b}$, $c\bar{c}$, and $q\bar{q}$ according to a multinomial distribution; the detection and selection procedures are likewise simulated with multinomial sampling; and the three observed numbers are finally obtained by summing the sampling results. $R_{b}$ ($R_{c}$) and its statistical uncertainty are then computed with the least-squares method.
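The toy-MC procedure can be sketched in a few lines. The efficiency matrix `eps` below is an illustrative assumption (not the paper's numbers), the input widths are roughly the SM values, and jet-jet correlations are neglected as in the text, with each tag treated independently for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)
R_true = np.array([0.2158, 0.1722, 0.6120])          # (R_b, R_c, R_q), ~SM values
eps = np.array([[0.90, 0.05, 0.01],                   # illustrative tagging matrix:
                [0.06, 0.80, 0.07],                   # eps[i, j] = P(tag i | flavor j)
                [0.04, 0.15, 0.92]])

def one_toy(n_z=10**11):
    """One pseudo-experiment of the double-tag measurement."""
    n_had = rng.poisson(n_z)
    n_flav = rng.multinomial(n_had, R_true)           # split into bb, cc, qq events
    n_s = np.array([rng.binomial(2 * n_flav, eps[i]).sum() for i in range(3)])
    n_d = np.array([rng.binomial(n_flav, eps[i] ** 2).sum() for i in range(3)])
    # 6 linear equations in (R_b, R_c) after substituting R_q = 1 - R_b - R_c
    y = np.concatenate([n_s / (2 * n_had) - eps[:, 2],
                        n_d / n_had - eps[:, 2] ** 2])
    A = np.vstack([np.column_stack([eps[:, 0] - eps[:, 2], eps[:, 1] - eps[:, 2]]),
                   np.column_stack([eps[:, 0]**2 - eps[:, 2]**2,
                                    eps[:, 1]**2 - eps[:, 2]**2])])
    return np.linalg.lstsq(A, y, rcond=None)[0]       # least-squares (R_b, R_c)

fits = np.array([one_toy() for _ in range(100)])
print("sigma(R_b), sigma(R_c):", fits.std(axis=0))    # statistical uncertainties
```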
Figure 5: The two-dimensional distribution of $b$-likeness versus $c$-likeness, where the red lines indicate one possible jet classification. The upper left triangle is the candidate area of $b$, the lower left rectangle is the candidate area of $q$, and the lower right triangle is the candidate area of $c$.

The results are summarized in Table 6. The measurements of LEP/SLC ALEPH:2005ab , FCC-ee Bernardi:2022hny , and Ref. Li:2021zlv are also listed for comparison purposes. The uncertainties of the relative decay widths at LEP/SLC ALEPH:2005ab are primarily limited by statistics. The template fit Li:2021zlv achieved excellent precision by using a much larger sample size and more information. The double tag method achieves a precision comparable to the template fit on $R_{b}$, while for $R_{c}$ the precision is improved by nearly 40%, thanks to the superior $c$-tagging performance of the DL algorithm compared to LCFIPlus. The statistical uncertainty quoted for FCC-ee is $0.3\times 10^{-6}$; however, it should be noted that the statistics assumed at FCC-ee are 50 times larger than those used in Ref. Li:2021zlv and in this study. If the same integrated luminosity is assumed, the FCC-ee uncertainty becomes $2.1\times 10^{-6}$. The results of Ref. Li:2021zlv and of this study are thus considerably better, because FCC-ee simply extrapolates the LEP results, while the other two studies employ innovative analysis methods and enhanced detector designs.

Table 6: Statistical uncertainties ($10^{-6}$) of the relative decay widths. The results of LEP/SLC ALEPH:2005ab , FCC-ee Bernardi:2022hny , and the template fit Li:2021zlv are reported. The flavor tagging methods employed in the template fit and the double tag are also listed.

 | $\sigma_{R_{b}}$ | $\sigma_{R_{c}}$ | $\sigma_{R_{q}}$ | flavor tagging method
---|---|---|---|---
LEP+SLC | 659 | 3015 | - | -
FCC-ee | 2.1(0.3) | - | - | -
Template fit | 1.2 | 2.3 | 2.1 | LCFIPlus
Double tag | 1.3 | 1.4 | - | ParticleNet

## 5 Summary and discussion

This study utilizes two DL algorithms to enhance the performance of jet flavor tagging. ParticleNet, in particular, shows significant improvement, especially with regard to $c$-tagging: in terms of the product of purity and efficiency, $c$-tagging is improved by over 50% compared to the CEPC baseline software at a $c$-tagging efficiency of 60%. It is understandable why ParticleNet achieves significantly better performance: compared to the traditional methods, ParticleNet can maximize the usage of the information in a jet, as it takes lower-level information, such as momenta, energies, and impact parameters, as input.
On the other hand, the point-cloud (set) representation, which preserves some important symmetries, has better expressive power for jets Lorentz-symmetry . $R_{b}$ ($R_{c}$) is used as a test bed to demonstrate the physics impact of the new DL algorithm. The results indicate that the precision of $R_{c}$ can be improved by a factor of 1.6 compared to Ref. Li:2021zlv . In a high-precision study such as this, the systematic uncertainties pose a significant challenge and require careful investigation in future studies.

#### Acknowledgements

This work is partially supported by the National Natural Science Foundation of China (NSFC) (12075271, 12047569) and the Guangxi University Young and Middle-aged Teachers Research Basic Research Ability Improvement Project (2023KY0707).

## References

* (1) S. L. Glashow, Partial Symmetries of Weak Interactions. Nucl. Phys. 22, 579-588 (1961). https://doi.org/10.1016/0029-5582(61)90469-2
* (2) S. Weinberg, A Model of Leptons. Phys. Rev. Lett. 19, 1264-1266 (1967). https://doi.org/10.1103/PhysRevLett.19.1264
* (3) J. R. Ellis, J. L. Lopez and D. V. Nanopoulos, Supersymmetry, supergravity and R(b) revisited in the light of LEP-2. Phys. Lett. B 397, 88-93 (1997). https://doi.org/10.1016/S0370-2693(97)00156-1. arXiv:hep-ph/9612376 [hep-ph]
* (4) M. I. Vysotsky, V. A. Novikov, L. B. Okun and A. N. Rozanov, Electroweak radiative corrections in $Z$ boson decays. Phys. Usp. 39, 503-538 (1996). https://doi.org/10.1070/PU1996v039n05ABEH000146. arXiv:hep-ph/9606253 [hep-ph]
* (5) M. Acciarri et al. [L3], Measurement of R($b$) and Br($b\to$ lepton neutrino $X$) at LEP using double tag methods. Eur. Phys. J. C 13, 47-61 (2000). https://doi.org/10.1007/s100520000296. arXiv:hep-ex/9909045 [hep-ex]
* (6) G. Abbiendi et al. [OPAL], A Measurement of R(b) using a double tagging method. Eur. Phys. J. C 8, 217-239 (1999). https://doi.org/10.1007/s100529901087. arXiv:hep-ex/9810002 [hep-ex]
* (7) P. Abreu et al. [DELPHI], A Precise measurement of the partial decay width ratio $R_{b}^{0}=\Gamma(b\bar{b})/\Gamma(\mathrm{had})$. Eur. Phys. J. C 10, 415-442 (1999). https://doi.org/10.1007/s100520050766
* (8) R. Barate et al. [ALEPH], A Measurement of R(b) using mutually exclusive tags. Phys. Lett. B 401, 163-175 (1997). https://doi.org/10.1016/S0370-2693(97)00407-3
* (9) K. Abe et al. [SLD], Measurement of the branching ratio of the Z0 into heavy quarks. Phys. Rev. D 71, 112004 (2005). https://doi.org/10.1103/PhysRevD.71.112004. arXiv:hep-ex/0503005 [hep-ex]
* (10) S. Schael et al. [ALEPH, DELPHI, L3, OPAL, SLD, LEP Electroweak Working Group, SLD Electroweak Group and SLD Heavy Flavour Group], Precision electroweak measurements on the $Z$ resonance. Phys. Rept. 427, 257-454 (2006). https://doi.org/10.1016/j.physrep.2005.12.006. arXiv:hep-ex/0509008 [hep-ex]
* (11) J. Haller, A. Hoecker, R. Kogler, K. Mönig, T. Peiffer and J. Stelzer, Update of the global electroweak fit and constraints on two-Higgs-doublet models. Eur. Phys. J. C 78, no.8, 675 (2018). https://doi.org/10.1140/epjc/s10052-018-6131-3. arXiv:1803.01853 [hep-ph]
* (12) CEPC Study Group, CEPC Conceptual Design Report: Volume 2 - Physics & Detector (2018). arXiv:1811.10545
* (13) A. Abada et al. [FCC], FCC-ee: The Lepton Collider: Future Circular Collider Conceptual Design Report Volume 2. Eur. Phys. J. ST 228, no.2, 261-623 (2019). https://doi.org/10.1140/epjst/e2019-900045-4
* (14) T. Suehara and T. Tanabe, LCFIPlus: A Framework for Jet Analysis in Linear Collider Studies. arXiv:1506.08371 [physics.ins-det]
* (15) A. Hoecker, P. Speckmayer, J. Stelzer, J. Therhaag, E. von Toerne, and H. Voss, TMVA: Toolkit for Multivariate Data Analysis. PoS ACAT 040 (2007). arXiv:physics/0703039
* (16) C. Adolphsen, M. Barone, B. Barish, K. Buesser, P. Burrows, J. Carwardine, J. Clark, H. Mainaud Durand, G. Dugan and E. Elsen, et al., The International Linear Collider Technical Design Report - Volume 3.I: Accelerator R&D in the Technical Design Phase. arXiv:1306.6353 [physics.acc-ph]
* (17) C. Adolphsen, M. Barone, B. Barish, K. Buesser, P. Burrows, J. Carwardine, J. Clark, H. Mainaud Durand, G. Dugan and E. Elsen, et al., The International Linear Collider Technical Design Report - Volume 3.II: Accelerator Baseline Design. arXiv:1306.6328 [physics.acc-ph]
* (18) F. Bedeschi, L. Gouskos and M. Selvaggi, Eur. Phys. J. C 82, 646 (2022)
* (19) K. Gautam [FCC], PoS ICHEP2022 (2022), 1147
* (20) H. Qu and L. Gouskos, ParticleNet: Jet Tagging via Particle Clouds. Phys. Rev. D 101, no.5, 056019 (2020). https://doi.org/10.1103/PhysRevD.101.056019. arXiv:1902.08570 [hep-ph]
* (21) P. T. Komiske, E. M. Metodiev and J. Thaler, Energy Flow Networks: Deep Sets for Particle Jets. JHEP 01, 121 (2019). https://doi.org/10.1007/JHEP01(2019)121. arXiv:1810.05165 [hep-ph]
* (22) T. Behnke, J. E. Brau, P. N. Burrows, J. Fuster, M. Peskin, M. Stanitzki, Y. Sugimoto, S. Yamada, H. Yamamoto and H. Abramowicz, et al., The International Linear Collider Technical Design Report - Volume 4: Detectors. arXiv:1306.6329 [physics.ins-det]
* (23) M. Ruan and H. Videau, Arbor, a new approach of the Particle Flow Algorithm. arXiv:1403.4784 [physics.ins-det]
* (24) LCTPC collaboration. https://www.lctpc.org/
* (25) M. Ruan, H. Zhao, G. Li, C. Fu, Z. Wang, X. Lou, D. Yu, V. Boudry, H. Videau and V. Balagura, et al., Reconstruction of physics objects at the Circular Electron Positron Collider with Arbor. Eur. Phys. J. C 78, no.5, 426 (2018). https://doi.org/10.1140/epjc/s10052-018-5876-z. arXiv:1806.04879 [hep-ex]
* (26) W. Kilian, T. Ohl and J. Reuter, WHIZARD - simulating multi-particle processes at LHC and ILC. Eur. Phys. J. C 71, 1742 (2011). https://doi.org/10.1140/epjc/s10052-011-1742-y
* (27) T. Sjostrand, S. Mrenna and P. Z. Skands, PYTHIA 6.4 Physics and Manual. JHEP 05, 026 (2006). https://doi.org/10.1088/1126-6708/2006/05/026. arXiv:hep-ph/0603175
* (28) P. Mora de Freitas et al., MOKKA: A detailed Geant4 simulation for the international linear collider detectors. https://flcwiki.desy.de/Mokka
* (29) S. Agostinelli et al. [GEANT4], GEANT4 - a simulation toolkit. Nucl. Instrum. Meth. A 506, 250-303 (2003). https://doi.org/10.1016/S0168-9002(03)01368-8
* (30) Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein and J. M. Solomon, Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. 38, 146 (2019).
* (31) L. Li, Y. Y. Li, T. Liu and S. J. Xu, JHEP 10, 018 (2020).
* (32) H. Gao, Z. Wang and S. Ji, ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions. IEEE Transactions on Pattern Analysis and Machine Intelligence 43 (2021): 2570-2581.
* (33) X. Glorot, A. Bordes, and Y. Bengio, Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, Vol. 15 (PMLR, Fort Lauderdale, FL, USA, 2011), pp. 315-323.
* (34) N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15, 1929-1958 (2014).
* (35) D. P. Kingma and J. Ba, Adam: A Method for Stochastic Optimization. CoRR (2015). arXiv:1412.6980
* (36) F. Yang, Flavor Tagging Using Machine Learning Algorithms. 2017. indico:FlavorTagging-Nov-7.pdf
* (37) B. Li, Y. Du, Z. Liang and B. Liu, Performance study of the relative decay width measurement in hadronic decays of $Z$ boson at CEPC by using the template method. Int. J. Mod. Phys. A 36, no.27, 2150207 (2021). https://doi.org/10.1142/S0217751X21502079
* (38) G. Bernardi, E. Brost, D. Denisov, G. Landsberg, M. Aleksa, D. d’Enterria, P. Janot, M. L. Mangano, M. Selvaggi and F. Zimmermann, et al. arXiv:2203.06520 [hep-ex]
* (39) C. Li, H. Qu, S. Qian, Q. Meng, S. Gong, J. Zhang, T. Y. Liu and Q. Li, Does Lorentz-symmetric design boost network performance in jet physics? arXiv:2208.07814 [hep-ph]
# Nonasymptotic Convergence Rate of Quasi-Monte Carlo: Applications to Linear Elliptic PDEs with Lognormal Coefficients and Importance Samplings

Yang Liu (KAUST SRI Center for Uncertainty Quantification in Computational Science and Engineering; ORCID 0000-0003-0778-3872) and Raúl Tempone (Alexander von Humboldt Professor in Mathematics for Uncertainty Quantification, RWTH Aachen University, 52062 Aachen, Germany; Computer, Electrical and Mathematical Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Kingdom of Saudi Arabia; ORCID 0000-0003-1967-4446)

###### Abstract

This study analyzes the nonasymptotic convergence behavior of the quasi-Monte Carlo (QMC) method with applications to linear elliptic partial differential equations (PDEs) with lognormal coefficients. Building upon the error analysis presented in (Owen, 2006), we derive a nonasymptotic convergence estimate that depends on the specific integrand, the input dimensionality, and the finite number of samples used in the QMC quadrature. We discuss the effects of the variance and the dimensionality of the input random variable. Then, we apply the QMC method with importance sampling (IS) to approximate deterministic, real-valued, bounded linear functionals that depend on the solution of a linear elliptic PDE with a lognormal diffusivity coefficient in bounded domains of $\mathbb{R}^{d}$, where the random coefficient is modeled as a stationary Gaussian random field parameterized by trigonometric and wavelet-type bases. We propose two types of IS distributions, analyze their effects on the QMC convergence rate, and observe the improvements.

###### keywords: Quasi-Monte Carlo , Importance sampling , Finite elements , Partial differential equations with random data , Lognormal diffusion , Wavelets ###### MSC: [2020] 65C05 , 65N50 , 65N22 , 35R60

## 1 Introduction

The quasi-Monte Carlo (QMC) method computes the expectation of a random variable using deterministic low-discrepancy sequences. The QMC method offers a convergence rate of $\mathcal{O}(n^{-1+\epsilon})$ for $\epsilon>0$, surpassing the Monte Carlo convergence rate of $\mathcal{O}(n^{-1/2})$. However, the effectiveness of the QMC method depends on the regularity of the integrand and the integration dimension. The integrand variation and the integration dimension dictate the classical upper bound for the QMC error, known as the Koksma–Hlawka inequality [34]. Concerns about the quality of point sets arise as the integration dimension increases [32, 49, 28, 12]. For instance, the Halton sequence may exhibit pathological behavior in certain dimensions; thus, researchers have proposed remedies [11, 39]. Despite these challenges, the QMC method has been successful in relatively high dimensions due to the “low effective dimension” [47, 48]. This notion first arose from the analysis of variance (ANOVA) decomposition in [9]. Several studies have aimed to minimize the effective dimension through various approaches, particularly in financial applications [7, 6, 46]. Nevertheless, an equivalence theorem in [50], an extension of the worst-case integration error analysis in [41], states that no decomposition method (including the Brownian bridge construction, principal component analysis, etc.) is consistently superior to the other methods across different payoff functions.
Nonetheless, considering the exact form of the payoff function, the authors of [23] introduced the linear transformation method, exploiting the nonuniqueness of the covariance matrix decomposition to find the most important dimensions; the columns of the transformation matrix are designed to optimize the variance contribution of each random variable. In addition to the effective dimension, the regularity of the integrand also affects QMC integration. In finance applications, option pricing problems can involve discontinuous functions. An orthogonal transformation was proposed in [51] to rotate a discontinuity involving a linear combination of the random variables so that it becomes parallel to the axes. The authors of [24] discussed the connection between the orthogonal and linear transformation methods. The authors of [19] characterized “QMC-friendly” axis-parallel discontinuities and explicitly derived the convergence rate for integrands with such discontinuities, supported by numerical experiments; although the optimal QMC convergence rate is not recovered, superiority over the MC method is shown. Conditioning is a well-known approach to reduce variance, but it can also improve smoothness [18, 52, 7, 4]. In [17], the authors demonstrated that, under certain conditions, all terms of the ANOVA decomposition except the one of highest order have infinite smoothness. In addition, the conditions under which preintegration works were provided in [13]. In [30], the authors proposed preintegrating over a subspace comprising a linear combination of the random input variables. Moreover, previous studies [25, 5] exploited regularity via the Fourier transform when the original function is nonsmooth. A partial differential equation (PDE) with random coefficients is another application in which the QMC method accelerates the convergence when computing statistical properties of a quantity of interest (QoI), often a functional of the PDE solution. In [44], the authors proved that the QMC worst-case error is independent of the dimension if the integrand belongs to a certain class of weighted Sobolev spaces. Another study [33] considered weights of the “product and order dependent” form to minimize the worst-case error and provided a fast method to design lattice rules according to the weights. The dimension-independent worst-case QMC errors for elliptic PDEs with affine-uniform and lognormal coefficients were analyzed in [29] and [16], respectively; in the latter case, the space is equipped with additional weight functions. Moreover, in [21, 26], the authors considered “product weights” in the weighted space and in the lattice rule design. More recent studies [15] and [40] have considered the median-of-means QMC estimator using lattice rules (without a specific design) and digital sequences, respectively. Despite the careful analysis of asymptotic convergence rates in the literature, the ideal QMC convergence rate $\mathcal{O}(n^{-1})$ is not always observed in practice. In this work, we aim to study the nonasymptotic convergence behavior of the QMC method and explain the often-observed suboptimal convergence rates. Specifically, certain integration problems involve integrand domains different from $[0,1]$, such as integration with respect to (w.r.t.) the Gaussian measure on $\mathbb{R}$. The inverse cumulative distribution function (CDF) transformation is applied for compatibility with QMC methods.
However, this transformation often introduces singularities at the boundaries. The work [37] characterized the boundary singularity with the boundary growth rate and made connections with the asymptotic QMC convergence rate. In particular, the author provided examples of integrands that blow up at the boundaries but still lead to the optimal QMC convergence rate. Building on this work, we aim to analyze the nonasymptotic QMC convergence rate for some examples involving lognormal random variables to explain the observed suboptimal rates. Lastly, importance sampling (IS) is a well-known method for variance reduction; we also aim to quantify the extent to which IS with certain proposal distributions improves the convergence rate. Some recent publications have delved into the realm of QMC methods for potentially unbounded integrands. One study [14] combined a robust mean estimator with QMC sampling. In parallel, the works [20] and [35] studied the QMC method with IS. While these studies provided valuable insights into the asymptotic convergence rate, they did not address the nonasymptotic behaviors frequently observed in practice. In contrast, our work develops a convergence rate model for QMC methods with a finite number of samples, offering potential pathways to enhance the practically observed convergence rate.

The paper is organized as follows. Section 2 discusses the nonasymptotic convergence rate. Next, Section 3 presents the convergence rate analysis for two examples: the expectation of a lognormal random variable and elliptic PDEs with lognormal coefficients. Then, Section 4 evaluates the effects of two kinds of IS distributions. Section 5 details the numerical results. Finally, Section 6 presents the conclusions.

## 2 Nonasymptotic convergence rate for the randomized QMC method

This section follows the proofs in [37] and modifies them accordingly to establish the nonasymptotic results. First, we introduce the notation. We are interested in the following integration problem:

$\displaystyle I(g)=\int_{[0,1]^{s}}g(\mathbf{t})d\mathbf{t},$ (1)

where $g:[0,1]^{s}\to\mathbb{R}$. We let $\mathcal{P}=\{\mathbf{t}_{1},\mathbf{t}_{2},\dotsc,\mathbf{t}_{n}\}$ be a point set in $[0,1]^{s}$. The QMC estimator for the integrand $g$ is given by

$\displaystyle\hat{I}_{n}(g)=\frac{1}{n}\sum_{i=1}^{n}g(\mathbf{t}_{i}),$ (2)

where $\{\mathbf{t}_{i}\}$ is a predesigned deterministic low-discrepancy sequence [34, 10]. A notion describing how uniformly the points in $\mathcal{P}$ are distributed is the star discrepancy $\Delta_{\mathcal{P}}^{*}$, given by

$\displaystyle\Delta_{\mathcal{P}}^{*}:=\sup_{\mathbf{x}\in[0,1]^{s}}\left\lvert\Delta_{n}(\mathbf{x};\mathcal{P})\right\rvert,$ (3)

where $\Delta_{n}(\mathbf{x};\mathcal{P})=\frac{1}{n}\sum_{j=1}^{n}\mathbbm{1}_{\mathbf{t}_{j}\in[0,\mathbf{x})}-\prod_{k=1}^{s}x_{k}$ is the difference between the proportion of points that belong to the set $[0,\mathbf{x})$ and the measure of this set; here $[0,\mathbf{x})$ denotes the tensor product over the dimensions, i.e., $[0,\mathbf{x})=\prod_{k=1}^{s}[0,x_{k})$. We expect a small difference for an evenly distributed point set. For some low-discrepancy sequences with fixed length $n$, we have

$\displaystyle\Delta_{\mathcal{P}}^{*}=\mathcal{O}(n^{-1}(\log n)^{s}),$ (4)

where the exponent of the logarithmic term becomes $s-1$ for an infinite sequence [34, 10].
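For intuition on (3)-(4), the following sketch compares the $L^{2}$-star discrepancy (a computable surrogate for the star discrepancy) of scrambled Sobol' points with that of i.i.d. uniform points; the sizes and dimension are illustrative choices.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
s = 5
for m in range(6, 13, 2):                      # n = 2^m points
    n = 2 ** m
    sobol = qmc.Sobol(d=s, scramble=True, seed=0).random(n)
    iid = rng.random((n, s))
    d_qmc = qmc.discrepancy(sobol, method="L2-star")
    d_mc = qmc.discrepancy(iid, method="L2-star")
    print(f"n={n:5d}  L2-star(Sobol)={d_qmc:.2e}  L2-star(iid)={d_mc:.2e}")
```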
In this work, we consider functions with a continuous first-order mixed derivative. The variation in the Hardy–Krause sense can be computed as follows:

$\displaystyle V_{HK}(g)=\sum_{\emptyset\neq\mathfrak{u}\subseteq\{1,2,\dotsc,s\}}\int_{[0,1]^{\lvert\mathfrak{u}\rvert}}\left\lvert{\partial^{\mathfrak{u}}g(\mathbf{t^{\mathfrak{u}}}\colon\mathbf{1^{-\mathfrak{u}}})}\right\rvert d\mathbf{t^{\mathfrak{u}}},$ (5)

where $\lvert\mathfrak{u}\rvert$ is the cardinality of the set $\mathfrak{u}$ and $\mathbf{y}={\mathbf{t^{\mathfrak{u}}}\colon\mathbf{1^{-\mathfrak{u}}}}\in[0,1]^{s}$ denotes the point in $[0,1]^{s}$ with ${y^{j}}={t^{j}}$ for $j\in\mathfrak{u}$ and $y^{j}=1$ otherwise. For the set $\mathfrak{u}=\{\mathfrak{u}_{1},\dotsc,\mathfrak{u}_{\lvert\mathfrak{u}\rvert}\}\subseteq\{1,\dotsc,s\}$, the mixed derivative $\partial^{\mathfrak{u}}g$ is explicitly given by

$\partial^{\mathfrak{u}}g(\mathbf{t})=\frac{\partial^{\lvert\mathfrak{u}\rvert}g}{\partial\mathbf{{t}}_{\mathfrak{u}}}(\mathbf{{t}})=\frac{\partial}{\partial t_{\mathfrak{u}_{\lvert\mathfrak{u}\rvert}}}\cdots\frac{\partial}{\partial t_{\mathfrak{u}_{1}}}g(\mathbf{{t}}),$

where the continuity ensures that the order of differentiation can be interchanged without changing the derivative. The Koksma–Hlawka inequality provides an error estimate for the QMC method:

$\displaystyle\left\lvert I(g)-\hat{I}_{n}(g)\right\rvert\leq V_{HK}(g)\Delta_{\mathcal{P}}^{*},$ (6)

where $V_{HK}(g)$ is the variation of $g$ in the Hardy–Krause sense. However, the use of a deterministic point set yields biased results. Randomization techniques have been introduced to address this problem, giving rise to the unbiased randomized QMC (RQMC) estimator [31]:

$\displaystyle\hat{I}_{R,n}(g)=\frac{1}{R}\sum_{r=1}^{R}\frac{1}{n}\sum_{i=1}^{n}g(\mathbf{t}_{i}\oplus\bm{\Delta}_{r})=\frac{1}{R}\sum_{r=1}^{R}\hat{I}_{n}^{(r)}(g),$ (7)

where $\mathbf{t}_{i}$ is the $i$th deterministic QMC quadrature point, $\bm{\Delta}_{r}$ represents the $r$th randomization, and $\oplus$ denotes the randomization operation. An example of such an operation is the random shift, where $\bm{\Delta}_{r}\sim U[0,1]^{s}$ and $\mathbf{a}\oplus\mathbf{b}=(\mathbf{a}+\mathbf{b})\textrm{ mod }1$, with the modulo taken component-wise. This random-shift approach is easy to implement and provides an unbiased integral estimator. Apart from the unbiasedness, the randomization also provides convenient access to the estimator variance. To prepare for the analysis of the RQMC method, we study the behavior of samples from the uniform distribution on $[0,1]^{s}$. More specifically, we consider the problem of evaluating the integral

$\displaystyle\begin{split}I(g)&=\int_{[0,1]^{s}}g(\mathbf{t})d\mathbf{t}\\ &=\int_{\mathbb{R}^{s}}g\circ\Phi(\mathbf{y})\rho(\mathbf{y})d\mathbf{y},\end{split}$ (8)

where $\circ$ denotes function composition, $\mathbf{y}=\Phi^{-1}(\mathbf{t})$, $\Phi^{-1}:[0,1]^{s}\to\mathbb{R}^{s}$ is the inverse CDF of the $s$-dimensional standard normal distribution, and $\rho$ represents its probability density function. Section 3 presents two concrete examples of $g$. Before we analyze such integrands, we study the behavior of uniform random variables.
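Before proceeding, the random-shift RQMC estimator (7) can be sketched in a few lines; this is a minimal illustration, where `g` is assumed vectorized over an $(n,s)$ array of points and $n$ should be a power of 2 for the Sobol' sequence.

```python
import numpy as np
from scipy.stats import qmc

def rqmc_estimate(g, s, n, R, seed=0):
    """Randomly shifted QMC estimator of Eq. (7): R independent
    uniform shifts of the same n Sobol' points."""
    rng = np.random.default_rng(seed)
    base = qmc.Sobol(d=s, scramble=False).random(n)    # deterministic points t_i
    estimates = np.empty(R)
    for r in range(R):
        shift = rng.random(s)                          # Delta_r ~ U[0,1]^s
        estimates[r] = g((base + shift) % 1.0).mean()  # component-wise mod 1
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(R)

# usage: integrate g(t) = prod_j t_j over [0,1]^3 (exact value 1/8)
est, se = rqmc_estimate(lambda t: t.prod(axis=1), s=3, n=2**10, R=16)
print(f"estimate = {est:.6f} +- {se:.1e}")
```

The sample standard deviation over the $R$ shifts is the convenient variance estimate mentioned above.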
### 2.1 Some properties of the uniform distribution

Lemma 1 introduces a useful property of the uniform distribution.

###### Lemma 1 (A bound for an $s$-dimensional uniform random variable).

Let each $\mathbf{{t}}_{i}$ in the sequence $\{\mathbf{{t}}_{i}\}_{i=1}^{\infty}$ be uniformly distributed over $[0,1]^{s}$, and define $E_{n}$ as the event $\{\prod_{j=1}^{s}t_{n}^{j}\leq C\cdot n^{-r}\}$ for $r>1$ and $C>0$. Then, we have

$\displaystyle\textnormal{Pr}\left(E_{n}\enspace i.o.\right)=0,$ (9)

where $i.o.$ stands for “infinitely often.”

The sequence $\{\mathbf{t}_{i}\}$ in Lemma 1 is not necessarily independent; this is the case for the RQMC method, where the points are designed to exhibit a negative correlation [53]. Lemma 1 is a slight modification of Lemma 4.1 in [37], where the minimum condition is removed. The proof follows [37], except that we correct an upper bound.

###### Proof.

Let $\mathbf{t}_{n}\sim U[0,1]^{s}$. We have

$\begin{split}\textnormal{Pr}\left(\prod_{j=1}^{s}t_{n}^{j}\leq C\cdot n^{-r}\right)&=\textnormal{Pr}\left(-2\log\left(\prod_{j=1}^{s}t_{n}^{j}\right)\geq 2r\log(n)-2\log(C)\right)\\ &=\textnormal{Pr}\left(\chi^{2}_{(2s)}\geq 2r\log(n)-2\log(C)\right)\\ &=\int_{2r\log(n)-2\log(C)}^{\infty}\frac{z^{s-1}e^{-z/2}}{2^{s}\Gamma(s)}dz\\ &=\int_{r\log(n)-\log(C)}^{\infty}\frac{y^{s-1}e^{-y}}{\Gamma(s)}dy,\end{split}$ (10)

where $\chi^{2}_{(2s)}$ denotes a chi-squared random variable with $2s$ degrees of freedom and $\Gamma(s)$ denotes the gamma function evaluated at $s$. Now, we use the upper bound derived in [43] for the incomplete gamma function:

$\displaystyle\Gamma(s,x)=\int_{x}^{\infty}y^{s-1}e^{-y}dy\leq G_{s}(x),$ (11)

where

$\displaystyle G_{s}(x)=\frac{(x+b_{s})^{s}-x^{s}}{sb_{s}}e^{-x}$ (12)

with $b_{s}=\Gamma(s+1)^{\frac{1}{s-1}}$. Then, we can bound (10) by

$\begin{split}\int_{r\log(n)-\log(C)}^{\infty}\frac{y^{s-1}e^{-y}}{\Gamma(s)}dy&\leq\frac{G_{s}(\log(n^{r}/C))}{\Gamma(s)}\\ &=\frac{\left(\log\left(n^{r}/C\right)+\Gamma(s+1)^{\frac{1}{s-1}}\right)^{s}-\left(\log(n^{r}/C)\right)^{s}}{\Gamma(s)\,s\,\Gamma(s+1)^{\frac{1}{s-1}}}\cdot\frac{C}{n^{r}}.\end{split}$ (13)

The last expression in (13) is summable over $n=1,\dotsc,+\infty$, yielding $\sum_{n=1}^{\infty}\textrm{Pr}(E_{n})<\infty$. Thus, by the Borel–Cantelli lemma,

$\textnormal{Pr}\left(E_{n}\enspace i.o.\right)=\textnormal{Pr}\left(\bigcap_{n=1}^{\infty}\bigcup_{k=n}^{\infty}E_{k}\right)=\lim_{n\to\infty}\textnormal{Pr}\left(\bigcup_{k=n}^{\infty}E_{k}\right)\leq\lim_{n\to\infty}\sum_{k=n}^{\infty}\textnormal{Pr}\left(E_{k}\right)=0.$

∎

###### Remark 1 (Results in earlier literature).

Lemma 4.1 in [37] states a stronger result, namely that

$\displaystyle\textnormal{Pr}\left(\min_{1\leq i\leq n}\prod_{j=1}^{s}t_{i}^{j}\leq C\cdot n^{-r}\enspace i.o.\right)=0.$ (14)

However, the additional condition $\min_{1\leq i\leq n}$ leads to a conclusion that differs from the statement (14). Specifically, assume that $\{\mathbf{t}_{i}\}_{i=1}^{n}$ are independently and identically distributed (i.i.d.) $U[0,1]^{s}$ and define the event $F_{n}=\{\min_{1\leq i\leq n}\prod_{j=1}^{s}t_{i}^{j}\leq n^{-r}\}$, choosing $C=1$ for simplicity.
Then,

$\begin{split}\textnormal{Pr}\left(F_{n}\enspace i.o.\right)&=\textnormal{Pr}\left(\bigcap_{n=1}^{\infty}\bigcup_{k=n}^{\infty}F_{k}\right)=\lim_{n\to\infty}\textnormal{Pr}\left(\bigcup_{k=n}^{\infty}F_{k}\right)\\ &\geq\lim_{n\to\infty}\textnormal{Pr}\left(F_{n}\right)=\lim_{n\to\infty}\left(1-\prod_{i=1}^{n}\textnormal{Pr}\left(\prod_{j=1}^{s}t_{i}^{j}>n^{-r}\right)\right).\end{split}$

For example, when $s=1$,

$\lim_{n\to\infty}\left(1-\prod_{i=1}^{n}\textnormal{Pr}\left(t_{i}>n^{-r}\right)\right)=\lim_{n\to\infty}\left(1-\left(1-n^{-r}\right)^{n}\right)=1-\exp\left(-\frac{1}{r}\right).$

###### Remark 2 (The nonunique constant $C$).

The constant $C$ in Lemma 1 is not uniquely determined. Nevertheless, we retain this constant $C$ in the estimation model. The event $E_{n}$, as defined in the above proof, occurs only finitely many times almost surely; in this sense, the samples are bounded away from the origin. Through symmetry arguments, we can similarly show that the samples are bounded away from all $2^{s}$ corners, which motivates the following hyperbolic set:

$\displaystyle K_{n,s}=\left\{\mathbf{t}\in[0,1]^{s}\mid\prod_{1\leq j\leq s}\min(t_{j},1-t_{j})\geq Cn^{-1}\right\},$ (15)

where we use the boundary case $n^{-1}$ instead of $n^{-r}$ from Lemma 1. We also set $Cn^{-1}\leq 1$ to exclude trivial cases.

Figure 1: Illustration of the hyperbolic set $K_{n,s}$ introduced in (15) using the linear scale (left) and the log scale (right) for various $n$, where $C=1$.

From Lemma 1, we see that the samples from the uniform distribution “diffuse” toward the corners at a certain rate. Figure 2 plots $n$ samples from the uniform distribution $U[0,1]^{2}$ and the corresponding reference boundaries $t_{1}t_{2}=n^{-1}$. The samples approach the corner as the sample size increases, and only a small portion of the samples lies outside the corresponding reference boundary.

Figure 2: Samples from a two-dimensional uniform distribution $U[0,1]^{2}$ with various sample sizes $n$, and the corresponding reference boundaries $t_{1}t_{2}=n^{-1}$.
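The picture above is easy to reproduce; the following sketch estimates the fraction of i.i.d. uniform samples falling outside $K_{n,s}$ for $C=1$ and $s=2$ (the sizes are illustrative).

```python
import numpy as np

# Fraction of n uniform points in [0,1]^2 outside the hyperbolic set
# K_{n,s} of Eq. (15), i.e., with prod_j min(t_j, 1-t_j) < C/n.
rng = np.random.default_rng(0)
s, C = 2, 1.0
for n in [2**8, 2**12, 2**16]:
    t = rng.random((n, s))
    outside = (np.minimum(t, 1.0 - t).prod(axis=1) < C / n).mean()
    print(f"n={n:6d}  fraction outside K_(n,s): {outside:.4f}")
```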
### 2.2 Integrand with infinite variation

We are often interested in integrands with infinite variation, for which the Koksma–Hlawka inequality (6) is not informative. However, there remain cases where the QMC method works. The study [19] considered discontinuities inside the integration domain and proved that the convergence rate is still superior to that of the Monte Carlo method as long as the discontinuity interface is parallel to some of the axes. Another study [37] explored integrands that blow up at the boundary of $[0,1]^{s}$ and derived convergence rates based on a boundary growth condition. We are interested in the latter case. Following the work [37], the set $K_{n,s}$ is introduced to split the integration domain, and the Sobol’ low-variation extension [45] $\tilde{g}$ extends $g$ from $K_{n,s}$ to $[0,1]^{s}$. The extension $\tilde{g}$ depends on $n$ and $s$ through the hyperbolic set $K_{n,s}$, but we suppress these dependences for the sake of simpler notation. Although the name in the literature is “low-variation,” we assure the reader that the extension $\tilde{g}$ indeed has finite variation. For completeness, we provide the details here. To this end, some notation needs to be introduced. For an index set $\mathfrak{u}\subseteq\{1,2,\dotsc,s\}$, $-\mathfrak{u}$ denotes its complement $\{1,2,\dotsc,s\}\setminus\mathfrak{u}$. The notation $\mathbf{z^{\mathfrak{u}}}:\mathbf{c^{-\mathfrak{u}}}$ is used to denote the point $\mathbf{y}\in[0,1]^{s}$ with ${y^{j}}={z^{j}}$ for $j\in\mathfrak{u}$ and $y^{j}=c^{j}$ otherwise, “concatenating” the vectors $\mathbf{z^{\mathfrak{u}}}$ and $\mathbf{c^{-\mathfrak{u}}}$. For simplicity, we assume that the derivative $\partial^{\mathfrak{u}}g$ of the integrand exists in $(0,1)^{s}$ for every $\mathfrak{u}\subseteq\{1,2,\dotsc,s\}$. Given an anchor point $\mathbf{c}\in K_{n,s}$ and using the fundamental theorem of calculus, we obtain

$\displaystyle g(\mathbf{t})=g(\mathbf{c})+\sum_{\mathfrak{u}\neq\emptyset}\int_{[\mathbf{c^{\mathfrak{u}}},\mathbf{t^{\mathfrak{u}}}]}\partial^{\mathfrak{u}}g(\mathbf{z^{\mathfrak{u}}}:\mathbf{c^{-\mathfrak{u}}})d\mathbf{z^{\mathfrak{u}}}.$ (16)

The Sobol’ low-variation extension $\tilde{g}:\mathbb{R}^{s}\to\mathbb{R}$ is given by

$\displaystyle\tilde{g}(\mathbf{t})=g(\mathbf{c})+\sum_{\mathfrak{u}\neq\emptyset}\int_{[\mathbf{c^{\mathfrak{u}}},\mathbf{t^{\mathfrak{u}}}]}\mathbbm{1}_{\{\mathbf{z^{\mathfrak{u}}}:\mathbf{c^{-\mathfrak{u}}}\in K_{n,s}\}}~\partial^{\mathfrak{u}}g(\mathbf{z^{\mathfrak{u}}}:\mathbf{c^{-\mathfrak{u}}})d\mathbf{z^{\mathfrak{u}}}.$ (17)

Following [37, 45], we apply a three-epsilon argument to bound the integration error. Recalling the notation for the exact integral (1) and the QMC estimate (2), we bound:

$\begin{split}\lvert I(g)-\hat{I}_{n}(g)\rvert&\leq\lvert I(g)-I(\tilde{g})\rvert+\lvert I(\tilde{g})-\hat{I}_{n}(\tilde{g})\rvert+\lvert\hat{I}_{n}(\tilde{g})-\hat{I}_{n}(g)\rvert\\ &\leq\int_{[0,1]^{s}-K_{n,s}}\lvert g-\tilde{g}\rvert+\Delta^{*}(\mathbf{t}_{1},\dotsc,\mathbf{t}_{n})V_{HK}(\tilde{g})+\frac{1}{n}\sum_{i=1}^{n}\lvert\tilde{g}(\mathbf{t}_{i})-g(\mathbf{t}_{i})\rvert.\end{split}$ (18)

Observe that $g$ and $\tilde{g}$ coincide in $K_{n,s}$. Moreover, we have

$\displaystyle\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left\lvert\tilde{g}(\mathbf{t}_{i})-g(\mathbf{t}_{i})\right\rvert\right]=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[\left\lvert\tilde{g}(\mathbf{t}_{i})-g(\mathbf{t}_{i})\right\rvert\right]=\int_{[0,1]^{s}-K_{n,s}}\lvert g-\tilde{g}\rvert.$ (19)

Using the above inequalities, we obtain the following finite upper bound for the RQMC method:

$\displaystyle\mathbb{E}[\lvert I(g)-\hat{I}_{n}(g)\rvert]\leq 2\int_{[0,1]^{s}-K_{n,s}}\lvert g-\tilde{g}\rvert+\mathbb{E}\left[\Delta^{*}(\mathbf{t}_{1},\dotsc,\mathbf{t}_{n})\right]V_{HK}(\tilde{g}).$ (20)

Notice that the bound (20) depends on the value of $C$, which is used to construct the set $K_{n,s}$ in Equation (15). When $C=0$, the bound becomes the classical Koksma–Hlawka inequality; when $C=\frac{n}{2^{s}}$, the hyperbolic set $K_{n,s}$ vanishes and the right-hand side becomes $2\int_{[0,1]^{s}}\lvert g\rvert$. For $0<C<\frac{n}{2^{s}}$, the asymptotic upper bound also reduces to the Koksma–Hlawka inequality as $n\to\infty$. To examine the behavior of the difference $g-\tilde{g}$ and of the variation $V_{HK}(\tilde{g})$ as the set $K_{n,s}$ grows with increasing $n$, we assume the following boundary growth condition (in the nonasymptotic case) outside the set $K_{n,s}$.

###### Assumption 1 (Nonasymptotic boundary growth condition).
For the integrand $g$, we assume the boundary growth condition

$\displaystyle\lvert\partial^{\mathfrak{u}}g(\mathbf{t})\rvert\leq B(\mathbf{t_{c}})\prod_{j=1}^{s}{t_{j}}^{-A_{j}(t_{c}^{j})-1_{j\in\mathfrak{u}}}\quad\mathbf{0}<\mathbf{t}\leq\mathbf{t_{c}},\quad\mathbf{t_{c}}\in\partial K_{n,s},$ (21)

for a given constant $B(\mathbf{t_{c}})$ and for all $\mathfrak{u}\subseteq\{1,2,\dotsc,s\}$; the inequality $\mathbf{0}<\mathbf{t}\leq\mathbf{t_{c}}$ holds component-wise.

In Assumption 1, we emphasize the dependence $A_{j}=A_{j}(\mathbf{t_{c}})$, which is crucial in the following nonasymptotic analysis. For now, we consider an anisotropic integrand.

###### Example 1 (An anisotropic integrand).

$g(\mathbf{t})=\exp(2\Phi^{-1}(1-t_{1})+\Phi^{-1}(1-t_{2})),\quad\mathbf{t}\in[0,1]^{2}.$ (22)

As is often the case, the function (22) in our example is not isotropic w.r.t. the coordinates. Figure 3 illustrates the sets $K_{n,s}$ with $s=2$ and the contours of the function (22) on the log scale. The value of $\lvert\partial^{\mathfrak{u}}g(\mathbf{t})\rvert$ need not be constant for $\mathbf{t}\in\partial K_{n,s}$.

Figure 3: Illustration of the set $K_{n,s}$ in linear scale with $C=1$ and a contour plot of the function $g(\mathbf{t})=\exp(2\Phi^{-1}(1-t_{1})+\Phi^{-1}(1-t_{2}))$ in log scale.

This observation motivates the determination of an upper bound for the term

$\displaystyle\prod_{j=1}^{s}{t_{c}^{j}}^{-A_{j}(t_{c}^{j})},~\textrm{where}~\mathbf{t_{c}}\in\partial K_{n,s},$

in dimension $s>1$, which is equivalent to the following optimization problem:

$\begin{split}\max_{\mathbf{t_{c}}}&~\sum_{j=1}^{s}-A_{j}\log t_{c}^{j}\\ s.t.&~\sum_{j=1}^{s}\log t_{c}^{j}=\log\delta,\end{split}$ (23)

for a given $\delta=Cn^{-1}>0$. We define $\mathbf{t_{c}^{*}}$ as the optimizer of (23) and

$A_{j}^{*}=A_{j}(\mathbf{t_{c}^{*}}),$ (24)

for $j=1,\dotsc,s$. In one dimension, the point $\mathbf{t_{c}^{*}}$ is explicitly given by $Cn^{-1}$; for $s\geq 2$, $\mathbf{t_{c}^{*}}$ is determined by solving the optimization problem (23). Following the proof of Lemma 5.1 in [37], for an integrand $g$ satisfying the boundary growth condition (21), the difference between the integrand $g$ and its Sobol’ low-variation extension $\tilde{g}$ is bounded by

$\displaystyle\lvert g(\mathbf{t})-\tilde{g}(\mathbf{t})\rvert\leq\tilde{B}(\mathbf{t_{c}})\prod_{j=1}^{s}{t_{j}}^{-A_{j}(t_{c}^{j})}\quad 0\leq\mathbf{t}\leq\mathbf{t_{c}},\quad\mathbf{t_{c}}\in\partial K_{n,s},$ (25)

where $\tilde{B}(\mathbf{t_{c}})=B(\mathbf{t_{c}})\prod_{j=1}^{s}(1+\frac{1}{A_{j}(t_{c}^{j})})$. Notice that the power $-A_{j}$ depends on the specific choice of $\mathbf{t_{c}}\in\partial K_{n,s}$. Using the optimization problem (23), Lemma 2 bounds $\lvert g-\tilde{g}\rvert$ with a dependence on $K_{n,s}$ only.

###### Lemma 2 (A bound for the difference between the integrand and its Sobol’ low-variation extension).

Let $\mathbf{t}\notin K_{n,s}$. For an integrand $g$ satisfying the boundary growth condition (21), we have

$\displaystyle\lvert g(\mathbf{t})-\tilde{g}(\mathbf{t})\rvert\leq\tilde{B}(\mathbf{t_{c}^{*}})\prod_{j=1}^{s}{t_{j}}^{-A_{j}^{*}},$ (26)

where $A_{j}^{*}$ is determined by the optimization problem (23).

###### Proof.

Directly using Equations (23) and (25) leads to the result. ∎
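As a sanity check of (23), the following sketch solves it numerically for Example 1, anticipating the boundary-growth exponents $A_{j}=\sigma_{j}/\sqrt{-2\log v_{c}^{j}}$ derived for this integrand in Section 3.1 (Lemma 4); the numerical optimizer agrees with the closed form obtained later in (43)-(44). The parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Solve (23) for Example 1 in the variables z_j = -log(v_c^j) >= 0, where
# A_j = sigma_j / sqrt(2 z_j); the objective becomes sum_j sigma_j*sqrt(z_j/2)
# subject to sum_j z_j = log(1/delta), with delta = C/n.
sigma = np.array([2.0, 1.0])          # anisotropy of Example 1
n, C = 2**10, 1.0
L = np.log(n / C)

res = minimize(lambda z: -np.sum(sigma * np.sqrt(z / 2.0)),
               x0=np.full(2, L / 2.0),
               bounds=[(1e-12, L)] * 2,
               constraints={"type": "eq", "fun": lambda z: z.sum() - L})
z_star = res.x
print("numerical z* :", z_star)
print("closed form  :", sigma**2 / np.sum(sigma**2) * L)        # cf. Eq. (43)
print("max_j A_j*   :", np.max(sigma / np.sqrt(2.0 * z_star)))  # cf. Eq. (44)
```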
From Lemma 2, and following Lemma 5.4 and the proof of Theorem 5.5 in [37], we have

$\displaystyle\int_{[0,1]^{s}-K_{n,s}}\lvert g-\tilde{g}\rvert\leq B_{1}n^{\max_{j}A_{j}^{*}-1}$ (27)

and

$\displaystyle V_{HK}(\tilde{g})\leq B_{2}n^{\max_{j}A_{j}^{*}}$ (28)

for finite constants $B_{1}$ and $B_{2}$. We now have all the ingredients to present the nonasymptotic upper bound for the RQMC method.

###### Theorem 3 (Nonasymptotic QMC error estimate).

For a continuous integrand $g$ on $[0,1]^{s}$ whose mixed first-order derivatives exist, the expected integration error of the RQMC method with quadrature size $n$ satisfies

$\displaystyle\mathbb{E}[\lvert I(g)-\hat{I}_{n}(g)\rvert]\leq C_{1}n^{-1+\max_{j}A_{j}^{*}}+C_{2}n^{-1+\epsilon+\max_{j}A_{j}^{*}},$ (29)

where $\epsilon>0$, and $C_{1}$ and $C_{2}$ are finite constants that depend on $g$ and $s$.

###### Proof.

Substitute Equations (27) and (28) into (20), and notice that, for low-discrepancy sequences,

$\displaystyle\mathbb{E}\left[\Delta^{*}(\mathbf{t}_{1},\dotsc,\mathbf{t}_{n})\right]=\mathcal{O}(n^{-1}(\log n)^{s})$ (30)

(see [38] for reference). ∎

Theorem 3 is the main contribution of this work: a nonasymptotic QMC error estimate. We observe that the convergence rate of the RQMC method depends on $\max_{j}A_{j}^{*}$, which itself depends on the QMC quadrature size $n$.

###### Remark 3 (Nonasymptotics rather than singularity).

In some cases, for instance the two examples considered in Section 3, the integrand exhibits singularities at the boundary, yet the value of $A_{j}^{*}$ from (24) converges to 0 as $n\to\infty$. In this situation, the RQMC method attains the optimal asymptotic convergence rate $\mathcal{O}(n^{-1+\epsilon})$ for $\epsilon>0$, despite the singularities and the resulting infinite variation. However, the optimal convergence rate may not be observed for a finite sample size $n$, since $A_{j}^{*}$ may then be far from 0, resulting in the suboptimal convergence rate indicated by Theorem 3. The next section illustrates the nonasymptotics with two concrete examples.

## 3 Two examples of infinite variation integrands

This section delves into two significant integration examples: the expectation of a lognormal random variable and QoIs related to the solution of elliptic PDEs with lognormal coefficients. These examples shed light on the intricate nature of infinite-variation integrands.

### 3.1 Lognormal random variable

We first analyze the integration problem (1), where $g=\exp(\bm{\sigma}^{T}{\Phi}^{-1})$, $\bm{\sigma}\in\mathbb{R}^{s}_{+}$, and ${\Phi}^{-1}:[0,1]^{s}\to\mathbb{R}^{s}$ applies the inverse CDF of the standard normal distribution component-wise. The partial derivative of $g$ w.r.t. $\mathbf{t}_{\mathfrak{u}}$ is given by

$\displaystyle\frac{\partial g}{\partial\mathbf{t}_{\mathfrak{u}}}=\frac{\partial}{\partial\mathbf{t}_{\mathfrak{u}}}\exp\left(\sum_{j=1}^{s}\sigma_{j}\Phi^{-1}(t_{j})\right)=\exp\left(\sum_{j=1}^{s}\sigma_{j}\Phi^{-1}(t_{j})\right)\cdot\prod_{j\in\mathfrak{u}}\sigma_{j}{\partial^{j}\Phi^{-1}(t_{j})}.$ (31)
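Before deriving the boundary-growth exponents, it is instructive to observe the finite-sample RQMC behavior of this integrand directly. A minimal, self-contained sketch, where the choice of $\bm{\sigma}$ and the sample sizes are illustrative:

```python
import numpy as np
from scipy.stats import norm, qmc

# Empirical RQMC error for E[exp(sigma^T Phi^{-1}(t))]; the exact value is
# exp(0.5 * sum(sigma^2)).
sigma = np.array([2.0, 1.0])
exact = np.exp(0.5 * np.sum(sigma**2))
rng = np.random.default_rng(0)
R = 30                                              # random shifts

for m in [6, 10, 14]:
    n = 2**m
    base = qmc.Sobol(d=len(sigma), scramble=False).random(n)
    errs = np.empty(R)
    for r in range(R):
        t = (base + rng.random(len(sigma))) % 1.0   # random shift mod 1
        errs[r] = np.exp(norm.ppf(t) @ sigma).mean() - exact
    print(f"n=2^{m:2d}  mean |error| = {np.abs(errs).mean():.3e}")
```

For larger $\sum_{j}\sigma_{j}^{2}$, the empirical rate at moderate $n$ visibly degrades, which the analysis below quantifies.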
Next, we consider the case where $t_{j}\to 0$ for $j=1,\dotsc,s$, using the approximation

$\begin{split}\Phi^{-1}(t_{j})&=-\sqrt{-2\log(t_{j})}+o(1)\quad t_{j}\to 0\\ &\approx-\sqrt{-2\log(t_{j})}.\end{split}$ (32)

The case where $t_{j}\to 1$ follows similarly by symmetry:

$\displaystyle\Phi^{-1}(t_{j})\approx\sqrt{-2\log(1-t_{j})}\quad t_{j}\to 1$ (33)

(see [42], Chapter 3.9, for instance). Using the approximation (33) of $\Phi^{-1}$, we can simplify the partial derivative of $g$ as

$\begin{split}\frac{\partial g}{\partial\mathbf{t}_{\mathfrak{u}}}&=\exp\left(\sum_{j=1}^{s}\sigma_{j}\Phi^{-1}(t_{j})\right)\cdot\prod_{j\in\mathfrak{u}}\sigma_{j}{\partial^{j}\Phi^{-1}({t}_{j})}\\ &\approx\exp\left(\sum_{j=1}^{s}\sigma_{j}{\sqrt{-2\log(1-{t}_{j})}}\right)\cdot\prod_{j\in\mathfrak{u}}\sigma_{j}{\partial^{j}\sqrt{-2\log(1-{t}_{j})}}\\ &=\exp\left(\sum_{j=1}^{s}\sigma_{j}{\sqrt{-2\log(1-{t}_{j})}}\right)\cdot\prod_{j\in\mathfrak{u}}\sigma_{j}\frac{1}{(1-{t}_{j})\sqrt{-2\log(1-{t}_{j})}}.\end{split}$ (34)

We define $\mathbf{v}=1-\mathbf{t}$ to simplify the notation; the term $\frac{1}{(1-{t})\sqrt{-2\log(1-{t})}}$ then behaves like $\mathcal{O}({v}^{-1})$, up to a logarithmic factor, as ${v}\to 0$. To check the boundary growth condition (21), it remains to study the local growth behavior of the term

$\displaystyle{h(\mathbf{v}):=\exp\left(\sum_{j=1}^{s}\sigma_{j}{\sqrt{-2\log{v}_{j}}}\right)}=\exp\left(\sum_{j=1}^{s}\sigma_{j}{\sqrt{-2\log(1-{t}_{j})}}\right).$ (35)

###### Lemma 4 (First-order Taylor approximation).

The function

$\displaystyle h(\mathbf{v}):=\exp\left(\sum_{j=1}^{s}\sigma_{j}{\sqrt{-2\log{v}_{j}}}\right)$ (36)

satisfies the following bound:

$\displaystyle{h(\mathbf{v})}\leq B(\mathbf{v})\prod_{j=1}^{s}({v}_{j})^{-A_{j}}\quad{\mathbf{v}}\notin K_{n,s},$ (37)

where $A_{j}=\frac{\sigma_{j}}{\sqrt{-2\log v_{c}^{j}}}$ and $\mathbf{v_{c}}\in\partial K_{n,s}$.

###### Proof.

We take the logarithm of both sides of Equation (37):

$\displaystyle\log{h(\mathbf{v})}=\sum_{j=1}^{s}\sigma_{j}\sqrt{-2\log({v_{j}})}\leq-\sum_{j=1}^{s}{A}_{j}\log(v_{j})+\log B(\mathbf{v}).$ (38)

We further apply a change of variables to simplify the notation. For $\mathbf{v_{c}}\in\partial K_{n,s}$, let $\mathbf{z}=-\log\mathbf{v}$ and $\mathbf{z_{c}}=-\log\mathbf{v_{c}}$. By a first-order Taylor approximation of (38),

$\begin{split}\sum_{j=1}^{s}\sigma_{j}\sqrt{2z_{j}}&\approx\nabla_{\mathbf{z}=\mathbf{z_{c}}}\left(\sum_{j=1}^{s}\sigma_{j}\sqrt{2{z_{j}}}\right)\cdot(\mathbf{z}-\mathbf{z_{c}})+\left(\sum_{j=1}^{s}\sigma_{j}\sqrt{2z_{c}^{j}}\right)\\ &=\sum_{j=1}^{s}\frac{\sigma_{j}}{\sqrt{2z_{c}^{j}}}{z}^{j}+\sum_{j=1}^{s}\sigma_{j}\sqrt{\frac{z_{c}^{j}}{2}},\end{split}$ (39)

where we use $\lvert\mathbf{z}-\mathbf{z_{c}}\rvert<\epsilon$ for a small threshold $\epsilon>0$. Comparing Equations (37) and (39), we deduce that

$\displaystyle{A}_{j}=\frac{\sigma_{j}}{\sqrt{2z_{c}^{j}}}=\frac{\sigma_{j}}{\sqrt{-2\log v_{c}^{j}}}=\frac{\sigma_{j}}{\sqrt{-2\log(1-t_{c}^{j})}}.$ (40)

∎

Notice that ${A}_{j}\to 0$ as $t_{j}\to 1$ for $j=1,\dotsc,s$; Theorem 3 thus indicates that the QMC method attains the asymptotic convergence rate $\mathcal{O}(n^{-1+\epsilon})$. Let us now consider the optimization problem (23):

$\begin{split}\max_{\mathbf{v_{c}}}&~\sum_{j=1}^{s}-A_{j}\log v_{c}^{j}\\ s.t.&~-\sum_{j=1}^{s}\log v_{c}^{j}=\log\delta^{-1},\end{split}$ (41)

for a given $\delta=Cn^{-1}$.
The Lagrangian is given by

$\begin{split}\mathcal{L}&=\sum_{j=1}^{s}-A_{j}\log v_{c}^{j}+\lambda\left(\sum_{j=1}^{s}-\log v_{c}^{j}-\log\delta^{-1}\right)\\ &=\sum_{j=1}^{s}\sigma_{j}\sqrt{\frac{-\log v_{c}^{j}}{2}}+\lambda\left(\sum_{j=1}^{s}-\log v_{c}^{j}-\log\delta^{-1}\right).\end{split}$ (42)

The optimizer $-\log{v_{c}^{j}}^{*}$ of Equation (41) is given by

$\displaystyle-\log{v_{c}^{j}}^{*}=\frac{\sigma_{j}^{2}}{\sum_{j=1}^{s}\sigma_{j}^{2}}\log\delta^{-1}.$ (43)

Moreover, the optimal value of $A_{j}$, denoted by $A_{j}^{*}$, is given by

$\displaystyle A_{j}^{*}=\sqrt{\frac{\sum_{j=1}^{s}\sigma_{j}^{2}}{2\log\delta^{-1}}}=\sqrt{\frac{\sum_{j=1}^{s}\sigma_{j}^{2}}{2(\log n-\log C)}}.$ (44)

Hence, $\max_{j}A_{j}^{*}=\sqrt{\frac{\sum_{j=1}^{s}\sigma_{j}^{2}}{2(\log n-\log C)}}$. For a finite sample size $n$, the convergence model (29) in Theorem 3 shows that the rate improves as $n$ increases or as the variance $\sigma_{j}^{2}$ of each dimension $j$ decreases.
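The closed form (44) can be evaluated directly to see how slowly the finite-sample exponent decays with $n$; a small sketch with an illustrative $\bm{\sigma}$ (the same formula applies with $b_{j}$ in place of $\sigma_{j}$ for the PDE example of the next subsection, cf. (55)):

```python
import numpy as np

# Predicted preasymptotic RQMC rate -1 + max_j A_j^* from Eq. (44).
def predicted_rate(sigma, n, C=1.0):
    A_star = np.sqrt(np.sum(np.asarray(sigma)**2)
                     / (2.0 * (np.log(n) - np.log(C))))
    return -1.0 + A_star

sigma = [2.0, 1.0]
for m in [6, 10, 14, 20, 30]:
    print(f"n=2^{m:2d}  predicted rate: {predicted_rate(sigma, 2**m):+.3f}")
```

Even at $n=2^{30}$, the predicted exponent remains well above the asymptotic $-1$, consistent with the suboptimal rates observed in practice.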
### 3.2 Elliptic partial differential equations with lognormal coefficients

We consider an elliptic PDE on a Lipschitz domain $\mathcal{D}\subset\mathbb{R}^{d}$ of the following form:

$\begin{split}-\nabla_{\mathbf{x}}\cdot\left(a(\mathbf{x};\omega)\nabla_{\mathbf{x}}u(\mathbf{x};\omega)\right)&=f(\mathbf{x};\omega)\quad\text{for }\mathbf{x}\in\mathcal{D},\\ u(\mathbf{x};\omega)&=0\quad\text{for }\mathbf{x}\in\partial\mathcal{D},\end{split}$ (45)

for almost all $\omega\in\Omega$, where $\Omega$ is the sample space of a complete probability space $(\Omega,\mathcal{F},\mathbb{P})$. The differential operators $\nabla\cdot$ and $\nabla$ are taken w.r.t. the spatial variable $\mathbf{x}$, and $\partial_{n}$ denotes the outward normal derivative operator. We consider the equations in (45) in the weak form: for a suitable function space $V$ (e.g., $V=H^{1}_{0}(\mathcal{D})$), we seek $u(\mathbf{x};\omega)\in V$ such that

$\begin{split}\left(a(\mathbf{x};\omega)\nabla_{\mathbf{x}}u(\mathbf{x};\omega),\nabla_{\mathbf{x}}v(\mathbf{x};\omega)\right)&=\left(f(\mathbf{x};\omega),v(\mathbf{x};\omega)\right)\\ &\text{for all }v\in V\text{ and almost all }\omega\in\Omega.\end{split}$ (46)

The QoI $Q(\omega)$ is given in the following general form:

$\displaystyle Q(\mathbf{y}(\omega))=\mathcal{G}(u(\cdot;\omega)),$ (47)

where $\mathcal{G}\in V^{\prime}$ and $V^{\prime}$ denotes the dual space of $V$. We are interested in computing $\mathbb{E}[Q]$; the integrand in (1) is then given by $g=Q\circ{\Phi}^{-1}$. We consider a coefficient $a(\mathbf{x};\omega)$ of the form

$\displaystyle a(\mathbf{x};\omega)=\exp\left(\sum_{j=1}^{s}\mathrm{y}_{j}\psi_{j}\right),$ (48)

with $s\in\mathbb{N}^{+}$, where the $\mathrm{y}_{j}$, $j=1,\dotsc,s$, are i.i.d. samples from $\mathcal{N}(0,1)$. Moreover, we define $b_{j}=\lVert\psi_{j}\rVert_{L^{\infty}(\mathcal{D})}<+\infty$ for $j=1,\dotsc,s$. From [16], an upper bound for the derivative of $Q$ w.r.t. the random variable $\mathbf{y}$ is given by

$\displaystyle\left\lvert{\partial^{\mathfrak{u}}}Q({\mathbf{y}})\right\rvert\leq\lVert\mathcal{G}\rVert_{V^{\prime}}\lVert\partial^{\mathfrak{u}}u(\cdot,\mathbf{y})\rVert_{V}\leq\frac{\lvert\mathfrak{u}\rvert!}{(\ln 2)^{\lvert\mathfrak{u}\rvert}}\left(\prod_{j\in\mathfrak{u}}b_{j}\right)\underbrace{\lVert f\rVert_{V^{\prime}}\lVert\mathcal{G}\rVert_{V^{\prime}}}_{K^{*}}\prod_{j=1}^{s}\exp(b_{j}\lvert{\mathrm{y}}_{j}\rvert).$ (49)

The derivative of the integrand $g$ is then

$\begin{split}\left\lvert{\partial}^{{\mathfrak{u}}}g(\bm{t})\right\rvert&=\left\lvert\partial^{\mathfrak{u}}Q(\Phi^{-1}(\mathbf{t}))\right\rvert=\lvert\partial^{\mathfrak{u}}Q\rvert\cdot\prod_{j\in\mathfrak{u}}\left\lvert\partial^{j}\Phi^{-1}(t_{j})\right\rvert\\ &\leq K^{*}\frac{\lvert\mathfrak{u}\rvert!}{(\ln 2)^{\lvert\mathfrak{u}\rvert}}\left(\prod_{j\in\mathfrak{u}}b_{j}\right)\cdot\prod_{j=1}^{s}\exp(b_{j}\lvert\mathrm{y}_{j}\rvert)\left\lvert\partial^{j}\Phi^{-1}({t}_{j})\right\rvert\\ &\approx K^{*}\frac{\lvert\mathfrak{u}\rvert!}{(\ln 2)^{\lvert\mathfrak{u}\rvert}}\left(\prod_{j\in\mathfrak{u}}b_{j}\right)\prod_{j=1}^{s}b_{j}\exp\left(b_{j}{\sqrt{-2\log(t_{j})}}\right)\cdot\prod_{j=1}^{s}\frac{1}{t_{j}\sqrt{-2\log(t_{j})}},\end{split}$ (50)

where we apply the approximation (32) of $\Phi^{-1}$ as $t_{j}$ approaches 0, for $j=1,\dotsc,s$. Unlike the example in Section 3.1, the integrand $g$ considered here has singularities both as $t_{j}\to 0$ and as $t_{j}\to 1$. Owing to the symmetry in each dimension, we only need to consider the singularity at 0. To investigate the local growth behavior of the derivative, we define

$\displaystyle h(\mathbf{t})=K^{*}\frac{\lvert\mathfrak{u}\rvert!}{(\ln 2)^{\lvert\mathfrak{u}\rvert}}\left(\prod_{j\in\mathfrak{u}}b_{j}\right)\prod_{j=1}^{s}b_{j}\exp\left(b_{j}{\sqrt{-2\log(t_{j})}}\right).$ (51)

Similar to the analysis in Section 3.1, we apply a first-order Taylor approximation. We take the logarithm of both sides of (51):

$\displaystyle\log\left\lvert h(\mathbf{t})\right\rvert=\sum_{j=1}^{s}b_{j}\sqrt{-2\log({t_{j}})}+\log\left(K^{*}\frac{\lvert\mathfrak{u}\rvert!}{(\ln 2)^{\lvert\mathfrak{u}\rvert}}\left(\prod_{j\in\mathfrak{u}}b_{j}\right)\prod_{j=1}^{s}b_{j}\right).$ (52)

Thus, for

$\displaystyle{A}_{j}=\frac{b_{j}}{\sqrt{-2\log t_{c}^{j}}},$ (53)

we have

$\displaystyle\left\lvert h(\mathbf{t})\right\rvert\leq B(\mathbf{t})\prod_{j=1}^{s}({t}_{j})^{-A_{j}}\quad{\mathbf{t}}\notin K_{n,s},$ (54)

where

$\displaystyle B(\mathbf{t})=\left(K^{*}\frac{\lvert\mathfrak{u}\rvert!}{(\ln 2)^{\lvert\mathfrak{u}\rvert}}\left(\prod_{j\in\mathfrak{u}}b_{j}\right)\prod_{j=1}^{s}b_{j}\right)\cdot\exp\left(\sum_{j=1}^{s}b_{j}\sqrt{\frac{-\log t_{c}^{j}}{2}}\right).$

Similarly, we can determine $A_{j}^{*}$ as

$\displaystyle A_{j}^{*}=\sqrt{\frac{\sum_{j=1}^{s}b_{j}^{2}}{2(\log n-\log C)}},$ (55)

and $\max_{j}A_{j}^{*}=\sqrt{\frac{\sum_{j=1}^{s}b_{j}^{2}}{2(\log n-\log C)}}$. In this section, we have considered two concrete examples with unbounded variation and applied the theory developed in Section 2 to analyze their convergence rates. Consistent with the theory, the convergence rate can be improved by increasing the sample size $n$. However, it is unclear whether we can further enhance the convergence rate by modifying the sampling distribution so as to place more points near the singular corners. In the next section, we discuss the effects of importance sampling.
## 4 Importance sampling This section provides two kinds of IS proposal distributions and analyzes their effects on the convergence rate. ### 4.1 First proposal distribution This section proposes a Gaussian distribution with scaled variance to distribute more samples closer to the corners. This kind of IS was studied in [27] for multivariate functions that belong to certain Sobolev spaces, where the worst-case integration error was analyzed and optimized. Let $g\in\\{\exp\circ(\bm{\sigma}^{T}\Phi^{-1}),Q\circ\Phi^{-1}\\}:[0,1]^{s}\to\mathbb{R}$, which are the two integrands considered in Section 3. We introduce the component-wise multiplication notation $\odot$, such that $\bm{\alpha}\odot\mathbf{y}=\\{\alpha_{1}\mathrm{y}_{1},\alpha_{2}\mathrm{y}_{2},\dotsc,\alpha_{s}\mathrm{y}_{s}\\}=\mathrm{diag}(\bm{\alpha})\mathbf{y}$ when $\bm{\alpha},\mathbf{y}\in\mathbb{R}^{s}$. The integral of $g$ over $[0,1]^{s}$ is $\displaystyle\begin{split}I(g)&=\int_{[0,1]^{s}}g(\mathbf{t})d\mathbf{t}\\\ &=\int_{\mathbb{R}^{s}}\nu(\mathbf{y})\rho(\mathbf{y})d\mathbf{y}\\\ &=\prod_{j=1}^{s}{\alpha_{j}}\int_{\mathbb{R}^{s}}\nu(\bm{\alpha}\odot\mathbf{y})\rho({\bm{\alpha}}\odot\mathbf{y})d\mathbf{y}\\\ &=\prod_{j=1}^{s}{\alpha_{j}}\int_{[0,1]^{s}}\nu(\bm{\alpha}\odot\Phi^{-1}(\bm{t}))\cdot\frac{\rho(\bm{\alpha}\odot\Phi^{-1}(\bm{t}))}{\rho(\Phi^{-1}(\bm{t}))}d\bm{t},\end{split}$ (56) where $\nu=g\circ\Phi:\mathbb{R}^{s}\to\mathbb{R}$, $\bm{\alpha}\in\mathbb{R}^{s}_{+}$, and $\rho$ is the $s$-dimensional standard normal distribution density. The integrand with IS $g_{\textrm{IS}}$ is given by $\displaystyle\begin{split}g_{\textrm{IS}}(\bm{t})&=\left(\prod_{j=1}^{s}{\alpha_{j}}\right)\nu(\bm{\alpha}\odot\Phi^{-1}(\bm{t}))\cdot\frac{\rho(\bm{\alpha}\odot\Phi^{-1}(\bm{t}))}{\rho(\Phi^{-1}(\bm{t}))}\\\ &=\left(\prod_{j=1}^{s}{\alpha_{j}}\right)\nu(\bar{\mathbf{y}})\cdot\prod_{j=1}^{s}\exp\left(-\frac{\mathrm{y}_{j}^{2}}{2}({\alpha}_{j}^{2}-1)\right)\end{split}$ (57) where we define $\mathbf{y}=\Phi^{-1}(\bm{t})$ and $\bar{\mathbf{y}}=\bm{\alpha}\odot\mathbf{y}=\bm{\alpha}\odot\Phi^{-1}(\bm{t})$. The function $g_{\textrm{IS}}$ no longer has the singularity at the boundaries for ${\alpha}_{j}>1$, $j=1,2,\dotsc,s$. Figure 4 illustrates the one-dimensional integrand with and without IS in Example 1. Figure 4: One-dimensional integrands $g$ and $g_{\mathrm{IS}}$ in Example 1, where $s=1,\sigma=1.0$, $\alpha>1$. The IS eliminates the singularity at the boundary. The legend IS_A stands for the IS of the first type. Let us define $\varphi(\bm{t})=\prod_{j=1}^{s}{\alpha_{j}}\exp\left(-\frac{(\Phi^{-1}({t_{j}}))^{2}}{2}({\alpha}_{j}^{2}-1)\right)$ for notational simplicity. We will analyze the mixed first-order derivative ${\partial^{\mathfrak{u}}}g_{\mathrm{IS}}$.
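Before doing so, the following short sketch of the transformation (57) for Example 1 in one dimension illustrates the singularity removal shown in Figure 4; it is a minimal illustration, not the experimental code.

```python
# g(t) = exp(sigma * Phi^{-1}(t)) blows up as t -> 1. With the scaled-variance
# IS of (57), g_IS(t) = alpha * exp(sigma*alpha*y) * exp(-y^2 (alpha^2 - 1)/2),
# y = Phi^{-1}(t), which vanishes at both endpoints when alpha > 1.
import numpy as np
from scipy.stats import norm

def g(t, sigma=1.0):
    return np.exp(sigma * norm.ppf(t))

def g_is(t, sigma=1.0, alpha=1.5):
    y = norm.ppf(t)
    return alpha * np.exp(sigma * alpha * y - 0.5 * y**2 * (alpha**2 - 1.0))

t = np.linspace(1e-6, 1 - 1e-6, 7)
print(g(t))     # unbounded as t -> 1
print(g_is(t))  # bounded, decays at both endpoints
```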
Specifically, we have $\displaystyle\begin{split}\left\lvert{\partial^{\mathfrak{u}}}g_{\mathrm{IS}}(\bm{t})\right\rvert&=\left\lvert\sum_{\mathfrak{z}\subseteq\mathfrak{u}}\frac{\partial}{\partial t_{\mathfrak{u}-\mathfrak{z}}}\nu(\bar{\mathbf{y}})\cdot{\partial^{\mathfrak{z}}}\varphi(\bm{t})\right\rvert.\end{split}$ (58) We have $\displaystyle\frac{\partial}{\partial t_{\mathfrak{u}-\mathfrak{z}}}\nu(\bar{\mathbf{y}})=\frac{\partial}{\partial\bar{\mathrm{y}}_{\mathfrak{u}-\mathfrak{z}}}\nu(\bar{\mathbf{y}})\cdot\prod_{j\in\mathfrak{u}-\mathfrak{z}}\alpha_{j}\frac{\partial\mathrm{y}_{j}}{\partial t_{j}}$ (59) and $\displaystyle\begin{split}{\partial^{\mathfrak{z}}}\varphi(\bm{t})&=\prod_{j=1}^{s}{\alpha_{j}}\exp\left(-\frac{(\Phi^{-1}({t_{j}}))^{2}}{2}({\alpha}_{j}^{2}-1)\right)\cdot\prod_{k\in\mathfrak{z}}-\frac{\alpha_{k}^{2}-1}{2}\cdot\frac{\partial\left(\Phi^{-1}(t_{k})\right)^{2}}{\partial t_{k}}.\end{split}$ (60) Notice that $\binom{\mathfrak{u}}{\mathfrak{z}}=1$, for all $\mathfrak{z}\subseteq\mathfrak{u}$. In Example 1 (i.e., when $\nu(\mathbf{y})=\exp(\bm{\sigma}^{T}\mathbf{y})$), we have $\displaystyle\partial^{\mathfrak{u}-\mathfrak{z}}\nu(\bar{\mathbf{y}})=\exp\left(\sum_{j=1}^{s}\sigma_{j}\bar{\mathrm{y}}_{j}\right)\cdot\prod_{k\in\mathfrak{u}-\mathfrak{z}}\sigma_{k}.$ (61) By combining equations (58) to (61), we have $\displaystyle\begin{split}\left\lvert{\partial^{\mathfrak{u}}}g_{\mathrm{IS}}(\bm{t})\right\rvert&=\left\lvert\sum_{\mathfrak{z}\subseteq\mathfrak{u}}\frac{\partial}{\partial t_{\mathfrak{u}-\mathfrak{z}}}\nu(\bar{\mathbf{y}})\cdot{\partial^{\mathfrak{z}}}\varphi(\bm{t})\right\rvert\\\ &\leq\prod_{j=1}^{s}{\alpha_{j}}\exp\left(-\frac{(\Phi^{-1}({t_{j}}))^{2}}{2}({\alpha}_{j}^{2}-1)\right)\cdot\prod_{j=1}^{s}\exp(\sigma_{j}\lvert\alpha_{j}\Phi^{-1}(t_{j})\rvert)\\\ &\cdot\sum_{\mathfrak{z}\subseteq\mathfrak{u}}\prod_{j\in\mathfrak{u}-\mathfrak{z}}\sigma_{j}\alpha_{j}\frac{\partial\mathrm{y}_{j}}{\partial t_{j}}\cdot\prod_{k\in\mathfrak{z}}-\frac{\alpha_{k}^{2}-1}{2}\cdot{\frac{\partial\left(\Phi^{-1}(t_{k})\right)^{2}}{\partial t_{k}}}\\\ &\approx\prod_{j=1}^{s}\sigma_{j}{\alpha_{j}}t_{j}^{{\alpha}_{j}^{2}-1}\cdot\exp\left(\alpha_{j}\sigma_{j}{\sqrt{-2\log(t_{j})}}\right)\\\ &\cdot\sum_{\mathfrak{z}\subseteq\mathfrak{u}}\prod_{j\in\mathfrak{u}-\mathfrak{z}}\sigma_{j}\alpha_{j}\frac{1}{t_{j}\sqrt{-2\log(t_{j})}}\prod_{k\in\mathfrak{z}}\frac{\alpha_{k}^{2}-1}{t_{k}},\end{split}$ (62) where we have substituted $1-t_{j}$ by $t_{j}$ to focus on the boundary growth condition at the singularity. Before analyzing this bound further, we derive the corresponding bound for Example 2.
In Example 2, $\nu=Q$, and we apply the following upper bound on $\left\lvert\partial^{{\mathfrak{u}-\mathfrak{z}}}Q(\bar{\mathbf{y}})\right\rvert$: $\displaystyle\left\lvert\partial^{{\mathfrak{u}-\mathfrak{z}}}Q(\bar{\mathbf{y}})\right\rvert\leq\frac{\lvert\mathfrak{u}-\mathfrak{z}\rvert!}{(\ln 2)^{\lvert\mathfrak{u}-\mathfrak{z}\rvert}}\left(\prod_{j\in\mathfrak{u}-\mathfrak{z}}b_{j}\right)K^{*}\prod_{j=1}^{s}\exp(b_{j}\lvert\bar{\mathrm{y}}_{j}\rvert).$ (63) Combining equations (58) to (60) and (63), we obtain $\displaystyle\begin{split}\left\lvert{\partial^{\mathfrak{u}}}g_{\mathrm{IS}}(\bm{t})\right\rvert&=\left\lvert\sum_{\mathfrak{z}\subseteq\mathfrak{u}}\frac{\partial}{\partial t_{\mathfrak{u}-\mathfrak{z}}}Q(\bar{\mathbf{y}})\cdot{\partial^{\mathfrak{z}}}\varphi(\bm{t})\right\rvert\\\ &\leq K^{*}\prod_{j=1}^{s}{\alpha_{j}}\exp\left(-\frac{(\Phi^{-1}({t_{j}}))^{2}}{2}({\alpha}_{j}^{2}-1)\right)\cdot\prod_{j=1}^{s}\exp(b_{j}\lvert\alpha_{j}\Phi^{-1}(t_{j})\rvert)\\\ &\cdot\sum_{\mathfrak{z}\subseteq\mathfrak{u}}\frac{\lvert\mathfrak{u}-\mathfrak{z}\rvert!}{(\ln 2)^{\lvert\mathfrak{u}-\mathfrak{z}\rvert}}\left(\prod_{j\in\mathfrak{u}-\mathfrak{z}}b_{j}\right)\prod_{j\in\mathfrak{u}-\mathfrak{z}}\alpha_{j}\frac{\partial\mathrm{y}_{j}}{\partial t_{j}}\cdot\prod_{k\in\mathfrak{z}}-\frac{\alpha_{k}^{2}-1}{2}\cdot{\frac{\partial\left(\Phi^{-1}(t_{k})\right)^{2}}{\partial t_{k}}}\\\ &\approx K^{*}\prod_{j=1}^{s}{\alpha_{j}}t_{j}^{{\alpha}_{j}^{2}-1}\cdot\exp\left(\alpha_{j}b_{j}{\sqrt{-2\log(t_{j})}}\right)\\\ &\cdot\sum_{\mathfrak{z}\subseteq\mathfrak{u}}\frac{\lvert\mathfrak{u}-\mathfrak{z}\rvert!}{(\ln 2)^{\lvert\mathfrak{u}-\mathfrak{z}\rvert}}\prod_{j\in\mathfrak{u}-\mathfrak{z}}b_{j}\alpha_{j}\frac{1}{t_{j}\sqrt{-2\log(t_{j})}}\prod_{k\in\mathfrak{z}}\frac{\alpha_{k}^{2}-1}{t_{k}},\end{split}$ (64) where we use the approximation $\Phi^{-1}(t)\approx-\sqrt{-2\log(t)}$ and omit the term $1/\sqrt{-2\log t_{j}}=o(t_{j}^{-1})$, as $t_{j}\to 0$. We observe that the upper bound (64) shares a similar structure with (62). We will analyze the upper bound (64) in detail and show later that the conclusions also apply to (62) of Example 1. Similar to the analysis in Section 3.1, we have $\displaystyle\exp\left(\alpha_{j}b_{j}{\sqrt{-2\log(t_{j})}}\right)\leq C(\mathbf{t})\cdot(t_{j})^{-B_{j}}$ (65) where $B_{j}=\frac{\alpha_{j}b_{j}}{\sqrt{-2\log t_{j}}}$. Notice that $\frac{1}{t\sqrt{-2\log(t)}}=o(t^{-1})$ as $t\to 0$. Thus, $\displaystyle\left\lvert{\partial^{\mathfrak{u}}}g_{\textrm{IS}}(\mathbf{t})\right\rvert\leq C\prod_{j=1}^{s}(t_{j})^{-1+\alpha_{j}^{2}-1-B_{j}}.$ (66) The term $\alpha_{j}^{2}-1-B_{j}$ being greater than or equal to $0$ ensures the term $\max_{j}A_{j}^{*}\leq 0$ in the convergence model (29), which happens when $\displaystyle\alpha_{j}^{2}-\alpha_{j}\frac{b_{j}}{\sqrt{-2\log t_{j}}}\geq 1,$ (67) which is guaranteed by $\alpha_{j}>1$ and a finite $t_{j}^{-1}$ such that $t_{j}^{-1}\geq\exp\left(\frac{1}{2}\left(\frac{\alpha_{j}b_{j}}{\alpha_{j}^{2}-1}\right)^{2}\right)$. For the optimality condition in Example 1, we can simply replace $b_{j}$ in (67) with $\sigma_{j}$. Finite thresholds on $t_{j}^{-1}$ for $j=1,\dotsc,s$ correspond to a finite $n$ (via the hyperbolic set (15)); thus the rate $\mathcal{O}(n^{-1})$ is achievable with a finite $n$, in contrast to the asymptotic optimal rates in (44) and (55). Nevertheless, modifying $\bm{\alpha}$ changes the variance of the integrand $g_{\mathrm{IS}}$.
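The admissibility condition (67) is easy to check numerically; the following small sketch evaluates the sample-size threshold implied by the bound stated above.

```python
# Condition (67): alpha_j^2 - alpha_j * b_j / sqrt(-2 log t_j) >= 1.
# Given alpha_j > 1, it holds once t_j is small enough, i.e.
# 1/t_j >= exp( 0.5 * (alpha_j * b_j / (alpha_j^2 - 1))^2 ).
import numpy as np

def min_inverse_t(alpha, b):
    """Smallest admissible 1/t_j guaranteeing (67), per the bound in the text."""
    return np.exp(0.5 * (alpha * b / (alpha**2 - 1.0))**2)

for alpha in [1.1, 1.5, 2.0]:
    print(alpha, min_inverse_t(alpha, b=1.0))
```

Any $\alpha_{j}>1$ eventually satisfies (67), and larger $\alpha_{j}$ satisfies it with a smaller $t_{j}^{-1}$, but increasing $\alpha_{j}$ also affects the variance of $g_{\mathrm{IS}}$.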
Thus, we aim to determine the optimal $\bm{\alpha}^{*}$ by minimizing the variance of $g_{\mathrm{IS}}$: $\displaystyle\textrm{Var}(g_{\mathrm{IS}})=\mathbb{E}[g_{\mathrm{IS}}^{2}]-(\mathbb{E}[g_{\mathrm{IS}}])^{2},$ (68) which is equivalent to finding the minimizer of the second-order moment of $g_{\mathrm{IS}}$, as $\mathbb{E}[g_{\mathrm{IS}}]=\mathbb{E}[g]$: $\displaystyle\begin{split}\alpha^{*}&=\operatorname*{arg\,min}_{\alpha}\mathbb{E}[g_{\mathrm{IS}}^{2}(\mathbf{t})]\\\ &=\operatorname*{arg\,min}_{\alpha}\int_{[0,1]^{s}}\left(\prod_{j=1}^{s}{\alpha_{j}}\right)^{2}\nu^{2}(\bm{\alpha}\odot\Phi^{-1}(\bm{t}))\cdot\frac{\rho^{2}(\bm{\alpha}\odot\Phi^{-1}(\bm{t}))}{\rho^{2}(\Phi^{-1}(\bm{t}))}d\mathbf{t}\\\ &=\operatorname*{arg\,min}_{\alpha}{\int_{\mathbb{R}^{s}}\left(\prod_{j=1}^{s}{\alpha_{j}}\right)^{2}\nu^{2}(\bm{\alpha}\odot\mathbf{y})\cdot\frac{\rho^{2}(\bm{\alpha}\odot\mathbf{y})}{\rho^{2}(\mathbf{y})}\rho(\mathbf{y})d\mathbf{y}}\\\ &=\operatorname*{arg\,min}_{\alpha}{\int_{\mathbb{R}^{s}}\left(\prod_{j=1}^{s}{\alpha_{j}}\right)\nu^{2}(\mathbf{y})\cdot\frac{\rho^{2}(\mathbf{y})}{\rho^{2}(\bm{\frac{1}{\alpha}}\odot\mathbf{y})}\rho(\bm{\frac{1}{\alpha}}\odot\mathbf{y})d\mathbf{y}}\\\ &=\operatorname*{arg\,min}_{\alpha}{\int_{[0,1]^{s}}\left(\prod_{j=1}^{s}{\alpha_{j}}\right)\nu^{2}(\mathbf{y})\cdot\frac{\rho(\mathbf{y})}{\rho(\bm{\frac{1}{\alpha}}\odot\mathbf{y})}d\bm{t}}.\end{split}$ (69) where $\mathbf{y}=\Phi^{-1}(\bm{t})$ in the final integral. When the analytic solution is unavailable, we aim to find an approximate optimizer $\bar{\alpha}$ using $n$ samples. Specifically, we minimize the following objective function $\displaystyle\bar{\alpha}=\operatorname*{arg\,min}_{\alpha}\left(\prod_{j=1}^{s}{\alpha_{j}}\right)\frac{1}{n}\sum_{i=1}^{n}Q^{2}(\mathbf{y}_{i})\cdot\frac{\rho(\mathbf{y}_{i})}{\rho(\bm{\frac{1}{\alpha}}\odot\mathbf{y}_{i})}.$ (70) In practice, the number of samples $n$ in the pilot run does not exceed the number of simulation samples. ### 4.2 Second proposal distribution This section considers another IS proposal distribution. For brevity, we only analyze Example 2, as the case of Example 1 can be derived similarly. Inspired by the beta distribution, we propose the following distribution $\rho_{\beta}:[0,1]\to\mathbb{R}_{+}$: $\displaystyle\rho_{\beta}({t})=\begin{cases}Ct^{\beta-1}&\text{$0\leq t<\frac{1}{2}$}\\\ C(1-t)^{\beta-1}&\text{$\frac{1}{2}\leq t\leq 1$},\end{cases}$ (71) with the normalising constant $C=\beta\cdot(\frac{1}{2})^{1-\beta}$. We propose this distribution because its CDF $\Phi_{{\beta}}:[0,1]\to[0,1]$ and inverse $\Phi_{{\beta}}^{-1}:[0,1]\to[0,1]$ are easy to compute: $\displaystyle\Phi_{{\beta}}({t})=\begin{cases}\frac{C}{\beta}t^{\beta}&\text{$0\leq t<\frac{1}{2}$}\\\ 1-\Phi_{{\beta}}(1-{t})&\text{$\frac{1}{2}\leq t\leq 1$},\end{cases}$ (72) $\displaystyle t=\Phi^{-1}_{\beta}(w)=(2w)^{\frac{1}{\beta}}\cdot\frac{1}{2}\quad\textrm{for }0\leq w<1/2.$ (73) Next, we apply IS to compute the integral $I(g)$.
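The piecewise density (71) and its CDF and inverse (72)–(73) are cheap to evaluate; a minimal vectorised sketch, using the symmetry about $t=1/2$, follows.

```python
# Symmetric beta-like proposal on [0,1]: rho_beta, Phi_beta, and its inverse,
# per (71)-(73), with normalising constant C = beta * (1/2)^(1 - beta).
import numpy as np

def rho_beta(t, beta):
    t = np.asarray(t, dtype=float)
    C = beta * 0.5**(1.0 - beta)
    u = np.minimum(t, 1.0 - t)          # exploit symmetry about 1/2
    return C * u**(beta - 1.0)

def Phi_beta(t, beta):
    t = np.asarray(t, dtype=float)
    u = np.minimum(t, 1.0 - t)
    F = 0.5 * (2.0 * u)**beta           # (C/beta) * u^beta = (2u)^beta / 2
    return np.where(t < 0.5, F, 1.0 - F)

def Phi_beta_inv(w, beta):
    w = np.asarray(w, dtype=float)
    u = np.minimum(w, 1.0 - w)
    t = 0.5 * (2.0 * u)**(1.0 / beta)   # inverse of Phi_beta on [0, 1/2)
    return np.where(w < 0.5, t, 1.0 - t)

w = np.random.default_rng(0).random(5)
t = Phi_beta_inv(w, beta=0.5)
assert np.allclose(Phi_beta(t, beta=0.5), w)   # round-trip check
```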
Using the density $\rho_{\beta}$, we can write $\displaystyle\begin{split}I(g)&=\int_{[0,1]^{s}}g(\mathbf{t})d\mathbf{t}\\\ &=\int_{[0,1]^{s}}g(\mathbf{t})\cdot\frac{\rho_{\beta}(\bm{t})}{\rho_{\beta}(\bm{t})}d\bm{t}\\\ &=\int_{[0,1]^{s}}g(\Phi_{{\beta}}^{-1}(\mathbf{w}))\cdot\frac{1}{\rho_{\beta}(\Phi_{{\beta}}^{-1}(\mathbf{w}))}d\mathbf{w}.\end{split}$ (74) In Example 2, $g=Q\circ\Phi^{-1}$, and we introduce $\displaystyle g_{\mathrm{IS}}(\mathbf{w})=Q\left(\Phi^{-1}(\Phi_{{\beta}}^{-1}(\mathbf{w}))\right)\cdot\frac{1}{\rho_{\beta}(\Phi_{{\beta}}^{-1}(\mathbf{w}))}.$ (75) Similar to the last section, a possible choice of the parameter $\beta$ can be determined by minimizing the second-order moment of $g_{\mathrm{IS}}$: $\displaystyle\begin{split}\beta^{*}&=\operatorname*{arg\,min}_{\beta}\mathbb{E}[g_{\mathrm{IS}}^{2}(\mathbf{w})]\\\ &=\operatorname*{arg\,min}_{\beta}\int_{[0,1]^{s}}\left(\frac{g(\bm{t})}{\rho_{\beta}(\bm{t})}\right)^{2}\rho_{\beta}(\bm{t})d\bm{t}\\\ &=\operatorname*{arg\,min}_{\beta}\int_{[0,1]^{s}}\frac{g^{2}(\bm{t})}{\rho_{\beta}(\bm{t})}d\bm{t},\end{split}$ (76) where we changed variables via $\bm{t}=\Phi_{{\beta}}^{-1}(\mathbf{w})$. Again, we seek an optimizer based on ensembles when the analytical solution is unavailable. Next, we study the mixed first-order derivative of $g_{\textrm{IS}}$: $\displaystyle\begin{split}\left\lvert\frac{\partial}{\partial w_{\mathfrak{u}}}g_{\mathrm{IS}}(\mathbf{w})\right\rvert&=\left\lvert\sum_{\mathfrak{z}\subseteq\mathfrak{u}}\frac{\partial}{\partial w_{\mathfrak{u}-\mathfrak{z}}}Q\left(\Phi^{-1}(\Phi_{{\beta}}^{-1}(\mathbf{w}))\right)\cdot\frac{\partial}{\partial w_{\mathfrak{z}}}\varphi(\mathbf{w})\right\rvert,\\\ \end{split}$ (77) where $\varphi(\mathbf{w})=\frac{1}{\rho_{\beta}(\Phi_{{\beta}}^{-1}(\mathbf{w}))}$. We have $\displaystyle\frac{\partial}{\partial w_{\mathfrak{u}-\mathfrak{z}}}Q\left(\Phi^{-1}(\Phi_{{\beta}}^{-1}(\mathbf{w}))\right)=\frac{\partial}{\partial\bar{\mathrm{y}}_{\mathfrak{u}-\mathfrak{z}}}Q(\bar{\mathbf{y}})\cdot\prod_{j\in\mathfrak{u}-\mathfrak{z}}\frac{\partial\bar{\mathrm{y}}_{j}}{\partial t_{j}}\frac{\partial t_{j}}{\partial w_{j}}$ (78) and $\displaystyle\begin{split}\frac{\partial}{\partial w_{\mathfrak{z}}}\varphi(\mathbf{w})&=\frac{\partial}{\partial w_{\mathfrak{z}}}\prod_{k=1}^{s}\frac{1}{\rho_{\beta_{k}}(\Phi_{{\beta}_{k}}^{-1}(w_{k}))}\\\ &=\prod_{k=1}^{s}\frac{1}{\beta_{k}}(2w_{k})^{\frac{1}{\beta_{k}}-1}\cdot\prod_{j\in\mathfrak{z}}\frac{1-\beta_{j}}{\beta_{j}}w_{j}^{-1},\end{split}$ (79) where we define $\bar{\mathbf{y}}=\Phi^{-1}(\mathbf{t})$ and $\mathbf{t}=\Phi^{-1}_{\beta}(\mathbf{w})$. For simplicity, we only consider the case where $0\leq w_{j}\leq 1/2$ for all $j=1,\dotsc,s$, as the other cases follow by symmetry.
We have $\displaystyle\frac{\partial t_{j}}{\partial w_{j}}$ $\displaystyle=\frac{1}{\beta_{j}}(2w_{j})^{\frac{1}{\beta_{j}}-1},$ (80) and $\displaystyle\begin{split}\frac{\partial\bar{\mathrm{y}}_{j}}{\partial t_{j}}&=\frac{\partial\Phi^{-1}(t_{j})}{\partial t_{j}}\\\ &\approx-\frac{\partial\sqrt{-2\log t_{j}}}{\partial t_{j}}\\\ &=\frac{1}{t_{j}\sqrt{-2\log t_{j}}}.\end{split}$ (81) Combining (63) and (78) to (81), we obtain $\displaystyle\begin{split}\left\lvert\frac{\partial}{\partial w_{\mathfrak{u}}}g_{\mathrm{IS}}(\mathbf{w})\right\rvert&=\left\lvert\sum_{\mathfrak{z}\subseteq\mathfrak{u}}{\partial^{{\mathfrak{u}-\mathfrak{z}}}}Q\left(\Phi^{-1}(\Phi_{{\beta}}^{-1}(\mathbf{w}))\right)\cdot{\partial^{\mathfrak{z}}}\varphi(\mathbf{w})\right\rvert\\\ &=\left\lvert\sum_{\mathfrak{z}\subseteq\mathfrak{u}}\frac{\partial}{\partial\bar{\mathbf{y}}_{\mathfrak{u}-\mathfrak{z}}}Q(\bar{\mathbf{y}})\cdot\prod_{j\in\mathfrak{u}-\mathfrak{z}}\frac{\partial\bar{\mathrm{y}}_{j}}{\partial t_{j}}\frac{\partial t_{j}}{\partial w_{j}}\cdot\frac{\partial}{\partial w_{\mathfrak{z}}}\varphi(\mathbf{w})\right\rvert\\\ &\leq\left\lvert\sum_{\mathfrak{z}\subseteq\mathfrak{u}}\frac{\lvert\mathfrak{u}-\mathfrak{z}\rvert!}{(\ln 2)^{\lvert\mathfrak{u}-\mathfrak{z}\rvert}}\left(\prod_{j\in\mathfrak{u}-\mathfrak{z}}b_{j}\right)K^{*}\prod_{j=1}^{s}\exp\left(b_{j}\left\lvert\Phi^{-1}\left((2w_{j})^{\frac{1}{\beta}}\cdot\frac{1}{2}\right)\right\rvert\right)\right\rvert\\\ &\cdot\prod_{j\in\mathfrak{u}-\mathfrak{z}}\frac{\partial\bar{\mathrm{y}}_{j}}{\partial t_{j}}\frac{\partial t_{j}}{\partial w_{j}}\cdot\prod_{j=1}^{s}\frac{1}{\beta_{j}}(2w_{j})^{\frac{1}{\beta_{j}}-1}\cdot\prod_{k\in\mathfrak{z}}\frac{1-\beta_{k}}{\beta_{k}w_{k}}.\end{split}$ (82) We apply the change of variable $t_{j}=(2w_{j})^{\frac{1}{\beta_{j}}}\cdot\frac{1}{2}$ for $j=1,\dotsc,s$ and continue the analysis as follows: $\displaystyle\begin{split}\left\lvert\frac{\partial}{\partial w_{\mathfrak{u}}}g_{\mathrm{IS}}(\mathbf{w})\right\rvert&\leq\left\lvert\sum_{\mathfrak{z}\subseteq\mathfrak{u}}\frac{\lvert\mathfrak{u}-\mathfrak{z}\rvert!}{(\ln 2)^{\lvert\mathfrak{u}-\mathfrak{z}\rvert}}\left(\prod_{j\in\mathfrak{u}-\mathfrak{z}}b_{j}\right)K^{*}\prod_{j=1}^{s}\exp\left(b_{j}\left\lvert\Phi^{-1}\left(t_{j}\right)\right\rvert\right)\right\rvert\\\ &\cdot\prod_{j\in\mathfrak{u}-\mathfrak{z}}\frac{\partial\bar{\mathbf{y}}_{j}}{\partial t_{j}}\frac{\partial t_{j}}{\partial w_{j}}\cdot\prod_{j=1}^{s}\frac{1}{\beta_{j}}(2w_{j})^{\frac{1}{\beta_{j}}-1}\cdot\prod_{k\in\mathfrak{z}}\frac{1-\beta_{k}}{\beta_{k}w_{k}}\\\ &\approx{\sum_{\mathfrak{z}\subseteq\mathfrak{u}}\frac{\lvert\mathfrak{u}-\mathfrak{z}\rvert!}{(\ln 2)^{\lvert\mathfrak{u}-\mathfrak{z}\rvert}}\left(\prod_{j\in\mathfrak{u}-\mathfrak{z}}b_{j}\right)K^{*}\prod_{j=1}^{s}\exp\left(b_{j}\sqrt{-2\log\left(t_{j}\right)}\right)}\\\ &\cdot\prod_{j\in\mathfrak{u}-\mathfrak{z}}\frac{1}{w_{j}\beta_{j}\sqrt{-2\log\left(t_{j}\right)}}\cdot\prod_{j=1}^{s}\frac{1}{\beta_{j}}(2w_{j})^{\frac{1}{\beta_{j}}-1}\cdot\prod_{k\in\mathfrak{z}}\frac{1-\beta_{k}}{\beta_{k}w_{k}}\\\ &\approx{\sum_{\mathfrak{z}\subseteq\mathfrak{u}}\frac{\lvert\mathfrak{u}-\mathfrak{z}\rvert!}{(\ln 2)^{\lvert\mathfrak{u}-\mathfrak{z}\rvert}}\left(\prod_{j\in\mathfrak{u}-\mathfrak{z}}b_{j}\right)K^{*}\prod_{j=1}^{s}\exp\left(b_{j}\sqrt{-2\log\left(t_{j}\right)}\right)}\\\ 
&\cdot\prod_{j\in\mathfrak{u}-\mathfrak{z}}\frac{2}{\beta_{j}\sqrt{-2\log\left(t_{j}\right)}}\cdot\prod_{j=1}^{s}\frac{1}{\beta_{j}}(2w_{j})^{\frac{1}{\beta_{j}}-2}\cdot\prod_{k\in\mathfrak{z}}\frac{2(1-\beta_{k})}{\beta_{k}}\\\ &={\sum_{\mathfrak{z}\subseteq\mathfrak{u}}\frac{\lvert\mathfrak{u}-\mathfrak{z}\rvert!}{(\ln 2)^{\lvert\mathfrak{u}-\mathfrak{z}\rvert}}\left(\prod_{j\in\mathfrak{u}-\mathfrak{z}}b_{j}\right)K^{*}\prod_{j=1}^{s}\exp\left(b_{j}\sqrt{-2\log\left(t_{j}\right)}\right)}\\\ &\cdot\prod_{j\in\mathfrak{u}-\mathfrak{z}}\frac{2}{\beta_{j}^{2}\sqrt{-2\log\left(t_{j}\right)}}\cdot\prod_{k\in\mathfrak{z}}\frac{2(1-\beta_{k})}{\beta_{k}^{2}}\cdot\prod_{j=1}^{s}(2t_{j})^{1-2{\beta_{j}}}\\\ &\leq C\prod_{j=1}^{s}\left({t_{j}}\right)^{1-2{\beta_{j}}-B_{j}}\\\ &=C\prod_{j=1}^{s}\left({w_{j}}\right)^{\frac{1}{\beta_{j}}-2-\frac{B_{j}}{\beta_{j}}},\\\ \end{split}$ (83) where in the last inequality we use $\displaystyle\exp\left(b_{j}\sqrt{-2\log\left(t_{j}\right)}\right)\leq C(t_{j})t_{j}^{-B_{j}}$ with $B_{j}=\frac{b_{j}}{\sqrt{-2\log t_{j}}}$. Compared to the case without IS ($\bm{\beta}=\mathbf{1}$), an improved regularity of $g_{\mathrm{IS}}$ is achieved for $0<\beta_{j}<1$, as the exponent becomes larger, i.e., $\frac{1}{\beta_{j}}-2-\frac{B_{j}}{\beta_{j}}\geq-1-B_{j}$ when $1-B_{j}\geq 0$. Moreover, the condition $\frac{1}{\beta_{j}}-2-\frac{B_{j}}{\beta_{j}}\geq-1$ for $j=1,\dotsc,s$ ensures the $\mathcal{O}(n^{-1})$ convergence rate, which requires $\beta_{j}\leq 1-\frac{b_{j}}{\sqrt{-2\log t_{j}}}.$ (84) Similar to the optimality condition (67) for the first proposal distribution, the $\mathcal{O}(n^{-1})$ convergence rate requires $0<\beta_{j}<1$ and sufficiently large, finite $t_{j}^{-1}$ for all $j=1,\dotsc,s$, and thus a sufficiently large, finite $n$. ## 5 Numerical results This section tests the convergence rates of the two examples considered above, compares them with the analysis, and demonstrates how IS improves the rates. ### 5.1 Lognormal random variable expectation This section approximates the expectation of a lognormal random variable, that is, ${\mathbb{E}\mspace{-2.0mu}\left[g\right]}={\mathbb{E}\mspace{-2.0mu}\left[\exp\left(\sum_{j=1}^{s}\sigma_{j}\xi_{j}\right)\right]},$ where the $\xi_{j}$ are i.i.d. $\mathcal{N}(0,1)$. The expectation is analytically given by $\prod_{j=1}^{s}\exp\left(\frac{\sigma_{j}^{2}}{2}\right).$ We test the convergence of the root mean squared error (RMSE) of the QMC method using the Sobol’ sequence. The RMSE for QMC is computed as follows: $\displaystyle\sqrt{\mathbb{E}\left[\left(\frac{1}{R}\sum_{r=1}^{R}\frac{1}{n}\sum_{i=1}^{n}g(t_{i}\oplus\Delta_{r})-\mathbb{E}[g]\right)^{2}\right]}\approx\sqrt{\frac{1}{R(R-1)}\sum_{r=1}^{R}\left(\hat{I}_{n}^{(r)}-\hat{I}_{R,n}\right)^{2}}$ (85) where $\hat{I}_{n}^{(r)}=\frac{1}{n}\sum_{i=1}^{n}g(t_{i}\oplus\Delta_{r})$ and $\hat{I}_{R,n}=\frac{1}{R}\sum_{r=1}^{R}\frac{1}{n}\sum_{i=1}^{n}g(t_{i}\oplus\Delta_{r})$. In Example 1, we choose the nested uniform scrambling randomization (Owen’s scrambling). The convergence rate of the RMSE is $\mathcal{O}(n^{-3/2+\epsilon})$ if the integrand has Lipschitz-continuous mixed first-order derivatives [36]. The integrand $g$ has a singularity at 0 and thus does not have this property. Figures 5, 6, and 7 plot the convergence of the RMSE for the original integrand $g$ and the integrand with importance sampling, $g_{\textrm{IS}}$. IS improves the integrand regularity in all cases, significantly reducing the RMSE and improving the convergence rate.
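For reference, a minimal sketch of the estimator (85) for Example 1 with scrambled Sobol' points follows; note that scipy applies a digital scramble rather than Owen's nested uniform scrambling, and, since $\mathbb{E}[g]$ is known in closed form here, the RMSE is computed against it directly instead of via the spread across randomisations.

```python
# Randomised-QMC RMSE for E[exp(sigma * xi)], xi ~ N(0,1), s = 1, estimated
# from R independent scramblings; the rate gamma is fitted on log2-log2 axes.
import numpy as np
from scipy.stats import norm, qmc

sigma, R, exact = 1.0, 16, np.exp(0.5)      # exact mean is exp(sigma^2 / 2)
rng = np.random.default_rng(1)
ms = range(8, 15)                           # n = 2^8 ... 2^14
rmse = []
for m in ms:
    I_r = []
    for _ in range(R):
        t = qmc.Sobol(d=1, scramble=True, seed=rng).random_base2(m)
        t = np.clip(t, 1e-16, 1 - 1e-16)    # guard the singular endpoints
        I_r.append(np.mean(np.exp(sigma * norm.ppf(t))))
    rmse.append(np.sqrt(np.mean((np.asarray(I_r) - exact) ** 2)))
ns = 2.0 ** np.asarray(list(ms))
gamma = -np.polyfit(np.log2(ns), np.log2(np.asarray(rmse)), 1)[0]
print("fitted rate gamma ~", gamma)
```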
Tables 1, 2, and 3 provide the measured convergence rates $\gamma$ and the value $(1-\gamma)/\sqrt{\sum_{j=1}^{s}\sigma_{j}^{2}}$. For the same sample size $n$, the convergence rate $\gamma$ decreases as $\sqrt{\sum_{j=1}^{s}\sigma_{j}^{2}}$ increases. The value $(1-\gamma)/\sqrt{\sum_{j=1}^{s}\sigma_{j}^{2}}$ is similar in magnitude across all cases within each dimension setting and increases as the dimension increases. Figure 8 plots the values $\frac{2(1-\gamma)^{2}}{\sum_{j=1}^{s}\sigma_{j}^{2}}$ (left) and $Cn^{-1}$ (right), with the corresponding 95% confidence intervals. The variation within each group is also observed to be smaller than that across groups.

Table 1: Convergence of Example 1 for dimension $s=1$ and standard deviation $\sigma=1.0,2.0,3.0$. The convergence rate is fitted at $n=2^{21}$.

$\sigma$ | Conv. Rate $\gamma$ | Conv. Rate with IS | $(1-\gamma)/\sigma$
---|---|---|---
1.0 | 0.875 | 1.888 | 0.125
2.0 | 0.719 | 1.340 | 0.141
3.0 | 0.582 | 1.088 | 0.139

Figure 5: Example 1, the root mean squared error with $s=1$: $\sigma=1.0$ (upper left), 2.0 (upper right), and 3.0 (bottom). The convergence rate is fitted at $n=2^{21}$. The legend IS_A stands for the IS of the first type.

Figure 6: Example 1, the root mean squared error with $s=3$: $\sigma=\\{1.0,1.0,1.0\\}$ (upper left), $\\{2.0,1.0,1.0\\}$ (upper right), $\\{2.0,1.4,1.0\\}$ (lower left), and $\\{1.0,1.7,1.0\\}$ (lower right). The convergence rate is fitted at $n=2^{21}$. The legend IS_A stands for the IS of the first type.

Figure 7: Example 1, the root mean squared error with $s=6$: Case I (upper left), Case II (upper right), Case III (lower left), and Case IV (lower right). The convergence rate is fitted at $n=2^{21}$. The legend IS_A stands for the IS of the first type.

Table 2: Convergence of Example 1 for dimension $s=3$ and various standard deviation settings, where the convergence rate is fitted at $n=2^{21}$.

$\sigma$ | Conv. Rate $\gamma$ | Conv. Rate with IS | $(1-\gamma)/\sqrt{\sum_{j=1}^{s}\sigma_{j}^{2}}$
---|---|---|---
{1.0, 1.0, 1.0} | 0.717 | 1.264 | 0.163
{2.0, 1.0, 1.0} | 0.614 | 1.305 | 0.158
{2.0, 1.4, 1.0} | 0.575 | 1.349 | 0.161
{2.0, 1.7, 1.0} | 0.541 | 1.354 | 0.162

Table 3: Convergence of Example 1 for dimension $s=6$ and various standard deviation $\sigma$ settings, where ${\sigma}_{j}={1}$, $j=1,\dotsc,s$ for Case I, and $\sigma_{j}=\frac{\sqrt{6}\xi_{j}}{\sqrt{\sum_{j=1}^{s}\xi_{j}^{2}}}$, where $\xi_{j}\sim\textrm{Lognormal}(0,1)$, $j=1,\dotsc,s$ for Cases II, III, and IV. The convergence rate is fitted at $n=2^{21}$.

$\sigma$ cases | Conv. Rate $\gamma$ | Conv. Rate with IS | $(1-\gamma)/\sqrt{\sum_{j=1}^{s}\sigma_{j}^{2}}$
---|---|---|---
I | 0.577 | 0.962 | 0.173
II | 0.571 | 1.000 | 0.175
III | 0.575 | 1.014 | 0.174
IV | 0.567 | 1.031 | 0.178

Figure 8: Example 1, $\frac{2(1-\gamma)^{2}}{\sum_{j=1}^{s}\sigma_{j}^{2}}$ (left) and $Cn^{-1}$ (right) against the number of samples $n$ across various dimension and variance settings. The colors represent the dimension $s$, whereas each scatter point with the same color represents a unique variance setting. The shaded area is the 95% confidence interval. Both the values $\frac{2(1-\gamma)^{2}}{\sum_{j=1}^{s}\sigma_{j}^{2}}$ (left) and $Cn^{-1}$ (right) decrease as the sample size increases. The values are similar in amplitude within each dimension setting.

### 5.2 Elliptic partial differential equations with lognormal coefficient

We first specify the settings of this example.
The QoI is the weighted integral of the solution $u$ over the entire domain $\mathcal{D}$, that is, $\displaystyle Q(u)=\int_{\mathcal{D}}g(\mathbf{x})u(\mathbf{x};\cdot)d\mathbf{x},$ where $\mathcal{D}=[-1,1]^{2}$, $g=\rho(\cdot;\mu,\Sigma)*1_{\mathcal{D}_{0}}$, $*$ denotes the convolution operator, $\mathcal{D}_{0}=\left[0.25,0.5\right]\times\left[-0.5,-0.25\right]\subset\mathcal{D}$, and $\rho(\cdot;\mu,\Sigma)$ represents the Gaussian density function in $\mathbb{R}^{2}$ with mean $\mu=0$ and covariance $\Sigma=\frac{1}{16}\bm{I}$. The PDE problem (45a) is solved with deal.II [1], using the $\mathcal{Q}_{1}$ finite element on a 16$\times$16 mesh. In this example, we employed the Sobol’ sequence with a random linear scramble as the randomization [22]. Recall that the coefficient $a$ involved in the PDE model (45) is given by $\displaystyle a(\mathbf{x};\omega)=\exp\left(\sum_{j=1}^{s}y_{j}\psi_{j}\right),$ where the basis $\psi_{j}$ is connected to the following Matérn covariance kernel: $\displaystyle C(h)=\frac{1}{2^{\nu-1}\Gamma({\nu})}\left(\sqrt{2\nu}\frac{h}{r}\right)^{\nu}K_{\nu}\left(\sqrt{2\nu}\frac{h}{r}\right),$ (86) where we chose $\nu=4.5$ and $r=1$. In addition, $\Gamma$ is the Gamma function, and $K_{\nu}$ is the modified Bessel function of the second kind. Let $\lambda_{j,p}$ and $\theta_{j,p}$ be the Fourier coefficients and trigonometric functions in the Fourier expansion of the covariance kernel $C$ on an extended domain $\mathcal{D}_{p}=[-\gamma,\gamma]^{d}$, respectively, with $d=2$. The extension is needed to ensure the positivity of the Fourier coefficients $\lambda_{j,p}$ [3]. We let $\displaystyle\begin{split}\psi_{j,p}&=\lambda_{j,p}\theta_{j,p}\\\ \psi_{j}&=\left.\psi_{j,p}\right|_{\mathcal{D}}.\end{split}$ (87) Apart from the trigonometric-basis expansions, we also apply Meyer wavelet functions to construct the basis $\psi_{j}$ (for a detailed analysis and instructions, see Section 4 in [3]). To validate the proposed convergence model (55), we also considered the following coefficient: $\displaystyle a(\mathbf{x};\omega)=\exp\left(\sum_{j=1}^{s}y_{j}\sigma_{j}\psi_{j}\right),$ (88) where we generated six samples of $\sigma$ within each dimension setting. In the first sample, $\sigma=\mathbf{1}$. In the second and third samples, the $\sigma_{j}$ are i.i.d. sampled from the uniform distribution $U[1,2]$. In the last three samples, the $\sigma$ values of the first three cases are multiplied component-wise by 2. Figure 9 plots the deteriorated rate $1-\gamma$ against $\sqrt{\sum_{j=1}^{s}\sigma_{j}^{2}b_{j}^{2}}$. In all considered cases, the rate $1-\gamma$ exhibits an almost linear dependence on $\sqrt{\sum_{j=1}^{s}\sigma_{j}^{2}b_{j}^{2}}$. Unlike Example 1, the effect of dimension on the convergence rates is not obvious in either case, which was unexpected because the quality of QMC points deteriorates in higher dimensions. One possible reason for this observation lies in Equation (49), where we applied the upper bounds for $Q$ and its derivatives. Figure 9: Example 2, the value $(1-\gamma)$ against $\sqrt{\sum_{j=1}^{s}\sigma_{j}^{2}b_{j}^{2}}$ for the Fourier basis (left) and wavelet-type basis (right) across various dimension and variance settings. The colors represent the dimension $s$, whereas each scatter point with the same color represents a unique variance setting. A dimension-independent effect is observed in the convergence rate in both cases.
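The Matérn covariance (86) with $\nu=4.5$ and $r=1$ is straightforward to evaluate; a small sketch follows, handling the removable limit at $h=0$, where $C(0)=1$.

```python
# Matern covariance (86): C(h) = (2^{1-nu}/Gamma(nu)) * z^nu * K_nu(z),
# with z = sqrt(2 nu) h / r and the limit C(0) = 1.
import numpy as np
from scipy.special import gamma, kv

def matern(h, nu=4.5, r=1.0):
    h = np.asarray(h, dtype=float)
    z = np.sqrt(2.0 * nu) * h / r
    out = np.ones_like(h)                 # limit value at h = 0
    nz = z > 0
    out[nz] = 2.0**(1.0 - nu) / gamma(nu) * z[nz]**nu * kv(nu, z[nz])
    return out

print(matern(np.array([0.0, 0.1, 0.5, 1.0, 2.0])))
```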
Figure 10 compares the norms of the basis functions ${\psi_{j,p}}$ and ${\psi_{j}}$ for the trigonometric and wavelet-type bases. The trigonometric basis is optimal in approximating the random field in the $L^{2}$ sense. However, the basis $L^{\infty}$ norm, which appears in the convergence models (55) and (29), is of primary interest. Ideally, $\lVert\psi_{j}\rVert_{L^{\infty}(\mathcal{D})}$ should decay quickly as $j$ increases to reduce the effective dimension. In the nonasymptotic QMC convergence model for elliptic PDEs (55), the value $\sum_{j=1}^{s}b_{j}^{2}$ is expected to be small for a fixed dimension $s$. We observed that the values $b_{j}$ of the wavelet-type basis in the plotted range are smaller than those of the trigonometric basis. The localization properties of the wavelet-type basis make it even more favorable when the function $\psi_{j,p}$ is restricted to $\mathcal{D}$, since $\lVert\psi_{j}\rVert_{L^{\infty}(\mathcal{D})}\leq\lVert\psi_{j,p}\rVert_{L^{\infty}(\mathcal{D}_{p})}$. Figure 10: Example 2, a comparison of the $L^{2}$ norms $\lVert\psi_{j,p}\rVert_{L^{2}(\mathcal{D}_{p})}$ and $L^{\infty}$ norms $\lVert\psi_{j}\rVert_{L^{\infty}(\mathcal{D})}$ between the Fourier basis and wavelet-type basis. Figure 11 plots the convergence of the RMSE for the original integrand $g$ and the integrand with IS, $g_{\textrm{IS}}$, where the wavelet-type basis is used for two variance settings and two dimension settings. In all cases, both types of IS improve the integrand regularity, reducing the RMSE and improving the convergence rate. Furthermore, the effectiveness of the IS is more significant when the dimension is smaller, as IS is more constrained in higher dimensions if it is to avoid increasing the variance. Moreover, when the original integrand has a larger $\sigma^{2}$, the IS works better, because $\alpha_{j}$ in (66) becomes larger and $\beta_{j}$ in (83) becomes smaller. Figure 11: Example 2, the root mean squared error with and without two types of IS for $s=16$ (top) and $s=64$ (bottom), with $\sigma^{2}=1$ (left) and $\sigma^{2}=4$ (right). The fitted convergence rate is marked in the figure. The random coefficient $a$ is expressed using the wavelet-type basis. Legends IS_A and IS_B stand for IS of the first and the second type, respectively. ## 6 Conclusion In this work, we have studied the nonasymptotic QMC convergence rate model for functions with finite-dimensional inputs. Specifically, we focused on the expectation of a lognormal random variable and an elliptic PDE problem characterized by lognormal coefficients. Drawing upon the hyperbolic set $K_{n,s}$ introduced in the work of Owen [37], we subdivided the integration domain. This division allowed us to split the QMC integration error into two distinct contributions, namely $2\int_{[0,1]^{s}-K_{n,s}}\lvert g-\tilde{g}\rvert$ and $\mathbb{E}\left[D_{n}^{*}(t_{1},\dotsc,t_{n})\right]V_{HK}(\tilde{g})$. This method replaces the right-hand side of the Koksma–Hlawka inequality, $V_{\textrm{HK}}(g)D^{*}_{n}(\mathcal{P})$, which is infinite for the unbounded functions considered in this work, with two finite terms. Our primary contribution has been deriving an upper bound for the integrand derivative via an optimization problem, denoted as (23). With this in place, we applied the bound within the convergence analysis, culminating in the nonasymptotic QMC convergence rate model. To verify the proposed model, we presented numerical examples for the two problems above.
Moreover, the analytical procedures delineated here can find relevance in varied integration challenges, one notable instance being the option pricing problem within financial scenarios. In addition to the above, our work also involved applying two IS distributions, leading us to evaluate their potential to enhance integrand regularity. Notably, despite the distinct support of these distributions, their influence on the integrand appeared broadly analogous. However, there remain areas that we have not explored in depth. For instance, we have yet to derive the constant $C$ within the hyperbolic set. Delving into its dependence on dimensionality and integrand characteristics could be a promising trajectory for subsequent studies. Moreover, future inquiries could also encompass the random field’s truncation error, where wavelets demonstrate a superior edge over the trigonometric basis, as indicated in [2]. Furthermore, examining a multilevel setting with adaptivity, as hinted at in [8], could be beneficial. Lastly, it is worth noting our choice to employ the Sobol’ sequence instead of exploring weighted spaces or designing lattice rules (e.g., [29], [16]). The latter could reduce the worst-case integration error for specific function spaces, potentially leading to a dimension-independent outcome given certain conditions. This combination of lattice rules and nonasymptotic analysis remains a compelling avenue for future exploration. ## 7 Acknowledgement This publication is based on work supported by the Alexander von Humboldt Foundation and the King Abdullah University of Science and Technology (KAUST) office of sponsored research (OSR) under Award No. OSR-2019-CRG8-4033. This work utilized the resources of the Supercomputing Laboratory at King Abdullah University of Science and Technology (KAUST) in Thuwal, Saudi Arabia. The authors thank Christian Bayer, Fabio Nobile and Erik von Schwerin for fruitful discussions. We are also grateful to Michael Samet for the suggestions that significantly improved the paper’s readability. We acknowledge the use of the following open-source software packages: deal.II [1]. ## References * [1] D. Arndt, W. Bangerth, B. Blais, T. C. Clevenger, M. Fehling, A. V. Grayver, T. Heister, L. Heltai, M. Kronbichler, M. Maier, P. Munch, J.-P. Pelteret, R. Rastak, I. Thomas, B. Turcksin, Z. Wang, and D. Wells, The deal.II library, version 9.2, Journal of Numerical Mathematics, 28 (2020), pp. 131–146. * [2] M. Bachmayr, A. Cohen, R. DeVore, and G. Migliorati, Sparse polynomial approximation of parametric elliptic PDEs. Part II: lognormal coefficients, ESAIM: Mathematical Modelling and Numerical Analysis, 51 (2017), pp. 341–363. * [3] M. Bachmayr, A. Cohen, and G. Migliorati, Representations of Gaussian random fields and approximation of elliptic PDEs with lognormal coefficients, Journal of Fourier Analysis and Applications, 24 (2018), pp. 621–649. * [4] C. Bayer, C. Ben Hammouda, and R. Tempone, Numerical smoothing with hierarchical adaptive sparse grids and quasi-Monte Carlo methods for efficient option pricing, Quantitative Finance, 23 (2023), pp. 209–227. * [5] C. Bayer, C. B. Hammouda, A. Papapantoleon, M. Samet, and R. Tempone, Optimal damping with hierarchical adaptive quadrature for efficient Fourier pricing of multi-asset options in Lévy models, arXiv preprint arXiv:2203.08196, (2022). * [6] C. Bayer, C. B. Hammouda, and R. Tempone, Numerical smoothing and hierarchical approximations for efficient option pricing and density estimation, arXiv: Computational Finance, (2020).
* [7] C. Bayer, M. Siebenmorgen, and R. Tempone, Smoothing the payoff for efficient computation of basket option prices, Quantitative Finance, 18 (2016), pp. 491–505. * [8] J. Beck, Y. Liu, E. von Schwerin, and R. Tempone, Goal-oriented adaptive finite element multilevel Monte Carlo with convergence rates, Computer Methods in Applied Mechanics and Engineering, 402 (2022), p. 115582. * [9] R. E. Caflisch, W. J. Morokoff, and A. B. Owen, Valuation of mortgage backed securities using Brownian bridges to reduce effective dimension, vol. 24, Department of Mathematics, University of California, Los Angeles, 1997. * [10] J. Dick and F. Pillichshammer, Digital nets and sequences: discrepancy theory and quasi-Monte Carlo integration, Cambridge University Press, 2010. * [11] G. Y. Dong and C. Lemieux, Dependence properties of scrambled Halton sequences, Mathematics and Computers in Simulation, 200 (2022), pp. 240–262. * [12] M. Gerber and N. Chopin, Sequential quasi Monte Carlo, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 77 (2015), pp. 509–579. * [13] A. D. Gilbert, F. Y. Kuo, and I. H. Sloan, Preintegration is not smoothing when monotonicity fails, in Advances in Modeling and Simulation: Festschrift for Pierre L’Ecuyer, Springer, 2022, pp. 169–191. * [14] E. Gobet, M. Lerasle, and D. Métivier, Mean estimation for randomized quasi Monte Carlo method, (2022). * [15] T. Goda and P. L’Ecuyer, Construction-free median quasi-Monte Carlo rules for function spaces with unspecified smoothness and general weights, SIAM Journal on Scientific Computing, 44 (2022), pp. A2765–A2788. * [16] I. G. Graham, F. Y. Kuo, J. A. Nichols, R. Scheichl, C. Schwab, and I. H. Sloan, Quasi-Monte Carlo finite element methods for elliptic PDEs with lognormal random coefficients, Numerische Mathematik, 131 (2015), pp. 329–368. * [17] M. Griebel, F. Y. Kuo, and I. H. Sloan, The smoothing effect of integration in $\mathbb{R}^{d}$ and the ANOVA decomposition, Math. Comput., 82 (2013), pp. 383–400. * [18] A. Griewank, F. Y. Kuo, H. Leövey, and I. H. Sloan, High dimensional integration of kinks and jumps—smoothing by preintegration, Journal of Computational and Applied Mathematics, 344 (2018), pp. 259–274. * [19] Z. He and X. Wang, On the convergence rate of randomized quasi-Monte Carlo for discontinuous functions, SIAM Journal on Numerical Analysis, 53 (2015), pp. 2488–2503. * [20] Z. He, Z. Zheng, and X. Wang, On the error rate of importance sampling with randomized quasi-Monte Carlo, arXiv preprint arXiv:2203.03220, (2022). * [21] L. Herrmann and C. Schwab, QMC integration for lognormal-parametric, elliptic PDEs: local supports and product weights, Numerische Mathematik, 141 (2019), pp. 63–102. * [22] H. S. Hong and F. J. Hickernell, Algorithm 823: Implementing scrambled digital sequences, ACM Transactions on Mathematical Software (TOMS), 29 (2003), pp. 95–109. * [23] J. Imai and K. S. Tan, A general dimension reduction technique for derivative pricing, Journal of Computational Finance, 10 (2006), p. 129. * [24] J. Imai and K. S. Tan, Pricing derivative securities using integrated quasi-Monte Carlo methods with dimension reduction and discontinuity realignment, SIAM Journal on Scientific Computing, 36 (2014), pp. A2101–A2121. * [25] X. Jin and A. X. Zhang, Reclaiming Quasi-Monte Carlo efficiency in portfolio value-at-risk simulation through Fourier transform, Management Science, 52 (2006), pp. 925–938. * [26] Y.
Kazashi, Quasi-Monte Carlo integration with product weights for elliptic PDEs with log-normal coefficients, IMA Journal of Numerical Analysis, 39 (2019), pp. 1563–1593. * [27] P. Kritzer, F. Pillichshammer, L. Plaskota, and G. W. Wasilkowski, On efficient weighted integration via a change of variables, Numerische Mathematik, 146 (2018), pp. 545–570. * [28] F. Y. Kuo and D. Nuyens, A practical guide to quasi-Monte Carlo methods, (2016). * [29] F. Y. Kuo, C. Schwab, and I. H. Sloan, Quasi-Monte Carlo finite element methods for a class of elliptic partial differential equations with random coefficients, SIAM J. Numer. Anal., 50 (2012), pp. 3351–3374. * [30] S. Liu and A. B. Owen, Preintegration via active subspace, SIAM Journal on Numerical Analysis, 61 (2023), pp. 495–514. * [31] P. L’Ecuyer, Randomized quasi-Monte Carlo: An introduction for practitioners, Springer, 2018. * [32] W. J. Morokoff and R. E. Caflisch, Quasi-random sequences and their discrepancies, SIAM Journal on Scientific Computing, 15 (1994), pp. 1251–1279. * [33] J. A. Nichols and F. Y. Kuo, Fast CBC construction of randomly shifted lattice rules achieving $\mathcal{O}(n^{-1+\delta})$ convergence for unbounded integrands over $\mathbb{R}^{s}$ in weighted spaces with POD weights, Journal of Complexity, 30 (2014), pp. 444–468. * [34] H. Niederreiter, Random number generation and quasi-Monte Carlo methods, SIAM, 1992. * [35] D. Ouyang, X. Wang, and Z. He, Quasi-Monte Carlo for unbounded integrands with importance sampling, arXiv preprint arXiv:2310.00650, (2023). * [36] A. B. Owen, Scrambling Sobol’ and Niederreiter–Xing points, Journal of Complexity, 14 (1998), pp. 466–489. * [37] A. B. Owen, Halton sequences avoid the origin, SIAM Review, 48 (2006), pp. 487–503. * [38] A. B. Owen, Monte Carlo theory, methods and examples, 2013. * [39] A. B. Owen, A randomized Halton algorithm in R, arXiv preprint arXiv:1706.02808, (2017). * [40] Z. Pan and A. Owen, Super-polynomial accuracy of one dimensional randomized nets using the median of means, Mathematics of Computation, 92 (2023), pp. 805–837. * [41] A. Papageorgiou, The Brownian bridge does not offer a consistent advantage in quasi-Monte Carlo integration, Journal of Complexity, 18 (2002), pp. 171–186. * [42] J. K. Patel and C. B. Read, Handbook of the normal distribution, vol. 150, CRC Press, 1996. * [43] I. Pinelis, Exact lower and upper bounds on the incomplete gamma function, arXiv preprint arXiv:2005.06384, (2020). * [44] I. H. Sloan and H. Woźniakowski, When are quasi-Monte Carlo algorithms efficient for high dimensional integrals?, Journal of Complexity, 14 (1998), pp. 1–33. * [45] I. Sobol, Calculation of improper integrals using equidistributed sequences, Doklady Akademii Nauk SSSR, 210 (1973), pp. 278–281. * [46] X. Wang, Dimension reduction techniques in quasi-Monte Carlo methods for option pricing, INFORMS Journal on Computing, 21 (2009), pp. 488–504. * [47] X. Wang and K.-T. Fang, The effective dimension and quasi-Monte Carlo integration, Journal of Complexity, 19 (2003), pp. 101–124. * [48] X. Wang and I. H. Sloan, Why are high-dimensional finance problems often of low effective dimension?, SIAM Journal on Scientific Computing, 27 (2005), pp. 159–183. * [49] X. Wang and I. H. Sloan, Low discrepancy sequences in high dimensions: How well are their projections distributed?, Journal of Computational and Applied Mathematics, 213 (2008), pp. 366–386. * [50] X. Wang and I. H. Sloan, Quasi-Monte Carlo methods in financial engineering: An equivalence principle and dimension reduction, Operations Research, 59 (2011), pp.
80–95. * [51] X. Wang and K. S. Tan, Pricing and hedging with discontinuous functions: Quasi-Monte Carlo methods and dimension reduction, Management Science, 59 (2013), pp. 376–389. * [52] C. Weng, X. Wang, and Z. He, Efficient computation of option prices and greeks by quasi-Monte Carlo method with smoothing and dimension reduction, SIAM Journal on Scientific Computing, 39 (2017), pp. B298–B322. * [53] J. Wiart, C. Lemieux, and G. Y. Dong, On the dependence structure and quality of scrambled $(t,m,s)$-nets, Monte Carlo Methods and Applications, 27 (2019), pp. 1 – 26.
Affiliations:

1. Department of Physics & Astronomy, University of British Columbia, 325 - 6224 Agricultural Road, Vancouver, V6T 1Z1, British Columbia, Canada
2. U.S. Naval Research Laboratory, 4555 Overlook Ave SW, Washington, 20375, DC, USA
3. Department of Physics, McGill University, 3600 rue University, Montréal, H3A 2T8, QC, Canada
4. Trottier Space Institute, McGill University, 3600 rue University, Montréal, H3A 2A7, QC, Canada
5. Tata Institute of Fundamental Research, National Centre for Radio Astronomy, Savitribai Phule Pune University Campus, Spicer College Road, Ganeshkhind, Pune, HR6G+3G5, 411007, Maharashtra, India
6. Cornell Center for Astrophysics and Planetary Science, Cornell University, Ithaca, 14853, NY, USA
7. David A. Dunlap Department of Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, M5S 3H4, Ontario, Canada
8. Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, M5S 3H4, Ontario, Canada
9. Department of Physics and Astronomy, West Virginia University, Morgantown, 26506-6315, WV, USA
10. Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, 26505, WV, USA
11. University of California Santa Cruz, 1156 High St, Santa Cruz, 95064, CA, USA
12. ASTRON, Netherlands Institute for Radio Astronomy, Oude Hoogeveensedijk 4, Dwingeloo, PD, 7991, The Netherlands
13. Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, Amsterdam, 1098, XH, The Netherlands
14. Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, 02139, MA, USA
15. MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Ave, McNair Building (MIT Building 37), Cambridge, 02139, MA, USA
16. E.A. Milne Centre for Astrophysics, University of Hull, Cottingham Road, Kingston-upon-Hull, HU6 7RX, UK
17. Centre of Excellence for Data Science, Artificial Intelligence and Modelling (DAIM), University of Hull, Cottingham Road, Kingston-upon-Hull, HU6 7RX, UK
18. International Centre for Radio Astronomy Research, Curtin University, Bentley, 6102, WA, Australia
19. National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, 22903, VA, USA
20. York University, 4700 Keele Street, Toronto, M3J 1P3, ON, Canada
21. Perimeter Institute for Theoretical Physics, 31 Caroline St N, Waterloo, N2L 2Y5, ON, Canada

# The discovery of a nearby 421 s transient with CHIME/FRB/Pulsar

Fengqiu Adam Dong, Tracy Clarke, Alice P. Curtin, Ajay Kumar, Ingrid Stairs, Shami Chatterjee, Amanda M. Cook, Emmanuel Fonseca, B. M. Gaensler, Jason W.T. Hessels, Victoria M. Kaspi, Mattias Lazda, Kiyoshi W. Masui, James W. McKee, Bradley W. Meyers, Aaron B. Pearlman, Scott M. Ransom, Paul Scholz, Kaitlyn Shin, Kendrick M. Smith, Chia Min Tan

###### Abstract Neutron stars and white dwarfs are both dense remnants of post-main-sequence stars. Pulsars, magnetars and strongly magnetised white dwarfs have all been observed to exhibit coherent, pulsed radio emission in relation to their rotational period. Recently, a new type of radio long-period transient (LPT) has been discovered. The bright radio emission of LPTs resembles that of radio pulsars and magnetars. However, they pulse on timescales (minutes) much longer than previously seen.
While minute timescales are common rotation periods for white dwarfs, LPTs are much brighter than the known pulsating white dwarfs, and dipolar radiation from isolated (as opposed to binary) magnetic white dwarfs has yet to be observed. Here, we report the discovery of a new $\sim$421 s LPT, CHIME J0630+25, using the CHIME/FRB and CHIME/Pulsar instruments. We used standard pulsar timing techniques and obtained a phase-coherent timing solution, which yielded limits on the inferred magnetic field and characteristic age. CHIME J0630+25 is remarkably nearby ($170\pm 80$ pc), making it the closest LPT discovered to date.

Recently, a new class of objects known as radio long-period transients (LPTs) has been discovered. (These sources have also been given other names in the literature, such as ultra-long-period transients (ULPTs) and long-period radio transients (LPRTs); in this study, we use the term long-period transient throughout.) Four such objects have been confirmed: GLEAM-X J162759.5–523504.3 [1] and GPM 1839–10 [2], both discovered by the Murchison Widefield Array (MWA), ASKAP J1935+2148 [3], discovered by the Australian Square Kilometre Array Pathfinder (ASKAP) telescope, and PSR J0901–4046 [4], discovered by the MeerKAT telescope. Their periods are 1091 s for GLEAM-X J162759.5–523504.3, 1318 s for GPM 1839–10, 3225 s for ASKAP J1935+2148, and 76 s for PSR J0901–4046. However, their nature remains mysterious. These LPTs are characterised by their exceptionally long periods, wide burst widths (up to 60 seconds), and complex temporal and spectral microstructure. Two types of sources, white dwarfs and neutron stars, have emerged as the favoured models for LPTs due to their emission characteristics and rotation periods.

Radio pulsars are the most common form of detectable neutron star. They are remarkably accurate celestial clocks. By carefully measuring pulse times of arrival (TOAs), we can fully account for every rotation of a pulsar through a process called pulsar timing. The rotational period and period derivative obtained via pulsar timing provide critical constraints [5, 6] on the mechanisms of coherent radio emission [7, 8, 9]. The constraints derived from pulsar timing assume that the emission is powered by rotational energy loss. However, not all emission from neutron stars is powered by spin-down. A subset of magnetically powered young neutron stars, namely magnetars, exhibits noisy rotational behaviour and drastically changing pulse-to-pulse emission punctuated by high-energy outbursts [see 10, 11, for reviews]. Accomplishing phase-coherent timing for magnetars can be challenging due to their apparent rotational instability, which often, but not exclusively, occurs contemporaneously with X-ray outbursts [e.g. 12, 13]. The complex temporal and spectral structure of LPT radio emission is somewhat similar to that of magnetars, and some have argued [14] that this favours a neutron-star model for LPTs [15, 16]. However, radio emission from neutron stars with periods as long as those of LPTs has not yet been observed. The longest-period pulsar, PSR J0250+5854, has a rotational period of 23.5 s [17]. There is one magnetar candidate, 1E 161348–5055, with a period of 6.67 h at the centre of the supernova remnant RCW 103 [18]. However, only high-energy emission has ever been detected from 1E 161348–5055 [19, 20]. White dwarfs are an alternative explanation for LPTs due to their longer rotational periods.
Most white dwarf emission is not periodic or detectable in the radio [21]. However, two white dwarfs, AR Scorpii and J1912–4410, have recently been discovered to emit pulsed emission in the radio band like pulsars [22, 23]. In the case of J1912–4410, the emission was detectable down to 1.4 GHz with duty cycles similar to those of radio pulsars. Additionally, the periods of the white dwarfs producing coherent pulsed radio emission are similar to those of the discovered LPTs. Yet, a couple of issues remain with the white dwarf model for LPTs. Despite optical searches, no optical counterparts have been identified in the localisation regions of any LPT [4]. In addition, only white dwarfs in binary systems have been seen to emit coherent radio emission. At least one LPT, PSR J0901–4046, has well-constrained timing parameters, which show it to be an isolated source. While it is theoretically possible for an isolated white dwarf to produce coherent radio pulses, this has yet to be observed.

We have discovered a new $P=421.35542$ s LPT using the Canadian Hydrogen Intensity Mapping Experiment (CHIME) telescope, CHIME J0630+25, utilising search strategies developed for Galactic pulsar searches [24]. We have also obtained a phase-coherent timing solution for CHIME J0630+25. The discovery marks the closest LPT to date at 170(80) pc. Additionally, we have identified two potential X-ray counterparts to CHIME J0630+25 using the Neil Gehrels Swift telescope, although the radio localisation region is large.

## 1 Results

CHIME, located near Penticton, British Columbia, Canada, is a transit telescope comprising four cylindrical dishes and three distinct backend instruments: CHIME/Cosmology [25], CHIME/FRB [26], and CHIME/Pulsar [27]. The unique cylindrical design of CHIME offers a wide field of view (FOV) of approximately 200 square degrees at any given time. Additionally, as an interferometric telescope, CHIME/FRB can form 1024 static synthesised beams within its large primary beam, allowing for the localisation of astrophysical transients with a typical precision of $\sim$0.5 degrees. The extensive FOV enables CHIME/FRB to survey the entire northern sky daily. Of particular interest is CHIME/FRB’s capability to perform continuous single-pulse surveys, enabling the detection of even highly sporadic sources, including repeating and one-off FRBs [28], rotating radio transients [24], and LPTs.

We identified an astrophysical candidate with the first detection occurring on MJD 58772 at 12:54:51 UTC (see Methods) at Right Ascension (RA) 06:30:43$\pm$6’, Declination (Dec) 25:23:24$\pm$11’, where the quoted uncertainties are at the 1$\sigma$ level (all coordinates in this study are given in the J2000 coordinate system). This candidate, CHIME J0630+25, was then followed up by the more sensitive CHIME/Pulsar instrument with 688 observations, totalling $\sim$176 hours between MJDs 59300 and 60116, in the form of high-time-resolution Stokes-I spectra. We detected 11 bursts with the CHIME/Pulsar observations. We also detected 6 bursts with CHIME/FRB. The CHIME/FRB detections only had metadata recorded. However, we confirmed that these bursts are indeed from CHIME J0630+25, as they are consistent with its rotation period. Additionally, one of the bursts was a co-detection with both backends. All the detections are provided in Table 1, which includes the fluence, effective width, spectral index, and burst category for each burst (see Methods).
The burst category is a classification system that we used for the quasiperiodicity analysis described in the Methods section.

Table 1: Properties of the pulses from CHIME J0630+25 detected by the CHIME/FRB and CHIME/Pulsar systems. For the CHIME/FRB detections, we report the TOA in UTC time at CHIME referenced to 400 MHz, signal-to-noise ratio (S/N), and the automated pipeline DM. For the CHIME/Pulsar detections, we report the pulse TOAs referenced to 800 MHz and provide calibrated characteristics from the total intensity data stream. This includes the effective width ($W_{\text{eff}}$), fluence ($F$), spectral index ($\alpha$), DM, and a burst category that facilitates the quasiperiodicity analysis described further in the Methods. The CHIME/FRB detections lack $W_{\text{eff}}$, $F$, $\alpha$ and Burst Category values, as Stokes-I intensity data do not exist for these bursts. This is discussed in more detail in the main text. Note that we omit the detection S/N of the pulses detected by CHIME/Pulsar, as the detection pipeline is different from CHIME/FRB and the detection S/N is not directly comparable. Therefore, to avoid confusion, we only provide the fluence.

Burst | Instrument | TOA (MJD) | $W_{\text{eff}}$ (ms) | $F$ (Jy ms) | $\alpha$ | DM (pc cm${}^{-3}$) | Burst Category | Detection S/N
---|---|---|---|---|---|---|---|---
58772A | CHIME/FRB | 58772.53808(1) | – | – | – | 23(3) | – | 12.9
58855A | CHIME/FRB | 58855.30794(1) | – | – | – | 23(3) | – | 9.7
58860A | CHIME/FRB | 58860.29698(1) | – | – | – | 24(3) | – | 15.0
58871A | CHIME/FRB | 58871.26526(1) | – | – | – | 24(3) | – | 9.4
59167A | CHIME/FRB | 59167.45383(1) | – | – | – | 24(3) | – | 10.7
59553A∗ | CHIME/FRB | 59553.39770(1) | – | – | – | 23(3) | – | 8.7
59341A | CHIME/Pulsar | 59341.97314(2) | 770(230) | 270(80) | -1.8(3) | 20(3) | C1 | –
59341B | CHIME/Pulsar | 59341.97803(1) | 280(90) | 90(30) | -1.5(3) | 23(3) | C1 | –
59456A | CHIME/Pulsar | 59456.65990(1) | 320(100) | 90(30) | -2.7(3) | 23(3) | C1 | –
59456B | CHIME/Pulsar | 59456.66474(1) | 400(120) | 90(30) | -2.0(3) | 23(3) | C1 | –
59460A | CHIME/Pulsar | 59460.64875(1) | 280(90) | 310(90) | -2.1(3) | 20(3) | C3 | –
59463A | CHIME/Pulsar | 59463.64282(2) | 910(270) | 670(200) | -2.1(3) | 22(6) | C3 | –
59548A | CHIME/Pulsar | 59548.41384(1) | 470(140) | 200(60) | -2.2(3) | 20(3) | C1 | –
59553A∗ | CHIME/Pulsar | 59553.39770(1) | 470(140) | 550(160) | -2.3(3) | 20(7) | C2 | –
59563A | CHIME/Pulsar | 59563.37042(2) | 650(200) | 500(150) | -2.5(3) | 23(3) | C2 | –
59565A | CHIME/Pulsar | 59565.36500(1) | 180(60) | 90(30) | -1.2(3) | 21(3) | C1 | –
59574A | CHIME/Pulsar | 59574.34304(2) | 200(60) | 60(20) | -1.9(3) | 25(4) | C2 | –

∗ This pulse was simultaneously detected with both the CHIME/Pulsar and CHIME/FRB systems. We report the measurement from each instrument separately.

The dynamic spectra for the CHIME/Pulsar detections are shown in Figure 1 and show more complex temporal and spectral structure than in conventional radio pulsars. The bursts can span widths up to $\sim$4 s (1% of the period, in the case of 59463A) and can show erratic sub-structures reminiscent of the radio bursts seen from the radio-loud magnetar XTE J1810–197 [15, 16] (in the case of 59460A). An initial periodicity analysis was performed with the rrat_period package from PRESTO (https://github.com/scottransom/presto). rrat_period works by iterating over many different trial periods to find the period most likely to be an integer divisor of the spacings between successive bursts.
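A minimal sketch of this style of search follows; it is an illustration of the idea, not the PRESTO implementation. Each trial period is scored by how far the burst separations fall from integer multiples of the trial period.

```python
# Brute-force long-period search: score each trial period P by the worst
# fractional offset of the burst separations from integer multiples of P.
# toas_s are barycentred burst arrival times in seconds. Illustrative only;
# separations spanning days require a fine trial grid or iterative refinement.
import numpy as np

def best_period(toas_s, p_min=60.0, p_max=1000.0, n_trials=200_000):
    dts = np.diff(np.sort(toas_s))                     # burst separations
    P = np.linspace(p_min, p_max, n_trials)            # trial periods
    phase = dts[None, :] / P[:, None]
    off = np.abs(phase - np.round(phase)).max(axis=1)  # worst offset per trial
    return P[np.argmin(off)]
```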
Using barycentre-corrected arrival times with rrat_period yielded $P\approx 421$ s. This was then refined using TEMPO2 [29] and PINT [30] to obtain a phase-coherent timing solution with a period of $P=421.35542(1)$ s and $\dot{P}=-2.5(1.6)\times 10^{-12}$ s s-1 at 1-$\sigma$ uncertainty (see Methods). The full set of phase-coherent timing parameters is given in Table 2. As a result of the low number of TOAs and the large uncertainties on each TOA, we emphasise that the negative $\dot{P}$ could contain significant covariances with position. To date, most pulsars which are spinning up have been found in binaries [e.g. 31]. Therefore, if the negative $\dot{P}$ were to be believed, then CHIME J0630+25 likely exists in a tight binary like the AR Scorpii and J1912–4410 systems. This is analysed further in the Discussion section. More detections are required to determine $\dot{P}$ robustly. Consequently, we adopted a 1-$\sigma$ shifted upper limit on $\dot{P}$ as discussed below.

The dispersion measure (DM) quantifies the free-electron column of the interstellar medium along the line of sight, which imposes a frequency-dependent delay on all radio pulses, whether from pulsars or white dwarfs. The DM therefore allows us to estimate the distance to the source using interstellar medium models. There exist two models for the Galactic electron density, NE2001 [32] and YMW16 [33]. We estimate the distance to CHIME J0630+25 using the YMW16 model, which analyses of Galactic interstellar medium models have shown to outperform NE2001 for nearby pulsars and to be accurate to within a factor of $\sim$1.5 [34, 35]. Furthermore, we systematically compared pulsars with annual parallax measurements (or other more reliable distance measurements such as globular cluster association) at similar Galactic latitudes as CHIME J0630+25 to obtain the uncertainties for the YMW16-derived distances (see Methods). This results in a low dispersion measure distance of 170(80) pc. The next closest LPT is PSR J0901–4046, with a YMW16 dispersion measure distance of 330(90) pc whose uncertainty is derived with the same method as for CHIME J0630+25. CHIME J0630+25 and PSR J0901–4046 are remarkably close and are prime candidates for multiwavelength follow-up. The existence of these two sources suggests a large Galactic population yet to be discovered.

Figure 1: The collection of pulses from CHIME J0630+25 detected by CHIME/Pulsar. The top panel for each burst contains the frequency-averaged and dedispersed time series. The second panel shows the dynamic spectrum of each burst, and the bottom panel shows the dedispersion heat map, i.e., the power for many different DM trials. All astrophysical pulses from CHIME J0630+25 centre on DM $\approx 22.5$ pc cm-3.

Figure 2: continued

### 1.1 Periodicity and timing solution

Table 2: The phase-coherent timing parameters for CHIME J0630+25. The uncertainties quoted are the 1-$\sigma$ confidence intervals. PEPOCH is the epoch for period determination. TIMEEPH is the time ephemeris that is used. NTOA is the number of times of arrival used in the fit. CLOCK is the timescale that is used. JUMP1 gives the jump between the CHIME/FRB and CHIME/Pulsar instruments and is mainly determined by the single concurrent detection with both instruments. EFAC is a multiplicative factor applied to the TOA uncertainties.
We also provide the derived parameters $\tau$, the characteristic age, $B_{\text{surface}}$, the surface magnetic field, and $\dot{E}$, the spin-down luminosity. However, these are strictly limits based on the shifted upper limit on $\dot{P}$ as described in the text. The uncertainties given are those obtained from the timing fit. We did not fit the declination (see text).

Property | Value
---|---
Name | CHIME J0630+25
R.A. (hh:mm:ss) | 06h30m43s$\pm$6'
Dec (dd:mm:ss) | 25°23'14"
$P$ (s) | 421.35542(1)
$\dot{P}$ ($\times 10^{-12}$ s s-1) | –2.5(1.6)
PEPOCH (MJD) | 59173
TIMEEPH | FB90
NTOA | 17
CLOCK | TT(TAI)
JUMP1 (s) | 0.247
EFAC | 2.476
reduced $\chi^{2}$ | 1.0
Derived Values |
Galactic Longitude (deg) | 188.0
Galactic Latitude (deg) | 7.1
$\tau$ ($\times 10^{6}$ yr) | $>$4.2
$B_{\text{surface}}$ ($\times 10^{15}$ G) | $<$0.8
$\dot{E}$ ($\times 10^{26}$ erg/s) | $<$8.5

Figure 3: The $P$–$\dot{P}$ diagram. 1E161348–5055, a central compact object that shows evidence of magnetar-like emission, is shown with a grey star [20]. The grey circles show the LPTs discovered by the MWA. The longest-period pulsar PSR J2050+5854 and the LPT PSR J0901–4046 are shown by the coloured dots and labelled as such. The grey-shaded region shows the death valley for coherent pulsed radio emission. The black dashed line corresponds to a pure dipole, dotted lines to a twisted dipole and solid lines to a twisted multipole configuration [5, 6]. The blue dashed lines correspond to lines of constant age and the grey solid lines to lines of constant magnetic field. The code to create this plot was adapted from [2] (https://github.com/nhurleywalker/GPMTransient) with more source categories (i.e. millisecond pulsars and RRATs). All downward-facing arrows correspond to upper limits and are at the 1-$\sigma$ level. The CHIME J0630+25 arrow is longer to represent the shifted upper limit used (see text).

We present the phase-connected timing solution in Table 2. Despite an 802-day timing baseline, due to the large uncertainties on the arrival times of each burst and the sparse sampling, we could not robustly constrain the declination of CHIME J0630+25 through timing alone. We note that fitting a timing jump (JUMP1) to account for the clock offset between the CHIME/FRB and CHIME/Pulsar systems resulted in a value consistent with 0. This is expected as the two systems have an offset of a few hundred milliseconds, less than the arrival time uncertainty of the bursts. Therefore, we used a fixed static jump of 0.247 s, derived from timing other pulsars. The clock differences between the two systems are given a full discussion in the Methods. We also note that an additional jump at MJD 59158 can be added to reduce the residuals by a factor of $\sim$2. However, we find no physical motivation to do so as the global fit is already a small fraction of the pulse period ($<1.5\%$).

Almost all theories for radio pulses from pulsars require the production of electron-positron pairs near the neutron star’s polar caps [36, 5, 6]. Within the context of these theories, the minimum magnetic field strengths and configurations govern the ability to generate the electron-positron pairs given a rotation period [5]. The different field configurations give rise to sets of permitted $P$–$\dot{P}$ values. Possible configurations include dipolar, multipolar, or even twisted magnetic fields. Matters are complicated further because the field configuration is likely different for every pulsar owing to differences in formation and evolution.
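For reference, the derived limits quoted in Table 2 follow from the standard spin-down relations $\tau=P/(2\dot{P})$, $B_{\text{surface}}\approx 3.2\times 10^{19}\sqrt{P\dot{P}}$ G, and $\dot{E}=4\pi^{2}I\dot{P}/P^{3}$ [see, e.g., 62]. A minimal sketch, evaluated at the shifted upper limit on $\dot{P}$ adopted in the text and assuming $I=10^{45}$ g cm$^{2}$:

```python
import numpy as np

P = 421.35542        # spin period (s), Table 2
Pdot = 1.6e-12       # shifted upper limit on the period derivative (s/s)
I = 1e45             # assumed neutron-star moment of inertia (g cm^2)
SEC_PER_YR = 3.156e7

tau = P / (2.0 * Pdot) / SEC_PER_YR       # characteristic age (yr): lower limit
B = 3.2e19 * np.sqrt(P * Pdot)            # surface dipole field (G): upper limit
Edot = 4.0 * np.pi**2 * I * Pdot / P**3   # spin-down luminosity (erg/s): upper limit

print(f"tau  > {tau:.1e} yr")      # ~4.2e6 yr
print(f"B    < {B:.1e} G")         # ~8e14 G
print(f"Edot < {Edot:.1e} erg/s")  # ~8.5e26 erg/s
```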
Conventional pulsar radio emission should cease beyond some “death valley” of $P$–$\dot{P}$ combinations where pair production is no longer efficient [5, 6]. In Figure 3, we show the death valley shaded in grey for the extremes of these magnetic field configurations. To place a conservative $\dot{P}$ upper limit on CHIME J0630+25, we take the shifted upper limit approach as described in [37], given by $\dot{P}_{\text{upper}}=\textrm{MAX}(\dot{P}_{\mu},0)+\sigma$ where $\sigma$ is the uncertainty given in Table 2. This results in $\dot{P}_{\text{upper}}=1.6\times 10^{-12}$ s s-1. Such a shifted upper limit is resilient to statistical fluctuations if one believes that the true value is near 0. As all $\dot{P}$ measurements of other LPTs have either been positive or consistent with zero, we consider this a prudent upper limit on the $\dot{P}$ of CHIME J0630+25. The shifted upper limit constrains CHIME J0630+25’s magnetic field to be $<0.8\times 10^{15}$ G. Assuming that the emission is powered by spin-down, the position of CHIME J0630+25 in Figure 3 shows that it is only allowable by the twisted multipole configuration.

Magnetic white dwarfs may emit coherent radio pulses via mechanisms similar to those of radio pulsars [38]. Because of their larger moment of inertia, white dwarf pulsars have, for the same spin-down rate, much more spin-down energy to draw upon for their emission. For a white dwarf, assuming that the emission mechanism is similar to that of radio pulsars, the shifted upper limit on $\dot{P}$ of CHIME J0630+25 would yield an upper limit on the magnetic dipole field of $\sim 3.4\times 10^{9}$ G by following equation 1 in [39]. The magnetic dipole death line for a white dwarf with a period of $\sim$421 s is $\sim 3.3\times 10^{9}$ G [38]. Other field configurations, such as multipole and twisted magnetic fields, lower this death line. Therefore, assuming the detected emission is similar to that of regular radio pulsars, white dwarf models sit much more comfortably within the death valley limits for CHIME J0630+25.

### 1.2 Quasiperiodicity

Quasiperiodicity is a feature where substructures within a burst are separated by nearly integer multiples of a quasiperiod. It is a common feature in radio transient phenomena on timescales of milliseconds to seconds [40, 41, 16]. It has been suggested that quasiperiodicity may be a universal feature in neutron stars, spanning multiple subclasses such as pulsars, magnetars, rotating radio transients, and LPTs [41]. Some bursts of CHIME J0630+25, such as 59460A, contain large amounts of substructure. For this reason, we performed an autocorrelation analysis to search for quasiperiodicity in all bursts that show more than one peak in the burst time series (see Methods). We found no significant indications of quasiperiodicity in the bursts with multiple peaks.

### 1.3 Swift X-ray observations

Figure 4: The Swift-XRT observations with detected sources and the CHIME J0630+25 localisation region overlaid. The localisation uncertainty region for right ascension is from pulsar timing, and the declination uncertainty region is from multi-beam detections of J0630+25 with CHIME/FRB (see Methods). The sources are numbered to match those in Table 3.

Table 3: The four X-ray sources detected by the Swift XRT within the CHIME J0630+25 localisation region. See Methods for the estimated $N_{\mathrm{H}}$. The maximum Galactic line-of-sight $N_{\mathrm{H}}$ is $\sim 3\times 10^{21}$ cm-2.
Source | RA | Dec | S/N | 0.1-1.5 keV | 1.5-10 keV | $N_{\mathrm{H}}$
---|---|---|---|---|---|---
 | hms (J2000) | dms (J2000) | | counts/s $(10^{-4})$ | counts/s $(10^{-4})$ | cm-2
1 | 06:30:49$\pm$6.8” | 25:23:01$\pm$6.8” | 2.8 | 2.4(1.9) | 5.7(2.7) | $\sim 8\times 10^{21}$
2 | 06:30:23$\pm$6.3” | 25:25:24$\pm$6.3” | 2.8 | 6.4(3.2) | 0(2) | $<2\times 10^{20}$
3 | 06:30:28$\pm$5” | 25:17:01$\pm$5” | 3.2 | 12(4) | 0.3(8) | $<2\times 10^{20}$
4 | 06:30:17$\pm$5.3” | 25:19:50$\pm$5.3” | 2.7 | 6.6(2.4) | 0.0(2) | $<2\times 10^{20}$

Neutron stars often emit thermal and non-thermal X-rays. Thermal emission is produced by a neutron star’s cooling surface or by hotspots, such as the polar caps, on the neutron star; these spots are reheated by magnetospheric return currents of accelerated charges [10]. The particles accelerated by the magnetic fields can also generate non-thermal X-ray emission independently, in the form of synchrotron and curvature radiation [42, 43, 44]. Another form of non-thermal X-ray emission comes from young, energetic pulsars via the interactions between the pulsar wind and the surrounding interstellar medium. These interactions can produce shocks that accelerate pulsar wind particles, which in turn produce X-rays [45]. All these mechanisms contribute to a vast array of neutron star high-energy emission phenomenology.

A particularly interesting subset of X-ray-bright neutron stars are the X-ray Dim Isolated Neutron Stars (XDINSs) [46, 47]. These neutron stars are particularly close, between 100 and 500 pc away. Due to their proximity, XDINSs have provided precious insights into the formation pathways and thermal properties of neutron stars [48]. Driven by a similar proximity motivation as for the XDINSs, on 5 November 2023 we began a 32 ks observation campaign of CHIME J0630+25 with the Neil Gehrels Swift Observatory X-ray Telescope (Swift XRT). This led to full coverage of the 1-$\sigma$ localisation area of CHIME J0630+25 and partial coverage of the 2-$\sigma$ localisation area. In total, we detected four X-ray sources within the 1-$\sigma$ localisation area. The sources are shown in Figure 4 and Table 3. We note that source 3 is coincident with a known variable star, ATO J097.6166+25.2833. The fastest-rotating stars, such as Regulus ($\alpha$ Leonis) and Vega, possess spin periods of $\mathcal{O}(10~\textrm{h})$ [49, 50]. Significantly faster rotation would cause a star to lose its outer layers, as gravity could no longer supply the required centripetal acceleration [51]. Therefore, CHIME J0630+25 is unlikely to be associated with ATO J097.6166+25.2833.

As the source counts are too low to build a spectrum, we measured the counts in two different energy bands, 0.1-1.5 keV and 1.5-10 keV. We find that the majority of photons emitted by sources 2, 3, and 4 are in the lower energy band. To compare the $N_{\mathrm{H}}$ of the Swift candidates with the DM of CHIME J0630+25, we assumed a neutron star blackbody spectrum with a temperature of 0.5 keV. This choice of blackbody spectrum was based on the data reported in the McGill Online Magnetar Catalog (https://www.physics.mcgill.ca/~pulsar/magnetar/main.html). We then estimated the neutral hydrogen along the line of sight using WebPIMMS (https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl; see Methods) and found that sources 2, 3 and 4 possess low $N_{\mathrm{H}}$ values while source 1 is likely extragalactic.
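The soft/hard split described above can be summarised with a simple hardness ratio formed from the Table 3 count rates; the ratio below is our illustrative choice for this sketch, not a statistic used in the analysis:

```python
# Hardness ratio HR = (H - S)/(H + S) from the Table 3 count rates
# (units of 1e-4 counts/s; S = 0.1-1.5 keV band, H = 1.5-10 keV band).
table3 = {1: (2.4, 5.7), 2: (6.4, 0.0), 3: (12.0, 0.3), 4: (6.6, 0.0)}

for src, (soft, hard) in table3.items():
    hr = (hard - soft) / (hard + soft)
    print(f"source {src}: HR = {hr:+.2f}")
# Source 1 comes out hard (HR > 0), consistent with a large N_H and a
# likely extragalactic origin; sources 2-4 are soft (HR close to -1).
```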
As it is likely that source 3 is associated with ATO J097.6166+25.2833, we argue that sources 2 and 4 are likely nearby and could be the X-ray counterparts of CHIME J0630+25. Deeper X-ray campaigns are required to characterise their spectra robustly.

## 2 Discussion

### 2.1 Timing model

From the upper limit on $\dot{P}$, we derived the maximum spin-down luminosity available to CHIME J0630+25 and subsequently compared this with the observed radio luminosity for both the neutron star and white dwarf models. In the neutron star model, we derived the upper limit on the spin-down luminosity to be $8.5\times 10^{26}$ erg s-1. Using an approximate mean flux density of 0.6 mJy (the flux density averaged across the period), the radio luminosity is estimated to be $2.2^{+2.5}_{-1.6}\times 10^{26}$ erg s-1, within the spin-down luminosity range. We note that the implied radio efficiency is higher than that of any known radio pulsar.

Because the emission of CHIME J0630+25 is highly sporadic, the emission could also be powered by the magnetic field of the neutron star, as is the case in a magnetar. One such explanation is the reconnection of magnetic field lines in a neutron star’s magnetosphere, invoked to explain the emission of fast radio bursts [52]. Emission produced by drawing energy from the neutron star’s magnetic field occurs when some mechanism (e.g. magnetohydrodynamic waves) causes the reconnection of magnetic field lines in the magnetospheric plasma. Once reconnection occurs, the magnetic field energy is converted into plasma kinetic and thermal energy, accelerating charged particles and releasing radiation, for example through curvature radiation [53]. Magnetars are known to emit sporadic radio emission, for example PSR J1622–4950 [54], XTE J1810–197 [16], and SGR 1935+2154 [55]. Therefore, if the origin of the radio emission for CHIME J0630+25 is similar, it could explain its highly intermittent pulses. Longer timing baselines and more detections will significantly improve the $\dot{P}$ constraint and allow a robust dipole magnetic field to be determined.

White dwarfs possess a moment of inertia roughly $1\times 10^{5}$–$5\times 10^{5}$ times larger than that of a neutron star (assuming a radius of 3000–7000 km for a white dwarf and 10 km for a neutron star). The corresponding spin-down luminosity is $0.9$–$4.3\times 10^{32}$ erg s-1, significantly higher than the emitted radio luminosity of CHIME J0630+25. Therefore, there is more than sufficient energy in the white dwarf spin-down model.

### 2.2 The nature of CHIME J0630+25

Table 4: Sources that are similar in nature to CHIME J0630+25.
Source | Width | $P$ | $\dot{P}$ | $b$ | Duty | Micro- | Binary | ref
---|---|---|---|---|---|---|---|---
 | FWHM (s) | s | s s-1 | ∘ | Cycle | structure | |
ASKAP J1935+2148 | 10-50 | 3225 | $<1.2\times 10^{-10}$ | 0.74 | 0.3–1.5% | ✓ | N/A | [3]
GPM 1839–10 | 30–300 | 1318 | $<3.6\times 10^{-13}$ | $-2.06^{\circ}$ | 2.2–22.7% | ✓ | N/A | [2]
GLEAM-X1 | 30-60 | 1091 | $<10^{-9}$ | $-2.6^{\circ}$ | 2–5% | ✓ | N/A | [1]
CHIME J0630+25 | 0.2-3.2 | 421 | $<9\times 10^{-13}$ | $+7^{\circ}$ | 0.4–0.8% | ✓ | N/A | this work
J1912–4410 | $<$4 | 318 | – | -22.06 | $<$1.2% | – | ✓ | [23]
AR Scorpii | $\sim 60$ | 117 | $3.9\times 10^{-13}$ | 18.7 | $\sim$50% | – | ✓ | [22]
PSR J0901–4046 | $\sim$0.3 | 76 | $\sim 2.21\times 10^{-13}$ | $+3.7^{\circ}$ | $\sim$0.70% | ✓ | $\times$ | [4]
PSR J0250+5854 | $\sim$0.070 | 23.5 | $2.72\times 10^{-14}$ | $-0.5^{\circ}$ | 0.3–0.4% | – | $\times$ | [17]

1 Abbreviation for GLEAM-X J162759.5–523504.3.

Clearly, the new population of LPTs is unexpected after decades of pulsar, magnetar and white dwarf discoveries. Comparing CHIME J0630+25 to sources that occupy the same parameter space can offer insight into the emission source [56, 38, 57]. We provide a collection of sources with similar emission properties in Table 4 for easy comparison. In the following section, we discuss the possibility of CHIME J0630+25 being the same class of object as actively fusing stars, Galactic Centre Radio Transients, white dwarfs and neutron stars.

Many objects exhibit periodic radio emission on $\mathcal{O}(1~\mathrm{h})$ timescales. These include flaring ultra-cool dwarfs, like TVLM 513-46546 (1.96 h), main-sequence stars like CU Virginis (12.5 h) [58], and sources of unknown origin like the Galactic Centre Radio Transients (GCRTs), GCRT J1745–3009 (1.3 h) [59] and GCRT J1742–3001 (no period) [60]. For ultra-cool dwarfs, it has been suggested that their spin periods cannot be much shorter than $\sim$1 h due to rotational break-up [61]. A similar argument can be made for main-sequence stars at about a $\sim$10 h rotational period. GCRTs are pulsing radio sources towards the Milky Way Centre. Like LPTs, their nature remains mysterious. The pulses of GCRT J1745–3009 are $\sim$11 minutes wide, much longer than those of CHIME J0630+25 or any LPT [59]. Similarly, GCRT J1742–3001’s flares have widths on timescales of months, again much longer than any LPT [60]. Therefore, we conclude that none of the known source types with hour-long periods can explain the properties of CHIME J0630+25.

A white dwarf origin is attractive as white dwarf rotation periods are more closely aligned with those of LPTs, and in particular, CHIME J0630+25 [22, 23]. AR Scorpii and J1912–4410 are the only two known radio-pulsating white dwarfs, and their rotation periods (117 s and 318 s, respectively) are within a factor of a few of that of CHIME J0630+25 [22, 23]. While AR Scorpii shows a high duty cycle at all observed frequencies, J1912–4410 shows duty cycles similar to radio pulsars at 1.4 GHz and a high duty cycle at higher frequencies. Therefore, at least some white dwarfs can produce radiation akin to that of radio pulsars at similar observing frequencies. We show this in Table 4. However, the pulsating white dwarfs emit pulses of $\mathcal{O}(220~\textrm{Jy pc}^{2}/\textrm{beam})$, more than an order of magnitude less luminous than CHIME J0630+25 [22, 23]. Note that this is only a sample of two, and the spread in the population of pulsating white dwarfs remains to be seen.
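As a rough check of this luminosity comparison, one can form a distance-scaled pseudo-luminosity $S_{\rm peak}d^{2}$ from the values in Table 1; a minimal sketch, using the brightest CHIME/Pulsar burst (59463A) and neglecting beaming and bandwidth differences between the observations:

```python
# Pseudo-luminosity S_peak * d^2, neglecting beaming and bandwidth
# differences between the CHIME and white dwarf pulsar observations.
fluence_jy_ms = 670.0   # fluence of burst 59463A (Jy ms), Table 1
width_ms = 910.0        # effective width of 59463A (ms), Table 1
d_pc = 170.0            # YMW16 distance to CHIME J0630+25 (pc)

s_peak = fluence_jy_ms / width_ms   # ~0.74 Jy
l_pseudo = s_peak * d_pc**2         # ~2e4 Jy pc^2

l_wd = 220.0  # O(220 Jy pc^2/beam) for AR Scorpii and J1912-4410
print(f"{l_pseudo:.1e} Jy pc^2, ~{l_pseudo / l_wd:.0f}x the WD pulsars")
```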
Both AR Scorpii and J1912–4410 are in tight binary systems, and their radio emission is thought to arise from an interaction between the spin frequency and the orbital frequency [39, 23]. In the case of CHIME J0630+25, we cannot rule out a binary system. Indeed, if we assume circular orbits and that the timing residuals are caused by orbital motion, then an estimate of the projected semi-major axis is possible. Assuming a maximum timing residual of $\sim 5$ s and that the Roemer delay is the main contributor to the timing residuals, the projected semi-major axis is at most $a\approx c\,\Delta t\approx(3\times 10^{10}~\mathrm{cm~s^{-1}})(5~\mathrm{s})\approx 1.5\times 10^{11}~\mathrm{cm}\approx 0.01$ AU (using equation 8.28 in [62]). This is a factor of 2 larger than the semi-major axes of AR Scorpii and J1912–4410, both at $\sim 0.0055$ AU. Therefore, more timing data and deeper optical surveys are required to investigate the binary white dwarf scenario for CHIME J0630+25.

We performed a cross-check with known white dwarfs compiled from Gaia EDR3 data [63] and found one viable candidate, WDJ063117.11+252250.59, within the CHIME J0630+25 localisation region. WDJ063117.11+252250.59 is located at RA=06:31:17.09$\pm 0.0004"$, Dec=25:22:50.12$\pm 0.0007"$ with a parallax distance of 272(56) pc, within 2$\sigma$ of the localisation region of CHIME J0630+25. However, a source offset by $\sim 0.5^{\circ}$ or more from the pointing of the CHIME/Pulsar beam should show significant beam attenuation effects at the higher observing frequencies of CHIME/Pulsar. There is no evidence of beam effects in any of the bursts detected by CHIME/Pulsar (Figure 1), and therefore we conclude that CHIME J0630+25 is unlikely to be associated with WDJ063117.11+252250.59. Assuming the bursts are evenly distributed in time, CHIME J0630+25 has a burst rate of $\sim$0.06 bursts/h; thus, with $\sim$17 hours of observation one can expect to make a detection at CHIME’s sensitivity. With better localisation, utilising the CHIME/FRB baseband system [64] or other interferometric telescopes, we will be able to examine the white dwarf model more conclusively.

Neutron star models for LPTs often invoke an old magnetar. Magnetars are theorised to be an early stage of neutron star evolution, and their strong magnetic fields are predicted to decay on a timescale of $\sim 10^{4}$ years. From spin-down alone, the maximum period of a magnetar can only reach $\sim 13$ s in this scenario [65]. Therefore, it has been theorised that magnetars can only reach longer periods via another mechanism, such as angular momentum kicks via giant flares [66]. Indeed, CHIME J0630+25 possesses a characteristic age of $>10^{6}$ yr, which disfavours the magnetar model. In support of the theorised ultra-long-period magnetars, there is one candidate magnetar with a period of 6.67 h, 1E161348–5055 [20]. However, no radio pulsations have been seen from it to date. This is unsurprising as only a subset of known magnetars are radio emitters [10]. The upper limit on the surface magnetic field of CHIME J0630+25 is 0.8$\times 10^{15}$ G, which allows for a magnetar-like field.

CHIME J0630+25 exhibits many characteristics similar to known Galactic magnetars and radio pulsars. Firstly, the duty cycle of CHIME J0630+25 is $\sim$0.4–0.8%, similar to known long-period pulsars such as PSR J0250+5854 and the magnetar candidate PSR J0901–4046 at 0.3–0.4% and $\sim$0.7%, respectively [17, 4].
The burst morphology of CHIME J0630+25, characterised by complex temporal and spectral structure (such as that seen in 59460A), has been documented in radio-loud magnetars such as XTE J1810–197 [16] and the Galactic centre magnetar PSR J1745–2900 [67]. To the best of our knowledge, such structures have not been seen in single pulses from radio pulsars or white dwarfs. There are, however, some characteristics of CHIME J0630+25 which favour a regular radio pulsar rather than a magnetar. The spectral index of most bursts from CHIME J0630+25 is steep, with an average of -2.0. Radio magnetars emit spectra which are flat even over a wide range of frequencies, from 1.4–45 GHz [68, 69, 70, 71], whereas most radio pulsars have steep spectra, with a mean spectral index of $\sim-1.8$ [72]. If CHIME J0630+25 is a neutron star, then multiwavelength observations of the source region, including spectroscopy of the Swift-identified X-ray sources discussed above, are essential for discerning its nature.

## 3 Methods

### 3.1 Source Identification

CHIME/FRB is a trigger-based FRB-detection instrument on the CHIME telescope [25] that constantly scans the overhead sky with 1024 FFT-formed beams between 400–800 MHz and a field of view of approximately $2^{\circ}$ in RA and $100^{\circ}$ in Dec [26]. The instrument triggers on any impulsive signal that passes the initial radio frequency interference (RFI) check [26]. Bespoke software then determines whether the incoming signal is terrestrial or astrophysical in nature, and data are saved to disk if a certain S/N threshold (currently 8.5) is met. Due to the substantial data volume, data for sources within the Milky Way Galaxy were not saved by CHIME/FRB until October 2022. However, metadata were saved for each “event” regardless of origin. The metadata contain real-time pipeline-derived information such as the RA, Dec, DM, TOA, and S/N. We used the CHIME Metadata Clustering Analysis (CHIMEMCA) to identify CHIME J0630+25 [see 24 for a full description]. After filtering out RFI, a cluster was identified by CHIMEMCA, with the first event of the cluster detected on MJD 58772 at 12:54:51 UTC, at RA=06:30:19$\pm 30^{\prime}$ and Dec=25:23:14$\pm 30^{\prime}$ with a DM of (22.6$\pm$3.2) pc cm-3.

The CHIME/Pulsar instrument forms ten steerable phased-array tracking beams to track sources as they pass through the CHIME field of view. It produces high-time-resolution spectra, packaged as conventional SigProc-style filterbank data (https://sigproc.sourceforge.net/). CHIME/Pulsar can also correct for intrachannel dispersion smearing through coherent dedispersion and can record data at significantly higher time resolutions than CHIME/FRB. These advantages, along with the phased-array tracking beams, give CHIME/Pulsar an increase in sensitivity compared to CHIME/FRB. Follow-up observations were conducted using the CHIME/Pulsar system from MJD 59300 to MJD 60116. These observations were carried out nearly daily, at about 10 minutes per scan, resulting in 688 observations and 175.946 hours of Stokes I data for this study. To process all the CHIME/Pulsar data, we employed the CHIME/Pulsar Single-pulse PIPEline (CHIPSPIPE; https://github.com/CHIME-Pulsar-Timing/CHIME-Pulsar_automated_filterbank), an automated single-pulse search pipeline designed to handle the large data volume of CHIME/Pulsar. The pipeline is based on PRESTO [73]. It further utilises SPEGID [74] and FETCH [75] to filter out spurious candidates.
Finally, the pulses that are graded as astrophysical by FETCH are examined by a human. For a detailed discussion of CHIPSPIPE, refer to [24]. For all the bursts that passed the human check, we manually removed the RFI-contaminated channels and generated the dynamic spectrum, pulse profile, and dedispersion-time plot for each burst using the Your package [76]. These are shown in Figure 1.

CHIPSPIPE has limited sensitivity to wider pulses. Therefore, we visually inspected the dynamic spectrum and pulse profile around each detected pulse within a 10-second window. This manual inspection aims to identify additional sub-pulses missed by CHIPSPIPE that may provide evidence of quasiperiodicity. 59460A, 59463A, 59553A, 59563A, and 59574A exhibited distinct second peaks not flagged by the initial CHIPSPIPE detection. In total, with CHIME/FRB we detected 6 bursts above S/N 8.5, and with CHIME/Pulsar we detected 11 bursts with Stokes-I intensity data showing extended widths and complex structures, typical of other LPTs. Despite the lack of Stokes-I intensity data, we confirmed the CHIME/FRB bursts to be real astrophysical events, as their arrival times are consistent with the rotation period of CHIME J0630+25. All bursts are detailed in Table 1.

Subsequent localisation with the CHIME/FRB metadata [28] yielded RA=06:31:00$\pm$6', Dec=25:23:24$\pm$11' in J2000 coordinates at the 1$\sigma$ level. Considering that the disparity between the CHIME/Pulsar pointing and the CHIME/FRB localisation is roughly equivalent to one full width at half maximum of the CHIME/Pulsar beam at 400 MHz [27], it is likely that the true location of the source lies between the localisation made by CHIME/FRB and the CHIME/Pulsar pointing. Indeed, the long-term timing of CHIME J0630+25 suggests that the best-fit RA is 06:30:43$\pm$6' (see Section 1.1), which is taken to be the preferred RA localisation throughout this study. Interferometric telescopes with low-noise environments, such as the Karl G. Jansky Very Large Array or MeerKAT, should be able to localise CHIME J0630+25. Furthermore, we are currently saving CHIME/FRB baseband data, which will allow for more precise localisation of future detections.

### 3.2 Quasiperiodicity

Table 5: The results of fitting for the quasiperiodic peaks in the autocorrelation function (ACF). The $X_{i}$ parameters are the location parameters of the Gaussian peaks in the ACF; the $\sigma_{i}$ parameters are the corresponding width parameters. 59460A is the only burst with three peaks in the intensity time series and, therefore, the only burst with two $X_{i}$ parameters.

Burst | $X_{0}$ | $\sigma_{0}$ | $X_{1}$ | $\sigma_{1}$
---|---|---|---|---
 | ms | ms | ms | ms
59460A | $206_{-8}^{+7}$ | $88_{-7}^{+8}$ | $440_{-10}^{+10}$ | $123_{-7}^{+7}$
59463A | $2490_{-10}^{+10}$ | $504_{-10}^{+20}$ | – | –
59553A | $376_{-2}^{+2}$ | $209_{-2}^{+3}$ | – | –
59563A | $760_{-10}^{+10}$ | $270_{-20}^{+20}$ | – | –
59574A | $1460_{-60}^{+60}$ | $410_{-80}^{+100}$ | – | –

(a) 59460A (b) 59463A (c) 59553A (d) 59563A

Figure 5: Autocorrelation of the band-averaged time series of C2 and C3 type bursts. The top panel for each burst is the band-averaged intensity, and the second panel is the autocorrelation, where the dashed red line is the fitted Gaussian-modulated exponential decay. The third panel subtracts the exponential decay from panel two, leaving only the Gaussian component.
The bottom panel is the residual after all components of the fit have been subtracted. The red vertical lines mark the locations of the fitted Gaussians. The residuals still exhibit a small amount of structure, generally an order of magnitude smaller than the subtracted signal; this results from two factors: the variable CHIME/Pulsar baseline and the ACF peaks not being exactly Gaussian (as is the case for 59463A and 59553A).

(a) 59574A

Figure 6: continued

This section aims to identify any quasiperiodicity in the bursts emitted by CHIME J0630+25. We began our analysis by visually examining the bursts. This initial step allowed us to identify distinct burst morphologies, which we categorised into three groups: C1, C2, and C3. C1 represents bursts with a single clearly peaked envelope. C2 bursts are double-peaked, where the second peak builds on top of the first without returning to the baseline. Finally, C3 bursts occur when the pulse rises, reaches a peak, falls back to near the baseline, and exhibits a second peak (and, in the case of 59460A, a third). Among the bursts analysed, we found three (59553A, 59563A, and 59574A) exhibiting C2 behaviour and two (59460A and 59463A) exhibiting C3 behaviour; the rest exhibit C1-type behaviour. The categories are tabulated in Table 1. In the subsequent analysis, only the bursts belonging to the C2 and C3 categories are used.

We examined the separation between the peaks in C2 and C3 bursts by performing autocorrelation on each burst. The autocorrelation results are presented in Figure 5; the second panel shows the autocorrelation function (ACF) for each burst. To extract the separation between peaks, we fitted an exponential decay modulated by a Gaussian function to the ACF using Bayesian Markov Chain Monte Carlo (MCMC) with uniform priors. The fit model is described by

$f(T)=A\exp{(-BT)}+C\exp{\bigg(-\frac{(T-X_{0})^{2}}{2\sigma_{0}^{2}}\bigg)}$ (1)

for all bursts apart from 59460A, or

$f(T)=A\exp{(-BT)}+C\exp{\bigg(-\frac{(T-X_{0})^{2}}{2\sigma_{0}^{2}}\bigg)}+D\exp{\bigg(-\frac{(T-X_{1})^{2}}{2\sigma_{1}^{2}}\bigg)}$ (2)

in the case of 59460A. $A,B,C,D,X_{0},\sigma_{0},X_{1},\sigma_{1}$ are fit parameters and $T$ is the time lag. The $X$ parameters give the separation between sub-bursts, and the $\sigma$ parameters give the widths of the fitted Gaussians. 59460A requires a two-Gaussian fit due to the three peaks in its intensity time series. The fitted exponential baseline is removed for each burst, as shown in the respective third panels of Figure 5, leaving only the Gaussian (peak) component behind. $A,B,C$, and $D$ are nuisance parameters and are marginalised over to provide the fit results in Table 5.

Interestingly, we found that the peak time lag in 59563A is a factor of 2.02(4) greater than that of 59553A, and the peak time lag in 59574A is a factor of 1.9(1) greater than that of 59563A. We therefore tested the significance of this apparent factor of two. We simulated five different sub-burst separations between 180 ms, the narrowest burst detected, and 10,000 ms, the search range for additional sub-bursts. We then assigned a sub-burst arrival time for each sub-burst, i.e., if one of the simulated burst separations was 760 ms, the TOAs assigned would be 0 ms and 760 ms. Therefore, for five sub-burst separations, we get five pairs of TOAs. We then formed every combination of three sub-burst separation pairs; there are ${5\choose 3}=10$ combinations.
For each combination, the TOA pairs were input into rrat_period_multiday, a sub-program of PRESTO. rrat_period_multiday was designed to find any underlying periodicity among many sets of single-pulse arrival times and provides the most likely period and the separation between bursts [77, 78]. When the period calculated by rrat_period_multiday resulted in all separations being within 0.1 of an integer number of periods, we counted that as a success. If there was at least one success within the ${5\choose 3}$ combinations, then that trial was deemed successful. We performed 1,000,000 trials with 982,002 successes. This indicates that a common integer factor between three different burst separations arises by chance with high probability; the apparent factor of two is therefore not significant.

### 3.3 Distance determination and dispersion measure

For the distance determination, the dispersion measure used is the error-weighted average of all the CHIME/Pulsar detections. This is given by

$DM_{av}=\frac{\sum_{i}DM_{i}/\sigma_{i}^{2}}{\sum_{i}1/\sigma_{i}^{2}},$ (3)

where $DM_{i}$ is the dispersion measure of each individual pulse and $\sigma_{i}$ is the corresponding uncertainty. The uncertainty on the weighted average is given by

$\sigma_{DM}=\frac{1}{\sqrt{\sum_{i}1/\sigma_{i}^{2}}}.$ (4)

The resultant DM value is 22(1) pc cm-3. The distance was determined with the YMW16 electron density model [33]. To find the associated uncertainty for CHIME J0630+25, we queried the ATNF pulsar catalogue and applied a series of cuts to the full catalogue. First, we isolated pulsars with Galactic latitudes similar to the $b=7^{\circ}$ of CHIME J0630+25, selecting pulsars with $b>4^{\circ}$ and $b<10^{\circ}$. We then limited the DM of these sources to be less than 60 pc cm-3. Finally, we only considered pulsars with an independent distance measure, such as parallax or globular cluster association. In total, 10 pulsars met these criteria. We then defined the distance uncertainty of CHIME J0630+25 as the standard deviation it would have if it possessed the same fractional uncertainties as the 10 pulsars. Explicitly, this is

$\sigma_{\text{J0630+25}}=\sqrt{\sum_{i}^{N}\frac{\bigg(170~\text{pc}\times\frac{d_{i}}{y_{i}}-\mu\bigg)^{2}}{N}},$ (5)

where $d_{i}$ is the independently measured distance to the pulsar, $y_{i}$ is the YMW16-derived distance, $\mu$ is the mean of all $170~\text{pc}\times\frac{d_{i}}{y_{i}}$, and $N$ is the total number of pulsars which meet the defined criteria. For CHIME J0630+25, we found a distance of 170(80) pc. Note that the 1 pc cm-3 uncertainty on the DM translates to a deviation of only 2 pc according to the YMW16 electron density model; this is negligible compared to the uncertainty due to the electron density model itself and is therefore not included in the final value. Applying this error determination method to the 10 pulsars in our sample, we find that the true distance of all but one source is within 2-$\sigma$ of the YMW16 distance estimate.

### 3.4 Timing

The extraction of times of arrival (TOAs) is vital for period determination and for deriving the timing solution. This is necessary for both the CHIME/FRB and CHIME/Pulsar datasets. Unfortunately, for the CHIME/FRB detections, the only data products saved were the metadata. These contain a timestamp but no total intensity data, and therefore it is difficult to know if the system triggered on one of the microstructure peaks.
The complex and long burst widths of CHIME J0630+25 mean that CHIME/FRB could have triggered on any one of the microstructure peaks within the burst envelope. Therefore, we adopted a conservative TOA error of 1 s for all metadata-only detections. The CHIME/FRB TOAs are referenced to the bottom of the CHIME band at 400 MHz. For the CHIME/Pulsar detections, where the total intensity data are recorded, we followed the prescription outlined in [2]. Specifically, we smoothed each burst’s pulse profile using a 1-s Gaussian kernel; the maximum of the smoothed profile is taken as the TOA, and the full width at half maximum (FWHM) is taken as the uncertainty of the TOA. The CHIME/Pulsar TOAs are referenced to the top of the CHIME band at 800 MHz. In total, we collected 17 TOAs spread over 802 days between MJD 58772 and 59574. All TOAs are then corrected to the solar system barycentre arrival time using Astropy for use with rrat_period.

The next step is to generate the full timing ephemeris. Using the rrat_period module of PRESTO with the barycentre-corrected TOAs of all CHIME/Pulsar bursts, we found that the most likely period for CHIME J0630+25 is $\sim 421$ s. The derived period is only approximate, as rrat_period provides an estimated period without accounting for period derivatives. On two occasions, CHIME/Pulsar observed two bursts during the same transit. These are MJD 59341A, B and MJD 59456A, B, separated by 422.5(2.2) s and 418.2(1.4) s, respectively, driving the convergence to 421 s. Taking this as a starting point, we created a timing ephemeris with the period derivative set to 0 and one phase jump (JUMP1) between the CHIME/Pulsar and CHIME/FRB instruments.

We tested the validity of fitting the phase jump between the two instruments by performing the same TOA extraction procedure described above on multiple pulsars, namely J0012+54, J0209+5759, and J1838+5051, ensuring that the timespan covered was similar to that of CHIME J0630+25. Then, using PINT, a jump was fitted between the CHIME/FRB and CHIME/Pulsar TOAs of the three pulsars. The fitted jumps for the three pulsars are 0.215(2) s, 0.250(6) s, and 0.272(2) s, respectively. For the timing of CHIME J0630+25 we use a fixed 0.247 s clock offset between the CHIME/FRB and CHIME/Pulsar TOAs, the mean of the jumps fitted for the three pulsars. We then used two standard pulsar timing packages, PINT (https://github.com/nanograv/PINT) [30] and TEMPO2 (https://ascl.net/1210.015) [29], to perform a least-squares fit to all the TOAs of CHIME J0630+25. The results of the fit are provided in Table 2, and the residuals in Figure 7. We noticed a slight downward drifting trend before MJD 59168 and therefore tried fitting a second period derivative; however, it did not provide significant improvement.

To test the significance of the 421 s period, we calculated the probability that an alternative period would always be detected as an integer multiple of 421 s. We defined the alternative period as $P2=P_{J0630+25}/i$, where $P_{J0630+25}$ is the period given in Table 2 and $i$ is an integer factor. We also accounted for the timing noise shown in Figure 7 by allowing a detection to be made within 6 s (roughly the worst residual of all the TOAs) of an integer multiple of 421.35542 s. If CHIME J0630+25 had a true period of 421.35542 s$/i$, then the probability that a pulse lands precisely on a multiple of 421.35542 s is $1/i$.
Because we allowed it to land anywhere within 6 s of an integer multiple of 421.35542 s, the probability is $(2\times\mathrm{int}(6/P2)+1)/i$, where int is the integer flooring function. The first burst does not contribute to the probability as it sets the starting point. Therefore, the probability that a source with an alternative period will always be detected within 6 s of an integer multiple of 421.35542 s is

$P_{i}=\bigg(\frac{2\times\textrm{int}(6/P2)+1}{i}\bigg)^{N-1},$ (6)

where $N$ is the number of detections made. The alternative period can vary from $i=2$ to $i=210$, such that the smallest alternative period is the largest FWHM of any burst. We then account for the look-elsewhere effect by calculating the probability of any alternative period, defined by

$P_{\textrm{any alternative}}=1-\prod_{i=2}^{210}(1-P_{i}).$ (7)

Using only the 11 bursts from CHIME/Pulsar, the probability of any alternative period is 0.00099. This is dominated by $P_{2}=0.00097$ and rapidly decays for all other alternative periods. Therefore, we conclude that 421.35542 s is the correct period for CHIME J0630+25.

Figure 7: The timing residuals for CHIME J0630+25. The plot shows the residuals of the phase-connected timing parameters in Table 2.

### 3.5 DM Measurement and Flux Calibration

Using the total intensity data obtained by CHIME/Pulsar, we measured essential burst characteristics such as the effective pulse width, fluence, and DM. For the bursts detected from CHIME J0630+25, the fluence is defined as the flux density integrated over the duration of the burst, and the effective width is calculated via $W_{\rm eff}=F/S_{\rm peak}$, where $F$ is the fluence and $S_{\rm peak}$ is the peak flux density.

CHIME/Pulsar has declination-dependent sensitivity. Therefore, we used 3C133 as a calibrator source to determine the system equivalent flux density (SEFD). The SEFD is calibrated by fitting the telescope temperature and is defined in the following way:

$\text{SEFD}(\nu)=\frac{T_{\rm telescope}(\nu)+T_{\rm sky}(\nu)}{G(\nu)},$ (8)

where $T_{\rm sky}$ is obtained from the Haslam 408-MHz all-sky map [79, 80], $G\approx 1.16$ K/Jy is the telescope gain and $T_{\rm telescope}$ is the system temperature of all telescope components (i.e. receiver, structure, ground, etc.). The flux density of a steady source is given by

$S(\nu)=\frac{T_{\rm on}(\nu)-T_{\rm off}(\nu)}{T_{\rm off}(\nu)}\times\text{SEFD}(\nu),$ (9)

where $S(\nu)$ is the flux density, and $T_{\rm on}$ and $T_{\rm off}$ are the temperatures of the calibrator source and a blank patch of nearby sky, respectively. $T_{\rm telescope}$ is unknown; thus, we performed a maximum-likelihood reduced $\chi^{2}$ fit of the CHIME/Pulsar-measured 3C133 spectrum against the catalogued flux density measurements of 3C133. For the catalogue values, we used the VLA calibrator list (https://science.nrao.edu/facilities/vla/observing/callist). We found that the best functional form of $T_{\rm telescope}$ is a 5th-order polynomial. We then used the single-pulse radiometer equation to convert from S/N units to Jy units for each burst,

$S(\nu)=\frac{\text{S/N}}{\sqrt{n_{p}\Delta\nu\Delta t}}\cdot\text{SEFD}(\nu),$ (10)

where S/N is the signal-to-noise ratio, $n_{p}$ is the number of polarisations, $\Delta\nu$ is the bandwidth, and $\Delta t$ is the time resolution.

We measured the DM of each pulse using DM_phase (https://github.com/danielemichilli/DM_phase), a brute-force algorithm that maximises the coherent power across the bandwidth by trying many trial DMs; a simplified sketch of the brute-force idea is given below.
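The sketch below is a simplified stand-in for that procedure: it incoherently dedisperses the dynamic spectrum at each trial DM and keeps the trial that maximises the peak S/N of the frequency-summed profile (DM_phase itself maximises coherent power across the bandwidth, a more robust statistic for structured bursts). The example call at the end uses CHIME-like assumed parameters.

```python
import numpy as np

KDM = 4.148808e3  # dispersion constant (s MHz^2 pc^-1 cm^3)

def dedisperse(data, freqs_mhz, dt_s, dm):
    """Shift each channel of a (nchan, ntime) dynamic spectrum by the
    cold-plasma delay relative to the highest frequency, then sum."""
    f_ref = freqs_mhz.max()
    delays = KDM * dm * (freqs_mhz**-2.0 - f_ref**-2.0)  # seconds
    shifts = np.round(delays / dt_s).astype(int)
    profile = np.zeros(data.shape[1])
    for chan, s in enumerate(shifts):
        profile += np.roll(data[chan], -s)  # roll wraps; fine for a sketch
    return profile

def best_dm(data, freqs_mhz, dt_s, dm_trials):
    """Return the trial DM maximising the peak S/N of the profile."""
    snrs = []
    for dm in dm_trials:
        prof = dedisperse(data, freqs_mhz, dt_s, dm)
        snrs.append((prof.max() - prof.mean()) / prof.std())
    return dm_trials[int(np.argmax(snrs))]

# e.g. best_dm(spectra, np.linspace(400.0, 800.0, 1024), 327.68e-6,
#              np.arange(0.0, 50.0, 0.1))
```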
Due to the wide burst widths, the DM uncertainty for each pulse is large; this is exemplified in Figure 1 by the large extent of the hot spot in the dedispersion panels.

### 3.6 Spectral index

Using the total intensity data for the CHIME/Pulsar bursts, we measured the spectral index of CHIME J0630+25. We first flux-calibrated the bursts using the method described above, which also serves to calibrate the spectrum of CHIME J0630+25. Then, each burst is integrated over its duration to obtain a spectrum. Maximum likelihood is used to fit each spectrum with a power-law model of the form

$S(\nu)=A\nu^{\alpha},$ (11)

where the spectral index $\alpha$ and the amplitude $A$ are fit parameters. The spectral indices are provided in Table 1, and all spectral index fits are provided in the Appendix. To place an uncertainty on the fit, we measured the spectral index of 23 other calibrator sources with known spectral indices using the same technique. We found that the mean uncertainty on a spectral index calibrated in this way is 0.3.

### 3.7 Energy Budget

The upper limit on the spin-down luminosity is given by [62]

$\dot{E}=4\pi^{2}I\dot{P}/P^{3},$ (12)

where $I$ is the moment of inertia of the neutron star, $P$ is the period, and $\dot{P}$ is the period derivative. The moment of inertia is assumed to be $I=10^{45}$ g cm$^{2}$, the period is 421.35542 s, and the period derivative is given by the conservative shifted upper limit of $1.6\times 10^{-12}$ s s-1. This yields an upper limit on the spin-down luminosity of 8.5$\times 10^{26}$ erg s-1.

We followed the prescription laid out in Equation 3.40 of [62] for the estimated radio luminosity. This is given by

$L=\frac{2\pi d^{2}}{\delta}(1-\cos{\rho})S_{\textrm{mean}}(f_{0})\frac{f_{0}^{-\zeta}}{\zeta+1}(f_{2}^{\zeta+1}-f_{1}^{\zeta+1}),$ (13)

where $L$ is the total radio energy output, $d$ is the distance, $\rho$ is the opening angle, $\delta$ is the duty cycle, $f_{0}$ is 600 MHz for CHIME, $S_{\textrm{mean}}$ is the mean flux density at $f_{0}$, $\zeta=-2$ is the mean spectral index, and $f_{1}\approx 10^{7}$ Hz and $f_{2}\approx 10^{11}$ Hz are reference frequencies. For the duty cycle, we took the largest FWHM of the collection of CHIME J0630+25 pulses, i.e., $\delta=W/P\approx 4~\mathrm{s}/421~\mathrm{s}=9.5\times 10^{-3}$. Assuming a typical opening angle of $\rho\approx 6^{\circ}$ [62], an observing frequency of 600 MHz, a mean flux density of 0.0006 Jy (calculated as the average fluence divided by the period), and a distance of 0.17 kpc, we found

$L\approx 1.2\times 10^{31}~\text{erg s}^{-1}\,\left(\frac{d}{\text{kpc}}\right)^{2}\left(\frac{S_{\textrm{mean}}}{\text{Jy}}\right)=2.2^{+2.5}_{-1.6}\times 10^{26}~\text{erg s}^{-1}$ (14)

for $d=0.17\pm 0.08$ kpc and $S_{\textrm{mean}}=0.0006$ Jy. Therefore, the total radio luminosity output of CHIME J0630+25 is within the spin-down luminosity budget.

### 3.8 Follow-up of CHIME J0630+25 with other telescopes

We use data from both archival observations near CHIME J0630+25 and targeted follow-up campaigns to explore CHIME J0630+25 across many wavelengths. In the radio band, we used the Green Bank Telescope (GBT) because of its increased sensitivity and longer observation tracks compared to CHIME. We also used the upgraded Giant Metrewave Radio Telescope (uGMRT) and the archival VLA Low-band Ionosphere and Transient Experiment (VLITE) to localise CHIME J0630+25; both uGMRT and VLITE can reach arcsecond angular resolution. Pulsars and magnetars are known to emit X-rays; therefore, we also performed targeted observations with the Neil Gehrels Swift Observatory’s X-Ray Telescope (XRT).
Finally, some magnetars are soft $\gamma$-ray repeaters. Thus, we searched the known $\gamma$-ray archives for as-yet-unknown magnetar candidates.

#### 3.8.1 Radio

Table 6: Properties of all telescopes used for this study.

 | CHIME/FRB | CHIME/Pulsar | GBT L-band | GBT 800 MHz | VLITE 338 MHz
---|---|---|---|---|---
Receiver noise temperature | $\sim$50 K | $\sim$50 K | $\sim$18 K | $\sim$20 K | $\sim$180 K
Frequency Range | 400–800 MHz | 400–800 MHz | 1150–1730 MHz | 680–920 MHz | 321.9–360 MHz
Number of beams | 1024 (Static) | 10 (Tracking) | 1 | 1 | 1
Beam width (FWHM) | 40'–20' | 30'–15' | 9' | 15' | 5”
Time resolution | 1 ms | 327.68 $\mu$s | 20.48 $\mu$s | 20.48 $\mu$s | 2 s
Search Frequency Resolution | 24.4 kHz | 390.625 kHz | 781 MHz | 195 MHz | 47.1 MHz
Coherent Dedispersion | No | Yes | Yes | Yes | No

We were awarded 16 hours of observations using the GBT at 800 MHz and 16 hours at 1440 MHz to observe CHIME J0630+25. We used the VEGAS backend at 20.48 $\mu$s time resolution and coherently dedispersed to 22.5 pc cm-3. The observations were taken between MJD 59861 and 59914. The Stokes I data were processed using CHIPSPIPE to search for single pulses. No pulses were detected.

We were also awarded 12 hours of data with the upgraded Giant Metrewave Radio Telescope to localise the source. We performed the observations between MJD 59700 and 59837 from 950 MHz to 1460 MHz, with an integration time of 0.67 s in incoherent array mode. Unfortunately, due to the variable baseline and the high levels of radio frequency interference (RFI), we were not able to make use of the uGMRT data.

Finally, we searched through archival data from VLITE [81, 82] (https://vlite.nrao.edu) for high-resolution observations ($\sim$5”) at 340 MHz covering the region of interest. We identified two observations where CHIME J0630+25 was located within 2$^{\circ}$ of the phase centre and made short-time-interval images at the VLITE sample time of 2 s. The short VLITE images were catalogued using PyBDSF [83], and components associated with all persistent radio sources were eliminated. The remaining catalogued sources had low signal-to-noise ratios (S/N $\sim$ 4), and visual inspection revealed that these remaining candidates were likely associated with poorly cleaned sidelobes in the images. As the CHIME/FRB localisation region is large compared to the VLITE resolution, the chance-coincidence probability for a low-S/N candidate is high. Therefore, it is difficult to associate any low-S/N VLITE candidates with CHIME J0630+25. Unfortunately, VLITE’s highest time resolution is 2 s, so bursts from CHIME J0630+25 are predominantly less than one bin long, resulting in low S/N. This makes it challenging to differentiate potential CHIME J0630+25 bursts in VLITE data from remaining uncleaned sidelobe structures.

#### 3.8.2 X-ray

Our X-ray observations of CHIME J0630+25 consisted of 32 ks of Swift XRT time under target IDs 97140 and 97203. The two targeted observations allowed comprehensive RA coverage over the large uncertainty area of CHIME J0630+25. To process the data, we used the tools provided by the UK Swift Science Data Centre (https://www.swift.ac.uk/user_objects/) to create the images. Then, we used Ximage (https://heasarc.gsfc.nasa.gov/docs/software.html) to detect the sources and provide S/N estimates. The final image (Figure 4) was produced using SAOImage DS9 (https://sites.google.com/cfa.harvard.edu/saoimageds9).
To measure the count rates in the 0.1–1.5 keV and 1.5–10 keV bands, we created the images using the UK Swift Science Data Centre image creation package. Then, using Ximage and the sosta program, we estimated the localised count rate and background in the local region of each source. We then used the WebPIMMS software (https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl) to estimate the neutral hydrogen column along the line of sight for each of the sources. This was done by assuming a blackbody spectrum of 0.5 keV, entering an input energy range of 0.1–1.5 keV and an output energy range of 1.5–10 keV, and adjusting the Galactic $N_{\mathrm{H}}$ value until we found a reasonable match with the observed hardness ratio.

#### 3.8.3 $\gamma$-ray

We also searched for possible as-yet-unknown soft $\gamma$-ray repeater counterparts to CHIME J0630+25. These would reside in the same databases as $\gamma$-ray bursts (GRBs), albeit with an unknown classification. We first cross-matched the coordinates and times of arrival of CHIME J0630+25 with all $\gamma$-ray sources reported in GRBWeb [84]. We limited the GRBWeb triggers to those that are well localised (e.g., 1$\sigma$ spatial error $<$1 degree), as it is challenging to claim significant spatial coincidences for triggers with either unknown or large uncertainty regions. In our cross-match, we conservatively assumed a 1$\sigma$ positional error in RA of 1 degree and a 1$\sigma$ positional error in Dec of 0.5 degrees for CHIME J0630+25. We then cross-matched the localisation region of CHIME J0630+25 with that of all known sources in GRBWeb, requiring the localisations to be consistent within the 3$\sigma$ uncertainties. Within one week of each burst, we did not find any sources coincident with CHIME J0630+25. However, given GRBWeb’s focus on cosmological GRBs rather than Galactic $\gamma$-ray sources such as soft $\gamma$-ray repeaters, we also cross-matched the position of CHIME J0630+25 and its bursts with all triggers reported in the $\gamma$-ray Coordination Network (GCN; www.gcn.gsfc.nasa.gov) circulars. We again limited our search to well-localised triggers ($\sigma<1$ degree) and did not find any trigger-burst pairs with the given criteria. When considering solely spatial coincidence, however, we find one trigger spatially coincident with CHIME J0630+25: GRB 110414A, which was detected long before CHIME was built. However, as noted in [85], there is a high chance probability of finding spatial coincidences given CHIME’s current localisation capabilities. Accordingly, we conclude that no significant coincidences exist between CHIME J0630+25 and any known $\gamma$-ray triggers.

## Acknowledgements

F.A.D. is supported by the UBC Four Year Fellowship. Basic research in radio astronomy at the U.S. Naval Research Laboratory is supported by 6.1 Base funding. Construction and installation of VLITE was supported by the NRL Sustainment Restoration and Maintenance fund. Pulsar and FRB research at UBC is supported by an NSERC Discovery Grant and by the Canadian Institute for Advanced Research. K.S. is supported by the NSF Graduate Research Fellowship Program. A.B.P. is a Banting Fellow, a McGill Space Institute (MSI) Fellow, and a Fonds de Recherche du Quebec – Nature et Technologies (FRQNT) postdoctoral fellow. V.M.K. holds the Lorne Trottier Chair in Astrophysics & Cosmology, a Distinguished James McGill Professorship, and receives support from an NSERC Discovery grant (RGPIN 228738-13), from an R.
Howard Webster Foundation Fellowship from CIFAR, and from the FRQNT CRAQ. The Dunlap Institute is funded through an endowment established by the David Dunlap family and the University of Toronto. B.M.G. acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through grant RGPIN-2022-03163, and of the Canada Research Chairs program. A.M.C. is funded by an NSERC Doctoral Postgraduate Scholarship. S.M.R. is a CIFAR Fellow and is supported by the NSF Physics Frontiers Center award 2020265. K.W.M. holds the Adam J. Burgasser Chair in Astrophysics. E.F., I.S., S.C., and S.M.R. are members of the NANOGrav Physics Frontiers Center, supported by the NSF award 2020265. A.P.C. is a Vanier Canada Graduate Scholar.

## Appendix A

(a) 59341A (b) 59341B (c) 59456A (d) 59456B

Figure 8: The calibrated spectral fit of all bursts with intensity data.

(a) 59460A (b) 59463A (c) 59548A (d) 59553A

Figure 9: continued

(a) 59563A (b) 59565A (c) 59574A

Figure 10: continued

## References

* [1] Hurley-Walker, N. _et al._ A radio transient with unusually slow periodic emission. _Nature_ 601, 526–530 (2022).
* [2] Hurley-Walker, N. _et al._ A long-period radio transient active for three decades. _Nature_ 619, 487–490 (2023).
* [3] Caleb, M. _et al._ An emission-state-switching radio transient with a 54-minute period. _Nature Astronomy_ (2024).
* [4] Caleb, M. _et al._ Discovery of a radio-emitting neutron star with an ultra-long spin period of 76 s. _Nature Astronomy_ 6, 828–836 (2022).
* [5] Chen, K. & Ruderman, M. Pulsar Death Lines and Death Valley. _ApJ_ 402, 264 (1993).
* [6] Zhang, B., Harding, A. K. & Muslimov, A. G. Radio Pulsar Death Line Revisited: Is PSR J2144-3933 Anomalous? _ApJ_ 531, L135–L138 (2000).
* [7] Philippov, A., Timokhin, A. & Spitkovsky, A. Origin of Pulsar Radio Emission. _Phys. Rev. Lett._ 124, 245101 (2020).
* [8] Melrose, D. B. Coherent Radio Emission from Pulsars. _Philosophical Transactions of the Royal Society of London Series A_ 341, 105–115 (1992).
* [9] Mitra, D. Nature of Coherent Radio Emission from Pulsars. _Journal of Astrophysics and Astronomy_ 38, 52 (2017).
* [10] Kaspi, V. M. & Beloborodov, A. M. Magnetars. _ARA&A_ 55, 261–301 (2017).
* [11] Popov, S. B. The Zoo of Isolated Neutron Stars. _Universe_ 9, 273 (2023).
* [12] Dib, R., Kaspi, V. M. & Gavriil, F. P. Rossi X-Ray Timing Explorer Monitoring of the Anomalous X-ray Pulsar 1E 1048.1-5937: Long-term Variability and the 2007 March Event. _ApJ_ 702, 614–630 (2009).
* [13] Archibald, R. F. _et al._ Repeated, Delayed Torque Variations Following X-Ray Flux Enhancements in the Magnetar 1E 1048.1-5937. _ApJ_ 800, 33 (2015).
* [14] Beniamini, P. _et al._ Evidence for an abundant old population of Galactic ultra-long period magnetars and implications for fast radio bursts. _MNRAS_ 520, 1872–1894 (2023).
* [15] Caleb, M. _et al._ Radio and X-ray observations of giant pulses from XTE J1810-197. _MNRAS_ 510, 1996–2010 (2022).
* [16] Maan, Y., Joshi, B. C., Surnis, M. P., Bagchi, M. & Manoharan, P. K. Distinct Properties of the Radio Burst Emission from the Magnetar XTE J1810-197. _ApJ_ 882, L9 (2019).
* [17] Tan, C. M. _et al._ LOFAR Discovery of a 23.5 s Radio Pulsar. _ApJ_ 866, 54 (2018).
* [18] De Luca, A., Caraveo, P. A., Mereghetti, S., Tiengo, A. & Bignami, G. F. A Long-Period, Violently Variable X-ray Source in a Young Supernova Remnant. _Science_ 313, 814–817 (2006).
* [19] Gotthelf, E. V., Petre, R. & Hwang, U.
The Nature of the Radio-quiet Compact X-Ray Source in Supernova Remnant RCW 103. _ApJ_ 487, L175–L179 (1997).
* [20] D'Aì, A. _et al._ Evidence for the magnetar nature of 1E 161348-5055 in RCW 103. _MNRAS_ 463, 2394–2404 (2016).
* [21] Pelisoli, I. _et al._ A survey for radio emission from white dwarfs in the VLA Sky Survey. _MNRAS_ 531, 1805–1822 (2024).
* [22] Marsh, T. R. _et al._ A radio-pulsing white dwarf binary star. _Nature_ 537, 374–377 (2016).
* [23] Pelisoli, I. _et al._ A 5.3-min-period pulsing white dwarf in a binary detected from radio to X-rays. _Nature Astronomy_ 7, 931–942 (2023).
* [24] Dong, F. A. _et al._ The second set of pulsar discoveries by CHIME/FRB/Pulsar: 14 rotating radio transients and 7 pulsars. _MNRAS_ 524, 5132–5147 (2023).
* [25] CHIME Collaboration _et al._ An Overview of CHIME, the Canadian Hydrogen Intensity Mapping Experiment. _ApJS_ 261, 29 (2022).
* [26] CHIME/FRB Collaboration _et al._ The CHIME Fast Radio Burst Project: System Overview. _ApJ_ 863, 48 (2018).
* [27] CHIME/Pulsar Collaboration _et al._ The CHIME Pulsar Project: System Overview. _ApJS_ 255, 5 (2021).
* [28] CHIME/FRB Collaboration _et al._ The First CHIME/FRB Fast Radio Burst Catalog. _ApJS_ 257, 59 (2021).
* [29] Edwards, R. T., Hobbs, G. B. & Manchester, R. N. TEMPO2, a new pulsar timing package - II. The timing model and precision estimates. _MNRAS_ 372, 1549–1574 (2006).
* [30] Luo, J. _et al._ PINT: A Modern Software Package for Pulsar Timing. _ApJ_ 911, 45 (2021).
* [31] Klus, H., Ho, W. C. G., Coe, M. J., Corbet, R. H. D. & Townsend, L. J. Spin period change and the magnetic fields of neutron stars in Be X-ray binaries in the Small Magellanic Cloud. _MNRAS_ 437, 3863–3882 (2014).
* [32] Cordes, J. M. & Lazio, T. J. W. NE2001.I. A New Model for the Galactic Distribution of Free Electrons and its Fluctuations. _arXiv e-prints_ astro-ph/0207156 (2002).
* [33] Yao, J. M., Manchester, R. N. & Wang, N. A New Electron-density Model for Estimation of Pulsar and FRB Distances. _ApJ_ 835, 29 (2017).
* [34] Price, D. C., Flynn, C. & Deller, A. A comparison of Galactic electron density models using PyGEDM. _PASA_ 38, e038 (2021).
* [35] Chatterjee, S. _et al._ Precision Astrometry with the Very Long Baseline Array: Parallaxes and Proper Motions for 14 Pulsars. _ApJ_ 698, 250–265 (2009).
* [36] Ruderman, M. A. & Sutherland, P. G. Theory of pulsars: polar gaps, sparks, and coherent microwave radiation. _ApJ_ 196, 51–72 (1975).
* [37] Highland, V. L. Estimation of upper limits from experimental data (1986).
* [38] Rea, N. _et al._ Long-period Radio Pulsars: Population Study in the Neutron Star and White Dwarf Rotating Dipole Scenarios. _ApJ_ 961, 214 (2024).
* [39] Buckley, D. A. H., Meintjes, P. J., Potter, S. B., Marsh, T. R. & Gänsicke, B. T. Polarimetric evidence of a white dwarf pulsar in the binary system AR Scorpii. _Nature Astronomy_ 1, 0029 (2017).
* [40] Mitra, D., Arjunwadkar, M. & Rankin, J. M. Polarized Quasiperiodic Structures in Pulsar Radio Emission Reflect Temporal Modulations of Non-stationary Plasma Flow. _ApJ_ 806, 236 (2015).
* [41] Kramer, M., Liu, K., Desvignes, G., Karuppusamy, R. & Stappers, B. W. Quasi-periodic sub-pulse structure as a unifying feature for radio-emitting neutron stars. _Nature Astronomy_ 8, 230–240 (2024).
* [42] Kisaka, S. & Tanaka, S. J. Efficiency of Synchrotron Radiation from Rotation-powered Pulsars. _ApJ_ 837, 76 (2017).
* [43] Íñiguez-Pascual, D., Viganò, D. & Torres, D. F. Synchro-curvature emitting regions in high-energy pulsar models.
_MNRAS_ 516, 2475–2485 (2022).
* [44] Romani, R. W. Gamma-Ray Pulsars: Radiation Processes in the Outer Magnetosphere. _ApJ_ 470, 469 (1996).
* [45] Cheng, K. S., Taam, R. E. & Wang, W. Pulsar Wind Nebulae and the X-Ray Emission of Nonaccreting Neutron Stars. _ApJ_ 617, 480–489 (2004).
* [46] Potekhin, A. Y., De Luca, A. & Pons, J. A. Neutron Stars—Thermal Emitters. _Space Sci. Rev._ 191, 171–206 (2015).
* [47] Kaplan, D. L. Nearby, thermally emitting neutron stars. In Yuan, Y.-F., Li, X.-D. & Lai, D. (eds) _Astrophysics of Compact Objects_, Vol. 968 of _American Institute of Physics Conference Series_, 129–136 (AIP, 2008). arXiv:0801.1143.
* [48] Bahcall, J. N. & Wolf, R. A. An Observational Test of Theories of Neutron-Star Cooling. _ApJ_ 142, 1254–1256 (1965).
* [49] McAlister, H. A. _et al._ First Results from the CHARA Array. I. An Interferometric and Spectroscopic Study of the Fast Rotator $\alpha$ Leonis (Regulus). _ApJ_ 628, 439–452 (2005).
* [50] Petit, P., Böhm, T., Folsom, C. P., Lignières, F. & Cang, T. A decade-long magnetic monitoring of Vega. _A&A_ 666, A20 (2022).
* [51] Dufton, P. L. _et al._ The VLT-FLAMES Tarantula Survey: The Fastest Rotating O-type Star and Shortest Period LMC Pulsar—Remnants of a Supernova Disrupted Binary? _ApJ_ 743, L22 (2011).
* [52] Lyubarsky, Y. Fast Radio Bursts from Reconnection in a Magnetar Magnetosphere. _ApJ_ 897, 1 (2020).
* [53] Gil, J., Lyubarsky, Y. & Melikidze, G. I. Curvature Radiation in Pulsar Magnetospheric Plasma. _ApJ_ 600, 872–882 (2004).
* [54] Camilo, F. _et al._ Revival of the Magnetar PSR J1622-4950: Observations with MeerKAT, Parkes, XMM-Newton, Swift, Chandra, and NuSTAR. _ApJ_ 856, 180 (2018).
* [55] Giri, U. _et al._ Comprehensive Bayesian analysis of FRB-like bursts from SGR 1935+2154 observed by CHIME/FRB. _arXiv e-prints_ arXiv:2310.16932 (2023).
* [56] Katz, J. I. GLEAM-X J162759.5$-$523504.3 as a white dwarf pulsar. _Ap&SS_ 367, 108 (2022).
* [57] Tong, H. Discussions on the Nature of GLEAM-X J162759.5-523504.3. _ApJ_ 943, 3 (2023).
* [58] Ravi, V. _et al._ Observations of radio pulses from CU Virginis. _MNRAS_ 408, L99–L103 (2010).
* [59] Hyman, S. D., Lazio, T. J. W., Kassim, N. E. & Bartleson, A. L. Low-Frequency Radio Transients in the Galactic Center. _AJ_ 123, 1497–1501 (2002).
* [60] Hyman, S. D. _et al._ GCRT J1742-3001: A New Radio Transient Toward the Galactic Center. _ApJ_ 696, 280–286 (2009).
* [61] Tannock, M. E. _et al._ Weather on Other Worlds. V. The Three Most Rapidly Rotating Ultra-cool Dwarfs. _AJ_ 161, 224 (2021).
* [62] Lorimer, D. R. & Kramer, M. _Handbook of Pulsar Astronomy_ Vol. 4 (2004).
* [63] Gentile Fusillo, N. P. _et al._ A catalogue of white dwarfs in Gaia EDR3. _MNRAS_ 508, 3877–3896 (2021).
* [64] The CHIME/FRB Collaboration _et al._ Updating the first CHIME/FRB catalog of fast radio bursts with baseband data. _arXiv e-prints_ arXiv:2311.00111 (2023).
* [65] Beniamini, P., Hotokezaka, K., van der Horst, A. & Kouveliotou, C. Formation rates and evolution histories of magnetars. _MNRAS_ 487, 1426–1438 (2019).
* [66] Beniamini, P., Wadiasingh, Z. & Metzger, B. D. Periodicity in recurrent fast radio bursts and the origin of ultralong period magnetars. _MNRAS_ 496, 3390–3401 (2020).
* [67] Pearlman, A. B., Majid, W. A., Prince, T. A., Kocz, J. & Horiuchi, S. Pulse Morphology of the Galactic Center Magnetar PSR J1745-2900. _ApJ_ 866, 160 (2018).
* [68] Camilo, F. _et al._ The Variable Radio-to-X-Ray Spectrum of the Magnetar XTE J1810-197.
_ApJ_ 669, 561–569 (2007).
* [69] Camilo, F., Ransom, S. M., Halpern, J. P. & Reynolds, J. 1E 1547.0-5408: A Radio-emitting Magnetar with a Rotation Period of 2 Seconds. _ApJ_ 666, L93–L96 (2007).
* [70] Levin, L. _et al._ A Radio-loud Magnetar in X-ray Quiescence. _ApJ_ 721, L33–L37 (2010).
* [71] Lazaridis, K. _et al._ Radio spectrum of the AXP J1810-197 and of its profile components. _MNRAS_ 390, 839–846 (2008).
* [72] Maron, O., Kijak, J., Kramer, M. & Wielebinski, R. Pulsar spectra of radio emission. _A&AS_ 147, 195–203 (2000).
* [73] Ransom, S. M. _New search techniques for binary pulsars_. Ph.D. thesis, Harvard University, Massachusetts (2001).
* [74] Pang, D., Goseva-Popstojanova, K., Devine, T. & McLaughlin, M. A novel single-pulse search approach to detection of dispersed radio pulses using clustering and supervised machine learning. _MNRAS_ 480, 3302–3323 (2018).
* [75] Agarwal, D., Aggarwal, K., Burke-Spolaor, S., Lorimer, D. R. & Garver-Daniels, N. FETCH: A deep-learning based classifier for fast transient classification. _MNRAS_ 497, 1661–1674 (2020).
* [76] Aggarwal, K. _et al._ Your: Your Unified Reader. _The Journal of Open Source Software_ 5, 2750 (2020).
* [77] Karako-Argaman, C. _et al._ Discovery and Follow-up of Rotating Radio Transients with the Green Bank and LOFAR Telescopes. _ApJ_ 809, 67 (2015).
* [78] Good, D. C. _et al._ First Discovery of New Pulsars and RRATs with CHIME/FRB. _ApJ_ 922, 43 (2021).
* [79] Remazeilles, M., Dickinson, C., Banday, A. J., Bigot-Sazy, M. A. & Ghosh, T. An improved source-subtracted and destriped 408-MHz all-sky map. _MNRAS_ 451, 4311–4327 (2015).
* [80] Haslam, C. G. T., Salter, C. J., Stoffel, H. & Wilson, W. E. A 408-MHz All-Sky Continuum Survey. II. The Atlas of Contour Maps. _A&AS_ 47, 1 (1982).
* [81] Polisensky, E. _et al._ Exploring the Transient Radio Sky with VLITE: Early Results. _ApJ_ 832, 60 (2016).
* [82] Clarke, T. E. _et al._ Commensal low frequency observing on the NRAO VLA: VLITE status and future plans. In Hall, H. J., Gilmozzi, R. & Marshall, H. K. (eds) _Ground-based and Airborne Telescopes VI_, Vol. 9906 of _Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series_, 99065B (2016).
* [83] Mohan, N. & Rafferty, D. PyBDSF: Python Blob Detection and Source Finder. Astrophysics Source Code Library, record ascl:1502.007 (2015).
* [84] Coppin, P. GRBweb [Online]. https://icecube.wisc.edu/~grbweb_public (2022).
* [85] Curtin, A. P. _et al._ Limits on Fast Radio Burst-like Counterparts to Gamma-Ray Bursts Using CHIME/FRB. _ApJ_ 954, 154 (2023).
# Structural, vibrational and electronic properties of Nb substituted orthovanadates LaV1-xNbxO4

Ashok Kumar, Department of Applied Physics, Delhi Technological University, Delhi-110042, India; Department of Physics, Atma Ram Sanatan Dharma College, University of Delhi, New Delhi-110021, India
Anurag Sharma, Department of Physics, Indian Institute of Technology Delhi, Hauz Khas, New Delhi-110016, India
Madhav Sharma, Department of Physics, Indian Institute of Technology Delhi, Hauz Khas, New Delhi-110016, India
Vinod Singh, Department of Applied Physics, Delhi Technological University, Delhi-110042, India
Anita Dhaka, Department of Physics, Hindu College, University of Delhi, New Delhi-110007, India
Rajendra S. Dhaka<EMAIL_ADDRESS>Department of Physics, Indian Institute of Technology Delhi, Hauz Khas, New Delhi-110016, India

###### Abstract

We investigate the structural, vibrational, morphological, and electronic properties of Nb substituted orthovanadate LaV1-xNbxO4 samples prepared by the solid-state reaction method. The x-ray diffraction (XRD) analysis reveals the presence of three crystal structures [monoclinic monazite ($m-m$) type for the $x=$ 0, two-phase equilibrium of monoclinic monazite ($m-m$) and tetragonal scheelite ($t-s$) type for the 0.2$\leq$$x$$\leq$0.8, and monoclinic fergusonite ($m-f$) type for the $x=$ 1 samples] with an increase in Nb5+ concentration. Raman spectroscopy and x-ray photoelectron spectroscopy (XPS) were employed to study the vibrational and electronic properties of all the samples, respectively. In order to choose an excitation wavelength that does not cause undesirable fluorescence and gives observable intensities of all the vibrational modes, the Raman spectra are collected using 532 nm, 633 nm, and 785 nm laser lines. With increasing Nb5+ concentration, new Raman modes associated with Nb bonds become clearly visible and the intensity of the modes assigned to V bonds decreases. The XPS analysis shows the unchanged 3+ oxidation state of the La ion, where the intensity of the V 2$p$ core-level decreases while that of the Nb 3$d$ core-level increases with $x$. The equal spin-orbit energy splitting of the states is confirmed by the average energy difference (across the La core-level spectra of all the samples) for state I as well as for the bonding and anti-bonding of state II. Interestingly, the relative intensity of La 3$d$ state I and state II shows a systematic change with Nb doping, altering the metal–ligand overlap. We discuss and provide insight into the evolution of the structural, morphological, and chemical features with Nb substitution in LaV1-xNbxO4 samples.

## I Introduction

Among various polycrystalline oxides, rare-earth orthovanadates (RVO4; R = rare-earth element) are interesting because of their potential applications in catalysis, polarizers, luminescent materials, and laser host materials [1, 2, 3]. Also, researchers have reported that complex oxide materials show interesting structural, magnetic, and electronic properties [4, 5, 6, 7], and may be utilized for various applications such as solid oxide fuel cells and as electrode materials for lithium-ion batteries due to their high specific capacity and cycle stability [8]. It is interesting to note that the lanthanum-based orthovanadate LaVO4 reflects the structural trend of the rare-earth family: it crystallizes in the tetragonal–zircon ($t-z$) type polymorph with space group I41/amd and the monoclinic–monazite ($m-m$) type polymorph with space group P21/n.
However, it is thermally stable in the $m-m$ type, whereas the $t-z$ structure remains metastable at room temperature; because La3+ has the largest ionic radius in the lanthanide series, it attains a higher oxygen coordination number (9) in the $m-m$ type structure as compared to 8 in the $t-z$ type [9]. The zircon structure contains a pattern of VO4 tetrahedra (having four identical V-O bonds) [10] and RO8 dodecahedra (coordination number 8), sharing their edges alternately and linked together in chains along the $c-$axis. In the monazite structure, deformed VO4 tetrahedra with four different V-O bonds [11] are connected to RO9 polyhedra (coordination number 9), sharing their edges. The zircon-type LaVO4 sample is difficult to prepare at ambient conditions by the conventional solid-state reaction method, but a few reports show that it can be synthesized and stabilized by hydrothermal and precipitation methods [12, 13, 14]. The structural and electronic properties of lanthanum orthovanadate with pentavalent niobium substitution are vital to understand for their practical use. Though the parent compound LaVO4 with substitution at the La site has been extensively explored [15, 16], there are very few studies to understand the effect of substitution at the V site [18, 17]. Niobium is located just below vanadium in the periodic table and offers several advantages: vanadium prices have recently risen by about 300%, whereas niobium (Nb5+) is biocompatible, isoelectronic with the vanadium ion, and has a larger four-coordinate ionic radius (0.48 Å) than the vanadium ion (0.36 Å) [19]. LaNbO4 is a rare-earth niobate and shows a well-known temperature- and composition/substitution-induced structural transformation. For example, LaNbO4 undergoes a thermally induced structural transition from the monoclinic fergusonite ($m-f$, space group I2/a) to the tetragonal scheelite ($t-s$, space group I41/a) phase at $\sim$495$\degree$C [20]. Similarly, it undergoes a structural transformation upon substituting Nb5+ at the V5+ site [21]. It has been reported that lanthanum niobate shows interesting properties that are very useful for technological applications, such as proton conductivity [22, 23], good dielectric behavior, and high-energy emission under X-ray excitation [24], with potential applications in a variety of fields, including sensors [25], contrast agents, waveguides, ferroelectrics [26], phosphors [27], laser crystals [28], luminophores, LEDs [29], etc. In this paper, we study the structural, vibrational, morphological, and electronic properties of LaV1-xNbxO4 using various experimental tools, namely x-ray powder diffraction (XRD), scanning electron microscopy (SEM), high-resolution transmission electron microscopy (HR-TEM), selected area electron diffraction (SAED), Raman spectroscopy, and x-ray photoelectron spectroscopy (XPS). We determine the phase purity and structural transition by performing Rietveld refinement of the XRD patterns measured at room temperature. The Raman spectra of LaV1-xNbxO4 samples are measured with different excitation wavelengths of 532 nm, 633 nm, and 785 nm, where we find significant intensity of all the Raman active modes as well as interesting changes with Nb substitution. The Raman spectra exhibit a pattern of maximum-intensity peaks that is compatible with Badger's rule. The structural phase transition observed in the XRD analysis of LaV1-xNbxO4 is also supported by the intensity variation of the Raman modes observed in the samples with increasing Nb concentration.
Through the SEM micrographs, we identify that the samples contain fine particles along with pores; changes in particle size and shape can also be seen in the surface images of the samples. The core-level photoemission reveals the oxidation state and electronic structure of the constituent elements in these samples. The XPS analysis shows that the intensity of the core-level spectra of all the samples varies systematically with an increase in Nb5+ concentration. The average energy difference (for the La core-level spectra of all the samples) for state I, state II bonding, and state II anti-bonding verified the equal spin-orbit energy splitting of the states. Moreover, we find a systematic change in the relative intensity of La 3$d$ state I and state II with Nb doping, which suggests an alteration in the metal–ligand overlap.

## II Experimental

We use the solid-state reaction method to prepare LaV1-xNbxO4 ($x=$ 0 to 1) samples by mixing V2O5 (99.6$\%$, Sigma), Nb2O5 (99.99$\%$, Sigma), and La2O3 (99.99$\%$, Sigma) as precursors in the stoichiometric proportions. The La2O3 was pre-dried for 6 hrs at 900$\degree$C to remove moisture. After that, the mixture was ground evenly for 8 hrs and then heated for 17 hrs at 1000$\degree$C. The mixture was then reground and sintered at 1250$\degree$C for 13 hrs to improve the crystallinity of the samples. The phase purity and structural parameters of LaV1-xNbxO4 were determined using a Panalytical XPert3 powder x-ray diffractometer at room temperature with a Cu source of K$\alpha$ radiation ($\lambda$ = 1.5406 Å). We use a step size of 0.033$\degree$ for each XRD scan taken in the 2$\theta$ range from 10$\degree$ to 90$\degree$. The lattice parameters are extracted by Rietveld refinement of the XRD patterns using the FullProf software, where linear interpolation is used to fit the background. We use a Jeol JSM-7800F Prime field emission scanning electron microscope (FE-SEM) with an LN2-free SDD X-max 80 EDS detector in high-vacuum mode to produce the scanning electron microscope (SEM) micrographs of the materials' surfaces. The analysis of particle size and change in morphology of LaV1-xNbxO4 was done using the ImageJ software by analyzing SEM micrographs at the surface of the pellet samples. In order to perform FE-SEM, the non-conducting LaV1-xNbxO4 pellets were made conducting by coating the surface with a thin layer of Au using a sputter coater. We use the JEOL/JEM-F200 microscope, equipped with thermal electron field emission and a OneView CMOS camera (4k $\times$ 4k pixels), to collect HR-TEM data by operating the system at an acceleration voltage of 200 keV. The Raman spectra were recorded at room temperature with the Renishaw inVia confocal Raman microscope using a 2400 lines/mm grating, a 10X objective, and three different wavelengths: (i) 532 nm, a gas laser with a power of 1 mW; (ii) 633 nm, a semiconductor diode laser with a power of 1 mW; and (iii) 785 nm, a semiconductor diode laser with a power of 0.1 mW. The samples can be identified by their particular Raman fingerprint, and their structural and chemical information can be discovered through the examination of several Raman active modes in LaV1-xNbxO4. The x-ray photoemission spectroscopy (XPS) measurements are done using an AXIS Supra instrument (Kratos Analytical Ltd).
The survey and core-level spectra (La 3$d$, Nb 3$d$, V 2$p$, and O 1$s$ for each sample) were recorded at room temperature using a monochromatic Al K$\alpha$ (1486.6 eV) x-ray source with a step size of 1 eV for the survey and 0.1 eV for the core-level spectra; a charge neutralizer was used to offset the charging effect in these insulating materials. The pass energy of the analyzer was 160 eV and 20 eV for the survey and core-level spectra, respectively. For all the wide scans and core-level spectra, the C 1$s$ peak is fitted to obtain the peak binding energy (BE), and the calibration for charge correction was done using the C 1$s$ BE reference at 284.6 eV for each sample. We utilize the Igor Pro 9 software to analyze the data: the observed Raman modes are fitted using a Lorentzian peak function and the XPS spectra using a Voigt function.

## III Results and Discussion

Figure 1: (a–f) The Rietveld refined x-ray diffraction patterns of LaV1-xNbxO4 ($x=$ 0–1) samples. The experimental, simulated, and difference between the experimental and simulated spectra are shown by open red circles, black solid lines, and blue solid lines, respectively. The Bragg positions corresponding to their respective space groups are shown by green vertical markers. Beside each panel (a1–f1), we show a partial amplification between 2$\theta$ = 25–35$\degree$ for clarity for all the samples.

The Rietveld refined room-temperature x-ray diffraction (XRD) patterns of the polycrystalline LaV1-xNbxO4 ($x=$ 0–1) samples are displayed in Fig. 1 and the lattice parameters are summarised in Table 1, where we can see that the angle $\beta$ increases in the $m-m$ type phase of LaV1-xNbxO4 with Nb5+ substitution due to the larger ionic size of Nb5+ as compared to V5+. The crystallization of LaV1-xNbxO4 is clearly observed in three different phases depending on the substitution of Nb5+ at the V5+ site, as also reported by Aldred et al. [21]. We observe that the structure changes from $m-m$ to $m-f$ with an increase in the Nb5+ concentration from 0 to 100%. For the $x=$ 0 and 1 samples, a pure monoclinic phase is obtained with no impurity peaks. Between $x=$ 0.2 and 0.8, monoclinic monazite ($m-m$) and tetragonal scheelite ($t-s$) type phases coexist. Moreover, all the Bragg reflections of LaVO4 and LaNbO4 can easily be indexed to the $m-m$ and $m-f$ phases with the space groups P21/n and I2/a for the $x=$ 0 and 1 samples, respectively. We find that the contribution of space group I41/a increases from the $x=$ 0.2 to 0.8 samples (see Table 1) due to the growth of the $t-s$ phase with the substitution of Nb5+ at the V5+ site in LaV1-xNbxO4. So, it can clearly be seen that the LaV1-xNbxO4 samples crystallize in the monoclinic monazite ($m-m$) type ($x=$ 0), the coexistence of monoclinic monazite ($m-m$) and tetragonal scheelite ($t-s$) type (0.2$\leq$$x$$\leq$0.8), and the monoclinic fergusonite ($m-f$) type ($x=$ 1) [21, 30].

Table 1: The Rietveld refinement parameters of polycrystalline LaV1-xNbxO4 ($x=$ 0–1) samples with the Nb substitution induced metastable tetragonal–scheelite phase for the $x=$ 0.2 to 0.8 samples, determined using the FullProf software.
$x$ | $\chi^{2}$ | Space Group | $a$ (Å) | $b$ (Å) | $c$ (Å) | $\beta$ ($\degree$) | Volume (Å$^{3}$)
---|---|---|---|---|---|---|---
0 | 1.09 | P21/n | 7.042(3) | 7.276(4) | 6.724(7) | 104.88(6) | 333.033(5)
0.2 | 2.63 | P21/n - 84$\%$ | 7.046(1) | 7.278(2) | 6.733(3) | 104.91(1) | 333.685(4)
 | | I41/a - 16$\%$ | 5.336(1) | 5.336(1) | 11.731(2) | 90 | 334.042(4)
0.4 | 2.46 | P21/n - 78$\%$ | 7.043(0) | 7.276(3) | 6.732(4) | 104.91(2) | 333.397(3)
 | | I41/a - 22$\%$ | 5.332(3) | 5.332(3) | 11.735(2) | 90 | 333.509(6)
0.6 | 3.70 | P21/n - 45$\%$ | 6.818(9) | 7.596(5) | 8.030(0) | 105.21(7) | 401.383(6)
 | | I41/a - 55$\%$ | 5.329(8) | 5.329(8) | 11.714(8) | 90 | 332.787(0)
0.8 | 4.31 | P21/n - 4$\%$ | 6.878(5) | 7.459(4) | 7.679(8) | 105.61(1) | 379.517(2)
 | | I41/a - 96$\%$ | 5.375(4) | 5.375(4) | 11.624(0) | 90 | 335.869(7)
1 | 4.91 | I2/a | 5.558(5) | 11.529(1) | 5.201(8) | 93.99(2) | 332.546(3)

Moreover, for the $x=$ 0 sample, the $m-m$ type crystal structure shows high-intensity diffraction peaks corresponding to the (200) and (120) crystal planes at 26.17$\degree$ and 27.78$\degree$, respectively. However, the $t-s$ type structure contains a peak corresponding to the (112) plane at 28.08$\degree$, and the $m-f$ type structure shows high-intensity peaks for the ($\overline{1}$21) and (121) planes at 27.5$\degree$ and 28.9$\degree$, respectively. In the measured XRD patterns for the $x=$ 0.2 to 0.8 samples, the diffraction peaks for the (200), (120), and (112) planes are all present, which clearly indicates the co-existence of both the $m-m$ and $t-s$ type structures. The presence of the (110) plane at 17.65$\degree$ for the $x=$ 0.2 and 0.4 samples is due to the dominance of the $m-m$ type structure in LaV1-xNbxO4. The (200) and (120) peaks are also present in these samples; however, their intensity decreases with higher concentration of Nb substitution and becomes negligible for the $x\geq$ 0.6 samples. As the Nb5+ concentration exceeds the V5+ concentration, the $t-s$ type structure dominates, which results in the reduction/absence of the diffraction peaks corresponding to the (200) and (120) planes. The variation in the peak intensity corresponding to the (200) and (120) crystal planes and the presence of the (112) plane indicate the co-existence of the $t-s$ and $m-m$ type structures for the $x=$ 0.2 to 0.8 samples. This also validates that the $m-m$ type structure (P21/n) decreases and the $t-s$ type structure (I41/a) increases with increasing Nb concentration, i.e., from the $x=$ 0.2 to 0.8 samples. The phase percentages determined by Rietveld refinement of the XRD data are presented in Table 1. For the $x=$ 1 sample, the presence of the ($\overline{1}$21) and (121) peaks further confirms the $m-f$ type structure of LaNbO4 and is consistent with the literature [31]. Note that pure $m-m$ and $m-f$ phases are observed for the $x=$ 0 and 1 samples, respectively. However, for the $x=$ 0.2–0.8 samples, both the monoclinic and scheelite-tetragonal phases coexist in a certain ratio. These results reveal that the LaV1-xNbxO4 samples undergo a three-phase transformation: monoclinic monazite ($m-m$) type (for the $x=$ 0), two-phase equilibrium of monoclinic monazite ($m-m$) and tetragonal scheelite ($t-s$) type (0.2$\leq$$x$$\leq$0.8), and monoclinic fergusonite ($m-f$) type (for the $x=$ 1) with increased substitution of Nb5+ at the V5+ site. It is quite interesting to note that a small amount of Nb5+ substitution can transform LaVO4 from the $m-m$ phase to a mix of $m-m$ and $t-s$ phases.
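As a quick self-consistency check on these quoted peak positions, the Bragg angles of the (200) and (120) reflections can be recomputed from the refined $x=$ 0 cell in Table 1; the following minimal Python sketch (an illustration on our part, not part of the original analysis) reproduces 26.17$\degree$ and 27.78$\degree$ to within rounding.

```python
import numpy as np

def d_monoclinic(h, k, l, a, b, c, beta_deg):
    # d-spacing for a monoclinic cell (unique axis b):
    # 1/d^2 = (h^2/a^2 + l^2/c^2 - 2hl cos(beta)/(ac)) / sin^2(beta) + k^2/b^2
    beta = np.radians(beta_deg)
    inv_d2 = (h**2 / a**2 + l**2 / c**2
              - 2 * h * l * np.cos(beta) / (a * c)) / np.sin(beta)**2 + k**2 / b**2
    return 1.0 / np.sqrt(inv_d2)

lam = 1.5406                                  # Cu K-alpha wavelength (angstrom)
a, b, c, beta = 7.042, 7.276, 6.724, 104.88   # refined m-m cell for x = 0 (Table 1)

for hkl in [(2, 0, 0), (1, 2, 0)]:
    d = d_monoclinic(*hkl, a, b, c, beta)
    two_theta = 2 * np.degrees(np.arcsin(lam / (2 * d)))
    print(hkl, round(two_theta, 2))           # (2,0,0) -> 26.16, (1,2,0) -> 27.79
```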
It has also been observed that LaNbO4 shows a structural transition from the monoclinic to a tetragonal phase at $\sim$495$\degree$C. This structural transformation is very important in governing the protonic conductivity of LaNbO4 [32]. For some compositions of LaV1-xNbxO4, this transition temperature shifts near room temperature. The reported temperature-dependent XRD measurements also suggest that at $x=$ 0.75 (25$\%$ substitution of V5+ at the Nb5+ sites in LaNbO4) [21], it possesses a tetragonal structure at room temperature, as its transition temperature is 250 K. The XRD pattern below 250 K shows some residual intensity (broadened lines) of tetragonal structures because of precursor effects. Similarly, we can see broad peaks in the XRD patterns for the $x=$ 0.8 sample due to the above-mentioned effect [21]. As we increase the Nb concentration, we find some new peaks appearing in the $x=$ 0.2 sample at 33.56$\degree$, 52.68$\degree$, 56.69$\degree$, and 58.06$\degree$. All these peaks are signatures of the $t-s$ structure, belonging to the (020), (116), (312), and (224) planes, respectively [33]. These peaks persist up to the $x=$ 0.8 sample, which confirms the presence of some $t-s$ phase and also indicates the substitution-induced phase transformation. This is an important finding: LaNbO4 can possess a tetragonal structure at room temperature with just 20$\%$ replacement of the Nb5+ sites by V5+. This result opens the possibility for a wide range of applications of LaNbO4 at room temperature. All the patterns discussed above suggest that substitution of the larger Nb5+ ($r=$ 0.48 Å) ion for V5+ ($r=$ 0.36 Å) affects the lattice constants of LaV1-xNbxO4 and confirms the transformation among three different phases with increasing Nb5+ concentration.

Figure 2: The scanning electron microscope images of the LaV1-xNbxO4 ($x=$ 0–1) samples.

The scanning electron microscope images of LaV1-xNbxO4 for the $x=$ 0–1 samples are shown in Fig. 2, which depict the close-packed surface morphology of all the samples; some variation in the particle size is clearly visible. The pores are clearly visible from the top view of the surface. We can see that with the increase in Nb5+ concentration, the particle size slightly decreases from the $x=$ 0 to the $x=$ 0.4 sample, then increases and becomes maximum at $x=$ 0.8, and again decreases for the $x=$ 1 sample.

Figure 3: The HR-TEM images of LaV1-xNbxO4 for the (a) $x=$ 0.2 and (b) $x=$ 0.8 samples. The magnified view of HR-TEM images in (c, d) for the $x=$ 0.2 sample, and (e, f) for the $x=$ 0.8 sample. (g, h) The SAED patterns for the $x=$ 0.2 and 0.8 samples, respectively.

The average particle size (D) of LaV1-xNbxO4 is 5.14 $\mu$m for the $x=$ 0, 4.22 $\mu$m for the $x=$ 0.2, 3.56 $\mu$m for the $x=$ 0.4, 8.73 $\mu$m for the $x=$ 0.6, 11.31 $\mu$m for the $x=$ 0.8, and 5.70 $\mu$m for the $x=$ 1 samples. It is found that the change in crystal surface morphology of the LaV1-xNbxO4 samples with increasing Nb5+ concentration causes the variation in particle size and shape. Further, in Figs. 3(a, b) we display the HR-TEM images indicating distinct sets of planes with characteristic spacing for the $x=$ 0.2 and 0.8 samples. The images in Figs. 3(c, d) and (e, f) for the $x=$ 0.2 and $x=$ 0.8 samples, respectively, show these plane sets in a magnified view.
The spacing between the planes is determined using the ImageJ software, and we find $d-$spacings of 0.43 and 0.32 nm for the ($\overline{1}$11) and (120) planes in the $P2_{1}/n$ phase for the $x=$ 0.2 sample, and 0.28 and 0.31 nm for the (004) and (112) planes in the $I4_{1}/a$ phase for the $x=$ 0.8 sample. However, these planes only correspond to the dominating phase of the mixed-phase samples. The selected area electron diffraction (SAED) patterns in Figs. 3(g, h) indicate contributions from both phases. The indexed ($h,k,l$) planes that relate to $P2_{1}/n$ are coloured white, while yellow is designated to the $I4_{1}/a$ space group, as marked in Figs. 3(g, h). We find that the analysis of the HR-TEM and SAED results is consistent with the XRD refinement data for these samples, as presented in Figs. 1(b, e).

Figure 4: The room temperature Raman spectra of LaV1-xNbxO4 ($x=$ 0 to 1) samples using (a) 532 nm, (b) 633 nm, and (c) 785 nm excitation wavelengths. The dotted blue lines represent the Lorentzian line shapes used to deconvolute the individual modes.

Table 2: The experimentally observed frequencies $\omega_{obs}$ of the individual Raman modes in LaV1-xNbxO4 ($x=$ 0–1) samples measured at room temperature. Each entry gives the mode symmetry and $\omega_{obs}$ (cm$^{-1}$).

Peak | $x=$ 0 | $x=$ 0.2 | $x=$ 0.4 | $x=$ 0.6 | $x=$ 0.8 | $x=$ 1
---|---|---|---|---|---|---
S0 | Bg(127.24) | | | | | Bg(121.86)
S1 | Ag(141.99) | Ag(140.66) | Ag(140.66) | Ag(140.66) | |
S2 | Bg(154.05) | Bg(155.39) | Bg(152.71) | | |
S3 | Ag(187.44) | Ag(187.44) | Ag(186.11) | | |
S4 | Bg(206.07) | Bg(206.07) | Bg(206.07) | Bg(207.40) | Bg(211.39) | Ag(219.35)
S5 | Ag(235.26) | Ag(233.94) | Ag(233.94) | | |
S6 | Ag(245.84) | Ag(244.52) | Ag(244.52) | | |
S7 | Bg(305.11) | Bg(305.11) | | | |
S8 | Ag(326.07) | Ag(326.07) | Ag(326.07) | Ag(324.76) | Ag(328.69) |
S9 | Ag(344.36) | Ag(344.36) | Ag(344.36) | | |
S10 | Ag(370.41) | Ag(370.41) | Ag(369.11) | Ag(369.11) | |
S11 | Bg(393.78) | Bg(393.78) | Bg(391.19) | Ag(391.19) | Ag(388.60) | Bg(396.38)
S12 | Ag(420.96) | Ag(418.37) | Ag(418.37) | | |
S13 | Bg(436.44) | Bg(435.15) | Bg(433.86) | Bg(436.44) | |
S14 | Ag(765.31) | Ag(765.31) | Ag(766.54) | Ag(766.54) | |
S15 | Bg(788.68) | Bg(788.68) | Bg(788.68) | Bg(788.68) | |
S16 | Ag(816.86) | Ag(815.63) | Ag(814.41) | Ag(811.96) | Ag(807.07) | Ag(803.39)
S17 | Ag(840.06) | Ag(840.06) | Ag(840.06) | Ag(840.06) | |
S18 | Bg(855.88) | Bg(855.88) | Bg(854.67) | Bg(854.67) | |
S19 | Bg(874.10) | Bg(874.10) | Bg(874.10) | | |
S20 | | | | Ag(170.10) | Ag(168.76) | Ag(174.10)
S21 | | | | | |
S22 | | | | | Ag(108.41) | Ag(105.72)
S23 | | | | | | Bg(164.75)
S24 | | | | | | Bg(198.09)
S25 | | | | | | Bg(282.77)
S26 | | | | | Bg(316.91) | Ag(322.14)
S27 | | | | | | Ag(331.30)
S28 | | | | | Ag(345.67) | Bg(343.06)
S29 | | | | | | Bg(405.44)
S30 | | | | | | Ag(422.25)
S31 | | | | | | Bg(623.49)
S32 | | | | | | Ag(648.58)
S33 | | | | Bg(669.83) | Bg(661.08) | Bg(661.08)

Table 3: Summary of all the 34 Raman active modes and their assignments with the help of the literature (cited in the last column of the table) for the LaVO4 and LaNbO4 samples.

Peak | $\omega_{th}$ (LaVO4) | Assignment (LaVO4) | $\omega_{th}$ (LaNbO4) | Assignment (LaNbO4) | Refs.
---|---|---|---|---|---
S0 | Bg(127) | Translation mode of La atoms in monoclinic phase | Bg(125.1) | Coupled translation-rotational mode of La atoms in monoclinic phase and NbO${}_{4}^{3-}$ around an axis perpendicular to b-axis | [43, 19, 45]
S1 | Ag(143) | Translation mode of La–O bonds | | | [47, 45]
S2 | Bg(158) | Translation mode of La–O bonds | | | [47]
S3 | Ag(188) | Translation mode of La–O bonds | | | [47, 45]
S4 | Bg(204) | Translation mode of La–O bonds | Ag(222.3) | Translational mode of La–O bonds along b-axis | [43, 48]
S5 | Ag(230) | Translation mode of La–O bonds | | | [47]
S6 | Ag(252) | Translation mode of La–O bonds | | | [47, 45]
S7 | Bg(316) | Bending vibration of O–V–O bonds | | | [49, 45]
S8 | Ag(336) | Bending vibration of O–V–O bonds | | | [49, 45]
S9 | Ag(355) | Bending vibration of O–V–O bonds | | | [49, 45]
S10 | Ag(380) | Bending vibration of O–V–O bonds | | | [49, 45]
S11 | Bg(389) | Bending vibration of O–V–O bonds | Bg(398.1) | Triply degenerate deformation mode (rocking mode of NbO${}_{4}^{3-}$) | [43, 49, 45]
S12 | Ag(423) | Bending vibration of O–V–O bonds | | | [49, 45]
S13 | Bg(427) | Bending vibration of O–V–O bonds | | | [49, 45]
S14 | Ag(784) | Stretching vibration of V–O bonds | | | [50]
S15 | Bg(799) | Stretching vibration of V–O bonds | | | [50]
S16 | Ag(806) | Stretching vibration of V–O bonds | Ag(805.2) | Non-degenerate stretching mode of Nb–O bonds | [50, 43, 51, 52]
S17 | Ag(836) | Stretching vibration of V–O bonds | | | [50]
S18 | Ag(861) | Non-degenerate stretching mode of shortest V–O bonds | | | [50, 45]
S19 | Bg(892) | Stretching vibration of O–V–O bonds | | | [51, 52, 45]
S20 | | | Ag(177.1) | Translational mode along b-axis | [43]
S21 | | | Bg(114) | Rotational mode of NbO${}_{4}^{3-}$ along an axis perpendicular to b-axis | [43]
S22 | | | Ag(108.6) | Rotational mode of NbO${}_{4}^{3-}$ along b-axis | [43, 48]
S23 | | | Bg(170) | Translational mode parallel to ac-plane | [43]
S24 | | | Bg(200.2) | Translational mode parallel to ac-plane | [43]
S25 | | | Bg(284.9) | Translational mode parallel to ac-plane | [43]
S26 | | | Ag(321.7) | Doubly degenerate scissors mode of NbO${}_{4}^{3-}$ | [43, 48]
S27 | | | Ag(326.9) | Doubly degenerate scissors mode of NbO${}_{4}^{3-}$ | [43, 48]
S28 | | | Bg(344) | Translational mode parallel to ac-plane | [43]
S29 | | | Bg(404.9) | Triply degenerate deformation mode (rocking mode of NbO${}_{4}^{3-}$) | [43]
S30 | | | Ag(425.8) | Triply degenerate deformation mode (twist mode of NbO${}_{4}^{3-}$) | [43]
S31 | | | Bg(625.5) | One of the triply degenerate stretching modes of Nb–O bonds | [43, 51, 48]
S32 | | | Ag(649) | One of the triply degenerate stretching modes of Nb–O bonds | [43, 51, 48]
S33 | | | Bg(664.9) | One of the triply degenerate stretching modes of Nb–O bonds | [43, 51, 48]

The Raman spectra of LaV1-xNbxO4 measured at three different excitation wavelengths, 532 nm, 633 nm, and 785 nm, are presented in Fig. 4 for all the samples ($x=$ 0–1). Three different excitation wavelengths are used to distinguish the fluorescence effect on the Raman signal and to avoid background effects from the sample. We use a Lorentzian line-shape function to deconvolute and fit the observed individual Raman peaks, as marked in Table 2. We find that all the specific Raman peak positions (Raman shifts) are independent of the excitation wavelength for a given sample, which confirms their inherent characteristic of that particular sample, as shown in Fig. 4.
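For reference, the single-mode Lorentzian deconvolution described above (performed in Igor Pro in this work) can be sketched in Python as follows; the function and parameter names are our own illustrative choices, not the actual analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma, amp, y0):
    # Lorentzian line: centre x0, half-width gamma, peak amplitude amp, offset y0
    return y0 + amp * gamma**2 / ((x - x0)**2 + gamma**2)

def fit_raman_mode(shift, counts, centre_guess, hwhm_guess=5.0):
    # shift: Raman-shift axis (cm^-1); counts: measured intensity near one mode
    p0 = [centre_guess, hwhm_guess, counts.max() - counts.min(), counts.min()]
    popt, pcov = curve_fit(lorentzian, shift, counts, p0=p0)
    return popt, np.sqrt(np.diag(pcov))  # best-fit parameters and 1-sigma errors
```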
The intensity of the modes may vary due to several reasons, such as the polarizability of the molecule, the excitation wavelength of the laser source, and the concentration of the active group [34]. Though there are minor changes in the intensity of the Raman modes measured with different excitation wavelengths, we can see that the Raman active peaks change systematically for all the measured samples in Fig. 4. In the measured spectra we see 20 peaks corresponding to LaVO4 and 17 peaks for LaNbO4. According to group theory calculations, LaVO4 contains 72 vibrational modes and, out of them, 36 are Raman active (18Ag + 18Bg) [36, 37, 35] (here, A and B denote symmetric and antisymmetric vibrations about the principal axis of symmetry, and the subscript $g$ indicates that the vibrations are symmetric relative to a symmetry center). All the 20 Raman peaks for the $x=$ 0 sample are labeled S0 to S19, as shown in Table 2. The theoretical approach predicts 8Ag + 10Bg modes for the $m-f$ structure and 13 Raman-active modes for the $t-s$ structure (as observed in the $x=$ 0.6 sample), which are summarized in Table 2. The absence of some of the peaks could be due to the overlap of several Ag and Bg modes and their low Raman scattering cross-section. All the assignments related to each Raman peak in LaV1-xNbxO4 are summarised in Table 3. We can see in Table 2 that the S0 mode (127.24 cm-1) is present only in the LaVO4 and LaNbO4 samples and absent for the rest of the intermediate samples. The S0 mode originates from the translational motion of La atoms in the monoclinic phase. All the concentrations from $x=$ 0.2 to 0.8 in LaV1-xNbxO4 result in the formation of the $t-s$ type structure or the $m-m$ and $t-s$ in equilibrium type structure; so, the formation of the mixed phase may result in the disappearance of the S0 mode. The S18 is the most intense mode for the LaVO4 sample, and it decreases with Nb substitution, whereas the intensity of the S16 mode increases with Nb substitution and becomes the most intense mode for the LaNbO4 sample, as can be seen in Fig. 4(a). For the $x=$ 0.8 sample, the S18 mode completely disappears, which indicates the crystal-phase transformation from a mixed phase of $m-m$ and $t-s$ in equilibrium to an approximately pure (96%) $t-s$ phase [38]. This behaviour of S0, S16, and S18 corroborates the structural phase transformation with Nb5+ substitution, as observed in the XRD analysis. Furthermore, the presence of the S8, S9, S10, S13, S14, S15, S17, and S18 modes in the $x=$ 0 sample confirms the existence of VO${}_{4}^{3-}$ ions, since none of these modes are visible in LaNbO4 [39, 40, 41, 42]. All the Raman peaks arise due to different vibrational modes, i.e., bonds between the different constituent elements La3+, V5+, Nb5+, and O2-. The comparison of the experimentally observed peak positions of the distinct Raman modes, fitted using a Lorentzian function, with the reported data [45, 44, 43, 9, 35] shows a high degree of similarity, as presented in Table 2. In the $m-m$ structured LaVO4 crystal, nine O2- atoms are linked to La3+, whereas four O2- atoms and V5+ are joined in a tetrahedral shape. There are four different O2- locations: at the first site, it is bound in a 3-coordinate geometry to two equivalent La3+ and one equivalent V5+ atom. At the second site, it is bound to two comparable La3+ and one equivalent V5+ atom in a deformed single-bond geometry.
Three comparable La3+ and one equivalent V5+ atom are linked to O2- in a 3-coordinate geometry at the third O2- site, and it is bound in a deformed single-bond geometry to three equivalent La3+ and one equivalent V5+ atom at the fourth O2- site [46]. In the $m-f$ structured LaNbO4 crystal, the La3+ is joined to eight O2- atoms in an 8-coordinate geometry, and six O2- atoms are bound to Nb5+ to create the deformed, edge-sharing NbO6 octahedra. There are two different sites for O2-: it is linked in a 4-coordinate geometry to two equivalent La3+ and two equivalent Nb5+ atoms at the first O2- site, and it is bound in a 3-coordinate geometry to two equivalent La3+ and one Nb5+ atom at the second O2- site. In the analysis of the vibrational modes, it has been assumed that the LaNbO4 crystal is made up of La3+ cations and NbO${}_{4}^{3-}$ molecular anions [43, 46]. It is revealed experimentally that, upon addition, Nb5+ replaces V5+ at its site and distorts the LaVO4 unit cell [30]. The modes of vibration for LaVO4 are categorised as follows: (I) the high-wavenumber zone (765–874 cm-1), resulting from the stretching vibration of the O-V-O bonds; (II) the intermediate region (305–436 cm-1), resulting from the bending vibration of the O-V-O bonds; and (III) the low-wavenumber zone ($<$ 285 cm-1), resulting from the translational modes of the La atoms, as the La atoms have a high mass [9, 17]; the results are presented in Table 3. Similarly, the vibrational modes of LaNbO4 are categorized as follows: (I) the high-wavenumber zone (623–803 cm-1) for stretching modes of the Nb-O bonds, (II) the intermediate zone (322–422 cm-1) for deformation/scissor modes of NbO${}_{4}^{3-}$, and (III) the low-wavenumber zone (121–282 cm-1) for rotational modes of NbO${}_{4}^{3-}$ and translational lattice modes that include the relative translations of anions and cations [43].

Figure 5: The room temperature XPS survey spectra of LaV1-xNbxO4 ($x=$ 0 to 1) samples.

Figure 6: (a) The La 3$d$ core-level spectra of LaV1-xNbxO4, $x=$ 0–1 samples. (b) The intensity ratio I2/I0, and (c) energy separation I2 \- I0 as a function of doping level $x$. The fitted spin-orbit split components are also shown for each sample.

Figure 7: The Nb 3$d$ core-level spectra of LaV1-xNbxO4, $x=$ 0–1 samples. The fitted spin-orbit split components are also shown for each sample.

LaNbO4 contains a total of three different types of modes: rotational modes of NbO${}_{4}^{3-}$, vibrational modes of NbO${}_{4}^{3-}$, and translational modes of the La–O and O–La–O bonds. The S0 and S22 peaks are visible, corresponding to the coupled translation-rotational (Bg) and rotational (Ag) modes, respectively, while the third rotational Bg mode (S21) is absent in the observed experimental Raman spectra. The vibrational modes can be categorized into (I) doubly degenerate scissor modes, (II) a triply degenerate deformation mode, which further splits into a pair of degenerate rocking modes and one twist mode, and (III) stretching modes, one non-degenerate and one triply degenerate, in increasing order of wavenumber [43]. The remaining modes are all translational modes. From Table 3, we can easily identify that the LaNbO4 Raman modes match well with the reported ones. Two NbO${}_{4}^{3-}$ scissor modes with almost degenerate wavenumbers are expected to be seen in the Ag spectrum. Out of all, the most obvious choices are S26 and S27, because the wavenumbers of the remaining Ag bands are too low to allocate to them.
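For bookkeeping, the wavenumber zones quoted above for assigning the LaVO4 and LaNbO4 modes can be encoded compactly; the following snippet is only a restatement of the stated ranges, not an additional analysis.

```python
# Wavenumber zones quoted in the text for assigning Raman modes (cm^-1)
ZONES = {
    "LaVO4":  [(765, 874, "V-O stretching"),
               (305, 436, "O-V-O bending"),
               (0,   285, "La translational")],
    "LaNbO4": [(623, 803, "Nb-O stretching"),
               (322, 422, "NbO4 deformation/scissor"),
               (121, 282, "NbO4 rotational / lattice translational")],
}

def classify_mode(shift_cm1, compound):
    for lo, hi, label in ZONES[compound]:
        if lo <= shift_cm1 <= hi:
            return label
    return "unassigned"

# e.g. classify_mode(855.88, "LaVO4") -> "V-O stretching" (the S18 mode)
```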
In LaNbO4, as already discussed, the deformation modes are believed to be divided into two almost degenerate rocking modes (S11 and S29) with Bg symmetry and a twist mode (S30) with Ag symmetry. These modes are also present in the region of intermediate wavenumbers. The stretching modes are high-energy vibrations and are recognised here as the S16, S31, S32, and S33 peaks. As the non-degenerate symmetric mode is expected to provide the strongest band, band S16 is allocated to it. The remaining S31, S32, and S33 peaks are assigned to the three other, degenerate stretching modes. The invariance of the S4, S11, and S16 peak positions throughout, from the $x=$ 0 to 1 samples, indicates no effect on the translational mode along the $b-$axis or on the Bg rocking and stretching frequencies of VO${}_{4}^{3-}$ and NbO${}_{4}^{3-}$. The S8 peak disappears only in the LaNbO4 spectrum because of the absence of O-V-O bending vibrations [43, 17]. Interestingly, the S2, S3, S5, S6, S9, S12, and S19 peaks vanish just before the Nb concentration exceeds that of V (at $x=$ 0.4), and the S1, S10, S13, S14, S15, S17, and S18 peaks vanish just after the Nb concentration becomes larger than that of V. It is quite possible that the low concentration of V at higher Nb doping results in the weakening and then disappearance of some of the spectral peaks. For the same reason, some new peaks (S20 and S33) appear in the $x=$ 0.6–1 samples. Furthermore, the S20 peak arises due to the translational mode along the $b-$axis, and the S33 peak appears due to one of the three triply degenerate stretching modes of NbO${}_{4}^{3-}$ in the sample. The most intense peaks in LaNbO4 (S16) and LaVO4 (S18) at higher wavenumber are due to the stretching of the Nb-Ot and V-Ot bonds, where Ot represents the oxygen atoms in the terminal position [53]. The terminal position of oxygen is that where it connects the LaO8 dodecahedra and NbO6 octahedra in the case of LaNbO4, and the LaO9 muffin [54] and VO4 tetrahedra in the case of LaVO4 [53]. Since the VO4 tetrahedra appear to be intrinsic to the peak broadening, this broadening of the Raman peaks spreads across samples with intermediate Nb and V compositions. However, in certain samples, Nb5+ and V5+ cation-related variables may also play an important role in increasing the peak broadening. The broad peaks are made up of multiple modes, which are normally difficult to distinguish from one another [53]. The strongest peak of LaVO4 (S18) is in the high-wavenumber region and lies approximately 52.5 cm-1 higher in the spectrum than the strongest peak of LaNbO4 (S16). This difference in wavenumber ($\Delta$) is related to the average bond length ($d$) of the atoms by $\Delta\propto 1/d^{3/2}$, as stated by Badger's rule [55]. The V–Ot bond length in LaVO4 and the Nb–Ot bond length in LaNbO4 are $\sim$1.72 Å [9] and $\sim$1.90 Å [53], respectively. The changes observed in the Raman spectra of the samples are quite consistent with Badger's rule.

Figure 8: The O 1$s$ core-level spectra of LaV1-xNbxO4, $x=$ 0–1 samples. Each spectrum is shifted vertically for clarity.

Figure 9: The V 2$p$ core-level spectra of LaV1-xNbxO4, $x=$ 0–1 samples. The fitted spin-orbit split components are also shown for each sample.

Finally, we use x-ray photoemission spectroscopy (XPS) to investigate the electronic structure by measuring the survey scan and the elemental core-level spectra of all the prepared samples.
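A rough back-of-the-envelope check of the Badger-rule trend discussed above, using the quoted bond lengths and the Table 2 wavenumbers, shows that the $1/d^{3/2}$ scaling captures the direction of the shift (longer bond, softer mode), though only qualitatively; the numbers here are our own illustration.

```python
d_V, d_Nb = 1.72, 1.90            # V-O_t and Nb-O_t bond lengths (angstrom)
w_S18, w_S16 = 855.88, 803.39     # strongest stretching modes (cm^-1, Table 2)

ratio_pred = (d_Nb / d_V) ** 1.5  # Badger-type scaling w ~ d**(-3/2) -> ~1.16
ratio_obs = w_S18 / w_S16         # observed ratio -> ~1.07

# The longer Nb-O_t bond correctly predicts the lower-wavenumber mode;
# the scaling reproduces the trend rather than the exact 52.5 cm^-1 shift.
print(round(ratio_pred, 3), round(ratio_obs, 3))
```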
The identified peaks in the survey spectra are labeled according to their binding energies and are in agreement with the reported values, as shown in Fig. 5. The characteristic La peaks are the 3$d$ cluster (830–870 eV), 4$d$ (4$d_{5/2}$ at 101 eV and 4$d_{3/2}$ at 104 eV), and 4$p$ (centered around 195 eV) [56]. These La peaks are clearly visible for every synthesised sample, and they are all remarkably comparable. A consistent rise of the Nb 3$d$ (discussed later) and Nb 3$p$ (3$p_{3/2}$ at 364 eV and 3$p_{1/2}$ at 379 eV) peaks is observed with an increase in Nb doping, and this feature of Nb is absent in the $x=$ 0 sample [57]. For the V 2$p$ (2$p_{3/2}$ at 517 eV and 2$p_{1/2}$ at 525 eV) and V 2$s$ (630 eV) core-level peaks, the reverse behavior is anticipated, and it is clearly visible in Fig. 5 [58]. The Voigt function has been used to fit the core-level spectra of the constituent elements. The fitted La 3$d$ core-levels are shown in Fig. 6(a). The spin-orbit split peaks present in all the samples have been de-convoluted at binding energies of 834.3$\pm$0.2 eV, 836.0$\pm$0.3 eV, 838.7$\pm$0.1 eV, 847.9$\pm$0.1 eV, 851.1$\pm$0.2 eV, 853.0$\pm$0.3 eV, 855.6$\pm$0.1 eV, and 863.4$\pm$0.2 eV (average BE of all the samples $\pm$ $\Delta$BE, calculated for the $x=$ 0–1 samples). The broad diffusive satellite peaks at $\sim$848 eV and $\sim$863 eV in the vicinity of the La 3$d$ core-level come from plasmons. The two final states I and II, and the spin-orbit splitting of each state, make the structure complex. The primary strong peaks (3$d_{5/2}$ at 834.3 eV and 3$d_{3/2}$ at 851.1 eV, respectively) are associated with the final state I (La4+ 3d94f0, L), which involves electron transfer to the continuum from the 3$d$ core-level. The peaks at higher binding energies are features of the final state II (La3+ 3d94f1, L, -e); this feature is experimentally unresolved, which indicates a multiplet structure, as has been suggested by Mullica et al. [56]. This corresponds to the electron transfer from the ligand (L, O 2$p$ in our case) valence band to the empty 4$f$ orbitals of La [56, 59]. This multiplet structure of state II is composed of bonding and anti-bonding states. The prominent signals at higher binding energies (3$d_{5/2}$ at 838.7 eV and 3$d_{3/2}$ at 855.6 eV) are due to the bonding of state II, and the weak signals at lower binding energies (3$d_{5/2}$ at 836.0 eV and 3$d_{3/2}$ at 853.0 eV) are because of the anti-bonding. The average energy difference (over the La core-level spectra of all the samples) between these three pairs of peaks is nearly the same ($\sim$16.9 eV) for state I, state II bonding, and state II anti-bonding, respectively. This verifies the unaltered spin-orbit energy splitting of the states of La on Nb substitution [60]. Interestingly, we find a significant and systematic change in the intensity of the peak at 838.7 eV (I2) relative to the primary peak at 834.3 eV (I0) with Nb doping. The metal–ligand orbital overlaps are reported to be accountable for such doping-induced intensity variations [61, 62], where strong ligands are found to populate the (La3+ 3d94f1, L, -e) state, intensifying I2 [63]. The intensity ratio I2/I0 is shown in Fig. 6(b), which shows a consistent decrease as a function of doping $x$. This signifies that, with Nb substitution, the extent of overlap between the La(4$f$)-O(2$p$) orbitals decreases monotonically.
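The quoted spin-orbit splittings can be verified directly from the average fitted binding energies listed above:

```python
# La 3d (3d5/2, 3d3/2) binding energies in eV, averaged over all samples
pairs = {
    "state I":               (834.3, 851.1),
    "state II bonding":      (838.7, 855.6),
    "state II anti-bonding": (836.0, 853.0),
}
for name, (e_5_2, e_3_2) in pairs.items():
    print(name, round(e_3_2 - e_5_2, 1))
# -> 16.8, 16.9, 17.0 eV, i.e. ~16.9 eV on average: the splitting is
#    essentially unchanged across the doping series
```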
This conclusion can also be drawn from the trend in the energy separation between I2 and I0 as a function of $x$, as shown in Fig. 6(c). The separation changes only minutely between successive samples, while between the $x=$ 0 and $x=$ 1 samples the energy difference (I2 \- I0) changes by of the order of 0.3 eV. The value of I2 \- I0 is found to vary for a variety of La-containing compounds, majorly because of the crystal structure, e.g., 3.8 eV for La0.5Sr0.5Co1-xNbxO3 and 5.3 eV for La1.85Ba0.15CuO4 [60, 62]. Notably, this energy separation could be related to the ease of electron transfer between the ligand and the more ionic state of La, therefore having an opposite trend to the tendency of the ligand's overlapping with the La 4$f$ orbitals [62]. The Nb 3$d$ core-level spectra are shown in Fig. 7, where the spin-orbit doublet of the Nb 3$d$ core-levels is fitted with a single peak for each component, and the calculated peak positions for the Nb-doped samples are found to be 3$d_{5/2}$ at 206.2$\pm$0.2 eV and 3$d_{3/2}$ at 209.0$\pm$0.2 eV [64, 65]. This confirms the prevailing 5+ oxidation state of the Nb atom [57] in all the samples. However, for the $x=$ 1 sample the Nb 3$d_{5/2}$ is at a higher binding energy as compared to the other Nb-containing samples, which could be due to charging effects and the change in chemical environments. Therefore, Atuchin et al. characterized the Nb state by using the energy difference $\Delta$(Nb 3$d_{5/2}$ – O 1$s$) instead of solely relying on the Nb 3$d_{5/2}$ binding energy position [66]. The evaluated $\Delta$(Nb 3$d_{5/2}$ – O 1$s$) values are found to be around 323.5 eV. The calculated energy difference with respect to O 1$s$ is independent of the carbon correction. The obtained binding energy difference of $\approx$323.5 eV is close to the highest value reported for the 5+ oxidation state of Nb. We can also see that the error in the value of $\Delta$ is only 0.1 eV in this case, while for the Nb 3$d_{5/2}$ and O 1$s$ positions it is 0.3 and 0.2 eV, respectively. In Fig. 8 we can also see that the O 1$s$ peak shifts to higher binding energy for the $x=$ 1 sample as compared to the $x=$ 0 sample; similarly, the Nb 3$d_{5/2}$ core-level shifts to higher binding energy. The $\Delta$(Nb 3$d_{5/2}$ \- O 1$s$) value for the $x=$ 1 sample is quite consistent with those of all the other samples, which strongly supports characterizing the electronic state using the energy difference with respect to O 1$s$ instead of absolute peak positions. In Fig. 9, we present the V 2$p$ core-level spectra for all the samples, which show the spin-orbit components 2$p_{3/2}$ and 2$p_{1/2}$ at 516.9 and 524.8 eV, respectively, indicating V in the 5+ state. Interestingly, an unusual broadening of the V 2$p_{1/2}$ component is observed for all the samples, whereas no such additional component is evident in the V 2$p_{3/2}$ peak at 516.9 eV. More importantly, the deconvolution of the V 2$p_{1/2}$ component reveals that the FWHM of the higher-energy feature (denoted by I) (1.2 eV) is nearly the same as that of the 2$p_{3/2}$ component (1.1 eV). In contrast, the lower-energy feature (II) is significantly broader (2.8 eV). Moreover, the area ratio of the combined I and II with 2$p_{3/2}$ is close to 1/2, which clearly indicates the intrinsic origin of these two features from the vanadium. In contrast to the metallic V 2$p$ core-level, vanadium-based compounds have often been reported to exhibit an anomalous V 2$p_{1/2}$ width as a consequence of Coster-Kronig (C-K) transitions [68].
The C-K type transition is a class of Auger transition in which an electron from a higher sub-shell of the same shell fills the core hole [67]. In the present case, the filling of the 2$p_{1/2}$ core hole by an electron from 2$p_{3/2}$ may give rise to the C-K transitions, and that can result in an additional feature in the 2$p_{1/2}$ component. Therefore, it is likely that component I is attributed to the core-hole recombination with the screening electrons, analogous to the 2$p_{3/2}$, whereas an additional L2-L3 (C-K) relaxation process gives rise to feature II in the 2$p_{1/2}$ peak [69]. No significant change in these components has been observed with the Nb substitution, indicating the robust nature of the underlying system. Further, the O 1$s$ energy-difference approach is also implemented in this case since, for vanadium oxides, the energy difference $\Delta$(V 2$p_{3/2}$ \- O 1$s$) is an advantageous reference [70]. The average $\Delta$(V 2$p_{3/2}$ \- O 1$s$) magnitude is 12.8$\pm$0.1 eV, which is in good agreement with the literature for the V5+ oxidation state [71].

## Conclusions

In conclusion, the solid-state reaction method was used to successfully prepare LaV1-xNbxO4 samples with regularly varied Nb5+ concentration. The XRD measurements established that the substitution of the larger Nb5+ ion for V5+ affects the lattice constants of LaV1-xNbxO4, which goes through three different phase transformations [monoclinic monazite ($m-m$) type ($x=$ 0), two-phase equilibrium of monoclinic monazite ($m-m$) and tetragonal scheelite ($t-s$) type (0.2$\leq$$x$$\leq$0.8), and monoclinic fergusonite ($m-f$) type ($x=$ 1)]. The SEM micrographs show that the particle size and shape alter due to the change in crystal phases of these samples with increasing Nb5+ concentration. The analysis of the HR-TEM and SAED data was found to be consistent with the XRD refinement data. The Raman spectra of LaV1-xNbxO4 were studied using 532 nm, 633 nm, and 785 nm excitation wavelengths. All the Raman assignments were found to show well-ordered enhancement/diminution with the increase in Nb5+ doping. The variation in the intensity as well as the appearance/disappearance of the Raman modes with Nb concentration coincide with the change in the structural phases, as observed in the XRD analysis. This further confirms that the phase transformation in LaV1-xNbxO4 agrees with the maximum-intensity peak patterns shown in the Raman spectra of these samples and is consistent with Badger's rule. The XPS analysis reveals the changes in the Nb 3$d$ and V 2$p$ core-level spectral intensities of the samples with the increase in Nb5+ concentration. The equal spin-orbit energy splitting of the states was confirmed by the average energy difference (over the La core spectra of all samples) for state I, state II bonding, and state II anti-bonding, and the observed changes in their relative intensities with Nb substitution are due to the metal–ligand orbital overlap. These findings provide valuable insights into the structural and electronic properties of LaV1-xNbxO4 samples and their potential use in different fields of practical applications.

## Acknowledgment

AS and MS thank MHRD and CSIR, respectively, for the fellowships. The authors acknowledge IIT Delhi's FIST (DST, Govt. of India) UFO scheme for providing the physics department with the Raman facility. We thank the Physics Department at IIT Delhi for the XRD and the Central Research Facility (CRF) for the FESEM, EDX, and XPS. We also thank Ambuj Mishra for providing the HR-TEM facility at IUAC, New Delhi.
The preparation of the samples was done in a high-temperature furnace (from Nabertherm GmbH, Germany), funded by BRNS through the DAE Young Scientist Research Award (Project Sanction No. 34/20/12/2015/BRNS). RSD acknowledges SERB–DST for the financial support through a core research grant (project reference no. CRG/2020/003436).

## References

* [1] E. Varghese, S. Kumar, B. Pathak, and S. Sen, Temperature-induced crystallinity and vibrational properties in samarium orthovanadate, Phys. Rev. B 101 (2020) 174112.
* [2] S. Huang, Z. Wang, Q. Zhu, X. Shi, X. Wang, X. Li, X. Sun, and J.-G. Li, A new protocol for templated synthesis of YVO4:Ln luminescent crystallites (Ln=Eu, Dy, Sm), Journal of Alloys and Compounds 776 (2019) 773.
* [3] T. Carbonati, C. Cionti, E. Cosaert, B. Nimmegeers, D. Meroni, and D. Poelman, NIR emitting GdVO4:Nd nanoparticles for bioimaging: The role of the synthetic pathway, Journal of Alloys and Compounds 862 (2021) 158413.
* [4] Ajay Kumar, A. Jain, S. M. Yusuf, and R. S. Dhaka, Observation of Anisotropic Thermal Expansion and the Jahn-Teller Effect in Double Perovskites Sr2-xLaxCoNbO6 Using Neutron Diffraction, J. Phys. Chem. Lett. 13 (2022) 3023.
* [5] Ajay Kumar, R. Shukla, R. Kumar, R. J. Choudhary, S. N. Jha, and R. S. Dhaka, Probing the electronic and local structure of Sr2-xLaxCoNbO6 using near-edge and extended x-ray absorption fine structures, Phys. Rev. B 105 (2022) 245155.
* [6] Ajay Kumar and R. S. Dhaka, Unraveling magnetic interactions and the spin state in insulating Sr2-xLaxCoNbO6, Phys. Rev. B 101 (2020) 094434.
* [7] Ajay Kumar, B. Schwarz, H. Ehrenberg, and R. S. Dhaka, Evidence of discrete energy states and cluster-glass behavior in Sr2-xLaxCoNbO6, Phys. Rev. B 102 (2020) 184414.
* [8] M. Yi, S.-K. Park, C.-Y. Seong, Y. Piao, and T. Yu, The general synthesis and characterization of rare earth orthovanadate nanocrystals and their electrochemical applications, Journal of Alloys and Compounds 693 (2017) 825.
* [9] L. Sun, X. Zhao, Y. Li, P. Li, H. Sun, X. Cheng, and W. Fan, First-principles studies of electronic, optical, and vibrational properties of LaVO4 polymorph, J. Appl. Phys. 108 (2010) 093519.
* [10] B. C. Chakoumakos, M. M. Abraham, and L. A. Boatner, Crystal structure refinements of zircon-type MVO4 (M = Sc, Y, Ce, Pr, Nd, Tb, Ho, Er, Tm, Yb, Lu), J. Solid State Chem. 109 (1994) 197.
* [11] C. E. Rice and W. R. Robinson, Lanthanum Orthovanadate, Acta Crystallogr. B Struct. Sci. 32 (1976) 2232.
* [12] W. Fan, X. Song, Y. Bu, S. Sun, and X. Zhao, Selected-Control Hydrothermal Synthesis and Formation Mechanism of Monazite- and Zircon-Type LaVO4 Nanocrystals, J. Phys. Chem. B 110 (2006) 23247.
* [13] C. K. Rastogi, S. K. Sharma, A. Patel, G. Parthasarathy, R. G. S. Pala, J. Kumar, and S. Sivakumar, Dopant Induced Stabilization of Metastable Zircon-Type Tetragonal LaVO4, J. Phys. Chem. C 121 (2017) 16501.
* [14] B. Xie, G. Lu, Y. Wang, Y. Guo, and Y. Guo, Selective synthesis of tetragonal LaVO4 with different vanadium sources and its luminescence performance, Journal of Alloys and Compounds 544 (2012) 173.
* [15] N. Suzuki, T. Noritake, and T. Hioki, Structural analysis and physical properties of Sr2-xLaxVO4-δ, Journal of Alloys and Compounds 612 (2014) 114.
* [16] H. Liu, J. Yuan, Z. Jiang, W. Shangguan, H. Einaga, and Y. Teraoka, Roles of Bi, M and VO4 tetrahedron in photocatalytic properties of novel Bi0.5M0.5VO4 (M=La, Eu, Sm and Y) solid solutions for overall water splitting, Journal of Solid State Chemistry 186 (2012) 70.
* [17] Himanshu Dua, Rishabh Shukla, and R. S. Dhaka, Structural phase transition and its consequences for the optical behavior of LaV1-xNbxO4, Phys. Rev. B 103 (2021) 174107.
* [18] S. Verma, B. N. Wani, and N. M. Gupta, Synthesis, characterisation, TPR/TPO and activity studies on LaMnxV1-xO4-δ catalysts, Appl. Catal. A: Gen. 205 (2001) 295.
* [19] D. Errandonea and F. J. Manjón, Pressure effects on the structural and electronic properties of ABX4 scintillating crystals, Progress in Materials Science 53 (2008) 711.
* [20] H. Takei and S. Tsunekawa, Growth and properties of LaNbO4 and NdNbO4 single crystals, Journal of Crystal Growth 38 (1977) 55.
* [21] A. T. Aldred, Unusual cell volume behavior in the LaNb1-xVxO4 system, Materials Letters 1 (1983) 197.
* [22] R. Haugsrud and T. Norby, Proton conduction in rare-earth ortho-niobates and ortho-tantalates, Nature Mater. 5 (2006) 193.
* [23] L. Hakimova, A. Kasyanova, A. Farlenkov, J. Lyagaeva, D. Medvedev, A. Demin, and P. Tsiakaras, Effect of Isovalent Substitution of La3+ in Ca-Doped LaNbO4 on the Thermal and Electrical Properties, Ceramics International 45 (2019) 209.
* [24] G. Blasse and L. H. Brixner, Ultraviolet emission from ABO4-type niobates, tantalates and tungstates, Chem. Phys. Lett. 173 (1990) 409.
* [25] H. Liu, H. Yu, J. Wang, F. Xia, C. Wang, and J. Xiao, LaNbO4 as an electrode material for mixed-potential CO gas sensors, Sensors and Actuators B: Chemical 352 (2022) 130981.
* [26] D. Zhou, H.-H. Guo, M.-S. Fu, X.-G. Yao, H.-X. Lin, W.-F. Liu, L.-X. Pang, C. Singh, S. Trukhanov, A. Trukhanov, and I. M. Reaney, Anomalous dielectric behaviour during the monoclinic to tetragonal phase transition in La(Nb0.9V0.1)O4, Inorg. Chem. Front. 8 (2021) 156.
* [27] J. Xue, Z. Yu, H. M. Noh, B. R. Lee, B. C. Choi, S. H. Park, J. H. Jeong, P. Du, and M. Song, Designing multi-mode optical thermometers via the thermochromic LaNbO4:Bi3+/Ln3+ (Ln = Eu, Tb, Dy, Sm) phosphors, Chemical Engineering Journal 415 (2021) 128977.
* [28] S. Ding, Q. Zhang, W. Liu, J. Luo, F. Peng, X. Wang, G. Sun, and D. Sun, Crystal growth and characterization of a mixed laser crystal: Nd-doped Gd0.89La0.1NbO4, RSC Adv. 7 (2017) 35666.
* [29] F. B. Xiong, F. X. Xu, H. F. Lin, Y. P. Wang, E. Ma, and W. Z. Zhu, Synthesis and luminescent properties of novel thermal-stable orangish-red-emitting LnNbO4: Sm3+ (Ln=La, Y) phosphors, Appl. Phys. A 126 (2020) 908.
* [30] P. Sun, P. Dai, J. Yang, C. Zhao, and X. Zhang, Enhanced upconversion luminescence induced by structural evolution of Lanthanum Niobate Phosphor, Ceramics International 41 (2015) 3009.
* [31] S. Wachowski, A. Mielewczyk-Gryn, and M. Gazda, Effect of isovalent substitution on microstructure and phase transition of LaNb1-xMxO4 (M=Sb, V or Ta; $x=$ 0.05–0.3), J. Solid State Chem. 219 (2014) 201.
* [32] M. Huse, A. W. B. Skilbred, M. Karlsson, S. G. Eriksson, T. Norby, R. Haugsrud, and C. S. Knee, Neutron Diffraction study of the monoclinic to tetragonal structural transition in LaNbO4 and its relation to proton mobility, Journal of Solid State Chemistry 187 (2012) 27.
* [33] W. I. F. David, The high-temperature paraelastic structure of LaNbO4, Mater. Res. Bull. 18 (1983) 749.
* [34] Rishabh Shukla, Clemens Ulrich, and R. S. Dhaka, Investigation of lattice dynamics, magnetism and electronic transport in $\beta$-Na0.33V2O5, Phys. Rev. B 106 (2022) 125148.
* [35] X. Cheng, D. Guo, S. Feng, K. Yang, Y. Wang, Y. Ren, and Y. Song, Structure and stability of monazite- and zircon-type LaVO4 under hydrostatic pressure, Opt. Mater. 49 (2015) 32.
* [36] C. C. Santos, E. N. Silva, A. P. Ayala, I. Guedes, P. S. Pizani, C.-K. Loong, and L. A. Boatner, Raman investigations of rare earth orthovanadates, J. Appl. Phys. 101 (2007) 053511.
* [37] V. Panchal, S. López-Moreno, D. Santamaría-Pérez, D. Errandonea, F. J. Manjón, P. Rodríguez-Hernandez, A. Muñoz, S. N. Achary, and A. K. Tyagi, Zircon to monazite phase transition in CeVO4: X-ray diffraction and Raman-scattering measurements, Phys. Rev. B 84 (2011) 024111.
* [38] R. Okram, N. R. Singh, and Ak. M. Singh, Simple preparation of Eu3+-Doped LaVO4 by ethylene glycol route: A luminescence study, Micro Nano Lett. 6 (2011) 165.
* [39] N. Clavier, R. Podor, and N. Dacheux, Crystal chemistry of the monazite structure, J. Eur. Ceram. Soc. 31 (2011) 941.
* [40] R. K. Selvan, A. Gedanken, P. Anilkumar, G. Manikandan, and C. Karunakaran, Synthesis and characterization of rare earth orthovanadate (RVO4; R = La, Ce, Nd, Sm, Eu and Gd) nanorods/nanocrystals/nanospindles by a facile sonochemical method and their catalytic properties, J. Cluster Sci. 20 (2009) 291.
* [41] Z. Huang, S. Huang, G. Ou, and W. Pan, Synthesis, phase transformation and photoluminescence properties of Eu:La1-xGdxVO4 nanofibers by electrospinning method, Nanoscale 4 (2012) 5065.
* [42] L. Wang, Q. Xu, L. Liu, Q. Song, H. Lv, G. Zhu, and D. Zhang, Single-hole hollow tetragonal LaVO4:Eu3+ microspheres prepared by Ostwald ripening and their luminescence property, J. Lumin. 192 (2017) 1020.
* [43] K. Ishii, N. Morita, H. Nakayama, S. Tsunekawa, and T. Fukuda, Raman Spectra of LaNbO4 in the ferroelastic phase and the relaxation after the state shift, Phys. Stat. Sol. (a) 112 (1989) 207.
* [44] C. J. Jia, L. D. Sun, Z. G. Yan, Y. C. Pang, S. Z. Lü, and C. H. Yan, Monazite and zircon type LaVO4:Eu nanocrystals-synthesis, luminescent properties, and spectroscopic identification of the Eu3+ sites, Eur. J. Inorg. Chem. 18 (2010) 2626.
* [45] D. Errandonea, J. Pellicer-Porres, D. Martínez-García, J. Ruiz-Fuertes, A. Friedrich, W. Morgenroth, C. Popescu, P. Rodríguez-Hernández, A. Muñoz, and M. Bettinelli, Phase stability of lanthanum orthovanadate at high-pressure, J. Phys. Chem. C 120 (2016) 13749.
* [46] A. Jain, S. P. Ong, G. Hautier, W. Chen, W. D. Richards, S. Dacek, S. Cholia, D. Gunter, D. Skinner, G. Ceder, and K. A. Persson, Commentary: the materials project: A materials genome approach to accelerating materials innovation, APL Materials 1 (2013) 011002.
* [47] D. Errandonea and A. B. Garg, Recent progress on the characterization of the high-pressure behaviour of AVO4 orthovanadates, Prog. Mater. Sci. 97 (2018) 123.
* [48] J. Pellicer-Porres, A. B. Garg, D. Vázquez-Socorro, D. Martínez-García, C. Popescu, and D. Errandonea, Stability of the fergusonite phase in GdNbO4 by high pressure XRD and Raman experiments, Journal of Solid State Chemistry 251 (2017) 14.
* [49] G. Herzberg, Infrared and Raman Spectra of Polyatomic Molecules, Van Nostrand, New York, 1987.
* [50] M. Ishaque Khan, T. Hope, and S. Tabassum, Synthesis, reactivity, X-ray structure and thermal study of the mixed-metal oxide hydrate [Mn(H2O)2V2O6], Solid State Sciences 1 (1999) 163.
* [51] F. D. Hardcastle and I. E. Wachs, Determination of Vanadium-Oxygen bond distances and bond orders by Raman spectroscopy, J. Phys. Chem. 95 (1991) 5031.
* [52] L. Liu, M. Knapp, H. Ehrenberg, L. Fang, H. Fan, L. A. Schmitt, H. Fuess, M. Hoelzel, H. Dammak, M. P. Thi, and M. Hinterstein, Average vs. local structure and composition-property phase diagram of K0.5Na0.5NbO3-Bi0.5Na0.5TiO3 system, Journal of the European Ceramic Society 37 (2017) 1387.
* [53] J. P. Peña, P. Bouvier, and O. Isnard, Structural properties and Raman spectra of Columbite-type NiNb2-xVxO6 synthesized under high pressure, Journal of Solid State Chemistry 291 (2020) 121607.
* [54] A. Ruiz-Martínez, D. Casanova, and S. Alvarez, Polyhedral structures with an odd number of vertices: Nine-coordinate metal compounds, Chem. Eur. J. 14 (2008) 1291.
* [55] R. M. Badger, A relation between internuclear distances and bond force constants, The Journal of Chemical Physics 2 (1934) 128.
* [56] D. F. Mullica, C. K. C. Lok, H. O. Perkins, and V. Young, X-ray photoelectron final-state screening in La(OH)3: A multiplet structural analysis, Phys. Rev. B 31 (1985) 4039.
* [57] P. Steiner and H. Höchst, X-ray excited photoelectron spectra of LiNbO3: a quantitative analysis, Z. Physik B 35 (1979) 51.
* [58] A. Lebugle, U. Axelsson, R. Nyholm, and N. Mårtensson, Experimental L and M core level binding energies for the metals 22Ti to 30Zn, Phys. Scr. 23 (1981) 825.
* [59] R. Shukla, A. Jain, M. Miryala, M. Murakami, K. Ueno, S. M. Yusuf, and R. S. Dhaka, Spin dynamics and unconventional magnetism in insulating La(1-2x)Sr2xCo(1-x)NbxO3, J. Phys. Chem. C 123 (2019) 22457.
* [60] Rishabh Shukla and R. S. Dhaka, Evolution of complex magnetic phases and metal-insulator transition through Nb substitution in La0.5Sr0.5Co1-xNbxO3, Phys. Rev. B 107 (2023) 165108.
* [61] P. V. Kamath and D. D. Sarma, Charge Transfer Satellites in X-Ray Photoelectron Spectra of Lanthanum Compounds, Indian J. Chem. 23A (1984) 292.
* [62] R. P. Vasquez, X-Ray Photoemission Measurements of La1-xCaxCoO3 ($x=$ 0, 0.5), Phys. Rev. B 54 (1996) 14938.
* [63] A. J. Signorelli and R. G. Hayes, X-Ray Photoelectron Spectroscopy of Various Core Levels of Lanthanide Ions: The Roles of Monopole Excitation and Electrostatic Coupling, Phys. Rev. B 8 (1973) 81.
* [64] Rishabh Shukla and R. S. Dhaka, Anomalous magnetic and spin glass behavior in Nb-substituted LaCo1-xNbxO3, Phys. Rev. B 97 (2018) 024430.
* [65] K. Isawa, R. Itti, J. Sugiyama, N. Koshizuka, and H. Yamauchi, Photoelectron spectroscopic study of SrxNbO3, Phys. Rev. B 49 (1994) 3534.
* [66] V. V. Atuchin, I. E. Kalabin, V. G. Kesler, and N. V. Pervukhina, Nb 3d and O 1s core levels and chemical bonding in niobates, Journal of Electron Spectroscopy and Related Phenomena 142 (2005) 129.
* [67] E. Antonides, E. C. Janse, and G. A. Sawatzky, LMM Auger spectra of Cu, Zn, Ga, and Ge, II. Relationship with the L23 photoelectron spectra via the L2L3M45 Coster-Kronig process, Phys. Rev. B 15 (1977) 4596.
* [68] G. A. Sawatzky and D. Post, X-Ray photoelectron and Auger spectroscopy study of some vanadium oxides, Phys. Rev. B 20 (1979) 1546.
* [69] M. Ohno, The effect of Coster–Kronig transition on the Auger-photoelectron coincidence spectroscopy spectra of early 3$d$ transition metals, Journal of Electron Spectroscopy and Related Phenomena 136 (2004) 221.
* [70] J. Mendialdua, R. Casanova, and Y. Barbaux, XPS studies of V2O5, V6O13, VO2 and V2O3, Journal of Electron Spectroscopy and Related Phenomena 71 (1995) 249.
* [71] G. Silversmit, D. Depla, H. Poelman, G. B. Marin, and R. De Gryse, Determination of the V 2$p$ XPS binding energies for different Vanadium oxidation states (V5+ to V0+), Journal of Electron Spectroscopy and Related Phenomena 135 (2004) 167.
# A model complete theory of transexponential pre-$H$-fields

Nigel Pynn-Coates

Department of Mathematics, The Ohio State University, Columbus, OH, United States<EMAIL_ADDRESS>

###### Abstract.

The theory of differential-henselian, real closed pre-$H$-fields that have exponential integration and closed ordered differential residue field has quantifier elimination and is the model completion of the theory of pre-$H$-fields with gap $0$. From quantifier elimination, we deduce that this theory is distal, so has NIP. Moreover, we establish a two-sorted quantifier elimination result when the theory of the residue field has quantifier elimination in a language expanding the language of ordered differential rings, which yields a weak Ax–Kochen/Ershov principle.

## 1\. Introduction

Pre-$H$-fields and $H$-fields are kinds of ordered valued differential fields introduced by M. Aschenbrenner and L. van den Dries in [2]; all Hardy fields are pre-$H$-fields and all Hardy fields containing $\mathbb{R}$ are $H$-fields. Together with J. van der Hoeven in [3], they showed that a certain theory $T^{\operatorname{nl}}$ of $H$-fields is the model companion of the theory of pre-$H$-fields in the language $\\{+,-,\cdot,0,1,\der,\preccurlyeq,\leqslant\\}$, where $\der$ is interpreted as a derivation and $\preccurlyeq$ as a binary relation encoding a valuation; the latter functions in this paper as a coarse notion of size whereby elements $f\succ 1$ are thought of as infinite and elements $f\prec 1$ as infinitesimal. Moreover, $T^{\operatorname{nl}}$ admits quantifier elimination with the addition of a function symbol for field inversion and two unary predicates identifying the parameters for which two second-order differential equations have solutions. In addition, $T^{\operatorname{nl}}_{\operatorname{small}}=T^{\operatorname{nl}}+\text{``small derivation''}$ axiomatizes the theory of $\mathbb{T}$, the ordered (valued) differential field of logarithmic-exponential transseries, and is the model companion of the theory of $H$-fields with small derivation. Here, “small derivation” means that derivatives of infinitesimals are infinitesimal; see the next section for precise definitions of this and other notions. All Hardy fields have small derivation. By the results above, every pre-$H$-field extends to a model of $T^{\operatorname{nl}}$ and every $H$-field with small derivation extends to a model of $T^{\operatorname{nl}}_{\operatorname{small}}$. However, not every pre-$H$-field with small derivation extends to a model of $T^{\operatorname{nl}}_{\operatorname{small}}$ and those that do not must have gap $0$, where a pre-$H$-field has gap $0$ if it has small derivation and the logarithmic derivatives of infinite elements are infinite. It follows that in such structures, infinite elements are transexponential in some sense and the valuation yields a coarser notion of rate of growth than in Hardy fields. This paper focuses on pre-$H$-fields with gap $0$. To obtain an example of a pre-$H$-field with gap $0$, start by taking an $\aleph_{0}$-saturated elementary extension $\mathbb{T}^{*}$ of $\mathbb{T}$. Then enlarging the valuation ring (the subring of non-infinite elements) so that it is the set of elements bounded in absolute value by some finite iterate of the exponential yields a pre-$H$-field with gap $0$ (see Appendix A). The saturation ensures that $\mathbb{T}^{*}$ contains a transexponential element and thus this enlarged valuation ring is a proper subring.
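To make this example concrete, the enlarged valuation ring can be written out explicitly; the following display is a sketch of the construction just described rather than a quotation from Appendix A, with $\exp_{n}$ denoting the $n$-th compositional iterate of $\exp$ and $x$ the element of $\mathbb{T}$ with $x>\mathbb{R}$: $\mathcal{O}\ =\ \\{f\in\mathbb{T}^{*}:|f|\leqslant\exp_{n}(x)\ \text{for some}\ n\\},\qquad\text{with maximal ideal}\ \\{f\in\mathbb{T}^{*}:|f|<\exp_{n}(x)^{-1}\ \text{for all}\ n\\}.$ By $\aleph_{0}$-saturation there is $f\in\mathbb{T}^{*}$ with $f>\exp_{n}(x)$ for all $n$, which is exactly why this $\mathcal{O}$ is a proper subring.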
Another example comes from considering the functional equation $f(x+1)=e^{f(x)}$. It has a solution lying in a Hardy field [4], and any solution is clearly transexponential, so performing the same enlargement of the valuation ring of this Hardy field also yields a pre-$H$-field with gap $0$. The goal of this paper is to find a model companion for the theory of pre-$H$-fields with gap $0$, or, equivalently, to axiomatize the class of existentially closed pre-$H$-fields with gap $0$. In pre-$H$-fields with small derivation, the derivation and ordering induce a derivation and ordering, respectively, on the residue field, which is the quotient of the valuation ring by its maximal ideal of infinitesimal elements. One major distinction between the theory considered here and $T^{\operatorname{nl}}$ is that this derivation induced on the residue field can be nontrivial, and always is in existentially closed pre-$H$-fields with gap $0$. Conversely, whenever a pre-$H$-field with small derivation has nontrivial induced derivation on its residue field, it must have gap $0$. Thus it is reasonable to expect that an existentially closed pre-$H$-field with gap $0$ has an ordered differential residue field that is existentially closed; this class is axiomatized by the theory of closed ordered differential fields introduced by M. Singer [12]. Our main result is the following, in the language $\\{+,-,\cdot,0,1,\der,\preccurlyeq,\leqslant\\}$. ###### Theorem 8.2. The theory of differential-henselian, real closed pre-$H$-fields with exponential integration and closed ordered differential residue field has quantifier elimination. Differential-henselianity generalizes the notion of henselianity for valued fields to the setting of valued differential fields with small derivation, while having exponential integration means that for each $f$ there is $z\neq 0$ that behaves like $e^{\int\\!f}$ in the sense that $\der(z)/z=f$. By showing that every pre-$H$-field with gap $0$ extends to a model of the theory above, we obtain the desired result. ###### Corollary 8.4. The theory of differential-henselian, real closed pre-$H$-fields with exponential integration and closed ordered differential residue field is the model completion of the theory of pre-$H$-fields with gap $0$. It also follows from quantifier elimination that this theory is complete and decidable, which is Corollary 8.5. Finally, we show that it is tame in the following sense. ###### Theorem 8.6. The theory of differential-henselian, real closed pre-$H$-fields with exponential integration and closed ordered differential residue field is distal, and hence has NIP. Consider the example of a pre-$H$-field with gap $0$ obtained by enlarging the valuation ring of an elementary extension $\mathbb{T}^{*}$ of $\mathbb{T}$. The residue field of this structure can be equipped with a valuation induced by the valuation of $\mathbb{T}^{*}$, making it elementarily equivalent to $\mathbb{T}$ as an ordered valued differential field (see Appendix A). Thus we consider theories in which the residue field has structure beyond its ordered differential field structure. Fix a language $\mathcal{L}_{\operatorname{res}}$ expanding $\\{+,-,\cdot,0,1,\der,\leqslant\\}$ and an $\mathcal{L}_{\operatorname{res}}$-theory $T^{\operatorname{dhl}}_{\operatorname{res}}$ of ordered differential fields. 
Let $T^{\operatorname{dhl}}$ be the theory whose models are structures $(K,\bm{k};\pi)$ such that $K$ is a differential-henselian, real closed pre-$H$-field with exponential integration in the language $\\{+,-,\cdot,\iota,0,1,\der,\preccurlyeq,\leqslant\\}$, where $\iota$ is interpreted by field inversion; $\bm{k}\models T^{\operatorname{dhl}}_{\operatorname{res}}$; and $\pi\colon K\to\bm{k}$ is a map inducing an isomorphism of ordered differential fields between the residue field of $K$ and $\bm{k}$. Then: ###### Theorem 8.8. If $T^{\operatorname{dhl}}_{\operatorname{res}}$ has quantifier elimination, then so does $T^{\operatorname{dhl}}$. A two-sorted model companion result is established in Corollary 8.10. Finally, Theorem 8.8 yields the following weak Ax–Kochen/Ershov principle. ###### Theorem 8.11. Suppose that $K_{1}$ and $K_{2}$ are pre-$H$-fields that are differential- henselian, real closed, and have exponential integration, and let $\bm{k}_{1}$ and $\bm{k}_{2}$ be their ordered differential residue fields. Then $K_{1}\equiv K_{2}$ if and only if $\bm{k}_{1}\equiv\bm{k}_{2}$. ### 1.A. Outline After some preliminary definitions and remarks, we show how to extend embeddings of ordered valued differential fields by first extending the residue field in §3. Associated to each pre-$H$-field with gap $0$ is an $H$-asymptotic couple with gap $0$, and in §4 we study them as structures in their own right. We isolate the model completion of the theory of $H$-asymptotic couples with gap $0$ and prove that this theory has quantifier elimination in Theorem 4.1. Since models include the asymptotic couples of real closed pre-$H$-fields with gap $0$ and exponential integration, Theorem 4.1 gets used in §5 via Corollary 4.2. The main result in that section is Theorem 5.2, which is a strengthening of [7, Theorem 3.6] under additional hypotheses. Section 6 deals with extending the constant field for use in the next section. Section 7 builds towards Theorem 7.16, which shows the existence of differential-Hensel-Liouville closures; these are extensions that are differential-henselian, real closed, and have exponential integration, and that satisfy a semi-universal property. We use Theorem 5.2 to prove that differential-Hensel-Liouville closures are unique in Corollary 7.18. Finally, §8 contains the main results advertised above. ## 2\. Preliminaries We let $d$, $m$, $n$, and $r$ range over $\mathbb{N}=\\{0,1,2,\dots\\}$ and $\rho$, $\lambda$, and $\mu$ be ordinals. The main objects of this paper are kinds of ordered valued differential fields; all fields in this paper are assumed to be of characteristic $0$. A _valued field_ is a field $K$ equipped with a surjective map $v\colon K^{\times}\to\Gamma$, where $\Gamma$ is a (totally) ordered abelian group, satisfying for $f,g\in K^{\times}$: 1. (V1) $v(fg)=v(f)+v(g)$; 2. (V2) $v(f+g)\geqslant\min\\{v(f),v(g)\\}$ whenever $f+g\neq 0$. A _differential field_ is a field $K$ equipped with a _derivation_ $\der\colon K\to K$, which satisfies for $f,g\in K$: 1. (D1) $\der(f+g)=\der(f)+\der(g)$; 2. (D2) $\der(fg)=f\der(g)+g\der(f)$. Let $K$ be a valued field. We add a new symbol $\infty$ to $\Gamma$ and extend the addition and ordering to $\Gamma_{\infty}\coloneqq\Gamma\cup\\{\infty\\}$ by $\infty+\gamma=\gamma+\infty=\infty$ and $\infty>\gamma$ for all $\gamma\in\Gamma$. This allows us to extend $v$ to $K$ by setting $v(0)\coloneqq\infty$. 
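Before imposing further conditions, here is a standard example illustrating the definitions so far; it is included only for orientation and is not taken from this paper. Let $K=\mathbb{R}((t))$ be the field of formal Laurent series with the $t$-adic valuation $v\colon K^{\times}\to\mathbb{Z}$ and the derivation $\der=t\,d/dt$, so that $v\Big(\sum_{k\geqslant k_{0}}a_{k}t^{k}\Big)\ =\ \min\\{k:a_{k}\neq 0\\}\qquad\text{and}\qquad\der\Big(\sum_{k}a_{k}t^{k}\Big)\ =\ \sum_{k}ka_{k}t^{k}.$ Axioms (V1) and (V2) hold because orders add under multiplication and the order of a sum is at least the minimum of the orders, and (D1) and (D2) can be checked term by term. The valuation ring is $\mathbb{R}[[t]]$ and the residue field is $\mathbb{R}$; since $\der$ never lowers the order of a series, this $K$ has small derivation in the sense defined below, although the derivation induced on the residue field is trivial, unlike the situation of main interest in this paper.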
We often use the following more intuitive notation: $\begin{array}[]{lc}f\preccurlyeq g\ \Leftrightarrow\ v(f)\geqslant v(g),\qquad f\prec g\ \Leftrightarrow\ v(f)>v(g),\\\ f\asymp g\ \Leftrightarrow\ v(f)=v(g),\qquad f\sim g\ \Leftrightarrow\ f-g\prec g.\end{array}$ The relation $\preccurlyeq$ is called a _dominance relation_. Both $\asymp$ and $\sim$ are equivalence relations on $K$ and $K^{\times}$ respectively, with a consequence of (V2) being that if $f\sim g$, then $f\asymp g$. We set $\mathcal{O}\coloneqq\\{f\in K:f\preccurlyeq 1\\}$ and call it the _valuation ring_ of $K$. It has a (unique) maximal ideal $\cao\coloneqq\\{f\in K:f\prec 1\\}$, and we call $\operatorname{res}(K)\coloneqq\mathcal{O}/\cao$ the _residue field_ of $K$, often denoted by $\bm{k}$. From our assumption that $\bm{k}$ has characteristic $0$, we see that $\mathbb{Q}\subseteq\mathcal{O}$. We also let $\overline{a}$ or $\operatorname{res}(a)$ denote the image of $a\in\mathcal{O}$ under the map to $\bm{k}$. For another valued field $L$, we denote these objects by $\mathcal{O}_{L}$, $\Gamma_{L}$, $\bm{k}_{L}$, etc. Let $K$ be a differential field. For $f\in K$, we often write $f^{\prime}$ for $\der(f)$ if the derivation is clear from the context and set $f^{\dagger}\coloneqq f^{\prime}/f$ if $f\neq 0$. We say that $K$ has _exponential integration_ if $(K^{\times})^{\dagger}=K$. The _field of constants_ of $K$ is $C\coloneqq\\{f\in K:f^{\prime}=0\\}$. For another differential field $L$, we denote this object by $C_{L}$. We let $K\\{Y\\}\coloneqq K[Y,Y^{\prime},Y^{\prime\prime},\dots]$ be the ring of differential polynomials over $K$ and set $K\\{Y\\}^{\neq}\coloneqq K\\{Y\\}\setminus\\{0\\}$. Let $P$ range over $K\\{Y\\}^{\neq}$. The _order_ of $P$ is the smallest $r$ such that $P\in K[Y,Y^{\prime},\dots,Y^{(r)}]$. For $\bm{i}=(i_{0},\dots,i_{r})\in\mathbb{N}^{1+r}$, we set $Y^{\bm{i}}\coloneqq Y^{i_{0}}(Y^{\prime})^{i_{1}}\dots(Y^{(r)})^{i_{r}}$. If $P$ has order at most $r$, then we decompose $P$ as $\sum_{\bm{i}}P_{\bm{i}}Y^{\bm{i}}$, where $\bm{i}$ ranges over $\mathbb{N}^{1+r}$. Letting $|\bm{i}|\coloneqq i_{0}+\dots+i_{r}$, we note that $P_{d}=\sum_{|\bm{i}|=d}P_{\bm{i}}Y^{\bm{i}}$, where $P_{d}$ denotes the homogeneous part of $P$ of degree $d$. We extend the derivation of $K$ to $K\\{Y\\}$ in the natural way, and we also extend $v$ to $K\\{Y\\}$ by setting $v(P)$ to be the minimum valuation of the coefficients of $P$. The relations $\preccurlyeq$, $\prec$, $\asymp$, and $\sim$ are also extended to $K\\{Y\\}$ in the corresponding way, and the image of $P\in\mathcal{O}\\{Y\\}$ under the map to $\bm{k}\\{Y\\}$ is also denoted by $\overline{P}$. Now suppose that $K$ is a valued differential field with nontrivial valuation and derivation; we continue to assume this throughout the paper. Relating the valuation and the derivation, we impose throughout most of this paper the condition that $K$ has _small derivation_, which means that $\der\cao\subseteq\cao$. In this case, $\der\mathcal{O}\subseteq\mathcal{O}$ [3, Lemma 4.4.2], so $\der$ induces a derivation on $\bm{k}$, and $\der$ is continuous with respect to the valuation topology on $K$ [3, Lemma 4.4.6]. If $K$ has small derivation, we always construe $\bm{k}$ as a differential field with this induced derivation and are typically interested in the case that it is nontrivial, as happens when $\bm{k}$ is _linearly surjective_: for all $a_{0},\dots,a_{r}\in\bm{k}$, not all zero, and all $b\in\bm{k}$, the equation $a_{0}y+a_{1}y^{\prime}+\dots+a_{r}y^{(r)}=b$ has a solution in $\bm{k}$.
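For a quick illustration of linear surjectivity, here is a standard non-example; it is stated only for orientation and plays no role in this paper's arguments. The differential field $\mathbb{R}(x)$ with $\der=d/dx$ is not linearly surjective, since already the equation $y^{\prime}=x^{-1}$ has no solution in $\mathbb{R}(x)$: $\der(f)\ \text{has residue}\ 0\ \text{at every pole for}\ f\in\mathbb{R}(x),\qquad\text{whereas}\ x^{-1}\ \text{has residue}\ 1\ \text{at}\ x=0.$ On the other hand, any differentially closed field is linearly surjective, and closed ordered differential fields are linearly surjective as well; the latter is implicit in the main theorems above, where the residue field is simultaneously required to be a closed ordered differential field and (via differential-henselianity) linearly surjective.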
We say that $K$ is _differential-henselian_ (_$\operatorname{d}$-henselian_ for short) if $K$ has small derivation and: 1. (DH1) $\bm{k}$ is linearly surjective; 2. (DH2) whenever $P\in\mathcal{O}\\{Y\\}$ satisfies $P_{0}\prec 1$ and $P_{1}\asymp 1$, there is $y\prec 1$ with $P(y)=0$. Differential-henselianity was introduced by T. Scanlon in [9, 10] and studied more systematically in [3]. It is closely connected to the notion of differential-algebraic maximality, for which we first need to discuss certain kinds of extensions. Given an extension $L$ of $K$, we identify $\Gamma$ with a subgroup of $\Gamma_{L}$ and $\bm{k}$ with a subfield of $\bm{k}_{L}$ in the obvious way. Here and throughout we use the word _extension_ in the following way: if $F$ is a valued differential field, “extension of $F$” means “valued differential field extension of $F$;” if $F$ is an ordered valued differential field, “extension of $F$” means “ordered valued differential field extension of $F$;” etc. Where there is particular danger of confusion, we make explicit the kind of extension meant. The words “embedding,” “isomorphic,” and “isomorphism” are used similarly. We say that an extension $L$ of $K$ is _immediate_ if $\Gamma_{L}=\Gamma$ and $\bm{k}_{L}=\bm{k}$; if $K$ and $L$ have small derivation, then $\bm{k}$ is naturally a differential subfield of $\bm{k}_{L}$. In an immediate extension $L$ of $K$, every element of $L\setminus K$ is the pseudolimit of a pseudocauchy sequence (“pc-sequence” for short) in $K$ that has no pseudolimit in $K$ (called _divergent_ in $K$); we use this only in Lemma 8.1. For a definition and basic facts about pc-sequences, see [3, §2.2]. Divergent pc-sequences in $K$ can be of $\operatorname{d}$-algebraic or $\operatorname{d}$-transcendental type over $K$, and this comes up in Lemmas 5.1 and 8.1; for more on these two notions, see [3, §4.4 and §6.9]. If $K$ has small derivation, then we call it _differential-algebraically maximal_ (_$\operatorname{d}$-algebraically maximal_ for short) if it has no proper differentially algebraic (“$\operatorname{d}$-algebraic” for short) immediate extension with small derivation. If $K$ has small derivation and the derivation on $\bm{k}$ is nontrivial, then $K$ is $\operatorname{d}$-algebraically maximal if and only if it has no divergent pc-sequence of $\operatorname{d}$-algebraic type over $K$ by [3, Lemma 6.9.3]. Relating this to $\operatorname{d}$-henselianity, any $\operatorname{d}$-algebraically maximal valued differential field with small derivation and linearly surjective differential residue field is also $\operatorname{d}$-henselian [3, Theorem 7.0.1]; an earlier case is in [9]. The converse fails in general but holds for asymptotic (valued differential) fields [7, Theorem 3.6]. We say that $K$ is _asymptotic_ if $f\prec g\iff f^{\prime}\prec g^{\prime}$ for all nonzero $f,g\in\cao$. Note that if $K$ is asymptotic, then $C\subseteq\mathcal{O}$. If $K$ is asymptotic, then we say that $K$ is _$H$-asymptotic_ or of _$H$-type_ if, for all $f,g\in K^{\times}$ satisfying $f\preccurlyeq g\prec 1$, we have $f^{\dagger}\succcurlyeq g^{\dagger}$. In the rest of this paragraph, suppose that $K$ is asymptotic. We say that $K$ has _gap $0$_ if it has small derivation and $f^{\dagger}\succ 1$ for all $f\in K^{\times}$ with $f\prec 1$.
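Returning to the Laurent series illustration above (again standard background rather than part of this paper): $(\mathbb{R}((t)),t\,d/dt)$ is asymptotic, and indeed $H$-asymptotic, but it does not have gap $0$, since every $f\in K^{\times}$ with $f\prec 1$ satisfies $f^{\dagger}\asymp 1$ rather than $f^{\dagger}\succ 1$; for instance, $t^{\dagger}\ =\ \der(t)/t\ =\ t/t\ =\ 1\ \asymp\ 1.$ In the terminology developed just below and in §4, its asymptotic couple is $(\mathbb{Z},\psi)$ with $\psi\equiv 0$ on $\mathbb{Z}^{\neq}$, so it has max $0$ rather than gap $0$; having gap $0$ instead demands that logarithmic derivatives of nonzero infinitesimals be infinite, as in the transexponential examples from the introduction.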
For $g\in K^{\times}$ with $g\not\asymp 1$, $v(g^{\dagger})$ and $v(g^{\prime})$ depend only on $vg$ and not on $g$, so with $\gamma\coloneqq vg$ we set $\gamma^{\dagger}\coloneqq v(g^{\dagger})$ and $\gamma^{\prime}\coloneqq v(g^{\prime})$; note that $\gamma^{\dagger}=\gamma^{\prime}-\gamma$. Thus setting $\Gamma^{\neq}\coloneqq\Gamma\setminus\\{0\\}$, logarithmic differentiation induces a map $\psi\colon\Gamma^{\neq}\to\Gamma,\quad\gamma\mapsto\gamma^{\dagger}.$ We call $(\Gamma,\psi)$ the _asymptotic couple_ of $K$; such structures were introduced by M. Rosenlicht [8]. The map $\psi$ is a valuation on $\Gamma$ in the sense of [3, §2.2], and we set $\Psi\coloneqq\psi(\Gamma^{\neq})$. Note that $K$ being $H$-asymptotic just says that the valuation $\psi$ is convex with respect to the ordering of $\Gamma$, and $K$ having small derivation or gap $0$ are also properties of its asymptotic couple. In fact, having such a map $\psi$ on the value group satisfying certain axioms described in §4 is equivalent to being asymptotic [3, Proposition 9.1.3]. We extend $\psi$ to a map $\psi\colon\Gamma_{\infty}\to\Gamma_{\infty}$ by setting $\psi(0)=\psi(\infty)\coloneqq\infty$. Even stronger than being asymptotic is being pre-$\operatorname{d}$-valued. We say that $K$ is _pre-differential-valued_ (_pre-$\operatorname{d}$-valued_ for short) if: 1. (PD1) $C\subseteq\mathcal{O}$; 2. (PD2) for all $f,g\in K^{\times}$, if $f\preccurlyeq g\prec 1$, then $\frac{f}{g}-\frac{f^{\prime}}{g^{\prime}}\prec 1$. Equivalently, $K$ is pre-$\operatorname{d}$-valued if, for all $f,g\in K^{\times}$ with $f\preccurlyeq 1$ and $g\prec 1$, we have $f^{\prime}\prec g^{\dagger}$ [3, Lemma 10.1.4]. This paper is primarily concerned with certain ordered pre-$\operatorname{d}$-valued fields called pre-$H$-fields. Here, $K$ is an _ordered valued differential field_ if, in addition to its valuation and derivation, it is also equipped with a (total) ordering $\leqslant$ making it an ordered field (in the sense that the ordering is preserved by addition and by multiplication by positive elements). If $\mathcal{O}$ is convex with respect to the ordering (equivalently, $\cao$ is convex; see [3, Lemma 3.5.11]), then $\leqslant$ induces an ordering on $\bm{k}$ making it an ordered field. Relating the ordering, valuation, and derivation, we call $K$ a _pre-$H$-field_ if: 1. (PH1) $K$ is pre-$\operatorname{d}$-valued; 2. (PH2) $\mathcal{O}$ is convex (with respect to $\leqslant$); 3. (PH3) for all $f\in K$, if $f>\mathcal{O}$, then $f^{\prime}>0$. It follows that if $K$ is a pre-$H$-field and $f,g\in K^{\times}$, then $f^{\dagger}<g^{\dagger}$ whenever $f\prec g$ [3, Lemma 10.5.2(i)]. By part (ii) of the same lemma, pre-$H$-fields are $H$-asymptotic. Since pre-$H$-fields are pre-$\operatorname{d}$-valued, if $K$ is a pre-$H$-field with small derivation and the derivation induced on its residue field is nontrivial, then $K$ must have gap $0$. We now discuss how to extend orderings, valuations, and derivations to various extensions. If $K$ is an ordered valued differential field with convex valuation ring, we equip its real closure $K^{\operatorname{rc}}$ with the unique derivation extending that of $K$ (see [3, Lemma 1.9.2]) and the unique valuation extending that of $K$ whose valuation ring is convex (see [3, Corollary 3.5.18]), and always construe $K^{\operatorname{rc}}$ as an ordered valued differential field in this way.
Then $\Gamma_{K^{\operatorname{rc}}}$ is the divisible hull $\mathbb{Q}\Gamma$ of $\Gamma$, $\bm{k}_{K^{\operatorname{rc}}}$ is the real closure of $\bm{k}$, and $C_{K^{\operatorname{rc}}}$ is the real closure of $C$. If $K$ is a pre-$H$-field, then so is $K^{\operatorname{rc}}$ by [3, Proposition 10.5.4]; if $K$ is a pre-$H$-field with gap $0$, then so is $K^{\operatorname{rc}}$ by the same proposition and the remarks after [3, Lemma 6.5.3]. If $K$ is a pre-$H$-field and $L$ is an immediate valued differential field extension of $K$ that is asymptotic, then $L$ can be given an ordering making it a pre-$H$-field extension of $K$; in fact, this is the unique ordering with respect to which $\mathcal{O}_{L}$ is convex (see [3, Lemma 10.5.8]). ## 3\. Extensions controlled by the residue field Here are ordered variants of [3, Theorem 6.3.2] and [3, Lemma 6.3.1]. Let $F\in K\\{Y\\}^{\neq}$ have order $r$ and set $d\coloneqq\deg_{Y^{(r)}}F$. Decomposing $F(Y)=\sum_{n=0}^{d}G_{n}(Y)\cdot(Y^{(r)})^{n}$ with $G_{n}\in K[Y,\dots,Y^{(r-1)}]$ for $n=0,\dots,d$, the _initial_ of $F$ is $G_{d}$. Below we let $I$ be the initial of $F$. First we recall [3, Theorem 6.3.2]: ###### Theorem 3.1 ([3, 6.3.2]). Suppose that $K$ has small derivation. Let $K\langle a\rangle$ be a differential field extension of $K$ such that $a$ has minimal annihilator $F$ as above satisfying $F\asymp 1$ and $I\asymp 1$, and such that $\overline{F}$ is irreducible in $\bm{k}\\{Y\\}$. Then there is a unique valuation $v\colon K\langle a\rangle^{\times}\to\Gamma$ extending that of $K$ such that: 1. (i) $K\langle a\rangle$ has small derivation; 2. (ii) $a\preccurlyeq 1$; 3. (iii) $\overline{a}$ has minimal annihilator $\overline{F}$ over $\bm{k}$. It is given by $P(a)/Q(a)\mapsto vP-vQ\in\Gamma$, for $P\in K[Y,\dots,Y^{(r)}]^{\neq}$ with $\deg_{Y^{(r)}}P<d$ and $Q\in K[Y,\dots,Y^{(r-1)}]^{\neq}$. The residue field of $K\langle a\rangle$ is $\bm{k}\langle\overline{a}\rangle$. Below, we equip $K\langle a\rangle$ with this valuation. ###### Lemma 3.2. Suppose that $K$ is an ordered valued differential field with small derivation and convex valuation ring. Let $K\langle a\rangle$ be as above. Suppose that $\bm{k}\langle\overline{a}\rangle$ is an _ordered_ differential field extension of $\bm{k}$. Then there exists a unique ordering on $K\langle a\rangle$ making it an ordered field extension of $K$ with convex valuation ring such that the induced ordering on $\bm{k}\langle\overline{a}\rangle$ agrees with the given one. If $K$ is a pre-$H$-field with gap $0$, then so is $K\langle a\rangle$. ###### Proof. Suppose that $K\langle a\rangle$ is equipped with an ordering making it an ordered field extension of $K$ with convex valuation ring. Let $P\in K[Y,\dots,Y^{(r)}]^{\neq}$ and $\deg_{Y^{(r)}}P<d$. By scaling $P$ by an element of $K^{>}$, we may assume that $v(P)=0$, and thus $\overline{P}(\overline{a})\neq 0$. We have $P(a)>0\iff\overline{P}(\overline{a})>0$, which shows that there is at most one ordering on $K\langle a\rangle$ making it an ordered field extension of $K$ with convex valuation ring such that the induced ordering on $\bm{k}\langle\overline{a}\rangle$ agrees with the given one. To construct such an ordering, let $b\in K\langle a\rangle^{\times}$, so $b=P(a)/Q(a)$ for $P\in K[Y,\dots,Y^{(r)}]^{\neq}$ with $\deg_{Y^{(r)}}P<d$ and $Q\in K[Y,\dots,Y^{(r-1)}]^{\neq}$. By scaling $b$ by an element of $K^{>}$, it suffices to define $b>0$ when $b\asymp 1$. Similarly, we may assume that $P\asymp Q\asymp 1$. 
Then the condition $\overline{P}(\overline{a})/\overline{Q}(\overline{a})>0$ in $\bm{k}\langle\overline{a}\rangle$ depends only on $b$ and not on the choice of $P$ and $Q$, so we can define $b>0:\Leftrightarrow\overline{P}(\overline{a})/\overline{Q}(\overline{a})>0$. Then $b>0$ or $-b>0$. Next, assume that $b,c\in K\langle a\rangle^{\times}$ and $b,c>0$; we show that $b+c>0$ and $bc>0$. We have $b=sP(a)/Q(a)$ and $c=tG(a)/H(a)$ with $s,t\in K^{>}$, $P$ and $Q$ as above, and $G\in K[Y,\dots,Y^{(r)}]^{\neq}$ and $H\in K[Y,\dots,Y^{(r-1)}]^{\neq}$ such that $\deg_{Y^{(r)}}G<d$ and $G\asymp H\asymp 1$. First we show that $b+c>0$. Without loss of generality, $s\preccurlyeq t$, so we have $b+c=t(st^{-1}HP+QG)(a)/QH(a)$ with $\deg_{Y^{(r)}}(st^{-1}HP+QG)<d$. Then $\frac{\overline{(st^{-1}HP+QG)}(\overline{a})}{\overline{QH}(\overline{a})}\ =\ \overline{st^{-1}}\frac{\overline{P}(\overline{a})}{\overline{Q}(\overline{a})}+\frac{\overline{G}(\overline{a})}{\overline{H}(\overline{a})}\ >\ 0,$ so $b+c>0$. Now we show that $bc>0$; we may assume that $s=t=1$. If $G\in K[Y,\dots,Y^{(r-1)}]$, then $bc=PG(a)/QH(a)$ with $\deg_{Y^{(r)}}PG<d$ and $\frac{\overline{PG}(\overline{a})}{\overline{QH}(\overline{a})}\ =\ \frac{\overline{P}(\overline{a})}{\overline{Q}(\overline{a})}\cdot\frac{\overline{G}(\overline{a})}{\overline{H}(\overline{a})}\ >\ 0.$ It therefore suffices to consider the case that $b=P(a)$ and $c=G(a)$. By division with remainder in $\mathcal{O}[Y,\dots,Y^{(r)}]$ we have $I^{m}PG=BF+R$ with $B,R\in\mathcal{O}[Y,\dots,Y^{(r)}]$ and $\deg_{Y^{(r)}}R<d$, and thus $bc=R(a)/I^{m}(a)$. But $\overline{R}(\overline{a})/\overline{I^{m}}(\overline{a})=\overline{P}(\overline{a})\cdot\overline{G}(\overline{a})>0$, so $bc>0$. Thus we have defined an ordering on $K\langle a\rangle$ making it an ordered field extension of $K$. An easy calculation shows that if $b\prec 1$, then $-1<b<1$, so the valuation ring of $K\langle a\rangle$ is convex with respect to this ordering (see [3, Lemma 3.5.11]), and by construction it induces the given ordering on $\bm{k}\langle\overline{a}\rangle$. Finally, suppose that $K$ is a pre-$H$-field with gap $0$. As a valued differential field extension of $K$ with small derivation and the same value group, $K\langle a\rangle$ is pre-$\operatorname{d}$-valued [3, Lemma 10.1.9], so it has the same asymptotic couple as $K$ and thus has gap $0$. By [3, Lemma 10.5.5] (with $T=K^{\times}$), $K\langle a\rangle$ is in fact a pre-$H$-field. ∎ Recall that the gaussian valuation on $K\langle Y\rangle$ is defined by setting $v(P)$, for $P\in K\\{Y\\}^{\neq}$, to be the minimum valuation of the coefficients of $P$; for more details, see [3, §4.5 and §6.3]. ###### Lemma 3.3. Suppose that $K$ is an ordered valued differential field with small derivation and convex valuation ring. Consider $K\langle Y\rangle$ with the gaussian valuation. Suppose that $\bm{k}\langle\overline{Y}\rangle$ is an _ordered_ differential field extension of $\bm{k}$. Then there exists a unique ordering on $K\langle Y\rangle$ making it an ordered field extension of $K$ with convex valuation ring such that the induced ordering on $\bm{k}\langle\overline{Y}\rangle$ agrees with the given one. If $K$ is a pre-$H$-field with gap $0$, then so is $K\langle Y\rangle$. ###### Proof. The proof is very similar to that of the previous lemma, but easier. ∎ ###### Corollary 3.4. Suppose that $K$ is an ordered valued differential field with small derivation and convex valuation ring. 
Let $\bm{k}_{L}$ be an ordered differential field extension of $\bm{k}$. Then $K$ has an ordered valued differential field extension $L$ with the following properties: 1. (i) $\Gamma_{L}=\Gamma$; 2. (ii) $L$ has small derivation; 3. (iii) $\mathcal{O}_{L}$ is convex; 4. (iv) $\operatorname{res}(L)\cong\bm{k}_{L}$ over $\bm{k}$ (as ordered differential fields); 5. (v) for any ordered valued differential field extension $M$ of $K$ with convex valuation ring that is $\operatorname{d}$-henselian, every embedding $\operatorname{res}(L)\to\bm{k}_{M}$ over $\bm{k}$ is induced by an embedding $L\to M$ over $K$. Moreover, if $K$ is a pre-$H$-field with gap $0$, then so is $L$. ###### Proof. First, note that we can reduce to the case that $\bm{k}_{L}=\bm{k}\langle y\rangle$. Suppose that $y$ is $\operatorname{d}$-transcendental over $\bm{k}$. Set $L\coloneqq K\langle Y\rangle$, equipped with the gaussian valuation and the ordering from Lemma 3.3 so that $\operatorname{res}(L)=\bm{k}\langle\overline{Y}\rangle\cong\bm{k}\langle y\rangle$ over $\bm{k}$. Let $M$ be an ordered valued differential field extension of $K$ with convex valuation ring, and suppose that $M$ is $\operatorname{d}$-henselian and $i\colon\bm{k}\langle\overline{Y}\rangle\to\bm{k}_{M}$ is an embedding over $\bm{k}$. Take $b\in M$ with $b\asymp 1$ and $\overline{b}=i(\overline{Y})$. Then [3, Lemma 6.3.1] provides a valued differential field embedding $L\to M$ over $K$ sending $Y$ to $b$; this is an ordered field embedding by the uniqueness in Lemma 3.3. Now suppose that $y$ is $\operatorname{d}$-algebraic over $\bm{k}$. Take $F\in\mathcal{O}\\{Y\\}$ so that $\overline{F}\in\bm{k}\\{Y\\}$ is a minimal annihilator of $y$ over $\bm{k}$ and $F$ and $\overline{F}$ have the same order $r$, degree in $Y^{(r)}$, and total degree. Note that $F\asymp I\asymp S\asymp 1$, where $I$ is the initial of $F$ and $S\coloneqq\partial F/\partial Y^{(r)}$ is the separant of $F$. Take a differential field extension $L\coloneqq K\langle a\rangle$ of $K$ such that $a$ has minimal annihilator $F$ over $K$. We equip $L$ with the valuation extending that of $K$ from Theorem 3.1, so $L$ has small derivation and $a\preccurlyeq 1$, and the ordering from Lemma 3.2, making it an ordered field extension of $K$ with convex valuation ring and $\operatorname{res}(L)=\bm{k}\langle\overline{a}\rangle\cong\bm{k}\langle y\rangle$ over $\bm{k}$. Let $M$ be an ordered valued differential field extension of $K$ with convex valuation ring, and suppose that $M$ is $\operatorname{d}$-henselian and $i\colon\bm{k}\langle\overline{a}\rangle\to\bm{k}_{M}$ is an embedding over $\bm{k}$. Let $z\in M$ with $z\preccurlyeq 1$ and $\overline{z}=i(\overline{a})$. By the minimality of $\overline{F}$, we have $\overline{S}\big{(}i(\overline{a})\big{)}\neq 0$, so $S(z)\asymp 1$. In particular, $(F_{+z})_{1}\asymp 1$, so by the $\operatorname{d}$-henselianity of $M$, there is $b\in M$ with $F(b)=0$, $b\preccurlyeq 1$, and $\overline{b}=i(\overline{a})$. Note that then $F$ is a minimal annihilator of $b$ over $K$ by the minimality of $\overline{F}$. Hence by Theorem 3.1 and Lemma 3.2 we may embed $L$ into $M$ over $K$ sending $a$ to $b$. ∎ ## 4\. Asymptotic couples with small derivation Towards our quantifier elimination and model completion results for pre-$H$-fields with gap $0$, we first study their associated asymptotic couples and prove quantifier elimination and model completion results for the theory of such structures. 
We suspend in this section the convention that $\Gamma$ is the value group of $K$. Instead, throughout the section $(\Gamma,\psi)$ is an _$H$-asymptotic couple_, which means that $\Gamma$ is an ordered abelian group and $\psi\colon\Gamma^{\neq}\to\Gamma$ is a map satisfying, for all $\gamma,\delta\in\Gamma^{\neq}$: 1. (AC1) if $\gamma+\delta\neq 0$, then $\psi(\gamma+\delta)\geqslant\min\\{\psi(\gamma),\psi(\delta)\\}$; 2. (AC2) $\psi(k\gamma)=\psi(\gamma)$ for all $k\in\mathbb{Z}^{\neq}$; 3. (AC3) if $\gamma>0$, then $\gamma+\psi(\gamma)>\psi(\delta)$; 4. (HC) if $0<\gamma\leqslant\delta$, then $\psi(\gamma)\geqslant\psi(\delta)$. It follows from (AC2) and (HC) that $\psi$ is constant on archimedean classes of $\Gamma$. For $\gamma\in\Gamma$, we let $[\gamma]\coloneqq\\{\delta\in\Gamma:|\delta|\leqslant n|\gamma|\ \text{and}\ |\gamma|\leqslant n|\delta|\ \text{for some}\ n\\}$ denote its archimedean class, and set $[\Gamma]\coloneqq\\{[\gamma]:\gamma\in\Gamma\\}$, ordering it in the natural way. The map $\psi$ extends uniquely to the divisible hull $\mathbb{Q}\Gamma$ of $\Gamma$, defined by $\psi(q\gamma)=\psi(\gamma)$ for $\gamma\in\Gamma^{\neq}$ and $q\in\mathbb{Q}^{\times}$ (for uniqueness see [3, Lemma 6.5.3]), and in this way we always construe $\mathbb{Q}\Gamma$ as an $H$-asymptotic couple $(\mathbb{Q}\Gamma,\psi)$ extending $(\Gamma,\psi)$; it satisfies $\psi(\mathbb{Q}\Gamma^{\neq})=\psi(\Gamma^{\neq})$. Keeping in mind that in later sections $(\Gamma,\psi)$ will be the asymptotic couple of an $H$-asymptotic field (such as a pre-$H$-field), we let $\gamma^{\dagger}\coloneqq\psi(\gamma)$ and $\gamma^{\prime}\coloneqq\gamma^{\dagger}+\gamma$ for $\gamma\in\Gamma^{\neq}$. We let $\Psi\coloneqq\psi(\Gamma^{\neq})$ and let $\Psi^{\downarrow}$ be the downward closure of $\Psi$ in $\Gamma$. For any ordered abelian group $G$ we set $G^{<}\coloneqq\\{g\in G:g<0\\}$ and likewise with $G^{>}$. Thus (AC3) says that $\Psi<(\Gamma^{>})^{\prime}$. For $\beta\in\Gamma$, we say that $(\Gamma,\psi)$ has _gap $\beta$_ if $\Psi<\beta<(\Gamma^{>})^{\prime}$ and _max $\beta$_ if $\max\Psi=\beta$. Note that if $(\Gamma,\psi)$ is the asymptotic couple of an asymptotic field $K$, then $K$ has gap $0$ (in the sense of §2) if and only if $(\Gamma,\psi)$ has gap $0$. It follows easily from [3, Theorem 9.2.1 and Lemma 9.2.9] that $\sup\Psi=0\iff(\Gamma^{>})^{\prime}=\Gamma^{>}$ and that $\sup\Psi=0$ with $0\notin\Psi$ if and only if $(\Gamma,\psi)$ has gap $0$. Thus $\sup\Psi=0$ if and only if $(\Gamma,\psi)$ has gap $0$ or max $0$. We are concerned primarily with $H$-asymptotic couples having gap $0$, but using similar techniques we prove analogous results for asymptotic couples with max $0$, although we do not use them in the main results of the paper. Before stating the quantifier elimination and model completion results, we specify the language $\mathcal{L}_{\operatorname{ac}}=\\{+,-,\leqslant,0,\infty,\psi\\}$ of asymptotic couples. The underlying set of an $H$-asymptotic couple $(\Gamma,\psi)$ in this language is $\Gamma_{\infty}\coloneqq\Gamma\cup\\{\infty\\}$, and we interpret $\infty$ in the following way: for all $\gamma\in\Gamma$, $\infty+\gamma=\gamma+\infty\coloneqq\infty$ and $\gamma<\infty$; $\infty+\infty\coloneqq\infty$; $-\infty\coloneqq\infty$; $\psi(0)=\psi(\infty)\coloneqq\infty$. The other symbols have the expected interpretation. ###### Theorem 4.1.
The theory of nontrivial divisible $H$-asymptotic couples $(\Gamma,\psi)$ with $\Psi=\Gamma^{<}$ has quantifier elimination, and is the model completion of the theory of $H$-asymptotic couples with gap $0$. In this paper, we use this theorem via the following corollary. For $n\geqslant 1$, $\alpha_{1},\dots,\alpha_{n}\in\Gamma$, and $\gamma\in\Gamma$, we define the function $\psi_{\alpha_{1},\dots,\alpha_{n}}\colon\Gamma_{\infty}\to\Gamma_{\infty}$ recursively by $\psi_{\alpha_{1}}(\gamma)\coloneqq\psi(\gamma-\alpha_{1})\qquad\text{and}\qquad\psi_{\alpha_{1},\dots,\alpha_{n}}(\gamma)\coloneqq\psi\big{(}\psi_{\alpha_{1},\dots,\alpha_{n-1}}(\gamma)-\alpha_{n}\big{)}\ \text{for}\ n\geqslant 2.$ ###### Corollary 4.2. Let $(\Gamma,\psi)$ be a nontrivial divisible $H$-asymptotic couple with $\Psi=\Gamma^{<}$ and let $(\Gamma^{*},\psi^{*})$ be an $H$-asymptotic couple extending $(\Gamma,\psi)$ with gap $0$. Suppose $n\geqslant 1$, $\alpha_{1},\dots,\alpha_{n}\in\Gamma$, $q_{1},\dots,q_{n}\in\mathbb{Q}$, and $\gamma^{*}\in\Gamma^{*}$ are such that: 1. (i) $\psi^{*}_{\alpha_{1},\dots,\alpha_{n}}(\gamma^{*})\neq\infty\ (\text{so}\ \psi^{*}_{\alpha_{1},\dots,\alpha_{i}}(\gamma^{*})\neq\infty\ \text{for}\ i=1,\dots,n);$ 2. (ii) $\gamma^{*}+q_{1}\psi^{*}_{\alpha_{1}}(\gamma^{*})+\dots+q_{n}\psi^{*}_{\alpha_{1},\dots,\alpha_{n}}(\gamma^{*})\in\Gamma\ (\text{in}\ \mathbb{Q}\Gamma^{*}).$ Then $\gamma^{*}\in\Gamma$. ###### Proof. By Theorem 4.1, $(\Gamma,\psi)$ is an existentially closed $H$-asymptotic couple with gap $0$ (see [3, Lemma B.10.10]), so we have $\gamma\in\Gamma$ with $\gamma+q_{1}\psi_{\alpha_{1}}(\gamma)+\dots+q_{n}\psi_{\alpha_{1},\dots,\alpha_{n}}(\gamma)\ =\ \gamma^{*}+q_{1}\psi^{*}_{\alpha_{1}}(\gamma^{*})+\dots+q_{n}\psi^{*}_{\alpha_{1},\dots,\alpha_{n}}(\gamma^{*}).$ It remains to use [3, Lemma 9.9.3] to obtain $\gamma^{*}=\gamma\in\Gamma$. ∎ The rest of the section is devoted to proving Theorem 4.1, as well as an analogue for $H$-asymptotic couples with max $0$. The material in this section is based on work in progress by Aschenbrenner, van den Dries, and van der Hoeven, tentatively titled “Revisiting Closed Asymptotic Couples.” This work improves [1], in which a result similar to Theorem 4.1 is obtained for a different theory of $H$-asymptotic couples, by introducing several new lemmas that simplify the arguments. We use these lemmas to adapt their new proof of quantifier elimination to this setting. Since their work is unpublished, we quote the results that we use and give their proofs, also due to Aschenbrenner, van den Dries, and van der Hoeven. Moreover, many of the proofs of results specific to the setting of gap $0$ or max $0$ are very similar to proofs of analogous results in their setting. We are indebted to those authors for allowing the use of their manuscript. In contrast with the results of [1], here we do not need to expand the language by a predicate for the $\Psi$-set or by functions for divisibility by nonzero natural numbers. Additionally, those authors work over an arbitrary ordered scalar field $\bm{k}$, but here we work over $\mathbb{Q}$ for concreteness (the results of this section hold in that setting in the language $\mathcal{L}_{\operatorname{ac}}$ expanded by functions for scalar multiplication). ### 4.A. Preliminaries The first two lemmas are due to Aschenbrenner, van den Dries, and van der Hoeven, as part of the work in progress described above, while the third combines [3, Corollary 9.8.8] with another case needed here. ###### Lemma 4.3. 
Suppose that $\Psi$ is downward closed. Let $(\Gamma_{1},\psi_{1})$ and $(\Gamma_{*},\psi_{*})$ be $H$-asymptotic couples extending $(\Gamma,\psi)$ such that $\Gamma^{<}$ is cofinal in $\Gamma_{1}^{<}$. Suppose that $\gamma_{1}\in\Gamma_{1}\setminus\Gamma$ and $\gamma_{*}\in\Gamma_{*}\setminus\Gamma$ realize the same cut in $\Gamma$ with $\gamma_{1}^{\dagger}\notin\Gamma$. Then $\gamma_{*}^{\dagger}\notin\Gamma$ and $\gamma_{*}^{\dagger}$ realizes the same cut in $\Gamma$ as $\gamma_{1}^{\dagger}$. ###### Proof. Let $\alpha\in\Gamma^{\neq}$. We claim that: $\gamma_{1}^{\dagger}<\alpha^{\dagger}\implies\gamma_{*}^{\dagger}<\alpha^{\dagger}\qquad\text{and}\qquad\gamma_{1}^{\dagger}>\alpha^{\dagger}\implies\gamma_{*}^{\dagger}>\alpha^{\dagger}.$ First, suppose that $\gamma_{1}^{\dagger}<\alpha^{\dagger}$. Then $|\gamma_{1}|>|\alpha|$, so $|\gamma_{*}|>|\alpha|$, and thus $\gamma_{*}^{\dagger}\leqslant\alpha^{\dagger}$. Since $\Gamma^{<}$ is cofinal in $\Gamma_{1}^{<}$, there is $\delta\in\Gamma$ with $\gamma_{1}^{\dagger}<\delta<\alpha^{\dagger}$. By taking $\beta\in\Gamma^{\neq}$ with $\beta^{\dagger}=\delta$, since $\Psi$ is downward closed, and replacing $\alpha$ by $\beta$ in the argument, we obtain $\gamma_{*}^{\dagger}\leqslant\beta^{\dagger}<\alpha^{\dagger}$. Now suppose that $\gamma_{1}^{\dagger}>\alpha^{\dagger}$, so we obtain $\gamma_{*}^{\dagger}\geqslant\alpha^{\dagger}$ in the same way. By the cofinality assumption, there is $\delta\in\Gamma$ with $\gamma_{1}^{\dagger}>\delta>\alpha^{\dagger}$. Note that $\delta\in\Psi^{\downarrow}$ because there is $\beta\in\Gamma^{\neq}$ with $|\gamma_{1}|\geqslant|\beta|$, so $\gamma_{1}^{\dagger}<\beta^{\dagger}\in\Psi$. Hence similar reasoning as in the first case works. This also shows how the lemma follows from the claim. ∎ ###### Lemma 4.4. Suppose that $(\Gamma_{1},\psi_{1})$ is an $H$-asymptotic couple extending $(\Gamma,\psi)$, and let $\gamma_{1}\in\Gamma_{1}\setminus\Gamma$ and $\alpha_{1},\alpha_{2}\in\Gamma$. Suppose that $\beta_{1}\coloneqq\gamma_{1}-\alpha_{1}$ and $\beta_{2}\coloneqq\beta_{1}^{\dagger}-\alpha_{2}$ satisfy $\beta_{1}^{\dagger}\notin\Gamma$ and $\beta_{2}^{\dagger}\notin\Psi$, and that $|\beta_{1}|\geqslant|\gamma|$ for some $\gamma\in\Gamma^{\neq}$. Then $\beta_{1}^{\dagger}<\beta_{2}^{\dagger}$. ###### Proof. Take $\gamma\in\Gamma^{\neq}$ with $|\beta_{1}|\geqslant|\gamma|$, so $\beta_{1}^{\dagger}\leqslant\gamma^{\dagger}$. From $\beta_{2}^{\dagger}\notin\Psi$ we obtain $[\beta_{1}^{\dagger}-\alpha_{2}]\notin[\Gamma]$. Thus $[\beta_{1}^{\dagger}-\gamma^{\dagger}]\geqslant[\beta_{1}^{\dagger}-\alpha_{2}]$, as otherwise $[\beta_{1}^{\dagger}-\alpha_{2}]=[\gamma^{\dagger}-\alpha_{2}]\in[\Gamma]$. Then, using [3, Lemma 6.5.4(i)] for the first inequality, we get $\beta_{1}^{\dagger}\ =\ \min\\{\beta_{1}^{\dagger},\gamma^{\dagger}\\}\ <\ (\beta_{1}^{\dagger}-\gamma^{\dagger})^{\dagger}\ \leqslant\ (\beta_{1}^{\dagger}-\alpha_{2})^{\dagger}\ =\ \beta_{2}^{\dagger}.\qed$ ###### Lemma 4.5. Let $\beta\in\Psi^{\downarrow}\setminus\Psi$ or $\beta$ be a gap in $(\Gamma,\psi)$. Then there is an $H$-asymptotic couple $(\Gamma\oplus\mathbb{Z}\alpha,\psi^{\alpha})$ extending $(\Gamma,\psi)$ such that: 1. (i) $\alpha>0$ and $\psi^{\alpha}(\alpha)=\beta$; 2. 
(ii) given any embedding $i\colon(\Gamma,\psi)\to(\Gamma^{*},\psi^{*})$ and $\alpha^{*}\in\Gamma^{*}$ with $\alpha^{*}>0$ and $\psi^{*}(\alpha^{*})=i(\beta)$, there is a unique embedding $j\colon(\Gamma\oplus\mathbb{Z}\alpha,\psi^{\alpha})\to(\Gamma^{*},\psi^{*})$ extending $i$ with $j(\alpha)=\alpha^{*}$. ###### Proof. Apply [3, Lemma 9.8.7] with $C=\\{[\gamma]:\gamma\in\Gamma^{\neq},\ \psi(\gamma)>\beta\\}$. ∎ We call $(\Gamma,\psi)$ _gap-closed_ if $\Gamma$ is nontrivial and divisible, and $\Psi=\Gamma^{<}$. Similarly, we call $(\Gamma,\psi)$ _max-closed_ if $\Gamma$ is divisible and $\Psi=\Gamma^{\leqslant}$. Then we call an $H$-asymptotic couple $(\Gamma_{1},\psi_{1})$ extending $(\Gamma,\psi)$ a _gap-closure_ of $(\Gamma,\psi)$ if it is gap-closed and it embeds over $(\Gamma,\psi)$ into every gap-closed $H$-asymptotic couple extending $(\Gamma,\psi)$. Similarly, we call an $H$-asymptotic couple $(\Gamma_{1},\psi_{1})$ extending $(\Gamma,\psi)$ a _max-closure_ of $(\Gamma,\psi)$ if it is max-closed and it embeds over $(\Gamma,\psi)$ into every max-closed $H$-asymptotic couple extending $(\Gamma,\psi)$. ###### Corollary 4.6. Every $H$-asymptotic couple $(\Gamma,\psi)$ with $\sup\Psi=0\notin\Psi$ has a gap-closure. Every $H$-asymptotic couple $(\Gamma,\psi)$ with $\sup\Psi=0$ has a max-closure. ###### Proof. Let $(\Gamma,\psi)$ be an $H$-asymptotic couple. Suppose that $\sup\Psi=0\notin\Psi$ and $\Gamma\neq\\{0\\}$. By iterating Lemma 4.5, first construct an $H$-asymptotic couple $(\Gamma_{1},\psi_{1})$ extending $(\Gamma,\psi)$ such that $(\Gamma_{1},\psi_{1})$ embeds over $(\Gamma,\psi)$ into every gap-closed $H$-asymptotic couple extending $(\Gamma,\psi)$ and that for every $\gamma\in\Gamma^{<}$, there is $\gamma_{1}\in\Gamma_{1}^{\neq}$ with $\psi_{1}(\gamma_{1})=\gamma$. Second, take the divisible hull $(\mathbb{Q}\Gamma_{1},\psi_{1})$ of $(\Gamma_{1},\psi_{1})$. Alternating these procedures yields a gap-closure of $(\Gamma,\psi)$. Now suppose that $\Gamma=\\{0\\}$. Then let $\Gamma_{0}\coloneqq\bigoplus_{n}\mathbb{Q}\gamma_{n}$ be the ordered vector space over $\mathbb{Q}$ satisfying $\gamma_{n}<0$ and $[\gamma_{n}]>[\gamma_{n+1}]$ for all $n$, and equip it with the function $\psi_{0}$ defined by $\psi_{0}(q_{1}\gamma_{n_{1}}+\dots+q_{m}\gamma_{n_{m}})=\gamma_{n_{1}+1}$ for all $m\geqslant 1$, $n_{1}>\dots>n_{m}$, and $q_{1},\dots,q_{m}\in\mathbb{Q}^{\times}$. Then by the same reasoning as in Lemma 4.9, $(\Gamma_{0},\psi_{0})$ is an $H$-asymptotic couple with gap $0$ that embeds into every gap-closed $H$-asymptotic couple, so a gap-closure of $(\Gamma_{0},\psi_{0})$ is also a gap-closure of $(\\{0\\},\psi)$. A similar argument to the first paragraph shows that $(\Gamma,\psi)$ has a max-closure when $\sup\Psi=0$. ∎ ### 4.B. Quantifier elimination with gap 0 We now turn to the proof of quantifier elimination for gap-closed $H$-asymptotic couples. To that end, suppose that $(\Gamma,\psi)$ is a divisible $H$-asymptotic couple with gap $0$, and let $(\Gamma_{1},\psi_{1})$ and $(\Gamma_{*},\psi_{*})$ be gap-closed $H$-asymptotic couples extending $(\Gamma,\psi)$ such that $(\Gamma_{*},\psi_{*})$ is $|\Gamma|^{+}$-saturated. Let $\gamma_{1}\in\Gamma_{1}\setminus\Gamma$ and $(\Gamma\langle\gamma_{1}\rangle,\psi_{1})$ be the divisible $H$-asymptotic couple generated by $\Gamma\cup\\{\gamma_{1}\\}$ in $(\Gamma_{1},\psi_{1})$. In light of standard quantifier elimination tests, our goal is to embed $(\Gamma\langle\gamma_{1}\rangle,\psi_{1})$ into $(\Gamma_{*},\psi_{*})$ over $\Gamma$. 
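For the reader's convenience, we recall the shape of the test being invoked; the following is standard model-theoretic background, stated with simplified bookkeeping rather than quoted from a specific source. A theory $T$ has quantifier elimination provided that whenever $M,N\models T$ with $N$ sufficiently saturated, $A$ is a substructure of $M$ with an embedding $i\colon A\to N$, and $a\in M\setminus A$, the embedding $i$ extends to an embedding into $N$ of the substructure of $M$ generated by $A\cup\\{a\\}$. In the present setting, $M=(\Gamma_{1},\psi_{1})$, $N=(\Gamma_{*},\psi_{*})$, the substructure corresponds to $(\Gamma,\psi)$, $a=\gamma_{1}$, and the generated substructure is contained in $(\Gamma\langle\gamma_{1}\rangle,\psi_{1})$, which is why it suffices to produce the embedding just described.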
The first two lemmas are adapted from the aforementioned work in progress, while the third is particular to the case of gap $0$ but is proved similarly. For convenience, we set $0^{\dagger}\coloneqq\psi(0)=\infty$, so $\Gamma^{\dagger}=\Psi\cup\\{\infty\\}$. ###### Lemma 4.7. Suppose that $(\Gamma+\mathbb{Q}\gamma_{1})^{\dagger}=\Gamma^{\dagger}$. Then $(\Gamma\langle\gamma_{1}\rangle,\psi_{1})$ can be embedded into $(\Gamma_{*},\psi_{*})$ over $\Gamma$. ###### Proof. From $(\Gamma+\mathbb{Q}\gamma_{1})^{\dagger}=\Gamma^{\dagger}$, we get $\Gamma\langle\gamma_{1}\rangle=\Gamma+\mathbb{Q}\gamma_{1}$. Note that there is no $\beta_{1}\in\Gamma+\mathbb{Q}\gamma_{1}$ with $0<\beta_{1}<\Gamma^{>}$: otherwise, $\psi_{1}(\beta_{1})>\Psi$, since $\Psi$ has no greatest element, contradicting that $(\Gamma+\mathbb{Q}\gamma_{1})^{\dagger}=\Gamma^{\dagger}$. _Case 1: $[\Gamma+\mathbb{Q}\gamma_{1}]=[\Gamma]$._ By saturation, we may take $\gamma_{*}\in\Gamma_{*}$ realizing the same cut in $\Gamma$ as $\gamma_{1}$. Then we have an embedding $i\colon\Gamma+\mathbb{Q}\gamma_{1}\to\Gamma_{*}$ of ordered vector spaces over $\mathbb{Q}$ that is the identity on $\Gamma$ and satisfies $i(\gamma_{1})=\gamma_{*}$ by [3, Lemma 2.4.16]. Now for $\gamma\in\Gamma+\mathbb{Q}\gamma_{1}$ we have $[i(\gamma)]=[\gamma]\in[\Gamma]$, so $i(\gamma)^{\dagger}=\gamma^{\dagger}\in\Psi\cup\\{\infty\\}$. Hence $i$ is an embedding of $(\Gamma\langle\gamma_{1}\rangle,\psi_{1})$ into $(\Gamma_{*},\psi_{*})$ over $\Gamma$. _Case 2: $[\Gamma+\mathbb{Q}\gamma_{1}]\neq[\Gamma]$._ Take $\beta_{1}\in(\Gamma+\mathbb{Q}\gamma_{1})\setminus\Gamma$ with $\beta_{1}>0$ and $[\beta_{1}]\notin[\Gamma]$, so $[\Gamma\langle\gamma_{1}\rangle]=[\Gamma]\cup\\{[\beta_{1}]\\}$. Let $D$ be the cut in $\Gamma$ realized by $\beta_{1}$ and $E\coloneqq\Gamma\setminus D$, so $D<\beta_{1}<E$. First, we claim that $D$ has no greatest element. If it did have a greatest element $\delta$, then $0<\beta_{1}-\delta<\Gamma^{>}$, contradicting the comment at the beginning of the proof. Similarly, $E$ has no least element. Thus by saturation we have $\beta_{*}\in\Gamma_{*}$ realizing the same cut in $\Gamma$ as $\beta_{1}$ with $\beta_{*}^{\dagger}=\beta_{1}^{\dagger}$. Then [3, Lemma 2.4.16] yields an embedding $i\colon\Gamma+\mathbb{Q}\gamma_{1}\to\Gamma_{*}$ of ordered vector spaces over $\mathbb{Q}$ that is the identity on $\Gamma$ and satisfies $i(\beta_{1})=\beta_{*}$. This embedding is also an embedding of $H$-asymptotic couples: for $\gamma\in\Gamma+\mathbb{Q}\gamma_{1}$, either $[\gamma]\in[\Gamma]$, and then $i(\gamma)^{\dagger}=\gamma^{\dagger}$ as in Case 1, or $[\gamma]=[\beta_{1}]$, and then $i(\gamma)^{\dagger}=\beta_{*}^{\dagger}=\beta_{1}^{\dagger}=\gamma^{\dagger}$. ∎ ###### Lemma 4.8. Suppose that $(\Gamma,\psi)$ is gap-closed, $(\Gamma+\mathbb{Q}\gamma)^{\dagger}\neq\Gamma^{\dagger}$ for all $\gamma\in\Gamma_{1}\setminus\Gamma$, and $\Gamma^{<}$ is cofinal in $\Gamma_{1}^{<}$. Then $(\Gamma\langle\gamma_{1}\rangle,\psi_{1})$ can be embedded into $(\Gamma_{*},\psi_{*})$ over $\Gamma$. ###### Proof. Take $\alpha_{1}\in\Gamma$ such that $(\gamma_{1}-\alpha_{1})^{\dagger}\notin\Gamma^{\dagger}$. Since $(\gamma_{1}-\alpha_{1})^{\dagger}<0$ and $\Psi=\Gamma^{<}$, we deduce that $(\gamma_{1}-\alpha_{1})^{\dagger}\notin\Gamma$. Iterating this, we construct sequences $\alpha_{1},\alpha_{2},\dots$ in $\Gamma$ and $\beta_{1},\beta_{2},\dots$ in $\Gamma\langle\gamma_{1}\rangle\setminus\Gamma$ such that $\beta_{1}=\gamma_{1}-\alpha_{1}$ and, for all $n\geqslant 1$, $\beta_{n+1}=\beta_{n}^{\dagger}-\alpha_{n+1}$ and $\beta_{n}^{\dagger}\notin\Gamma$. It follows that $[\beta_{n}]\notin[\Gamma]$, and by Lemma 4.4 we have $\beta_{n}^{\dagger}<\beta_{n+1}^{\dagger}$ and thus $[\beta_{n}]>[\beta_{n+1}]$.
In particular, the family $(\beta_{n})_{n\geqslant 1}$ is $\mathbb{Q}$-linearly independent over $\Gamma$ and $\Gamma\langle\gamma_{1}\rangle\ =\ \Gamma\oplus\mathbb{Q}\beta_{1}\oplus\mathbb{Q}\beta_{2}\oplus\dots.$ By saturation, we take $\gamma_{*}\in\Gamma_{*}\setminus\Gamma$ realizing the same cut in $\Gamma$ as $\gamma_{1}$ and define by recursion on $n\geqslant 1$ the elements $\beta_{*n}\in(\Gamma_{*})_{\infty}$ by $\beta_{*1}\coloneqq\gamma_{*}-\alpha_{1}$ and $\beta_{*(n+1)}\coloneqq\beta_{*n}^{\dagger}-\alpha_{n+1}$. We assume inductively that $\beta_{*m}\in\Gamma_{*}\setminus\Gamma$ for $m=1,\dots,n$, and that we have an embedding $i_{n}\colon\Gamma+\mathbb{Q}\beta_{1}+\dots+\mathbb{Q}\beta_{n}\to\Gamma_{*}$ of ordered vector spaces over $\mathbb{Q}$ that is the identity on $\Gamma$ and satisfies $i_{n}(\beta_{m})=\beta_{*m}$ for $m=1,\dots,n$. Then $\beta_{n}$ and $\beta_{*n}$ realize the same cut in $\Gamma$, so $\beta_{*n}^{\dagger}\notin\Gamma$ and $\beta_{*n}^{\dagger}$ realizes the same cut in $\Gamma$ as $\beta_{n}^{\dagger}$ by Lemma 4.3. Hence $\beta_{*(n+1)}\in\Gamma_{*}\setminus\Gamma$ and $\beta_{n+1}$ and $\beta_{*(n+1)}$ realize the same cut in $\Gamma$. We have $[\Gamma+\mathbb{Q}\beta_{1}+\dots+\mathbb{Q}\beta_{n}]=[\Gamma]\cup\\{[\beta_{1}],\dots,[\beta_{n}]\\}\qquad\text{and}\qquad[\beta_{1}]>\dots>[\beta_{n}]>[\beta_{n+1}].$ Let $D$ be the cut realized by $[\beta_{n+1}]$ in $[\Gamma+\mathbb{Q}\beta_{1}+\dots+\mathbb{Q}\beta_{n}]$. The comments above show that $[\beta_{*(n+1)}]$ realizes the image under $i_{n}$ of $D$ in $[i_{n}(\Gamma+\mathbb{Q}\beta_{1}+\dots+\mathbb{Q}\beta_{n})]$. Thus we may extend $i_{n}$ to an embedding $i_{n+1}\colon\Gamma+\mathbb{Q}\beta_{1}+\dots+\mathbb{Q}\beta_{n}+\mathbb{Q}\beta_{n+1}\to\Gamma_{*}$ of ordered vector spaces over $\mathbb{Q}$ that is the identity on $\Gamma$ and satisfies $i_{n+1}(\beta_{n+1})=\beta_{*(n+1)}$. By induction, this yields a map $i\colon\Gamma\langle\gamma_{1}\rangle\to\Gamma_{*}$ extending each $i_{n}$, so $i$ is an embedding of $H$-asymptotic couples: indeed, $i(\beta_{n}^{\dagger})=i(\beta_{n+1}+\alpha_{n+1})=\beta_{*(n+1)}+\alpha_{n+1}=\beta_{*n}^{\dagger}$ for all $n\geqslant 1$, and $\psi$-values of elements whose archimedean class lies in $[\Gamma]$ are preserved as in Lemma 4.7. ∎ ###### Lemma 4.9. Suppose that $\Gamma^{<}<\gamma_{1}<0$. Then $(\Gamma\langle\gamma_{1}\rangle,\psi_{1})$ can be embedded into $(\Gamma_{*},\psi_{*})$ over $\Gamma$. ###### Proof. Set $\gamma_{1}^{\langle 0\rangle}\coloneqq\gamma_{1}$ and $\gamma_{1}^{\langle n+1\rangle}\coloneqq(\gamma_{1}^{\langle n\rangle})^{\dagger}$ for all $n$. We have $[\gamma_{1}]>[\gamma_{1}^{\langle 1\rangle}]>[\gamma_{1}^{\langle 2\rangle}]>\dots$ by [3, Lemma 9.2.10(iv)], and so $\Gamma^{<}<\gamma_{1}<\gamma_{1}^{\langle 1\rangle}<\gamma_{1}^{\langle 2\rangle}<\dots<0\qquad\text{and}\qquad[\gamma_{1}^{\langle n\rangle}]\notin[\Gamma]\ \text{for all}\ n.$ Hence the family $(\gamma_{1}^{\langle n\rangle})_{n\in\mathbb{N}}$ is $\mathbb{Q}$-linearly independent over $\Gamma$ and $\Gamma\langle\gamma_{1}\rangle\ =\ \Gamma\oplus\mathbb{Q}\gamma_{1}\oplus\mathbb{Q}\gamma_{1}^{\langle 1\rangle}\oplus\mathbb{Q}\gamma_{1}^{\langle 2\rangle}\oplus\dots.$ By saturation, we may take $\gamma_{*}\in\Gamma_{*}\setminus\Gamma$ with $\Gamma^{<}<\gamma_{*}<0$. The above holds in $\Gamma_{*}$ with $\gamma_{*}$ replacing $\gamma_{1}$ (and $\gamma_{*}^{\langle n\rangle}$ defined analogously), so we construct an embedding of $(\Gamma\langle\gamma_{1}\rangle,\psi_{1})$ into $(\Gamma_{*},\psi_{*})$ over $\Gamma$ that sends $\gamma_{1}$ to $\gamma_{*}$ as in the proof of the previous lemma. ∎ We can now complete the proof of Theorem 4.1.
Recall from the introduction to this section the language $\mathcal{L}_{\operatorname{ac}}=\\{+,-,\leqslant,0,\infty,\psi\\}$ of asymptotic couples, though we first prove quantifier elimination in the expanded language $\mathcal{L}_{\operatorname{ac},\operatorname{div}}=\mathcal{L}_{\operatorname{ac}}\cup\\{\operatorname{div}_{n}:n\geqslant 1\\}$, where each unary function symbol $\operatorname{div}_{n}$ is interpreted as division by $n$ with $\operatorname{div}_{n}(\infty)\coloneqq\infty$. ###### Theorem 4.1. The theory of gap-closed $H$-asymptotic couples has quantifier elimination, and it is the model completion of the theory of $H$-asymptotic couples with gap $0$. ###### Proof. That the theory of gap-closed $H$-asymptotic couples has quantifier elimination in $\mathcal{L}_{\operatorname{ac},\operatorname{div}}$ follows from Lemmas 4.7, 4.8, and 4.9, and Corollary 4.6 by a standard quantifier elimination test. (See for example [3, Corollary B.11.11].) To see that it has quantifier elimination in $\mathcal{L}_{\operatorname{ac}}$, recall from the beginning of this section how, for an asymptotic couple $(\Gamma,\psi)$, $\psi$ extends uniquely to the divisible hull $\mathbb{Q}\Gamma$ of $\Gamma$. The desired result then follows from [3, Corollary B.11.5]. The model completion statement follows from quantifier elimination and Corollary 4.6. (See for example [3, Corollary B.11.6].) ∎ ###### Corollary 4.10. The theory of gap-closed $H$-asymptotic couples is complete and has a prime model. ###### Proof. The $H$-asymptotic couple $(\\{0\\},\psi)$, where $\psi\colon\emptyset\to\\{0\\}$ is the empty function, embeds into every gap-closed $H$-asymptotic couple, yielding completeness. (See for example [3, Corollary B.11.7].) It also has gap $0$, so its gap-closure is the prime model of this theory. ∎ ### 4.C. Quantifier elimination with max 0 We derive similar quantifier elimination and model completion results in the setting allowing max $0$. The proofs are as in the previous subsection, except where indicated. This material is only used in one later theorem that itself is not used in the main results, but this subsection is naturally complementary to the previous one. Suppose that $(\Gamma,\psi)$ is a divisible $H$-asymptotic couple with $\sup\Psi=0$. Let $(\Gamma_{1},\psi_{1})$ and $(\Gamma_{*},\psi_{*})$ be max-closed $H$-asymptotic couples extending $(\Gamma,\psi)$ such that $(\Gamma_{*},\psi_{*})$ is $|\Gamma|^{+}$-saturated. Let $\gamma_{1}\in\Gamma_{1}\setminus\Gamma$ and $(\Gamma\langle\gamma_{1}\rangle,\psi_{1})$ be the divisible $H$-asymptotic couple generated by $\Gamma\cup\\{\gamma_{1}\\}$ in $(\Gamma_{1},\psi_{1})$. ###### Lemma 4.11. Suppose that $\max\Psi=0$ and $(\Gamma+\mathbb{Q}\gamma_{1})^{\dagger}=\Gamma^{\dagger}$. Then $(\Gamma\langle\gamma_{1}\rangle,\psi_{1})$ can be embedded into $(\Gamma_{*},\psi_{*})$ over $\Gamma$. ###### Proof. From $(\Gamma+\mathbb{Q}\gamma_{1})^{\dagger}=\Gamma^{\dagger}$, we get $\Gamma\langle\gamma_{1}\rangle=\Gamma+\mathbb{Q}\gamma_{1}$. _Case 1: $[\Gamma+\mathbb{Q}\gamma_{1}]=[\Gamma]$._ As in _Case 1_ of Lemma 4.7. _Case 2: $[\Gamma+\mathbb{Q}\gamma_{1}]\neq[\Gamma]$ but there does not exist $\beta_{1}\in\Gamma+\mathbb{Q}\gamma_{1}$ with $0<\beta_{1}<\Gamma^{>}$._ As in _Case 2_ of Lemma 4.7. _Case 3: there exists $\beta_{1}\in\Gamma+\mathbb{Q}\gamma_{1}$ with $0<\beta_{1}<\Gamma^{>}$._ By saturation, take $\beta_{*}\in\Gamma_{*}$ with $0<\beta_{*}<\Gamma^{>}$, so $\beta_{*}^{\dagger}=\beta_{1}^{\dagger}=0$.
The proof continues as in _Case 2_ of Lemma 4.7 after “$\beta_{*}^{\dagger}=\beta_{1}^{\dagger}$.” ∎ ###### Lemma 4.12. Suppose that $(\Gamma,\psi)$ is max-closed and $(\Gamma+\mathbb{Q}\gamma)^{\dagger}\neq\Gamma^{\dagger}$ for all $\gamma\in\Gamma_{1}\setminus\Gamma$. Then $(\Gamma\langle\gamma_{1}\rangle,\psi_{1})$ can be embedded into $(\Gamma_{*},\psi_{*})$ over $\Gamma$. ###### Proof. If $\gamma\in\Gamma_{1}\setminus\Gamma$ with $0<\gamma<\Gamma^{>}$, then $\gamma^{\dagger}=0$ and so $(\Gamma+\mathbb{Q}\gamma)^{\dagger}=\Gamma^{\dagger}$, a contradiction. Hence there is no such $\gamma$, and thus $\Gamma^{<}$ is cofinal in $\Gamma_{1}^{<}$. The rest of the proof is as in Lemma 4.8. ∎ Recall from the introduction to this section the language $\mathcal{L}_{\operatorname{ac}}$ of asymptotic couples. ###### Theorem 4.13. The theory of max-closed $H$-asymptotic couples has quantifier elimination, and it is the model completion of the theory of $H$-asymptotic couples $(\Gamma,\psi)$ with $\sup\Psi=0$. ###### Corollary 4.14. The theory of max-closed $H$-asymptotic couples is complete and has a prime model. ## 5\. Extensions controlled by the asymptotic couple ### 5.A. A maximality theorem The results and proofs of this section are adapted from [3, §16.1]. This next lemma and its consequences are where we use the quantifier elimination for gap-closed asymptotic couples from §4. Note that if $K$ is an $H$-asymptotic field with exponential integration and gap $0$, then in fact $\Psi=\Gamma^{<}$, so if additionally $\Gamma$ is divisible then $(\Gamma,\psi)$ is a gap-closed $H$-asymptotic couple in the sense of the previous section. For the next lemma, recall the discussion of pc-sequences from §2. ###### Lemma 5.1. Suppose that $K$ is a $\operatorname{d}$-henselian $H$-asymptotic field with exponential integration and gap $0$ whose value group is divisible. Let $L$ be an $H$-asymptotic extension of $K$ with gap $0$ and $\bm{k}_{L}=\bm{k}$, and suppose that there is no $y\in L\setminus K$ such that $K\langle y\rangle$ is an immediate extension of $K$. Let $f\in L\setminus K$. Then the vector space $\mathbb{Q}\Gamma_{K\langle f\rangle}/\Gamma$ is infinite dimensional. ###### Proof. First, we argue that there is no divergent pc-sequence in $K$ with a pseudolimit in $L$. Towards a contradiction, suppose that $(a_{\rho})$ is a divergent pc-sequence in $K$ with pseudolimit $\ell\in L$. Since $K$ is $\operatorname{d}$-henselian and asymptotic, it is $\operatorname{d}$-algebraically maximal [7, Theorem 3.6], so $(a_{\rho})$ is not of $\operatorname{d}$-algebraic type over $K$ by [3, Lemma 6.9.3]. Hence $(a_{\rho})$ is of $\operatorname{d}$-transcendental type over $K$, so $K\langle\ell\rangle$ is an immediate extension of $K$ by [3, Lemma 6.9.1], a contradiction. Thus for all $y\in L\setminus K$, the set $v_{L}(y-K)\subseteq\Gamma_{L}$ has a maximum. If $v_{L}(y-y_{0})=\max v_{L}(y-K)$, then $v_{L}(y-y_{0})\notin\Gamma$ since $\bm{k}_{L}=\bm{k}$. Otherwise, there would be $y_{1}\in K$ with $y-y_{0}\sim y_{1}$, contradicting the maximality of $v_{L}(y-y_{0})$. For convenience, assume below that $L=K\langle f\rangle$. Set $f_{0}\coloneqq f$, pick $b_{0}\in K$ with $v_{L}(f_{0}-b_{0})=\max v_{L}(f_{0}-K)$, and set $f_{1}\coloneqq(f_{0}-b_{0})^{\dagger}\in L$. We claim that $f_{1}\notin K$. Otherwise, there would be $g\in K^{\times}$ with $(f_{0}-b_{0})^{\dagger}=g^{\dagger}$, so $v_{L}(f_{0}-b_{0})=v(g)$, contradicting that $v_{L}(f_{0}-b_{0})\notin\Gamma$. 
By induction we obtain sequences $(f_{n})$ in $L\setminus K$ and $(b_{n})$ in $K$ such that for all $n$: 1. (i) $v_{L}(f_{n}-b_{n})=\max v_{L}(f_{n}-K)$; 2. (ii) $f_{n+1}=(f_{n}-b_{n})^{\dagger}$. Hence $v_{L}(f_{n}-b_{n})\notin\Gamma$ for all $n$. The result follows from the next claim: $v_{L}(f_{0}-b_{0}),\ v_{L}(f_{1}-b_{1}),\ \dots\ \text{are $\mathbb{Q}$-linearly independent over}\ \Gamma.$ To see this, let $n\geqslant 1$ and take $a_{n}\in K^{\times}$ with $a_{n}^{\dagger}=b_{n}$, so $f_{n}-b_{n}\ =\ \left(f_{n-1}-b_{n-1}\right)^{\dagger}-a_{n}^{\dagger}\ =\ \left(\frac{f_{n-1}-b_{n-1}}{a_{n}}\right)^{\dagger},$ and set $\alpha_{n}\coloneqq v(a_{n})\in\Gamma$. Recall the function $\psi_{L,\alpha_{1},\dots,\alpha_{n}}$ defined before Corollary 4.2, where the subscript $L$ indicates that it is defined on $\Gamma_{L}$, not just $\Gamma$. Then we have $v_{L}(f_{n}-b_{n})\ =\ \psi_{L}\big{(}v_{L}(f_{n-1}-b_{n-1})-\alpha_{n}\big{)},$ so by induction we get $v_{L}(f_{n}-b_{n})\ =\ \psi_{L,\alpha_{1},\dots,\alpha_{n}}\big{(}v_{L}(f_{0}-b_{0})\big{)}.$ Suppose towards a contradiction that $v_{L}(f_{0}-b_{0}),\dots,v_{L}(f_{n}-b_{n})$ are $\mathbb{Q}$-linearly dependent over $\Gamma$, so we have $m<n$ and $q_{1},\dots,q_{n-m}\in\mathbb{Q}$ such that $v_{L}(f_{m}-b_{m})+q_{1}v_{L}(f_{m+1}-b_{m+1})+\dots+q_{n-m}v_{L}(f_{n}-b_{n})\in\Gamma.$ With $\gamma\coloneqq v_{L}(f_{m}-b_{m})\in\Gamma_{L}\setminus\Gamma$, this means $\gamma+q_{1}\psi_{L,\alpha_{m+1}}(\gamma)+\dots+q_{n-m}\psi_{L,\alpha_{m+1},\dots,\alpha_{n}}(\gamma)\in\Gamma,$ so $v_{L}(f_{m}-b_{m})=\gamma\in\Gamma$ by Corollary 4.2, a contradiction. ∎ The previous lemma yields a maximality theorem that is used in the following section to prove the minimality of differential-Hensel-Liouville closures, but which is also of independent interest as a strengthening of [7, Theorem 3.6] under additional hypotheses. ###### Theorem 5.2. Suppose that $K$ is a $\operatorname{d}$-henselian $H$-asymptotic field with exponential integration and gap $0$ whose value group is divisible. Then $K$ has no proper $\operatorname{d}$-algebraic $H$-asymptotic extension with gap $0$ and the same residue field. ###### Proof. Let $L$ be a proper $\operatorname{d}$-algebraic $H$-asymptotic extension of $K$ with gap $0$ and $\bm{k}_{L}=\bm{k}$. Since $K$ is $\operatorname{d}$-algebraically maximal [7, Theorem 3.6], there is no $y\in L\setminus K$ such that $K\langle y\rangle$ is an immediate extension of $K$. But for $f\in L\setminus K$, the transcendence degree of $K\langle f\rangle$ over $K$ is finite, so the vector space $\mathbb{Q}\Gamma_{K\langle f\rangle}/\Gamma$ is finite dimensional by the Zariski–Abhyankar inequality [3, Corollary 3.1.11], contradicting Lemma 5.1. ∎ By quantifier elimination for max-closed $H$-asymptotic couples and the same arguments, we also obtain the following, which is not used later. Here we say an asymptotic field $K$ has _max $0$_ if its asymptotic couple does. ###### Theorem 5.3. If $K$ is a $\operatorname{d}$-henselian $H$-asymptotic field with exponential integration and max $0$ whose value group is divisible, then $K$ has no proper $\operatorname{d}$-algebraic $H$-asymptotic extension with max $0$ and the same residue field. We now provide more details about the asymptotic couple of $K\langle f\rangle$ for use in the next subsection. ###### Lemma 5.4. Let $K$, $L$, and $f$ be as in Lemma 5.1, and let the sequences $(f_{n})$, $(b_{n})$, $(a_{n})_{n\geqslant 1}$, and $(\alpha_{n})_{n\geqslant 1}$ be as in the proof of Lemma 5.1. 
Set $\beta_{n}\coloneqq v_{L}(f_{n}-b_{n})-\alpha_{n+1}$. The asymptotic couple $(\Gamma_{K\langle f\rangle},\psi_{L})$ of $K\langle f\rangle$ has the following properties: 1. (i) $\Gamma_{K\langle f\rangle}=\Gamma\oplus\bigoplus_{n}\mathbb{Z}\beta_{n}$ (internal direct sum); 2. (ii) $\beta_{n}^{\dagger}\notin\Gamma$ for all $n$, and $\beta_{m}^{\dagger}\neq\beta_{n}^{\dagger}$ for all $m\neq n$; 3. (iii) $\psi_{L}(\Gamma_{K\langle f\rangle}^{\neq})=\Psi\cup\\{\beta_{n}^{\dagger}:n\in\mathbb{N}\\}$; 4. (iv) $[\beta_{n}]\notin[\Gamma]$ for all $n$, $[\beta_{m}]\neq[\beta_{n}]$ for all $m\neq n$, and $[\Gamma_{K\langle f\rangle}]=[\Gamma]\cup\\{[\beta_{n}]:n\in\mathbb{N}\\}$; 5. (v) if $\Gamma_{K}^{<}$ is cofinal in $\Gamma_{K\langle f\rangle}^{<}$, then $\beta_{0}^{\dagger}<\beta_{1}^{\dagger}<\beta_{2}^{\dagger}<\cdots$. ###### Proof. Set $\mathfrak{m}_{n}\coloneqq(f_{n}-b_{n})/a_{n+1}$, so $v_{L}(\mathfrak{m}_{n})=\beta_{n}$. Then $\displaystyle\mathfrak{m}_{n+1}\ $ $\displaystyle=\ \frac{f_{n+1}-b_{n+1}}{a_{n+2}}\ =\ \frac{(f_{n}-b_{n})^{\dagger}-b_{n+1}}{a_{n+2}}\ =\ \frac{(a_{n+1}\mathfrak{m}_{n})^{\dagger}-b_{n+1}}{a_{n+2}}\ =\ \frac{a_{n+1}^{\dagger}+\mathfrak{m}_{n}^{\dagger}-b_{n+1}}{a_{n+2}}$ $\displaystyle=\ \frac{\mathfrak{m}_{n}^{\dagger}}{a_{n+2}}.$ Hence $\mathfrak{m}_{n}^{\prime}=a_{n+2}\mathfrak{m}_{n}\mathfrak{m}_{n+1}$. From $f=b_{0}+a_{1}\mathfrak{m}_{0}$ we get $f^{\prime}=b_{0}^{\prime}+a_{1}^{\prime}\mathfrak{m}_{0}+a_{1}a_{2}\mathfrak{m}_{0}\mathfrak{m}_{1}$, so induction yields $F_{n}\in K[Y_{0},\dots,Y_{n}]$ with $\deg F_{n}\leqslant n+1$ and $f^{(n)}=F_{n}(\mathfrak{m}_{0},\dots,\mathfrak{m}_{n})$. Thus for $P\in K\\{Y\\}^{\neq}$ of order at most $r$ we have $P(f)=\sum_{\bm{i}\in I}a_{\bm{i}}\mathfrak{m}_{0}^{i_{0}}\dots\mathfrak{m}_{r}^{i_{r}}$, where $I$ is a nonempty finite set of indices $\bm{i}=(i_{0},\dots,i_{r})\in\mathbb{N}^{1+r}$. Note that by the proof of Lemma 5.1, the family $(\beta_{n})$ is $\mathbb{Q}$-linearly independent over $\Gamma$. Hence $v_{L}\big{(}P(f)\big{)}\in\Gamma+\sum_{n}\mathbb{N}\beta_{n}$, which proves (i). By the proof of Lemma 5.1, we also have $\beta_{n}^{\dagger}\ =\ \psi_{L}\big{(}v_{L}(f_{n}-b_{n})-\alpha_{n+1}\big{)}\ =\ v_{L}(f_{n+1}-b_{n+1})\ =\ \beta_{n+1}+\alpha_{n+2}\ \notin\ \Gamma.$ Thus the family $(\beta_{n}^{\dagger})$ is $\mathbb{Q}$-linearly independent over $\Gamma$, since the family $(\beta_{n})$ is, proving (ii). Note that (iii) follows from (i) and (ii). From (ii), we get $[\beta_{n}]\notin[\Gamma]$ and $[\beta_{m}]\neq[\beta_{n}]$ for all $m\neq n$, so (iv) now follows from (i). Finally, (v) follows from (ii) and Lemma 4.4. ∎ ### 5.B. Further consequences in the ordered setting Now we develop further the results of the previous subsection in the pre-$H$-field setting. In this subsection, $K$ and $L$ are pre-$H$-fields with small derivation. Suppose that $K$ is $\operatorname{d}$-henselian and has exponential integration, and that $\Gamma$ is divisible. Suppose that $L$ is an extension of $K$ with $\bm{k}_{L}=\bm{k}$, and that there is no $y\in L\setminus K$ such that $K\langle y\rangle$ is an immediate extension of $K$. Let $f\in L\setminus K$ with $\Gamma^{<}$ cofinal in $\Gamma_{K\langle f\rangle}^{<}$, and let the sequences $(f_{n})$, $(b_{n})$, $(a_{n})_{n\geqslant 1}$, and $(\alpha_{n})_{n\geqslant 1}$ be as in the proof of Lemma 5.1. As before, we also set $\beta_{n}\coloneqq v_{L}(f_{n}-b_{n})-\alpha_{n+1}$. 
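For ease of reference, we gather here the notation just recalled (our recapitulation of the proof of Lemma 5.1; nothing new is asserted):
$f_{0}\ =\ f,\qquad v_{L}(f_{n}-b_{n})\ =\ \max v_{L}(f_{n}-K),\qquad f_{n+1}\ =\ (f_{n}-b_{n})^{\dagger},\qquad a_{n+1}^{\dagger}\ =\ b_{n+1},\qquad\alpha_{n+1}\ =\ v(a_{n+1}),$
so that $\beta_{n}=v_{L}(\mathfrak{m}_{n})$ for $\mathfrak{m}_{n}=(f_{n}-b_{n})/a_{n+1}$ as in the proof of Lemma 5.4.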
Note that since $K$ is a pre-$H$-field with small derivation and nontrivial induced derivation on $\bm{k}$, it has gap $0$, and so does $L$. ###### Lemma 5.5. Suppose $M$ is a pre-$H$-field extension of $K$ and $g\in M$ realizes the same cut in $K$ as $f$. Then $v_{M}(g-b_{0})=\max v_{M}(g-K)\notin\Gamma$ and $g_{1}\coloneqq(g-b_{0})^{\dagger}$ realizes the same cut in $K$ as $f_{1}$. ###### Proof. Let $\alpha\in\Gamma$ and $b\in K$. We first claim that $v_{L}(f-b)<\alpha\iff v_{M}(g-b)<\alpha\qquad\text{and}\qquad v_{L}(f-b)>\alpha\iff v_{M}(g-b)>\alpha.$ To see this, take $a\in K^{>}$ with $va=\alpha$. Suppose that $v_{L}(f-b)<\alpha$, so $|f-b|>a$. Hence $|g-b|>a$ and thus $v_{M}(g-b)\leqslant\alpha$. By the cofinality assumption, take $\delta\in\Gamma$ with $v_{L}(f-b)<\delta<\alpha$, and then the same argument yields $v_{M}(g-b)\leqslant\delta<\alpha$. One proves similarly that $v_{L}(f-b)>\alpha\implies v_{M}(g-b)>\alpha$. Finally, consider the case that $v_{L}(f-b)=\alpha$. This yields $f-b\sim ua$ for some $u\in K$ with $u\asymp 1$, since $\bm{k}=\bm{k}_{L}$. From the convexity of $\mathcal{O}_{K\langle f\rangle}$ we obtain $|u|a/2<|f-b|<2|u|a$, so $|u|a/2<|g-b|<2|u|a$, and thus $v_{M}(g-b)=va=\alpha$, completing the proof of the claim. By the claim above and the fact that $v_{L}(f-b_{0})\notin\Gamma$, we get $v_{M}(g-b_{0})\notin\Gamma$. This yields $v_{M}(g-b_{0})=\max v_{M}(g-K)$, as otherwise we would have $b\in K$ with $v_{M}(g-b)>v_{M}(g-b_{0})$, so $v_{M}(g-b_{0})=v(b-b_{0})\in\Gamma$, a contradiction. It also follows that $(g-b_{0})^{\dagger}\notin K$, as otherwise $(g-b_{0})^{\dagger}=b^{\dagger}$ for some $b\in K^{\times}$, so $v_{M}(g-b_{0})=vb\in\Gamma$, a contradiction. Finally, we show that $(g-b_{0})^{\dagger}$ realizes the same cut in $K$ as $(f-b_{0})^{\dagger}$. By replacing $f$, $g$, and $b_{0}$ with $-f$, $-g$, and $-b_{0}$ if necessary, we may assume that $f>b_{0}$, so $g>b_{0}$. First, suppose that we have $h\in K$ with $(f-b_{0})^{\dagger}<h$ and $h<(g-b_{0})^{\dagger}$. Take $\phi\in K^{>}$ with $h=\phi^{\dagger}$ and set $s\coloneqq(f-b_{0})/\phi$. Then we have $s>0$ and $s^{\dagger}=(f-b_{0})^{\dagger}-h<0$. By [3, Lemma 10.5.2(i)], $v_{L}(s)\geqslant 0$, but since $v_{L}(f-b_{0})\notin\Gamma$, we get $v_{L}(s)>0$; in particular, $0<s<1$ (see [3, Lemma 3.5.11]). Similarly, $h<(g-b_{0})^{\dagger}$ gives $t\coloneqq(g-b_{0})/\phi>0$ and $t^{\dagger}>0$, so $v_{M}(t)<0$; in particular, $t>1$. Putting this together yields $f\ =\ b_{0}+\phi s\ <\ b_{0}+\phi\ \qquad\text{and}\ \qquad b_{0}+\phi\ <\ b_{0}+\phi t\ =\ g,$ contradicting that $f$ and $g$ realize the same cut in $K$. The other case, that there is $h\in K$ with $(f-b_{0})^{\dagger}>h$ and $h>(g-b_{0})^{\dagger}$, is handled in the same fashion. ∎ ###### Proposition 5.6. Suppose that $M$ is a pre-$H$-field extension of $K$ with gap $0$ and $g\in M$ realizes the same cut in $K$ as $f$. Then there exists an embedding $K\langle f\rangle\to M$ over $K$ with $f\mapsto g$. ###### Proof. Define $g_{0}\coloneqq g$ and $g_{n+1}\coloneqq(g_{n}-b_{n})^{\dagger}$ for all $n$, so by the previous lemma $g_{n}\in M\setminus K$ realizes the same cut in $K$ as $f_{n}$, and in particular $v_{M}(g_{n}-b_{n})\notin\Gamma$ for all $n$. Then using the same argument as in the proof of Lemma 5.1, we have that $v_{M}(g_{0}-b_{0}),v_{M}(g_{1}-b_{1}),\dots$ are $\mathbb{Q}$-linearly independent over $\Gamma$.
Set $\beta_{n}^{*}\coloneqq v_{M}(g_{n}-b_{n})-\alpha_{n+1}$ and $\mathfrak{m}_{n}^{*}\coloneqq(g_{n}-b_{n})/a_{n+1}$, so $v_{M}(\mathfrak{m}_{n}^{*})=\beta_{n}^{*}$ and the family $(\beta_{n}^{*})$ is $\mathbb{Q}$-linearly independent over $\Gamma$. Note that since $f_{n}$ and $g_{n}$ realize the same cut in $K$, so do $\mathfrak{m}_{n}$ and $\mathfrak{m}_{n}^{*}$, and hence $\beta_{n}$ and $\beta_{n}^{*}$ realize the same cut in $\Gamma$. From the proof of Lemma 5.4 we have $F_{n}(Y_{0},\dots,Y_{n})\in K[Y_{0},\dots,Y_{n}]$ with $\deg F_{n}\leqslant n+1$ and $g^{(n)}=F_{n}(\mathfrak{m}_{0}^{*},\dots,\mathfrak{m}_{n}^{*})$. For $P\in K\\{Y\\}^{\neq}$ of order at most $r$ we thus get $P(g)=\sum_{\bm{i}\in I}a_{\bm{i}}\mathfrak{m}_{0}^{*i_{0}}\cdots\mathfrak{m}_{r}^{*i_{r}}$, where $I$ is the same nonempty finite index set and $a_{\bm{i}}$ are the same coefficients as in the proof of Lemma 5.4. Since the family $(\beta_{n}^{*})$ is $\mathbb{Q}$-linearly independent over $\Gamma$, we have that $v_{M}\big{(}P(g)\big{)}\in\Gamma+\sum_{n}\mathbb{N}\beta_{n}^{*}$. The rest of the proof of Lemma 5.4 now goes through replacing $f_{n}$ with $g_{n}$ and $\beta_{n}$ with $\beta_{n}^{*}$. From this we obtain an ordered abelian group isomorphism $j\colon\Gamma_{K\langle f\rangle}\to\Gamma_{K\langle g\rangle}$ over $\Gamma$ with $\beta_{n}\mapsto\beta_{n}^{*}$. Using the expressions for $P(f)$ and $P(g)$, we get $j\big{(}v_{L}(P(f))\big{)}=v_{M}(P(g))$ for all $P\in K\\{Y\\}^{\neq}$, so we have a valued differential field embedding $K\langle f\rangle\to M$ over $K$ with $f\mapsto g$. By the above and since $\mathfrak{m}_{n}$ and $\mathfrak{m}_{n}^{*}$ have the same sign, $P(f)>0\iff P(g)>0$ for all $P\in K\\{Y\\}^{\neq}$, so this is in fact an ordered valued differential field embedding, as desired. ∎ ### 5.C. The non-cofinal case In the previous subsection we assumed that $\Gamma^{<}$ was cofinal in $\Gamma_{K\langle f\rangle}^{<}$, and now we turn to the other case. In this subsection, $K$ and $L$ are pre-$H$-fields with gap $0$ and $L$ is an extension of $K$. ###### Lemma 5.7. Let $f\in L^{>}$ with $\Gamma^{<}<v_{L}(f)<0$. Suppose that $M$ is a pre-$H$-field extension of $K$ with gap $0$ and $g\in M^{>}$ satisfies $\Gamma^{<}<v_{M}(g)<0$. Then there is an embedding $K\langle f\rangle\to M$ over $K$ with $f\mapsto g$. ###### Proof. Set $f_{0}\coloneqq f$ and $f_{n+1}\coloneqq f_{n}^{\dagger}$, and let $\beta_{n}\coloneqq v_{L}(f_{n})\in\Gamma_{L}$. By [3, Lemma 9.2.10(iv)], $[\Gamma^{\neq}]>[\beta_{0}]>[\beta_{1}]>[\beta_{2}]>\cdots>[0].$ In particular, $[\beta_{n}]\notin[\Gamma]$ for all $n$ and the family $(\beta_{n})$ is $\mathbb{Q}$-linearly independent over $\Gamma$. Hence the vector space $\mathbb{Q}\Gamma_{K\langle f\rangle}/\Gamma$ is infinite dimensional, so $f$ is $\operatorname{d}$-transcendental over $K$ [3, Corollary 3.1.11]. By the same argument as in Lemma 5.4 with $f_{n}$ in place of $\mathfrak{m}_{n}$ (i.e., with $b_{n}=0$ and $a_{n}=1$), one shows that for any $P\in K\\{Y\\}^{\neq}$ of order at most $r$, we have $P(f)=\sum_{\bm{i}\in I}a_{\bm{i}}f_{0}^{i_{0}}\dots f_{r}^{i_{r}}$, where $I$ is a nonempty finite set of indices $\bm{i}=(i_{0},\dots,i_{r})\in\mathbb{N}^{1+r}$. In particular, $\Gamma_{K\langle f\rangle}=\Gamma\oplus\bigoplus_{n}\mathbb{Z}\beta_{n}$. Set $g_{0}\coloneqq g$, $g_{n+1}\coloneqq g_{n}^{\dagger}$, and $\beta_{n}^{*}\coloneqq v_{M}(g_{n})\in\Gamma_{M}$. 
The same argument yields that $g$ is $\operatorname{d}$-transcendental over $K$ and $P(g)=\sum_{\bm{i}\in I}a_{\bm{i}}g_{0}^{i_{0}}\dots g_{r}^{i_{r}}$, where $I$ is the same set of indices as in $P(f)$ and $a_{\bm{i}}$ are the same coefficients. Hence $\Gamma_{K\langle g\rangle}=\Gamma\oplus\bigoplus_{n}\mathbb{Z}\beta_{n}^{*}$. Thus we have an isomorphism of ordered abelian groups $j\colon\Gamma_{K\langle f\rangle}\to\Gamma_{K\langle g\rangle}$ with $\beta_{n}\mapsto\beta_{n}^{*}$. By the expressions for $P(f)$ and $P(g)$, $j\big{(}v_{L}(P(f))\big{)}=v_{M}(P(g))$, which yields a valued differential field embedding from $K\langle f\rangle\to M$ over $K$ with $f\mapsto g$. To see that this is an ordered valued differential field embedding, note that by [3, Lemma 10.5.2(i)], $f_{n}>0$ and $g_{n}>0$ for all $n$, so $P(f)>0\iff P(g)>0$. ∎ ## 6\. Extending the constant field ###### Assumption. In this section, $K$ is asymptotic with small derivation. Since $C\subseteq\mathcal{O}$, $C$ maps injectively into $\bm{k}$ under the residue field map, and hence into $C_{\bm{k}}$. We say that $K$ is _residue constant closed_ if $K$ is henselian and $C$ maps onto $C_{\bm{k}}$, that is, $\operatorname{res}(C)=C_{\bm{k}}$. We say that $L$ is a _residue constant closure_ of $K$ if it is a residue constant closed $H$-asymptotic extension of $K$ with small derivation that embeds uniquely over $K$ into every residue constant closed $H$-asymptotic extension $M$ of $K$ with small derivation. Note that if $K$ has a residue constant closure, then it is unique up to unique isomorphism over $K$. ###### Proposition 6.1. Suppose that $K$ is pre-$\operatorname{d}$-valued of $H$-type with $\sup\Psi=0$. Then $K$ has a residue constant closure that is an immediate extension of $K$. ###### Proof. Recall from §4 that $\sup\Psi=0$ is equivalent to $(\Gamma^{>})^{\prime}=\Gamma^{>}$. Also note that if $L$ is an immediate asymptotic extension of $K$, then it is $H$-asymptotic, satisfies $\Psi=\Psi_{L}$, so $\sup\Psi_{L}=0$, and is pre-$\operatorname{d}$-valued by [3, Corollary 10.1.17]. Build a tower of immediate asymptotic extensions of $K$ as follows. Set $K_{0}\coloneqq K$. If $K_{\lambda}$ is not henselian, set $K_{\lambda+1}\coloneqq K_{\lambda}^{\operatorname{h}}$, the henselization of $K_{\lambda}$, which as an algebraic extension of $K_{\lambda}$ is asymptotic by [3, Proposition 9.5.3]. If $K_{\lambda}$ is residue constant closed, we are done. So suppose that $K_{\lambda}$ is henselian but not residue constant closed and take $u\in K_{\lambda}$ with $u\asymp 1$, $u^{\prime}\prec 1$, and $u^{\prime}\notin\der\mathcal{O}_{K_{\lambda}}$. Let $y$ be transcendental over $K_{\lambda}$ and equip $K_{\lambda+1}\coloneqq K_{\lambda}(y)$ with the unique derivation extending that of $K_{\lambda}$ such that $y^{\prime}=u^{\prime}$. Then by [3, Lemma 10.2.5(iii)] $\\{v(u^{\prime}-a^{\prime}):a\in\mathcal{O}_{K_{\lambda}}\\}$ has no maximum, so by [3, Lemma 10.2.4] we can equip $K_{\lambda+1}$ with the unique valuation making it an $H$-asymptotic extension of $K_{\lambda}$ with $y\not\asymp 1$; with this valuation, $y\prec 1$ and $K_{\lambda+1}$ is an immediate extension of $K_{\lambda}$. If $\lambda$ is a limit ordinal, set $K_{\lambda}\coloneqq\bigcup_{\rho<\lambda}K_{\rho}$. Since each extension is immediate, by Zorn’s lemma we may take a maximal such tower $(K_{\lambda})_{\lambda\leqslant\mu}$. It is clear that $K_{\mu}$ is residue constant closed, and we show that it also has the desired universal property.
Let $M$ be an $H$-asymptotic extension of $K$ with small derivation that is residue constant closed, and let $\lambda<\mu$ and $i\colon K_{\lambda}\to M$ be an embedding. It suffices by induction to extend $i$ uniquely to an embedding $K_{\lambda+1}\to M$. If $K_{\lambda+1}=K_{\lambda}^{\operatorname{h}}$, then this follows from the universal property of the henselization. Now suppose that $K_{\lambda+1}=K_{\lambda}(y)$ with $y$ and $u$ as above. Take the unique $c\in C_{M}$ with $c\sim i(u)$ and set $z\coloneqq i(u)-c$. Then $z^{\prime}=i(u)^{\prime}$ and $z\prec 1$, so by the remarks after [3, Lemma 10.2.4], $z$ is transcendental over $i(K_{\lambda})$, and thus mapping $y\mapsto z$ yields a differential field embedding $K_{\lambda+1}\to M$ extending $i$. By the uniqueness of [3, Lemma 10.2.4], this is a valued differential field embedding. Finally, if $i$ extends to an embedding with $y\mapsto z_{1}\in M$, then $i(u)-z_{1}\in C_{M}$ and $i(u)-z_{1}\sim i(u)\sim c$, so $z_{1}=z$. ∎ Note that if $K$ is a pre-$H$-field with $\sup\Psi=0$, then as an immediate extension of $K$ any residue constant closure of $K$ embeds uniquely (as an _ordered_ valued differential field) over $K$ into every residue constant closed pre-$H$-field extension of $K$ with small derivation by [3, Lemma 10.5.8]. ###### Lemma 6.2. Suppose that $K$ is residue constant closed. Then the algebraic closure $K^{\operatorname{ac}}$ of $K$ is residue constant closed. If $K$ is additionally an ordered field with convex valuation ring, then the real closure $K^{\operatorname{rc}}$ of $K$ is residue constant closed. ###### Proof. First, note that an algebraic extension of a henselian valued field is henselian [3, Corollary 3.3.12]. Let $u\in K^{\operatorname{ac}}$ with $u\asymp 1$ and $u^{\prime}\prec 1$; we need to show that there is $c\in C_{K^{\operatorname{ac}}}=C^{\operatorname{ac}}$ with $c\sim u$. We have that $\overline{u}\in C_{\operatorname{res}(K^{\operatorname{ac}})}$ is algebraic over $\operatorname{res}(K)$, so it is algebraic over $C_{\operatorname{res}(K)}$ [3, Lemma 4.1.2]. Take a monic $P\in C[X]$, say of degree $n$, such that $\overline{P}\in C_{\operatorname{res}(K)}[X]$ is the minimum polynomial of $\overline{u}$ over $C_{\operatorname{res}(K)}$. Then $P=\prod_{i=1}^{n}(X-c_{i})$ with $c_{1},\dots,c_{n}\in C^{\operatorname{ac}}$, hence $\overline{P}=\prod_{i=1}^{n}(X-\overline{c_{i}})$, so we have $i$ with $1\leqslant i\leqslant n$ and $\overline{u}=\overline{c_{i}}$, and thus $u\sim c_{i}$. The second statement is proved similarly. ∎ ## 7\. Differential-Hensel-Liouville closures In this section we construct differential-Hensel-Liouville closures (Theorem 7.16) in analogy with the Newton-Liouville closures of [3, §14.5] and prove that they are unique (Corollary 7.18). To do this, we first construct extensions that are real closed, have exponential integration, and satisfy an embedding property (Corollary 7.11, Lemma 7.14), in analogy with the Liouville closures of [3, §10.6]; some preliminaries are adapted from [3, §10.4–10.6]. Combining this with the residue constant closures from the previous section, we then construct extensions that are residue constant closed, are real closed, have exponential integration, and satisfy an embedding property (Theorem 7.15). ###### Assumption. In this section, $K$ is a pre-$H$-field. ### 7.A. Adjoining exponential integrals Suppose that $s\in K\setminus(K^{\times})^{\dagger}$ and $f$ is transcendental over $K$. We give $K(f)$ the unique derivation extending that of $K$ with $f^{\dagger}=s$.
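Concretely (an illustrative computation that we add; it is not used later): $f^{\dagger}=s$ means $f'=sf$, so the derivation acts on monomials by
$(af^{n})'\ =\ a'f^{n}+naf^{n-1}\cdot sf\ =\ (a'+nas)f^{n}\qquad(a\in K,\ n\in\mathbb{Z}),$
and extends to all of $K(f)$ by the quotient rule; in particular, $(f^{n})^{\dagger}=ns$ for all $n$.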
In the first lemma, $K$ need only be an ordered differential field. ###### Lemma 7.1. If $K$ is real closed and $K(f)$ can be ordered making it an ordered field extension of $K$, then $C_{K(f)}=C$. ###### Proof. This follows from [3, Lemma 4.6.11 and Corollary 4.6.12]. ∎ In the next two lemmas, $K$ is just a valued differential field, and need not be ordered. The first is based on [3, Lemma 10.4.2]. ###### Lemma 7.2. Suppose that $K$ has small derivation and $\bm{k}=(\bm{k}^{\times})^{\dagger}$. Let $K(f)$ have a valuation that makes it an extension of $K$ with $\Gamma_{K(f)}=\Gamma$ and $\der\mathcal{O}_{K(f)}\subseteq\mathcal{O}_{K(f)}$. Then $s-a^{\dagger}\prec 1$ for some $a\in K^{\times}$. ###### Proof. Since $vf\in\Gamma$, there is $b\in K^{\times}$ with $g\coloneqq f/b\asymp 1$. Then $s-b^{\dagger}=g^{\dagger}\asymp g^{\prime}\preccurlyeq 1$. If $s-b^{\dagger}\prec 1$, set $a\coloneqq b$. If $s-b^{\dagger}\asymp 1$, since $\bm{k}=(\bm{k}^{\times})^{\dagger}$, we have $u\asymp 1$ in $K^{\times}$ with $s-b^{\dagger}\sim u^{\dagger}$. Then set $a\coloneqq bu$. ∎ The last part of the argument also yields the following useful fact. ###### Lemma 7.3. Suppose that $K$ has small derivation and $\bm{k}=(\bm{k}^{\times})^{\dagger}$. If $s-a^{\dagger}\succcurlyeq 1$ for all $a\in K^{\times}$, then $s-a^{\dagger}\succ 1$ for all $a\in K^{\times}$. Now we return to the situation that $K$ is a pre-$H$-field. ###### Lemma 7.4 ([3, Lemma 10.5.18]). Suppose that $K$ is henselian and $vs\in(\Gamma^{>})^{\prime}$. Then there is a unique valuation on $K(f)$ making it an $H$-asymptotic extension of $K$ with $f\sim 1$. With this valuation, $K(f)$ is an immediate extension of $K$, so there is a unique ordering of $K(f)$ making it a pre-$H$-field extension of $K$ by [3, Lemma 10.5.8]. Here is a pre-$H$-field version of [3, Lemma 10.5.20] with the same proof. ###### Lemma 7.5. Suppose that $K$ is real closed, $s<0$, and $v(s-a^{\dagger})\in\Psi^{\downarrow}$ for all $a\in K^{\times}$. Then there is a unique pair of a field ordering and a valuation on $L\coloneqq K(f)$ making it a pre-$H$-field extension of $K$ with $f>0$. Moreover, we have: 1. (i) $vf\notin\Gamma$, $\Gamma_{L}=\Gamma\oplus\mathbb{Z}vf$, $f\prec 1$; 2. (ii) $\Psi$ is cofinal in $\Psi_{L}\coloneqq\psi_{L}(\Gamma_{L}^{\neq})$; 3. (iii) a gap in $K$ remains a gap in $L$; 4. (iv) if $L$ has a gap not in $\Gamma$, then $[\Gamma_{L}]=[\Gamma]$; 5. (v) $\bm{k}_{L}=\bm{k}$. ### 7.B. Exponential integration closures Let $E$ be a differential field. We call a differential field extension $F$ of $E$ an _exponential integration extension_ of $E$ (_expint-extension_ for short) if $C_{F}$ is algebraic over $C_{E}$ and for every $a\in F$ there are $t_{1},\dots,t_{n}\in F^{\times}$ with $a\in E(t_{1},\dots,t_{n})$ such that for $i=1,\dots,n$, either $t_{i}$ is algebraic over $E(t_{1},\dots,t_{i-1})$ or $t_{i}^{\dagger}\in E(t_{1},\dots,t_{i-1})$. In particular, any expint-extension is $\operatorname{d}$-algebraic. The following is routine. ###### Lemma 7.6. Let $E\subseteq F\subseteq M$ be a chain of differential field extensions. 1. (i) If $M$ is an expint-extension of $E$, then $M$ is an expint-extension of $F$. 2. (ii) If $M$ is an expint-extension of $F$ and $F$ is an expint-extension of $E$, then $M$ is an expint-extension of $E$. Minor modifications to the proof of [3, Lemma 10.6.8] yield the following. ###### Lemma 7.7. If $F$ is an expint-extension of $E$, then $|F|=|E|$. Now suppose that $E$ is an ordered differential field.
We call $E$ _exponential integration closed_ (_expint-closed_ for short) if it is real closed and has exponential integration. We call an ordered differential field extension $F$ of $E$ an _exponential integration closure_ (_expint-closure_ for short) of $E$ if it is an expint-extension of $E$ that is expint-closed. The next observation has the same proof as [3, Lemma 10.6.9]. ###### Lemma 7.8. If $E$ is expint-closed, then $E$ has no proper expint-extension with the same constants. ###### Assumption. For the rest of this subsection, suppose that $K$ has gap $0$. From this assumption it follows that $(\Gamma^{>})^{\prime}=\Gamma^{>}$ and $\Psi^{\downarrow}=\Gamma^{<}$ (see §4). Recall from §2 how we construe the real closure of $K$ as a pre-$H$-field extension of $K$ with gap $0$. ###### Definition. We call a strictly increasing chain $(K_{\lambda})_{\lambda\leqslant\mu}$ of pre-$H$-fields with gap $0$ an _expint-tower on $K$_ if: 1. (i) $K_{0}=K$; 2. (ii) if $\lambda$ is a limit ordinal, then $K_{\lambda}=\bigcup_{\rho<\lambda}K_{\rho}$; 3. (iii) if $\lambda<\lambda+1\leqslant\mu$, then either: 1. (a) $K_{\lambda}$ is not real closed and $K_{\lambda+1}$ is the real closure of $K_{\lambda}$; or 2. (b) $K_{\lambda}$ is real closed and $K_{\lambda+1}=K_{\lambda}(y_{\lambda})$ with $y_{\lambda}\notin K_{\lambda}$ satisfying either: 1. (b1) $y_{\lambda}^{\dagger}=s_{\lambda}\in K_{\lambda}$ with $y_{\lambda}\sim 1$, $s_{\lambda}\prec 1$, and $s_{\lambda}\neq a^{\dagger}$ for all $a\in K_{\lambda}^{\times}$; or 2. (b2) $y_{\lambda}^{\dagger}=s_{\lambda}\in K_{\lambda}$ with $s_{\lambda}<0$, $y_{\lambda}>0$, and $s_{\lambda}-a^{\dagger}\succ 1$ for all $a\in K_{\lambda}^{\times}$. Given such a tower, we call $K_{\mu}$ its _top_ and set $C_{\lambda}\coloneqq C_{K_{\lambda}}$ and $\bm{k}_{\lambda}\coloneqq\bm{k}_{K_{\lambda}}$ for $\lambda\leqslant\mu$. ###### Lemma 7.9. Let $(K_{\lambda})_{\lambda\leqslant\mu}$ be an expint-tower on $K$. Then: 1. (i) $K_{\mu}$ is an expint-extension of $K$; 2. (ii) $C_{\mu}$ is the real closure of $C$ if $\mu>0$; 3. (iii) $\bm{k}_{\mu}$ is the real closure of $\bm{k}$ if $\mu>0$; 4. (iv) $|K_{\lambda}|=|K|$, and hence $\mu<|K|^{+}$. ###### Proof. For (i), go by induction on $\lambda\leqslant\mu$. The main thing to check is the condition on the constant fields. If $\lambda=0$ or $\lambda$ is a limit ordinal, this is clear. If $K_{\lambda+1}$ is the real closure of $K_{\lambda}$, then $C_{\lambda+1}$ is the real closure of $C_{\lambda}$. If $K_{\lambda}$ is real closed and $K_{\lambda+1}$ is as in (b) above, then $C_{\lambda+1}=C_{\lambda}$ by Lemma 7.1. For (ii), $C_{1}$ is the real closure of $C$, and then $C_{\lambda}=C_{1}$ for all $\lambda\geqslant 1$ as in the proof of (i). For (iii), $\bm{k}_{1}$ is the real closure of $\bm{k}$, and then $\bm{k}_{\lambda}=\bm{k}_{1}$ for all $\lambda\geqslant 1$ by the uniqueness of Lemma 7.4 and Lemma 7.5. Finally, (iv) follows from (i) and Lemma 7.7. ∎ ###### Lemma 7.10. Let $L$ be the top of a maximal expint-tower on $K$ such that $\bm{k}_{L}$ has exponential integration. Then $L$ is expint-closed, and hence an expint-closure of $K$. ###### Proof. Suppose that $L$ is not expint-closed. If $L$ is not real closed, then its real closure is a proper pre-$H$-field extension of $L$ with gap $0$, so we could extend the expint-tower. We are left with the case that $L$ is real closed and we have $s\in L\setminus(L^{\times})^{\dagger}$. In particular, $L$ is henselian and $\Gamma$ is divisible. We may assume that $s<0$.
Take $f$ transcendental over $L$ with $f^{\dagger}=s$. First suppose that $s-a^{\dagger}\prec 1$ for some $a\in L^{\times}$. Then taking such an $a$ and replacing $f$ and $s$ by $f/a$ and $s-a^{\dagger}$, we arrange that $s\prec 1$. Giving $L(f)$ the valuation and ordering from Lemma 7.4 makes it a pre-$H$-field extension of $L$ with gap $0$ of type (b1). Now suppose that $s-a^{\dagger}\succcurlyeq 1$ for all $a\in L^{\times}$. By Lemma 7.3, $s-a^{\dagger}\succ 1$ for all $a\in L^{\times}$. Then giving $L(f)$ the ordering and valuation from Lemma 7.5 makes it a pre-$H$-field extension of $L$ with gap $0$ of type (b2). In either case the maximal expint-tower could be properly extended, a contradiction. Thus $L$ is expint-closed, and hence an expint-closure of $K$ by Lemma 7.9(i). ∎ ###### Corollary 7.11. Suppose that $\bm{k}$ is expint-closed. Then $K$ has an expint-closure that is a pre-$H$-field extension of $K$ with gap $0$. If $K$ is residue constant closed, then $K$ has a residue constant closed expint-closure that is a pre-$H$-field extension of $K$ with gap $0$. ###### Proof. By Lemma 7.9(iv), Zorn’s lemma gives a maximal expint-tower $(K_{\lambda})_{\lambda\leqslant\mu}$ on $K$. Then $\bm{k}_{\mu}=\bm{k}$ by Lemma 7.9(iii), and hence $K_{\mu}$ is an expint-closure of $K$ by Lemma 7.10. If $K$ is residue constant closed, then so is $K_{\mu}$ since it is henselian and $C$ maps onto $C_{\bm{k}}=C_{\bm{k}_{\mu}}$. ∎ ###### Lemma 7.12. Let $M$ be a residue constant closed, expint-closed pre-$H$-field extension of $K$ with gap $0$. Suppose that $(K_{\lambda})_{\lambda\leqslant\mu}$ is an expint-tower on $K$ in $M$ (i.e., each $K_{\lambda}$ is a pre-$H$-subfield of $M$) and maximal in $M$ (i.e., it cannot be extended to an expint-tower $(K_{\lambda})_{\lambda\leqslant\mu+1}$ on $K$ in $M$) such that $\bm{k}_{\mu}$ has exponential integration. Then $(K_{\lambda})_{\lambda\leqslant\mu}$ is a maximal expint-tower on $K$. ###### Proof. Since $M$ is real closed, $K_{\mu}$ must be real closed by maximality in $M$. So supposing $(K_{\lambda})_{\lambda\leqslant\mu}$ is not a maximal expint-tower on $K$, we have $s_{\mu}\in K_{\mu}$ such that $s_{\mu}\neq a^{\dagger}$ for all $a\in K_{\mu}^{\times}$; we may assume that $s_{\mu}<0$. Since $M$ is expint-closed, we have $y_{\mu}\in M$ with $y_{\mu}^{\dagger}=s_{\mu}$; we may assume that $y_{\mu}>0$. First suppose that $s_{\mu}-a^{\dagger}\succcurlyeq 1$ for all $a\in K_{\mu}^{\times}$, so actually $s_{\mu}-a^{\dagger}\succ 1$ for all $a\in K_{\mu}^{\times}$ by Lemma 7.3. Thus setting $K_{\mu+1}\coloneqq K_{\mu}(y_{\mu})$ yields an extension of $(K_{\lambda})_{\lambda\leqslant\mu}$ in $M$ of type (b2). Now suppose that $s_{\mu}-a^{\dagger}\prec 1$ for some $a\in K_{\mu}^{\times}$. Taking such an $a$ and replacing $s_{\mu}$ and $y_{\mu}$ by $s_{\mu}-a^{\dagger}$ and $y_{\mu}/a$, we may assume that $s_{\mu}\prec 1$. Since $M$ has gap $0$, we have $y_{\mu}\asymp 1$ and so $y_{\mu}^{\prime}\asymp s_{\mu}\prec 1$. That is, $\overline{y_{\mu}}\in C_{\operatorname{res}(M)}$, so we have $c\in C_{M}$ with $y_{\mu}\sim c$. Replacing $y_{\mu}$ by $y_{\mu}/c$, we obtain the desired extension of $(K_{\lambda})_{\lambda\leqslant\mu}$ in $M$ of type (b1). ∎ This comment is not used later, but in the above lemma, we can replace the assumption that $M$ is residue constant closed (so $C_{\operatorname{res}(M)}=\operatorname{res}(C_{M})$) with $C_{\operatorname{res}(M)}=C_{\operatorname{res}(K)}$. In the final argument, instead of $c\in C_{M}$ we have $u\in K$ with $u\asymp 1$ and $u^{\prime}\prec 1$, so we also replace $s_{\mu}$ with $s_{\mu}-u^{\dagger}$.
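The normalizations in the proofs of Lemma 7.10 and Lemma 7.12 (replacing $y$ by $y/a$ or $y/c$) rest on the usual logarithmic-derivative identities, which we record for convenience (an aside we add; both are immediate from the product rule):
$(ya)^{\dagger}\ =\ y^{\dagger}+a^{\dagger}\qquad\text{and}\qquad(y/a)^{\dagger}\ =\ y^{\dagger}-a^{\dagger}\qquad(y,a\neq 0).$
Thus replacing $y$ by $y/a$ changes $y^{\dagger}=s$ to $s-a^{\dagger}$, while dividing by a constant $c$ leaves $y^{\dagger}$ unchanged, as $c^{\dagger}=c'/c=0$.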
###### Corollary 7.13. Suppose that $L$ is an expint-closed pre-$H$-field extension of $K$. 1. (i) If $L$ is an expint-closure of $K$, then no proper differential subfield of $L$ containing $K$ is expint-closed. 2. (ii) Suppose that $\bm{k}$ is expint-closed, and that $L$ has gap $0$ and is residue constant closed. If no proper differential subfield of $L$ containing $K$ is expint-closed, then $L$ is an expint-closure of $K$. ###### Proof. For (i), if $L$ is an expint-closure of $K$, then no proper differential subfield of $L$ containing $K$ is expint-closed by Lemmas 7.6 and 7.8. For (ii), suppose that no proper differential subfield of $L$ containing $K$ is expint-closed. Take an expint-tower $(K_{\lambda})_{\lambda\leqslant\mu}$ on $K$ in $L$ that is maximal in $L$. Since $\bm{k}$ is real closed, $\bm{k}_{\mu}=\bm{k}$, and hence $\bm{k}_{\mu}$ has exponential integration. Then $(K_{\lambda})_{\lambda\leqslant\mu}$ is a maximal expint-tower on $K$ by Lemma 7.12. By Lemma 7.10, $K_{\mu}$ is expint-closed and hence equal to $L$. ∎ ###### Lemma 7.14. Let $(K_{\lambda})_{\lambda\leqslant\mu}$ be an expint-tower on $K$. Then any embedding of $K$ into a residue constant closed, expint-closed pre-$H$-field extension $M$ of $K$ with gap $0$ extends to an embedding of $K_{\mu}$. If $K$ is residue constant closed and $\bm{k}$ is expint-closed, then any two residue constant closed expint-closures of $K$ that are pre-$H$-field extensions of $K$ with gap $0$ are isomorphic over $K$. ###### Proof. Let $M$ be a residue constant closed, expint-closed pre-$H$-field with gap $0$. We prove that for $\lambda<\mu$ any embedding $K_{\lambda}\to M$ extends to an embedding $K_{\lambda+1}\to M$, which yields the result by induction. Suppose that $i\colon K_{\lambda}\to M$ is an embedding. If $K_{\lambda+1}$ is the real closure of $K_{\lambda}$, then we may extend $i$ to $K_{\lambda+1}$. So suppose that $K_{\lambda}$ is real closed and we have $s_{\lambda}\in K_{\lambda}$ and $y_{\lambda}\in K_{\lambda+1}\setminus K_{\lambda}$ with $K_{\lambda+1}=K_{\lambda}(y_{\lambda})$, $y_{\lambda}^{\dagger}=s_{\lambda}$, $y_{\lambda}\sim 1$, $s_{\lambda}\prec 1$, and $s_{\lambda}\neq a^{\dagger}$ for all $a\in K_{\lambda}^{\times}$. Take $z\in M$ with $z^{\dagger}=i(s_{\lambda})$. Hence $z\asymp 1$ and $\overline{z}\in C_{\operatorname{res}(M)}$, so we have $c\in C_{M}$ with $z\sim c$. By the uniqueness of Lemma 7.4, we may extend $i$ to an embedding of $K_{\lambda}(y_{\lambda})$ into $M$ sending $y_{\lambda}$ to $z/c$. Now suppose that $K_{\lambda}$ is real closed and we have $s_{\lambda}\in K_{\lambda}$ and $y_{\lambda}\in K_{\lambda+1}\setminus K_{\lambda}$ with $K_{\lambda+1}=K_{\lambda}(y_{\lambda})$, $y_{\lambda}^{\dagger}=s_{\lambda}$, $s_{\lambda}<0$, $y_{\lambda}>0$, and $s_{\lambda}-a^{\dagger}\succ 1$ for all $a\in K^{\times}_{\lambda}$. Take $z\in M$ with $z^{\dagger}=i(s_{\lambda})$; we may assume that $z>0$. Then by the uniqueness of Lemma 7.5, we can extend $i$ to an embedding of $K_{\lambda}(y_{\lambda})$ into $M$ sending $y_{\lambda}$ to $z$. The second statement follows from the first by Corollary 7.13(i) and the proof of Corollary 7.11. ∎ Combining these results with Proposition 6.1 yields the following. ###### Theorem 7.15. Suppose that $\bm{k}$ is expint-closed. Then $K$ has a pre-$H$-field extension $L$ with gap $0$ such that: 1. (i) $L$ is a residue constant closed, expint-closed extension of $K$; 2.
(ii) $L$ embeds over $K$ into any residue constant closed, expint-closed pre-$H$-field extension of $K$ with gap $0$; 3. (iii) $L$ has no proper differential subfield containing $K$ that is residue constant closed and expint-closed. ###### Proof. By Proposition 6.1, let $K_{0}$ be the residue constant closure of $K$. Taking the top of a maximal expint-tower on $K_{0}$ yields an expint-closure $L$ of $K_{0}$ that is residue constant closed as in Corollary 7.11. For (ii), let $M$ be a pre-$H$-field extension of $K$ with gap $0$ that is residue constant closed and expint-closed. Then $K_{0}$ embeds uniquely into $M$ over $K$, so by Lemma 7.14 we can extend this to an embedding of $L$. For (iii), suppose that $L_{0}\supseteq K$ is a differential subfield of $L$ that is residue constant closed and expint-closed. Then $L_{0}\supseteq K_{0}$, and hence $L_{0}=L$ by Corollary 7.13(i). ∎ Let $L$ be as above. Then any pre-$H$-field extension of $K$ with gap $0$ satisfying (i) and (ii) is isomorphic to $L$ over $K$ by (iii). Also, $L$ is a Liouville extension of $K$ in the sense of [3, §10.6] (the definition is similar to that of expint-extensions in §7.B, except that we also allow $t_{i}^{\prime}\in E(t_{1},\dots,t_{i-1})$), since $K_{0}$ is a Liouville extension of $K$ by construction and expint-extensions are Liouville extensions. ### 7.C. Differential-Hensel-Liouville closures ###### Assumption. We continue to assume in this subsection that the pre-$H$-field $K$ has gap $0$. ###### Definition. We call $K$ _differential-Hensel-Liouville closed_ (slightly shorter: _$\operatorname{d}$-Hensel-Liouville closed_) if it is $\operatorname{d}$-henselian and expint-closed. We call a pre-$H$-field extension $L$ of $K$ a _differential-Hensel-Liouville closure_ (slightly shorter: _$\operatorname{d}$-Hensel-Liouville closure_) of $K$ if it is $\operatorname{d}$-Hensel-Liouville closed and embeds over $K$ into every $\operatorname{d}$-Hensel-Liouville closed pre-$H$-field extension of $K$. Note that, if $K$ is $\operatorname{d}$-henselian, then $K$ is also closed under integration in the sense that $\der$ is surjective by [3, Lemma 7.1.8], hence the use of “Liouville” in the terms just defined. To build $\operatorname{d}$-Hensel-Liouville closures, we use the fact that if $F$ is an asymptotic valued differential field with small derivation and linearly surjective differential residue field, then it has a (unique) _differential-henselization_ (_$\operatorname{d}$-henselization_ for short) $F^{\operatorname{dh}}$ by [7, Theorem 3.7]. For such $F$, $F^{\operatorname{dh}}$ is an immediate asymptotic $\operatorname{d}$-algebraic extension of $F$ that embeds over $F$ into every $\operatorname{d}$-henselian asymptotic extension of $F$; if $F$ is a pre-$H$-field, then $F^{\operatorname{dh}}$ is too and embeds (as an _ordered_ valued differential field) into every $\operatorname{d}$-henselian pre-$H$-field extension of $F$ by [3, Lemma 10.5.8]. ###### Theorem 7.16. Suppose that $\bm{k}$ is expint-closed and linearly surjective. Then $K$ has a $\operatorname{d}$-Hensel-Liouville closure $K^{\operatorname{dhl}}$. ###### Proof. We use below that any $\operatorname{d}$-henselian asymptotic field is residue constant closed by [3, Lemma 9.4.10]. Define a sequence of pre-$H$-field extensions of $K$ with gap $0$ as follows. Set $K_{0}\coloneqq K$.
For $n\geqslant 1$, if $n$ is odd, let $K_{n}$ be the $\operatorname{d}$-henselization of $K_{n-1}$, and if $n$ is even, let $K_{n}$ be the expint-closure of $K_{n-1}$ from Corollary 7.11. Note that $\bm{k}_{K_{n}}=\bm{k}$ for all $n$. We set $K^{\operatorname{dhl}}\coloneqq\bigcup_{n}K_{n}$ and show that $K^{\operatorname{dhl}}$ is a $\operatorname{d}$-Hensel-Liouville closure of $K$. Let $L$ be a pre-$H$-field extension of $K$ that is $\operatorname{d}$-henselian and expint-closed. We show by induction on $n$ that we can extend any embedding $K_{n}\to L$ to an embedding $K_{n+1}\to L$, so suppose that we have an embedding $i\colon K_{n}\to L$. If $n$ is even, then $K_{n+1}$ is the $\operatorname{d}$-henselization of $K_{n}$, so we may extend $i$ to an embedding $K_{n+1}\to L$. If $n$ is odd, then $K_{n}$ is $\operatorname{d}$-henselian and $K_{n+1}$ is the expint-closure of $K_{n}$, so we can extend $i$ to an embedding $K_{n+1}\to L$ by Lemma 7.14. ∎ Note that $K^{\operatorname{dhl}}$ is a $\operatorname{d}$-algebraic extension of $K$ with the same residue field. In the next two results, adapted from [3, §16.2], we show that $K^{\operatorname{dhl}}$ is, up to isomorphism over $K$, the unique $\operatorname{d}$-Hensel-Liouville closure of $K$. ###### Lemma 7.17. Suppose that $\bm{k}$ is expint-closed and linearly surjective. Let $i\colon K^{\operatorname{dhl}}\to L$ be an embedding into a pre-$H$-field $L$ with gap $0$ such that $\operatorname{res}\big{(}i(K^{\operatorname{dhl}})\big{)}=\operatorname{res}(L)$. Then $i(K^{\operatorname{dhl}})\ =\ i(K)^{\operatorname{dalg}}\ \coloneqq\ \\{f\in L:f\ \text{is $\operatorname{d}$-algebraic over}\ i(K)\\}.$ ###### Proof. We have $i(K^{\operatorname{dhl}})\subseteq i(K)^{\operatorname{dalg}}$ since $K^{\operatorname{dhl}}$ is a $\operatorname{d}$-algebraic extension of $K$. For the other direction, note that $i(K^{\operatorname{dhl}})$ is a $\operatorname{d}$-henselian, expint-closed pre-$H$-subfield of $i(K)^{\operatorname{dalg}}$, so $i(K^{\operatorname{dhl}})=i(K)^{\operatorname{dalg}}$ by Theorem 5.2. ∎ Hence for $K$ as in the lemma above, any $\operatorname{d}$-algebraic extension of $K$ that is a $\operatorname{d}$-henselian, expint-closed pre-$H$-field with the same residue field as $K$ is isomorphic to $K^{\operatorname{dhl}}$ over $K$, and is thus a $\operatorname{d}$-Hensel-Liouville closure of $K$. ###### Corollary 7.18. Suppose that $\bm{k}$ is expint-closed and linearly surjective. Then $K^{\operatorname{dhl}}$ has no proper differential subfield containing $K$ that is $\operatorname{d}$-Hensel-Liouville closed. Thus any $\operatorname{d}$-Hensel-Liouville closure of $K$ is isomorphic to $K^{\operatorname{dhl}}$ over $K$. ###### Proof. If $L\supseteq K$ is a $\operatorname{d}$-Hensel-Liouville closed differential subfield of $K^{\operatorname{dhl}}$, then we have an embedding $i\colon K^{\operatorname{dhl}}\to L$ over $K$. Viewing this as an embedding into $K^{\operatorname{dhl}}$, by Lemma 7.17 we have $K^{\operatorname{dhl}}=i(K^{\operatorname{dhl}})$, so $K^{\operatorname{dhl}}=L$. ∎ ## 8\. Main results ### 8.A. Quantifier elimination We now turn to the proof of quantifier elimination. Recall from [12] the theory of closed ordered differential fields, which has quantifier elimination and is the model completion of the theory of ordered differential fields (where no assumption is made on the interaction between the ordering and the derivation).
Now let $T^{\operatorname{dhl}}_{\operatorname{codf}}$ be the theory, in the language $\\{+,-,\cdot,0,1,\der,\preccurlyeq,\leqslant\\}$ of ordered valued differential fields, of $\operatorname{d}$-Hensel-Liouville closed pre-$H$-fields that have closed ordered differential residue field. ###### Assumption. In this section, $K$ and $L$ are pre-$H$-fields with small derivation. In the next results, for an ordered set $S$ we denote the cofinality of $S$ by $\operatorname{cf}(S)$. Recall also for _Case 2_ the discussion of pc-sequences in §2. ###### Lemma 8.1. Suppose that $K$ is $\operatorname{d}$-Hensel-Liouville closed, and let $E$ be a pre-$H$-subfield of $K$ with $\bm{k}_{E}=\bm{k}$. Suppose that $L$ is $\operatorname{d}$-Hensel-Liouville closed. Assume that $L$ is $|K|^{+}$-saturated as an ordered set and $\operatorname{cf}(\Gamma_{L}^{<})>|\Gamma|$. Then any embedding $E\to L$ can be extended to an embedding $K\to L$. ###### Proof. Let $i\colon E\to L$ be an embedding. We may assume that $E\neq K$. It suffices to show that $i$ can be extended to an embedding $F\to L$ for some pre-$H$-subfield $F$ of $K$ properly containing $E$. First, suppose that $\Gamma_{E}^{<}$ is not cofinal in $\Gamma^{<}$ and let $f\in K^{>}$ with $\Gamma_{E}^{<}<vf<0$. By the cofinality assumption on $\Gamma_{L}^{<}$, take $g\in L^{>}$ with $\Gamma_{i(E)}^{<}<v_{L}(g)<0$. Then we extend $i$ to an embedding $E\langle f\rangle\to L$ sending $f\mapsto g$ by Lemma 5.7. Now suppose that $\Gamma_{E}^{<}$ is cofinal in $\Gamma^{<}$ and consider the following three cases. _Case 1: $E$ is not $\operatorname{d}$-Hensel-Liouville closed._ From the assumptions on $K$, we get that $\bm{k}$ is expint-closed and linearly surjective. Since $\bm{k}_{E}=\bm{k}$, we may extend $i$ to an embedding of the $\operatorname{d}$-Hensel-Liouville closure of $E$ into $L$ by Theorem 7.16. _Case 2: $E$ is $\operatorname{d}$-Hensel-Liouville closed and $E\langle y\rangle$ is an immediate extension of $E$ for some $y\in K\setminus E$._ Take such a $y$ and let $(a_{\rho})$ be a divergent pc-sequence in $E$ with $a_{\rho}\rightsquigarrow y$. Since $E$ is $\operatorname{d}$-henselian, it is $\operatorname{d}$-algebraically maximal by [7, Theorem 3.6], and so $(a_{\rho})$ is of $\operatorname{d}$-transcendental type over $E$. By the saturation assumption on $L$ and [3, Lemma 2.4.2], we have $z\in L$ with $i(a_{\rho})\rightsquigarrow z$. Then [3, Lemma 6.9.1] yields a valued differential field embedding $E\langle y\rangle\to L$ sending $y\mapsto z$; by [3, Lemma 10.5.8], this is also an ordered field embedding. _Case 3: $E$ is $\operatorname{d}$-Hensel-Liouville closed and there is no $y\in K\setminus E$ with $E\langle y\rangle$ an immediate extension of $E$._ Take any $f\in K\setminus E$. By saturation, take $g\in L$ such that for all $a\in E$, we have $a<f\implies i(a)<g\qquad\text{and}\qquad f<a\implies g<i(a).$ Then we can extend $i$ to an embedding $E\langle f\rangle\to L$ with $f\mapsto g$ by Proposition 5.6. ∎ ###### Theorem 8.2. The theory $T^{\operatorname{dhl}}_{\operatorname{codf}}$ has quantifier elimination. ###### Proof. Suppose that $K$ and $L$ are $\operatorname{d}$-Hensel-Liouville closed and have closed ordered differential residue fields. Suppose further that $L$ is $|K|^{+}$-saturated as an ordered set, $\operatorname{cf}(\Gamma_{L}^{<})>|\Gamma|$, and $\bm{k}_{L}$ is $|\bm{k}|^{+}$-saturated as an ordered differential field.
Let $E$ be a substructure of $K$, so $E$ is a differential subring of $K$ with the induced dominance relation and ordering. By a standard quantifier elimination test (see for example [3, Corollary B.11.9]), it suffices to show that any embedding $i\colon E\to L$ can be extended to an embedding $K\to L$, so let $i\colon E\to L$ be an embedding. By extending $i$ to the fraction field of $E$, we may assume that $E$ is a field. The embedding $i$ induces an embedding $i_{\operatorname{res}}\colon\bm{k}_{E}\to\bm{k}_{L}$ of ordered differential fields. Since $\bm{k}_{L}$ is $|\bm{k}|^{+}$-saturated, by quantifier elimination for closed ordered differential fields [12] we may extend $i_{\operatorname{res}}$ to an embedding $\bm{k}\to\bm{k}_{L}$. By Corollary 3.4, we can now extend $i$ to an embedding $F\to L$ for a differential subfield $F$ of $K$ with $\bm{k}_{F}=\bm{k}$. It remains to apply Lemma 8.1. ∎ ###### Lemma 8.3. Every pre-$H$-field with gap $0$ can be extended to a $\operatorname{d}$-Hensel-Liouville closed pre-$H$-field with closed ordered differential residue field. ###### Proof. Suppose we have a pre-$H$-field $K_{0}$ with gap $0$. We first extend its residue field to a closed ordered differential field, since the theory of closed ordered differential fields is the model completion of the theory of ordered differential fields, and apply Corollary 3.4 to obtain a pre-$H$-field extension $K_{1}$ of $K_{0}$ with gap $0$ whose residue field is a closed ordered differential field. It follows from their definition that closed ordered differential fields are expint-closed and linearly surjective, so we can extend $K_{1}$ to a pre-$H$-field $K_{2}$ with the same residue field that is $\operatorname{d}$-Hensel-Liouville closed by Theorem 7.16. ∎ ###### Corollary 8.4. The theory $T^{\operatorname{dhl}}_{\operatorname{codf}}$ is the model completion of the theory of pre-$H$-fields with gap $0$. ###### Proof. This follows from Theorem 8.2 and Lemma 8.3 by a standard model-theoretic fact (see for example [3, Corollary B.11.6]). ∎ ###### Corollary 8.5. The theory $T^{\operatorname{dhl}}_{\operatorname{codf}}$ is complete, and hence decidable. ###### Proof. The structure $(\mathbb{Z};+,-,\cdot,0,1,\der_{0},\preccurlyeq_{0},\leqslant)$, where $\der_{0}$ is the trivial derivation ($\der_{0}(\mathbb{Z})=\\{0\\}$) and $\preccurlyeq_{0}$ is the trivial dominance relation ($k\preccurlyeq_{0}l$ for all $k,l\in\mathbb{Z}^{\neq}$), embeds into every model of $T^{\operatorname{dhl}}_{\operatorname{codf}}$, so the theory is complete (see for example [3, Corollary B.11.7]). Decidability then follows from the recursive axiomatization of $T^{\operatorname{dhl}}_{\operatorname{codf}}$ (see for example [3, Corollary B.6.9]). ∎ We now use quantifier elimination to show that $T^{\operatorname{dhl}}_{\operatorname{codf}}$ is distal, a notion of model-theoretic tameness introduced by P. Simon to isolate those NIP theories that are “purely unstable” [11]. The definition used here, one of several equivalent formulations, is in terms of indiscernible sequences; first, some conventions. We use the term “indiscernible” to mean “indiscernible over $\emptyset$,” and if $B$ is a parameter set, “$B$-indiscernible” to mean “indiscernible over $B$.” If $I_{0}$ and $I_{1}$ are linearly ordered sets, then $I_{0}+I_{1}$ denotes their natural concatenation, $I_{0}$ followed by $I_{1}$. The singleton $\\{\ell\\}$ viewed as a linearly ordered set is denoted by $(\ell)$. ###### Definition. Let $T$ be a complete theory in a language $\mathcal{L}$.
Then $T$ is _distal_ if in every model $\bm{M}$ of $T$, for any $B\subseteq M$ and infinite linearly ordered sets $I_{0}$, $I_{1}$, whenever 1. (i) $(a_{i})_{i\in I_{0}+(\ell)+I_{1}}$ is indiscernible and 2. (ii) $(a_{i})_{i\in I_{0}+I_{1}}$ is $B$-indiscernible, $(a_{i})_{i\in I_{0}+(\ell)+I_{1}}$ is also $B$-indiscernible. Recall that the theory RCVF of real closed fields with a nontrivial valuation whose valuation ring is convex, in the language $\\{+,-,\cdot,0,1,\preccurlyeq,\leqslant\\}$, is distal: It is weakly o-minimal by quantifier elimination [5], hence dp-minimal by [6, Corollary 4.3], and thus distal by [11, Lemma 2.10]. We reduce Theorem 8.6 to the distality of RCVF by “forgetting” the derivation. ###### Theorem 8.6. The theory $T^{\operatorname{dhl}}_{\operatorname{codf}}$ is distal, and hence has NIP. ###### Proof. Let $K$ be a $\operatorname{d}$-Hensel-Liouville closed pre-$H$-field with closed ordered differential residue field. Let $I_{0}$ and $I_{1}$ be infinite linearly ordered sets and $B\subseteq K$. Suppose that $(a_{i})_{i\in I_{0}+(\ell)+I_{1}}$ is indiscernible with $a_{i}\in K^{d}$, $d\geqslant 1$, and $(a_{i})_{i\in I_{0}+I_{1}}$ is $B$-indiscernible. Let $\varphi(x_{1},\dots,x_{k},y)$, $k\in\mathbb{N}$, be a formula with $|x_{1}|=\dots=|x_{k}|=m$ and $|y|=n$. We need to show that for all $b\in B^{n}$ and $i_{1},\dots,i_{k},j_{1},\dots,j_{k}\in I_{0}+(\ell)+I_{1}$ with $i_{1}<\dots<i_{k}$ and $j_{1}<\dots<j_{k}$, $K\models\varphi(a_{i_{1}},\dots,a_{i_{k}},b)\leftrightarrow\varphi(a_{j_{1}},\dots,a_{j_{k}},b).$ For simplicity of notation, we assume that $d=k=m=n=1$. By quantifier elimination, any formula in the variable $z=(z_{1},\dots,z_{t})$, $t\in\mathbb{N}$, is equivalent in $K$ to a boolean combination of formulas of one of the following forms: $F(z)=0,\quad F(z)>0,\quad F(z)\preccurlyeq G(z),\qquad\text{where}\ F,G\in\mathbb{Z}\\{Z_{1},\dots,Z_{t}\\}.$ In particular, there is a formula $\psi(x_{0},\dots,x_{r},y_{0},\dots,y_{r})$ in the language $\mathcal{L}_{\textnormal{OR},\preccurlyeq}\coloneqq\\{+,-,\cdot,0,1,\leqslant,\preccurlyeq\\}$ such that, for all $a,b\in K$, $K\models\varphi(a,b)\leftrightarrow\psi(a,a^{\prime},\dots,a^{(r)},b,b^{\prime},\dots,b^{(r)}).$ Let $B_{r}\coloneqq B\cup\der(B)\cup\dots\cup\der^{r}(B)$. With respect to $\mathcal{L}_{\textnormal{OR},\preccurlyeq}$, the sequences $(a_{i},a_{i}^{\prime},\dots,a_{i}^{(r)})_{i\in I_{0}+(\ell)+I_{1}}$ and $(a_{i},a_{i}^{\prime},\dots,a_{i}^{(r)})_{i\in I_{0}+I_{1}}$ are indiscernible and $B_{r}$-indiscernible, respectively. As a structure in this language, $K\models\textnormal{RCVF}$, and RCVF is distal. Thus for any $b\in B$ and $i,j\in I_{0}+(\ell)+I_{1}$, $K\models\psi(a_{i},a_{i}^{\prime},\dots,a_{i}^{(r)},b,b^{\prime},\dots,b^{(r)})\leftrightarrow\psi(a_{j},a_{j}^{\prime},\dots,a_{j}^{(r)},b,b^{\prime},\dots,b^{(r)}).\qed$ Another consequence of quantifier elimination is that this theory is o-minimal at infinity in the following sense, from which it follows that it is locally o-minimal (see [13]) by taking fractional linear transformations. ###### Corollary 8.7. Suppose that $K$ is a $\operatorname{d}$-Hensel-Liouville closed pre-$H$-field with closed ordered differential residue field, and let $X\subseteq K$ be definable with parameters. Then there exists $a\in K$ such that either $(a,+\infty)\subseteq X$ or $(a,+\infty)\cap X=\emptyset$. ###### Proof.
By quantifier elimination and a standard model-theoretic argument, it suffices to show that for every elementary extension $L$ of $K$ with $a,b\in L$ satisfying $a,b>K$, there is an isomorphism $K\langle a\rangle\to K\langle b\rangle$ over $K$ with $a\mapsto b$, which follows easily from the proofs of [3, Lemmas 16.6.9 and 16.6.10]. ∎ ### 8.B. Quantifier elimination with extra structure on the residue field In this final subsection, we consider a theory of pre-$H$-fields with gap $0$ where extra structure is allowed on the residue field and prove quantifier elimination and model companion results similar to those in the previous subsection. We use this two-sorted quantifier elimination to deduce an Ax–Kochen/Ershov principle, namely that the theory of a $\operatorname{d}$-Hensel-Liouville closed pre-$H$-field is determined by the theory of its ordered differential residue field. We suspend here the convention that $\bm{k}$ is the residue field of $K$, likewise with $\bm{k}_{L}$, etc. Now consider the two-sorted structure $(K,\bm{k};\pi)$, where the language on the sort of $K$ is $\\{+,-,\cdot,0,1,\der,\preccurlyeq,\leqslant\\}$, the language on the sort of $\bm{k}$ is $\mathcal{L}_{\operatorname{res}}\supseteq\\{+,-,\cdot,0,1,\der,\leqslant\\}$, and $\pi$ is a map $\pi\colon K\to\bm{k}$; call this language $\mathcal{L}$. We fix an $\mathcal{L}_{\operatorname{res}}$-theory $T_{\operatorname{res}}$ of ordered differential fields and let $T^{\operatorname{gap}}$ be the $\mathcal{L}$-theory whose models are structures $(K,\bm{k};\pi)$ satisfying: 1. (i) $K$ is a pre-$H$-field with gap $0$; 2. (ii) $\bm{k}\models T_{\operatorname{res}}$; 3. (iii) $\pi|_{\mathcal{O}}$ is a surjective ordered differential ring homomorphism with kernel $\cao$ and $\pi(K\setminus\mathcal{O})=\\{0\\}$. Thus $\pi$ induces an isomorphism of ordered differential fields $\operatorname{res}(K)\cong\bm{k}$; conversely, an isomorphism $\operatorname{res}(K)\cong\bm{k}$ lifts to a surjective ordered differential ring homomorphism $\mathcal{O}\to\bm{k}$ with kernel $\cao$. Suppose $T_{\operatorname{res}}^{\operatorname{dhl}}$ is an $\mathcal{L}_{\operatorname{res}}$-theory extending the theory $T^{\operatorname{expint},\operatorname{ls}}_{\operatorname{res}}$ of ordered differential fields that are expint-closed and linearly surjective; these conditions are necessary if $T_{\operatorname{res}}^{\operatorname{dhl}}$ is to be the theory of a residue field of a $\operatorname{d}$-Hensel-Liouville closed pre-$H$-field. Let $T^{\operatorname{dhl}}$ be the $\mathcal{L}$-theory whose models are structures $(K,\bm{k};\pi)$ satisfying: 1. (i) $K$ is a $\operatorname{d}$-Hensel-Liouville closed pre-$H$-field; 2. (ii) $\bm{k}\models T_{\operatorname{res}}^{\operatorname{dhl}}$; 3. (iii) $\pi\colon K\to\bm{k}$ is as in $T^{\operatorname{gap}}$. To obtain quantifier elimination we need to expand $\mathcal{L}$ by a unary function symbol $\iota$ on the sort of $K$, interpreted as multiplicative inversion on $K^{\times}$ and $\iota(0)=0$; set $\mathcal{L}^{\iota}\coloneqq\mathcal{L}\cup\\{\iota\\}$. ###### Theorem 8.8. If $T^{\operatorname{dhl}}_{\operatorname{res}}$ has quantifier elimination, then so does $T^{\operatorname{dhl}}$ in $\mathcal{L}^{\iota}$. If $T^{\operatorname{dhl}}_{\operatorname{res}}$ is model complete, then so is $T^{\operatorname{dhl}}$ in $\mathcal{L}$. ###### Proof.
Suppose that $T^{\operatorname{dhl}}_{\operatorname{res}}$ has quantifier elimination, and let $(K,\bm{k};\pi)$ and $(L,\bm{k}_{L};\pi_{L})$ be models of $T^{\operatorname{dhl}}$ such that $(L,\bm{k}_{L};\pi_{L})$ is $|K|^{+}$-saturated. Let $(E,\bm{k}_{E};\pi_{E})$ be an $\mathcal{L}^{\iota}$-substructure of $(K,\bm{k};\pi)$ and $i\colon(E,\bm{k}_{E};\pi_{E})\to(L,\bm{k}_{L};\pi_{L})$ be an embedding. Thus $E$ is a pre-$H$-subfield of $K$ and $\bm{k}_{E}$ is an $\mathcal{L}_{\operatorname{res}}$-substructure of $\bm{k}$, with $\pi_{E}=\pi|_{E}\colon E\to\bm{k}_{E}$ a (not necessarily surjective) ordered differential ring homomorphism on $\mathcal{O}_{E}$. It suffices to extend $i$ to an embedding $(K,\bm{k};\pi)\to(L,\bm{k}_{L};\pi_{L})$. Let $i_{\operatorname{res}}\colon\bm{k}_{E}\to\bm{k}_{L}$ be the restriction of $i$ to $\bm{k}_{E}$. By quantifier elimination for $T^{\operatorname{dhl}}_{\operatorname{res}}$ and the $|\bm{k}|^{+}$-saturation of $\bm{k}_{L}$, we extend $i_{\operatorname{res}}$ to an embedding $i^{*}_{\operatorname{res}}\colon\bm{k}\to\bm{k}_{L}$, which, by pulling back $i_{\operatorname{res}}^{*}$ via the ordered differential field isomorphisms induced by $\pi$ and $\pi_{L}$, yields an embedding $\operatorname{res}(K)\to\operatorname{res}(L)$. Applying Corollary 3.4 with $\operatorname{res}(K)$ instead of $\bm{k}_{L}$ gives a pre-$H$-subfield $F$ of $K$ that extends $E$, has ordered differential residue field $\operatorname{res}(F)=\operatorname{res}(K)$, and such that the embedding $\operatorname{res}(K)\to\operatorname{res}(L)$ is induced by an embedding $F\to L$ extending $i|_{E}$. Now by Lemma 8.1, this embedding extends further to an embedding $j\colon K\to L$. Then the map $i^{*}$ that is $j$ on $K$ and $i_{\operatorname{res}}^{*}$ on $\bm{k}$ is an embedding $(K,\bm{k};\pi)\to(L,\bm{k}_{L};\pi_{L})$ extending $i$. The second statement is proved similarly. ∎ ###### Lemma 8.9. Suppose that every model of $T_{\operatorname{res}}$ can be extended to a model of $T_{\operatorname{res}}^{\operatorname{dhl}}$. Then every model of $T^{\operatorname{gap}}$ can be extended to a model of $T^{\operatorname{dhl}}$. ###### Proof. Let $(K,\bm{k};\pi)\models T^{\operatorname{gap}}$, and extend $\bm{k}$ to a model $\bm{k}^{*}$ of $T_{\operatorname{res}}^{\operatorname{dhl}}$. Let $\bm{k}_{L}$ be an ordered differential field extension of $\operatorname{res}(K)$ such that we have an isomorphism $i\colon\bm{k}_{L}\to\bm{k}^{*}$ of ordered differential fields extending the isomorphism $\operatorname{res}(K)\cong\bm{k}$ induced by $\pi$. Then by applying Corollary 3.4 with $\operatorname{res}(K)$ instead of $\bm{k}$, we obtain a pre-$H$-field extension $L$ of $K$ with gap $0$ that has ordered differential residue field isomorphic to $\bm{k}_{L}$ over $\operatorname{res}(K)$. By composing this isomorphism with $i$, we may assume that $i$ is an isomorphism $i\colon\operatorname{res}(L)\to\bm{k}^{*}$. By Theorem 7.16, we extend $L$ to its $\operatorname{d}$-Hensel-Liouville closure $L^{\operatorname{dhl}}$ with residue field $\operatorname{res}(L^{\operatorname{dhl}})=\operatorname{res}(L)$. Defining $\pi^{*}\colon L^{\operatorname{dhl}}\to\bm{k}^{*}$ by $\pi^{*}(f)\coloneqq i(\operatorname{res}{f})$ for $f\in\mathcal{O}_{L^{\operatorname{dhl}}}$ and $\pi^{*}(f)=0$ otherwise, we obtain a model $(L^{\operatorname{dhl}},\bm{k}^{*};\pi^{*})$ of $T^{\operatorname{dhl}}$ extending $(K,\bm{k};\pi)$. ∎ ###### Corollary 8.10. 
If $T_{\operatorname{res}}^{\operatorname{dhl}}$ is the model companion of $T_{\operatorname{res}}$, then $T^{\operatorname{dhl}}$ is the model companion of $T^{\operatorname{gap}}$. ###### Theorem 8.11. Suppose that $K_{1}$ and $K_{2}$ are $\operatorname{d}$-Hensel-Liouville closed pre-$H$-fields and let $\bm{k}_{1}$ and $\bm{k}_{2}$ be their ordered differential residue fields. Then $K_{1}\equiv K_{2}\ \iff\ \bm{k}_{1}\equiv\bm{k}_{2}.$ ###### Proof. The left-to-right direction is obvious, so suppose that $\bm{k}_{1}\equiv\bm{k}_{2}$ as ordered differential fields. Construing $(K_{1},\bm{k}_{1};\pi_{1})$ and $(K_{2},\bm{k}_{2};\pi_{2})$ as models of $T^{\operatorname{dhl}}$, with $\mathcal{L}_{\operatorname{res}}=\\{+,-,\cdot,0,1,\der,\leqslant\\}$ and $T^{\operatorname{dhl}}_{\operatorname{res}}=\operatorname{Th}(\bm{k}_{1})$, it suffices to show that $(K_{1},\bm{k}_{1};\pi_{1})\equiv(K_{2},\bm{k}_{2};\pi_{2})$. By expanding the language $\mathcal{L}_{\operatorname{res}}$, we may assume that $T^{\operatorname{dhl}}_{\operatorname{res}}$ has quantifier elimination, and thus so does $T^{\operatorname{dhl}}$ in $\mathcal{L}^{\iota}$ by Theorem 8.8. Now we assume the Continuum Hypothesis and explain why this is unnecessary after Corollary 8.12. Then by passing to elementarily equivalent structures, we arrange that $(K_{1},\bm{k}_{1};\pi_{1})$ and $(K_{2},\bm{k}_{2};\pi_{2})$ are saturated of cardinality $\aleph_{1}$. In particular, $\bm{k}_{1}$ and $\bm{k}_{2}$ are also saturated of cardinality $\aleph_{1}$, so $\bm{k}_{1}\cong\bm{k}_{2}$. Finally, consider the structure $(\mathbb{Q},\bm{k}_{1};\pi)$, where $\mathbb{Q}$ is equipped with the usual ordered field structure, the trivial derivation, and the trivial dominance relation, and $\pi$ is the unique field embedding $\mathbb{Q}\to\bm{k}_{1}$. This structure embeds into both $(K_{1},\bm{k}_{1};\pi_{1})$ and $(K_{2},\bm{k}_{2};\pi_{2})$, and thus quantifier elimination yields $(K_{1},\bm{k}_{1};\pi_{1})\equiv(K_{2},\bm{k}_{2};\pi_{2})$. ∎ ###### Corollary 8.12. Let $\mathcal{L}_{\operatorname{res}}=\\{+,-,\cdot,0,1,\der,\leqslant\\}$ and $T^{\operatorname{dhl}}_{\operatorname{res}}=T^{\operatorname{expint},\operatorname{ls}}_{\operatorname{res}}$. Then every $\mathcal{L}^{\iota}$-sentence is $T^{\operatorname{dhl}}$-equivalent to an $\mathcal{L}_{\operatorname{res}}$-sentence. ###### Proof. Let $B$ be the boolean algebra of $\mathcal{L}^{\iota}$-sentences modulo $T^{\operatorname{dhl}}$-equivalence and let $B_{\operatorname{res}}$ be the boolean subalgebra of $B$ of $\mathcal{L}_{\operatorname{res}}$-sentences modulo $T^{\operatorname{dhl}}$-equivalence. Recall the correspondence between complete theories extending $T^{\operatorname{dhl}}$ and elements of $S(B)$, the space of ultrafilters on $B$. By Theorem 8.11, the restriction map $S(B)\to S(B_{\operatorname{res}})$ is injective, and thus by the Stone Representation Theorem, the inclusion map $B_{\operatorname{res}}\to B$ is surjective. ∎ Corollary 8.12 is clearly equivalent to Theorem 8.11. As $T^{\operatorname{dhl}}$ has a recursive axiomatization, Corollary 8.12 is an arithmetic statement, and thus by absoluteness Theorem 8.11 holds without assuming the Continuum Hypothesis. ## Acknowledgements Thanks are due to Lou van den Dries for helpful discussions, for comments on a draft of this paper, and for providing the manuscript, written with M. Aschenbrenner and J. van der Hoeven, on which §4 is based. 
Thanks are also due to Anton Bernshteyn for some suggestions on a draft of this paper and to Chris Miller for suggesting the Boshernitzan paper. Presentation of some of this research at DART X was supported by NSF DMS-1952694. ## Appendix A Motivating example This appendix elaborates on [3, Example 10.1.7] to provide an example of a pre-$H$-field with gap $0$ motivating the results of §8.B. Let $\mathbb{T}^{*}$ be an $\aleph_{0}$-saturated elementary extension of $\mathbb{T}$. Enlarging the valuation ring $\mathcal{O}_{\mathbb{T}^{*}}$ of $\mathbb{T}^{*}$ to $\dot{\mathcal{O}}_{\mathbb{T}^{*}}=\\{f\in\mathbb{T}^{*}:|f|\leqslant\exp^{n}(x)\ \text{for some}\ n\geqslant 1\\}$ yields a $\operatorname{d}$-henselian pre-$H$-field $(\mathbb{T}^{*},\dot{\mathcal{O}}_{\mathbb{T}^{*}})$ (necessarily with gap $0$); the saturation ensures that $\dot{\mathcal{O}}_{\mathbb{T}^{*}}$ is a proper subring of $\mathbb{T}^{*}$, i.e., $\mathbb{T}^{*}$ contains a _transexponential_ element. Moreover, the residue field of $(\mathbb{T}^{*},\dot{\mathcal{O}}_{\mathbb{T}^{*}})$ is elementarily equivalent to $\mathbb{T}$ as an ordered valued differential field. To explain this, we first review the theory of $\mathbb{T}$. In the language $\\{+,-,\cdot,0,1,\der,\preccurlyeq,\leqslant\\}$ the theory of $\mathbb{T}$ is model complete and axiomatized by the theory $T^{\operatorname{nl}}_{\operatorname{small}}$ of newtonian, $\upomega$-free, Liouville closed $H$-fields with small derivation. An asymptotic field $K$ is _differential-valued_ (_$\operatorname{d}$-valued_ for short) if $\mathcal{O}=C+\cao$; $\operatorname{d}$-valued fields are pre-$\operatorname{d}$-valued and $H$-fields are exactly the $\operatorname{d}$-valued pre-$H$-fields. An $H$-field $K$ is _Liouville closed_ if it is real closed, has exponential integration, and has _integration_ in the sense that $\der$ is surjective. For more on $H$-fields and related notions, see [3, Chapter 10]. The property of $\upomega$-freeness is crucial to studying $\mathbb{T}$ but incidental here, so we refer the reader to [3, §11.7]. Likewise, we do not define newtonianity, a technical cousin of $\operatorname{d}$-henselianity, and instead refer the reader to [3, Chapter 14]. To describe $(\mathbb{T}^{*},\dot{\mathcal{O}}_{\mathbb{T}^{*}})$ and its residue field, it is convenient to work with value groups instead of valuation rings via the notions of coarsening and specialization. Let $K$ be an $H$-asymptotic field with small derivation and let $\Delta$ be a nontrivial proper convex subgroup of $\Gamma$ such that $\psi(\Delta^{\neq})\subseteq\Delta$ and $\psi(\Gamma\setminus\Delta)\subseteq\Gamma\setminus\Delta$. The _coarsening of $K$ by $\Delta$_ is the differential field $K$ with the valuation $v_{\Delta}\colon K^{\times}\to\Gamma/\Delta$, $a\mapsto va+\Delta$, denoted by $K_{\Delta}$. Its valuation ring is $\dot{\mathcal{O}}\ \coloneqq\ \\{a\in K:va\geqslant\delta\ \text{for some}\ \delta\in\Delta\\}\ \supseteq\ \mathcal{O}$ with maximal ideal $\dot{\cao}\ \coloneqq\ \\{a\in K:va>\Delta\\}\ \subseteq\ \cao.$ Then $K_{\Delta}$ has small derivation by [3, Corollary 4.4.4], and it is $H$-asymptotic with gap $0$ by [3, Corollary 9.2.26 and Lemma 9.2.24]. Moreover, it is pre-$\operatorname{d}$-valued by [3, Corollary 10.1.6]. If $K$ is equipped with an ordering with respect to which $\mathcal{O}$ is convex, then $\dot{\mathcal{O}}$ remains convex; if $K$ is additionally a pre-$H$-field, then so is $K_{\Delta}$.
Setting $\dot{a}\coloneqq a+\dot{\cao}$ for $a\in\dot{\mathcal{O}}$, we equip the differential residue field $\dot{K}\coloneqq\dot{\mathcal{O}}/\dot{\cao}$ of $K_{\Delta}$ with the valuation $v\colon\dot{K}^{\times}\to\Delta$, $\dot{a}\mapsto va$, making $\dot{K}$ a valued differential field with small derivation called the _specialization of $K$ to $\Delta$_. Its valuation ring is $\mathcal{O}_{\dot{K}}\coloneqq\\{\dot{a}:a\in\mathcal{O}\\}$ with maximal ideal $\cao_{\dot{K}}\coloneqq\\{\dot{a}:a\in\cao\\}$. The map $\mathcal{O}\to\mathcal{O}_{\dot{K}}$ given by $a\mapsto\dot{a}$ induces an isomorphism $\operatorname{res}(K)\cong\operatorname{res}(\dot{K})$ of differential fields. From $\psi(\Delta^{\neq})\subseteq\Delta$ we get that $\dot{K}$ is $H$-asymptotic with asymptotic couple $(\Delta,\psi|_{\Delta^{\neq}})$. Furthermore, if $K$ is pre-$\operatorname{d}$-valued, then so is $\dot{K}$, and if $K$ is $\operatorname{d}$-valued, then so is $\dot{K}$ with $C_{\dot{K}}=C$, where we identify $C$ with a subfield of $C_{\dot{K}}$ via $a\mapsto\dot{a}$ [3, Lemma 10.1.8]. If $K$ is equipped with an ordering with respect to which $\mathcal{O}$ is convex, then $\mathcal{O}_{\dot{K}}$ is convex with respect to the induced ordering on $\dot{K}$. Moreover, suppose that $K$ is a pre-$H$-field. To see that $\dot{K}$ is a pre-$H$-field, it remains to check (PH3), so let $a\in\dot{\mathcal{O}}$ with $\dot{a}>\mathcal{O}_{\dot{K}}$. Then $va<0$, so $va\in\Delta^{\neq}$, and hence $v(a^{\prime})=v(a)+\psi(va)\in\Delta$. But we also have $a^{\prime}>0$, and so $a^{\prime}>\dot{\cao}$, as desired. If additionally $K$ is an $H$-field, then so is $\dot{K}$, since $H$-fields are exactly the $\operatorname{d}$-valued pre-$H$-fields. Suppose now that $K=\mathbb{T}^{*}$ and set $\Delta\coloneqq\\{\gamma\in\Gamma:\psi^{n}(\gamma)\geqslant 0\ \text{for some}\ n\geqslant 1\\}$, a nontrivial proper convex subgroup of $\Gamma$ with $\psi(\Delta^{\neq})\subseteq\Delta$ and $\psi(\Gamma\setminus\Delta)\subseteq\Gamma\setminus\Delta$, as in [3, Example
# ScGAN: A Generative Adversarial Network to Predict Hypothetical Superconductors Evan Kim<EMAIL_ADDRESS>Tesla STEM High School, Redmond, WA 98053, USA S.V. Dordevic<EMAIL_ADDRESS>Department of Physics, The University of Akron, Akron, OH 44325, USA ###### Abstract Despite having been discovered more than three decades ago, High Temperature Superconductors (HTSs) lack both an explanation for their mechanisms and a systematic way to search for them. To aid this search, this project proposes ScGAN, a Generative Adversarial Network (GAN) to efficiently predict new superconductors. ScGAN was trained on compounds in OQMD and then transfer learned onto the SuperCon database or a subset of it. Once trained, the GAN was used to predict superconducting candidates, and approximately 70% of them were determined to be superconducting by a classification model–a 23-fold increase in discovery rate compared to manual search methods. Furthermore, more than 99% of predictions were novel materials, demonstrating that ScGAN was able to potentially predict completely new superconductors, including several promising HTS candidates. This project presents a novel, efficient way to search for new superconductors, which may be used in technological applications or provide insight into the unsolved problem of high temperature superconductivity. ## I Introduction In recent years, superconductors have been applied in a variety of important technologies such as power transmission lines, MRI magnets, Maglev trains, and quantum computers. Quantum computers are especially important as they are expected to solve problems that are too computationally expensive for current classical computers. However, to be used in these applications the superconductors must be cooled below their critical temperatures ($T_{c}$), which for most current superconductors are very low. For instance, Google’s quantum computer must be maintained at $0.02\;\mathrm{K}$, which severely limits its general use [1]. This points to a growing need for superconductors with higher $T_{c}$, which has made them an active research topic for the last couple of decades. The mechanisms of superconductivity in most materials with relatively high $T_{c}$ are not fully understood, which means there is no systematic way to search for new materials or to predict their critical temperatures [2]. Thus, the current procedure for finding HTSs is essentially trial-and-error, which is extremely inefficient. This was exemplified in a recent study, which found that only about $3\%$ of the approximately $1000$ materials surveyed were superconducting [3]. Furthermore, the study failed to find any superconductors with $T_{c}>60\;\mathrm{K}$. This extreme inefficiency means that the likelihood of manually finding new superconductors, especially HTSs, is extremely low. To address this difficulty, the use of computational tools in superconductivity research has become popular in recent years [4]. In particular, there have been several studies utilizing machine learning to predict whether a given chemical compound will be superconducting or not. The earliest such study was done by Stanev _et al._, in which two random forest-based models were built: a classification model for predicting superconductivity and a regression model for predicting superconducting transition temperature [5]. The models were successfully applied to the SuperCon database, achieving a 92% accuracy in classification and an $R^{2}=0.88$ for regression.
They did run across one limitation of their machine learning model, however. When trained on a certain class of superconductors (e.g. cuprates), the model was unable to make good predictions on other classes of superconductors (e.g. pnictides). Following this pioneering study, there have been several other studies applying machine learning methods, such as a K-nearest neighbors algorithm (Ref. [6]) and a deep learning model (Ref. [7]). The K-nearest neighbors algorithm reported improvements on the previous study from Ref. [5] in terms of overall performance: an $R^{2}$ of $0.93$ and a classification accuracy of $96.5\%$. The deep learning model, on the other hand, showed that it might be possible to overcome the limitation that Stanev _et al._ faced, as they were able to make predictions about pnictide superconductors from training data that did not contain them. The way each of these previous studies predicted new superconducting materials was by running their trained model on a database of known existing chemical compounds and finding which compounds the model indicated could be superconducting. This procedure has several notable limitations. First, these studies miss out on the vast chemical composition space that is not already contained in existing databases. Second, commonly used databases (such as ICSD and OQMD) contain mostly stoichiometric compounds, whereas many superconductors are non-stoichiometric (cuprates and pnictides, in particular), and so many possibilities were missed in that manner as well (see for example Table 3 in Ref. [5] and Table 1 in Ref. [6]). Finally, the discovery of superconductors in the lab usually does not happen that way; they are discovered by synthesizing new materials and testing them, rather than checking the known ones. In order to overcome these limitations, in this work we employ Generative Adversarial Networks (GANs) [8]. Generative models refer to a general class of machine learning models which are able to generate things that resemble the input dataset. They have proven to be extremely powerful, and in recent years have found numerous applications in fields such as science, engineering, medicine, art, video games, and deepfakes [9]. In most cases GANs performed better than other generative models, such as variational autoencoders, because they are able to learn more hidden rules in the input data set. For example, Dan _et al._ [10] reported a GAN model which generated new chemical compounds with 92.53% novelty and 84.5% validity. Another work, by Hu _et al._ [11], applied GANs to the SuperCon dataset, but for the purpose of characterization rather than prediction. In this work, we combine the general idea of GANs with the previous superconductor models, and propose the first GAN to predict new superconductors (ScGAN). Our ScGANs are based on chemical composition only, and are able to generate new superconducting materials with high novelty and validity. The paper is organized as follows. In Section II we present the details of the creation of the ScGANs. In Section III the main results of the study are discussed. In particular, we present a list of hypothetical superconducting materials generated by our ScGANs, as well as their predicted critical temperatures. Finally, in Section IV we summarize the most important findings. ## II Methodology As stated in the introduction, we chose the GAN as our generative model. Its structure is shown in Fig. 1. A GAN is composed of two competing neural networks, the generator and discriminator.
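As a rough sketch of such a pair (the layer sizes and latent dimension below are entirely our assumptions; the paper specifies the GAN framework and the $96\times 8$ compound representation of Section II.2, but not the network internals):

```python
import torch
import torch.nn as nn

LATENT_DIM = 128  # size of the random-noise input (assumed)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 512),
            nn.ReLU(),
            nn.Linear(512, 96 * 8),
        )

    def forward(self, z):
        # Map noise to a "fake" compound matrix (the encoding of Sec. II.2).
        return self.net(z).view(-1, 96, 8)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(96 * 8, 512),
            nn.ReLU(),
            nn.Linear(512, 1),  # unbounded score, as a Wasserstein critic needs
        )

    def forward(self, x):
        return self.net(x)
```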
The generator takes in random noise and generates a “fake” compound, which the discriminator attempts to classify as real or fake using existing data of real compounds. Each network then updates its parameters based on the performance of the discriminator. The two networks improve their performance iteratively until the generator can generate realistic-looking compounds, while the discriminator can detect unrealistic-looking ones [8]. Figure 1: The architecture of the GAN model used. Y1.2Ba0.8CuO4 and O7.25Ca0.62Cu2.13Sr1.91Y0.37Bi1.71 are examples of real (from SuperCon) and “fake” superconductors, respectively. Fig. 2 depicts the training/testing process of our GANs and it has three main stages. The main idea is to have the model first learn general chemical composition rules by training on a larger dataset (OQMD database) of general chemical compounds and then transfer learning it onto a (much smaller) dataset of known superconducting materials (SuperCon). The idea of transfer learning is to allow a model to learn even with a limited amount of data, which is the case here as the general compounds dataset (OQMD) has on the order of $\sim 10^{6}$ data points, while the superconductor datasets will only have on the order of $\sim 10^{4}$ data points. Figure 2: The overall training / testing process from the data to the final GAN model. It is composed of three main stages: (a) data processing, (b) training on the OQMD dataset, and finally (c) transfer learning onto the superconductor dataset (the whole SuperCon, or a part of it). ### II.1 Data Collection Data were sourced from two open-source databases: SuperCon [12] and the Open Quantum Materials Database (OQMD) [13, 14]. SuperCon is the largest database for superconducting materials with around $30,000$ superconductors before filtering. Similar to what was done in some previous studies [6, 15], we only used the chemical compositions of the materials extracted from SuperCon. On the other hand, OQMD is a much larger database with around $10^{6}$ DFT-calculated compounds, most of which are not superconductors. ### II.2 Data Processing Figure 3: Chemical composition data represented as a matrix [6] in $\mathbb{R}^{96\times m}$, where $m$ is the number of datapoints. Each column is a single compound, with each entry representing the number of each element present in the compound. Note that the numbers in the matrix are for illustration purposes only; they do not represent any real compounds or superconductors. Figure 4: An “adjusted one-hot” encoding of the chemical composition as a matrix in $\mathbb{R}^{96\times 8}$. The idea here is that the amount of each element is encoded through both the vertical location of the yellow box and the value in it, which allows the GAN to learn the chemical compositions better.

Class | Quantity | Percentage
---|---|---
Cuprates | 7,304 | 44.4%
Pnictides | 1,436 | 8.7%
Others | 7,749 | 47.0%
Everything | 16,489 | 100%

Table 1: Distribution of the superconductors in our filtered version of SuperCon by class. Each of these served as different training sets. We notice that the number of pnictide entries is much smaller compared with cuprates and others. Before filtering the data, all compounds had to be expressed in a common format so that unwanted datapoints could be detected regardless of small differences in formatting across the databases, such as the ordering of elements.
First, all datapoints were formatted as $1\times 96$ matrices [6] (the $96$ is the maximum atomic number present across all the compounds in the database) and then combined so that each dataset was a matrix in $\mathbb{R}^{96\times m}$, where $m$ is the number of data points (see Fig. 3). Then, both datasets were filtered for duplicates, which reduced OQMD to around $800,000$ datapoints. The SuperCon dataset required further processing as a number of entries were incomplete, and they were either corrected or removed, leaving around 16,000 datapoints. Lastly, the SuperCon dataset was split into three different groups (classes): cuprates, pnictides (iron-based) and others (anything else that is neither a cuprate nor a pnictide). Table 1 has the quantitative counts of these different groups. The purpose of this was to test the model’s ability to learn the different classes of superconductors, especially the high temperature classes of cuprates and pnictides. Once the data was filtered, the chemical compositions were then transformed into a form for the GAN to train on. A previous study, Ref. [10], used a one-hot encoding for general chemical compounds. However, that encoding was designed for stoichiometric compounds, i.e. for integer values of parameters, and many superconductors are non-stoichiometric, i.e. have decimal compositions. Instead, we propose the use of an “adjusted one-hot” encoding that works for decimals, in which each compound is represented by a 96 $\times$ 8 matrix of real numbers (Fig. 4). As shown in the figure, from the $1\times 96$ vectors in the columns of Fig. 3, each nonzero component of that vector $\mathbf{v}_{i}$ was expanded into an 8-dimensional vector with the following process. First, the integer $k$ between $1$ and $7$ (inclusive) nearest to $\mathbf{v}_{i}$ was found. Then, the matrix values were set as $A_{mi}=\delta_{mk}\cdot\mathbf{v}_{i}/k,$ (1) where $\delta$ here is the Kronecker delta, we index from $0$ (top left is $A_{00}$), and $m$ ranges from $0$ to $7$. For zero components ($\mathbf{v}_{i}=0$), a $1$ was simply placed at $A_{0i}$. We point out that we tested other encodings, such as the one from Ref. [7], but these were susceptible to mode collapse. The encoding proposed in this work successfully encodes decimal values both through the actual matrix entry and its location. ### II.3 Model A GAN is a type of generative model that has two competing neural networks, a generator and a discriminator, as shown in Fig. 1. Traditional GANs, however, can suffer from issues such as mode collapse and gradient vanishing, so the Wasserstein GAN with Gradient Penalty [16] was used instead. In the Wasserstein GAN with gradient penalty, the loss functions are $\mathrm{Loss}_{D}=\underset{\bm{\tilde{x}}\sim\mathbb{P}_{g}}{\mathbb{E}}[D(\bm{\tilde{x}})]-\underset{\bm{x}\sim\mathbb{P}_{r}}{\mathbb{E}}[D(\bm{x})]+\lambda\underset{\bm{\hat{x}}\sim\mathbb{P}_{\bm{\hat{x}}}}{\mathbb{E}}[(\|\nabla_{\bm{\hat{x}}}D(\hat{\bm{x}})\|-1)^{2}]$ (2) and $\mathrm{Loss}_{G}=-\underset{\bm{\tilde{x}}\sim\mathbb{P}_{g}}{\mathbb{E}}[D(\bm{\tilde{x}})].$ (3) Here $D(x)$ represents the output of the discriminator and $\mathbb{E}$ is the expectation value (average).
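Both the encoding of Eq. (1) and the losses of Eqs. (2)-(3) are concrete enough to sketch in code. First, a minimal from-scratch sketch of the adjusted one-hot encoding (function and variable names are ours; note that Eq. (1) indexes rows by $m=0,\dots,7$, so the matrix below is built as the $8\times 96$ transpose of the $96\times 8$ layout quoted in the text):

```python
import numpy as np

def adjusted_one_hot(v):
    """Encode a 96-dim composition vector as the matrix of Eq. (1).

    v[i] is the (possibly fractional) amount of element i+1.
    """
    A = np.zeros((8, 96))              # rows m = 0..7, columns i = 0..95
    for i, amount in enumerate(v):
        if amount == 0:
            A[0, i] = 1.0              # zero components: a 1 at A_{0i}
        else:
            k = int(np.clip(round(amount), 1, 7))  # nearest integer in 1..7
            A[k, i] = amount / k       # Eq. (1): A_mi = delta_mk * v_i / k
    return A
```

Second, a minimal sketch of how Eqs. (2)-(3) translate into code; the gradient-penalty details, including the default $\lambda=10$, follow the standard recipe of Ref. [16] and are our assumptions rather than specifics reported in the paper:

```python
import torch

def wgan_gp_losses(D, real, fake, lambda_gp=10.0):
    """Compute Loss_D and Loss_G of Eqs. (2)-(3) for one batch.

    D:    the critic, returning an unbounded realness score
    real: batch of real compounds, shape (B, 96, 8)
    fake: batch of generator outputs, same shape
    """
    d_real = D(real).mean()            # E[D(x)],       x ~ P_r
    d_fake = D(fake).mean()            # E[D(x_tilde)], x_tilde ~ P_g

    # Gradient penalty, evaluated on random interpolates x_hat.
    eps = torch.rand(real.size(0), 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grads, = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)
    penalty = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

    loss_d = d_fake - d_real + lambda_gp * penalty   # Eq. (2)
    loss_g = -d_fake                                 # Eq. (3)
    return loss_d, loss_g
```

In an actual training loop the two losses would be backpropagated in separate passes (with the fake batch detached for the critic step); the sketch above only makes the correspondence with Eqs. (2)-(3) explicit.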
Then the parameters are updated with an optimizer, $w\leftarrow w+\alpha\cdot\operatorname{Optimizer}(w,\nabla_{w}\mathrm{Loss}_{D})$ (4) and $\theta\leftarrow\theta-\alpha\cdot\operatorname{Optimizer}(\theta,\nabla_{\theta}\mathrm{Loss}_{G})$ (5), where $w$ denotes the discriminator’s parameters, $\theta$ the generator’s parameters, and $\alpha$ the learning rate. RMSProp was chosen as the optimizer [17], after testing out several other options, such as Adam [18]. #### II.3.1 Training The model was first trained on the OQMD dataset for 400 epochs. It was then transfer learned onto the SuperCon dataset or a subset of it, on which it would train for another 500 epochs. Transfer learning onto four different datasets (cuprates, pnictides, others, and everything together) resulted in four different versions of the GAN. Afterwards, the testing procedure and the data analysis were conducted, and then the hyperparameters were updated based on the results. The training curves for the final model on each of these sets are displayed in Fig. 5. Notably, they were all able to converge and stabilize over the 500 epochs. Figure 5: The generator loss against training epoch for each of the four datasets the GAN trained on: (a) All of SuperCon; (b) Others (i.e. not cuprates or pnictides); (c) Cuprates; and (d) Pnictides. #### II.3.2 Testing After each training process, $5,000$ hypothetical compounds were generated from the versions of the GAN trained on the smaller sets and $30,000$ from the version trained on everything. The generated predictions were then inspected with various quality checks. Each compound was first tested for validity using the charge neutrality and electronegativity check features of the SMACT package [19]. The package tests the compound for electronegativity and charge balance to determine whether it is a valid compound or not. Each prediction was then checked for uniqueness—whether the compound showed up earlier in the generated list—and novelty—whether the compound was in the training dataset. These three checks looked at the general quality of the model. Then, more specific to superconductivity, each compound was run through the model from Ref. [6] to check whether it is a superconductor or not, as well as to predict its critical temperature. Of course, to be sure of superconductivity, the compounds must be synthesized and tested. Lastly, the formation energy of each generated compound was calculated using the model from Ref. [20], which indicates the stability of the compound. These tests will be discussed further in Section III, along with the actual results. ### II.4 Clustering In order to further assess the quality of predictions, clustering analysis was performed. Clustering is an unsupervised machine learning technique whose main goal is to unveil hidden patterns in the data. It was recently applied to superconductors from the SuperCon database [15]. Depending on the data set, different clustering algorithms were used, such as k-means, hierarchical, Gaussian mixtures, self-organizing maps, etc. The results showed that in the case of superconductors, clustering methods can achieve, and in some cases exceed, human level performance. In order to visualize clustering results, different techniques can be used. It was shown that for superconductors the so-called t-SNE produces the best results [15]. t-SNE is a non-linear dimensionality reduction technique which allows higher dimensional data to be represented in 2D or 3D [21].
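As a minimal sketch of this visualization step (the file name and the t-SNE settings are our assumptions):

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical (m, 96) array of composition vectors (Sec. II.2 / Fig. 3).
X = np.load("compositions.npy")

# Reduce the 96 dimensions to 2 abstract axes for plotting; as noted
# below, the resulting coordinates carry no physical meaning.
Y = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(X)
```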
In the case of superconductors, the data points are $96$-dimensional (each compound is represented by a 1 $\times$ 96 matrix), as discussed in Section II.2. t-SNE reduces these dimensions down to either two or three, which allows easy visualization. We point out, however, that these reduced dimensions do not have any physical meaning. ## III Results After training, from the four different versions of the GAN, we generated four superconductor candidate lists with either 5,000 or 30,000 chemical compositions. We then ran the predicted superconductors through a series of tests to evaluate their quality. The first few were general tests, and the rest were in the context of superconductivity using existing computational models, as experimentally testing all of them is unfeasible. ### III.1 Duplicates and Novelty We first screened the output for duplicates within the generated sets themselves and then for duplicates between the generated set and the dataset of known superconductors that it was trained on. The results are tabulated in Table 2. As seen in the table, the number of repeats within the generated samples was relatively low (high uniqueness), with the exception of pnictides, which had more duplicates than the rest. This is likely due to the fact that the dataset of pnictides was significantly smaller than the rest (see Table 1). We speculate that this overall low rate of duplicates stems from the fact that the model is able to handle decimals (see Section II.2 and Fig. 4), which opens up a large composition space for it to explore. The percentage of predicted compounds that were novel, i.e. not already known to exist, listed in Table 2 is also very high across all four GANs. These two results demonstrate that all versions of ScGAN can generate a diverse array of new compositions.

GAN Version | Novel % | Unique %
---|---|---
Entirety of SuperCon | 99.69% | 96.78%
Cuprate | 99.74% | 92.98%
Pnictides | 99.32% | 58.74%
Others | 98.89% | 91.58%

Table 2: The percentage of generated predictions that were novel (not in the training set) and unique (distinct from others in the given generated set) for each of the versions of ScGAN trained on the given training datasets on the left. ### III.2 Formation Energy As mentioned in the previous section, we found the formation energies of the predicted compounds from the GANs using ElemNet [20]. It was indicated in Ref. [20] that a negative value of formation energy is a good indicator of stability, i.e. the possibility of being synthesized in the lab. In Fig. 6 we display the values of formation energy for all predictions. We see from the distributions of formation energies that most of the predicted compounds have calculated formation energies less than zero. Even though this does not provide definitive proof of stability, it is a general indication that most of the predicted compounds are stable and can be synthesized in the lab. Figure 6: Distribution of the formation energies of the predicted compounds from the four versions of the GAN: (a) everything, (b) others, (c) cuprates, (d) pnictides. ### III.3 Superconductivity As a next test, we ran the predicted compounds through the $K$-Nearest Neighbors (KNN) classification model from Ref. [6] for predicting superconductivity based on elemental composition, in order to check if the predictions of two machine learning models (GAN and KNN) would agree. However, the probabilistic nature of the machine learning model had to be taken into account.
If $p_{sc}$ is the proportion of predictions that came up superconducting according to the model, then we can estimate $\rho_{sc}$, the true proportion of superconducting entries, using Bayesian statistics. Denoting tp and fp as the true positive and false positive rates of the classification model, respectively, we can write $\rho_{sc}\cdot\textit{{tp}}+(1-\rho_{sc})\cdot\textit{{fp}}\approx p_{sc}.$ (6) Solving for $\rho_{sc}$ gives $\rho_{sc}\approx\frac{p_{sc}-\textit{{fp}}}{\textit{{tp}}-\textit{{fp}}}.$ (7) The true positive and false positive rates here are reported from Ref. [6]: $\textit{{tp}}=98.69\%$ and $\textit{{fp}}=16.94\%$. The output percentages along with the estimates of the true proportions calculated from Eq. 7 are tabulated in Table 3.

GAN Version | Output % | True % Estimate
---|---|---
Entirety of SuperCon | 74.50% | 70.42%
Cuprates | 75.76% | 71.95%
Pnictides | 72.44% | 67.89%
Others | 69.58% | 64.39%

Table 3: The percentages of generated predictions that were determined by the KNN model to be superconducting for different training sets along with the estimated real percentage of the predictions that were superconducting. The true percentages were estimated according to Eq. 7. All four GANs achieved very high percentages of superconducting materials according to the KNN model, especially when compared to the 3% figure from the manual search in Ref. [3]. However, the only definite test of superconductivity can come from experimental measurements. ### III.4 Critical Temperature Estimates Figure 7: Distributions of the critical temperatures of the predictions of the four different versions of ScGAN: trained on (a) everything, (b) others, (c) cuprates, (d) pnictides. We also calculated the critical temperatures of our predictions using the regression model from Ref. [6]. Similar to the superconductivity tests in the previous subsection, these calculated values can only be taken as approximations. However, the regression model can still provide us with a general understanding of the capabilities of ScGAN. The critical temperature outputs from the model are summarized in Table 4 and the distributions are shown in Fig. 7. While the distributions are somewhat broad, the GANs were still able to find several superconductors with predicted critical temperatures higher than $100\,\mathrm{K}$, which exceeds the manual search maximum of $58\,\mathrm{K}$ (though that search was mostly restricted to pnictides) and several of the previous machine learning approaches [6, 5]. This is not surprising, as those previous searches were limited to existing databases of stoichiometric compounds, which meant that these forward design approaches could only produce a limited number of candidates, mostly with low critical temperatures.

Training Data | Average $T_{c}$ | Standard Dev. | Max $T_{c}$
---|---|---|---
Entirety of SuperCon | $6.53\,\mathrm{K}$ | $11.76\,\mathrm{K}$ | $123.25\,\mathrm{K}$
Cuprates | $59.34\,\mathrm{K}$ | $24.78\,\mathrm{K}$ | $133\,\mathrm{K}$
Pnictides | $20.41\,\mathrm{K}$ | $13.69\,\mathrm{K}$ | $51.98\,\mathrm{K}$
Others | $72.68\,\mathrm{K}$ | $21.24\,\mathrm{K}$ | $116.55\,\mathrm{K}$

Table 4: Summary statistics of the predicted critical temperatures of the generated predictions that were determined by the regression model for different training sets. ### III.5 Ability to Learn Features We then looked at the types of superconductors that were generated by the different versions of ScGAN.
The distributions are given in Table 5, and we can see that each version of ScGAN generated mostly superconductors that matched its training data. This indicates the GAN was able to detect the different underlying features behind these different major classes of superconductors.

Training Data | Cuprate % | Pnictide % | Other %
---|---|---|---
Cuprate | 92.76% | 0.06% | 7.18%
Pnictides | 0.02% | 99.84% | 0.14%
Others | 0.14% | 0.6% | 99.26%

Table 5: The distribution of the predicted superconductors across the different classes of superconductors for the different versions of the GAN. ### III.6 Clustering results In Fig. 8 we display the results of clustering analysis on three sets of predictions: panel (a) for cuprates, panel (b) for pnictides and panel (c) for others. The results are visualized with the help of t-SNE. As pointed out above, the two t-SNE dimensions, Y1 and Y2, do not have any physical meaning. Full circles of different colors represent different families of superconductors from the SuperCon database, whereas purple open circles represent GAN predictions. As can be seen from all three panels, GANs were able to generate new superconductors from all known families of cuprates, pnictides and other superconductors. However, GANs did not predict any new families of superconductors. Figure 8: Clustering of the predicted compounds from various versions of the GAN: (a) cuprates, (b) pnictides and (c) others. Full circles represent the data points from SuperCon and purple open circles are GAN predictions. ### III.7 Promising Candidates After running the candidates through the $T_{c}$ prediction model, we manually identified the ones that looked the most promising, including some with very high critical temperatures. It turns out that most of these were cuprates, which is not surprising, considering that the superconductors with the highest critical temperatures in the training set are cuprates. We then checked the Crystallography Open Database (COD) [22] to see if those compounds were in the database. The ones listed in Table 6 could be found neither in COD nor in SuperCon, showing that this model overcomes the limitations of the previous forward design models and finds completely new superconductors. A more comprehensive list of predictions is available upon reasonable request.
Compound | Predicted $T_{c}$ | Class
---|---|---
$\mathrm{PrCaBiSr_{2}Cu_{2}O_{7.46}}$ | $104.6\,\mathrm{K}$ | Cuprates
$\mathrm{YTiSr_{2}Cu_{2.74}O_{6.76}}$ | $91.7\,\mathrm{K}$ | Cuprates
$\mathrm{TeYSr_{2}Cu_{2}O_{7.75}}$ | $89.8\,\mathrm{K}$ | Cuprates
$\mathrm{TlCaSr_{2}Cu_{2}O_{7.82}}$ | $73.9\,\mathrm{K}$ | Cuprates
$\mathrm{YCaBa_{2}ZnCu_{2.36}O_{7.54}}$ | $71.5\,\mathrm{K}$ | Cuprates
$\mathrm{HgCsSrCa_{2}Cu_{2.56}O_{8.66}}$ | $69.8\,\mathrm{K}$ | Cuprates
$\mathrm{GdCaRuSr_{1.83}Cu_{2}O_{8.71}}$ | $40.8\,\mathrm{K}$ | Cuprates
$\mathrm{C_{2.52}Ni_{0.92}Y_{0.71}Th}$ | $85.3\,\mathrm{K}$ | Others
$\mathrm{Si_{0.62}V_{0.91}Zr_{0.83}}$ | $84.7\,\mathrm{K}$ | Others
$\mathrm{Al_{2.34}Te_{0.64}Ir_{1.07}}$ | $84.7\,\mathrm{K}$ | Others
$\mathrm{Be_{0.16}Si_{1.09}V_{2.67}Y_{1.72}}$ | $62.4\,\mathrm{K}$ | Others
$\mathrm{Cu_{1.13}Nb_{3.0}Sb_{0.72}Ir_{1.05}}$ | $59.4\,\mathrm{K}$ | Others
$\mathrm{Ga_{0.62}Nb_{2.88}Sn_{0.65}Te_{0.79}}$ | $40.8\,\mathrm{K}$ | Others
$\mathrm{B_{1.73}C_{1.03}Ni_{1.12}Y_{0.66}Pt_{0.64}}$ | $40.8\,\mathrm{K}$ | Others
$\mathrm{RuTeSeFe}$ | $35.6\,\mathrm{K}$ | Pnictides
$\mathrm{TeSSeFe_{1.05}}$ | $31.0\,\mathrm{K}$ | Pnictides
$\mathrm{CeCoAs_{2.15}Fe_{1.39}}$ | $23.3\,\mathrm{K}$ | Pnictides
$\mathrm{CeThPAsFe_{1.59}}$ | $12.2\,\mathrm{K}$ | Pnictides
$\mathrm{GaPrCa_{2.58}As_{12.44}Fe_{6.34}}$ | $11.9\,\mathrm{K}$ | Pnictides
$\mathrm{NdOAsFe}$ | $4.5\,\mathrm{K}$ | Pnictides

Table 6: Promising superconductor candidates generated by ScGANs that do not exist in current databases. Also shown are their predicted critical temperatures [6]. ## IV Conclusion For decades the search for new superconductors has relied on the serendipity of material scientists to synthesize a new material with superconducting properties. This paper introduced a novel method to search for superconductors—discovering candidates with a generative adversarial network. In contrast to previous computational methods which attempted to predict new superconductors from existing datasets, this model predicted compounds directly. This “inverse design” approach proved to be far more powerful than manual search methods and previous computational methods, with the model being able to generate thousands of candidates with a wide range of critical temperatures that lie outside existing databases (both superconductor and general inorganic compound databases). Even though the model trained only on chemical compositions, more than $70\%$ of the GAN’s predictions were cross-checked with a separate model to be potentially superconducting (the only way to know for sure, however, would be to synthesize and check these compounds in a lab). Of these, several were promising HTS candidates listed in Table 6. We point out that previous models would have been unable to find such candidates as they were outside of current databases. While the compounds generated were new, our clustering showed that the GAN did not generate any new families of superconductors. However, it was still able to generate non-stoichiometric compounds, widening the scope of the computational search. Future studies should look into some improvements that can be made, such as being able to account for charge neutrality and crystal structure in the compound encodings. Chemical checks on the predicted compounds (using SMACT as detailed earlier) revealed that while there was electronegativity balance, charge balance was not always exact for the superconductor candidates.
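To make the charge-balance check concrete, a from-scratch sketch of the underlying idea follows (this is not the SMACT implementation used above; the oxidation-state table is a tiny illustrative subset, and assigning one shared oxidation state per element is a simplification, since mixed-valence compounds such as many cuprates need per-atom or fractional states):

```python
from itertools import product

# Illustrative subset of allowed oxidation states (assumed values).
OX_STATES = {"La": [3], "Sr": [2], "Cu": [1, 2, 3], "O": [-2]}

def can_be_charge_neutral(composition, tol=0.05):
    """composition: dict mapping element -> (possibly fractional) amount."""
    elements = list(composition)
    for states in product(*(OX_STATES[e] for e in elements)):
        total = sum(composition[e] * q for e, q in zip(elements, states))
        if abs(total) <= tol:  # small tolerance for non-stoichiometry
            return True
    return False

print(can_be_charge_neutral({"La": 2, "Cu": 1, "O": 4}))  # La2CuO4 -> True
```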
Furthermore, crystal structure has been known to play a significant role in superconductivity, so including it in calculations would most likely result in improvements. However, this may prove to be a difficult endeavor as crystal structure is not well-documented in existing databases. Active transfer learning could also be attempted to narrow down the predictions to high temperature superconductors only [23]. However, even without these possible improvements, the model in its current version is still very promising and can be applied to search for superconductors, starting with the candidates identified in this paper. ## References * Arute _et al._ [2019] F. Arute, K. Arya, R. Babbush, D. Bacon, J. C. Bardin, R. Barends, R. Biswas, S. Boixo, F. G. S. L. Brandao, D. A. Buell, B. Burkett, Y. Chen, Z. Chen, B. Chiaro, R. Collins, W. Courtney, A. Dunsworth, E. Farhi, B. Foxen, and J. M. Martinis, Quantum supremacy using a programmable superconducting processor, Nature 574, 505 (2019). * Hirsch _et al._ [2015] J. Hirsch, M. Maple, and F. Marsiglio, Superconducting materials classes: Introduction and overview, Physica C: Superconductivity and its Applications 514, 1 (2015), Superconducting Materials: Conventional, Unconventional and Undetermined. * Hosono _et al._ [2015] H. Hosono, K. Tanabe, E. Takayama-Muromachi, H. Kageyama, S. Yamanaka, H. Kumakura, M. Nohara, H. Hiramatsu, and S. Fujitsu, Exploration of new superconductors and functional materials, and fabrication of superconducting tapes and wires of iron pnictides, Science and Technology of Advanced Materials 16, 033503 (2015). * Bedolla _et al._ [2020] E. Bedolla, L. C. Padierna, and R. Castañeda-Priego, Machine learning for condensed matter physics, Journal of Physics: Condensed Matter 33, 053001 (2020). * Stanev _et al._ [2018] V. Stanev, C. Oses, A. G. Kusne, E. Rodriguez, J. Paglione, S. Curtarolo, and I. Takeuchi, Machine learning modeling of superconducting critical temperature, npj Computational Materials 4, 29 (2018). * Roter and Dordevic [2020] B. Roter and S. Dordevic, Predicting new superconductors and their critical temperatures using machine learning, Physica C: Superconductivity and its Applications 575, 1353689 (2020). * Konno _et al._ [2021] T. Konno, H. Kurokawa, F. Nabeshima, Y. Sakishita, R. Ogawa, I. Hosako, and A. Maeda, Deep learning model for finding new superconductors, Phys. Rev. B 103, 014509 (2021). * Goodfellow _et al._ [2014] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, Generative adversarial nets, in _Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2_ , NIPS’14 (MIT Press, Cambridge, MA, USA, 2014) p. 2672–2680. * Dash _et al._ [2021] A. Dash, J. Ye, and G. Wang, A review of generative adversarial networks (gans) and its applications in a wide variety of disciplines–from medical to remote sensing, arXiv preprint arXiv:2110.01442 (2021). * Dan _et al._ [2020] Y. Dan, Y. Zhao, X. Li, S. Li, M. Hu, and J. Hu, Generative adversarial networks (gan) based efficient sampling of chemical composition space for inverse design of inorganic materials, npj Computational Materials 6, 84 (2020). * Hu _et al._ [2020] T. Hu, H. Song, T. Jiang, and S. Li, Learning representations of inorganic materials from generative adversarial networks, Symmetry 12, 10.3390/sym12111889 (2020). * [12] National Institute for Materials Science, SuperCon database. * Saal _et al._ [2013] J. E. Saal, S. Kirklin, M. Aykol, B. Meredig, and C.
Wolverton, Materials design and discovery with high-throughput density functional theory: The open quantum materials database (OQMD), JOM 65, 1501 (2013). * Kirklin _et al._ [2015] S. Kirklin, J. E. Saal, B. Meredig, A. Thompson, J. W. Doak, M. Aykol, S. Rühl, and C. Wolverton, The open quantum materials database (OQMD): assessing the accuracy of dft formation energies, npj Computational Materials 1, 15010 (2015). * Roter _et al._ [2022] B. Roter, N. Ninkovic, and S. Dordevic, Clustering superconductors using unsupervised machine learning, Physica C: Superconductivity and its Applications 598, 1354078 (2022). * Gulrajani _et al._ [2017] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, Improved training of wasserstein gans, Advances in neural information processing systems 30 (2017). * Hinton [2012] G. Hinton, Neural networks for machine learning: Lecture 6a overview of mini-batch gradient descent (2012). * Kingma and Ba [2014] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014). * Davies _et al._ [2016] D. Davies, K. Butler, A. Jackson, A. Morris, J. Frost, J. Skelton, and A. Walsh, Computational screening of all stoichiometric inorganic materials, Chem 1, 617 (2016). * Jha _et al._ [2018] D. Jha, L. Ward, A. Paul, W.-k. Liao, A. Choudhary, C. Wolverton, and A. Agrawal, Elemnet: Deep learning the chemistry of materials from only elemental composition, Scientific Reports 8, 17593 (2018). * van der Maaten and Hinton [2008] L. van der Maaten and G. Hinton, Visualizing data using t-SNE, Journal of Machine Learning Research 9, 2579 (2008). * Vaitkus _et al._ [2021] A. Vaitkus, A. Merkys, and S. Gražulis, Validation of the Crystallography Open Database using the Crystallographic Information Framework, Journal of Applied Crystallography 54, 661 (2021). * Kim _et al._ [2021] Y. Kim, Y. Kim, C. Yang, K. Park, G. X. Gu, and S. Ryu, Deep learning framework for material design space exploration using active transfer learning and data augmentation, npj Computational Materials 7, 140 (2021).
# Screened plasmons of graphene near a perfect electric conductor Afshin Moradi1<EMAIL_ADDRESS>Nurhan Türker Tokan2<EMAIL_ADDRESS>_1 Department of Engineering Physics, Kermanshah University of Technology, Kermanshah, Iran 2Department of Electronic and Communications Engineering, Yildiz Technical University, Istanbul, Turkey _ ###### Abstract Screened plasmon properties of graphene near a perfect electric conductor are investigated using classical electrodynamics and a linearized hydrodynamic model that includes Fermi correction. A general expression for the dispersion relation of the mentioned screened plasmonic waves is given and illustrated graphically. The result indicates that for realistic wavenumbers, the dispersion relation of plasmonic waves of isolated graphene is almost unaffected by the Fermi correction, while this correction is an important factor for the screened plasmons of graphene near a perfect electric conductor, where it increases the frequency of surface waves. The results show that near the graphene neutrality point, the surface wave has a linear dispersion with a universal speed close to $v_{\mathrm{F}}/\sqrt{2}$. Such linear dispersion for surface waves (also known as energy waves) appears to be a common occurrence when a splitting of plasma frequencies occurs, e.g. in the electron-hole plasma of graphene [W. Zhao et al., Nature 614, 688 (2023)]. Furthermore, analytical expressions for the energy parameters (the power flow, energy density, and energy velocity) of screened plasmons of the system are derived. Also, the analytical expressions are derived and analyzed for the damping function and surface plasmon and electromagnetic field strength functions of surface waves of the system with small intrinsic damping. ## I Introduction Graphene is a two-dimensional (2D) material consisting of carbon atoms arranged in a hexagonal lattice, which was discovered by Novoselov et al. K.S.N666 in 2004. Graphene has electrons that behave like massless Dirac fluid B.W318 ; E.H.H205418 , and therefore extraordinary properties can be observed in this 2D material. For example, graphene has carriers (i.e., electrons and holes) with extremely high mobility. Also, graphene supports the propagation of surface plasmon polariton (SPP) in the region from infrared to THz frequencies S.A.M016803 ; X.L351 ; P.A.D.G . Furthermore, the low loss of SPPs of graphene up to mid-infrared frequencies also makes it a promising alternative for future applications X.L351 . The most important advantage of graphene in the plasmonics world is the tunability of surface plasmons because the density of carriers in graphene can be easily adjusted with doping and an electric gate. The conductivity characteristic of graphene M.M1052 and graphene’s plasmonic properties A.N.G749 can be well explained by the hydrodynamic model derived by Müller et al. M.M025301 in the long-wavelength limit, i.e., $k\ll k_{\mathrm{F}}$, where $k_{\mathrm{F}}$ is the Fermi wavenumber in doped graphene and $k$ is the wavenumber of the plasmonic wave. Chaves et al. A.J.C195438 investigated the excitation of plasmonic waves of graphene in the presence of a fast-moving charge using the hydrodynamic model in the electrostatic approximation. Ferreira et al. B.A.F033817 performed the quantization of graphene plasmons using the hydrodynamic model in the absence of losses for three graphene-based structures, i.e., a monolayer graphene, a bilayer graphene, and a graphene near a perfect electric conductor (PEC). 
The hydrodynamic Dirac fluid also shows interesting collective excitations, such as hydrodynamic bipolar plasmon polaritons that exhibit a coupled collective excitation of electromagnetic and electron-hole oscillations with the opposite motion of electrons and holes, and energy wave (also known as the demon mode), which is a quasi-acoustic mode in which the motion of relativistic electrons and holes are in the same direction D.S083715 ; A.L245153 ; Z.S3285 ; A.L053001 ; D.S121405 ; A.L115449 ; I.T144307 ; B.N.N167979 ; E.I.K245434 ; J.D023036 ; D.F941 ; B.N115402 . Importantly, more recently Zhao et al. W.Z688 observed both hydrodynamic plasmons and the hydrodynamic energy waves of Dirac fluid based on new on-chip THz spectroscopy techniques, where the report by Zhao et al. may reveal new opportunities to study the collective hydrodynamic excitations in graphene- based materials. Figure 1: (a) Side view of the system under study. A monolayer graphene near a PEC. Monolayer graphene and PEC are separated from each other by a dielectric medium of thickness $d$ and dielectric constant $\varepsilon_{1}$, while the region $z>d$ is a semi-infinite dielectric with the dielectric constant $\varepsilon_{2}$. (b) Snapshot of the electric-field pattern of screened $p$-polarized plasmon of graphene near a PEC in the $xz$ plane that shows the tangential component of the electric field vanishes at $z=0$. The magnitude of the electric field is oscillating in the $x$-direction but decreases away from the boundary. For the attenuation of the electric field, it can be found that the rate of hyperbolic sine decaying to zero is much faster than that of the exponential function. (c) Snapshot of the Poynting-vector pattern close to the boundary. The power flows in the $0<z<d$ region and in the $z>d$ region are in the same direction, giving a net power flow to the right for a positive value of wavenumber $k$. One can see that $x$-component of the Poynting vector $S_{x}$ is maximum at the dielectric-PEC boundary and the Poynting vector is everywhere orthogonal to the electric field, as expected on physical grounds. One of the main advantages of the hydrodynamic model for the study of plasmonic waves of graphene is the possibility to include nonlocal and quantum effects in its plasmonic response without a high computational burden I.S.E133104 . Note that the condition $kc/(k_{\mathrm{F}}v_{\mathrm{F}})\gg 1$ must be satisfied for nonlocality to play an important role in the optical spectrum of graphene, where $v_{\mathrm{F}}$ is the Fermi speed in doped graphene, and $c$ is the speed of light in free space. Using the hydrodynamic model, we analyzed the characteristics of energy density and power flow of the $p$-polarized plasmonic waves of monolayer A.M63 ; A.M043103 ; A.M072114 , bilayer graphene A.M135 , and graphene on a conducting substrate A.M353 . However, to the best of our knowledge, no explicit calculation can be found for the energy behaviors of screened plasmons of graphene near a PEC. We note that the plasmonic properties of graphene near a PEC may present some new behaviors that make it appropriate for applications in the mid-infrared to a few THz range of frequencies. For instance, the results by Gu et al. X.G071103 show that graphene near a PEC leads to additional field localization near graphene and may increase the amplification factor of plasmons, as shown by Morozov et al. M.Y.M40 . 
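As a quick numerical illustration of this nonlocality criterion, the following back-of-the-envelope check in Python is our own; the values $E_{\mathrm{F}}=0.4$ eV and $k\approx 200\,\mu\mathrm{m}^{-1}$ are the ones quoted later in Sec. II.1:

```python
import numpy as np

hbar, c, eV = 1.054571817e-34, 2.99792458e8, 1.602176634e-19
v_F = c / 300.0             # Fermi speed in doped graphene

E_F = 0.4 * eV              # Fermi energy (example value from Sec. II.1)
k = 200e6                   # plasmon wavenumber ~200 um^-1 (Sec. II.1 example)
k_F = E_F / (hbar * v_F)    # from E_F = hbar * v_F * k_F

print(k * c / (k_F * v_F))  # ~1e2 >> 1: strongly nonlocal regime
```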
Therefore, in the present work, we wish to investigate the screened plasmon properties of graphene near a PEC for future applications. In this way, by using the linearized hydrodynamic model that includes Fermi correction A.J.C195438 and classical electrodynamic formulations, we derive the general expressions for the dispersion relation, power flow, energy density, energy (group) velocity, damping function, and surface plasmon and electromagnetic field strength functions of the $p$-polarized surface waves of the system under investigation. ## II Theory The side view of the system under study, i.e., a monolayer graphene near a PEC is shown in panel (a) of Fig. 1 in a Cartesian coordinate system with coordinates $(x,y,z)$. Note that the metal, typically gold or silver, may be modeled as a PEC for the frequency range of interest, i.e., from GHz to a few THz A.M2023 ; M.M.B494 . The monolayer graphene and PEC are separated by a dielectric of thickness $d$ and dielectric constant $\varepsilon_{1}$, whereas the region $z>d$ is assumed to be a semi-infinite dielectric with the dielectric constant $\varepsilon_{2}$. The electronic behavior of the graphene layer is modeled as a 2D massless Dirac electron fluid and we assume that the equilibrium doping density of electrons in graphene is $n$. Now, let us consider the propagation of a plasmon polariton that is a $p$-polarized surface electromagnetic wave having $E_{x}$, $H_{y}$, and $E_{z}$ components along the $x$-direction at the boundary between two dielectric media. In this way, the homogeneous 2D massless Dirac electron fluid will be perturbed and can be regarded as a charged fluid with the first-order perturbed values of the electron fluid density per unit area $n_{\mathrm{g}}(x,t)$ and electric current density flowing on the graphene surface $\textbf{J}(x,t)=j_{x}(x,t)\textbf{e}_{x}$, where $\textbf{e}_{x}$ being the unit vector along the $x$-axis. We assume that all physical quantities vary as $e^{i(kx-\omega t)}$, where $k$ is the wavenumber (propagation constant) of the wave. Based on the linear Drude model with Fermi correction in the limit of large wavelengths, covering a range of frequencies from the mid-infrared to the THz, the electronic excitations on a doped graphene surface can be described by the following set of hydrodynamic equations A.J.C195438 ; A.S.P195437 Figure 2: The dimensionless screened plasmon-polariton frequencies $\Omega=\omega/\omega_{0}$ of graphene near a PEC, as a function of $K=k/k_{0}$, for different values of the parameter $D=k_{0}d$, when $\varepsilon_{1}=\varepsilon_{\mathrm{SiO_{2}}}=3.9$, $\varepsilon_{2}=1$, and $n=n_{0}$ that means $k_{0}=k_{\mathrm{F}}$, and $\omega_{0}=\omega_{\mathrm{F}}$. The different panels refer to (a) $d\rightarrow\infty$, (b) $d=1/k_{\mathrm{F}}$, (c) $d=0.5/k_{\mathrm{F}}$, and (d) $d=0.1/k_{\mathrm{F}}$. In each panel, the dashed blue line, and red line correspond to $s=0$ and $s\neq 0$, respectively. The dashed black line marks a linear dispersion with a velocity of $V_{\mathrm{s}}=v_{\mathrm{s}}/v_{\mathrm{F}}=1/\sqrt{2}$. Figure 3: The dimensionless screened plasmon-polariton frequencies $\Omega=\omega/\omega_{0}$ of graphene near a PEC, as a function of $K=k/k_{0}$, for different values of the parameter $k_{\mathrm{F}}=\sqrt{\pi n}$, when $\varepsilon_{1}=\varepsilon_{\mathrm{SiO_{2}}}=3.9$, $\varepsilon_{2}=1$, and $d=0.8/k_{0}$. 
The different panels refer to (a) $k_{\mathrm{F}}=k_{0}$, (b) $k_{\mathrm{F}}=0.7k_{0}$, (c) $k_{\mathrm{F}}=0.4k_{0}$, and (d) $k_{\mathrm{F}}=0.1k_{0}$. In each panel, the dashed blue line and the red line correspond to $s=0$ and $s\neq 0$, respectively. The dashed black line marks a linear dispersion with a velocity of $V_{\mathrm{s}}=v_{\mathrm{s}}/v_{\mathrm{F}}=1/\sqrt{2}$. $-e\partial_{t}n_{\mathrm{g}}(x,t)+\partial_{x}j_{x}(x,t)=0\;,$ (1) $\partial_{t}j_{x}(x,t)=\frac{D_{\mathrm{g}}}{\pi}E_{x}\big{|}_{z=d}+ev_{\mathrm{s}}^{2}\partial_{x}n_{\mathrm{g}}(x,t)\;,$ (2) where $e$ is the electron charge, $D_{\mathrm{g}}=(e^{2}/\hbar^{2})E_{\mathrm{F}}$ is the Drude weight of graphene, $E_{\mathrm{F}}=\hbar\omega_{\mathrm{F}}$ (with $\omega_{\mathrm{F}}=v_{\mathrm{F}}k_{\mathrm{F}}$, and $\hbar=h/2\pi$, where $h$ is the Planck constant) is the Fermi energy, $k_{\mathrm{F}}=\sqrt{\pi n}$ is the Fermi wavenumber, and $v_{\mathrm{F}}\approx c/300$ is the Fermi speed in doped graphene, as mentioned before. On the right-hand side of Eq. (2), the first term is the force on electrons due to the tangential component of the electric field, evaluated at the graphene surface $z=d$, and the second term represents the Fermi pressure in the 2D electron gas, with $v_{\mathrm{s}}=v_{\mathrm{F}}/\sqrt{2}$ being the energy sound speed of Dirac electrons in graphene W.Z688 . Note that Eqs. (1) and (2) provide an adequate description of the low-energy intraband electronic transitions in doped graphene in the optical limit, specifically, for the wavenumbers $k\ll k_{\mathrm{F}}$. Now, by eliminating the induced density $n_{\mathrm{g}}$ from Eqs. (1) and (2), and applying $j_{x}=\sigma_{\mathrm{g}}E_{x}$ (where $\sigma_{\mathrm{g}}$ is the conductivity of graphene), we find G.A.M1150 $\sigma_{\mathrm{g}}(k,\omega)=\dfrac{i}{\pi}\dfrac{\omega D_{\mathrm{g}}}{\omega^{2}-v_{\mathrm{s}}^{2}k^{2}}\;.$ (3) In order to determine the plasmonic properties of the system, we look for an evanescent $p$-polarized wave described by an electric field of the form A.M $\textbf{E}(x,z)=\left[\textbf{e}_{x}E_{x}+\textbf{e}_{z}E_{z}\right]e^{ikx}$. Note that we have considered $\partial\textbf{E}/\partial y=0$, since the $x$- and $y$-directions are equivalent. The associated magnetic field has the form $\textbf{H}(x,z)=\textbf{e}_{y}H_{y}e^{ikx}$, where we have omitted writing explicitly a factor $\exp(-i\omega t)$ describing the time-dependence of the wave. Note that $H_{y}$ and $E_{z}$ can be determined if the non-zero longitudinal component $E_{x}$ is known. The Helmholtz equation for the $x$-component of the electric field, $E_{x}$, of the $p$-polarized surface wave can be given by $\left[\dfrac{d^{2}}{dz^{2}}-\kappa_{\ell}^{2}\right]E_{x}(z)=0\;,$ (4) where $\kappa_{\ell}=\left[k^{2}-\varepsilon_{\ell}k_{0}^{2}\right]^{1/2}$ with $\ell=1,2$ denotes the attenuation constant in the regions $0<z<d$ and $z>d$, and $k_{0}=\omega/c$ is the free-space wavenumber (not to be confused with the normalization wavenumber $k_{0}=\sqrt{\pi n_{0}}$ introduced in Sec. II.1). To solve Eq. (4), we have to provide appropriate boundary conditions. With the electric conductivity of graphene, these boundary conditions at the surface $z=d$ can be written as $E_{x}\big{|}_{z=d+}=E_{x}\big{|}_{z=d-}\;,$ (5) $H_{y}(z)|_{z=d+}-H_{y}(z)|_{z=d-}=-\sigma_{\mathrm{g}}E_{x}\big{|}_{z=d}\;,$ (6) which express the continuity and discontinuity of the tangential components of the electric and magnetic fields, respectively, across the surface $z=d$. 
Also, the boundary condition satisfied by $E_{x}(z)$ at the surface $z=0$ is $E_{x}\big{|}_{z=0}=0\;,$ (7) that implies the tangential component of the electric field should vanish at $z=0$ as can be seen in panel (b) of Fig. 1. With the above equations and also using $E_{z}=-(ik/\kappa_{\ell}^{2})\partial E_{x}/\partial z$, and $H_{y}=(i\omega\varepsilon_{0}\varepsilon_{\ell}/\kappa_{\ell}^{2})\partial E_{x}/\partial z$, we will investigate the screened plasmon-polariton properties of graphene near a PEC in the following sections. ### II.1 Dispersion relation We note that the surface field should decay for $z\rightarrow\infty$. Also, the presence of a PEC at $z=0$ implies that $E_{x}(z=0)=0$. Therefore, the appropriate expressions for $E_{x}(z)$, are as follows: $E_{x}(z)=\left\\{\begin{array}[]{clcr}A_{-}\sinh\kappa_{1}z\;,&\mbox{$0\leq z\leq d$\;.}\\\ A_{+}e^{-\kappa_{2}z}\;,&\mbox{$d\leq z$\;,}\\\ \end{array}\right.$ (8) where the relations between the coefficients $A_{+}$ and $A_{-}$ can be determined from the matching boundary conditions at the graphene surface, i.e., $z=d$. Use of Eqs. (4) and (8) in the boundary conditions (5) and (6) yields the condition that $\dfrac{\varepsilon_{1}}{\kappa_{1}}\coth\kappa_{1}d+\dfrac{\varepsilon_{2}}{\kappa_{2}}+\dfrac{i\sigma_{\mathrm{g}}}{\omega\varepsilon_{0}}=0\;.$ (9) The roots of this transcendental equation, which can only be solved numerically, provide the dispersion relation of the screened plasmon- polaritons of graphene near a PEC. This dispersion relation has two tuning parameters: the graphene-PEC distance $d$, and the graphene sheet carrier density $n$, which controls the conductivity of graphene. When $d$ is very large such that $\coth\kappa_{1}d\approx 1$, Eq. (9) reduces to the dispersion relation for the surface plasmon-polaritons supported by isolated monolayer graphene P.A.D.G ; A.M . Let us note that the dispersion relation for the screened plasmon-polaritons of graphene near a PEC coincides with the quasi-acoustic plasmon-polaritons in symmetric bilayer graphene, provided $d=d_{\mathrm{bilayer}}/2$, where $d_{\mathrm{bilayer}}$ is the interlayer distance in the bilayer graphene. This fact can be understood in terms of image charges. That is why the screened plasmon-polariton introduced here is also called quasi-acoustic plasmon-polariton. Note that such a quasi-acoustic mode seems to be a common occurrence when a splitting of plasma frequencies happens due to the electrostatic interaction, e.g., in the electron-hole plasma of graphene D.S083715 ; W.Z688 , or in the coupling between the interlayer in the bilayer graphene A.M135 . If we neglect the retardation effects, i.e., $k\gg k_{0}$, from Eq. (9) we find the dispersion relation of the screened plasmons of graphene near a PEC, as $\omega=\left[v_{\mathrm{s}}^{2}k^{2}+\dfrac{D_{\mathrm{g}}}{\pi\varepsilon_{0}}\dfrac{k}{\varepsilon_{1}\coth kd+\varepsilon_{2}}\right]^{1/2}\;.$ (10) From Eq. (10), we can distinguish two different dimensionality regimes depending on two cases of $kd\gg 1$ and $kd\ll 1$. For $kd\gg 1$, where graphene and PEC decouple, we may use the asymptotic expression $\coth kd\approx 1$. 
Thus, the dispersion relation can be written as $\omega=\left[v_{\mathrm{s}}^{2}k^{2}+\dfrac{D_{\mathrm{g}}}{\pi\varepsilon_{0}\left(\varepsilon_{1}+\varepsilon_{2}\right)}k\right]^{1/2}\approx\sqrt{\dfrac{D_{\mathrm{g}}}{\pi\varepsilon_{0}\left(\varepsilon_{1}+\varepsilon_{2}\right)}k}\;.$ (11) which is exactly the same as the well-known dispersion relation of the surface plasmons of isolated graphene in free space, when $\varepsilon_{1}=1=\varepsilon_{2}$ A.J.C195438 , with the approximate result valid for realistic ($k\ll k_{\mathrm{F}}$ ) wavenumbers. This result means that the dispersion relation of an isolated graphene is almost unaffected by the Fermi correction, i.e., the internal pressure force of the electron [the term with $v_{\mathrm{s}}^{2}$ in Eq. (3)]. On the other hand, for $kd\ll 1$ we may use the asymptotic expression $\coth kd\approx 1/kd$. Thus, the dispersion relation can be written as $\omega=\left[v_{\mathrm{s}}^{2}+\dfrac{D_{\mathrm{g}}}{\pi\varepsilon_{0}}\dfrac{d}{\varepsilon_{1}+\varepsilon_{2}kd}\right]^{1/2}k\approx\left[v_{\mathrm{s}}^{2}+\dfrac{D_{\mathrm{g}}d}{\pi\varepsilon_{0}\varepsilon_{1}}\right]^{1/2}k\;.$ (12) As a new interesting result, it is clear that in this case, the internal pressure force of the electron is an important term and increases the frequency of surface plasmons. In fact, for graphene near a PEC, the dispersion is strongly dependent on $d$. We note that Chaves et al. A.J.C195438 showed that for $d$ about $1.5$nm, the screened plasmons of graphene near a PEC can appear in the mid-infrared with a wavenumber of the order of $200\mu$m-1 (corresponding to a $\lambda_{\mathrm{spp}}=2\pi/k\approx 30$nm). For a Fermi energy of graphene about $E_{\mathrm{F}}=0.4$eV, we find $kc/(k_{\mathrm{F}}v_{\mathrm{F}})\sim 100$, which places graphene in the strong nonlocal regime. Also, the phase and group velocities of screened plasmons of the system can be obtained from Eq. (10). For the phase velocity, we have $v_{\mathrm{phase}}=\left[v_{\mathrm{s}}^{2}+\dfrac{D_{\mathrm{g}}}{\pi\varepsilon_{0}k}\dfrac{1}{\varepsilon_{1}\coth kd+\varepsilon_{2}}\right]^{1/2}\;,$ (13) while for the group velocity by derivation of Eq. (10) with respect to $\omega$, we find $v_{\mathrm{group}}=\dfrac{v_{\mathrm{s}}^{2}k+\dfrac{D_{\mathrm{g}}}{2\pi\varepsilon_{0}}\dfrac{\varepsilon_{1}\coth kd+\varepsilon_{2}+\varepsilon_{1}\dfrac{kd}{\sinh^{2}kd}}{\left[\varepsilon_{1}\coth kd+\varepsilon_{2}\right]^{2}}}{\left[v_{\mathrm{s}}^{2}k^{2}+\dfrac{D_{\mathrm{g}}}{\pi\varepsilon_{0}}\dfrac{k}{\varepsilon_{1}\coth kd+\varepsilon_{2}}\right]^{1/2}}\;.$ (14) To see clearly the character of the dispersion relation for the screened plasmon-polariton of graphene near a PEC, first let us introduce the dimensionless variables $K=k/k_{0},\Omega=\omega/\omega_{0}$, $V_{\mathrm{s}}=v_{\mathrm{s}}/v_{\mathrm{F}}=1/\sqrt{2}$, $V_{\mathrm{c}}=c/v_{\mathrm{F}}\approx 300$, $D=k_{0}d$, where $k_{0}=\sqrt{\pi n_{0}}$, and $\omega_{0}=v_{\mathrm{F}}k_{0}$. Note that for $n_{0}=n$, we have $k_{0}=k_{\mathrm{F}}$ and $\omega_{0}=\omega_{\mathrm{F}}$. Now, in Fig. 2, we show the dependence of the dimensionless frequency $\Omega=\omega/\omega_{0}$ on the dimensionless variable $K=k/k_{0}$, for different values of the parameter $D=k_{0}d$, when $\varepsilon_{1}=\varepsilon_{\mathrm{SiO_{2}}}=3.9$, $\varepsilon_{2}=1$, and $n=n_{0}$ that means $k_{0}=k_{\mathrm{F}}$, and $\omega_{0}=\omega_{\mathrm{F}}$. 
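These curves follow directly from Eqs. (9) and (10). Below is a minimal numerical sketch (our own illustration, not the authors' code); the values of $E_{\mathrm{F}}$ and $d$, and the root-finding bracket, are our assumptions chosen to match the examples in the text. It solves the transcendental Eq. (9) with a standard bracketing root finder, using the lossless $\sigma_{\mathrm{g}}$ of Eq. (3), which makes the left-hand side real between the sound line and the light line, and compares the result with the nonretarded closed form of Eq. (10):

```python
import numpy as np
from scipy.optimize import brentq

# SI constants and text parameters (eps1 = 3.9 for SiO2, eps2 = 1)
e, hbar, c = 1.602176634e-19, 1.054571817e-34, 2.99792458e8
eps0 = 8.8541878128e-12
v_F = c / 300.0
v_s = v_F / np.sqrt(2)
eps1, eps2 = 3.9, 1.0
E_F = 0.4 * e                       # assumed Fermi energy (text example)
D_g = (e**2 / hbar**2) * E_F        # Drude weight of graphene
d = 1.5e-9                          # assumed spacer thickness (text example)

def lhs_eq9(omega, k):
    """Left-hand side of Eq. (9); real for the lossless sigma_g of Eq. (3),
    since i*sigma_g/(omega*eps0) = -D_g / (pi*eps0*(omega^2 - v_s^2 k^2))."""
    kap1 = np.sqrt(k**2 - eps1 * (omega / c) ** 2)
    kap2 = np.sqrt(k**2 - eps2 * (omega / c) ** 2)
    return (eps1 / (kap1 * np.tanh(kap1 * d)) + eps2 / kap2
            - D_g / (np.pi * eps0 * (omega**2 - v_s**2 * k**2)))

def omega_eq9(k):
    """Root of Eq. (9), bracketed between the sound line and the light line."""
    return brentq(lhs_eq9, v_s * k * (1 + 1e-9),
                  0.999 * k * c / np.sqrt(eps1), args=(k,))

def omega_eq10(k):
    """Nonretarded closed form, Eq. (10)."""
    return np.sqrt(v_s**2 * k**2
                   + D_g / (np.pi * eps0) * k / (eps1 / np.tanh(k * d) + eps2))

k = 2.0e8                            # ~200 um^-1, the mid-infrared example
print(omega_eq9(k) / omega_eq10(k))  # ~1: retardation is negligible here
```

For this example the ratio is close to unity, confirming that retardation can safely be neglected at these wavenumbers.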
One can see that the behavior of the screened plasmon-polariton depends on the value of $d$, where the decreasing thickness of the spacer layer $d$, red-shifts the frequency of the surface wave. More importantly, it can be seen that with the decreasing thickness of the spacer layer $d$, internal interaction force plays an important role in the dispersion relation of the surface wave for $k\ll k_{\mathrm{F}}$. We observe that in the presence of the nonlocal effects, screened plasmon- polariton of the system has a phase velocity that can be made arbitrarily close to the energy sound speed of Dirac electrons in graphene by tuning the spacer layer $d$. Furthermore, from Fig. 2, we observe for the constant values of frequency and graphene sheet carrier density, the wavelength of screened plasmon-polariton considerably decreases when the PEC draws near to graphene, as can be easily concluded from Eq. (12). The effect of the graphene sheet carrier density $n$ on the dispersion relation for the screened plasmon-polariton of graphene near a PEC is shown in Fig. 3. It can be seen that as the carrier density decreases to lower values, the surface wave frequency decreases. It is clear that at extremely low charge density, the surface wave has a velocity close to $v_{\mathrm{F}}/\sqrt{2}$. This means that near the graphene neutrality point, the surface wave has a linear dispersion with a universal speed close to $v_{\mathrm{F}}/\sqrt{2}$ W.Z688 . One may conclude from panel (d) of Fig. 3 that near the neutral point of graphene, the local model shows an incorrect result. Also, from Fig. 3, it is clear that for the constant values of $\omega$ and $d$, the wavelength of screened plasmon-polariton considerably decreases with a decrease in the graphene sheet carrier density, as can be easily seen from Eq. (12). ### II.2 Power flow For the power flow density associated with a surface wave of graphene near a PEC, we have, in the three media, $\textbf{S}=\left\\{\begin{array}[]{clcr}\textbf{E}_{-}\times\textbf{H}_{-}\;,&\mbox{$0<z<d$\;,}\\\ \textbf{E}_{+}\times\textbf{H}_{+}\;,&\mbox{$d<z$\;,}\\\ \end{array}\right.$ (15) where subscripts $-$ and $+$ denote the regions below and above the graphene layer, and for $z=0$ using Eq. (A-4) in Appendix, we have $S_{\mathrm{g}x}=-\frac{\pi e}{D_{\mathrm{g}}}v_{\mathrm{s}}^{2}n_{\mathrm{g}}j_{x}$. After the elimination of $j_{x}=\sigma_{\mathrm{g}}E_{x}$ and $n_{\mathrm{g}}=-(k/e\omega)j_{x}$, and also using $A_{+}=A_{-}\sinh(\kappa_{1}d)\exp(\kappa_{2}d)$, the cycle-averaged $x$-components of S in Eq. (15), and also on the surface of graphene can be written as $S_{x}=\dfrac{\varepsilon_{0}k\omega}{2}A_{-}^{2}\left\\{\begin{array}[]{clcr}\dfrac{\varepsilon_{1}}{\kappa_{1}^{2}}\cosh^{2}\kappa_{1}z\;,&\mbox{$0<z<d$\;,}\\\ \dfrac{\varepsilon_{2}}{\kappa_{2}^{2}}\sinh^{2}\kappa_{1}d\;e^{-2\kappa_{2}(z-d)}\;,&\mbox{$d<z$\;,}\\\ \end{array}\right.$ (16) $S_{\mathrm{g}x}=\dfrac{\varepsilon_{0}k\omega}{2}A_{-}^{2}\frac{\pi}{\varepsilon_{0}D_{\mathrm{g}}\omega^{2}}v_{\mathrm{s}}^{2}|\sigma_{\mathrm{g}}|^{2}\sinh^{2}\kappa_{1}d\;.\ $ (17) The total power flow density associated with the $p$-polarized surface wave is determined by integration over $z$. 
The power flow through an area in the $yz$ plane of infinite length in the $z$-direction and unit width in the $y$-direction is $\left\langle S_{x}\right\rangle=\dfrac{\varepsilon_{0}k\omega}{4}\sinh^{2}\kappa_{1}d\;A_{-}^{2}\left[\dfrac{\varepsilon_{1}}{\kappa_{1}^{2}}\dfrac{d}{\sinh^{2}\kappa_{1}d}\right.\\\ \qquad\left.+\dfrac{\varepsilon_{1}}{\kappa_{1}^{3}}\coth\kappa_{1}d+\dfrac{\varepsilon_{2}}{\kappa_{2}^{3}}+\dfrac{2\pi}{\varepsilon_{0}D_{\mathrm{g}}\omega^{2}}v_{\mathrm{s}}^{2}|\sigma_{\mathrm{g}}|^{2}\right]\;,$ (18) where $\left\langle\cdots\right\rangle\equiv\int_{0}^{+\infty}\cdots dz$. This total power flow density (per unit width) is positive, as can be seen in panel (c) of Fig. 1. ### II.3 Energy distribution For the cycle-averaged energy density distribution associated with a surface wave of graphene near a PEC, we have, $U=\dfrac{1}{4}\left\\{\begin{array}[]{clcr}\varepsilon_{0}\varepsilon_{1}|\textbf{E}_{-}|^{2}+\mu_{0}|\textbf{H}_{-}|^{2}\;,&\mbox{$0<z<d$\;,}\\\ \varepsilon_{0}\varepsilon_{2}|\textbf{E}_{+}|^{2}+\mu_{0}|\textbf{H}_{+}|^{2}\;,&\mbox{$d<z$\;,}\\\ \end{array}\right.$ (19) where for $z=0$ using Eq. (A-3) in Appendix, we have $U_{\mathrm{g}}=\frac{1}{4}\frac{\pi}{D_{\mathrm{g}}}\left(|j_{x}|^{2}+e^{2}v_{\mathrm{s}}^{2}|n_{\mathrm{g}}|^{2}\right)\;.$ (20) Then Eqs. (19) and (20) yield, $U=\dfrac{\varepsilon_{0}}{4}A_{-}^{2}\left\\{\begin{array}[]{clcr}\varepsilon_{1}\left[\sinh^{2}\kappa_{1}z+\dfrac{k^{2}+\varepsilon_{1}\frac{\omega^{2}}{c^{2}}}{\kappa_{1}^{2}}\cosh^{2}\kappa_{1}z\right]\;,&\mbox{$z<d$\;,}\\\ 2\varepsilon_{2}\dfrac{k^{2}}{\kappa_{2}^{2}}\sinh^{2}\kappa_{1}d\;e^{-2\kappa_{2}(z-d)}\;,&\mbox{$z>d$\;,}\\\ \end{array}\right.$ (21) $U_{\mathrm{g}}=\dfrac{1}{4}A_{-}^{2}\frac{\pi}{D_{\mathrm{g}}}\frac{\omega^{2}+v_{\mathrm{s}}^{2}k^{2}}{\omega^{2}}|\sigma_{\mathrm{g}}|^{2}\sinh^{2}\kappa_{1}d\;,$ (22) where all contributions to the energy density are positive. The total energy density associated with the screened plasmon-polaritons of graphene near a PEC is again determined by integration over $z$, the energy per unit surface area is Figure 4: The wavenumber dependence of dimensionless relaxation rate of the long-wavelength screened plasmon-polaritons of graphene near a PEC as a function of $K=k/k_{0}$, when $\varepsilon_{1}=\varepsilon_{\mathrm{SiO_{2}}}=3.9$, $\varepsilon_{2}=1$. (a) For different values of the parameter $D=k_{0}d$, when $n=n_{0}$. (b) For different values of the parameter $k_{\mathrm{F}}=\sqrt{\pi n}$, when $D=5$. Figure 5: The wavenumber dependence of the strength functions $\Theta_{\mathrm{sp}}$ and $\Theta_{\mathrm{ph}}$ of the screened plasmon- polaritons of graphene near a PEC, as a function of $K=k/k_{0}$, when $\varepsilon_{1}=\varepsilon_{\mathrm{SiO_{2}}}=3.9$, $\varepsilon_{2}=1$ for different values of the parameter $D=k_{0}d$, when $n=n_{0}$. $\left\langle U\right\rangle=\dfrac{\varepsilon_{0}}{4}A_{-}^{2}\sinh^{2}\kappa_{1}d\left[\frac{\varepsilon_{1}^{2}}{\kappa_{1}^{2}}\dfrac{\omega^{2}}{c^{2}}\dfrac{d}{\sinh^{2}\kappa_{1}d}+k^{2}\dfrac{\varepsilon_{1}}{\kappa_{1}^{3}}\coth\kappa_{1}d\right.\\\ \qquad\left.+k^{2}\dfrac{\varepsilon_{2}}{\kappa_{2}^{3}}+\frac{\pi}{\varepsilon_{0}D_{\mathrm{g}}}\frac{\omega^{2}+v_{\mathrm{s}}^{2}k^{2}}{\omega^{2}}|\sigma_{\mathrm{g}}|^{2}\right]\;.$ (23) Let us note that in a recent work, Morozov and Popov M.Y.M22209 prepared a concept of a terahertz waveguide plasmon amplifier based on a metal groove with active graphene. 
In this way, they used concepts of plasmon energy to gain insight into the physical origins of terahertz waveguide plasmon amplification. However, their results do not include the contribution of the energy density on the graphene surface, given here by Eq. (22). Fortunately, there is an easy way to check such results. In fact, we should find the group and energy velocities of the wave under consideration, when losses are neglected. If the group velocity is the same as the energy velocity, then the obtained energy formulas are correct; we check this important point for our results in the following subsection. ### II.4 Energy velocity The energy velocity of a surface wave of graphene near a PEC is given as the ratio of the total power flow density (per unit width) to the total energy density (per unit area), namely $v_{\mathrm{energy}}=\omega k\,\dfrac{\frac{\varepsilon_{1}}{\kappa_{1}^{2}}\frac{d}{\sinh^{2}\kappa_{1}d}+\frac{\varepsilon_{1}}{\kappa_{1}^{3}}\coth\kappa_{1}d+\frac{\varepsilon_{2}}{\kappa_{2}^{3}}+\frac{2\pi}{\varepsilon_{0}D_{\mathrm{g}}\omega^{2}}v_{\mathrm{s}}^{2}|\sigma_{\mathrm{g}}|^{2}}{\frac{\varepsilon_{1}^{2}}{\kappa_{1}^{2}}\frac{\omega^{2}}{c^{2}}\frac{d}{\sinh^{2}\kappa_{1}d}+k^{2}\frac{\varepsilon_{1}}{\kappa_{1}^{3}}\coth\kappa_{1}d+k^{2}\frac{\varepsilon_{2}}{\kappa_{2}^{3}}+\frac{\pi}{\varepsilon_{0}D_{\mathrm{g}}}\frac{\omega^{2}+v_{\mathrm{s}}^{2}k^{2}}{\omega^{2}}|\sigma_{\mathrm{g}}|^{2}}\;.$ (24) If we neglect the retardation effects, i.e., $k\gg k_{0}$, from Eq. (24) we find $v_{\mathrm{energy}}=\dfrac{v_{\mathrm{s}}^{2}k+\dfrac{D_{\mathrm{g}}}{2\pi\varepsilon_{0}}\dfrac{\varepsilon_{1}\coth kd+\varepsilon_{2}+\varepsilon_{1}\dfrac{kd}{\sinh^{2}kd}}{\left[\varepsilon_{1}\coth kd+\varepsilon_{2}\right]^{2}}}{\left[v_{\mathrm{s}}^{2}k^{2}+\dfrac{D_{\mathrm{g}}}{\pi\varepsilon_{0}}\dfrac{k}{\varepsilon_{1}\coth kd+\varepsilon_{2}}\right]^{1/2}}\;,$ (25) which is identical to the group velocity obtained in Eq. (14). This equality confirms the correctness of the presented results. In fact, in general, the group velocity is equal to the energy velocity in the absence of damping A.M18373 ; A.M143901 ; A.M10760 . ### II.5 Damping property Now, we study the damping function of surface waves of the system. To obtain an analytical expression for the damping function of the surface waves of graphene near a PEC, we use the perturbative method proposed by Loudon R.L233 and Nkoma et al. J.N3547 . Such a procedure enables us to calculate the true surface wave damping rate to first order in the damping parameter $\gamma$, introduced to describe the intrinsic damping of crystal oscillations. Also, this theory enables us to discuss both the propagation length and the lifetime of a surface wave. The advantage of the perturbative method is that the damping properties result from the calculation of real dispersion relations. The plasmonic damping parameter or relaxation rate $\Gamma(k,\omega)$ of the present case may be determined by the following procedure. The kinetic and total energy densities (per unit area) $U_{\mathrm{gk}}$ and $\left\langle U\right\rangle$ are first calculated in the absence of damping. If a small amount of damping is now reintroduced, the surface energy relaxation rate to the lowest order in $\gamma$ is $\Gamma(k,\omega)=2\gamma U_{\mathrm{gk}}/\left\langle U\right\rangle$, where from Eq. 
(A-3) we have $U_{\mathrm{gk}}=\dfrac{\varepsilon_{0}}{4}A_{-}^{2}\frac{\pi}{\varepsilon_{0}D_{\mathrm{g}}}|\sigma_{\mathrm{g}}|^{2}\sinh^{2}\kappa_{1}d$. Therefore, we get $\Gamma(k,\omega)\\\ =\dfrac{2\gamma\frac{\pi}{\varepsilon_{0}D_{\mathrm{g}}}|\sigma_{\mathrm{g}}|^{2}}{\dfrac{d\frac{\varepsilon_{1}^{2}}{\kappa_{1}^{2}}\frac{\omega^{2}}{c^{2}}}{\sinh^{2}\kappa_{1}d}+k^{2}\frac{\varepsilon_{1}}{\kappa_{1}^{3}}\coth\kappa_{1}d+k^{2}\frac{\varepsilon_{2}}{\kappa_{2}^{3}}+\frac{\pi|\sigma_{\mathrm{g}}|^{2}}{\varepsilon_{0}D_{\mathrm{g}}}\frac{\omega^{2}+v_{\mathrm{s}}^{2}k^{2}}{\omega^{2}}}\;.$ (26) Let us note that the frequency and wavenumber dependence of the damping function comes from the retarded part of the plasmonic waves and it is easy to find that, in the nonretarded limit, the total energy density (per unit area) becomes twice as large as the kinetic energy density (per unit area) of the system. As a consequence, the damping function of surface waves of the system equals $\gamma$, i.e., it becomes a constant. Also, let us note that the surface wave lifetime $T$ is simply the inverse of Eq. (26), i.e., $T(k,\omega)=\Gamma^{-1}(k,\omega)$, while their propagation length is given by $L(k,\omega)=v_{\mathrm{group}}\Gamma^{-1}(k,\omega)$, where $v_{\mathrm{group}}$ can be found from Eq. (24). By using (26), the damping rate of the long-wavelength screened plasmon- polaritons of graphene near a PEC, in terms of the dimensionless variables are presented in Fig. 4 when $\varepsilon_{1}=\varepsilon_{\mathrm{SiO_{2}}}=3.9$, $\varepsilon_{2}=1$. It is clear that the damping function of long-wavelength screened plasmon-polaritons is approximately equal to $\gamma$ for a large value of $k$. From panel (a) one can see that by decreasing values of $D$ for a fixed value of $k_{\mathrm{F}}$, the screened plasmon-polaritons relaxation rate increases sharply for a low value of $k$. On the other hand, from panel (b) it is obvious that for a fixed value of $D$, by decreasing values of $k_{\mathrm{F}}$, the screened plasmon-polaritons relaxation rate increases. ### II.6 Surface plasmon and electromagnetic field strength functions Since a screened plasmon-polariton is a coupled optical plasmon-photon wave, we can introduce strength functions that characterize the quantitative compositions of the mixed wave. We have $\left\langle U_{\mathrm{sp}}\right\rangle/\left\langle U\right\rangle+\left\langle U_{\mathrm{ph}}\right\rangle/\left\langle U\right\rangle=1$, where $\left\langle U_{\mathrm{ph}}\right\rangle$ is the total photon energy density and $\left\langle U_{\mathrm{sp}}\right\rangle$ is the total surface plasmon energy density associated with the screened plasmon-polariton. Note that $\left\langle U\right\rangle$ is the sum of the integrated energy densities, i.e., Eq. 
(23), and also $\left\langle U_{\mathrm{ph}}\right\rangle=\dfrac{\varepsilon_{0}}{4}A_{-}^{2}\sinh^{2}\kappa_{1}d\left[\frac{\varepsilon_{1}^{2}}{\kappa_{1}^{2}}\dfrac{\omega^{2}}{c^{2}}\dfrac{d}{\sinh^{2}\kappa_{1}d}\right.\\\ \qquad\left.+k^{2}\dfrac{\varepsilon_{1}}{\kappa_{1}^{3}}\coth\kappa_{1}d+k^{2}\dfrac{\varepsilon_{2}}{\kappa_{2}^{3}}\right]\;,$ (27) $\left\langle U_{\mathrm{sp}}\right\rangle=\dfrac{\varepsilon_{0}}{4}A_{-}^{2}\sinh^{2}\kappa_{1}d\frac{\pi}{\varepsilon_{0}D_{\mathrm{g}}}\frac{\omega^{2}+v_{\mathrm{s}}^{2}k^{2}}{\omega^{2}}|\sigma_{\mathrm{g}}|^{2}\;.$ (28) Thus, for the surface plasmon strength function $\Theta_{\mathrm{sp}}$ and the electromagnetic strength function $\Theta_{\mathrm{ph}}$ we obtain $\Theta_{\mathrm{ph}}=\dfrac{\left\langle U_{\mathrm{ph}}\right\rangle}{\left\langle U\right\rangle}\\\ =\dfrac{\frac{\varepsilon_{1}^{2}}{\kappa_{1}^{2}}\frac{\omega^{2}}{c^{2}}\frac{d}{\sinh^{2}\kappa_{1}d}+k^{2}\frac{\varepsilon_{1}}{\kappa_{1}^{3}}\coth\kappa_{1}d+k^{2}\frac{\varepsilon_{2}}{\kappa_{2}^{3}}}{\frac{d\frac{\varepsilon_{1}^{2}}{\kappa_{1}^{2}}\frac{\omega^{2}}{c^{2}}}{\sinh^{2}\kappa_{1}d}+k^{2}\frac{\varepsilon_{1}}{\kappa_{1}^{3}}\coth\kappa_{1}d+k^{2}\frac{\varepsilon_{2}}{\kappa_{2}^{3}}+\frac{\pi|\sigma_{\mathrm{g}}|^{2}}{\varepsilon_{0}D_{\mathrm{g}}}\frac{\omega^{2}+v_{\mathrm{s}}^{2}k^{2}}{\omega^{2}}}\;.$ (29) $\Theta_{\mathrm{sp}}=\dfrac{\left\langle U_{\mathrm{sp}}\right\rangle}{\left\langle U\right\rangle}\\\ =\dfrac{\frac{\pi}{\varepsilon_{0}D_{\mathrm{g}}}\frac{\omega^{2}+v_{\mathrm{s}}^{2}k^{2}}{\omega^{2}}|\sigma_{\mathrm{g}}|^{2}}{\frac{d\frac{\varepsilon_{1}^{2}}{\kappa_{1}^{2}}\frac{\omega^{2}}{c^{2}}}{\sinh^{2}\kappa_{1}d}+k^{2}\frac{\varepsilon_{1}}{\kappa_{1}^{3}}\coth\kappa_{1}d+k^{2}\frac{\varepsilon_{2}}{\kappa_{2}^{3}}+\frac{\pi|\sigma_{\mathrm{g}}|^{2}}{\varepsilon_{0}D_{\mathrm{g}}}\frac{\omega^{2}+v_{\mathrm{s}}^{2}k^{2}}{\omega^{2}}}\;.$ (30) Fig. 5 shows the variation of $\Theta_{\mathrm{ph}}$ and $\Theta_{\mathrm{sp}}$ with respect to the dimensionless wavenumber of the screened plasmon-polaritons of graphene near a PEC. One can see that for low values of $D$, a plasmonic wave of the system is largely plasmon-like and the role of the surface electromagnetic wave is small compared with that of the surface plasmon wave. Only for high values of $D$ the strength functions of surface plasmon and surface electromagnetic wave are comparable. ## III Conclusion In summary, we have investigated the properties of screened plasmons of graphene near a PEC based on classical electrodynamics and the linearized hydrodynamic model with Fermi correction. In this way, at first, we have derived the dispersion relation of the mentioned screened plasmonic waves. We have studied numerically the effects of graphene distance from the PEC and graphene sheet carrier density on the surface wave properties of the system. Numerical results for the present system indicate alterations in the physical behavior of the surface waves, in comparison with those obtained for an isolated monolayer graphene. Also, we have derived the analytical expressions for the power flow, energy density, and energy velocity of screened plasmons of the system. Furthermore, we have obtained the analytical expressions for the damping function, and surface plasmon and electromagnetic field strength functions of surface waves of the system with small intrinsic damping. 
The results show that the plasmonic properties of graphene near the PEC may present new behaviors that make it suitable for applications in the mid- infrared to a few THz range of frequencies. ## ACKNOWLEDGMENTS The first-named author would like to thank the Department of Electronic and Communications Engineering at the Yildiz Technical University for its hospitality during his visit. Also, A.M. would like to acknowledge the financial support of the Kermanshah University of Technology for this research opportunity under grant number S/P/F/6. ## AUTHOR DECLARATIONS ### Conflict of Interest The authors have no conflicts to disclose. ### Author Contributions Afshin Moradi: Project administration (equal); Conceptualization (lead); Investigation (lead); Methodology (lead); Validation (lead); Formal analysis (lead); Software (lead); Writing - original draft (lead); Writing - review and editing (equal). Nurhan Türker Tokan: Project administration (equal); Conceptualization (supporting); Investigation (supporting); Methodology (supporting); Validation (supporting); Formal analysis (supporting); Writing - review and editing (equal). ### DATA AVAILABILITY The data that supports the findings of this study are available within the article. ## Appendix: The energy density and power flow on graphene with Fermi correction From the Poynting theorem for energy in standard electrodynamics, we have: $\nabla\cdot\textbf{S}+\partial_{t}u=-\textbf{E}\cdot\textbf{J}\;,$ (A-1) where $\textbf{S}=\textbf{E}\times\textbf{H}$ is known as the Poynting vector, which is a power density vector associated with an electromagnetic field and $u=\frac{1}{2}\left(\textbf{E}\cdot\textbf{D}+\textbf{H}\cdot\textbf{B}\right)$ is the energy density of electromagnetic waves. For the present nonmagnetic system, we have $\textbf{D}=\varepsilon_{0}\varepsilon\textbf{E}$ and $\textbf{B}=\mu_{0}\textbf{H}$ (where $\mu_{0}$ is the permeability of free space). At this stage, by employing Eqs. (1) and (2) and after doing some algebra, we rewrite the right-hand side of Eq. (A-1) on graphene surface as: $\textbf{E}\cdot\textbf{J}=\frac{\pi}{2D_{\mathrm{g}}}\partial_{t}j_{x}^{2}+\frac{\pi e^{2}}{2D_{\mathrm{g}}}v_{\mathrm{s}}^{2}\partial_{t}n_{\mathrm{g}}^{2}-\frac{\pi e}{D_{\mathrm{g}}}v_{\mathrm{s}}^{2}\partial_{x}\textbf{e}_{x}\cdot n_{\mathrm{g}}j_{x}\textbf{e}_{x}\;.$ (A-2) Then, this equation together with Eq. (A-1) yields the energy density $U_{\mathrm{g}}$ and the power flow $S_{\mathrm{g}}$ on the graphene surface in the forms, as: $U_{\mathrm{g}}=U_{\mathrm{gk}}+U_{\mathrm{gp}}\;,$ (A-3) $S_{\mathrm{g}}=-\frac{\pi e}{D_{\mathrm{g}}}v_{\mathrm{s}}^{2}n_{\mathrm{g}}j_{x}\;.$ (A-4) On the right-hand side of Eq. (A-3), the first term is the kinetic-energy density $U_{\mathrm{gk}}=\pi j_{x}^{2}/2D_{\mathrm{g}}$ and the second term represents the the potential-energy density $U_{\mathrm{gp}}=\pi e^{2}v_{\mathrm{s}}^{2}n_{\mathrm{g}}^{2}/2D_{\mathrm{g}}$. ## References * (1) K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Electric field effect in atomically thin carbon films, Science 306, 666 (2004) * (2) B. Wunsch, T. Stauber, F. Sols, and F. Guinea, Dynamical polarization of graphene at finite doping, New J. Phys. 8, 318 (2006) * (3) E. H. Hwang and S. D. Sarma, Dielectric function, screening, and plasmons in two-dimensional graphene, Phys. Rev. B 75, 205418 (2007) * (4) S. A. Mikhailov and K. Ziegler, New electromagnetic mode in graphene, Phys. Rev. Lett. 
99, 016803 (2007) * (5) X. Luo, T. Qiu, W. Lu, and Z. Ni, Plasmons in graphene: recent progress and applications, Mater. Sci. Eng. R 74, 351 (2013) * (6) P.A.D. Goncalves, N.M.R. Peres, An Introduction to Graphene Plasmonic (World Scientific, Singapore, 2016) * (7) M. Mendoza, H. J. Herrmann, and S. Succi, Hydrodynamic model for conductivity in graphene, Sci. Rep. 3, 1052 (2013) * (8) A. N. Grigorenko, M. Polini, and K. S. Novoselov, Graphene plasmonics, Nat. Photon. 6, 749 (2012) * (9) M. Müller, J. Schmalian, and L. Fritz, Graphene: A Nearly Perfect Fluid, Phys. Rev. Lett. 103, 025301 (2009) * (10) A. J. Chaves, N. M. R. Peres, G. Smirnov, and N. Asger Mortensen, Hydrodynamic model approach to the formation of plasmonic wakes in graphene, Phys. Rev. B 96, 195438 (2017) * (11) B. A. Ferreira, B. Amorim, A. J. Chaves, and N. M. R. Peres, Quantization of graphene plasmons, Phys. Rev. A 101, 033817 (2020) * (12) D. Svintsov, V. Vyurkov, S. Yurchenko, T. Otsuji, and V. Ryzhii, Hydrodynamic model for electron-hole plasma in graphene, J. Appl. Phys. 111, 083715 (2012) * (13) A. Lucas, Sound waves and resonances in electron-hole plasma, Phys. Rev. B 93, 245153 (2016) * (14) Z. Sun, D. N. Basov, and M. M. Fogler, Universal linear and nonlinear electrodynamics of a Dirac fluid, Proc. Natl Acad. Sci. USA 115, 3285 (2018) * (15) A. Lucas, and K. C. Fong, Hydrodynamics of electrons in graphene, J. Phys. Condens. Matter 30, 053001 (2018) * (16) D. Svintsov, Hydrodynamic-to-ballistic crossover in Dirac materials,Phys. Rev. B 97, 121405 (2018) * (17) A. Lucas, and S. D. Sarma, Electronic sound modes and plasmons in hydrodynamic two-dimensional metals, Phys. Rev. B 97, 115449 (2018) * (18) I. Torre, L. V. de Castro, B. V. Duppen, D. B. Ruiz, F. M. Peeters, F. H. L. Koppens, and M. Polini, Acoustic plasmons at the crossover between the collisionless and hydrodynamic regimes in two-dimensional electron liquids, Phys. Rev. B 99, 144307 (2019) * (19) B. N. Narozhny, Electronic hydrodynamics in graphene, Ann. Phys. 411, 167979 (2019) * (20) E. I. Kiselev, and J. Schmalian, Nonlocal hydrodynamic transport and collective excitations in Dirac fluids, Phys, Rev. B 102, 245434 (2020) * (21) J. Dufty, K. Luo, and J. Wrighton, Generalized hydrodynamics revisited, Phys. Rev. Res. 2, 023036 (2020) * (22) D. Fateev, and V. Popov, Hydrodynamic terahertz plasmons and electron sound in graphene with spatial dispersion, Semiconductors 54, 941 (2020) * (23) B. Narozhny, I. Gornyi, and M. Titov, Hydrodynamic collective modes in graphene, Phys. Rev. B 103, 115402 (2021) * (24) W. Zhao, S. Wang, S. Chen, Z. Zhang, K. Watanabe, T. Taniguchi, A. Zettl, and F. Wang, Observation of hydrodynamic plasmons and energy waves in graphene, Nature 614, 688 (2023) * (25) I. S. Eid, B. F. Mohamed, and B. Guo, Electron exchange effect on surface magnetoplasmon polaritons dynamics in a graphene-plasmonic structure, J. Appl. Phys. 133, 133104 (2023) * (26) A. Moradi, Energy density and energy flow of magnetoplasmonic waves on graphene, Solid State Commun. 253, 63 (2017) * (27) A. Moradi, Damping properties of plasmonic waves on graphene, Phys. Plasmas 24, 072114 (2017) * (28) A. Moradi, Energy density and energy flow of surface waves in a strongly magnetized graphene, J. Appl. Phys. 123, 043103 (2018) * (29) A. Moradi, Energy density and energy flow of plasmonic waves in bilayer graphene, Opt. Commun. 394, 135 (2017) * (30) A. Moradi, Plasmonic waves of graphene on a conducting substrate, J. Mod. Opt. 66, 353 (2019) * (31) X. Gu, I-T. 
Lin, and J.-M. Liu, Extremely confined terahertz surface plasmon-polaritons in graphene-metal structures, Appl. Phys. Lett. 103, 071103 (2013) * (32) M. Y. Morozov, I. M. Moiseenko, and V. V. Popov, Amplification of plasma waves in shielded active graphene. Tech. Phys. Lett. 42, 40 (2016) * (33) A. Moradi, Comment on: Tunable surface waves supported by graphene-covered left-handed material structures, Opt. Commun. 545, 129735 (2023) * (34) M. M. Bait Suwailam, Z. Chen, Surface waves on a grounded double-negative (DNG) slab waveguide, Microw. Opt. Technol. Lett. 44, 494 (2005) * (35) A. S. Petrov, and D. Svintsov, Perturbation theory for two-dimensional hydrodynamic plasmons, Phys. Rev. B 99, 195437 (2019) * (36) G. A. Marks, D. Blankespoor, Z. L. Miskovic, Launching plasmons in a two-dimensional material traversed by a fast charged particle, Materials 16, 1150 (2023) * (37) A. Moradi, Canonical Problems in the Theory of Plasmonics: From 3D to 2D Systems (Switzerland, Springer, 2020) * (38) A. Moradi, M. Wubs, Strongly direction-dependent magnetoplasmons in mixed Faraday-Voigt configurations, Sci Rep 11, 18373 (2021) * (39) A. Moradi, N. T. Tokan, Magnetostatic microwaves in circular metallic waveguides filled with uniaxial negative permeability media, J. Appl. Phys. 132, 143901 (2022) * (40) A. Moradi, P.-G. Luan, Electromagnetic energy density in hyperbolic metamaterials, Sci. Rep. 12, 10760 (2022) * (41) M. Y. Morozov, and V. V. Popov, Concept of terahertz waveguide plasmon amplifier based on a metal groove with active graphene, Sci Rep 12, 22209 (2022) * (42) R. Loudon, The propagation of electromagnetic energy through an absorbing dielectric, J. Phys. A: Gen. Phys. 3, 233 (1970) * (43) J. Nkoma, R. Loudon, D.R. Tille, Elementary properties of surface polaritons. J. Phys. C: Solid State Phys. 7, 3547 (1974)
# Determination of QPO properties in the presence of strong broad-band noise: a case study on the data of MAXI J1820+070 Deng-Ke Zhou1,2, Shuang-Nan Zhang1,2, Li-Ming Song1,2, Jin-Lu Qu1,2, Liang Zhang1,3, Xiang Ma1, You-Li Tuo1, Ming-Yu Ge1, Yanan Wang3, Shu Zhang1 and Lian Tao1 1Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China 2University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing, China. 3Physics & Astronomy, University of Southampton, Southampton, Hampshire SO17 1BJ, UK Corresponding author. E-mail<EMAIL_ADDRESS>author. E-mail<EMAIL_ADDRESS> (Accepted XXX. Received YYY; in original form ZZZ) ###### Abstract Accurate calculation of the phase lags of quasi-periodic oscillations (QPOs) will provide insight into their origin. In this paper we investigate the phase lag correction method which has been applied to calculate the intrinsic phase lags of the QPOs in MAXI J1820+070. We find that the traditional additive model between BBN and QPOs in the time domain is rejected, but the convolution model is accepted. By introducing a convolution mechanism in the time domain, the Fourier cross-spectrum analysis shows that the phase lags between QPOs components in different energy bands will have a simple linear relationship with the phase lags between the total signals, so that the intrinsic phase lags of the QPOs can be obtained by linear correction. The power density spectrum (PDS) thus requires a multiplicative model to interpret the data. We briefly discuss a physical scenario for interpreting the convolution. In this scenario, the corona acts as a low-pass filter, the Green’s function containing the noise is convolved with the QPOs to form the low-frequency part of the PDS, while the high-frequency part requires an additive component. We use a multiplicative PDS model to fit the data observed by _Insight_ -HXMT. The overall fitting results are similar compared to the traditional additive PDS model. Neither the width nor the centroid frequency of the QPOs obtained from each of the two PDS models were significantly different, except for the r.m.s. of the QPOs. Our work thus provides a new perspective on the coupling of noise and QPOs. ###### keywords: X-rays: binaries – methods: analytical – methods: data analysis ††pubyear: 2021††pagerange: Determination of QPO properties in the presence of strong broad-band noise: a case study on the data of MAXI J1820+070–Determination of QPO properties in the presence of strong broad-band noise: a case study on the data of MAXI J1820+070 ## 1 Introduction Decades of research on black hole binaries (BHBs) show that their X-ray emission is variable on different time scales, including the low-frequency (mHz to 30Hz) quasi-periodic oscillations (QPOs) and the broad-band noise (BBN) (Psaltis et al. 1999; Ingram et al. 2009; Ingram & Done 2011; Motta 2016). The study of the timing signals can effectively diagnose the geometric characteristics of the disk and the corona near the black hole (Belloni & Hasinger 1990; Belloni et al. 2002; Ingram 2016; Ingram & Motta 2019). The disk and the corona near the black hole continuously radiate X-ray photons outward due to various radiation mechanisms (thermal radiation, Compton Radiation and so on). Photons with different energy arrive at the observer at different times because they may come from different radiation regions (Lin et al. 2000; Rapisarda et al. 
2016), or undergo different scattering processes (Cui 1999; Poutanen 2001), or have very complex mechanisms that cause delays (Morgan et al. 1997; Wijnands et al. 1999; Qu et al. 2010). Therefore, analyzing the phase/time lags of photons between different energy bands helps us better understand the geometric or radiative characteristics of X-ray BHBs. A common analysis method is based on the Fourier cross-spectrum, which measures the frequency-dependent phase lag spectrum (FDPLS) between the signals in two different energy bands (van der Klis et al. 1987). This method allows one to study the phase lags of two signals as a function of Fourier frequency. Thus the phase lags between different components of timing signals, which usually originate from different physical processes (Narayan & Yi 1995; Done et al. 2007; Ingram & Done 2011), can be studied separately. For example, Zhang et al. (2020) conducted a systematic study of the phase lag of the type-C QPO and found that the phase lag behaviour of the sub-harmonic of the QPO is very similar to that of the QPO fundamental component, but the second harmonic of the QPO shows a quite different phase lag behaviour. Uttley et al. (2011) investigated the phase lag of the BBN components of GX 339-4 and found that the large lags can be explained by viscous propagation of mass accretion fluctuations in the disk. The traditional way to obtain the phase lag of the QPO component is to assume that the other components contribute weakly to the lag in the QPO frequency range, and then directly treat the FDPLS values in the QPO frequency range as the phase lags of the QPO component (e.g., Morgan et al. 1997; Wijnands et al. 1999; Kara et al. 2019; Zhang et al. 2020). However, the coexistence of various components makes it difficult to isolate any individual component. In particular, when the BBN is sufficiently strong in the QPO frequency range, there is no reason to ignore the effect of the BBN on the measured QPO phase lag. Despite attempts by some authors to ameliorate this dilemma by fitting different components of the cross-spectrum (e.g., Qu et al. 2004), there is no broad consensus on how to obtain the intrinsic phase lag of the QPO in the presence of strong BBN. It is therefore difficult to determine the intrinsic properties (including the phase lag) of the QPO in this regime. In a recent work (Ma et al. 2021, hereafter Ma21), the authors attempted to correct the original phase lags, obtaining a clear physical picture from the corrected values. Ma21 investigated the behavior of the QPO phase lags in MAXI J1820+070 using _Insight_-HXMT observations and proposed a method to obtain the intrinsic phase lag of the QPO. In their analysis of the phase lags, they found that by subtracting the phase lags below the QPO frequency range they could obtain consistent QPO phase lags as functions of photon energy for all observations, and could explain the lag behavior through the precession of a compact jet above the black hole. In the data they used, the PDS shows that the BBN components are too strong, relative to the QPO component, for their contribution to the phase lags in the QPO frequency range to be ignored (see panel c of figure 1 in Ma21). Without this correction, the phase lags obtained from the original FDPLS are affected by the BBN and thus are not the intrinsic phase lags of the QPO. 
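For reference, the standard segment-averaged cross-spectrum estimate of the FDPLS (van der Klis et al. 1987) takes only a few lines of numpy. This is a generic sketch of our own, not the actual analysis code of Ma21; it uses the sign convention $\Delta\phi(s_{2},s_{1})=\phi_{s_{2}}-\phi_{s_{1}}$ that is also adopted in Sec. 2 below, so a band that simply lags in time has a negative $\Delta\phi$:

```python
import numpy as np

def fdpls(lc1, lc2, dt, seg_len):
    """FDPLS of lc2 relative to lc1: angle of the segment-averaged cross
    spectrum C(f) = F1*(f) F2(f), i.e. phi_{lc2} - phi_{lc1} at each f."""
    nseg = len(lc1) // seg_len
    cross = np.zeros(seg_len // 2 + 1, dtype=complex)
    for i in range(nseg):
        s1 = lc1[i * seg_len:(i + 1) * seg_len]
        s2 = lc2[i * seg_len:(i + 1) * seg_len]
        cross += np.conj(np.fft.rfft(s1 - s1.mean())) * np.fft.rfft(s2 - s2.mean())
    freqs = np.fft.rfftfreq(seg_len, d=dt)
    return freqs[1:], np.angle(cross[1:])          # drop the zero-frequency bin

# Toy example: a 0.4 Hz "QPO" in which lc2 lags lc1 by 0.5 rad
dt, n = 0.01, 200000
t = np.arange(n) * dt
rng = np.random.default_rng(0)
lc1 = 100 + np.sin(2 * np.pi * 0.4 * t) + rng.normal(0, 0.5, n)
lc2 = 100 + np.sin(2 * np.pi * 0.4 * t - 0.5) + rng.normal(0, 0.5, n)
f, lag = fdpls(lc1, lc2, dt, seg_len=1000)
print(lag[np.argmin(np.abs(f - 0.4))])             # ~ -0.5 = phi2 - phi1
```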
Although Ma21 applied this method to obtain consistent results of the phase lags, the rationale for doing so was not explained in detail, so the plausibility of this correction method needs to be tested. On the data they analyzed, some of the observations obtained phase lags with little difference before and after the correction, but some of the phase lags changed significantly (even the sign is totally reversed) before and after the correction. Therefore, we believe it is necessary to investigate under what conditions the correction is effective and how the QPO component is related to the BBN component. The motivation of this paper is to explore the mechanism behind the correction method used by Ma21 and to investigate the QPO properties in the presence of strong BBN in conjunction with the results obtained by Ma21. Since we want to obtain the properties of a certain component (in our case, the QPO), and what we observe is some kind of superposition of all components, we have to face the problem of how these components contribute to the total signal. In this paper, when we refer to the term signal, we are referring to the light curve or the underlying time series. The total signal is defined as the time series that we directly observe and the sub-signals are the sub- components such as QPO and BBN that make up the total signal. Traditionally, it is believed that the BBN and the QPO are additive in the time domain and that they are incoherent at any frequency, which is why the PDS is fitted by the sum of several Lorentzian functions. Ingram & van der Klis (2013) proposed a possible relationship between QPO and BBN, where the QPO component and the BBN component are multiplied in the time domain. In this case, a convolution model is required for the fitting of the PDS in the frequency domain. Another way in which the QPO component and the BBN component are combined into a total signal in the time domain is convolution, which is usually caused by the response of the QPO signal in the region where the BBN component is generated (a model similar to this mechanism can be found in Cabanac et al. 2010). The calculation of the FDPLS involves the Fourier orthogonal decomposition of the signal, so it can be expected that if the sub-signals form the total signal in different ways, then the relationship between the FDPLS of the total signal and the FDPLS of the sub-signals must be different. This paper is structured as follows: Section 2 analyzes the relationships between the FDPLS as well as the PDS of total signals and sub-signals. An algorithm to generate two signals satisfied specific PDS and FDPLS simultaneously is also proposed. Besides, one possible way of coupling the QPO component and the BBN component in the time domain is discussed. In Section 3, based on the results of Ma21’s analysis of MAXI J1820+070 on phase lags, we argue that the QPO component and the BBN component constitute the total signal by convolution in the time domain. Using the data of MAXI J1820+070, we fit the PDS in different energy bands using the multiplicative PDS model and the traditional additive PDS model, and compare their differences. In addition, we also performed some simulations to rule out the possibility that the total signal appears to be the sum of the sub-signals in the time domain. Section 4 discusses and summarizes the whole paper. 
## 2 theory and simulation ### 2.1 phase lag relationship Suppose that the expressions of non-zero mean signals $r_{1}(t)$, $r_{2}(t)$, $q_{1}(t)$, $q_{2}(t)$ at frequency $f_{0}$ can be written as: $\displaystyle r_{1}(t)$ $\displaystyle=R_{1}\sin(2\pi f_{0}t+\phi_{r_{1}})+c_{r_{1}},$ (1) $\displaystyle r_{2}(t)$ $\displaystyle=R_{2}\sin(2\pi f_{0}t+\phi_{r_{2}})+c_{r_{2}},$ $\displaystyle q_{1}(t)$ $\displaystyle=Q_{1}\sin(2\pi f_{0}t+\phi_{q_{1}})+c_{q_{1}},$ $\displaystyle q_{2}(t)$ $\displaystyle=Q_{2}\sin(2\pi f_{0}t+\phi_{q_{2}})+c_{q_{2}},$ where $c_{r_{1}}$, $c_{r_{2}}$, $c_{q_{1}}$ and $c_{q_{2}}$ are the mean values of the corresponding signals; $R_{1}$, $R_{2}$, $Q_{1}$, $Q_{2}$ and $\phi_{r_{1}}$, $\phi_{r_{2}}$, $\phi_{q_{1}}$, $\phi_{q_{2}}$ are the amplitudes and the initial phases of the corresponding signals, respectively. The frequency $f_{0}$ can take any non-negative value including $0$. When $0$ is taken, it indicates that this is a constant signal. If the total signal is the sum of the sub-signals in the time domain (hereafter this kind of total signal is called the additive signal), i.e. : $\displaystyle s_{1}(t)$ $\displaystyle=r_{1}(t)+q_{1}(t),$ (2) $\displaystyle s_{2}(t)$ $\displaystyle=r_{2}(t)+q_{2}(t),$ then the phase difference (i.e. phase lag) between $s_{1}(t)$ and $s_{2}(t)$ can be written as: $\displaystyle\Delta\phi_{\rm add}(s_{2},s_{1};f_{0})$ (3) $\displaystyle=\phi_{s_{2}}-\phi_{s_{1}}$ $\displaystyle={\rm Arg}[R_{2}\cos(\phi_{r_{2}})+Q_{2}\cos(\phi_{q_{2}}),R_{2}\sin(\phi_{r_{2}})+Q_{2}\sin(\phi_{q_{2}})]$ $\displaystyle-{\rm Arg}[R_{1}\cos(\phi_{r_{1}})+Q_{1}\cos(\phi_{q_{1}}),R_{1}\sin(\phi_{r_{1}})+Q_{1}\sin(\phi_{q_{1}})].$ Here we use $\rm Arg$ $[a,b]$ to denote the argument of the complex $a+ib$, where $i$ is the imaginary unit. It can be seen from equation (3) that if the total signal is the additive signal, the phase lag between the total signals depends on the amplitude and initial phase of each sub-signal. If the total signal is convoluted by the sub-signals (hereafter this kind of total signal is called the convolved signal), then the phase lag between the total signal and the phase lag between the sub-signals satisfies a linear relationship, the proof of which will be given below. Still assume that the sub-signals satisfy equation (1), but at this time the total signals are equal to the convolution of the sub-signals: $\displaystyle s_{1}(t)$ $\displaystyle=r_{1}(t)\otimes q_{1}(t),$ (4) $\displaystyle s_{2}(t)$ $\displaystyle=r_{2}(t)\otimes q_{2}(t),$ where the sign $\otimes$ represents the convolution operation. The Fourier transform of the convolution of two signals is equal to the multiplication of their respective Fourier transforms. We can obtain the cross-correlation function (CCF) of $s_{1}(t)$ and $s_{2}(t)$ in the frequency domain: $\displaystyle{\rm CCF}(f)$ $\displaystyle=\frac{R_{1}R_{2}Q_{1}Q_{2}}{64\pi^{2}}e^{-i[\Delta\phi(r_{2},r_{1})+\Delta\phi(q_{2},q_{1})]}\delta^{4}(f-f_{0}),$ (5) where $\Delta\phi(r_{2},r_{1})=\phi_{r_{2}}-\phi_{r_{1}}$ and $\Delta\phi(q_{2},q_{1})=\phi_{q_{2}}-\phi_{q_{1}}$ are the phase lag of the sub-signals. 
The phase lag of the two total signals $s_{1}$ and $s_{2}$ can be obtained by taking the argument of their CCF: $\displaystyle\Delta\phi_{\rm con}(s_{2},s_{1};f_{0})$ $\displaystyle={\rm Arg}[{\rm CCF}(f)]=\Delta\phi(r_{2},r_{1})+\Delta\phi(q_{2},q_{1}).$ (6) That is to say, if the total signal is the convolved signal, the phase lag of the total signals is equal to the sum of the phase lags of the sub-signals. We also note that Rapisarda et al. (2014) argued that the QPO component is multiplied with the broad component to form the observed signal. We now consider the phase lag relationship between total signals formed by multiplying single-frequency sub-signals together (hereafter, this kind of total signal is called the multiplicative signal). $s_{1}(t)$ and $s_{2}(t)$ are now written as $\displaystyle s_{1}(t)$ $\displaystyle=r_{1}(t)\times q_{1}(t)$ (7) $\displaystyle=c_{q_{1}}R_{1}\sin(2\pi f_{0}t+\phi_{r_{1}})+c_{r_{1}}Q_{1}\sin(2\pi f_{0}t+\phi_{q_{1}})$ $\displaystyle+\frac{1}{2}R_{1}Q_{1}\cos(2\pi\times 2f_{0}t+\phi_{r_{1}}+\phi_{q_{1}})+c_{s_{1}},$ $\displaystyle s_{2}(t)$ $\displaystyle=r_{2}(t)\times q_{2}(t)$ $\displaystyle=c_{q_{2}}R_{2}\sin(2\pi f_{0}t+\phi_{r_{2}})+c_{r_{2}}Q_{2}\sin(2\pi f_{0}t+\phi_{q_{2}})$ $\displaystyle+\frac{1}{2}R_{2}Q_{2}\cos(2\pi\times 2f_{0}t+\phi_{r_{2}}+\phi_{q_{2}})+c_{s_{2}},$ where $c_{s_{1}}$ and $c_{s_{2}}$ are constants. Thus, the two total signals $s_{1}(t)$ and $s_{2}(t)$ contain two non-zero frequency components, one at $f_{0}$ and the other at $2f_{0}$. We can see that the first two terms of $s_{1}(t)$ and $s_{2}(t)$ are in fact additive signals, and thus the results on additive signals can be used. The phase lags can therefore be written as: $\displaystyle\Delta\phi_{\rm mul}(s_{2},s_{1};f_{0})$ $\displaystyle=\Delta\phi_{\rm add}(s^{\prime}_{2},s^{\prime}_{1};f_{0}),$ (8) $\displaystyle\Delta\phi_{\rm mul}(s_{2},s_{1};2f_{0})$ $\displaystyle=\Delta\phi(r_{2},r_{1})+\Delta\phi(q_{2},q_{1}),$ where $s^{\prime}_{1}=c_{q_{1}}R_{1}\sin(2\pi f_{0}t+\phi_{r_{1}})+c_{r_{1}}Q_{1}\sin(2\pi f_{0}t+\phi_{q_{1}})$ and $s^{\prime}_{2}=c_{q_{2}}R_{2}\sin(2\pi f_{0}t+\phi_{r_{2}})+c_{r_{2}}Q_{2}\sin(2\pi f_{0}t+\phi_{q_{2}})$. This is very interesting because the multiplicative signal seems to contain properties of both additive and convolved signals: on one hand the phase lag at frequency $f_{0}$ follows the pattern of the additive signal, and on the other hand the phase lag at frequency $2f_{0}$ follows the pattern of the convolved signal. However, in general the mean value of an actual signal is larger than its amplitude, so it is expected that the total FDPLS of the multiplicative signal should be closer to the pattern of the additive signal, as we will see in the simulation section. For the general signals $r_{n}$, $q_{n}$, $s_{n}$ ($n=0,1,\dots,N-1$), their discrete-time Fourier series are $\begin{split}r_{n}=\frac{1}{N}\sum_{k=0}^{N-1}R_{k}e^{i2\pi\frac{k}{N}n},\\ q_{n}=\frac{1}{N}\sum_{k=0}^{N-1}Q_{k}e^{i2\pi\frac{k}{N}n},\\ s_{n}=\frac{1}{N}\sum_{k=0}^{N-1}S_{k}e^{i2\pi\frac{k}{N}n},\end{split}$ (9) where $R_{k}$, $Q_{k}$, and $S_{k}$ are the discrete Fourier transforms of $r_{n}$, $q_{n}$, and $s_{n}$, respectively. Thus $r_{n}$, $q_{n}$, $s_{n}$ can be treated as superpositions of many trigonometric functions with different amplitudes, frequencies and initial phases.
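These single-frequency relations are easy to verify numerically. The toy check below (with amplitudes and phases of our own choosing) measures the phase lag at $f_{0}$ from discrete Fourier transforms; the additive totals follow equation (3), while the convolved totals obey equation (6):

```python
import numpy as np

dt, T, f0 = 0.01, 20.0, 1.0
t = np.arange(0.0, T, dt)
N = t.size
k = int(round(f0 * T))              # DFT bin corresponding to f0

def tone(A, phi, c):                # a single-frequency signal as in equation (1)
    return A * np.sin(2.0 * np.pi * f0 * t + phi) + c

r1, r2 = tone(1.0, 0.3, 5.0), tone(0.8, 0.9, 5.0)
q1, q2 = tone(0.5, -0.2, 4.0), tone(0.7, 0.4, 4.0)

def lag(a, b):                      # phase lag Delta_phi(b, a) at f0
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    return np.angle(B[k] * np.conj(A[k]))

# Convolved totals: circular convolution via the FFT keeps the frequency grid.
c1 = np.fft.irfft(np.fft.rfft(r1) * np.fft.rfft(q1), n=N)
c2 = np.fft.irfft(np.fft.rfft(r2) * np.fft.rfft(q2), n=N)

print(lag(r1 + q1, r2 + q2))        # follows equation (3): depends on R, Q, phi
print(lag(c1, c2))                  # equation (6): (0.9-0.3)+(0.4-(-0.2)) = 1.2
```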
For the additive and convolved signals discussed above, at a specified frequency these signals have the same properties as the corresponding single-frequency signals. So the phase lags of the additive signal follow equation (3) at each frequency, and the phase lags of the convolved signal follow equation (6) at each frequency. In summary, for the additive/convolved signals, their FDPLS follow equation (3) or (6) at each specific frequency, respectively. In this case, each frequency corresponds to a set of parameters (amplitude and initial phase) for calculating the phase lag. Unfortunately, it is clear from equation (7) that additional frequency components appear in the multiplicative signal that are not identical to those of the sub-signals, so the conclusion for the single-frequency signal cannot be generalized to the general signal. Nevertheless, we can use simulations (see Section 2.4) to explore the phase lag relationship between the multiplicative signals.

### 2.2 PDS relationship

Assume that $s(t)$ is the additive signal, i.e. $s(t)=r(t)+q(t)$. Let $P_{\rm s}(f)$, $P_{\rm r}(f)$ and $P_{\rm q}(f)$ be the PDS of the signals $s(t)$, $r(t)$ and $q(t)$, respectively. Considering that the Fourier transform is linear, one obtains: $\displaystyle P_{s}(f)$ $\displaystyle=|\mathscr{F}(r+q)|^{2}$ (10) $\displaystyle=|\mathscr{F}(r)+\mathscr{F}(q)|^{2}$ $\displaystyle=P_{r}(f)+P_{q}(f)+\mathscr{F}(r)^{*}\mathscr{F}(q)+\mathscr{F}(r)\mathscr{F}(q)^{*}.$ The last two terms are actually the cross-spectrum of the signals $r(t)$ and $q(t)$. If $r(t)$ and $q(t)$ are incoherent at all frequencies, their cross-spectrum will converge to zero after averaging many signal realizations. Therefore the above equation simplifies to $\langle P_{\rm s}(f)\rangle=\langle P_{\rm r}(f)\rangle+\langle P_{\rm q}(f)\rangle,$ (11) where $\langle\cdot\rangle$ indicates the average over many realizations of the signals. This indicates that the PDS of the sum of two incoherent signals is equal to the sum of their respective PDS. Assume now that $s(t)$ is the convolved signal, i.e. $s(t)=r(t)\otimes q(t)$. The Fourier transform of the total signal $s(t)$ is equal to the multiplication of the Fourier transforms of the sub-signals $r(t)$, $q(t)$, i.e. $\displaystyle\mathscr{F}(s)=\mathscr{F}(r)\mathscr{F}(q),$ (12) $\displaystyle\langle P_{\rm s}(f)\rangle=\langle P_{\rm r}(f)\rangle\langle P_{\rm q}(f)\rangle.$ That is, the PDS of the convolved signal is equal to the multiplication of the PDS of the corresponding sub-signals. It can easily be generalized that if $s(t)=r(t)\otimes q(t)+p(t)$, and $r(t)\otimes q(t)$ is incoherent with $p(t)$, their PDS will satisfy $\langle P_{\rm s}(f)\rangle=\langle P_{\rm r}(f)\rangle\langle P_{\rm q}(f)\rangle+\langle P_{\rm p}(f)\rangle.$ (13) The PDS properties of convolved signals can be carried over to multiplicative signals simply by the symmetry of the Fourier transform, that is, the PDS of the multiplicative signal is the convolution of the PDS of the sub-signals.

### 2.3 An algorithm for simultaneously simulating signals with specified PDS and FDPLS

In order to verify the correctness of the above theoretical analysis as well as to facilitate the analysis below, some simulations need to be done. An algorithm is thus needed to generate two signals with a specified PDS and a specified FDPLS simultaneously. The algorithm steps are as follows: 1) Use the Timmer & Koenig (1995) (hereafter TK95) algorithm to generate two signals $s(t)$ and $s^{\prime}(t)$ that satisfy the specified PDS. Because the phase given to the signal by the TK95 algorithm is random, the phase lag between these two signals is on average zero.
Denote their Fourier transforms as $S(f)$ and $S^{\prime}(f)$, respectively. 2) Given the FDPLS $\phi(f)$, calculate ${\rm CCF}(f)$ according to the following equation: ${\rm CCF}(f)=\begin{cases}|S(f)||S^{\prime}(f)|\{\cos[\phi(f)]+i\sin[\phi(f)]\}&\text{$f\neq 0$},\\ |S(f)||S^{\prime}(f)|&\text{$f=0$},\end{cases}$ (14) where $i$ is the imaginary unit. 3) The complex array ${\rm CCF}(f)$ obtained in step 2 is divided by the complex conjugate of $S(f)$ to obtain a new complex array, to which an inverse Fourier transform is applied to obtain the signal $s^{\prime\prime}(t)$. Expressed in mathematical notation, this is $s^{\prime\prime}(t)=\mathscr{F}^{-1}[\frac{{\rm CCF}(f)}{S^{*}(f)}].$ (15) The underlying PDS of $s^{\prime\prime}(t)$ is the same as the PDS of $s(t)$, but the FDPLS between $s^{\prime\prime}(t)$ and $s(t)$ will satisfy the given FDPLS. In summary, $s(t)$ and $s^{\prime\prime}(t)$ satisfy both the given PDS and the given FDPLS. In this paper, all PDS and FDPLS are extracted using the X-ray astronomy python package stingray (Huppenkothen et al. 2019, version 0.3), and all PDS and FDPLS fitting is done with XSPEC (Arnaud 1996, version 12.11.1) or lmfit (Newville et al. 2014, version 1.0.2).

### 2.4 simulation

Table 1: Timing properties of $r_{1}(t)$, $r_{2}(t)$, $q_{1}(t)$, $q_{2}(t)$ (see section 2.4 for their definitions). signals | bin size (s) | $\nu_{c}$ (Hz) | $\omega$ (Hz) | mean rate (cts/s) | exposure (s) | fractional r.m.s | PDS type | FDPLS type ---|---|---|---|---|---|---|---|--- $r_{1}(t)$ | 0.01 | 0 | 3 | 2000 | 2000 | 30% | BBN | constant $r_{2}(t)$ | 0.01 | 0 | 4 | 2000 | 2000 | 20% | BBN $q_{1}(t)$ | 0.01 | 1 | 0.1 | 2000 | 2000 | 15% | QPO | dip $q_{2}(t)$ | 0.01 | 1 | 0.2 | 2000 | 2000 | 10% | QPO

Figure 1: Simulation results of the FDPLS and PDS. Panel a: PDS of signals $r_{1}(t)$, $r_{2}(t)$, $q_{1}(t)$, $q_{2}(t)$. Panel b: simulated FDPLS (blue dots) between $r_{1}(t)$ and $r_{2}(t)$ and simulated FDPLS (black dots) between $q_{1}(t)$ and $q_{2}(t)$. Panel c: simulated FDPLS (green dots) between $r_{2}(t)+q_{2}(t)$ and $r_{1}(t)+q_{1}(t)$ and simulated FDPLS (blue dots) between $r_{2}(t)\times q_{2}(t)$ and $r_{1}(t)\times q_{1}(t)$. Panel d: simulated FDPLS between $r_{2}(t)\otimes q_{2}(t)$ and $r_{1}(t)\otimes q_{1}(t)$. The red curves in panels b, c, and d are theoretically calculated curves. Error bars correspond to $1\sigma$ confidence intervals.

Four signals $r_{1}(t)$, $r_{2}(t)$, $q_{1}(t)$, and $q_{2}(t)$ with a time resolution of 0.01 s are simulated according to the algorithm proposed in subsection 2.3. The PDS of all these signals are characterized by the Lorentzian function, which takes the form $L(f)=\frac{K(\omega/(2\pi))}{(\omega/2)^{2}+(f-f_{c})^{2}},$ (16) where $K$, $\omega$ and $f_{c}$ denote the normalization factor, the full width at half maximum (FWHM) and the centroid frequency, respectively. The PDS of $r_{1}(t)$ and $r_{2}(t)$ are modeled by setting the centroid frequency of the Lorentzian function to zero and taking a large $\omega$, which simulates the BBN, while $q_{1}(t)$ and $q_{2}(t)$ are modeled by taking an appropriate non-zero centroid frequency and $\omega$, which simulates the QPO. In addition, the theoretical FDPLS $\phi(f)$ is also set. The FDPLS between the BBNs is set to be constant, while the FDPLS between the QPOs is set to have a dip-like feature near the centroid frequency (as seen in MAXI J1820+070).
That is, $\phi(f)=\begin{cases}0.5&\text{for BBN},\\ -0.5e^{-\frac{(f-1)^{2}}{0.04}}&\text{for QPO}.\end{cases}$ (17) The timing properties of these four signals are summarized in table 1. We then split each signal into multiple 20-sec segments and calculated the PDS of each segment with Leahy normalization (Leahy et al. 1983). The PDS is rebinned by a logarithmic factor of 0.03 and we finally obtain the averaged PDS in the frequency range 0.05-50.53 Hz. The FDPLS is obtained using cross-spectrum analysis. The results of the simulated PDS and FDPLS are shown in panels a and b of Fig. 1, respectively. When the total signal is assumed to be the additive or the multiplicative signal, the FDPLS between the total signals is shown in panel c of Fig. 1. When the total signal is assumed to be the convolved signal, the FDPLS between the total signals is shown in panel d of Fig. 1. As stated in the theoretical analysis section, the FDPLS of the multiplicative signal is very close to the FDPLS of the additive signal (see the green and blue data points in panel c of Fig. 1). Due to the symmetry of the Fourier transform with respect to convolution and multiplication, the PDS section will only compare the differences between convolved and additive signals. In panels b, c, d of Fig. 1, the data points are obtained by simulation and the red dashed lines are obtained from our theoretical calculation (the theoretical curve drawn in panel c of Fig. 1 is for the additive signal; we did not draw the theoretical curve for the multiplicative signal because of the analytical difficulties). The theoretical curve shown in panel c of Fig. 1 is calculated using the values of the simulated data (i.e., amplitudes, initial phases), and it appears to fluctuate around the data points, which is due to the randomness deliberately introduced by the simulation algorithm (see TK95 for details). The difference between panels c and d of Fig. 1 is mainly due to the different dependence of the FDPLS of the different kinds of signals (additive, multiplicative, convolved) on each sub-signal. The FDPLS between the convolved signals depends only on the FDPLS between the sub-signals, independent of the other properties of the sub-signals. This is not the case for the additive and multiplicative signals. So it can be seen from panel c that the FDPLS depends on the relative power of the sub-signals, while in panel d the FDPLS does not depend on the shape of the PDS of the sub-signals. In conclusion, the simulation results of the FDPLS are consistent with the theoretical analysis. The PDS simulation results are shown in Fig. 2. The PDS of $r_{1}(t)$ and $q_{1}(t)$ are shown in the left panel of Fig. 2, and the PDS of the additive and convolved signals are shown in the middle and right panels of Fig. 2, respectively. We can see that the PDS of the additive signal is the sum of the PDS of the sub-signals, while the PDS of the convolved signal is the multiplication of the PDS of the sub-signals. The solid lines running through the data points are the best fits using the additive and multiplicative Lorentzian models for the additive signal and the convolved signal, respectively. Overall, the simulation results are in good agreement with those of the theoretical analysis.

Figure 2: PDS of the simulation results. Left panel: the cyan and blue data points are the PDS of the signals $r_{1}(t)$ and $q_{1}(t)$, respectively.
The solid lines are the best fits using a Lorentzian model (accounting for the contribution of Poisson noise requires adding a constant to the Lorentzian model). Middle panel: the blue points are the PDS of the sum of the signals $r_{1}(t)$ and $q_{1}(t)$. The red solid line is the best fit using two summed Lorentzian functions. Right panel: the blue points are the PDS of the convolution of the signals $r_{1}(t)$ and $q_{1}(t)$. The red solid line is the best fit using two multiplicative Lorentzian functions (accounting for the contribution of Poisson noise requires adding a constant to each Lorentzian model). Error bars correspond to $1\sigma$ confidence intervals.

### 2.5 A possible mechanism for introducing a convolution mechanism in the time domain

Assume that the orbits of matter around the black hole are circular and Keplerian. The following equation can then be derived from the conservation of mass and angular momentum (e.g. Ingram 2016): $\frac{\partial\Sigma}{\partial t}=\frac{3}{R}\frac{\partial}{\partial R}[R^{\frac{1}{2}}\frac{\partial}{\partial R}(\nu\Sigma R)],$ (18) where $\Sigma=\rho H$ is the surface density of the corona or disk, and $\nu$ is the kinematic viscosity. Assuming that the surface density at $t=0$ is $\Sigma(t=0,R)=\delta(R-R_{0})$ and that $\nu$ is a constant, we obtain $g(R,t)=\frac{m}{12\pi\nu t}(\frac{R}{R_{0}})^{-\frac{1}{4}}I_{\frac{1}{4}}(\frac{RR_{0}}{6\nu t})e^{-\frac{R_{0}^{2}+R^{2}}{12\nu t}},$ (19) where $I_{\frac{1}{4}}$ is the modified Bessel function and $g(R,t)$ is called the Green's function of the system. Under the condition that the system is linear, the surface density produced by any initial fluctuation $q(t)$ at position $R=R_{0}$ is the convolution of that fluctuation with the Green's function, i.e., $\Sigma(R,t)=q(t)\otimes g(R,t)$ (Ingram 2016). Denoting the mass accretion rate as $\dot{M}(R,t)$, the luminosity corresponding to such an accretion rate is $L(R,t)\propto\dot{M}(R,t)\propto\Sigma(R,t)\propto q(t)\otimes g(R,t)$. If we examine the region $R\ll R_{0}$, we get $g(R,t)\propto t^{-\frac{5}{4}}e^{-\frac{R_{0}^{2}}{12\nu t}}$. The PDS of such a damped exponential signal is a zero-centred Lorentzian function (Ingram 2016). By introducing two types of white noise, one associated with the Green's function and the other superimposed on the QPO signal, we assume that the observed signal is expressed in the time domain as $s(t)=g(t)\otimes wn_{1}\otimes[q(t)+wn_{2}]$. We have assumed that $q(t)$ has the form of a QPO. Considering that the white noise and QPO signals are incoherent, the PDS of the combined signal has the form $P(R,f)\propto P_{b1}(R,f)P_{q}(R,f)+P_{b2}(R,f),$ (20) where $P_{b1}$ denotes the first zero-centred Lorentzian function (i.e., the BBN1 component), $P_{q}$ denotes the non-zero centred Lorentzian function (i.e., the QPO component) and $P_{b2}$ denotes the second zero-centred Lorentzian function (i.e., the BBN2 component). Note that the former term of the above sum is due to fluctuations propagating in the form of the QPO and the latter term is due to fluctuations propagating in the form of white noise, and the two dominate different frequency ranges (we will see this in section 3). Furthermore, it is worth noting that the above result is valid only when $R\ll R_{0}$ and the assumptions about the white noise and QPO fluctuations are satisfied. The total observed luminosity is the integral of the differential luminosity over the entire corona after considering the emissivity (Ingram & Done 2011), but its form is very complicated.
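As a rough numerical illustration of equation (19) (with arbitrary test values for $m$, $\nu$, $R$ and $R_{0}$ that are not fitted to any data), the Green's function can be evaluated with the modified Bessel function from scipy and its power spectrum inspected against the zero-centred Lorentzian shape quoted above:

```python
import numpy as np
from scipy.special import iv        # modified Bessel function I_nu

# Green's function of equation (19) at a fixed radius R, with arbitrary
# test values for m, nu, R and R0 (illustrative only, not fitted to data).
m, nu, R, R0 = 1.0, 0.1, 0.2, 1.0
dt = 0.01
t = np.arange(dt, 200.0, dt)        # start away from t = 0
g = (m / (12.0 * np.pi * nu * t)
     * (R / R0) ** -0.25
     * iv(0.25, R * R0 / (6.0 * nu * t))
     * np.exp(-(R0**2 + R**2) / (12.0 * nu * t)))

# Power spectrum of the response; for R << R0 it is well approximated by
# a zero-centred Lorentzian, as quoted in the text (Ingram 2016).
f = np.fft.rfftfreq(t.size, d=dt)
pds = np.abs(np.fft.rfft(g - g.mean())) ** 2
```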
Nonetheless, it is still worthwhile to start with a simple model to explain the data. For this reason, when fitting the PDS of the real data with the multiplicative PDS model in section 3, only a form similar to equation (20) will be considered.

Figure 3: QPO phase lag correction of three typical _Insight_-HXMT observations (reproduced from the data in Ma21). Each row represents the results of one observation. The FDPLS between the 1-2.6 keV and 100-150 keV energy bands are shown in the left panels. The middle panels are the original QPO energy-dependent phase lags. The right panels are the intrinsic QPO energy-dependent phase lags after correction. The intrinsic QPO phase lags are obtained by subtracting the average of the phase lags of the BBN component (marked by the red dots in the left panels) from the original QPO phase lag (the averaged value marked by the cyan dots in the left panels). The red arrows indicate the high energy band data that we will model in Fig. 6. Error bars correspond to $1\sigma$ confidence intervals.

Figure 4: Several examples of the best fit of the observed PDS data (ObsID P0114661078) in different energy bands with the additive or multiplicative PDS models. The PDS model used in the left panels is the additive PDS model (i.e., equation (21)). The PDS model used in the right panels is the multiplicative PDS model (i.e., equation (22)). The contribution of Poisson noise in all PDS has been subtracted. Error bars correspond to $1\sigma$ confidence intervals.

Figure 5: The parameters of the QPO (fundamental and harmonic components) as functions of photon energy, obtained with the traditional additive and the multiplicative PDS models, respectively. Error bars correspond to $1\sigma$ confidence intervals.

## 3 The FDPLS and PDS of MAXI J1820+070

MAXI J1820+070 was discovered by the Monitor of All-sky X-ray Image (MAXI) during its outburst on 11 March 2018 (Kawamuro et al. 2018). It was confirmed to be a BHB (Torres et al. 2019). _Insight_-HXMT carried out observations three days after its discovery and obtained rich data with a total exposure time of over 2000 ks. Ma21 carried out a detailed temporal analysis of these data and, in particular, detailed calculations of the phase lags in different energy bands. Fig. 3 shows three typical observations from top to bottom, with a clear dip-like feature appearing near the QPO frequency range (the averaged value over this frequency range, shown by the cyan dots, denotes the original QPO phase lag). The phase lags of the low-frequency BBN component are marked with red dots and denoted as the background phase lag. The intrinsic phase lag of the QPO is obtained by subtracting the average of the background phase lag from the original QPO phase lag. After the correction, the absolute value of the QPO phase lag increases with energy in all three observations. For the sake of clarity, the detailed correction steps used in Ma21 are re-summarized as follows (a minimal sketch in code is given after the list): 1) Calculate the FDPLS, and identify the centroid frequency $f_{0}$ and the FWHM $\omega$ of the QPO according to the PDS. 2) The original phase lag of the QPO is defined as the average of the phase lags in the frequency range $f_{0}\pm\omega/2$. 3) The background phase lag is defined as the average of the phase lags below the QPO frequency range. 4) The intrinsic phase lag of the QPO is then defined as the original phase lag minus the background phase lag.
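The sketch below is our own paraphrase of these steps, not Ma21's actual code; in particular, the exact frequency range averaged for the background lag is an assumption:

```python
import numpy as np

def intrinsic_qpo_lag(freq, lag, f0, fwhm):
    """Sketch of the Ma21 correction summarized in steps 1-4 above.

    freq, lag give the measured FDPLS; f0 and fwhm are the QPO centroid
    frequency and FWHM from the PDS fit (step 1).  The exact frequency
    range averaged for the background lag is our assumption.
    """
    qpo = (freq >= f0 - fwhm / 2.0) & (freq <= f0 + fwhm / 2.0)
    bbn = freq < f0 - fwhm / 2.0            # below the QPO frequency range
    original = lag[qpo].mean()              # step 2: original QPO phase lag
    background = lag[bbn].mean()            # step 3: background (BBN) phase lag
    return original - background           # step 4: intrinsic QPO phase lag
```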
Such a correction actually implies two assumptions. The first assumption is that the phase lags of the BBN component in the QPO frequency range are the same as the phase lags below the QPO frequency range, or at least that their averaged values are approximately equal. The second assumption is that the total phase lags (i.e., the observed original phase lags) in the QPO frequency range are equal to the sum of the BBN component phase lags and the QPO intrinsic phase lags. The correction of the phase lags is valid only when these two assumptions are satisfied simultaneously. The first assumption can be considered to be approximately satisfied, because the FDPLS obtained from MAXI J1820+070 shows that the phase lag does not vary significantly with frequency below the QPO frequency range. Thus it is reasonable to assume that, in the QPO frequency range, the phase lags of the BBN component are approximately equal to the phase lags below the QPO frequency range. As to whether the second assumption can be satisfied, we first need to make an assumption about how the BBN component and the QPO component combine into the observed signal. From the discussion in section 2 we know that if the observed signal is considered to be the sum of the BBN component and the QPO component (the default assumption in most of the literature), the second condition cannot be satisfied. The second condition can be satisfied only when the observed signal is considered as a convolution of the BBN component and the QPO component. In the case that the observed signal is the convolved signal, its PDS needs to be fitted by a multiplicative PDS model. In section 2.5 we introduced a multiplicative PDS model, so we use it to fit the PDS in different energy bands and compare it with the traditional additive PDS model. We do not explore the multiplicative signal (i.e., the total signal being the multiplication of the sub-signals in the time domain) further, because we are mainly concerned with the additive and convolved signals. The additive PDS model has the form $\begin{split}P_{\rm add}(f)=&\frac{K_{1}(\omega_{1}/(2\pi))}{(\omega_{1}/2)^{2}+(f-f_{c_{1}})^{2}}+\frac{K_{2}(\omega_{2}/(2\pi))}{(\omega_{2}/2)^{2}+(f-f_{c_{2}})^{2}}\\ &+\frac{K_{3}(\omega_{3}/(2\pi))}{(\omega_{3}/2)^{2}+(f-f_{c_{3}})^{2}}+\frac{K_{4}(\omega_{4}/(2\pi))}{(\omega_{4}/2)^{2}+(f-f_{c_{4}})^{2}}+c.\end{split}$ (21) The multiplicative PDS model has the form $\begin{split}P_{\rm mul}(f)=&\frac{K_{1}(\omega_{1}/(2\pi))}{(\omega_{1}/2)^{2}+(f-f_{c_{1}})^{2}}\times[\frac{K_{2}(\omega_{2}/(2\pi))}{(\omega_{2}/2)^{2}+(f-f_{c_{2}})^{2}}\\ &+\frac{K_{3}(\omega_{3}/(2\pi))}{(\omega_{3}/2)^{2}+(f-f_{c_{3}})^{2}}]+\frac{K_{4}(\omega_{4}/(2\pi))}{(\omega_{4}/2)^{2}+(f-f_{c_{4}})^{2}}+c,\end{split}$ (22) which is consistent with the multiplicative PDS model discussed in section 2.5. We use these two models to fit the PDS in different energy bands of a representative _Insight_-HXMT observation (ObsID P0114661078). The data reduction process in this paper is the same as in Ma21. We extract the light curves with a time resolution of 0.03125 s in each energy band (1-2.6 keV, 2.6-4.8 keV, 4.8-7.0 keV, 7-11 keV, 11-23 keV, 25-35 keV, 35-48 keV, 48-67 keV, 67-100 keV, 100-150 keV and 150-200 keV). We then split the light curves into multiple 32-sec segments and calculate the PDS of each segment with Miyamoto normalization (Miyamoto et al. 1991)
for the convenience of calculating the fractional r.m.s later, and finally obtain the averaged PDS in the frequency range 1/32 to 16 Hz. After subtracting the contribution of Poisson noise from the PDS, we fitted the PDS with the additive and multiplicative PDS models. Some fitting examples in different energy bands are shown in Fig. 4. From top to bottom, Fig. 4 shows the PDS fitting results for three energy bands. The left panels are fitted using the traditional additive PDS model, while the right panels are fitted using the multiplicative PDS model. In the multiplicative PDS model, the low-frequency zero-centered Lorentzian component is multiplied onto the QPO component instead of being added, resulting in the left side of the QPO component being lifted up in the right panels of Fig. 4. We can see that the total fitting results are similar but the individual components show some differences. Based on the best fit, we calculate the centroid frequency, the FWHM and the fractional r.m.s of the QPO and the parameters of the BBN in each energy band. (The fractional r.m.s is calculated in the same way as in Bu et al. (2015), but ignoring the background correction, since the correction coefficients are the same for both models and our aim is only to compare their differences; neglecting it leads to a lower fractional r.m.s for energy bands with a lower signal-to-noise ratio, here the higher energy bands, but does not change our conclusion.) The results are listed in tables 2, 3 and 4. As shown in Fig. 5, the centroid frequency and the FWHM of the QPO as functions of photon energy calculated according to the two models are similar, but the fractional r.m.s values given by the two models are significantly different. For the fundamental component of the QPO, the fractional r.m.s given by the traditional additive PDS model is about $2\sim 3$ times higher than that of the multiplicative PDS model, but the trend is the same for both results. For the harmonic component of the QPO, the difference in the fractional r.m.s given by the two PDS models is not very significant. Table 2: Best-fit results for the fundamental frequency component of the QPO obtained using the additive and multiplicative PDS models (i.e. equations (21) and (22)), respectively.
| QPO frequency (Hz) | QPO FWHM (Hz)
---|---|---
energy band (keV) | additive PDS model | multiplicative PDS model | additive PDS model | multiplicative PDS model
1.0-2.6 | $0.44\pm 0.01$ | $0.45\pm 0.01$ | $0.13\pm 0.04$ | $0.14\pm 0.03$
2.6-4.8 | $0.43\pm 0.01$ | $0.45\pm 0.01$ | $0.14\pm 0.03$ | $0.14\pm 0.03$
4.8-7.0 | $0.43\pm 0.01$ | $0.45\pm 0.01$ | $0.13\pm 0.05$ | $0.12\pm 0.04$
7.0-11.0 | $0.43\pm 0.01$ | $0.45\pm 0.01$ | $0.11\pm 0.06$ | $0.11\pm 0.05$
11.0-23.0 | $0.43\pm 0.01$ | $0.44\pm 0.01$ | $0.12\pm 0.03$ | $0.10\pm 0.03$
25.0-35.0 | $0.42\pm 0.01$ | $0.43\pm 0.01$ | $0.11\pm 0.03$ | $0.09\pm 0.02$
35.0-48.0 | $0.43\pm 0.01$ | $0.44\pm 0.01$ | $0.10\pm 0.03$ | $0.08\pm 0.02$
48.0-67.0 | $0.43\pm 0.01$ | $0.44\pm 0.01$ | $0.10\pm 0.03$ | $0.08\pm 0.02$
67.0-100.0 | $0.43\pm 0.01$ | $0.44\pm 0.01$ | $0.11\pm 0.03$ | $0.10\pm 0.02$
100.0-150.0 | $0.42\pm 0.01$ | $0.43\pm 0.01$ | $0.06\pm 0.03$ | $0.09\pm 0.03$
150.0-200.0 | $0.44\pm 0.03$ | $0.45\pm 0.03$ | $0.10\pm 0.13$ | $0.08\pm 0.11$

| QPO r.m.s %* | reduced $\chi^{2}$
---|---|---
energy band (keV) | additive PDS model | multiplicative PDS model | additive PDS model | multiplicative PDS model
1.0-2.6 | $9.14\pm 0.31$ | $3.83\pm 0.10$ | 0.48 | 0.71
2.6-4.8 | $10.26\pm 0.31$ | $4.15\pm 0.07$ | 0.74 | 0.88
4.8-7.0 | $8.99\pm 0.40$ | $3.36\pm 0.09$ | 0.61 | 0.69
7.0-11.0 | $7.59\pm 0.56$ | $3.06\pm 0.13$ | 0.66 | 0.66
11.0-23.0 | $7.86\pm 0.20$ | $2.59\pm 0.05$ | 0.48 | 0.57
25.0-35.0 | $5.65\pm 0.11$ | $1.72\pm 0.02$ | 1.02 | 1.55
35.0-48.0 | $5.54\pm 0.13$ | $1.72\pm 0.03$ | 0.89 | 1.14
48.0-67.0 | $4.75\pm 0.09$ | $1.44\pm 0.02$ | 0.59 | 0.71
67.0-100.0 | $5.24\pm 0.13$ | $1.64\pm 0.02$ | 0.45 | 0.58
100.0-150.0 | $3.88\pm 0.14$ | $1.88\pm 0.04$ | 0.43 | 0.44
150.0-200.0 | $2.05\pm 0.12$ | $0.76\pm 0.06$ | 0.44 | 0.41

* No background correction is applied to the r.m.s because we are only interested in the difference between the results of fitting with the additive PDS model and the multiplicative PDS model, and the correction factors are the same for both models.

Table 3: The same as table 2 but for the harmonic frequency component of the QPO.
| QPO frequency(Hz) | QPO FWHM(Hz) ---|---|--- energy band (keV) | additive PDS model | multiplicative PDS model | additive PDS model | multiplicative PDS model 1.0-2.6 | $0.90\pm 0.02$ | $0.98\pm 0.02$ | $0.37\pm 0.08$ | $0.54\pm 0.09$ 2.6-4.8 | $0.90\pm 0.02$ | $0.94\pm 0.02$ | $0.34\pm 0.09$ | $0.33\pm 0.08$ 4.8-7.0 | $0.91\pm 0.02$ | $0.94\pm 0.02$ | $0.26\pm 0.09$ | $0.24\pm 0.10$ 7.0-11.0 | $0.91\pm 0.06$ | $0.98\pm 0.06$ | $0.47\pm 0.21$ | $0.47\pm 0.18$ 11.0-23.0 | $0.88\pm 0.02$ | $0.89\pm 0.02$ | $0.10\pm 0.10$ | $0.12\pm 0.12$ 25.0-35.0 | $0.91\pm 0.03$ | $0.96\pm 0.03$ | $0.42\pm 0.12$ | $0.35\pm 0.11$ 35.0-48.0 | $0.85\pm 0.09$ | $0.85\pm 0.23$ | $0.42\pm 0.34$ | $0.41\pm 0.34$ 48.0-67.0 | $0.93\pm 0.04$ | $0.97\pm 0.02$ | $0.75\pm 0.09$ | $0.36\pm 0.08$ 67.0-100.0 | $0.91\pm 0.02$ | $0.95\pm 0.02$ | $0.31\pm 0.07$ | $0.25\pm 0.06$ 100.0-150.0 | $0.94\pm 0.03$ | $1.01\pm 0.03$ | $0.45\pm 0.14$ | $0.39\pm 0.13$ 150.0-200.0 | $1.01\pm 0.04$ | $1.12\pm 0.04$ | $0.39\pm 0.14$ | $0.65\pm 0.12$ | QPO r.m.s % | | energy band (keV) | additive PDS model | multiplicative PDS model | | 1.0-2.6 | $9.55\pm 1.22$ | $10.04\pm 0.58$ | | 2.6-4.8 | $9.47\pm 1.28$ | $7.76\pm 0.79$ | | 4.8-7.0 | $8.57\pm 1.65$ | $6.63\pm 1.30$ | | 7.0-11.0 | $9.29\pm 1.89$ | $7.57\pm 0.98$ | | 11.0-23.0 | $4.53\pm 1.19$ | $3.35\pm 2.41$ | | 25.0-35.0 | $8.76\pm 1.44$ | $5.93\pm 0.92$ | | 35.0-48.0 | $5.55\pm 2.10$ | $6.16\pm 5.68$ | | 48.0-67.0 | $9.42\pm 0.69$ | $4.51\pm 0.55$ | | 67.0-100.0 | $6.37\pm 0.88$ | $4.04\pm 0.77$ | | 100.0-150.0 | $5.59\pm 1.07$ | $3.54\pm 0.71$ | | 150.0-200.0 | $4.43\pm 0.71$ | $4.77\pm 0.35$ | | Table 4: The same as table 2 but for the BBN components. | BBN1 FWHM (Hz) | BBN2 FWHM (Hz) ---|---|--- energy band (keV) | additive PDS model | multiplicative PDS model | additive PDS model | multiplicative PDS model 1.0-2.6 | $0.48\pm 0.08$ | $0.27\pm 0.04$ | $3.17\pm 0.51$ | $3.01\pm 0.35$ 2.6-4.8 | $0.62\pm 0.19$ | $0.28\pm 0.04$ | $3.89\pm 0.76$ | $3.02\pm 0.44$ 4.8-7.0 | $0.87\pm 0.47$ | $0.30\pm 0.06$ | $3.70\pm 1.61$ | $2.54\pm 0.61$ 7.0-11.0 | $0.54\pm 0.24$ | $0.32\pm 0.09$ | $10.27\pm 9.14$ | $5.56\pm 3.56$ 11.0-23.0 | $0.92\pm 1.92$ | $0.42\pm 0.38$ | $2.56\pm 1.59$ | $2.25\pm 0.44$ 25.0-35.0 | $0.69\pm 0.35$ | $0.38\pm 0.07$ | $4.70\pm 1.89$ | $3.26\pm 0.95$ 35.0-48.0 | $0.58\pm 0.80$ | $\cdots$ | $6.24\pm 15.27$ | $2.06\pm 2.09$ 48.0-67.0 | $0.65\pm 0.20$ | $0.41\pm 0.05$ | $10.83\pm 2.89$ | $2.77\pm 0.52$ 67.0-100.0 | $0.69\pm 0.54$ | $0.43\pm 0.09$ | $2.74\pm 0.82$ | $2.44\pm 0.35$ 100.0-150.0 | $0.71\pm 0.45$ | $0.50\pm 0.08$ | $3.76\pm 1.71$ | $3.10\pm 0.87$ 150.0-200.0 | $1.86\pm 0.18$ | $0.61\pm 0.08$ | $18.88\pm 12.81$ | $6.55\pm 2.43$ | BBN1 r.m.s % | BBN2 r.m.s % energy band (keV) | additive PDS model | multiplicative PDS model | additive PDS model | multiplicative PDS model 1.0-2.6 | $16.32\pm 1.30$ | $\cdots$ | $20.12\pm 1.18$ | $22.07\pm 0.67$ 2.6-4.8 | $14.36\pm 2.21$ | $\cdots$ | $20.56\pm 1.19$ | $23.23\pm 0.73$ 4.8-7.0 | $14.40\pm 5.28$ | $\cdots$ | $18.60\pm 3.27$ | $22.31\pm 1.29$ 7.0-11.0 | $11.55\pm 2.11$ | $\cdots$ | $15.69\pm 5.61$ | $15.50\pm 2.27$ 11.0-23.0 | $21.02\pm 0.75$ | $\cdots$ | $19.22\pm 7.50$ | $20.85\pm 1.03$ 25.0-35.0 | $9.87\pm 2.65$ | $\cdots$ | $14.66\pm 1.34$ | $16.88\pm 0.92$ 35.0-48.0 | $11.34\pm 1.83$ | $\cdots$ | $7.87\pm 3.83$ | $11.38\pm 4.97$ 48.0-67.0 | $8.01\pm 1.10$ | $\cdots$ | $10.53\pm 0.93$ | $13.04\pm 0.59$ 67.0-100.0 | $6.19\pm 4.02$ | $\cdots$ | $13.18\pm 1.96$ | $14.56\pm 0.56$ 100.0-150.0 | $5.33\pm 2.21$ | $\cdots$ | 
$9.17\pm 1.30$ | $10.51\pm 0.66$ 150.0-200.0 | $10.49\pm 0.56$ | $\cdots$ | $11.46\pm 4.92$ | $9.34\pm 0.77$

Figure 6: Modeling results for the FDPLS between the signals $s_{\rm l}$ and $s_{\rm h}$ and their respective PDS (see section 3 for the definitions of $s_{\rm l}$ and $s_{\rm h}$). In panels a and b, the solid lines are the best fits using the models proposed in section 3, with the best-fit parameters listed in table 6. The yellow bands running through the two panels are the QPO FWHM frequency ranges (including fundamental and harmonic frequencies). Error bars correspond to $1\sigma$ confidence intervals.

Table 5: Timing properties of $s_{\rm lq}$, $s_{\rm lnq}$, $s_{\rm hq}$ and $s_{\rm hnq}$ (see section 3 for their definitions). signals | bin size (s) | mean rate (cts/s) | exposure (s) | fractional r.m.s | PDS type | FDPLS type ---|---|---|---|---|---|--- $s_{\rm lq}$ | 0.03125 | 142.5 | 8000 | 27.00% | QPO | dip $s_{\rm hq}$ | 0.03125 | 63.5 | 8000 | 9.43% | QPO $s_{\rm lnq}$ | 0.03125 | 142.5 | 8000 | 51.00% | Non-QPO | constant $s_{\rm hnq}$ | 0.03125 | 63.5 | 8000 | 15.70% | Non-QPO

Table 6: PDS and FDPLS fitting results for $s_{\rm l}$ and $s_{\rm h}$. $s_{\rm l}$ PDS model | $s_{\rm h}$ PDS model | FDPLS model ---|---|--- parameter name | value | parameter name | value | parameter name | value $K_{1}$ | $0.007\pm 0.002$ | $K_{1}$ | $0.001\pm 0.000$ | $A_{1}$ | $-0.109\pm 0.015$ $f_{c1}$ | $0.442\pm 0.008$ | $f_{c1}$ | $0.430\pm 0.010$ | $\mu_{1}$ | $0.405\pm 0.011$ $\omega_{1}$ | $0.119\pm 0.033$ | $\omega_{1}$ | $0.090\pm 0.030$ | $\sigma_{1}$ | $0.062\pm 0.008$ $K_{2}$ | $0.009\pm 0.002$ | $K_{2}$ | $0.001\pm 0.000$ | $A_{2}$ | $-0.204\pm 0.041$ $f_{c2}$ | $0.900\pm 0.020$ | $f_{c2}$ | $0.860$ (frozen) | $\mu_{2}$ | $0.776\pm 0.037$ $\omega_{2}$ | $0.370\pm 0.083$ | $\omega_{2}$ | $0.200$ (frozen) | $\sigma_{2}$ | $0.185\pm 0.024$ $K_{3}$ | $0.027\pm 0.004$ | $K_{3}$ | $0.006$ (frozen) | c | $1.102\pm 0.040$ $f_{c3}$ | $0.000$ (frozen) | $f_{c3}$ | $0.000$ (frozen) | $\cdots$ | $\cdots$ $\omega_{3}$ | $0.477\pm 0.080$ | $\omega_{3}$ | $2.061\pm 0.425$ | $\cdots$ | $\cdots$ $K_{4}$ | $0.040\pm 0.005$ | c | $0.016\pm 0.000$ | $\cdots$ | $\cdots$ $f_{c4}$ | $0.000$ (frozen) | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ $\omega_{4}$ | $3.175\pm 0.512$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ c | $0.007\pm 0.000$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$

In addition, we would like to know how the phase lags should look in MAXI J1820+070 for additive and convolved signals, so we performed some simulations. We use the same data (ObsID P0114661078) as above as an example to show how effective this correction is. From panels h and i of Fig. 3 we can see that the QPO phase lags between the high and low energy bands are completely flipped before and after the correction. We focus on two energy bands of these signals: the reference energy band (1-2.6 keV) and the high energy band (100-150 keV). The phase lag between the reference energy band and the high energy band is indicated by the red arrows in panels h and i of Fig. 3. After reducing the data in the same way as Ma21, we extract two light curves, one in the reference energy band (denoted $s_{\rm l}$) and one in the high energy band (denoted $s_{\rm h}$). The mean count rates of $s_{\rm l}$ and $s_{\rm h}$ are 285 counts/s and 127 counts/s, respectively, and the effective exposure time of both is 8 ks.
The FDPLS and PDS of $s_{\rm l}$ and $s_{\rm h}$ are first fitted to obtain the best models, and after that, the best models are used for simulations. For the FDPLS between $s_{\rm l}$ and $s_{\rm h}$, the model takes the form ${\rm lag}(f)=\frac{A_{1}}{\sigma_{1}\sqrt{2\pi}}e^{[{-{(f-\mu_{1})^{2}}/{{2\sigma_{1}}^{2}}}]}+\frac{A_{2}}{\sigma_{2}\sqrt{2\pi}}e^{[{-{(f-\mu_{2})^{2}}/{{2\sigma_{2}}^{2}}}]}+c,$ (23) where the first and second terms represent the dip-like phase lags of the QPO components (including fundamental and harmonic frequencies) and the last term represents the phase lags of the non-QPO components (BBN components). For the PDS of $s_{\rm l}$ and $s_{\rm h}$, a constant term plus the sum of four Lorentzian functions is used to fit the data, i.e., the PDS model is the same as equation (21). We find that, when fitting the PDS of $s_{\rm h}$, the high-frequency zero-centred Lorentzian component is not required. Moreover, in fitting the PDS of $s_{\rm h}$ we found that the harmonic frequency component of the QPO is not well constrained due to the low signal-to-noise ratio of the data. We thus fix the parameters of the QPO harmonic frequency component, which does not affect the goodness of fit but is useful for our subsequent simulation of the QPO components. The fitting results are shown in Fig. 6 and the best-fit parameters of the above models are listed in table 6. It is worth pointing out that we use an additive PDS model to fit the PDS here, which is correct for an additive signal but not for a convolved signal, for which a multiplicative PDS model should be used. However, we note that the FDPLS of the convolved signal depends only on the FDPLS of the sub-signals and is independent of their PDS, so the PDS model used here has no effect on the FDPLS calculation of the convolved signal. We then simulate four sub-signals based on the best FDPLS and PDS models obtained above. The PDS of the QPO component is modeled using the sum of the non-zero-centred Lorentzian components, and the PDS of the BBN component is modeled using the sum of the zero-centred Lorentzian components. We first simulate four signals using the TK95 algorithm, denoted $s_{\rm lq}$, $s_{\rm lnq}$, $s_{\rm hq}$, $s_{\rm hnq}$, which stand for the QPO component and the BBN component in the reference energy band, and the QPO component and the BBN component in the high energy band, respectively. After that, we use the algorithm proposed in section 2.3 to make the FDPLS of $s_{\rm lq}$ and $s_{\rm hq}$ satisfy the Gaussian components of the best-fit model and the FDPLS of $s_{\rm lnq}$ and $s_{\rm hnq}$ satisfy the constant component of the best-fit model. The timing properties of these four signals are listed in table 5. Note that in calculating the FDPLS we split the signal into QPO and non-QPO components, which in effect assumes that the contribution of the additive component of equation (20) to the overall FDPLS can be neglected, i.e., that the extent to which this additive component breaks the additivity of the phase lags of the convolved components is negligible, as we will explain in detail in the discussion section. The PDS of $s_{\rm lq}$, $s_{\rm lnq}$, $s_{\rm hq}$, $s_{\rm hnq}$ are shown in the top panel of Fig. 7. The FDPLS between $s_{\rm lq}$ and $s_{\rm hq}$ shows two dips, and the FDPLS between $s_{\rm lnq}$ and $s_{\rm hnq}$ is constant, as shown in the middle panel of Fig. 7.
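For reference, the simulation machinery used here, i.e., the TK95 generator plus the FDPLS-imposing step of section 2.3, can be condensed into a short sketch (our own outline; the function names are ours, and details such as the mean count rate and the normalization are glossed over):

```python
import numpy as np

rng = np.random.default_rng(1)

def tk95(pds, n, dt):
    """TK95: a random realization of a time series with a given PDS model.
    pds is a callable evaluated on the rfft frequency grid (sketch only)."""
    f = np.fft.rfftfreq(n, d=dt)
    amp = np.sqrt(pds(f) / 2.0)
    coef = amp * (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size))
    coef[0] = 0.0                   # zero mean; a mean rate can be added later
    return np.fft.irfft(coef, n=n)

def impose_fdpls(s, s2, phi):
    """Section 2.3: build s'' with the PDS of s2 and FDPLS phi(f) w.r.t. s."""
    n = s.size
    f = np.fft.rfftfreq(n)
    S, S2 = np.fft.rfft(s), np.fft.rfft(s2)
    ccf = np.abs(S) * np.abs(S2) * np.exp(1j * phi(f))    # equation (14)
    out = np.zeros_like(ccf)
    nz = np.abs(S) > 0
    out[nz] = ccf[nz] / np.conj(S[nz])                    # equation (15)
    return np.fft.irfft(out, n=n)
```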
Figure 7: Simulation results based on the data modeling shown in Fig. 6. Upper panel: the simulated PDS of the signals $s_{\rm lq}$, $s_{\rm lnq}$, $s_{\rm hq}$ and $s_{\rm hnq}$. Middle panel: the simulated FDPLS (blue dots) of $s_{\rm lq}$ and $s_{\rm hq}$ and the simulated FDPLS (green dots) of $s_{\rm lnq}$ and $s_{\rm hnq}$. Bottom panel: the calculated FDPLS of the total signal when the total signal is an additive (cyan) or a convolved (red) signal. The gray dots are the observed FDPLS. The yellow bands running through the panels are the QPO FWHM frequency ranges (including fundamental and harmonic frequencies). Error bars correspond to $1\sigma$ confidence intervals.

Then, $s_{\rm lq}$ and $s_{\rm lnq}$ are added/convolved to obtain the additive/convolved signal, and $s_{\rm hq}$ and $s_{\rm hnq}$ are added/convolved to obtain the other additive/convolved signal. The FDPLS of the additive/convolved signals is shown by the cyan/red dots in the lower panel of Fig. 7. The gray dots in the figure are the observed data. As can be seen from Fig. 7, the simulated results for the additive signal are very different from those given by the data, but the simulated results for the convolved signal match the data perfectly. This indicates that the observed FDPLS can distinguish between convolved and additive signals.

## 4 discussion and summary

In this paper, we investigate the mechanism behind the phase lag correction that was successfully applied for the first time by Ma21 to MAXI J1820+070, where strong BBN and the QPO coexist. After correcting the phase lag of the QPO, the absolute value of the QPO phase lag increases monotonically with photon energy in all observations. Ma21 explained the phase lag behavior of the QPO by employing a compact jet with precession. In this scenario, the high-energy photons come from the part of the jet closer to the black hole, and the precession of the compact jet causes the QPO phenomenon and allows the high-energy photons to reach the observer first, resulting in a soft lag. Because the phase lag behavior can have a large impact on physical conclusions, it is necessary to investigate the validity of this correction method. Since we want to obtain the intrinsic properties of the QPO, and what we observe is some kind of superposition of the QPO and the BBN components, we have to face the question of how these components constitute the total signal. We found that the correction method is effective only when the sub-signals are synthesized into the total signal by convolution. If the total signal is the convolved signal, the intrinsic phase lags of the QPO can be obtained by subtracting the phase lags of the BBN component from the original phase lags of the total signal, as successfully implemented in Ma21. If the observed total signals are convolved signals, the corresponding PDS cannot be fitted simply by summing a series of Lorentzian functions (the convention in most of the literature) but requires a multiplicative PDS model. We then introduce the convolution mechanism by assuming the propagation of the QPO waves in the corona (possibly due to magneto-acoustic waves propagating within the corona, e.g. Cabanac et al. 2010). The propagation of a fluctuation in the form of a Dirac delta function gives rise to the Green's function, and any form of timing fluctuation will be the convolution of that fluctuation with the Green's function (Ingram 2016).
We assume that the Green’s function is first convolved with the white noise and then convolved with the QPO signal to form the low-frequency part of the observed signal, while the high-frequency part is the result of the convolution of the Green’s function with the two white noise components. If the Green’s function and the QPO signal are convoluted in the time domain, the total PDS will be the multiplication of their respective PDS according to the convolution theorem. Based on this, we introduce a multiplicative PDS model to fit the observed PDS in a representative _Insight_ -HXMT observation in 11 different energy bands. For comparison, we also fitted the same data using the traditional additive PDS model. Overall, both additive and multiplicative PDS models fit the observed data well, but the individual components have some differences. The two models give little difference in the centroid frequency as well as in the FWHM of the QPO . For the fundamental frequency component of the QPO, the fraction of r.m.s of the QPO given by the traditional additive PDS model is about $2\sim 3$ higher than that of the multiplicative PDS model, but the trend is the same for both results. For the harmonic frequency component of QPO, the fractional rms given by the two models are not significantly different. For the traditional additive PDS model, the low-frequency zero-centred Lorentzian component can be considered as the variability due to the propagation of the white noise fluctuation in the outer region of the corona (Ingram & Done 2011), and the narrow Lorentzian components stand for the fundamental and harmonic components of the QPO, and the high frequency zero- centred Lorentzian component is responsible for the variability due to the propagation of the white noise fluctuation in the inner region of the corona (Ingram & Done 2011). All these terms are simply added together, which means that there is no coherence between them. For the multiplicative PDS model, we find that the low frequency component of the PDS can be fitted by multiplying the fundamental and harmonic components of the QPO with a zero-centred Lorentzian function, in addition to an additional additive component to produce the high frequency part of the PDS. The additive Lorentzian component plays the same as the role in the additive PDS model. Therefore, for our multiplicative model, the high-frequency part does not need to be convolved to the QPO signal, but is simply added together. This additive component appearing in the PDS model looks to destroy the additivity of the phase lag brought about by the time domain convolution. We ignored the contribution of this additive component in our previous phase lag calculations. We make a simulation to investigate the effect of this additional additive component on the total FDPLS. Due to the dependence of the FDPLS of additive signals on the PDS of the individual components, we need to know the PDS parameters of each component. Specifically, we first obtain the PDS parameters for each component based on the results of the fit of the multiplicative PDS model (shown in the right panels of Fig. 4), and then simulate four signals based on the mean of these parameters in two energy bands: $x_{1}$ (the convolved signal of energy band 1), $y_{1}$ (the additional additive signal of energy band 1), $x_{2}$ (the convolved signal of energy band 2) and $y_{2}$ (the additional additive signal of energy band 2). The PDS of these four signals are shown in the upper panel of Fig. 8. 
We then set the FDPLS model of $x_{1}$ and $x_{2}$ to $\phi(f)=-0.5e^{-\frac{(f-0.4)^{2}}{0.1^{2}}}-0.3e^{-\frac{(f-0.8)^{2}}{0.15^{2}}}+1$ and the FDPLS model of $y_{1}$ and $y_{2}$ to 1. Finally, we calculate the FDPLS of $x_{1}$ and $x_{2}$ and the FDPLS of $x_{1}+y_{1}$ and $x_{2}+y_{2}$, respectively. By comparing these two FDPLS, the effect of the additional additive component on the total FDPLS can be assessed. As can be seen from the lower panel of Fig. 8, the additional additive component has almost no effect on the total FDPLS, except for a slight dilution of the phase lag of the harmonic component. It is therefore reasonable to ignore the effect of the additional additive component on the FDPLS of the convolution components in our previous analysis.

Figure 8: The effect of an additional additive component on the total FDPLS. Upper panel: PDS of the simulated signals. Lower panel: FDPLS with or without the additional additive component.

Traditionally, it is mostly assumed that the observed components are additive in the time domain, and it has also been suggested that it might be more reasonable to multiply these components in the time domain based on the fluctuation propagation model (e.g. Ingram & van der Klis 2013). However, neither of these two models can explain the phase lag correction in Ma21. In order to explain the correction of the phase lags in Ma21, we propose a convolution model instead of the additive and multiplicative models in the time domain, which is supported by the comparison between simulations and data for both the PDS and the FDPLS. This suggests that the convolution model can explain the behaviour of the phase lags observed in MAXI J1820+070, in which case the phase lag correction method applied in Ma21 is correct. Finally, it is worth pointing out that our current convolution model still has limitations. For example, it is not yet possible to explain the energy dependence of the phase lags using the convolution model, and the relationship of the individual components to specific physical processes needs further development. However, it is certain that at least part of the time-domain signal is filtered by the system before it reaches the observer (e.g. both the accretion disk and the corona/jet act as low-pass filters to some extent), and these response processes are necessarily accompanied by time-domain convolution operations.

## Acknowledgments

This research has made use of the data from the _Insight_-HXMT mission, a project funded by the China National Space Administration and the Chinese Academy of Sciences. This work is supported by the National Natural Science Foundation of China under grants 12133007, U1838201 and U1938201.

## Data Availability

The data used in this paper can be found on the _Insight_-HXMT website (http://hxmtweb.ihep.ac.cn/).

## References

* Arnaud (1996) Arnaud K. A., 1996, in Jacoby G. H., Barnes J., eds, Astronomical Society of the Pacific Conference Series Vol. 101, Astronomical Data Analysis Software and Systems V. p. 17
* Belloni & Hasinger (1990) Belloni T., Hasinger G., 1990, A&A, 227, L33
* Belloni et al. (2002) Belloni T., Psaltis D., van der Klis M., 2002, ApJ, 572, 392
* Bu et al. (2015) Bu Q.-c., Chen L., Li Z.-s., Qu J.-l., Belloni T. M., Zhang L., 2015, ApJ, 799, 2
* Cabanac et al. (2010) Cabanac C., Henri G., Petrucci P. O., Malzac J., Ferreira J., Belloni T. M., 2010, MNRAS, 404, 738
* Cui (1999) Cui W., 1999, ApJ, 524, L59
* Done et al. (2007) Done C., Gierliński M., Kubota A., 2007, A&ARv, 15, 1
* Huppenkothen et al.
(2019) Huppenkothen D., et al., 2019, ApJ, 881, 39 * Ingram (2016) Ingram A. R., 2016, Astronomische Nachrichten, 337, 385 * Ingram & Done (2011) Ingram A., Done C., 2011, MNRAS, 415, 2323 * Ingram & Motta (2019) Ingram A. R., Motta S. E., 2019, New Astron. Rev., 85, 101524 * Ingram & van der Klis (2013) Ingram A., van der Klis M., 2013, MNRAS, 434, 1476 * Ingram et al. (2009) Ingram A., Done C., Fragile P. C., 2009, MNRAS, 397, L101 * Kara et al. (2019) Kara E., et al., 2019, Nature, 565, 198 * Kawamuro et al. (2018) Kawamuro T., et al., 2018, The Astronomer’s Telegram, 11399, 1 * Leahy et al. (1983) Leahy D. A., Darbro W., Elsner R. F., Weisskopf M. C., Sutherland P. G., Kahn S., Grindlay J. E., 1983, ApJ, 266, 160 * Lin et al. (2000) Lin D., Smith I. A., Böttcher M., Liang E. P., 2000, ApJ, 531, 963 * Ma et al. (2021) Ma X., et al., 2021, Nature Astronomy, 5, 94 * Miyamoto et al. (1991) Miyamoto S., Kimura K., Kitamoto S., Dotani T., Ebisawa K., 1991, ApJ, 383, 784 * Morgan et al. (1997) Morgan E. H., Remillard R. A., Greiner J., 1997, ApJ, 482, 993 * Motta (2016) Motta S. E., 2016, Astronomische Nachrichten, 337, 398 * Narayan & Yi (1995) Narayan R., Yi I., 1995, ApJ, 452, 710 * Newville et al. (2014) Newville M., Stensitzki T., Allen D. B., Ingargiola A., 2014, LMFIT: Non-Linear Least-Square Minimization and Curve-Fitting for Python, Zenodo, doi:10.5281/zenodo.11813 * Poutanen (2001) Poutanen J., 2001, in White N. E., Malaguti G., Palumbo G. G. C., eds, American Institute of Physics Conference Series Vol. 599, X-ray Astronomy: Stellar Endpoints, AGN, and the Diffuse X-ray Background. pp 310–325 (arXiv:astro-ph/0002505), doi:10.1063/1.1434644 * Psaltis et al. (1999) Psaltis D., Belloni T., van der Klis M., 1999, ApJ, 520, 262 * Qu et al. (2004) Qu J. L., Chen Y., Wu M., Chen L., Song L. M., 2004, Ap&SS, 293, 441 * Qu et al. (2010) Qu J. L., Lu F. J., Lu Y., Song L. M., Zhang S., Ding G. Q., Wang J. M., 2010, ApJ, 710, 836 * Rapisarda et al. (2014) Rapisarda S., Ingram A., van der Klis M., 2014, MNRAS, 440, 2882 * Rapisarda et al. (2016) Rapisarda S., Ingram A., Kalamkar M., van der Klis M., 2016, MNRAS, 462, 4078 * Timmer & Koenig (1995) Timmer J., Koenig M., 1995, A&A, 300, 707 * Torres et al. (2019) Torres M. A. P., Casares J., Jiménez-Ibarra F., Muñoz-Darias T., Armas Padilla M., Jonker P. G., Heida M., 2019, ApJ, 882, L21 * Uttley et al. (2011) Uttley P., Wilkinson T., Cassatella P., Wilms J., Pottschmidt K., Hanke M., Böck M., 2011, MNRAS, 414, L60 * Wijnands et al. (1999) Wijnands R., Homan J., van der Klis M., 1999, ApJ, 526, L33 * Zhang et al. (2020) Zhang L., et al., 2020, MNRAS, 494, 1375 * van der Klis et al. (1987) van der Klis M., Hasinger G., Stella L., Langmeier A., van Paradijs J., Lewin W. H. G., 1987, ApJ, 319, L13
# A simple evaluation of a theta value and the Kronecker limit formula

Fernando Chamizo The author is partially supported by the PID2020-113350GB-I00 grant of the MICINN (Spain) and by "Severo Ochoa Programme for Centres of Excellence in R&D" (SEV-2015-0554). (July 18, 2021)

###### Abstract

We evaluate the classic sum $\sum_{n\in\mathbb{Z}}e^{-\pi n^{2}}$. The novelty of our approach is that it does not require any prior knowledge about modular forms, elliptic functions or analytic continuations. Even the $\Gamma$ function, in terms of which the result is expressed, only appears as a complex function in the computation of a real integral by the residue theorem. Another contribution of this note is to provide a very simple proof of the Kronecker limit formula.

Keywords: Theta function, Gamma function

Mathematics Subject Classification (2020): Primary 11F67, 11Y60; Secondary 11F27

## 1 Introduction

Our goal is to give a proof of the following result with very few prerequisites. In general, the special values of theta and allied functions are related to deep topics in number theory (complex multiplication, class field theory, modular forms, elliptic functions, etc., cf. [3], [2]) which we avoid here.

###### Theorem 1.1. Consider $\theta(z)=\sum_{n\in\mathbb{Z}}e^{\pi in^{2}z}$. Then $\theta(i)=(2\pi)^{-1/4}\sqrt{\frac{\Gamma(1/4)}{\Gamma(3/4)}}=\frac{\Gamma(1/4)}{\pi^{3/4}\sqrt{2}}$ with $\Gamma$ the classical Gamma function $\Gamma(s)=\int_{0}^{\infty}t^{s-1}e^{-t}\;dt$.

We will prove the first equality; the second follows from the relation $\Gamma(s)\Gamma(1-s)=\pi\csc(\pi s)$, which we do not use elsewhere. In fact the $\Gamma$ function only appears as a complex function in the computation of an integral (Lemma 3.2) and, beyond that, we barely use its defining integral representation for $s>1$. Except for a special case of the Jacobi triple product identity and the well known formula for the number of representations as a sum of two squares (both separated in §2 and admitting elementary proofs, not included here), the proof is completely self-contained. The techniques only involve basic real and complex variable methods. No modular properties of $\theta$ and $\eta$, and no functional equations of any $L$-function or Eisenstein series nor their analytic continuations, are required. Our argument includes a proof of a version of the (first) Kronecker limit formula (Proposition 3.1) simpler than the ones we have found in the literature (cf. [8]), which may have independent interest. We address the reader to the interesting paper [4] for the history and relevance of this formula.

## 2 Two auxiliary results

We first recall the factorization of the $\theta$ function.

###### Lemma 2.1. For $|q|<1$, $\sum_{n=-\infty}^{\infty}q^{n^{2}}=\prod_{n=1}^{\infty}\big(1-q^{2n}\big)\big(1+q^{2n-1}\big)^{2}.$

The next result is the classic formula for $r(n)$, the number of representations of $n$ as a sum of two squares, in terms of the nontrivial character $\chi$ modulo $4$ (i.e., $\chi(n)=(-1)^{(n-1)/2}$ for $n$ odd and zero for $n$ even).

###### Lemma 2.2. For $n\in\mathbb{Z}^{+}$ and $s>1$, we have $r(n)=4\sum_{d\mid n}\chi(d)\quad\text{or equivalently,}\quad\sum_{n=1}^{\infty}r(n)n^{-s}=4\zeta(s)L(s)$ with $\zeta(s)=\sum_{n=1}^{\infty}n^{-s}$ the Riemann zeta function and $L(s)=\sum_{n=1}^{\infty}\chi(n)n^{-s}$.

We will say some words about their proofs.
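Before discussing the proofs, the evaluation can be sanity-checked numerically; the following quick check uses the mpmath Python library and is of course not part of the argument:

```python
from mpmath import mp, exp, gamma, inf, nsum, pi, sqrt

mp.dps = 30                                   # working precision (digits)

theta_i = nsum(lambda n: exp(-pi * n**2), [-inf, inf])    # theta(i)
closed = gamma(mp.mpf(1) / 4) / (pi ** mp.mpf("0.75") * sqrt(2))
print(theta_i)   # 1.08643481121...
print(closed)    # agrees with theta_i to all computed digits
```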
Lemma 2.1 comes from the Jacobi triple product identity, which admits elementary combinatorial proofs (see [6, §8.3] and [1]) but arguably, even today, the conceptually most enlightening proof is the classic one based on complex analysis [10, §10.1]. It uses the invariance under two translations of a certain entire function to conclude that it is a constant, which is computed with a beautiful argument due to Gauss [9, §78]. Lemma 2.2 can be derived from the triviality of some spaces of modular forms or from some properties of elliptic functions [10, §10.3.1], [9, §84]. A less demanding proof, requiring quadratic residues and almost nothing else, is to use the representations of an integer by the quadratic forms in a class [6, §12.4]. A longer alternative is to show that $\mathbb{Z}[i]$ is a UFD and deduce the result from $r(p)=4(1+\chi(p))$ for $p$ prime, which is essentially Fermat's two squares theorem [5, Art.182] (see [11] for a "one-sentence" proof of the latter).

## 3 The Kronecker limit formula and the theta evaluation

We first state a compact version of the Kronecker limit formula and provide a proof requiring only the residue theorem and the very easy and well known result [7, p.23] that $(s-1)\zeta(s)\to 1$ as $s\to 1^{+}$. The Epstein zeta function $\zeta(s,Q)$ associated to a positive definite binary quadratic form $Q$ and the Dedekind $\eta$ function are defined by: $\zeta(s,Q)=\sum_{\vec{n}\in\mathbb{Z}^{2}\setminus\{\vec{0}\}}\big(Q(\vec{n})\big)^{-s}\quad\text{ and }\quad\eta(z)=e^{\pi iz/12}\prod_{n=1}^{\infty}\big(1-e^{2\pi inz}\big).$ We assume $s>1$ and $\Im z>0$ to ensure convergence.

###### Proposition 3.1. Let $Q(x,y)=ax^{2}+bxy+cy^{2}$ be a real form with $D=4ac-b^{2}>0$ and $a>0$. Then $\lim_{s\to 1^{+}}\Big(\frac{\sqrt{D}}{4\pi}\zeta(s,Q)-\zeta(2s-1)\Big)=\log\frac{\sqrt{a/D}}{|\eta(z_{Q})|^{2}}\qquad\text{with}\quad z_{Q}=\frac{-b+i\sqrt{D}}{2a}.$

###### Proof. Let $f(s)=-\int_{-\infty}^{\infty}Q(x,1)^{-s}\;dx$. The limit in the statement equals $L_{1}-L_{2}$ with $L_{1}=\lim_{s\to 1^{+}}\frac{\sqrt{D}}{4\pi}\Big(\zeta(s,Q)+2\zeta(2s-1)f(s)\Big),\ L_{2}=\lim_{s\to 1^{+}}\zeta(2s-1)\Big(\frac{\sqrt{D}}{2\pi}f(s)+1\Big).$ L'Hôpital's rule shows $L_{2}=\frac{\sqrt{D}}{4\pi}f^{\prime}(1)$ because $(2s-2)\zeta(2s-1)\to 1$ (and the residue theorem assures $f(1)=-2\pi/\sqrt{D}$). Then the result follows if we prove (3.1) $L_{1}=-\log|\eta(z_{Q})|^{2}\qquad\text{and}\qquad f^{\prime}(1)=-\frac{4\pi}{\sqrt{D}}\log\sqrt{\frac{a}{D}}.$ We have $f^{\prime}(1)=\int_{-\infty}^{\infty}(\log p)/p\;dx$ with $p(x)=ax^{2}+bx+c$. With the change of variables $2ax+b=\sqrt{D}\tan(t/2)$ we obtain $f^{\prime}(1)=-\frac{2}{\sqrt{D}}\int_{-\pi}^{\pi}\log\frac{2|\cos(t/2)|}{\sqrt{D/a}}\;dt=-\frac{2}{\sqrt{D}}\Re\int_{C}\log\Big(\frac{1+z}{\sqrt{D/a}}\Big)\frac{dz}{iz}$ with $C$ the unit circle, where we have used $\log\big(2|\cos(t/2)|\big)=\Re\log(1+z)$ with $z=e^{it}$. Cauchy's integral formula gives the second identity in (3.1). When we sum $Q(m,n)^{-s}$ the contribution of $n=0$ is $2a^{-s}\zeta(2s)$. Let $g_{s}(z)=Q(z,1)^{-s}+Q(z,-1)^{-s}$. For $n\neq 0$, $Q(m,n)=n^{2}Q(m/n,1)=n^{2}Q(-m/n,-1)$.
Then by the residue theorem in the band $B_{\epsilon}=\{|\Im z|<\epsilon\}$ with $0<\epsilon<\Im z_{Q}$, $\zeta(s,Q)-2\frac{\zeta(2s)}{a^{s}}=\sum_{n=1}^{\infty}\frac{1}{n^{2s}}\sum_{m\in\mathbb{Z}}g_{s}\big{(}\frac{m}{n}\big{)}=\sum_{n=1}^{\infty}\frac{-1}{2n^{2s-1}}\int_{\partial B_{\epsilon}}g_{s}(z)i\cot(\pi nz)\;dz.$ As $g_{s}$ is even, $\int_{\partial B_{\epsilon}}=-2\int_{L_{\epsilon}}$ with $L_{\epsilon}=\{\Im z=\epsilon\}$ oriented to the right, and the sum is $\sum_{n}n^{1-2s}\int_{L_{\epsilon}}$. Note that $\int_{L_{\epsilon}}g_{s}=\int_{L_{0}}g_{s}=-2f(s)$. Then adding $2\zeta(2s-1)f(s)$ is equivalent to replacing $i\cot(\pi nz)$ by $i\cot(\pi nz)-1$ in $\int_{L_{\epsilon}}$. The expansion $i\cot w-1=2e^{2iw}/(1-e^{2iw})=2(e^{2iw}+e^{4iw}+\dots)$ ensures exponential decay and we have $L_{1}=\frac{\sqrt{D}}{4\pi}\Big{(}2\frac{\zeta(2)}{a}+\sum_{n,k=1}^{\infty}\frac{2}{n}\int_{L_{\epsilon}}g_{1}(z)e^{2\pi inkz}\;dz\Big{)}.$ Substitute $\zeta(2)=\pi^{2}/6$ and note that $g_{1}(z)=\big{(}a(z-z_{Q})(z-\bar{z}_{Q})\big{)}^{-1}+\big{(}a(z+z_{Q})(z+\bar{z}_{Q})\big{)}^{-1}$. The residue theorem in $\{\Im z>\epsilon\}$ promptly gives $L_{1}=\frac{\pi\sqrt{D}}{12a}+\sum_{n,k=1}^{\infty}\frac{1}{n}\big{(}e^{2\pi nkiz_{Q}}+e^{-2\pi nki\bar{z}_{Q}}\big{)}=\frac{\pi\sqrt{D}}{12a}-\sum_{k=1}^{\infty}\log\big{|}1-e^{2\pi kiz_{Q}}\big{|}^{2}$ where the second equality comes from $\log(1-w)+\log(1-\bar{w})=\log|1-w|^{2}$. The sum is $\log\big{(}|\eta(z_{Q})|^{2}|e^{-\pi iz_{Q}/6}|\big{)}$ and the proof of (3.1) is complete. ∎

The evaluation of an integral will play a role in the final step of our proof of Theorem 1.1. We proceed again by employing the residue theorem.

###### Lemma 3.2.

Let $I=\frac{1}{\pi}\int_{0}^{\infty}\frac{\log t}{\cosh t}\;dt\qquad\text{then}\quad\exp(I)=\frac{\Gamma(3/4)}{\Gamma(1/4)}\sqrt{2\pi}.$

###### Proof.

Consider $f(z)=i\sec(2\pi z)\log\Gamma(1/2+z)$ on the vertical band $B=\big{\{}|\Re z|<1/2\big{\}}$. It defines a meromorphic function (for a certain branch of the logarithm, because $\Gamma$ does not vanish) with simple poles at $z_{\pm}=\pm 1/4$. Clearly the residues satisfy $2\pi i\text{Res}(f,z_{\pm})=\pm\log\Gamma(1/2+z_{\pm})$. This function is integrable along $\partial B$ and the residue theorem shows $\log\frac{\Gamma(3/4)}{\Gamma(1/4)}=\int_{\partial B}f=\int_{-\infty}^{\infty}\frac{\log\Gamma(1+it)}{\cosh(2\pi t)}\;dt-\int_{-\infty}^{\infty}\frac{\log\Gamma(it)}{\cosh(2\pi t)}\;dt.$ Using $\Gamma(1+it)=it\Gamma(it)$ and taking real parts to avoid considerations about the branch of the logarithm, $\log\frac{\Gamma(3/4)}{\Gamma(1/4)}=\int_{-\infty}^{\infty}\frac{\log|t|}{\cosh(2\pi t)}\;dt=\frac{1}{\pi}\int_{0}^{\infty}\frac{\log(t/2\pi)}{\cosh t}\;dt=I-\int_{0}^{\infty}\frac{\log(2\pi)}{\pi\cosh t}\;dt.$ The last integral is $\log\sqrt{2\pi}$, as seen by changing $t=\log u$. ∎

###### Proof of Theorem 1.1.

The identity $\theta(z)=\prod_{n=1}^{\infty}\big{(}1-e^{2\pi inz}\big{)}\big{(}1+e^{\pi i(2n-1)z}\big{)}^{2}$ follows from Lemma 2.1 with $q=e^{\pi iz}$. Some elementary manipulations with the definition of $\eta$ show $\theta(z)=\eta^{2}\big{(}\frac{1}{2}z+\frac{1}{2}\big{)}/\eta(z+1)$. Let $Q=x^{2}+y^{2}$ and $Q^{\prime}=2x^{2}-2xy+y^{2}$ with $z_{Q}=i$ and $z_{Q^{\prime}}=\frac{1+i}{2}$. We have $\zeta(s,Q)=\zeta(s,Q^{\prime})$ because $Q^{\prime}=x^{2}+(x-y)^{2}$.
Then Proposition 3.1 implies $|\eta(z_{Q^{\prime}})|^{2}/|\eta(z_{Q})|^{2}=\sqrt{2}$ and, noting $\theta(i)=|\theta(i)|$ and $|\eta(z+1)|=|\eta(z)|$, $\theta(i)=\theta(z_{Q})=\Big{|}\frac{\eta(z_{Q^{\prime}})}{\eta(z_{Q}+1)}\Big{|}^{2}|\eta(z_{Q}+1)|=\sqrt{2}|\eta(z_{Q})|.$ Recalling Lemma 3.2, Theorem 1.1 is equivalent to $I=-\log\big{(}2|\eta(z_{Q})|^{2}\big{)}$. By Lemma 2.2, we have $\zeta(s,Q)=4\zeta(s)L(s)$ and, by Proposition 3.1, we must prove $\lim_{s\to 1^{+}}\Big{(}\frac{2}{\pi}\zeta(s)L(s)-\zeta(2s-1)\Big{)}=I.$ It is known that $\zeta(s)\sim(s-1)^{-1}+\gamma$ as $s\to 1$ with $\gamma$ the Euler-Mascheroni constant, and this admits a short elementary proof [7, p.23]. Using $\Gamma^{\prime}(1)=-\gamma$ (for a quick proof, write $\Gamma(s)=\lim_{n\to\infty}\int_{0}^{n}\big{(}1-\frac{x}{n}\big{)}^{n}x^{s-1}\;dx$ to obtain, by repeated partial integration, $\lim\frac{n!n^{s}}{s(s+1)\cdots(s+n)}$, Gauss’ definition of $\Gamma$; the derivative of its logarithm at $s=1$ finally gives $\Gamma^{\prime}(1)=-\gamma=\lim\big{(}\log n-\frac{1}{1}-\frac{1}{2}-\cdots-\frac{1}{n}\big{)}$), we have $\zeta(s)-2\Gamma(s)\zeta(2s-1)\to 0$ and the previous limit is $\lim_{s\to 1^{+}}\Big{(}\frac{4}{\pi}\Gamma(s)L(s)-1\Big{)}\zeta(2s-1)=\lim_{s\to 1^{+}}\frac{4\Gamma(s)L(s)-\pi}{2\pi(s-1)}=\frac{2}{\pi}\frac{d}{ds}\Big{|}_{s=1}\big{(}\Gamma(s)L(s)\big{)}$ by L’Hôpital’s rule. It only remains to show that this derivative is $\pi I/2$. Plainly $\Gamma(s)n^{-s}=\int_{0}^{\infty}t^{s-1}e^{-nt}\;dt$. Then $\Gamma(s)L(s)=\int_{0}^{\infty}t^{s-1}\big{(}e^{-t}-e^{-3t}+e^{-5t}-e^{-7t}+\dots\big{)}\;dt=\int_{0}^{\infty}\frac{t^{s-1}}{2\cosh t}\;dt$ and Lemma 3.2 implies the result by differentiating under the integral sign. ∎

### Acknowledgments

I am deeply indebted to E. Valenti.

## References

* [1] G. E. Andrews. A simple proof of Jacobi’s triple product identity. Proc. Amer. Math. Soc., 16:333–334, 1965.
* [2] F. Chamizo and D. Raboso. Modular forms and almost integers _(Spanish)_. Gac. R. Soc. Mat. Esp., 13(3):539–555, 2010.
* [3] D. A. Cox. Primes of the form $x^{2}+ny^{2}$. A Wiley-Interscience Publication. John Wiley & Sons, Inc., New York, 1989. Fermat, class field theory and complex multiplication.
* [4] W. Duke, Ö. Imamoḡlu, and Á. Tóth. Kronecker’s first limit formula, revisited. Res. Math. Sci., 5(2):Paper No. 20, 21, 2018.
* [5] C. F. Gauss. Disquisitiones arithmeticae. Springer-Verlag, New York, 1986. Translated and with a preface by Arthur A. Clarke, revised by W. C. Waterhouse, C. Greither and A. W. Grootendorst and with a preface by Waterhouse.
* [6] L. K. Hua. Introduction to number theory. Springer-Verlag, Berlin-New York, 1982. Translated from the Chinese by P. Shiu.
* [7] H. Iwaniec. Lectures on the Riemann zeta function, volume 62 of University Lecture Series. American Mathematical Society, Providence, RI, 2014.
* [8] Y. Motohashi. A new proof of the limit formula of Kronecker. Proc. Japan Acad., 44:614–616, 1968.
* [9] H. Rademacher. Topics in analytic number theory. Die Grundlehren der mathematischen Wissenschaften, Band 169. Springer-Verlag, New York-Heidelberg, 1973. Edited by E. Grosswald, J. Lehner and M. Newman.
* [10] E. M. Stein and R. Shakarchi. Complex analysis, volume 2 of Princeton Lectures in Analysis. Princeton University Press, Princeton, NJ, 2003.
* [11] D. Zagier. A one-sentence proof that every prime $p\equiv 1\pmod{4}$ is a sum of two squares. Amer. Math. Monthly, 97(2):144, 1990.
Departamento de Matemáticas and ICMAT, Universidad Autónoma de Madrid, 28049 Madrid, Spain. Email address:<EMAIL_ADDRESS>
# Shape the Future of ITS – Optimization and scheduling for a large scale urban transportation system – in a fast-changing world

Yi Zhang
Yi Zhang is affiliated with the Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore 138632. Email: [email protected].

## I Introduction

The rapid growth of the population drives urban sprawl and automobile production, and leads to heavy traffic congestion, pollution, noise and traffic fatalities. Therefore, it is indispensable to develop an eco-friendly, sustainable transport system that extracts high performance from the existing traffic network. Although many transformational products, such as autonomous vehicles (AVs) and drones, have been developed, deployed on the road and continually upgraded, fully replacing traditional cars with AVs requires not only safety and security evaluation but also corresponding physical infrastructure, e.g., non-signalized intersections and charging stations. Also, we still have a long way to go before AVs with level 5 automation are fully commercialized. Thus, traffic signals will predictably continue to play an important role for a long time in this hybrid transportation world, which involves both human-driven cars and AVs at different automation levels. During this transition period, a feasible and implementable approach is more helpful and practical for reshaping the transportation system in the near future. The concept of pedestrian/transit-oriented transport, which is designed for the human body instead of the car body, has been proposed by researchers [2]: “cities with very high walking, cycling and transit mode share (i.e., 75% or more) typically have high density, mixed use urban centres at or above 100-200 people per hectare and are supported by a transportation strategy that prioritizes pedestrians first, then cyclists followed by transit users.” However, vehicle traffic still receives excessively high attention in many cities’ guidelines, and walkability is seldom taken into account. Also, walking is the access mode to public transport (PT), so PT services could be promoted accordingly if pedestrian safety and walking pleasantness are ensured. The PT system has been studied for several decades due to its large ridership and its sustainability in terms of economic efficiency, environmental protection and social equity. Proper bus dispatching and operation can attract more passengers, which encourages commuters to change their travel mode from private automobiles to public buses and thereby further alleviates traffic congestion and air pollution. However, the PT system, a typical ride-sharing transport serving a variety of access needs and providing equal social value, now faces challenges from mobility-on-demand (MOD) systems, which are famed for easy transactions and convenient access via mobile phones. Therefore, we need to explore and create a win-win cooperation model between the PT and AV systems during the current transition period. The flexible demand-driven pattern of AVs could make up for the shortcomings of the PT system's fixed routes. On the other hand, the PT system still handles high-volume transfer tasks, which may not be feasible for AVs due to safety concerns.
The objective of this essay is to propose a set of coordinated technological solutions to transform the existing transport system into a more intelligent, interactive system by adopting optimization and control methods implementable in the near future, thereby improving public services and quality of life for residents. In this essay, three different application scenes closely related to people's daily life are discussed. We first propose a traffic light scheduling strategy via the model predictive control (MPC) method, with the aim to fairly minimize both pedestrians' and vehicles' delay. After that, a combined dispatching-operation system is proposed to further increase the control flexibility, and a corresponding implementation solution for boarding control is also illustrated. Finally, a possible scheme to combine the PT and AV systems is proposed to improve the existing PT system.

## II Working Packages and Tasks

### II-A Traffic light scheduling for vehicle-pedestrian mixed-flow networks

Most traffic signal controllers differentiate vehicles from pedestrians and focus on the contribution to vehicle flows; this is reasonable when pedestrian volume is low. However, in downtown areas where large numbers of pedestrians interfere with vehicular traffic, optimizing traffic signals only for vehicles may create more conflicts between the two groups of traffic participants and potentially reduce economic interest, since pedestrians in CBD areas are usually potential customers of nearby shopping malls. According to SGS Economics and Planning [1], optimized pedestrian flow can generate an additional $1.3 billion a year for the Melbourne CBD area. On the other hand, rich data can be obtained with the help of advanced sensor equipment, such as V2X, 5G communication, Lidar and so on. Powerful machine learning algorithms can be effectively utilized to help predict the traffic flows based on this large database, which better serves the optimized signal controller. In view of this, we propose a signal controller with the aim to fairly minimize both vehicles' and pedestrians' delay [4][7]. Fig. 1 illustrates the framework of the proposed real-time traffic light scheduling strategy implemented in the simulation software VISSIM. Macroscopic flow models for both pedestrians and vehicles are developed, and the impact of the signal on pedestrian crossing capacity is captured in the model. Benefiting from advanced sensors, signal priority levels could also be incorporated into the model to give higher priority to public buses. The mixed-flow model is then solved by adopting a commercial optimization solver or evolutionary algorithms. After that, the optimized signal phases and durations are sent to the simulator VISSIM, which mimics the real urban traffic environment. Meanwhile, traffic information, such as traffic volumes and turning ratios, is stored in the back-end database, which is used to re-train the AI models with machine learning algorithms to predict the required information. The predicted traffic parameters and current real-time information are all sent to the controller side, where the optimization is solved in a rolling-horizon manner; the whole process repeats as the traffic system evolves, forming a closed-loop control strategy.
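To make the closed-loop strategy just described concrete, the following minimal Python sketch (our illustration, not the strategy's actual implementation) mimics the rolling-horizon loop: a placeholder predictor forecasts arrivals, a stand-in optimizer chooses a phase plan over the horizon by weighing pedestrian and vehicle queues, and only the first decision is applied before re-solving. The toy intersection class, the capacities, and the weights are all hypothetical stand-ins for the macroscopic mixed-flow model, the commercial solver, and the VISSIM co-simulation described above.

```python
import random

class ToyIntersection:
    """Hypothetical stand-in for the VISSIM co-simulation."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.ped_q = 0  # waiting pedestrians
        self.veh_q = 0  # queued vehicles

    def arrivals(self):
        """New arrivals in one control interval (toy random demand)."""
        a = {"ped": self.rng.randint(0, 5), "veh": self.rng.randint(0, 8)}
        self.ped_q += a["ped"]
        self.veh_q += a["veh"]
        return a

    def apply(self, phase, ped_cap=6, veh_cap=6):
        """Serve one interval of the chosen phase."""
        if phase == "WALK":
            self.ped_q = max(0, self.ped_q - ped_cap)
        else:
            self.veh_q = max(0, self.veh_q - veh_cap)

def solve_plan(ped_q, veh_q, demand, w_ped=1.0, w_veh=1.0, cap=6):
    """Placeholder for the mixed-flow optimization: greedily pick, for each
    interval of the horizon, the phase serving the larger weighted queue."""
    plan = []
    for d in demand:
        ped_q += d["ped"]
        veh_q += d["veh"]
        if w_ped * ped_q >= w_veh * veh_q:
            plan.append("WALK")
            ped_q = max(0, ped_q - cap)
        else:
            plan.append("GREEN")
            veh_q = max(0, veh_q - cap)
    return plan

def rolling_horizon_control(sim, steps=20, horizon=5):
    history = [sim.arrivals()]
    for _ in range(steps):
        # "AI prediction": naively assume the latest arrivals repeat.
        demand = [history[-1]] * horizon
        plan = solve_plan(sim.ped_q, sim.veh_q, demand)
        sim.apply(plan[0])              # apply only the first decision ...
        history.append(sim.arrivals())  # ... then observe and re-solve
    return sim.ped_q, sim.veh_q

print(rolling_horizon_control(ToyIntersection()))
```

In a real deployment, `solve_plan` would be replaced by the mixed-flow optimization model and `ToyIntersection` by the VISSIM interface; the re-solve-after-one-step pattern is the essence of the rolling-horizon design.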
Figure 1: Framework of traffic light scheduling for vehicle-pedestrian mixed-flow networks

### II-B A combined dispatching-operation strategy for public bus management incorporated with the boarding control

The bus operation systems in most of today's studies are designed based on known bus dispatching times or scheduled headways/frequencies. On the other hand, bus dispatching systems seldom consider operation control for on-road buses. It is understandable that dealing with these two problems separately can reduce the computational complexity significantly compared with the complicated combined model. However, with the increasing application and maturation of telematics (e.g., Automatic Vehicle Location (AVL), Automatic Passenger Counting (APC), etc.) in bus management systems, the collection of real-time information becomes possible; meanwhile, ever-more-powerful computers keep breaking computing records, so solving a combined optimization problem is predictably implementable [5]. Decision variables, such as the bus dispatching time, the bus speed between any two adjacent stops, the bus dwell time at each stop and the OD-based boarding volume, could all be incorporated into a holistic model to enable this combined dispatching-operation bus management system [6]. Also, boarding control has been studied in the literature and has proven highly effective in improving bus service quality; however, it still has not been widely implemented in the real world. Fig. 2 gives a graphic illustration of a future bus stop that makes boarding control applicable. Imagine that, when a bus is approaching the stop, a message is sent via mobile apps or displayed on a screen board located at the bus stop, and only passengers at the front of the queue are selected to enter the designated boarding area. The identification of front-queue passengers can be realized by cameras or Lidar installed at the bus stop. When the bus reaches the bus stop, only passengers who are waiting in the designated area are allowed to board the bus. This requires infrastructural enhancement at the bus stop, not only the area re-design but also advanced sensors.

Figure 2: Graphic illustration of a future bus stop

### II-C Autonomous bus fleet management for mobility-on-demand system

To increase vehicle utilization and reduce carbon dioxide emissions, online carsharing services have been rolled out in many cities in the past few years. Also, the adoption of electric vehicles could further reduce emissions and promote a sustainable environment; meanwhile, the forthcoming commercialization of AVs is constantly reshaping the transportation system. All of the above emerging concepts and technologies have combined to produce the autonomous mobility-on-demand (AMOD) system. The AMOD system is not a new concept, having been proposed as early as 2014 [3]; however, implementations and research studies of AMOD systems mainly focus on private cars and taxi services, and this may bring challenges to traditional public transport, which has been regarded as a symbol of equity and accessibility. Therefore, it is strongly necessary to identify and explore the synergistic possibilities between AMOD and public transport, especially during this transition period. Fig. 3 illustrates one possible case capturing how AVs could support the existing public bus lines.
The left subfigure describes two traditional bus lines served by corresponding buses dispatched from the terminal. The orange bar represents the demand size at each bus stop. All control variables mentioned in Section II-B could be adopted and tuned to facilitate the operation of the traditional bus service. Meanwhile, we could also dispatch AVs to serve high-demand bus stops across multiple bus lines, owing to their flexible on-demand characteristics. AVs are normally commercialized as electric cars with limited battery capacity; thus, a smart AV routing strategy must be developed that caters to user demand as well as the battery level, as illustrated in Fig. 4. Since AVs do not follow a fixed bus route and on-board passengers need to be delivered to their destined stops, an on-demand autonomous bus fleet dispatching strategy inevitably requires boarding control at each bus stop; this lets the system consider not only the routing decision but also the volume dynamics, which differs from the current one-passenger pick-up and drop-off problem. The joint optimization of routing, subsequent bus stop selection and the corresponding boarding volume makes the problem more challenging but also interesting.

Figure 3: Application of autonomous buses on supporting traditional public bus lines

Figure 4: Autonomous bus routing for multi-loading and on-road charging

## III Conclusion

In this essay, technological solutions from a pedestrian/transit-oriented perspective have been provided. Firstly, an adaptive traffic signal control framework is presented, with consideration of a macroscopic mixed-flow optimization model, the VISSIM simulation platform, a historical database and a machine learning-enabled AI prediction model. The experimental results in our paper [4] have demonstrated that the proposed strategy can strike a good balance between pedestrians' and vehicle drivers' needs in a Manhattan-shaped network. Therefore, we are optimistic about implementing this strategy in the real world to enhance existing traffic signals in the near future. Next, bus dispatching and on-road operation, especially boarding control, are combined to minimize passenger delay time and operating bus vacancy, which makes our strategy more flexible and adaptive in meeting passenger demand. In our previous study [6], a multi-bus dispatching and boarding control strategy reduced the remaining passenger volume by roughly 50% compared with the timetable-based fixed schedule, which is quite promising. Moreover, to achieve a synergistic objective between the PT system and the AV system, a scheme is proposed where AVs are dispatched to pick up passengers at high-demand stops to support the PT lines. This scheme shall be further studied in my future work to incorporate multiple optimized variables, including AV routing, volume dynamics and boarding limits. Overall, the essay presents three application scenes in the intelligent transportation field. The suggested strategies and guidelines can assist in developing an intelligent future urban transport system in order to provide citizens with a smarter, safer and more interactive transportation experience.

## References

* [1] SGS Economics and Planning. CBD pedestrian analysis. Technical report, City of Melbourne, 2014.
* [2] Jennie Moore, Kirstin Miller, Richard Register, Sarah Campbell, Julian Zelazny, and Sven Eberlein. International ecocity standards (brochure). Technical report, 2017.
* [3] Kevin Spieser, Kyle Treleaven, Rick Zhang, Emilio Frazzoli, Daniel Morton, and Marco Pavone. Toward a systematic approach to the design and evaluation of automated mobility-on-demand systems: A case study in Singapore. In Road vehicle automation, pages 229–245. Springer, 2014.
* [4] Yi Zhang, Kaizhou Gao, Yicheng Zhang, and Rong Su. Traffic light scheduling for pedestrian-vehicle mixed-flow networks. IEEE Transactions on Intelligent Transportation Systems, 20(4):1468–1483, 2018.
* [5] Yi Zhang, Rong Su, Yicheng Zhang, and Nadeesha Sandamali Gammana Guruge. A multi-bus dispatching strategy based on boarding control. IEEE Transactions on Intelligent Transportation Systems, 2021.
* [6] Yi Zhang, Rong Su, Yicheng Zhang, and Bohui Wang. Dynamic multi-bus dispatching strategy with boarding and holding control for passenger delay alleviation and schedule reliability: A combined dispatching-operation system. IEEE Transactions on Intelligent Transportation Systems, 2021.
* [7] Yi Zhang, Yicheng Zhang, and Rong Su. Pedestrian-safety-aware traffic light control strategy for urban traffic congestion alleviation. IEEE Transactions on Intelligent Transportation Systems, 22(1):178–193, 2019.

Yi Zhang (S'17-M'21) received her Bachelor of Engineering degree from Shandong University, China, in 2014, and the PhD degree in Electrical and Electronic Engineering from Nanyang Technological University, Singapore, in 2020. She is currently a research scientist at the Institute for Infocomm Research (I2R) in the Agency for Science, Technology and Research, Singapore (A*STAR). Her research interests focus on intelligent transportation systems, including urban traffic flow management, model-based traffic signal scheduling, lane change prediction, and bus dispatching and operation management.
Now, using the concept of dual norm, this is equal to $\displaystyle p_{MM}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}\left(\mathbf{V}_{i},\mathbf{Y}_{i}\right)$ s.t. $\displaystyle~{}\max_{\begin{subarray}{c}\|\mathbf{g}\|_{2}\leq 1\\ \|\mathbf{w}_{1}\|_{2}\leq 1\\ j\in[m]\end{subarray}}\mathbf{g}^{\top}\sum_{i=1}^{n}(\mathbf{I}_{d}\otimes\mathbf{V}_{i}^{T})\mathbf{D}_{j}^{(i)}({\mathbf{X}}_{i}^{\top}\otimes\mathbf{I}_{s})\mathbf{w}_{1}\leq\beta$ (136) Then, we have $\displaystyle p_{MM}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}\left(\mathbf{V}_{i},\mathbf{Y}_{i}\right)$ s.t. $\displaystyle~{}\max_{\begin{subarray}{c}j\in[m]\\ \|\mathbf{Z}\|_{*}\leq 1\end{subarray}}\mathrm{trace}\left(\sum_{i=1}^{n}(\mathbf{I}_{d}\otimes\mathbf{V}_{i}^{T})\mathbf{D}_{j}^{(i)}({\mathbf{X}}_{i}^{\top}\otimes\mathbf{I}_{s})\mathbf{Z}\right)\leq\beta$ (137) Now, we simply need to form the Lagrangian and solve. The Lagrangian is given by $p_{MM}^{*}=\max_{\mathbf{V}_{i}}\min_{\lambda\geq 0}\min_{\|\mathbf{Z}_{j}\|_{*}\leq 1}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})+\sum_{j=1}^{m}\lambda_{j}\left(\beta-\sum_{i=1}^{n}\mathrm{trace}\left((\mathbf{I}_{d}\otimes\mathbf{V}_{i}^{T})\mathbf{D}_{j}^{(i)}({\mathbf{X}}_{i}^{\top}\otimes\mathbf{I}_{s})\mathbf{Z}_{j}\right)\right)$ (138) We can now switch the order of max and min via Sion’s minimax theorem and maximize over $\mathbf{V}_{i}$: $p_{MM}^{*}=\min_{\lambda\geq 0}\min_{\|\mathbf{Z}_{j}\|_{*}\leq 1}\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})+\sum_{j=1}^{m}\lambda_{j}\left(\beta-\sum_{i=1}^{n}\mathbf{vec}\left(\mathbf{D}_{j}^{(i)}({\mathbf{X}}_{i}^{\top}\otimes\mathbf{I}_{s})\mathbf{Z}_{j}\right)^{\top}\mathbf{vec}\left(\mathbf{I}_{d}\otimes\mathbf{V}_{i}\right)\right)$ (139) Now, defining $\mathbf{K}_{c,d}$ as the $(c,d)$ commutation matrix: $\displaystyle\mathbf{vec}(\mathbf{I}_{d}\otimes\mathbf{V}_{i})=\left((\mathbf{I}_{d}\otimes\mathbf{K}_{c,d})(\mathbf{vec}(\mathbf{I}_{d})\otimes\mathbf{I}_{c})\otimes\mathbf{I}_{s}\right)\mathbf{vec}(\mathbf{V}_{i})$ Solving over $\mathbf{V}_{i}$ yields $p_{MM}^{*}=\min_{\lambda\geq 0}\min_{\|\mathbf{Z}_{j}\|_{*}\leq 1}\sum_{i=1}^{n}\mathcal{L}\left(\sum_{j=1}^{m}\left((\mathbf{vec}(\mathbf{I}_{d})^{\top}\otimes\mathbf{I}_{c})(\mathbf{I}_{d}\otimes\mathbf{K}_{d,c})\otimes\mathbf{I}_{s}\right)\mathbf{vec}\left(\mathbf{D}_{j}^{(i)}({\mathbf{X}}_{i}^{\top}\otimes\mathbf{I}_{s})\lambda_{j}\mathbf{Z}_{j}\right),\mathbf{vec}(\mathbf{Y}_{i})\right)+\beta\sum_{j=1}^{m}\lambda_{j}$ (140) Re-scaling $\tilde{\mathbf{Z}}_{j}=\lambda_{j}\mathbf{Z}_{j}$ gives us $p_{MM}^{*}=\min_{\mathbf{Z}_{j}}\sum_{i=1}^{n}\mathcal{L}\left(\sum_{j=1}^{m}\left((\mathbf{vec}(\mathbf{I}_{d})^{\top}\otimes\mathbf{I}_{c})(\mathbf{I}_{d}\otimes\mathbf{K}_{d,c})\otimes\mathbf{I}_{s}\right)\mathbf{vec}\left(\mathbf{D}_{j}^{(i)}({\mathbf{X}}_{i}^{\top}\otimes\mathbf{I}_{s})\mathbf{Z}_{j}\right),\mathbf{vec}(\mathbf{Y}_{i})\right)+\beta\sum_{j=1}^{m}\|\mathbf{Z}_{j}\|_{*}.$ (141) One can actually greatly simplify this result, and re-write it as $\displaystyle p_{MM}^{*}$ $\displaystyle=\min_{\mathbf{Z}_{j}\in\mathbb{R}^{s^{2}\times dc}}\sum_{i=1}^{n}\mathcal{L}\left(\begin{bmatrix}f_{1}(\mathbf{X}_{i})&\cdots&f_{c}(\mathbf{X}_{i})\end{bmatrix},\mathbf{Y}_{i}\right)+\beta\sum_{j=1}^{m}\|\mathbf{Z}_{j}\|_{*}$ (142) $\displaystyle f_{p}(\mathbf{X}_{i})$
$\displaystyle:=\sum_{j=1}^{m}\begin{bmatrix}\mathbf{D}_{j}^{(i,1)}\mathbf{Z}_{j}^{(p,1)}&\cdots&\mathbf{D}_{j}^{(i,d)}\mathbf{Z}_{j}^{(p,d)}\end{bmatrix}\mathbf{vec}({\mathbf{X}}_{i}).$ (143) as desired. ∎ ##### C.2.3 FNO ###### Theorem C.3. For the Gated ReLU activation FNO training problem (4.2), we define $\displaystyle\mathbf{X}$ $\displaystyle:=\begin{bmatrix}\mathrm{circ}(\mathbf{X}_{1})\\ \cdots\\ \mathrm{circ}(\mathbf{X}_{n})\end{bmatrix}$ $\displaystyle\{\mathbf{D}_{j}\}_{j=1}^{m}$ $\displaystyle:=\{\mathrm{diag}\left(\mathbbm{1}\{\mathbf{X}\mathbf{h}_{j}\geq 0\}\right)\},$ for fixed gates $\{\mathbf{h}_{j}\in\mathbb{R}^{sd}\}_{j=1}^{m}$. Then, the standard non-convex training objective is equivalent to a convex optimization problem, given by $\displaystyle p_{FN}^{*}$ $\displaystyle=\min_{\mathbf{Z}_{j}\in\mathbb{R}^{sd\times c}}\sum_{i=1}^{n}\mathcal{L}\left(\sum_{j=1}^{m}\mathbf{D}_{j}^{(i)}\mathrm{circ}(\mathbf{X}_{i})\mathbf{Z}_{j},\mathbf{Y}_{i}\right)+\beta\sum_{j=1}^{m}\|\mathbf{Z}_{j}\|_{*}.$ (144) ###### Proof. We now apply Lemmas A.1 and A.2 with the Gated ReLU activation function to obtain $\displaystyle p_{FN}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\max_{\begin{subarray}{c}\|\mathbf{w}_{1}\|_{2}\leq 1\\ j\in[m]\end{subarray}}\|\sum_{i=1}^{n}\mathbf{V}_{i}^{\top}\mathbf{D}_{j}^{(i)}\mathrm{circ}({\mathbf{X}}_{i})\mathbf{w}_{1}\|_{2}\leq\beta.$ (145) Using the concept of dual norm, this is equivalent to $\displaystyle p_{FN}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\max_{\begin{subarray}{c}\|\mathbf{g}\|_{2}\leq 1\\ \|\mathbf{w}_{1}\|_{2}\leq 1\\ j\in[m]\end{subarray}}\mathbf{g}^{\top}\sum_{i=1}^{n}\mathbf{V}_{i}^{\top}\mathbf{D}_{j}^{(i)}\mathrm{circ}({\mathbf{X}}_{i})\mathbf{w}_{1}\leq\beta$ (146) Then, we have $\displaystyle p_{FN}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\max_{\begin{subarray}{c}j\in[m]\\ \|\mathbf{Z}\|_{*}\leq 1\end{subarray}}\mathrm{trace}\left(\sum_{i=1}^{n}\mathbf{V}_{i}^{\top}\mathbf{D}_{j}^{(i)}\mathrm{circ}({\mathbf{X}}_{i})\mathbf{Z}\right)\leq\beta$ (147) We form the Lagrangian as $\displaystyle p_{FN}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}\min_{\lambda\geq 0}\min_{\|\mathbf{Z}_{j}\|_{*}\leq 1}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})+\sum_{j=1}^{m}\lambda_{j}(\beta-\mathrm{trace}(\mathbf{Z}_{j}^{\top}\sum_{i=1}^{n}\mathbf{D}_{j}^{(i)}\mathrm{circ}({\mathbf{X}}_{i})^{\top}\mathbf{V}_{i})).$ (148) We switch the order of the maximum and minimum using Sion’s minimax theorem and maximize over $\mathbf{V}_{i}$ $\displaystyle p_{FN}^{*}$ $\displaystyle=\min_{\lambda_{j}\geq 0}\min_{\|\mathbf{Z}_{j}\|_{*}\leq 1}\sum_{i=1}^{n}\mathcal{L}(\sum_{j=1}^{m}\lambda_{j}\mathbf{D}_{j}^{(i)}\mathrm{circ}({\mathbf{X}}_{i})\mathbf{Z}_{j},\mathbf{Y}_{i})+\beta\sum_{j=1}^{m}\lambda_{j}.$ (149) Lastly, we rescale $\tilde{\mathbf{Z}}_{j}=\lambda_{j}\mathbf{Z}_{j}$ to obtain $\displaystyle p_{FN}^{*}$ $\displaystyle=\min_{\mathbf{Z}_{j}\in\mathbb{R}^{sd\times c}}\sum_{i=1}^{n}\mathcal{L}\left(\sum_{j=1}^{m}\mathbf{D}_{j}^{(i)}\mathrm{circ}(\mathbf{X}_{i})\mathbf{Z}_{j},\mathbf{Y}_{i}\right)+\beta\sum_{j=1}^{m}\|\mathbf{Z}_{j}\|_{*}.$ (150) as desired. ∎ ##### C.2.4 B-FNO ###### Theorem C.4.
For the Gated ReLU activation B-FNO training problem (4.2.1), we define $\displaystyle\mathbf{X}_{b}$ $\displaystyle:=\begin{bmatrix}\mathrm{circ}(\mathbf{X}_{1}^{(b)})\\ \cdots\\ \mathrm{circ}(\mathbf{X}_{n}^{(b)})\end{bmatrix}$ $\displaystyle\{\mathbf{D}_{b,j}\}_{j=1}^{m}$ $\displaystyle:=\{\mathrm{diag}\left(\mathbbm{1}\{\mathbf{X}_{b}\mathbf{h}_{b,j}\geq 0\}\right)\},$ for fixed gates $\{\mathbf{h}_{b,j}\in\mathbb{R}^{sd/B}\}_{j=1}^{m}$. Then, the standard non-convex training objective is equivalent to a convex optimization problem, given by $\displaystyle p_{BFN}^{*}$ $\displaystyle=\min_{\mathbf{Z}_{b,j}}\sum_{i=1}^{n}\mathcal{L}\left(\begin{bmatrix}f^{(1)}(\mathbf{X}_{i})&\cdots&f^{(B)}(\mathbf{X}_{i})\end{bmatrix},\mathbf{Y}_{i}\right)+\beta\sum_{b=1}^{B}\sum_{j=1}^{m}\|\mathbf{Z}_{b,j}\|_{*},$ (151) where $\displaystyle f^{(b)}(\mathbf{X}_{i})$ $\displaystyle:=\sum_{j=1}^{m}\mathbf{D}_{b,j}^{(i)}\mathrm{circ}(\mathbf{X}_{i}^{(b)})\mathbf{Z}_{b,j}$ ###### Proof. We now apply Lemmas A.1 and A.2 with the Gated ReLU activation function to obtain $\displaystyle p_{BFN}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\max_{\begin{subarray}{c}\|\mathbf{w}_{1b}\|_{2}\leq 1\\ b\in[B]\\ j\in[m]\end{subarray}}\|\sum_{i=1}^{n}{\mathbf{V}_{i}^{(b)}}^{\top}\mathbf{D}_{b,j}^{(i)}\mathrm{circ}({\mathbf{X}}_{i}^{(b)})\mathbf{w}_{1b}\|_{2}\leq\beta$ (152) Using the concept of dual norm, this is equivalent to $\displaystyle p_{BFN}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\max_{\begin{subarray}{c}\|\mathbf{g}_{b}\|_{2}\leq 1\\ \|\mathbf{w}_{1b}\|_{2}\leq 1\\ j\in[m]\end{subarray}}\mathbf{g}_{b}^{\top}\sum_{i=1}^{n}{\mathbf{V}_{i}^{(b)}}^{\top}\mathbf{D}_{b,j}^{(i)}\mathrm{circ}({\mathbf{X}}_{i}^{(b)})\mathbf{w}_{1b}\leq\beta~{}\forall b\in[B].$ (153) Then, we have $\displaystyle p_{BFN}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\max_{\begin{subarray}{c}j\in[m]\\ \|\mathbf{Z}\|_{*}\leq 1\end{subarray}}\mathrm{trace}\left(\sum_{i=1}^{n}{\mathbf{V}_{i}^{(b)}}^{\top}\mathbf{D}_{b,j}^{(i)}\mathrm{circ}({\mathbf{X}}_{i}^{(b)})\mathbf{Z}\right)\leq\beta~{}\forall b\in[B].$ (154) We form the Lagrangian as $\displaystyle p_{BFN}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}\min_{\lambda\geq 0}\min_{\|\mathbf{Z}_{b,j}\|_{*}\leq 1}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})+\sum_{b=1}^{B}\sum_{j=1}^{m}\lambda_{b,j}(\beta-\mathrm{trace}(\mathbf{Z}_{b,j}^{\top}\sum_{i=1}^{n}\mathbf{D}_{b,j}^{(i)}\mathrm{circ}({\mathbf{X}}_{i}^{(b)})^{\top}\mathbf{V}_{i}^{(b)})).$ (155) We switch the order of the maximum and minimum using Sion’s minimax theorem and maximize over $\mathbf{V}_{i}$ $\displaystyle p_{BFN}^{*}$ $\displaystyle=\min_{\lambda_{j}\geq 0}\min_{\|\mathbf{Z}_{b,j}\|_{*}\leq 1}\sum_{i=1}^{n}\mathcal{L}(\begin{bmatrix}\sum_{j=1}^{m}\lambda_{1,j}\mathbf{D}_{1,j}^{(i)}\mathrm{circ}(\mathbf{X}_{i}^{(1)})\mathbf{Z}_{1,j}&\cdots&\sum_{j=1}^{m}\lambda_{B,j}\mathbf{D}_{B,j}^{(i)}\mathrm{circ}(\mathbf{X}_{i}^{(B)})\mathbf{Z}_{B,j}\end{bmatrix},\mathbf{Y}_{i})+\beta\sum_{b=1}^{B}\sum_{j=1}^{m}\lambda_{b,j}.$ (156) Lastly, we rescale $\tilde{\mathbf{Z}}_{b,j}=\lambda_{b,j}\mathbf{Z}_{b,j}$ to obtain $\displaystyle p_{BFN}^{*}$
$\displaystyle=\min_{\mathbf{Z}_{b,j}}\sum_{i=1}^{n}\mathcal{L}\left(\begin{bmatrix}\sum_{j=1}^{m}\mathbf{D}_{1,j}^{(i)}\mathrm{circ}(\mathbf{X}_{i}^{(1)})\mathbf{Z}_{1,j}&\cdots&\sum_{j=1}^{m}\mathbf{D}_{B,j}^{(i)}\mathrm{circ}(\mathbf{X}_{i}^{(B)})\mathbf{Z}_{B,j}\end{bmatrix},\mathbf{Y}_{i}\right)+\beta\sum_{b=1}^{B}\sum_{j=1}^{m}\|\mathbf{Z}_{b,j}\|_{*},$ (157) as desired. ∎ #### C.3 Additional Attention Alternatives: PoolFormer and FNet In (Yu et al., 2021), the authors propose a simple alternative to the standard MLP-Mixer architecture. In particular, the forward function is given by $f_{PF}({\mathbf{X}}_{i})=\sigma(\mathbf{P}\mathbf{X}_{i}\mathbf{W}_{1})\mathbf{W}_{2}$ (158) where $\mathbf{P}\in\mathbb{R}^{s\times s}$ is a local pooling matrix. In this way, the PoolFormer architecture still mixes across different tokens, but in a non-learnable, deterministic fashion. In (Lee-Thorp et al., 2021), the authors propose FNet, another alternative which resembles the PoolFormer architecture. In particular, a 2D FFT is applied to the input ${\mathbf{X}}_{i}$ before being passed through an MLP $f_{FNET}({\mathbf{X}}_{i})=\sigma(\mathbf{F}_{s}\mathbf{X}_{i}\mathbf{F}_{d}^{\top}\mathbf{W}_{1})\mathbf{W}_{2}$ (159) One can use similar convex duality results as in the main body of this paper to generate convex dual forms for linear, ReLU, and gated ReLU activation PoolFormers and FNets. To keep these results general, we will be analyzing networks of the form $p_{PF}^{*}:=\min_{\mathbf{w}_{1j},\mathbf{w}_{2j}}\sum_{i=1}^{n}\mathcal{L}\left(\sum_{j=1}^{m}\sigma(h({\mathbf{X}}_{i})\mathbf{w}_{1j})\mathbf{w}_{2j}^{\top},\mathbf{Y}_{i}\right)+\frac{\beta}{2}\sum_{j=1}^{m}\left(\|\mathbf{w}_{1j}\|_{2}^{2}+\|\mathbf{w}_{2j}\|_{2}^{2}\right)$ (160) for any generic function $h:\mathbb{R}^{s\times d}\to\mathbb{R}^{s\times d}$, which encapsulates both methods and more. ###### Theorem C.5. For the linear activation network training problem (160), for $m\geq m^{*}$ where $m^{*}\leq\min\{d,c\}$, the standard non-convex training objective is equivalent to a convex optimization problem, given by $\displaystyle p_{PF}^{*}=\min_{\mathbf{Z}\in\mathbb{R}^{d\times c}}$ $\displaystyle\sum_{i=1}^{n}\mathcal{L}\left(h({\mathbf{X}}_{i})\mathbf{Z},\mathbf{Y}_{i}\right)+\beta\|\mathbf{Z}\|_{*}.$ (161) ###### Proof.
We now apply Lemmas A.1 and A.2 to obtain $\displaystyle p_{PF}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\max_{\|\mathbf{w}_{1}\|_{2}\leq 1}\|\sum_{i=1}^{n}\mathbf{V}_{i}^{\top}h({\mathbf{X}}_{i})\mathbf{w}_{1}\|_{2}\leq\beta$ (162) This is equivalent to $\displaystyle p_{PF}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\|\sum_{i=1}^{n}\mathbf{V}_{i}^{\top}h({\mathbf{X}}_{i})\|_{2}\leq\beta$ (163) We form the Lagrangian as $\displaystyle p_{PF}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}\min_{\lambda\geq 0}\min_{\|\mathbf{Z}\|_{*}\leq 1}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})+\lambda(\beta-\mathrm{trace}(\mathbf{Z}^{\top}\sum_{i=1}^{n}h({\mathbf{X}}_{i})^{\top}\mathbf{V}_{i})).$ (164) We switch the order of the maximum and minimum using Sion’s minimax theorem and maximize over $\mathbf{V}_{i}$ $\displaystyle p_{PF}^{*}$ $\displaystyle=\min_{\lambda\geq 0}\min_{\|\mathbf{Z}\|_{*}\leq 1}\sum_{i=1}^{n}\mathcal{L}(\lambda h({\mathbf{X}}_{i})\mathbf{Z},\mathbf{Y}_{i})+\beta\lambda.$ (165) Lastly, we rescale $\tilde{\mathbf{Z}}=\lambda\mathbf{Z}$ to obtain $\displaystyle p_{PF}^{*}$ $\displaystyle=\min_{\mathbf{Z}\in\mathbb{R}^{d\times c}}\sum_{i=1}^{n}\mathcal{L}\left(h(\mathbf{X}_{i})\mathbf{Z},\mathbf{Y}_{i}\right)+\beta\|\mathbf{Z}\|_{*}.$ (166) as desired. ∎ ###### Theorem C.6. For the ReLU activation training problem (160), we define $\displaystyle\mathbf{X}$ $\displaystyle:=\begin{bmatrix}h(\mathbf{X}_{1})\\ \cdots\\ h(\mathbf{X}_{n})\end{bmatrix}$ $\displaystyle\{\mathbf{D}_{j}\}_{j=1}^{P}$ $\displaystyle:=\{\mathrm{diag}\left(\mathbbm{1}\{\mathbf{X}\mathbf{u}_{j}\geq 0\}\right):\;\mathbf{u}_{j}\in\mathbb{R}^{d}\},$ where $P\leq 2r\left(\frac{e(n-1)}{r}\right)^{r}$ and $r:=\mathrm{rank}(\mathbf{X})$. Then, for $m\geq m^{*}$ where $m^{*}\leq n\min\{d,c\}$, the standard non-convex training objective is equivalent to a convex optimization problem, given by $\displaystyle p_{PF}^{*}$ $\displaystyle=\min_{\mathbf{Z}_{j}\in\mathbb{R}^{d\times c}}\sum_{i=1}^{n}\mathcal{L}\left(\sum_{j=1}^{P}\mathbf{D}_{j}^{(i)}h(\mathbf{X}_{i})\mathbf{Z}_{j},\mathbf{Y}_{i}\right)+\beta\sum_{j=1}^{P}\|\mathbf{Z}_{j}\|_{*,\mathrm{K}_{j}},$ (167) where $\displaystyle\mathbf{K}_{j}$ $\displaystyle:=(2\mathbf{D}_{j}-\mathbf{I}_{ns})\mathbf{X}.$ ###### Proof.
We now apply Lemmas A.1 and A.2 with the ReLU activation function to obtain $\displaystyle p_{PF}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\max_{\|\mathbf{w}_{1}\|_{2}\leq 1}\|\sum_{i=1}^{n}\mathbf{V}_{i}^{\top}(h({\mathbf{X}}_{i})\mathbf{w}_{1})_{+}\|_{2}\leq\beta$ (168) We introduce hyperplane arrangements $\mathbf{D}_{j}$ and enumerate over all of them, yielding $\displaystyle p_{PF}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\max_{\begin{subarray}{c}\|\mathbf{w}_{1}\|_{2}\leq 1\\ j\in[P]\\ \mathbf{K}_{j}\mathbf{w}_{1}\geq 0\end{subarray}}\|\sum_{i=1}^{n}\mathbf{V}_{i}^{\top}\mathbf{D}_{j}^{(i)}h({\mathbf{X}}_{i})\mathbf{w}_{1}\|_{2}\leq\beta.$ (169) Using the concept of dual norm, this is equivalent to $\displaystyle p_{PF}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\max_{\begin{subarray}{c}\|\mathbf{g}\|_{2}\leq 1\\ \|\mathbf{w}_{1}\|_{2}\leq 1\\ j\in[P]\\ \mathbf{K}_{j}\mathbf{w}_{1}\geq 0\end{subarray}}\mathbf{g}^{\top}\sum_{i=1}^{n}\mathbf{V}_{i}^{\top}\mathbf{D}_{j}^{(i)}h({\mathbf{X}}_{i})\mathbf{w}_{1}\leq\beta$ (170) We can also define sets $\mathcal{C}_{j}:=\{\mathbf{Z}=\mathbf{u}\mathbf{g}^{\top}\in\mathbb{R}^{d\times c}:\mathbf{K}_{j}\mathbf{u}\geq 0,\;\|\mathbf{Z}\|_{*}\leq 1\}$. Then, we have $\displaystyle p_{PF}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\max_{\begin{subarray}{c}j\in[P]\\ \mathbf{Z}\in\mathcal{C}_{j}\end{subarray}}\mathrm{trace}\left(\sum_{i=1}^{n}\mathbf{V}_{i}^{\top}\mathbf{D}_{j}^{(i)}h({\mathbf{X}}_{i})\mathbf{Z}\right)\leq\beta$ (171) We form the Lagrangian as $\displaystyle p_{PF}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}\min_{\lambda\geq 0}\min_{\mathbf{Z}_{j}\in\mathcal{C}_{j}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})+\sum_{j=1}^{P}\lambda_{j}(\beta-\mathrm{trace}(\mathbf{Z}_{j}^{\top}\sum_{i=1}^{n}\mathbf{D}_{j}^{(i)}h({\mathbf{X}}_{i})^{\top}\mathbf{V}_{i})).$ (172) We switch the order of the maximum and minimum using Sion’s minimax theorem and maximize over $\mathbf{V}_{i}$ $\displaystyle p_{PF}^{*}$ $\displaystyle=\min_{\lambda_{j}\geq 0}\min_{\mathbf{Z}_{j}\in\mathcal{C}_{j}}\sum_{i=1}^{n}\mathcal{L}(\sum_{j=1}^{P}\lambda_{j}\mathbf{D}_{j}^{(i)}h({\mathbf{X}}_{i})\mathbf{Z}_{j},\mathbf{Y}_{i})+\beta\sum_{j=1}^{P}\lambda_{j}.$ (173) Lastly, we rescale $\tilde{\mathbf{Z}}_{j}=\lambda_{j}\mathbf{Z}_{j}$ to obtain $\displaystyle p_{PF}^{*}$ $\displaystyle=\min_{\mathbf{Z}_{j}\in\mathbb{R}^{d\times c}}\sum_{i=1}^{n}\mathcal{L}\left(\sum_{j=1}^{P}\mathbf{D}_{j}^{(i)}h(\mathbf{X}_{i})\mathbf{Z}_{j},\mathbf{Y}_{i}\right)+\beta\sum_{j=1}^{P}\|\mathbf{Z}_{j}\|_{*,\mathrm{K}_{j}}.$ (174) as desired. ∎ ###### Theorem C.7. For the Gated ReLU activation training problem (160), we define $\displaystyle\mathbf{X}$ $\displaystyle:=\begin{bmatrix}h(\mathbf{X}_{1})\\ \cdots\\ h(\mathbf{X}_{n})\end{bmatrix}$ $\displaystyle\{\mathbf{D}_{j}\}_{j=1}^{m}$ $\displaystyle:=\{\mathrm{diag}\left(\mathbbm{1}\{\mathbf{X}\mathbf{h}_{j}\geq 0\}\right)\},$ for fixed gates $\{\mathbf{h}_{j}\in\mathbb{R}^{d}\}_{j=1}^{m}$.
Then, the standard non-convex training objective is equivalent to a convex optimization problem, given by $\displaystyle p_{PF}^{*}$ $\displaystyle=\min_{\mathbf{Z}_{j}\in\mathbb{R}^{d\times c}}\sum_{i=1}^{n}\mathcal{L}\left(\sum_{j=1}^{m}\mathbf{D}_{j}^{(i)}h(\mathbf{X}_{i})\mathbf{Z}_{j},\mathbf{Y}_{i}\right)+\beta\sum_{j=1}^{m}\|\mathbf{Z}_{j}\|_{*}.$ (175) ###### Proof. We now apply Lemmas A.1 and A.2 with the Gated ReLU activation function to obtain $\displaystyle p_{PF}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\max_{\begin{subarray}{c}\|\mathbf{w}_{1}\|_{2}\leq 1\\ j\in[m]\end{subarray}}\|\sum_{i=1}^{n}\mathbf{V}_{i}^{\top}\mathbf{D}_{j}^{(i)}h({\mathbf{X}}_{i})\mathbf{w}_{1}\|_{2}\leq\beta.$ (176) Using the concept of dual norm, this is equivalent to $\displaystyle p_{PF}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\max_{\begin{subarray}{c}\|\mathbf{g}\|_{2}\leq 1\\ \|\mathbf{w}_{1}\|_{2}\leq 1\\ j\in[m]\end{subarray}}\mathbf{g}^{\top}\sum_{i=1}^{n}\mathbf{V}_{i}^{\top}\mathbf{D}_{j}^{(i)}h({\mathbf{X}}_{i})\mathbf{w}_{1}\leq\beta$ (177) Then, we have $\displaystyle p_{PF}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})$ $\displaystyle\mathrm{s.t.}~{}$ $\displaystyle\max_{\begin{subarray}{c}j\in[m]\\ \|\mathbf{Z}\|_{*}\leq 1\end{subarray}}\mathrm{trace}\left(\sum_{i=1}^{n}\mathbf{V}_{i}^{\top}\mathbf{D}_{j}^{(i)}h({\mathbf{X}}_{i})\mathbf{Z}\right)\leq\beta$ (178) We form the Lagrangian as $\displaystyle p_{PF}^{*}$ $\displaystyle=\max_{\mathbf{V}_{i}}\min_{\lambda\geq 0}\min_{\|\mathbf{Z}_{j}\|_{*}\leq 1}-\sum_{i=1}^{n}\mathcal{L}^{*}(\mathbf{V}_{i},\mathbf{Y}_{i})+\sum_{j=1}^{m}\lambda_{j}(\beta-\mathrm{trace}(\mathbf{Z}_{j}^{\top}\sum_{i=1}^{n}\mathbf{D}_{j}^{(i)}h({\mathbf{X}}_{i})^{\top}\mathbf{V}_{i})).$ (179) We switch the order of the maximum and minimum using Sion’s minimax theorem and maximize over $\mathbf{V}_{i}$ $\displaystyle p_{PF}^{*}$ $\displaystyle=\min_{\lambda_{j}\geq 0}\min_{\|\mathbf{Z}_{j}\|_{*}\leq 1}\sum_{i=1}^{n}\mathcal{L}(\sum_{j=1}^{m}\lambda_{j}\mathbf{D}_{j}^{(i)}h({\mathbf{X}}_{i})\mathbf{Z}_{j},\mathbf{Y}_{i})+\beta\sum_{j=1}^{m}\lambda_{j}.$ (180) Lastly, we rescale $\tilde{\mathbf{Z}}_{j}=\lambda_{j}\mathbf{Z}_{j}$ to obtain $\displaystyle p_{PF}^{*}$ $\displaystyle=\min_{\mathbf{Z}_{j}\in\mathbb{R}^{d\times c}}\sum_{i=1}^{n}\mathcal{L}\left(\sum_{j=1}^{m}\mathbf{D}_{j}^{(i)}h(\mathbf{X}_{i})\mathbf{Z}_{j},\mathbf{Y}_{i}\right)+\beta\sum_{j=1}^{m}\|\mathbf{Z}_{j}\|_{*}.$ (181) as desired. ∎
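To make the generic formulation (160) concrete, the following minimal NumPy sketch (our own illustration, not code from the cited works) instantiates $h$ as PoolFormer-style local average pooling and as FNet-style 2D FFT mixing. The pooling width, the ReLU choice for $\sigma$, the convention of taking the real part of the FFT output, and all dimensions are assumptions made purely for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward_generic(X, h, W1, W2):
    """Generic two-layer network f(X) = relu(h(X) @ W1) @ W2, covering
    both PoolFormer- and FNet-style token mixing via the choice of h."""
    return relu(h(X) @ W1) @ W2

def make_avg_pool(s, width=3):
    """Hypothetical local average-pooling matrix P in R^{s x s}
    (PoolFormer-style deterministic token mixing: h(X) = P X)."""
    P = np.zeros((s, s))
    for i in range(s):
        lo, hi = max(0, i - width // 2), min(s, i + width // 2 + 1)
        P[i, lo:hi] = 1.0 / (hi - lo)
    return P

def fnet_mixing(X):
    """FNet-style mixing h(X) = F_s X F_d^T, i.e., a 2D DFT over tokens
    and features; the real part is taken here to keep the sketch real-valued."""
    return np.real(np.fft.fft2(X))

s, d, c, m = 8, 16, 4, 32
rng = np.random.default_rng(0)
X = rng.standard_normal((s, d))
W1 = rng.standard_normal((d, m))
W2 = rng.standard_normal((m, c))

P = make_avg_pool(s)
out_pf = forward_generic(X, lambda Z: P @ Z, W1, W2)  # PoolFormer-style
out_fn = forward_generic(X, fnet_mixing, W1, W2)      # FNet-style
print(out_pf.shape, out_fn.shape)  # (8, 4) (8, 4)
```

Since $h$ is fixed and non-learnable in both cases, the convex duality arguments above apply unchanged: only the definition of $h$ (and hence of the matrices $\mathbf{D}_j$) differs between the two architectures.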
Efficient Approximate Kernel Based Spike Sequence Classification

Sarwan Ali, Bikram Sahoo, Muhammad Asad Khan, Alexander Zelikovsky, Imdad Ullah Khan*, Murray Patterson*

S. Ali, B. Sahoo, A. Zelikovsky, and M. Patterson are with the Department of Computer Science, Georgia State University, Atlanta, GA, USA, 30303. E-mail<EMAIL_ADDRESS>M. A. Khan is with the Department of Telecommunication Engineering, Hazara University, Mansehra, Pakistan. E-mail<EMAIL_ADDRESS>I. Khan is with the Department of Computer Science, Lahore University of Management Sciences, Lahore, Pakistan. E-mail<EMAIL_ADDRESS>* Joint Last/Corresponding Authors

Manuscript received February 25, 2022. Transactions on Computational Biology and Bioinformatics (TCBB)

Machine learning (ML) models, such as SVM, for tasks like classification and clustering of sequences, require a definition of distance/similarity between pairs of sequences. Several methods have been proposed to compute the similarity between sequences, such as the exact approach that counts the number of matches between $k$-mers (sub-sequences of length $k$) and an approximate approach that estimates pairwise similarity scores. Although exact methods yield better classification performance, they pose high computational costs, limiting their applicability to a small number of sequences. The approximate algorithms are proven to be more scalable and perform comparably to (sometimes better than) the exact methods — they are designed in a “general” way to deal with different types of sequences (e.g., music, protein, etc.). Although general applicability is a desired property of an algorithm, it is not the case in all scenarios. For example, in the current COVID-19 (coronavirus) pandemic, there is a need for an approach that can deal specifically with the coronavirus. To this end, we propose a series of ways to improve the performance of the approximate kernel (using minimizers and information gain) in order to enhance its predictive performance on coronavirus sequences. More specifically, we improve the quality of the approximate kernel using domain knowledge (computed using information gain) and efficient preprocessing (using minimizer computation) to classify coronavirus spike protein sequences corresponding to different variants (e.g., Alpha, Beta, Gamma). We report results using different classification and clustering algorithms and evaluate their performance using multiple evaluation metrics. Using two datasets, we show that our proposed method helps improve the kernel's performance compared to the baseline and state-of-the-art approaches in the healthcare domain.

Sequence Classification, Approximate Kernel, $k$-mers, Spike Sequence

§ INTRODUCTION

The COVID-19 global pandemic is unique compared to past pandemics because it occurs in a time of worldwide travel like never before and the widespread availability of high-throughput sequencing. Before this pandemic, sequences for a given virus were gathered on the order of hundreds, maybe a few thousand, but the current number of sequences of the SARS-CoV-2 virus (which causes the COVID-19 disease) is orders of magnitude beyond this. This amount is so high that methods such as phylogenetic tree building [1], which have traditionally been used for studying the diversity, dynamics, and evolution of viruses, are no longer appropriate in this situation because they do not scale.
Some researchers have hence turned to alternatives such as clustering and classification to tackle this “big data” problem [2, 3, 4]. SARS-CoV-2 is a type of coronavirus, so-called because of its notable spikes, which resemble “crowns”. These spikes serve as the mechanism for the virus to fuse to the host cell membrane. Coronaviruses such as SARS-CoV-2 cause a wide variety of respiratory diseases in an array of different hosts. Changes in these spikes (in the form of spike protein mutations) allow coronaviruses to adapt to different hosts and evolve into new and more transmissible variants. See Figure <ref> for an illustration of the SARS-CoV-2 genomic structure, including the region (the spike region) that codes for the spike protein. It is hence important to use this source of information to identify host specificity [5] and variants [4]. This motivates approaches for classifying coronavirus spike sequences to better understand the dynamics of the different variants in terms of this information. The SARS-CoV-2 genome is around 29–30kb, encoding structural and non-structural proteins. ORF1ab encodes the non-structural proteins, while the four structural proteins spike, envelope, membrane, and nucleocapsid are encoded by their respective genes. The spike protein has roughly 1300 amino acids. Most classification approaches leverage the powerful tools of machine learning (ML) — methods such as support vector machine (SVM) or random forest (RF), which have highly optimized implementations. One problem researchers face while utilizing ML models' power is converting the character-based sequences into fixed-length numerical vectors so that the machine can understand them. One popular way to deal with this problem is the $k$-mers-based approach (a $k$-mer is a substring of length $k$). Several alignment-based [6, 5] and alignment-free [4, 7] methods have been proposed recently for ML tasks such as classification and clustering. Most of these methods involve computing $k$-mers from the spike sequences and then counting the $k$-mers to get a frequency vector (more detail about $k$-mers is given in Section <ref>). Since these methods are based on feature engineering, they may not fully utilize ML models' power, as loss of information may occur during the feature engineering process. One problem with the $k$-mers-based methods is that the number of $k$-mers in a given sequence could be large (and there could be a large number of similar/redundant $k$-mers). Therefore, there is a computational overhead for processing these redundant $k$-mers. One way to reduce that overhead is by using minimizers [8]. Minimizers are a type of lightweight “signature” of a sequence that is used primarily in the context of de novo assembly and read mapping. In this paper, we use minimizers as a pre-processing step to remove some of the amino acids from the sequences, and show that this eventually improves the predictive performance of the overall algorithm for classification and clustering. A popular line of research for sequence classification that has shown success in the past is the use of kernel-based methods [9, 10]. These methods compute the exact/approximate distance between pairs of sequences based on the matches and mismatches between the $k$-mers (substrings of length $k$) of the sequences. In [3], the authors use an approximate kernel proposed in [10] to compute the distance between pairs of sequences.
These kernels are then used to perform classification using kernel-based classifiers such as SVM, and non-kernel-based classifiers (via kernel PCA) such as decision trees. In this work, we devise a kernel that is computationally more efficient, retains the theoretical properties of a kernel, and yields excellent predictive accuracy for both clustering and classification. We then compare this kernel with many baselines (including that of [3]) and show that it outperforms all baselines in terms of predictive performance and runtime. Our contributions in this paper are the following:

* We propose a method based on minimizers, information gain, and an approximate kernel to perform classification and clustering on COVID-19 spike sequences.
* Unlike in [3], we use more variants to show the performance of the kernel matrix with a higher number of classes.
* We show that spike sequences alone can be used to classify different COVID-19 variants efficiently.
* We show that our method works for both classification and clustering tasks.
* Using domain knowledge (information gain), we show that the classification and clustering performance can be improved compared to the baselines.
* We prove that the proposed kernel is positive semidefinite.

The rest of the paper is organized as follows: Section <ref> contains related work. Our proposed model is given in Section <ref>, Section <ref>, Section <ref>, and Section <ref>. Section <ref> contains the experimental setup. Results are given in Section <ref>. Finally, we conclude the paper in Section <ref>.

§ RELATED WORK

In bioinformatics, sequence homology (shared ancestry) detection between a pair of proteins and prediction of disease transmission using phylogeny-based methods [11] are essential tasks. The use of $k$-mer counts for phylogenetic applications was first explored in [12], which proposed constructing accurate phylogenetic trees from several coding and non-coding nDNA sequences. In bioinformatics, sequence classification is a widely studied problem [13]. Some classification methods are alignment-free (considered computationally less expensive), while others rely on sequence alignment (comparatively more computationally expensive). Converting input data into fixed-length numerical vectors for applying different machine learning algorithms such as classification and clustering is a common practice across numerous fields like smart grid [14, 15], graph analytics [16, 17, 18, 19, 20], electromyography [21], clinical data analysis [22], network security [23], and text classification [24]. Authors in [5] use a position weight matrix-based approach to compute feature embeddings for spike sequences. Although their approach shows promising results, one drawback of their method is that it only works for aligned data. A $k$-mers-based approach for classification of SARS-CoV-2 sequences is proposed in [4]. Several methods to perform clustering on the spike sequence data have also been proposed recently [7, 25]. Another approach for classifying sequences uses a kernel (Gram) matrix. In this approach, a kernel matrix is computed that represents the similarity between pairs of sequences [10, 9]. This matrix is used as input to kernel-based classifiers like Support Vector Machines (SVM). Recently, authors in [3] proposed a kernel-based approach for spike sequence classification.
Although their method shows promising results on classification, it is not clear whether the proposed method can be generalized to more variants and to other bioinformatics tasks such as clustering. One issue with sequence similarity measures involving dynamic programming, such as Smith-Waterman [26], is that they have quadratic ($O(n^2)$) runtime and space complexity. Even at 7000 sequences, performing such an operation for all ($49$ million) pairs would be infeasible. In fact, the approximate kernel method we propose here is designed precisely for this reason: it allows pairwise comparisons to be much faster ($O(k^2 n\log n)$, where $k$ is a constant and $n$ refers to the length of the larger sequence of a pair). On the other hand, using common sequence similarity measures such as Hamming distance (which takes linear time) may be too simple: not necessarily reflecting the underlying (e.g., physical) similarity between sequence fragments [9]. For example, in protein sequence analysis, different pairs of symbols (amino acids) induce different similarity levels, a consequence of particular physical or chemical properties. Hence, treating all differences the same (with equal weight) may not be a good option. Moreover, Hamming distance treats sequences as vectors, which means that the sequences must have the same length, which is not always the case. Compared to our alignment-free kernel-based method, Hamming distance may not be applicable in the real world because of its dependence on sequence alignment, which itself is an expensive operation.

§ ALGORITHM FOR KERNEL COMPUTATION

In this section, we formulate the problem, describe our algorithm and analyze its runtime and quality. $k$-spectrum and $k,m$-mismatch kernel: Given a sequence $X$ over alphabet $\Sigma$, the $k,m$-mismatch spectrum of $X$ is a $|\Sigma|^k$-dimensional vector, $\Phi_{k,m}(X)$, of the number of times each possible $k$-mer occurs in $X$ with at most $m$ mismatches. Formally, \begin{equation} \label{Eq:MismatchSpectrum} \Phi_{k,m}(X) = \left( \Phi_{k,m}(X)[\gamma]\right)_{\gamma \in \Sigma^k} = \left( \sum_{\alpha \in X} I_m(\alpha,\gamma)\right)_{\gamma \in \Sigma^k}, \end{equation} where $I_m(\alpha,\gamma)=1$ if $\alpha$ belongs to the set of $k$-mers that differ from $\gamma$ by at most $m$ mismatches, i.e. the Hamming distance between $\alpha$ and $\gamma$ satisfies $d(\alpha,\gamma)\leq m$. Note that for $m=0$, it is known as the $k$-spectrum of $X$. The $k,m$-mismatch kernel value for two sequences $X$ and $Y$ (the mismatch spectrum similarity score) [27] is defined as: \begin{equation} \label{Eq:MK} \centering \begin{aligned} K(X,Y|k,m) = \langle \Phi_{k,m}(X),\Phi_{k,m}(Y)\rangle \end{aligned} \end{equation} \begin{equation*} \centering \begin{aligned} = \sum_{\gamma \in \Sigma^k} \Phi_{k,m}(X)[\gamma] \Phi_{k,m}(Y)[\gamma] \\ = \sum_{\gamma \in \Sigma^k} \sum_{\alpha \in X} I_m(\alpha,\gamma) \sum_{\beta \in Y} I_m(\beta,\gamma) \\ = \sum_{\alpha \in X} \sum_{\beta \in Y} \sum_{\gamma \in \Sigma^k} I_m(\alpha,\gamma) I_m(\beta,\gamma). \end{aligned} \end{equation*} For a $k$-mer $\alpha$, let $N_{k,m}(\alpha)=\{\gamma\in \Sigma^k: d(\alpha,\gamma)\leq m\}$ be the $m$-mutational neighborhood of $\alpha$.
Using this notation, for a pair of sequences $X$ and $Y$, the $k,m$-mismatch kernel given in eq (<ref>) can be equivalently computed as follows [28]:
$$K(X,Y|k,m) = \sum_{\alpha \in X} \sum_{\beta \in Y} \sum_{\gamma \in \Sigma^k} I_m(\alpha,\gamma) I_m(\beta,\gamma)$$
\begin{equation*} \begin{aligned} = \sum_{\alpha \in X} \sum_{\beta \in Y} |N_{k,m}(\alpha) \cap N_{k,m}(\beta)| \\ = \sum_{\alpha \in X} \sum_{\beta \in Y} {\mathfrak{I}}_m(\alpha,\beta), \end{aligned} \end{equation*}
where ${\mathfrak{I}}_m(\alpha,\beta) = |N_{k,m}(\alpha) \cap N_{k,m}(\beta)|$ is the size of the intersection of the $m$-mutational neighborhoods of $\alpha$ and $\beta$. We use the following two facts. First, ${\mathfrak{I}}_m(\alpha,\beta)$, the size of the intersection of the $m$-mismatch neighborhoods of $\alpha$ and $\beta$, is a function of $k$, $m$, $|\Sigma|$, and $d(\alpha,\beta)$, and is independent of the actual $k$-mers $\alpha$ and $\beta$ or the actual positions where they differ. Second, if $d(\alpha,\beta) > 2m$, then ${\mathfrak{I}}_m(\alpha,\beta) = 0$. In view of the above two facts, we can rewrite the kernel value (<ref>) as
$$K(X,Y|k,m) = \sum_{\alpha \in X} \sum_{\beta \in Y} {\mathfrak{I}}_m(\alpha,\beta) = \sum_{i=0}^{\min\{2m,k\}} M_i(X,Y)\cdot {\cal I}_i,$$
where ${\cal I}_i={\mathfrak{I}}_m(\alpha,\beta)$ when $d(\alpha,\beta)=i$, and $M_i(X,Y)$ is the number of pairs of $k$-mers $(\alpha,\beta)$ such that $d(\alpha,\beta)=i$, where $\alpha\in X$ and $\beta\in Y$. Note that the bounds on the last summation follow from Fact <ref> and the fact that the Hamming distance between two $k$-mers is at most $k$. Hence the problem of kernel evaluation is reduced to computing the $M_i(X,Y)$'s and evaluating the ${\cal I}_i$'s. A closed-form formula to evaluate the size of the intersection of the mismatch neighborhoods of two $k$-mers at distance $i$ is derived in [10]. Let $N_{k,m}(\alpha,\beta)$ be the intersection of the $m$-mismatch neighborhoods of $\alpha$ and $\beta$, i.e.,
\begin{equation*} N_{k,m}(\alpha,\beta) = N_{k,m}(\alpha) \cap N_{k,m}(\beta). \end{equation*}
As defined earlier, $|N_{k,m}(\alpha,\beta)| = {\mathfrak{I}}_m(\alpha,\beta)$. Let $N_q(\alpha) = \{\gamma \in \Sigma^k : d(\alpha,\gamma) = q\}$ be the set of $k$-mers that differ from $\alpha$ in exactly $q$ indices. Note that $N_q(\alpha) \cap N_r(\alpha) = \emptyset$ for all $q\neq r$. Using this and defining $n^{qr}(\alpha,\beta)=|N_q(\alpha) \cap N_r(\beta)|$, we have
\begin{equation} \begin{aligned} N_{k,m}(\alpha,\beta) = \bigcup_{q=0}^m \bigcup_{r=0}^m N_q(\alpha) \cap N_r(\beta) \\ \text{ and } \\ {\mathfrak{I}}_m(\alpha,\beta)=\sum_{q=0}^m \sum_{r=0}^m n^{qr}(\alpha,\beta). \end{aligned} \end{equation}
With these notations, letting $s =|\Sigma|$, $n^{ij}(\alpha,\beta)$ can be computed using the following closed form: given two $k$-mers $\alpha$ and $\beta$ such that $d(\alpha,\beta)=d$, we have
\begin{equation} \begin{aligned} n^{ij}(\alpha,\beta)= \sum_{t=0}^{\frac{i+j-d}{2}}{2d - i-j+2t\choose d-(i-t)} {d \choose i+j-2t-d} \times \\ (s -2)^{i+j-2t-d} {k-d\choose t} (s-1)^t. \end{aligned} \end{equation}
The runtime of computing ${\cal I}_d$ is $O(m^3)$, independent of $k$ and $|\Sigma|$. This is so because, if $d(\alpha,\beta)=d$, then ${\cal I}_d = \sum\limits_{q=0}^m \sum\limits_{r=0}^m n^{qr}(\alpha,\beta)$, and each $n^{qr}(\alpha,\beta)$ can be computed in $O(m)$ time.
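The closed form above translates directly into code. The sketch below (our own illustration, with names of our choosing) evaluates $n^{ij}$ and ${\cal I}_d$; it relies on Python's math.comb, which returns $0$ whenever the lower index exceeds the upper, and skips terms where a binomial argument would be negative.

\begin{verbatim}
from math import comb

def n_ij(i, j, d, k, s):
    # Size of the intersection of N_i(alpha) and N_j(beta) for
    # k-mers at Hamming distance d over an alphabet of size s.
    total = 0
    for t in range((i + j - d) // 2 + 1):
        a = 2 * d - i - j + 2 * t
        b = i + j - 2 * t - d
        if a < 0 or b < 0 or d - (i - t) < 0:
            continue
        total += (comb(a, d - (i - t)) * comb(d, b)
                  * (s - 2) ** b * comb(k - d, t) * (s - 1) ** t)
    return total

def intersection_size(d, k, m, s):
    # I_d = sum over q, r <= m of n^{qr}; O(m^3) overall, as noted.
    return sum(n_ij(q, r, d, k, s)
               for q in range(m + 1) for r in range(m + 1))
\end{verbatim}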
§.§ Computing $M_i(X,Y)$

Recall that given two sequences $X$ and $Y$, $M_i(X,Y)$ is the number of pairs of $k$-mers $(\alpha,\beta)$ such that $d(\alpha,\beta)=i$, where $\alpha\in X$ and $\beta\in Y$. Formally, the problem of computing $M_i(X,Y)$ is as follows: Given $k$, $m$, and two sets of $k$-mers $S_X$ and $S_Y$ (the sets of $k$-mers extracted from the sequences $X$ and $Y$, respectively) with $|S_X|=n_X$ and $|S_Y| = n_Y$, let $t = \min\{2m,k\}$; for $0\leq i \leq t$, compute $$M_i(X,Y) = |\{(\alpha,\beta) \in S_X\times S_Y : d(\alpha,\beta) = i\}|.$$ Note that the brute-force approach to compute $M_i(X,Y)$ requires $O(n_X\cdot n_Y\cdot k)$ comparisons. Let $\Q_k(j)$ denote the set of all $j$-sets of $\{1,\ldots,k\}$ (subsets of indices). For $\theta \in \Q_k(j)$ and a $k$-mer $\alpha$, let $\alpha|_{\theta}$ be the $j$-mer obtained by selecting the characters at the $j$ indices in $\theta$. Let $f_{\theta}(X,Y)$ be the following number of pairs of $k$-mers in $S_X\times S_Y$: $$f_{\theta}(X,Y) = |\{(\alpha,\beta) \in S_X\times S_Y : d(\alpha|_{\theta},\beta|_{\theta}) = 0\}|.$$ We use the following important observations about $f_{\theta}$. For $0\leq i \leq k$ and $\theta \in \Q_k(k-i)$, if $d(\alpha|_{\theta},\beta|_{\theta}) = 0$, then $d(\alpha,\beta) \leq i$. For $0\leq i \leq k$ and $\theta \in \Q_k(k-i)$, $f_{\theta}(X,Y)$ can be computed in $O(kn\log n)$ time. This can be done by first lexicographically sorting the $k$-mers in each of $S_X$ and $S_Y$ by the indices in $\theta$. The pairs in $S_X\times S_Y$ that are the same at the indices in $\theta$ can then be enumerated in one linear scan over the sorted lists. Let $n=n_X+n_Y$; the running time of this computation is $O(k(n+|\Sigma|))$ if we use counting sort (as in [28]) or $O(kn\log n)$ for mergesort (since $\theta$ has $O(k)$ indices). Since this procedure is repeated many times, we refer to it as the sort-enumerate subroutine. Define $$F_i(X,Y) = \sum_{\theta\in \Q_k(k-i)} f_{\theta}(X,Y).$$ We can compute $M_i(X,Y)$ from the $F_j(X,Y)$'s using the following identity: $$F_i(X,Y) = \sum_{j=0}^i {k-j \choose k-i} M_j(X,Y).$$ Let $(\alpha,\beta)$ be a pair in $S_X\times S_Y$ that contributes to $M_j(X,Y)$, i.e., $d(\alpha,\beta) = j$. Then for every $\theta \in \Q_k(k-i)$ that has all indices within the $k-j$ positions where $\alpha$ and $\beta$ agree, the pair $(\alpha,\beta)$ is counted in $f_{\theta}(X,Y)$. The number of such $\theta$'s is ${k-j \choose k-i}$, hence $M_j(X,Y)$ is counted ${k-j \choose k-i}$ times in $F_i(X,Y)$, yielding the identity. $M_i(X,Y)$ can then readily be computed as: $$ M_i(X,Y) \;=\; F_i(X,Y) - \sum\limits_{j=0}^{i-1} {k-j \choose k-i}M_j(X,Y).$$ Next, we derive expressions for the $M_i$ matrices for $1\leq i \leq \min\{2m,k\}$ (with values $M_i(X,Y)$ for all pairs of sequences). In this alternate form it is easier to approximate the matrix $M_i$ and to show that the resulting approximate kernel matrix is indeed positive semidefinite, as required for kernel-based machine learning methods. For a sequence $X$ and $\theta \in \Q_k(j)$, let $\vec{u}_\theta(X)$ be a $|\Sigma|^j$-dimensional vector defined as:
\begin{equation} \label{thetaRep} \vec{u}_\theta(X) = \left( \vec{u}_{\theta}(X)[\gamma]\right)_{\gamma \in \Sigma^j} = \left( \sum_{\alpha \in X} I(\alpha|_\theta,\gamma)\right)_{\gamma \in \Sigma^j}, \end{equation}
where $I(\alpha|_\theta,\gamma)=1$ if $\alpha|_\theta = \gamma$. It is easy to see that by definition $f_\theta(X,Y) = \langle \vec{u}_\theta(X),\vec{u}_\theta(Y)\rangle$. Let $U_i(X)$ be the concatenation of $\vec{u}_\theta(X)$ for all $\theta \in \Q_k(k-i)$. Again, by the definition of $F_i(X,Y)$ in (<ref>) we have that $F_i(X,Y) = \langle U_i(X), U_i(Y)\rangle$.
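The sort-enumerate subroutine admits a very short implementation. The sketch below (ours; it assumes kmers_x and kmers_y are the $k$-mer sets $S_X$ and $S_Y$ as Python strings, and theta is an iterable of indices) buckets the $\theta$-restricted $k$-mers with a hash map instead of sorting, which produces exactly the same counts:

\begin{verbatim}
from collections import Counter

def f_theta(kmers_x, kmers_y, theta):
    # Count pairs (alpha, beta) in S_X x S_Y that agree on every
    # index in theta, i.e. d(alpha|_theta, beta|_theta) = 0.
    cx = Counter(tuple(a[i] for i in theta) for a in kmers_x)
    cy = Counter(tuple(b[i] for i in theta) for b in kmers_y)
    return sum(n * cy[key] for key, n in cx.items())
\end{verbatim}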
Let $F_i$ and $M_i$ be $N\times N$ matrices with a row and a column corresponding to each of the $N$ sequences, with entries $F_i(X,Y)$ and $M_i(X,Y)$ for all pairs of sequences $X$ and $Y$. We get the matrix versions of Lemma <ref> and Corollary <ref>, i.e., $$F_i \;=\; \sum_{j=0}^i {k-j \choose k-i}M_j \quad \text{ and } \quad M_i \;=\; F_i - \sum\limits_{j=0}^{i-1} {k-j \choose k-i}M_j.$$ If $U_i$ is a matrix with the $U_i(X)$'s as its $N$ columns, then by definition $F_i = U_i^TU_i$; thus $F_i$ is a positive semidefinite matrix. Using Lemma <ref>, one can easily verify that for $0\leq i \leq \min\{2m,k\}$, the matrices $M_i$ are also positive semidefinite. Note that for space and computational efficiency we use (<ref>) and Corollary <ref> to compute $F_i$ and $M_i$. By definition, $F_i(X,Y)$ can be computed with ${k\choose k-i} ={k\choose i}$ $f_{\theta}$ computations. $K(X,Y|k,m)$ can be evaluated by (<ref>) after computing $M_i(X,Y)$ (by (<ref>)) and ${\cal I}_i$ (by Corollary <ref>) for $0\leq i\leq t$. The overall complexity of this strategy thus is $$\left(\sum_{i=0}^t {k\choose i} (k-i)(n\log n + n)\right) + O(n) = O(k\cdot 2^{k-1} \cdot (n\log n)).$$ Next, we give our sampling-based approximate method for kernel computation. We select a random sample $B_i$ of index sets in $\Q_k(k-i)$ for each $i$ and compute an estimate $\hat{F}_i^{xy}$ of $F_i(X,Y)$. These $\hat{F}_i$'s are used to compute estimates $\hat{M}_i$ of $M_i$. In matrix form, this corresponds to a random projection of the vectors $U_i(X)$ onto the subspace spanned by the selected random index sets. Thus, the resulting matrices $\hat{M}_i$ are positive semidefinite, leading to a positive semidefinite kernel matrix.

Algorithm: Approximate-Kernel($S_X$, $S_Y$, $k$, $m$, $\epsilon$, $\delta$) to estimate $K(X,Y|k,m)$

1: ${\cal I}\gets$ zeros($t+1$); $\hat{M} \gets$ zeros($t+1$); $\hat{F} \gets$ zeros($t+1$)
2: Populate ${\cal I}$ using Corollary <ref>
3: for $i = 0$ to $t$ do
4:  $\mu_F \gets 0$
5:  for $\theta \in B_i$ do
6:   $\mu_F \gets \mu_F + $ sort-enumerate($S_X$, $S_Y$, $k$, $\theta$)  ▷ application of Fact <ref>
7:  $\hat{F}[i] \gets \mu_F\cdot \frac{1}{|B_i|}{k\choose k-i}$
8:  $\hat{M}[i]\gets \hat{F}[i]$
9:  for $j = 0$ to $i-1$ do  ▷ application of Corollary <ref>
10:   $\hat{M}[i] \gets \hat{M}[i] - {k-j \choose k-i}\cdot \hat{M}[j]$
11: $K' \gets$ sumproduct($\hat{M}$, ${\cal I}$)  ▷ applying Equation (<ref>)

We randomly sample a collection $B_i$ of index sets from $\Q_k(k-i)$, which Algorithm <ref> uses to compute the estimate $\hat{F}_i^{xy}$ for a pair of sequences $X,Y$, applying (<ref>) to the randomly chosen $\theta\in B_i$. The estimated $\hat{F}_i^{xy}$'s are used to compute the $\hat{M}_i^{xy}$'s (estimates of $M_i(X,Y)$) using Corollary <ref>. These $\hat{M}_i^{xy}$'s, together with the pre-computed exact values of the ${\cal I}_i$'s, are used to compute our estimate $K'(X,Y|k,m,\epsilon,\delta)$ of the kernel value using (<ref>). The sample sizes (cardinalities of the $B_i$'s) are chosen such that the variance of the estimates is bounded above by $\sigma^2=\epsilon^2 \delta$, where $\epsilon$ and $\delta$ are user-set parameters.
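Putting the pieces together, the following compact sketch (our own illustration, reusing f_theta and intersection_size from the earlier sketches; sample_sizes stands in for the $|B_i|$'s, which the analysis below derives from $\epsilon$ and $\delta$) mirrors the algorithm above:

\begin{verbatim}
import random
from math import comb

def approximate_kernel(kmers_x, kmers_y, k, m, s, sample_sizes):
    # Estimate K(X,Y|k,m) by sampling index sets theta in B_i.
    t = min(2 * m, k)
    I = [intersection_size(d, k, m, s) for d in range(t + 1)]
    F_hat = [0.0] * (t + 1)
    M_hat = [0.0] * (t + 1)
    for i in range(t + 1):
        mu = 0
        for _ in range(sample_sizes[i]):
            theta = random.sample(range(k), k - i)  # random (k-i)-set
            mu += f_theta(kmers_x, kmers_y, theta)
        F_hat[i] = mu / sample_sizes[i] * comb(k, k - i)
        # M_hat[i] via the corollary: F_i minus lower-order terms.
        M_hat[i] = F_hat[i] - sum(comb(k - j, k - i) * M_hat[j]
                                  for j in range(i))
    return sum(Ii * Mi for Ii, Mi in zip(I, M_hat))
\end{verbatim}

Note that for $i=0$ the sampled $\theta$ is the full index set, so $\hat{F}[0]$ (and hence $\hat{M}[0]$) is exact even with a single sample; this matches the base case of the unbiasedness proof below.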
First, we give an analytical bound on the runtime of Algorithm <ref>, and then we provide guarantees on its performance. The runtime of Algorithm <ref> is $O(k^2 n\log n)$. Let $B = \max_{i<t}\{|B_i|\}$. Observe that throughout the execution of the algorithm there are at most $tB$ computations of $f_{\theta}$, each of which by Fact <ref> needs $O(kn\log n)$ time. Since $B$ is an absolute constant and $t\leq k$, we get that the total runtime of the algorithm is $O(k^2n\log n)$. Let $K' = K'(X,Y|k,m,\epsilon,\delta)$ be our estimate (the output of Algorithm <ref>) for $K = K(X,Y|k,m)$. $K'$ is an unbiased estimator of the true kernel value, i.e., $E(K') = K$. For this, we need the following result, whose proof is deferred: $E(\hat{M}_i^{xy}) = {M}_i(X,Y)$. By Line 11 of Algorithm <ref>, $E(K') =E( \sum_{i=0}^{t} {\cal I}_i \hat{M}_i^{xy}).$ Using the fact that the ${\cal I}_i$'s are constants together with Lemma <ref>, we get that $$E(K') = \sum_{i=0}^{t} {\cal I}_i E(\hat{M}_i^{xy})= \sum_{i=0}^{\min\{2m,k\}} {\cal I}_i {M}_i(X,Y) = K.$$ For any $0< \epsilon, \delta < 1$, Algorithm <ref> is an $(\epsilon {\cal I}_{max}, \delta)$-additive approximation algorithm, i.e., $Pr(|K-K'| \geq \epsilon {\cal I}_{max} ) < \delta$, where ${\cal I}_{max} = \max_{i}\{{\cal I}_i\}$. Note that though ${\cal I}_{max}$ could be large, it is only a fraction of one of the terms in the summation for the kernel value $K(X,Y|k,m)$. Let $\hat{F}_i^{xy}$ be our estimate for $F_i(X,Y)$. We use the following bound on the variance of $K'$, which is proved later.
\begin{lemma}\label{lem:kernelVariance}
$Var(K') \leq \delta(\epsilon\cdot{\cal I}_{max})^2.$
\end{lemma}
\begin{proof}
By Lemma \ref{lem:unbiasedKernel} we have $E(K') = K$; hence by Lemma \ref{lem:kernelVariance}, $Pr[|K' - K| \geq \epsilon {\cal I}_{max}]$ is at most $Pr[|K' - E(K')| \geq \frac{1}{\sqrt{\delta}}\sqrt{Var(K')}]$. By Chebyshev's inequality, this latter probability is at most $\delta$. Therefore, Algorithm \ref{algo:approxKernel} is an $(\epsilon {\cal I}_{max}, \delta)$-additive approximation algorithm.
\end{proof}
\begin{proof}(Proof of Lemma \ref{lem:unbiasedKernel})
We prove it by induction on $i$. The base case ($i=0$) is true as we compute $\hat{M}[0]$ exactly, i.e., $\hat{M}[0] = {M}_0(X,Y)$. Suppose $E(\hat{M}_j^{xy}) = {M}_j(X,Y)$ for $0\leq j \leq i-1$. After execution of Line 7 we get $$\hat{F}[i] = \frac{1}{|B_i|} \mu_F {k\choose k-i} = \frac{1}{|B_i|} \sum_{r=1}^{|B_i|} f_{\theta_r}(X,Y) {k\choose k-i},$$ where $\theta_r$ is the $r$-th randomly chosen $(k-i)$-index set. Since $\theta_r$ is chosen uniformly at random, we get that
\begin{equation}\label{Eq:expectedOurF}
E(\hat{F}[i]) = E(f_{\theta_r}(X,Y)) {k\choose k-i} = \dfrac{F_i(X,Y)}{{k\choose k-i}} {k\choose k-i} = F_i(X,Y).
\end{equation}
After the loop on Lines 9--10 is executed, we get that $E(\hat{M}[i]) = F_i(X,Y) - \sum\limits_{j=0}^{i-1} {k-j \choose k-i}E(\hat{M}_j^{xy})$. Using $E(\hat{M}_j^{xy})={M}_j(X,Y)$ (the inductive hypothesis) in \eqref{FalternateForm}, we get that $E(\hat{M}_i^{xy}) = {M}_i(X,Y)$.
\end{proof}
\begin{proof} (Proof of Lemma \ref{lem:kernelVariance})
After execution of the inner loop in Algorithm \ref{algo:approxKernel}, we have $\hat{F}_i^{xy} = \sum\limits_{j=0}^i {k-j \choose k-i}\hat{M}_j^{xy}.$ We use the following fact, which follows from basic calculations.
\begin{fact}\label{fact:varLinComb}
Suppose $X_0,\ldots,X_t$ are random variables and let $S= \sum_{i=0}^t a_iX_i$, where $a_0,\ldots,a_t$ are constants.
Then $$Var(S) = \sum_{i=0}^t a_i^2Var(X_i) + 2\sum_{i=0}^t\sum_{j=i+1}^t a_ia_jCov(X_i,X_j).$$
\end{fact}
Using Fact \ref{fact:varLinComb} and the definitions of ${\cal I}_{max}$ and $\sigma$, we get that
\begin{align}
Var(K')&=\sum_{i=0}^t {{\cal I}_i}^2 Var( \hat{M}_i^{xy}) +2\sum_{i<j}^t{\cal I}_i{\cal I}_j Cov(\hat{M}_i^{xy}, \hat{M}_j^{xy}) \notag \\
&\leq {\cal I}_{max}^2 \bigg[\sum_{i=0}^t Var(\hat{M}_i^{xy}) +2\sum_{i<j}^t Cov( \hat{M}_i^{xy}, \hat{M}_j^{xy})\bigg] \notag \\
&\leq {\cal I}_{max}^2 Var(\hat{F}_t^{xy}) \leq {\cal I}_{max}^2 \sigma^2 =\delta(\epsilon\cdot{\cal I}_{max})^2.
\end{align}
The last inequality follows from the following relation, derived from the definition of $\hat{F}_t^{xy}$ and Fact \ref{fact:varLinComb}:
\begin{equation}\label{Eq:VarourF}
\begin{aligned}
Var(\hat{F}_t^{xy})=\sum_{i=0}^t {k-i\choose k-t}^2 Var(\hat{M}_i^{xy}) \\ +2\sum_{i<j}^t{k-i\choose k-t}{k-j\choose k-t} Cov( \hat{M}_i^{xy}, \hat{M}_j^{xy}).
\end{aligned}
\end{equation}
\end{proof}
\begin{remark}
For reference, we call this kernel-based method \textit{Kernel Approximate} (or Kernel Approx.). We use $k=3$ and $m=0$ for this method, which is decided using the standard validation set approach~[29].
\end{remark}
\section{Ordered Minimizer with Kernel (OMK)}\label{sec_omk}
The original study of the approximate kernel in~[10] (and our conference version of the paper~[3], which uses the approximate kernel) uses the $k$-mers-based method to compute the approximate values for the kernel matrix. One problem with $k$-mers is that there could be many similar $k$-mers in a sequence that may not add any value to boost the predictive performance of the underlying machine learning (ML) algorithms. These redundant $k$-mers, however, could add computational overhead to the underlying processing. One solution to deal with this problem is to use minimizers~[30] (also called $m$-mers), where $m<k$. The main idea of the minimizer ($m$-mer) is the following: given a $k$-mer, its minimizer is the $m$-mer (a substring of the $k$-mer) that is lexicographically smallest among all $m$-mers of the $k$-mer in both forward and reverse order (as shown in Figure~\ref{fig_k_mers_minimizer}). Since $m<k$, we ignore some of the amino acids (those in the $k$-mers but not in the $m$-mers), which helps us reduce the computational overhead (as the input size is reduced). The pseudocode to compute the minimizers of a given sequence is given in Algorithm~\ref{algo_minimizer}. Although the notion of minimizer has previously been used in the domain of metagenomics~[31], it has not been used (to the best of our knowledge) for COVID-19 sequence classification.
\begin{figure}[h!]
\centering
\includegraphics[scale = 0.2]{Figures/minimizer_plot.png}
\caption{Example of $k$-mers and minimizers in a spike sequence ``MDPEGRKMLSVBSLRDSY''.}
\label{fig_k_mers_minimizer}
\end{figure}
\begin{algorithm}[h!]
\caption{Minimizer Computation}
\label{algo_minimizer}
\begin{algorithmic}[1]
\State \textbf{Input:} Sequence $s$ and integers $k$ and $m$
\State \textbf{Output:} Set of minimizers
\State minimizers $\gets \emptyset$
\State queue $\gets []$ \Comment{maintain a queue of all $m$-mers}
\State idx $\gets 0$ \Comment{index of the current minimizer}
\For{$i = 1 \textup{ to } |s|-k+1$}
  \State kmer $\gets s[i:i+k]$ \Comment{current window of size $k$}
  \If{idx $> 1$}
    \State queue.dequeue
    \State mmer $\gets$ $s[i+k-m:i+k]$ \Comment{new $m$-mer}
    \State idx $\gets$ idx $- 1$ \Comment{shift index of current minimizer}
    \State mmer $\gets$ min(mmer, reverse(mmer)) \Comment{lexicographically smallest of forward/reverse}
    \State queue.enqueue(mmer) \Comment{add new $m$-mer to the back}
    \If{mmer $<$ queue[idx]}
      \State idx $\gets k-m$ \Comment{update minimizer with new $m$-mer}
    \EndIf
  \Else
    \State queue $\gets []$ \Comment{reset the queue}
    \State idx $\gets 0$
    \For{$j = 1 \textup{ to } k-m+1$}
      \State mmer $\gets$ kmer$[j:j+m]$ \Comment{compute each $m$-mer}
      \State mmer $\gets$ min(mmer, reverse(mmer))
      \State queue.enqueue(mmer)
      \If{mmer $<$ queue[idx]}
        \State idx $\gets$ $j$ \Comment{index of current minimizer}
      \EndIf
    \EndFor
  \EndIf
  \State minimizers $\gets$ minimizers $\cup$ queue[idx] \Comment{add current minimizer}
\EndFor
\State return(minimizers)
\end{algorithmic}
\end{algorithm}
To use the power of minimizers with the approximate kernel approach~[10], we perform the following operations: Given a sequence, we first compute the set of minimizers from the $k$-mers (where $m = 3$ and $k = 9$); see Figure~\ref{fig_k_mers_minimizer} for an example. We then concatenate those minimizers to make a new sequence (for reference, we call this new sequence $s_{minimizer}$). That sequence is used as input to the approximate kernel algorithm to compute the kernel matrix. Since the approximate kernel method operates on $k$-mers, applying it to $s_{minimizer}$ takes the order of the amino acids into account; this is why we call this method Ordered Minimizer with Kernel (OMK). After computing the kernel matrix, we use kernel PCA~[32] to compute the feature vector representation (we selected 50 principal components for our experiments) for the sequences and apply different machine learning tasks on the vectors, such as classification and clustering.
\section{Information Gain with Kernel (IGK)}\label{sec_igk}
One way to compute the importance of amino acids in a sequence is to use Information Gain (IG)~[3]. The IG of an amino acid position in terms of a class (variant) is defined as follows:
\begin{equation}
\mbox{IG}(Class,~position) = H(Class) - H(Class ~|~ position),
\label{eq:ig}
\end{equation}
where
\begin{equation}
H(C) = \sum_{i \in C} -p_i \log p_i
\end{equation}
is the entropy of category $C$, and $p_i$ is the probability of element $i$ of category $C$. Intuitively, the information gain of a given amino acid position tells us how much information this position provides in deciding the class (variant). Given a sequence, we first compute the IG values for the amino acids. We then select the amino acids with the top IG values (the top $243$ amino acids, selected using the standard validation set approach) and use only those amino acids as input to the approximate kernel method.
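The IG-based selection just described is straightforward to implement. The following minimal sketch (our own illustration; the function names are ours, and the paper's actual implementation may differ) computes the IG of each position and keeps the top-scoring ones:

\begin{verbatim}
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

def information_gain(seqs, labels, pos):
    # IG(Class, position) = H(Class) - H(Class | position)
    groups = {}
    for s, y in zip(seqs, labels):
        groups.setdefault(s[pos], []).append(y)
    h_cond = sum(len(g) / len(seqs) * entropy(g)
                 for g in groups.values())
    return entropy(labels) - h_cond

def top_ig_positions(seqs, labels, top=243):
    scores = [(information_gain(seqs, labels, p), p)
              for p in range(len(seqs[0]))]
    return sorted(p for _, p in sorted(scores, reverse=True)[:top])
\end{verbatim}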
The approximate kernel approach computes the distance score for each pair of sequences based only on the amino acids with the top IG values, and we then apply kernel PCA~[32] for the non-kernel classifiers (we selected 50 principal components for our experiments using the standard validation set approach) to compute the vector representation for the sequences and apply the classification/clustering tasks. For reference, we call this method ``Information Gain with Kernel (IGK)''.
\begin{remark}
Note that using fewer than $243$ amino acids gave worse classification results, while using more than $243$ amino acids did not have any significant impact on the results.
\end{remark}
\section{Ordered Minimizer with Kernel and IG (OMK + IG)}\label{sec_omkig}
In this setting, we first compute the set of minimizers from the $k$-mers (where $m = 3$ and $k = 9$) for a given sequence, as done in Section~\ref{sec_omk} (see Figure~\ref{fig_k_mers_minimizer} for an example). After getting the minimizers, we concatenate them to make a single sequence ($s_{minimizer}$). The information gain logic is then applied to $s_{minimizer}$ to get the top amino acids (we selected the top $2184$ amino acids in this case, using the standard validation set approach). These top amino acids are given as input to the approximate kernel algorithm for the computation of the kernel matrix. Then kernel PCA~[32] is applied for the non-kernel classifiers (we selected 50 principal components for our experiments) to get the vector representation, followed by the classification/clustering methods.
\begin{remark}
Note that the idea of using IG in this paper is to reduce the dimensionality of the sequence-based data, so that the kernel computation, and hence the classification/clustering tasks, can be performed efficiently while retaining good accuracy. Since IG gives us the importance of each amino acid position within the sequences, we can take advantage of those \textit{importance scores} to extract the relevant amino acid positions and discard the rest. In this way, since only the important features (amino acids, to be precise) are considered, we get better embeddings, and hence better classification/clustering results (in less computational time), because the noise (if any) from the sequences is ignored.
\end{remark}
In summary, given the original spike sequence data, we have $1274$ amino acids in each sequence. For IGK, we select the top $243$ amino acids. For OMK, given a spike sequence, we first compute the $k$-mers (where $k=9$) and then from each $9$-mer we compute an $m$-mer (where $m=3$). Since the number of $k$-mers in any sequence is $N - k + 1$ (where $N$ is the length of the sequence), we have $1274 - 9 + 1 = 1266$ $k$-mers. Now, since we compute an $m$-mer of length $3$ from each $k$-mer, we have $1266$ $m$-mers in total. If we concatenate those $m$-mers, we get $1266 \times 3 = 3798$ amino acids, which we call the ordered minimizer and use as input to the kernel method (we call this OMK). In the next step, we compute the top $2184$ amino acids using IG from the $3798$ amino acids, and we call this method OMK + IG.
\section{Experimental Setup}\label{sec_exp_setup}
We use a $70$--$30\%$ training and testing data split for experimentation. To tune the hyperparameters, we apply $5$-fold cross validation on the training data and then compute results on the $30\%$ unseen (held-out) testing set. Experiments are conducted $5$ times with different random train-test splits, and average $\pm$ standard deviation results are reported.
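A minimal sketch of this evaluation protocol (our own illustration; in our setting X would hold the 50-component kernel-PCA embeddings and y the variant labels, here replaced by synthetic placeholder data):

\begin{verbatim}
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Placeholder data; in practice X = kernel-PCA embeddings,
# y = variant labels.
X, y = make_classification(n_samples=700, n_features=50)

# 70-30 split; hyperparameters tuned with 5-fold CV on training data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3)
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5)
search.fit(X_tr, y_tr)
print("held-out accuracy:", search.score(X_te, y_te))
\end{verbatim}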
All experiments are performed in the Python language on a Core i5 system with Windows 10 OS and 32 GB RAM.
\subsection{Dataset Statistics}\label{sec_dataset_stats}
We randomly sampled spike sequences from the largest known database of human SARS-CoV-2 sequences, GISAID\footnote{\url{https://www.gisaid.org/}}. The sample size is $7000$ (each sequence is of length $1274$). Since kernel-based algorithms require the kernel matrix to be stored in memory, we use $7000$ sequences in order to be able to perform experiments on a PC and avoid memory overflow. Moreover, the computational overhead of the baseline methods at a larger scale hinders any meaningful comparison. Unlike the conference version, where only $5$ variants were considered in the experiments, here we consider all $22$ variants. We used uniform random sampling; hence the proportions of variants in the sample are close to those in the whole dataset on the date of sampling. The proportion of variants in both datasets is given in Table~\ref{tbl_variant_information}. We repeated the experiments on two independent samples, referred to as GISAID-1 and GISAID-2.
\subsection{Data Visualization}
We use t-distributed stochastic neighbor embedding (t-SNE) [33] to evaluate the (hidden) patterns in the data. The t-SNE method maps the input data to 2D real vectors that can be visualized using scatter plots. The idea behind computing t-SNE plots is to see whether the overall distribution of the data is disturbed or remains the same when we use different embedding methods. The t-SNE plots for the different embedding methods are shown in Figure~\ref{fig_all_tsne}. For the feature engineering-based methods (i.e., OHE, Spike2Vec, and PWM2Vec), we can see overlap among different variants. However, for the kernel-based methods (for which embeddings were computed using kernel PCA), we can see smaller (but comparatively purer) groupings of variants with less overlap compared to the feature engineering-based methods.
\begin{figure*}[h!]
\centering
\begin{subfigure}{.33\textwidth} \centering \includegraphics[scale=0.28]{Figures/tsne/one_hot_plot.png} \caption{OHE} \label{} \end{subfigure}%
\begin{subfigure}{.33\textwidth} \centering \includegraphics[scale=0.28]{Figures/tsne/kmers_plot.png} \caption{Spike2Vec} \label{} \end{subfigure}%
\begin{subfigure}{.33\textwidth} \centering \includegraphics[scale=0.28]{Figures/tsne/pwm_plot.png} \caption{PWM2Vec} \label{} \end{subfigure}%
\\
\begin{subfigure}{.33\textwidth} \centering \includegraphics[scale=0.28]{Figures/tsne/string_kernel.png} \caption{Kernel Approximate} \label{} \end{subfigure}%
\begin{subfigure}{.33\textwidth} \centering \includegraphics[scale=0.28]{Figures/tsne/omk_plot.png} \caption{OMK} \label{} \end{subfigure}%
\begin{subfigure}{.33\textwidth} \centering \includegraphics[scale=0.28]{Figures/tsne/igk_plot.png} \caption{IGK} \label{} \end{subfigure}%
\\
\begin{subfigure}{.33\textwidth} \centering \includegraphics[scale=0.28]{Figures/tsne/omk_plus_ig_plot.png} \caption{OMK + IG} \label{} \end{subfigure}%
\caption{t-SNE plots for the SARS-CoV-2 dataset for different feature embeddings.}
\label{fig_all_tsne}
\end{figure*}
\begin{table}[h!]
\centering
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{p{1.6cm}p{2cm}p{1.2cm}p{1.4cm}p{1.3cm}p{1.3cm}}
\toprule
\multirow{3}{*}{Lineages} & \multirow{3}{*}{Region} & \multirow{3}{*}{Labels} & \multirow{3}{1.4cm}{No. of Mut. (S/Gen.)} & \multicolumn{2}{c}{No. of sequences} \\
\cmidrule{5-6} & & & & GISAID-1 & GISAID-2 \\ \midrule \midrule
B.1.1.7 & UK~[34] & Alpha & 8/17 & \hskip.1in 3369 & 3397 \\
B.1.617.2 & India~[35] & Delta & 8/17 & \hskip.1in 875 & 878 \\
AY.4 & India~[36] & Delta & - & \hskip.1in 593 & 516 \\
B.1.2 & - & - & - & \hskip.1in 333 & 350 \\
B.1 & - & - & - & \hskip.1in 292 & 276 \\
B.1.177 & Spain~[37] & - & - & \hskip.1in 243 & 281 \\
P.1 & Brazil~[38] & Gamma & 10/21 & \hskip.1in 194 & 201 \\
B.1.1 & - & - & - & \hskip.1in 163 & 166 \\
B.1.429 & California & Epsilon & 3/5 & \hskip.1in 107 & 142 \\
B.1.526 & New York~[39] & Iota & 6/16 & \hskip.1in 104 & 82 \\
AY.12 & India~[36] & Delta & - & \hskip.1in 101 & 82 \\
B.1.160 & - & - & - & \hskip.1in 92 & 88 \\
B.1.351 & South Africa~[34] & Beta & 9/21 & \hskip.1in 81 & 62 \\
B.1.427 & California~[40] & Epsilon & 3/5 & \hskip.1in 65 & 62 \\
B.1.1.214 & - & - & - & \hskip.1in 64 & 64 \\
B.1.1.519 & - & - & - & \hskip.1in 56 & 88 \\
D.2 & - & - & - & \hskip.1in 55 & 45 \\
B.1.221 & - & - & - & \hskip.1in 52 & 41 \\
B.1.177.21 & - & - & - & \hskip.1in 47 & 56 \\
B.1.258 & - & - & - & \hskip.1in 46 & 42 \\
B.1.243 & - & - & - & \hskip.1in 36 & 40 \\
R.1 & - & - & - & \hskip.1in 32 & 41 \\
\midrule
Total & - & - & - & \hskip.1in 7000 & 7000 \\
\bottomrule
\end{tabular}}
\caption{Dataset statistics for the $22$ variants. The character `-' means that the information is not available.}
\label{tbl_variant_information}
\end{table}
\subsection{Baseline Models}
In this section, we introduce the baseline (one-hot encoding) and the state-of-the-art (SOTA) approaches (Spike2Vec and PWM2Vec) that we use for comparison with our models.
\subsubsection{One-Hot Embedding (OHE)~[6]}
OHE~[3, 6] generates a fixed-length numerical feature vector by encoding each character of a sequence as a 0-1 vector based on the character's position in the alphabet $\Sigma$. The 0-1 vectors for all characters are concatenated to make a single vector for a given sequence. The length of the feature vector in this case is $30576$.
\subsubsection{Spike2Vec~[4]}
Spike2Vec was recently proposed in~[4] for spike sequence classification. Given a sequence, Spike2Vec computes $N$ $k$-mers, where $N= L - k + 1$ ($L$ is the length of the spike sequence and $k=3$, as given in~[4]). After generating the $k$-mers for a spike sequence, the count of each $k$-mer is used to get the frequency vector. The length of the Spike2Vec-based embedding for each spike sequence is $|\Sigma|^k = 13824$.
\subsubsection{PWM2Vec~[5]}
PWM2Vec, a method combining the power of $k$-mers and the position weight matrix~[41], is proposed in~[5]. PWM2Vec assigns a different weight to each $k$-mer (where $k=9$) in the feature vector depending on the values of the characters in the position weight matrix. The length of the PWM2Vec-based embedding for each spike sequence is $1265$.
\subsection{Evaluation Metrics for Classification}
Various ML algorithms are utilized for the classification task. The kernel PCA output ($50$ components) is fed to the different classifiers for prediction purposes. We use Support Vector Machine (SVM), Naive Bayes (NB), Multi-Layer Perceptron (MLP), K-Nearest Neighbour (KNN) (with $K = 5$), Random Forest (RF), Logistic Regression (LR), and Decision Tree (DT) classifiers. The evaluation metrics that we use are average accuracy, precision, recall, weighted and macro F1, and the ROC area under the curve (AUC).
\subsection{Evaluation Metrics for Clustering}
To perform the clustering on the data, we use the simple $k$-means algorithm.
To evaluate the performance of $k$-means, we use the following internal evaluation metrics:

\textbf{Silhouette Coefficient}~[42]: Given a vector $v$, it measures the similarity of $v$ to its own cluster (cohesion) compared to the other clusters (separation). Its value ranges from $-1$ to $1$, where the upper bound $1$ indicates the best possible clustering and the lower bound $-1$ the worst possible clustering.

\textbf{Calinski-Harabasz Score}~[43]: This is the ratio of the between-cluster dispersion to the within-cluster dispersion. The higher the value of this score, the better the clustering.

\textbf{Davies-Bouldin Score}~[44]: It validates the clustering scheme by measuring the similarity between clusters, where similarity is the ratio of within-cluster distances to between-cluster distances. Unlike the previous metrics, a lower Davies-Bouldin score indicates better clustering performance, and its lower bound is $0$.

\subsubsection{Elbow Method for k-means}
To get the optimal number of clusters, we use the Elbow method~[45, 7]. The main idea of the elbow method is to compute clusterings for a range of cluster counts and evaluate the trade-off between two metrics, namely the runtime and the sum of squared errors (distortion score). The number of clusters having the optimal value for both metrics is selected as the ideal number of clusters. The optimal number of clusters in this case is $4$ (see Figure~\ref{fig_elbow_method}).
\begin{figure}
\centering
\includegraphics[scale = 0.4]{Figures/elbow_method.png}
\caption{Elbow method to find the optimal number of clusters.}
\label{fig_elbow_method}
\end{figure}
\section{Results and Discussion}\label{sec_results}
In this section, we report the classification and clustering results using our proposed model, the baseline, and the SOTA methods.
\subsection{Classification Results}
The classification results (average $\pm$ standard deviation values over $5$ runs) for the GISAID-1 dataset are given in Table~\ref{tbl_classification_gisaid1}. We can observe that OMK + IG outperforms all other methods, including the baseline and SOTA, in terms of all evaluation metrics. Similar behavior is observed for the GISAID-2 dataset (see Table~\ref{tbl_classification_gisaid2}). From these observations, we can conclude that using the domain knowledge from IG along with the power of minimizers, we can improve the classification performance of the approximate kernel.
\begin{table*}[h!] \centering \resizebox{0.99\textwidth}{!}{ \begin{tabular}{p{1.2cm}cp{1.9cm}p{1.7cm}p{1.9cm}p{1.7cm}p{1.7cm}p{1.7cm}|p{2.1cm}} \toprule & & Acc. & Prec. & Recall & F1 (Weig.) & F1 (Macro) & ROC AUC & Train Time (Sec.)
\\ \midrule \midrule \multirow{7}{1.2cm}{OHE} & SVM & 0.83 $\pm$ 0.0019 & 0.83 $\pm$ 0.0053 & 0.83 $\pm$ 0.0019 & 0.82 $\pm$ 0.0030 & 0.67 $\pm$ 0.0140 & 0.82 $\pm$ 0.0047 & 301.53 $\pm$ 0.2618 \\ & NB & 0.64 $\pm$ 0.0085 & 0.75 $\pm$ 0.0084 & 0.64 $\pm$ 0.0096 & 0.65 $\pm$ 0.0089 & 0.48 $\pm$ 0.0155 & 0.75 $\pm$ 0.0102 & 18.9 $\pm$ 0.2816 \\ & MLP & 0.79 $\pm$ 0.0045 & 0.81 $\pm$ 0.0604 & 0.79 $\pm$ 0.0045 & 0.78 $\pm$ 0.0060 & 0.61 $\pm$ 0.0202 & 0.79 $\pm$ 0.0079 & 164.05 $\pm$ 0.0164 \\ & KNN & 0.8 $\pm$ 0.0116 & 0.81 $\pm$ 0.0074 & 0.8 $\pm$ 0.0116 & 0.79 $\pm$ 0.0099 & 0.6 $\pm$ 0.0287 & 0.79 $\pm$ 0.0158 & 498.46 $\pm$ 1.4808 \\ & RF & 0.82 $\pm$ 0.0066 & 0.82 $\pm$ 0.0096 & 0.82 $\pm$ 0.0066 & 0.8 $\pm$ 0.0077 & 0.64 $\pm$ 0.0142 & 0.8 $\pm$ 0.0053 & 29.52 $\pm$ 0.0147 \\ & LR & 0.83 $\pm$ 0.0048 & 0.83 $\pm$ 0.0050 & 0.83 $\pm$ 0.0048 & 0.82 $\pm$ 0.0065 & 0.67 $\pm$ 0.0344 & 0.81 $\pm$ 0.0173 & 70.07 $\pm$ 0.0442 \\ & DT & 0.83 $\pm$ 0.0100 & 0.83 $\pm$ 0.0112 & 0.83 $\pm$ 0.0100 & 0.82 $\pm$ 0.0103 & 0.68 $\pm$ 0.0242 & 0.82 $\pm$ 0.0144 & 6.25 $\pm$ 0.0120 \\ \midrule \multirow{7}{1.2cm}{Spike2Vec} & SVM & 0.85 $\pm$ 0.0017 & 0.84 $\pm$ 0.0047 & 0.85 $\pm$ 0.0017 & 0.83 $\pm$ 0.0027 & 0.68 $\pm$ 0.0126 & 0.83 $\pm$ 0.0043 & 230.57 $\pm$ 0.2356 \\ & NB & 0.35 $\pm$ 0.0077 & 0.73 $\pm$ 0.0076 & 0.35 $\pm$ 0.0087 & 0.45 $\pm$ 0.0080 & 0.45 $\pm$ 0.0140 & 0.72 $\pm$ 0.0092 & 12.54 $\pm$ 0.2534 \\ & MLP & 0.79 $\pm$ 0.0040 & 0.81 $\pm$ 0.0544 & 0.79 $\pm$ 0.0040 & 0.8 $\pm$ 0.0054 & 0.58 $\pm$ 0.0182 & 0.79 $\pm$ 0.0071 & 65.79 $\pm$ 0.0147 \\ & KNN & 0.82 $\pm$ 0.0104 & 0.82 $\pm$ 0.0067 & 0.82 $\pm$ 0.0104 & 0.81 $\pm$ 0.0089 & 0.6 $\pm$ 0.0258 & 0.78 $\pm$ 0.0142 & 115.85 $\pm$ 1.3327 \\ & RF & 0.85 $\pm$ 0.0059 & 0.84 $\pm$ 0.0086 & 0.85 $\pm$ 0.0059 & 0.83 $\pm$ 0.0069 & 0.66 $\pm$ 0.0128 & 0.82 $\pm$ 0.0047 & 15.62 $\pm$ 0.0133 \\ & LR & 0.85 $\pm$ 0.0044 & 0.85 $\pm$ 0.0045 & 0.85 $\pm$ 0.0044 & 0.84 $\pm$ 0.0058 & 0.68 $\pm$ 0.0310 & 0.83 $\pm$ 0.0156 & 50.74 $\pm$ 0.0398 \\ & DT & 0.85 $\pm$ 0.0090 & 0.85 $\pm$ 0.0101 & 0.85 $\pm$ 0.0090 & 0.84 $\pm$ 0.0092 & 0.67 $\pm$ 0.0218 & 0.82 $\pm$ 0.0130 & 3.19 $\pm$ 0.0108 \\ \midrule \multirow{7}{1.2cm}{PWM2Vec} & SVM & 0.82 $\pm$ 0.0015 & 0.83 $\pm$ 0.0042 & 0.82 $\pm$ 0.0015 & 0.81 $\pm$ 0.0024 & 0.63 $\pm$ 0.0112 & 0.81 $\pm$ 0.0038 & 173.89 $\pm$ 0.2095 \\ & NB & 0.51 $\pm$ 0.0068 & 0.61 $\pm$ 0.0068 & 0.51 $\pm$ 0.0077 & 0.53 $\pm$ 0.0071 & 0.17 $\pm$ 0.0124 & 0.62 $\pm$ 0.0082 & 1.17 $\pm$ 0.2253 \\ & MLP & 0.8 $\pm$ 0.0036 & 0.78 $\pm$ 0.0483 & 0.8 $\pm$ 0.0036 & 0.78 $\pm$ 0.0048 & 0.53 $\pm$ 0.0162 & 0.77 $\pm$ 0.0063 & 24.4 $\pm$ 0.0131 \\ & KNN & 0.77 $\pm$ 0.0093 & 0.79 $\pm$ 0.0059 & 0.77 $\pm$ 0.0093 & 0.76 $\pm$ 0.0079 & 0.55 $\pm$ 0.0230 & 0.76 $\pm$ 0.0127 & 10.55 $\pm$ 1.1846 \\ & RF & 0.83 $\pm$ 0.0053 & 0.83 $\pm$ 0.0077 & 0.83 $\pm$ 0.0053 & 0.82 $\pm$ 0.0061 & 0.63 $\pm$ 0.0113 & 0.8 $\pm$ 0.0042 & 13.54 $\pm$ 0.0118 \\ & LR & 0.82 $\pm$ 0.0039 & 0.81 $\pm$ 0.0040 & 0.82 $\pm$ 0.0039 & 0.81 $\pm$ 0.0052 & 0.62 $\pm$ 0.0276 & 0.8 $\pm$ 0.0139 & 40.81 $\pm$ 0.0353 \\ & DT & 0.8 $\pm$ 0.0080 & 0.81 $\pm$ 0.0090 & 0.8 $\pm$ 0.0080 & 0.8 $\pm$ 0.0082 & 0.59 $\pm$ 0.0193 & 0.79 $\pm$ 0.0116 & 2.63 $\pm$ 0.0096 \\ \midrule \multirow{7}{1.2cm}{Kernel Approx.} & SVM & 0.84 $\pm$ 0.0016 & 0.83 $\pm$ 0.0045 & 0.84 $\pm$ 0.0016 & 0.82 $\pm$ 0.0026 & 0.63 $\pm$ 0.0120 & 0.81 $\pm$ 0.0040 & 7.35 $\pm$ 0.2239 \\ & NB & 0.75 $\pm$ 0.0073 & 0.82 $\pm$ 0.0072 & 0.75 $\pm$ 0.0082 & 0.77 $\pm$ 0.0076 & 0.6 $\pm$ 0.0133 & 0.82 
$\pm$ 0.0088 & 0.17 $\pm$ 0.2408 \\ & MLP & 0.83 $\pm$ 0.0038 & 0.82 $\pm$ 0.0517 & 0.83 $\pm$ 0.0038 & 0.82 $\pm$ 0.0052 & 0.62 $\pm$ 0.0173 & 0.81 $\pm$ 0.0068 & 12.65 $\pm$ 0.0140 \\ & KNN & 0.82 $\pm$ 0.0099 & 0.82 $\pm$ 0.0063 & 0.82 $\pm$ 0.0099 & 0.82 $\pm$ 0.0084 & 0.62 $\pm$ 0.0245 & 0.79 $\pm$ 0.0135 & 0.32 $\pm$ 1.2661 \\ & RF & 0.84 $\pm$ 0.0056 & 0.84 $\pm$ 0.0082 & 0.84 $\pm$ 0.0056 & 0.83 $\pm$ 0.0066 & 0.66 $\pm$ 0.0121 & 0.82 $\pm$ 0.0045 & 1.46 $\pm$ 0.0126 \\ & LR & 0.84 $\pm$ 0.0041 & 0.84 $\pm$ 0.0042 & 0.84 $\pm$ 0.0041 & 0.82 $\pm$ 0.0055 & 0.62 $\pm$ 0.0294 & 0.81 $\pm$ 0.0148 & 1.86 $\pm$ 0.0378 \\ & DT & 0.82 $\pm$ 0.0086 & 0.82 $\pm$ 0.0096 & 0.82 $\pm$ 0.0086 & 0.82 $\pm$ 0.0088 & 0.63 $\pm$ 0.0207 & 0.82 $\pm$ 0.0124 & 0.24 $\pm$ 0.0102 \\ \midrule \multirow{7}{1.2cm}{OMK} & SVM & 0.85 $\pm$ 0.0015 & 0.83 $\pm$ 0.0041 & 0.85 $\pm$ 0.0015 & 0.83 $\pm$ 0.0023 & 0.62 $\pm$ 0.0110 & 0.81 $\pm$ 0.0037 & 33.9 $\pm$ 0.2053 \\ & NB & 0.74 $\pm$ 0.0067 & 0.8 $\pm$ 0.0066 & 0.74 $\pm$ 0.0075 & 0.76 $\pm$ 0.0070 & 0.59 $\pm$ 0.0122 & 0.8 $\pm$ 0.0080 & 0.13 $\pm$ 0.2208 \\ & MLP & 0.83 $\pm$ 0.0035 & 0.82 $\pm$ 0.0474 & 0.83 $\pm$ 0.0035 & 0.82 $\pm$ 0.0047 & 0.61 $\pm$ 0.0158 & 0.8 $\pm$ 0.0062 & 21.77 $\pm$ 0.0128 \\ & KNN & 0.81 $\pm$ 0.0091 & 0.81 $\pm$ 0.0058 & 0.81 $\pm$ 0.0091 & 0.8 $\pm$ 0.0077 & 0.63 $\pm$ 0.0225 & 0.8 $\pm$ 0.0124 & 0.31 $\pm$ 1.1609 \\ & RF & 0.862 $\pm$ 0.0052 & 0.85 $\pm$ 0.0075 & 0.862 $\pm$ 0.0052 & 0.84 $\pm$ 0.0060 & 0.67 $\pm$ 0.0111 & 0.83 $\pm$ 0.0041 & 1.54 $\pm$ 0.0116 \\ & LR & 0.85 $\pm$ 0.0038 & 0.84 $\pm$ 0.0039 & 0.85 $\pm$ 0.0038 & 0.83 $\pm$ 0.0051 & 0.63 $\pm$ 0.0270 & 0.81 $\pm$ 0.0136 & 2.99 $\pm$ 0.0346 \\ & DT & 0.83 $\pm$ 0.0078 & 0.83 $\pm$ 0.0088 & 0.83 $\pm$ 0.0078 & 0.82 $\pm$ 0.0080 & 0.63 $\pm$ 0.0190 & 0.81 $\pm$ 0.0113 & 0.23 $\pm$ 0.0094 \\ \midrule \multirow{7}{1.2cm}{IGK} & SVM & 0.85 $\pm$ 0.0018 & 0.84 $\pm$ 0.0051 & 0.85 $\pm$ 0.0018 & 0.83 $\pm$ 0.0029 & 0.6 $\pm$ 0.0136 & 0.8 $\pm$ 0.0046 & 3.23 $\pm$ 0.2540 \\ & NB & 0.74 $\pm$ 0.0083 & 0.82 $\pm$ 0.0082 & 0.74 $\pm$ 0.0093 & 0.76 $\pm$ 0.0087 & 0.58 $\pm$ 0.0151 & 0.8 $\pm$ 0.0099 & 0.1 $\pm$ 0.2731 \\ & MLP & 0.83 $\pm$ 0.0043 & 0.82 $\pm$ 0.0586 & 0.83 $\pm$ 0.0043 & 0.81 $\pm$ 0.0059 & 0.59 $\pm$ 0.0196 & 0.79 $\pm$ 0.0077 & 9.96 $\pm$ 0.0159 \\ & KNN & 0.82 $\pm$ 0.0113 & 0.82 $\pm$ 0.0072 & 0.82 $\pm$ 0.0113 & 0.81 $\pm$ 0.0096 & 0.59 $\pm$ 0.0278 & 0.79 $\pm$ 0.0153 & 0.34 $\pm$ 1.4364 \\ & RF & 0.84 $\pm$ 0.0064 & 0.83 $\pm$ 0.0093 & 0.84 $\pm$ 0.0064 & 0.82 $\pm$ 0.0074 & 0.59 $\pm$ 0.0138 & 0.8 $\pm$ 0.0051 & 1.36 $\pm$ 0.0143 \\ & LR & 0.85 $\pm$ 0.0047 & 0.84 $\pm$ 0.0048 & 0.85 $\pm$ 0.0047 & 0.83 $\pm$ 0.0063 & 0.61 $\pm$ 0.0334 & 0.8 $\pm$ 0.0168 & 1.7 $\pm$ 0.0428 \\ & DT & 0.83 $\pm$ 0.0097 & 0.82 $\pm$ 0.0109 & 0.83 $\pm$ 0.0097 & 0.81 $\pm$ 0.0100 & 0.58 $\pm$ 0.0234 & 0.79 $\pm$ 0.0140 & 0.21 $\pm$ 0.0116 \\ \midrule \multirow{7}{1.2cm}{OMK + IG} & SVM & \textbf{0.867} $\pm$ 0.0016 & 0.85 $\pm$ 0.0045 & \textbf{0.868} $\pm$ 0.0016 & \textbf{0.85} $\pm$ 0.0025 & 0.66 $\pm$ 0.0119 & 0.83 $\pm$ 0.0040 & 20.83 $\pm$ 0.2216 \\ & NB & 0.75 $\pm$ 0.0072 & 0.83 $\pm$ 0.0072 & 0.75 $\pm$ 0.0082 & 0.77 $\pm$ 0.0076 & 0.61 $\pm$ 0.0131 & 0.82 $\pm$ 0.0087 & \textbf{0.09} $\pm$ 0.2384 \\ & MLP & 0.84 $\pm$ 0.0038 & 0.84 $\pm$ 0.0511 & 0.84 $\pm$ 0.0038 & 0.83 $\pm$ 0.0051 & 0.65 $\pm$ 0.0171 & 0.83 $\pm$ 0.0067 & 13.26 $\pm$ 0.0138 \\ & KNN & 0.83 $\pm$ 0.0098 & 0.84 $\pm$ 0.0063 & 0.83 $\pm$ 0.0098 & 0.83 $\pm$ 0.0084 & 0.65 $\pm$ 0.0243 & 0.81 
$\pm$ 0.0134 & 0.31 $\pm$ 1.2534 \\ & RF & 0.864 $\pm$ 0.0056 & \textbf{0.86} $\pm$ 0.0081 & 0.865 $\pm$ 0.0056 & 0.84 $\pm$ 0.0065 & \textbf{0.69} $\pm$ 0.0120 & \textbf{0.84} $\pm$ 0.0045 & 1.26 $\pm$ 0.0125 \\ & LR & 0.865 $\pm$ 0.0041 & 0.85 $\pm$ 0.0042 & 0.86 $\pm$ 0.0041 & 0.84 $\pm$ 0.0055 & 0.63 $\pm$ 0.0292 & 0.82 $\pm$ 0.0147 & 2.08 $\pm$ 0.0374 \\ & DT & 0.84 $\pm$ 0.0085 & 0.84 $\pm$ 0.0095 & 0.84 $\pm$ 0.0085 & 0.84 $\pm$ 0.0087 & 0.65 $\pm$ 0.0205 & 0.83 $\pm$ 0.0122 & 0.19 $\pm$ 0.0101 \\ \bottomrule \end{tabular} \caption{Average $\pm$ standard deviation classification results for GISAID-1 dataset. Best average values are shown in bold.} \label{tbl_classification_gisaid1} \end{table*} \begin{table*}[h!] \centering \resizebox{0.99\textwidth}{!}{ \begin{tabular}{p{1.2cm}cp{1.9cm}p{1.7cm}p{1.9cm}p{1.7cm}p{1.7cm}p{1.7cm}|p{2.1cm}} \toprule & & Acc. & Prec. & Recall & F1 (Weig.) & F1 (Macro) & ROC AUC & Train Time (Sec.) \\ \midrule \midrule \multirow{7}{1.2cm}{OHE} & SVM & 0.84 $\pm$ 0.0017 & 0.84 $\pm$ 0.0046 & 0.84 $\pm$ 0.0018 & 0.83 $\pm$ 0.0028 & 0.6 $\pm$ 0.0138 & 0.8 $\pm$ 0.0045 & 285.83 $\pm$ 0.2511 \\ & NB & 0.65 $\pm$ 0.0077 & 0.79 $\pm$ 0.0074 & 0.65 $\pm$ 0.0093 & 0.66 $\pm$ 0.0083 & 0.47 $\pm$ 0.0153 & 0.76 $\pm$ 0.0098 & 18.89 $\pm$ 0.2700 \\ & MLP & 0.83 $\pm$ 0.0040 & 0.81 $\pm$ 0.0526 & 0.83 $\pm$ 0.0043 & 0.81 $\pm$ 0.0056 & 0.57 $\pm$ 0.0199 & 0.79 $\pm$ 0.0076 & 107.92 $\pm$ 0.1500 \\ & KNN & 0.82 $\pm$ 0.0104 & 0.82 $\pm$ 0.0064 & 0.82 $\pm$ 0.0113 & 0.81 $\pm$ 0.0092 & 0.62 $\pm$ 0.0283 & 0.79 $\pm$ 0.0151 & 501.72 $\pm$ 2.3541 \\ & RF & 0.85 $\pm$ 0.0059 & 0.85 $\pm$ 0.0083 & 0.85 $\pm$ 0.0064 & 0.83 $\pm$ 0.0071 & 0.63 $\pm$ 0.0140 & 0.81 $\pm$ 0.0050 & 29.84 $\pm$ 0.0985 \\ & LR & 0.85 $\pm$ 0.0044 & 0.82 $\pm$ 0.0043 & 0.85 $\pm$ 0.0047 & 0.82 $\pm$ 0.0060 & 0.57 $\pm$ 0.0340 & 0.79 $\pm$ 0.0166 & 65.87 $\pm$ 0.1074 \\ & DT & 0.83 $\pm$ 0.0090 & 0.82 $\pm$ 0.0098 & 0.83 $\pm$ 0.0097 & 0.82 $\pm$ 0.0095 & 0.6 $\pm$ 0.0239 & 0.8 $\pm$ 0.0138 & 6.49 $\pm$ 0.0824 \\ \midrule \multirow{7}{1.2cm}{Spike2Vec} & SVM & 0.86 $\pm$ 0.0016 & 0.86 $\pm$ 0.0039 & 0.86 $\pm$ 0.0018 & 0.85 $\pm$ 0.0025 & 0.69 $\pm$ 0.0120 & 0.84 $\pm$ 0.0040 & 136.92 $\pm$ 0.2159 \\ & NB & 0.67 $\pm$ 0.0072 & 0.71 $\pm$ 0.0062 & 0.67 $\pm$ 0.0089 & 0.66 $\pm$ 0.0076 & 0.48 $\pm$ 0.0133 & 0.75 $\pm$ 0.0087 & 10.07 $\pm$ 0.2322 \\ & MLP & 0.82 $\pm$ 0.0038 & 0.83 $\pm$ 0.0447 & 0.82 $\pm$ 0.0041 & 0.81 $\pm$ 0.0051 & 0.61 $\pm$ 0.0174 & 0.8 $\pm$ 0.0067 & 69.85 $\pm$ 0.1870 \\ & KNN & 0.81 $\pm$ 0.0098 & 0.81 $\pm$ 0.0055 & 0.81 $\pm$ 0.0107 & 0.8 $\pm$ 0.0084 & 0.61 $\pm$ 0.0246 & 0.8 $\pm$ 0.0135 & 117.44 $\pm$ 2.0245 \\ & RF & 0.86 $\pm$ 0.0056 & 0.85 $\pm$ 0.0071 & 0.86 $\pm$ 0.0061 & 0.84 $\pm$ 0.0065 & 0.68 $\pm$ 0.0122 & 0.84 $\pm$ 0.0045 & 13.02 $\pm$ 0.0847 \\ & LR & 0.87 $\pm$ 0.0041 & 0.87 $\pm$ 0.0037 & 0.87 $\pm$ 0.0045 & 0.85 $\pm$ 0.0055 & 0.69 $\pm$ 0.0296 & 0.84 $\pm$ 0.0148 & 48.76 $\pm$ 0.0924 \\ & DT & 0.86 $\pm$ 0.0085 & 0.85 $\pm$ 0.0083 & 0.86 $\pm$ 0.0092 & 0.85 $\pm$ 0.0087 & 0.68 $\pm$ 0.0208 & 0.83 $\pm$ 0.0123 & 2.45 $\pm$ 0.0709 \\ \midrule \multirow{7}{1.2cm}{PWM2Vec} & SVM & 0.82 $\pm$ 0.0019 & 0.81 $\pm$ 0.0055 & 0.82 $\pm$ 0.0028 & 0.81 $\pm$ 0.0036 & 0.58 $\pm$ 0.0166 & 0.79 $\pm$ 0.0063 & 17.13 $\pm$ 0.4269 \\ & NB & 0.51 $\pm$ 0.0084 & 0.6 $\pm$ 0.0088 & 0.51 $\pm$ 0.0140 & 0.52 $\pm$ 0.0108 & 0.13 $\pm$ 0.0184 & 0.62 $\pm$ 0.0137 & 0.96 $\pm$ 0.4591 \\ & MLP & 0.8 $\pm$ 0.0044 & 0.78 $\pm$ 0.0631 & 0.8 $\pm$ 0.0065 & 0.77 $\pm$ 0.0073 & 0.47 $\pm$ 
0.0239 & 0.73 $\pm$ 0.0106 & 19.02 $\pm$ 0.1650 \\ & KNN & 0.81 $\pm$ 0.0115 & 0.82 $\pm$ 0.0077 & 0.81 $\pm$ 0.0169 & 0.8 $\pm$ 0.0119 & 0.6 $\pm$ 0.0340 & 0.79 $\pm$ 0.0212 & 7.78 $\pm$ 4.0020 \\ & RF & 0.85 $\pm$ 0.0065 & 0.84 $\pm$ 0.0100 & 0.85 $\pm$ 0.0096 & 0.84 $\pm$ 0.0093 & 0.62 $\pm$ 0.0168 & 0.81 $\pm$ 0.0071 & 4.8 $\pm$ 0.1675 \\ & LR & 0.82 $\pm$ 0.0048 & 0.81 $\pm$ 0.0052 & 0.82 $\pm$ 0.0070 & 0.81 $\pm$ 0.0078 & 0.57 $\pm$ 0.0408 & 0.79 $\pm$ 0.0232 & 33.44 $\pm$ 0.1826 \\ & DT & 0.81 $\pm$ 0.0099 & 0.82 $\pm$ 0.0117 & 0.81 $\pm$ 0.0146 & 0.81 $\pm$ 0.0124 & 0.57 $\pm$ 0.0286 & 0.78 $\pm$ 0.0194 & 2.47 $\pm$ 0.1401 \\ \midrule \multirow{7}{1.2cm}{Kernel Approx.} & SVM & 0.85 $\pm$ 0.0023 & 0.85 $\pm$ 0.0043 & 0.85 $\pm$ 0.0021 & 0.84 $\pm$ 0.0030 & 0.63 $\pm$ 0.0132 & 0.81 $\pm$ 0.0040 & 5.06 $\pm$ 0.2591 \\ & NB & 0.75 $\pm$ 0.0101 & 0.81 $\pm$ 0.0069 & 0.75 $\pm$ 0.0106 & 0.76 $\pm$ 0.0091 & 0.58 $\pm$ 0.0147 & 0.8 $\pm$ 0.0086 & 0.11 $\pm$ 0.2787 \\ & MLP & 0.85 $\pm$ 0.0053 & 0.84 $\pm$ 0.0491 & 0.85 $\pm$ 0.0049 & 0.83 $\pm$ 0.0061 & 0.66 $\pm$ 0.0191 & 0.83 $\pm$ 0.0067 & 15.92 $\pm$ 0.1644 \\ & KNN & 0.82 $\pm$ 0.0137 & 0.82 $\pm$ 0.0060 & 0.82 $\pm$ 0.0128 & 0.82 $\pm$ 0.0100 & 0.62 $\pm$ 0.0271 & 0.79 $\pm$ 0.0133 & 0.29 $\pm$ 2.4294 \\ & RF & 0.85 $\pm$ 0.0078 & 0.85 $\pm$ 0.0078 & 0.85 $\pm$ 0.0073 & 0.84 $\pm$ 0.0078 & 0.66 $\pm$ 0.0134 & 0.82 $\pm$ 0.0044 & 1.49 $\pm$ 0.1017 \\ & LR & 0.85 $\pm$ 0.0057 & 0.84 $\pm$ 0.0040 & 0.85 $\pm$ 0.0053 & 0.83 $\pm$ 0.0066 & 0.6 $\pm$ 0.0325 & 0.81 $\pm$ 0.0146 & 1.76 $\pm$ 0.1108 \\ & DT & 0.83 $\pm$ 0.0119 & 0.83 $\pm$ 0.0091 & 0.83 $\pm$ 0.0111 & 0.82 $\pm$ 0.0104 & 0.63 $\pm$ 0.0228 & 0.81 $\pm$ 0.0122 & 0.25 $\pm$ 0.0850 \\ \midrule \multirow{7}{1.2cm}{OMK} & SVM & 0.86 $\pm$ 0.0018 & 0.86 $\pm$ 0.0052 & 0.86 $\pm$ 0.0026 & 0.85 $\pm$ 0.0034 & 0.67 $\pm$ 0.0156 & 0.83 $\pm$ 0.0060 & 46.7 $\pm$ 0.4012 \\ & NB & 0.71 $\pm$ 0.0079 & 0.79 $\pm$ 0.0083 & 0.71 $\pm$ 0.0132 & 0.73 $\pm$ 0.0102 & 0.49 $\pm$ 0.0173 & 0.75 $\pm$ 0.0129 & 0.12 $\pm$ 0.4315 \\ & MLP & 0.85 $\pm$ 0.0042 & 0.85 $\pm$ 0.0593 & 0.85 $\pm$ 0.0061 & 0.83 $\pm$ 0.0069 & 0.64 $\pm$ 0.0225 & 0.82 $\pm$ 0.0100 & 30.54 $\pm$ 0.1191 \\ & KNN & 0.83 $\pm$ 0.0108 & 0.85 $\pm$ 0.0073 & 0.83 $\pm$ 0.0159 & 0.83 $\pm$ 0.0112 & 0.64 $\pm$ 0.0319 & 0.82 $\pm$ 0.0199 & 0.27 $\pm$ 3.7619 \\ & RF & 0.86 $\pm$ 0.0061 & 0.86 $\pm$ 0.0094 & 0.86 $\pm$ 0.0090 & 0.84 $\pm$ 0.0087 & 0.65 $\pm$ 0.0158 & 0.82 $\pm$ 0.0066 & 1.43 $\pm$ 0.1574 \\ & LR & 0.87 $\pm$ 0.0045 & 0.87 $\pm$ 0.0049 & 0.87 $\pm$ 0.0066 & 0.86 $\pm$ 0.0073 & 0.69 $\pm$ 0.0383 & 0.84 $\pm$ 0.0218 & 3.1 $\pm$ 0.1716 \\ & DT & 0.86 $\pm$ 0.0093 & 0.86 $\pm$ 0.0110 & 0.86 $\pm$ 0.0137 & 0.85 $\pm$ 0.0117 & 0.68 $\pm$ 0.0269 & 0.83 $\pm$ 0.0182 & 0.19 $\pm$ 0.1317 \\ \midrule \multirow{7}{1.2cm}{IGK} & SVM & 0.86 $\pm$ 0.0016 & 0.86 $\pm$ 0.0042 & 0.86 $\pm$ 0.0017 & 0.84 $\pm$ 0.0026 & 0.62 $\pm$ 0.0127 & 0.81 $\pm$ 0.0042 & 4.94 $\pm$ 0.2310 \\ & NB & 0.74 $\pm$ 0.0070 & 0.82 $\pm$ 0.0068 & 0.74 $\pm$ 0.0086 & 0.76 $\pm$ 0.0076 & 0.56 $\pm$ 0.0141 & 0.81 $\pm$ 0.0090 & \textbf{0.08} $\pm$ 0.2484 \\ & MLP & 0.84 $\pm$ 0.0037 & 0.84 $\pm$ 0.0484 & 0.84 $\pm$ 0.0040 & 0.83 $\pm$ 0.0052 & 0.59 $\pm$ 0.0184 & 0.8 $\pm$ 0.0070 & 10.62 $\pm$ 0.1140 \\ & KNN & 0.83 $\pm$ 0.0096 & 0.83 $\pm$ 0.0059 & 0.83 $\pm$ 0.0104 & 0.83 $\pm$ 0.0085 & 0.61 $\pm$ 0.0261 & 0.8 $\pm$ 0.0139 & 0.3 $\pm$ 2.1658 \\ & RF & 0.86 $\pm$ 0.0055 & 0.86 $\pm$ 0.0077 & 0.86 $\pm$ 0.0059 & 0.84 $\pm$ 0.0066 & 0.62 $\pm$ 0.0129 & 0.81 $\pm$ 
0.0046 & 1.19 $\pm$ 0.0906 \\ & LR & 0.86 $\pm$ 0.0040 & 0.86 $\pm$ 0.0040 & 0.86 $\pm$ 0.0043 & 0.83 $\pm$ 0.0055 & 0.6 $\pm$ 0.0313 & 0.8 $\pm$ 0.0153 & 1.65 $\pm$ 0.0988 \\ & DT & 0.84 $\pm$ 0.0083 & 0.84 $\pm$ 0.0090 & 0.84 $\pm$ 0.0089 & 0.83 $\pm$ 0.0088 & 0.59 $\pm$ 0.0220 & 0.8 $\pm$ 0.0127 & 0.18 $\pm$ 0.0758 \\ \midrule \multirow{7}{1.2cm}{OMK + IG} & SVM & 0.87 $\pm$ 0.0020 & 0.87 $\pm$ 0.0038 & 0.87 $\pm$ 0.0019 & 0.85 $\pm$ 0.0027 & 0.69 $\pm$ 0.0118 & 0.84 $\pm$ 0.0036 & 15.09 $\pm$ 0.2306 \\ & NB & 0.76 $\pm$ 0.0090 & 0.84 $\pm$ 0.0061 & 0.76 $\pm$ 0.0095 & 0.77 $\pm$ 0.0081 & 0.6 $\pm$ 0.0130 & 0.83 $\pm$ 0.0077 & 0.1 $\pm$ 0.2480 \\ & MLP & 0.86 $\pm$ 0.0047 & 0.85 $\pm$ 0.0437 & 0.86 $\pm$ 0.0044 & 0.85 $\pm$ 0.0055 & 0.66 $\pm$ 0.0170 & 0.83 $\pm$ 0.0059 & 18.67 $\pm$ 0.1133 \\ & KNN & 0.85 $\pm$ 0.0122 & 0.85 $\pm$ 0.0054 & 0.85 $\pm$ 0.0114 & 0.85 $\pm$ 0.0089 & 0.65 $\pm$ 0.0241 & 0.81 $\pm$ 0.0119 & 0.3 $\pm$ 2.1622 \\ & RF & \textbf{0.88} $\pm$ 0.0070 & \textbf{0.88} $\pm$ 0.0069 & \textbf{0.88} $\pm$ 0.0065 & \textbf{0.87} $\pm$ 0.0069 & \textbf{0.70} $\pm$ 0.0119 & \textbf{0.85} $\pm$ 0.0040 & 1.29 $\pm$ 0.0905 \\ & LR & 0.87 $\pm$ 0.0051 & 0.87 $\pm$ 0.0036 & 0.87 $\pm$ 0.0048 & 0.86 $\pm$ 0.0058 & 0.68 $\pm$ 0.0290 & 0.84 $\pm$ 0.0130 & 2.37 $\pm$ 0.0986 \\ & DT & 0.85 $\pm$ 0.0106 & 0.85 $\pm$ 0.0081 & 0.85 $\pm$ 0.0099 & 0.84 $\pm$ 0.0093 & 0.66 $\pm$ 0.0203 & 0.83 $\pm$ 0.0108 & 0.21 $\pm$ 0.0757 \\
\bottomrule \end{tabular}}
\caption{Average $\pm$ standard deviation classification results for the GISAID-2 dataset. Best values are shown in bold.}
\label{tbl_classification_gisaid2}
\end{table*}
\subsection{Clustering Results}
To evaluate the performance of the different methods in terms of clustering, we report the quality of the clustering using different evaluation metrics. The clustering results for the GISAID-1 and GISAID-2 datasets are given in Table~\ref{tbl_clust_1} and Table~\ref{tbl_clust_2}, respectively. For the silhouette coefficient, the OMK method performs better than the other methods on both datasets. In terms of the Calinski-Harabasz score, IGK performs better than the baselines on both datasets, while OHE outperforms all methods in terms of the Davies-Bouldin score. However, one problem with the OHE method is its runtime complexity due to the high dimensionality of the vectors. In terms of runtime, OMK + IG performs better than the other methods in the case of the GISAID-1 dataset, while OMK outperforms the other methods in the case of the GISAID-2 dataset. From the reported clustering results, we can conclude that no single method outperforms all other approaches on all evaluation metrics (as can be seen from the classification results). However, the kernel-based methods appear to perform better overall on both datasets (similar behavior is observed in the classification results).
\begin{table}[h!]
\centering
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{p{2cm}p{1.4cm}p{2.1cm}p{1.9cm}p{0.9cm}}
\toprule
\multirow{2}{*}{Methods} & Silhouette Coefficient & Calinski-Harabasz Score & Davies-Bouldin Score & Runtime (Sec.) \\
\midrule \midrule
OHE & 0.856 & 32376.919 & \textbf{0.250} & 24.77 \\
Spike2Vec & 0.834 & 22794.361 & 0.467 & 11.31 \\
PWM2Vec & 0.477 & 1762.983 & 1.007 & 1.45 \\
Kernel Approx.
& 0.851 & 24619.646 & 0.423 & 0.078 \\
OMK & \textbf{0.858} & 22083.103 & 0.456 & 0.080 \\
IGK & 0.717 & \textbf{37924.721} & 0.489 & 0.093 \\
OMK + IG & 0.672 & 14459.024 & 0.578 & \textbf{0.073} \\
\bottomrule
\end{tabular}}
\caption{Clustering performance comparison using different metrics for $k$-means on the GISAID-1 dataset. Best values are shown in bold.}
\label{tbl_clust_1}
\end{table}
\begin{table}[h!]
\centering
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{p{2cm}p{1.4cm}p{2.1cm}p{1.9cm}p{0.9cm}}
\toprule
\multirow{2}{*}{Methods} & Silhouette Coefficient & Calinski-Harabasz Score & Davies-Bouldin Score & Runtime (Sec.) \\
\midrule \midrule
OHE & 0.862 & 34223.922 & \textbf{0.239} & 20.48 \\
Spike2Vec & 0.840 & 30878.005 & 0.465 & 11.32 \\
PWM2Vec & 0.487 & 2061.861 & 1.033 & 1.27 \\
Kernel Approx. & 0.863 & 34296.208 & 0.425 & 0.096 \\
OMK & \textbf{0.864} & 33966.355 & 0.425 & \textbf{0.058} \\
IGK & 0.714 & \textbf{35821.496} & 0.502 & 0.088 \\
OMK + IG & 0.645 & 17086.919 & 0.562 & 0.075 \\
\bottomrule
\end{tabular}}
\caption{Clustering performance comparison using different metrics for $k$-means on the GISAID-2 dataset. Best values are shown in bold.}
\label{tbl_clust_2}
\end{table}
\subsection{Kernel Computation Runtime}
We report the kernel computation runtime for the Approximate kernel, IGK, OMK, and OMK + IG methods in Table~\ref{tbl_kernel_runtime} (for the GISAID-1 dataset). We can observe that since IGK contains the smallest number of amino acids in each sequence, its kernel computation time is the minimum, whereas the OMK method takes the maximum time ($2163$ sec.) to compute the kernel matrix. Since both the GISAID-1 and GISAID-2 datasets contain the same number of sequences, the kernel computation times for the two datasets are similar.
\begin{table}[h!]
\centering
\begin{tabular}{ccc}
\toprule
Method & Runtime (sec.) & \# of Amino Acids \\
\midrule \midrule
OMK & 2163.02 & 3798 \\
OMK + IG & 1818.05 & 2184 \\
Kernel Approx. & 1510.07 & 1274 \\
IGK & 1048.03 & 243 \\
\bottomrule
\end{tabular}
\caption{Kernel computation runtime for the different methods. Note that since both the GISAID-1 and GISAID-2 datasets have $7000$ sequences each, the kernel computation runtime for both datasets is similar. The last column shows the number of amino acids in the input data for the kernel matrices.}
\label{tbl_kernel_runtime}
\end{table}
\section{Conclusion}\label{sec_conclusion}
The COVID-19 outbreak induced by SARS-CoV-2 has captured the attention of the scientific community across the world. Current research on SARS-CoV-2 has focused on understanding the transmission pattern of the virus, identifying new variants, improving public health, and developing state-of-the-art vaccine and treatment options. Computational biology has played a significant role in this scientific journey toward a comprehensive understanding and management of the COVID-19 pandemic. Especially in the processing of high-throughput sequencing data, there is an unmet need to classify genomic data accurately. Here, we propose three different settings to efficiently perform machine learning tasks such as classification and clustering on SARS-CoV-2 variants using spike sequences. Results show that the minimizer plus information gain-based method outperforms the existing baseline and state-of-the-art methods in terms of predictive performance. In the future, we will work towards detecting new (unknown) variants (such as Omicron) based on whole genome sequences. We will also collect more data to test the scalability of the proposed model.
Another exciting future work is considering other attributes like countries, cities, and dates to design richer feature vector representations for the spike sequences.
\section*{Acknowledgements}
The authors would like to acknowledge funding from an MBD fellowship to Sarwan Ali and Bikram Sahoo, and a Georgia State University Computer Science Startup Grant to Murray Patterson.
\bibliographystyle{IEEEtran}
% Generated by IEEEtran.bst, version: 1.14 (2015/08/26)
\begin{thebibliography}{10}
\providecommand{\url}[1]{#1}
\csname url@samestyle\endcsname
\providecommand{\newblock}{\relax}
\providecommand{\bibinfo}[2]{#2}
\providecommand{\BIBentrySTDinterwordspacing}{\spaceskip=0pt\relax}
\providecommand{\BIBentryALTinterwordstretchfactor}{4}
\providecommand{\BIBentryALTinterwordspacing}{\spaceskip=\fontdimen2\font plus \BIBentryALTinterwordstretchfactor\fontdimen3\font minus \fontdimen4\font\relax}
\providecommand{\BIBforeignlanguage}[2]{{%
\expandafter\ifx\csname l@#1\endcsname\relax
\typeout{** WARNING: IEEEtran.bst: No hyphenation pattern has been}%
\typeout{** loaded for the language `#1'. Using the pattern for}%
\typeout{** the default language instead.}%
\else
\language=\csname l@#1\endcsname
\fi
#2}}
\providecommand{\BIBdecl}{\relax}
\BIBdecl
[1] J.~Hadfield, C.~Megill, S.~Bell, J.~Huddleston, B.~Potter, C.~Callender, P.~Sagulenko, T.~Bedford, and R.~Neher, ``Nextstrain: real-time tracking of pathogen evolution,'' \emph{Bioinformatics}, vol.~34, pp. 4121--4123, 2018.
[2] A.~Melnyk, F.~Mohebbi, S.~Knyazev, B.~Sahoo, R.~Hosseini, P.~Skums, A.~Zelikovsky, and M.~Patterson, ``From alpha to zeta: Identifying variants and subtypes of {SARS-CoV-2} via clustering,'' \emph{Journal of Computational Biology}, vol.~28, no.~11, pp. 1113--1129, 2021.
[3] S.~Ali, B.~Sahoo, N.~Ullah, A.~Zelikovskiy, M.~Patterson, and I.~Khan, ``A k-mer based approach for {SARS-CoV-2} variant identification,'' in \emph{International Symposium on Bioinformatics Research and Applications}, 2021, pp. 153--164.
[4] S.~Ali and M.~Patterson, ``{Spike2Vec}: An efficient and scalable embedding approach for {COVID-19} spike sequences,'' in \emph{IEEE International Conference on Big Data (Big Data)}, 2021, pp. 1533--1540.
[5] S.~Ali, B.~Bello, P.~Chourasia, R.~T. Punathil, Y.~Zhou, and M.~Patterson, ``{PWM2Vec}: An efficient embedding approach for viral host specification from coronavirus spike sequences,'' \emph{MDPI Biology}, 2022.
[6] K.~Kuzmin, E.~Adeniyi, A.~DaSouza~Jr, D.~Lim, H.~Nguyen, N.~Molina, L.~Xiong, I.~Weber, and R.~Harrison, ``Machine learning methods accurately predict host specificity of coronaviruses based on spike sequences alone,'' \emph{Biochemical and Biophysical Research Communications}, vol. 533, no.~3, pp. 553--558, 2020.
[7] S.~Ali, T.~E. Ali, M.~A. Khan, I.~Khan, and M.~Patterson, ``Effective and scalable clustering of {SARS-CoV-2} sequences,'' in \emph{2021 the 5th International Conference on Big Data Research (ICBDR)}, 2021, pp. 42--49.
[8] M.~Roberts, W.~Hayes, B.~Hunt, S.~Mount, and J.~Yorke, ``Reducing storage requirements for biological sequence comparison,'' \emph{Bioinformatics}, vol.~20, pp. 3363--9, 2004.
[9] P.~Kuksa, I.~Khan, and V.~Pavlovic, ``Generalized similarity kernels for efficient sequence classification,'' in \emph{SIAM International Conference on Data Mining (SDM)}, 2012, pp. 873--882.
\end{document}
# Two-center problem with harmonic-like interactions: periodic orbits and integrability

Adrian M. Escobar Ruiz1, Lidia Jiménez–Lara1 and Jaume Llibre2 1 Departamento de Física, Universidad Autónoma Metropolitana–Iztapalapa, P.O. Box 55–534, México, D.F., 09340 México<EMAIL_ADDRESS><EMAIL_ADDRESS>2 Departament de Matemàtiques, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona, Catalonia, Spain<EMAIL_ADDRESS>

###### Abstract.

We study the classical planar two-center problem of a particle $m$ subjected to harmonic-like interactions with two fixed centers. For convenient values of the dimensionless parameter of this problem we use the averaging theory to show analytically the existence of periodic orbits bifurcating from two of the three equilibrium points of the Hamiltonian system modeling this problem. Moreover, it is shown that the system is generically non-integrable in the sense of Liouville–Arnold. The analytical results are complemented by numerical computations of the Poincaré sections, as well as by some explicit periodic orbits.

###### 1991 Mathematics Subject Classification: Primary: 34C05

## 1\. Introduction and statement of the main results

Nonlinear dynamical systems are central objects in theoretical physics. For instance, in classical mechanics [1, 2, 3, 4] one encounters the rich and complex behaviour of physically relevant systems such as the 3-body problem, dynamical astronomy, Hénon-Heiles Hamiltonian systems, rigid body problems, and nonlinear Hamiltonian systems [5, 6, 7]. Among the various trajectories that mechanical systems can exhibit, periodic solutions play a fundamental role. These trajectories represent a closed motion in the phase space, offering valuable insights into the underlying dynamical properties and stability of the system. Needless to say, in general, the intricate time evolution of nonlinear systems does not admit a straightforward analysis. There is a lack of universally applicable formulas for determining periodic trajectories in dynamical systems. Therefore, the development of asymptotic approximations to the solutions of a nonlinear differential system, as well as of the corresponding numerical approaches, is needed.

In this context the averaging theory formulated in Fatou’s seminal work [8] offers a systematic approach to extract essential information from complex dynamical systems. Subsequent contributions in the 1930s by Bogoliubov and Krylov [9], as referenced by Bogoliubov [10] in 1945, significantly increased both practical applications and theoretical understanding of the averaging theory. Over time, the ideas of averaging theory have undergone refinement and expansion in various directions, catering to both finite and infinite-dimensional differentiable systems. For contemporary literature and developments in averaging theory, we refer the readers to the works of Sanders, Verhulst and Murdock (see [11] and references therein), and Verhulst [12], among others, which provide modern expositions and present-day results on the subject.

The fundamental premise of the averaging theory lies in the recognition that many physical systems exhibit fast and slow motions simultaneously. By exploiting this timescale separation, averaging techniques aim to construct simplified models that capture the essential dynamics while filtering out fast oscillations and transient behavior. This reduction in complexity not only facilitates analytical tractability but also provides qualitative insights into the long-term behavior of the system.
Concrete applications can be found in the works [13, 14].

In this paper we aim to investigate the dynamics of a two-center problem with harmonic interactions using the averaging theory. This system can be viewed as a limiting case of the 3-body harmonic oscillator, a nine-parameter system with three arbitrary masses, three rest lengths, and three spring constants. In the simplest scenario involving equal masses on the plane and equal spring constants, the 3-body harmonic oscillator exhibits a remarkably diverse dynamics as a function of the energy [15, 16]. This diversity manifests in power-law statistics reminiscent of the Lévy-walk model [15]. Even when restricted to the invariant manifold of zero total angular momentum, the parameter space displays regions of both regular and chaotic dynamics due to inherent non-linearities stemming from non-zero rest lengths [17]. Upon setting these rest lengths equal to zero, the system attains superintegrability. It is noteworthy that at zero rest lengths, the corresponding classical and quantum systems become exactly solvable [18]. However, for non-zero rest lengths, an exact solution does not exist. Consequently, the quantum 3-body harmonic oscillator [19] serves as a practical model for testing theoretical and numerical methodologies aimed at elucidating the interplay between classical and quantum mechanics within chaotic systems.

In the case when two bodies are considered infinitely massive, the 3-body harmonic oscillator reduces to the two-center problem, which is the system investigated in this paper. Numerical as well as analytical tools based on the averaging theory are used to explore the dynamics of this system. The main goal is to find periodic trajectories emanating from the equilibrium points of this system.

### 1.1. Equations of motion of the two-center problem with harmonic-like interactions

In the Euclidean space $\mathbb{R}^{2}$ we consider a two-center problem of a non-relativistic point particle $m$ subjected to harmonic-like interactions with two fixed centers possessing the same elastic constant $k>0$. In Cartesian coordinates $(X,Y)$ the Hamiltonian of the system is of the form:

(1) ${\mathcal{H}}\ =\ \dfrac{1}{2\,m}\left(\,P_{X}^{2}\ +\ P_{Y}^{2}\,\right)\ +\ V(X,Y)\ ,$

where $V(X,Y)$ is the translational-invariant potential

$\begin{split}V(X,Y)\ &=\ \dfrac{1}{2}\,k\,\big{(}\,\left(R_{1}-A\right)^{2}\ +\ \left(R_{2}-A\right)^{2}\,\big{)}\ ,\\ &=\ \dfrac{1}{2}\,k\,\bigg{(}\left(\sqrt{(X+L)^{2}+Y^{2}}-A\right)^{2}\ +\ \left(\sqrt{(X-L)^{2}+Y^{2}}-A\right)^{2}\,\bigg{)}\ ,\end{split}$

$R_{1}$, $R_{2}$ are the distances from the mass $m$ to the two fixed centers, respectively, which we assume are located at $(\pm L,0)$, and the constant $A\geq 0$ denotes the equilibrium distance from $m$ to each one of the fixed centers. Hence, the phase space is four-dimensional. First, for the non-dimensionalization of ${\mathcal{H}}$, we divide the expression (1) by $m\,L^{2}\omega^{2}$, where $\omega^{2}=k/m$, and define a dimensionless time $\tau=t\,\omega$.
More precisely, we introduce the set of non-dimensional quantities:

$\begin{split}x&=X/L\ ,\quad y=Y/L\ ,\quad a=A/L\,,\quad r_{i}=R_{i}/L\ ,\quad\tau=t\,\omega\ ,\\ p_{x}&=\dfrac{dx}{d\tau}\ ,\quad p_{y}=\dfrac{dy}{d\tau}\ ,\quad H={\mathcal{H}}/(m\,L^{2}\omega^{2})\ ,\quad U=V/(m\,L^{2}\omega^{2})\ .\end{split}$

In these variables the original Hamiltonian (1) and potential are written in dimensionless form as follows

(2) $H\ =\ \dfrac{1}{2}\left(\,p_{x}^{2}\ +\ p_{y}^{2}\,\right)\ +\ U(x,y)\ ,$

$U(x,y)\ =\ \dfrac{1}{2}\left(\,\left(\sqrt{(x+1)^{2}+y^{2}}\,-\,a\right)^{2}+\left(\sqrt{(x-1)^{2}+y^{2}}\,-\,a\right)^{2}\,\right)\ ,$

where the only remaining dimensionless parameter is $a$. The geometrical setting of the system is presented in detail in Fig. 1, and the potential is plotted in Fig. 2. In the special case $a=2$, the configuration of equilibrium $r_{1}=r_{2}=a$ corresponds to the equilateral triangle with sides $(2,2,2)$, where the particle and the two centers mark the vertices.

Figure 1. Planar two-center problem with harmonic-like interactions in dimensionless variables. The distance between the two fixed centers is 2 and the only free parameter is $a=A/L$. The reference system with the origin at the midpoint of the line which connects the two centers is adopted. These centers are located at $(\pm 1,\,0)$, respectively.

At $a=0$ system (2) coincides with the 2D isotropic harmonic oscillator, a superintegrable system possessing three algebraically independent first integrals in the Liouville sense. Along the line $x=0$, for $0\leq a\leq 1$ the potential (2) possesses a critical point (a minimum) at $y=0$, with $U(0,0)={(a-1)}^{2}$, whilst for $a>1$ the point $(0,0)$ becomes a maximum and two symmetric minima appear at $y_{\pm}=\pm\sqrt{a^{2}-1}$, with $U(0,y_{\pm})=0$. On the line $y=0$, for $0\leq a\leq 1$ the potential (2) displays a minimum at $x=0$, whereas for $a>1$ two additional symmetric maxima emerge at $x=\pm 1$. Also, for any value of $a$ the derivative of the potential is discontinuous at $x=\pm 1$.

Figure 2. The potential $U(x,y)$ in (2) as a function of the parameter $a$; (a) $a=\frac{1}{2}$, (b) $a=\frac{3}{2}$, (c) $a=2$. The corresponding level curves are shown in Figs. (d), (e) and (f), respectively.

For the Hamiltonian (2) the associated Hamilton’s equations of motion are

(3) $\displaystyle\dot{x}$ $\displaystyle=\ p_{x},$ $\displaystyle\dot{y}$ $\displaystyle=\ p_{y},$ $\displaystyle\dot{p}_{x}$ $\displaystyle=\ -\left(2\,x\,-\,a\left(\dfrac{x+1}{\sqrt{(x+1)^{2}+y^{2}}}\ +\ \dfrac{x-1}{\sqrt{(x-1)^{2}+y^{2}}}\right)\right),$ $\displaystyle\dot{p}_{y}$ $\displaystyle=\ -\left(2\,y\,-\,a\,y\left(\dfrac{1}{\sqrt{(x+1)^{2}+y^{2}}}\ +\ \dfrac{1}{\sqrt{(x-1)^{2}+y^{2}}}\right)\right).$

This differential system is the main object of study of the present paper. It is invariant under the discrete symmetries

$\begin{array}[]{l}\mathcal{S}_{1}:\ (x\rightarrow-x,\,y\rightarrow y,\,p_{x}\rightarrow-p_{x},\,p_{y}\rightarrow p_{y}),\\ \mathcal{S}_{2}:\ (x\rightarrow x,\,y\rightarrow-y,\,p_{x}\rightarrow p_{x},\,p_{y}\rightarrow-p_{y}),\end{array}$

and $\mathcal{S}_{1}\circ\mathcal{S}_{2}$. Hence the orbits are symmetric with respect to the planes $(x,0,p_{x},0)$ and $(0,y,0,p_{y})$, as well as with respect to the origin under the symmetry $\mathcal{S}_{1}\circ\mathcal{S}_{2}$.
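Before stating the main results, we note that the flow of (3) is straightforward to explore numerically. The following minimal Python sketch (our illustration, not code from the original study; the parameter value $a=3/2$ and the initial condition are assumed for the example) integrates Hamilton’s equations (3) and monitors conservation of the energy (2) as an accuracy check.

```python
# Minimal sketch: numerical integration of Hamilton's equations (3).
# The parameter value a = 3/2 and the initial condition are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

a = 1.5  # dimensionless parameter a = A/L (assumed value)


def rhs(t, state):
    """Right-hand side of the differential system (3)."""
    x, y, px, py = state
    r1 = np.hypot(x + 1.0, y)   # distance to the center at (-1, 0)
    r2 = np.hypot(x - 1.0, y)   # distance to the center at (+1, 0)
    return [px,
            py,
            -(2.0 * x - a * ((x + 1.0) / r1 + (x - 1.0) / r2)),
            -(2.0 * y - a * y * (1.0 / r1 + 1.0 / r2))]


def energy(state):
    """Dimensionless Hamiltonian (2)."""
    x, y, px, py = state
    r1 = np.hypot(x + 1.0, y)
    r2 = np.hypot(x - 1.0, y)
    return 0.5 * (px**2 + py**2) + 0.5 * ((r1 - a)**2 + (r2 - a)**2)


# start near the equilibrium (0, sqrt(a^2 - 1), 0, 0)
s0 = [0.05, np.sqrt(a**2 - 1.0) + 0.05, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 200.0), s0, rtol=1e-10, atol=1e-12)
print("energy drift:", abs(energy(sol.y[:, -1]) - energy(s0)))
```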
### 1.2. Main results

System (3) has three equilibrium points ${\mathbf{x}}_{0}\,\equiv\,(x,y,p_{x},p_{y})$, namely:

$(0,\,0,\,0,\,0),\quad(0,\,-\sqrt{a^{2}-1},\,0,\,0),\quad(0,\,\sqrt{a^{2}-1},\,0,\,0).$

For $a>1$ the eigenvalues of the equilibrium point $(0,\,\sqrt{a^{2}-1},\,0,\,0)$ are purely imaginary, i.e. they are $\pm i\,\sqrt{2}/a$ and $\pm i\sqrt{2(a^{2}-1)}/a$, and in the next theorem we describe the periodic orbits that bifurcate from this equilibrium.

###### Theorem 1.

For $a>1$ in each energy level $H=h$ with $h>0$ sufficiently small, from the equilibrium point $(0,\,\sqrt{a^{2}-1},\,0,\,0)$ of the Hamiltonian system (3) can bifurcate one or more periodic orbits $(x(t,\varepsilon),y(t,\varepsilon),p_{x}(t,\varepsilon),p_{y}(t,\varepsilon))$ with initial conditions $(x(0,\varepsilon),y(0,\varepsilon),p_{x}(0,\varepsilon),p_{y}(0,\varepsilon))$ of the form

(4) $\left(\varepsilon\tilde{r},\,\sqrt{a^{2}-1}+\varepsilon\tilde{\rho}\cos\frac{\sqrt{2}g}{a}\tilde{s},\,0,\,-\varepsilon\dfrac{\sqrt{2}g}{a}\tilde{\rho}\sin\frac{\sqrt{2}g}{a}\tilde{s}\right)\ +\ O(\varepsilon^{2}),$

when the determinant of the Jacobian matrix

(5) $\left.\frac{\partial(f_{1}(\rho,s),f_{2}(\rho,s))}{\partial(\rho,s)}\right|_{\rho=\tilde{\rho},s=\tilde{s}}\neq 0\ .$

Here $\varepsilon>0$ is a small parameter and $g\equiv\sqrt{a^{2}-1}\in\mathbb{Q}$. The values of the functions $f_{i}(\rho,s)$ for $i=1,2$, and of the constants $\tilde{r}$, $\tilde{\rho}$, $\tilde{s}$ are given in the proof of this theorem.

Theorem 1 is proved in section 4. In section 5, for different values of the parameter $a$, we prove the existence of one or two periodic orbits given by Theorem 1. Note that when $\varepsilon\to 0$ the periodic orbit of Theorem 1 bifurcates from the equilibrium point $(0,\,\sqrt{a^{2}-1},\,0,\,0)$. Due to the $\mathcal{S}_{2}$ symmetry, by studying the periodic orbit bifurcating from the equilibrium $(0,\,\sqrt{a^{2}-1},\,0,\,0)$ we are also studying the periodic orbit bifurcating from the symmetric equilibrium $(0,\,-\sqrt{a^{2}-1},\,0,\,0)$.

If the periodic orbit obtained in Theorem 1 is not invariant under the $\mathcal{S}_{1}$ symmetry of the differential system (3), there is another symmetric periodic orbit, distinct from the one given in Theorem 1, with the initial condition $(-x(0,\varepsilon),\allowbreak y(0,\varepsilon),\allowbreak-p_{x}(0,\varepsilon),\allowbreak p_{y}(0,\varepsilon))$, which is also near the equilibrium point $(0,\,\sqrt{a^{2}-1},\,0,\,0)$. On the other hand, if

$C=\left(\tilde{\rho}\cos\frac{\sqrt{2}g}{a}\tilde{s}\right)^{2}+\left(\dfrac{\sqrt{2}g}{a}\tilde{\rho}\sin\frac{\sqrt{2}g}{a}\tilde{s}\right)^{2}\neq 0\,,$

then the $\mathcal{S}_{2}$ symmetry of the differential system (3) provides another periodic orbit, distinct from the one given by Theorem 1, with initial conditions $(x(0,\varepsilon),\allowbreak-y(0,\varepsilon),\allowbreak p_{x}(0,\varepsilon),\allowbreak-p_{y}(0,\varepsilon))$; however, this orbit is not near the original equilibrium point $(0,\sqrt{a^{2}-1},\,0,\,0)$, but near $(0,-\sqrt{a^{2}-1},\,0,\,0)$. Finally, if $\tilde{r}C\neq 0$, then the $\mathcal{S}_{1}\circ\mathcal{S}_{2}$ symmetry of the differential system (3) provides another periodic orbit, distinct from the one given in Theorem 1, with the initial condition $(-x(0,\varepsilon),-y(0,\varepsilon),-p_{x}(0,\varepsilon),-p_{y}(0,\varepsilon))$, and consequently near the equilibrium point $(0,-\sqrt{a^{2}-1},\,0,\,0)$.
There is a local symmetry which inverts $y$ and $p_{y}$ around the equilibrium point $(0,\sqrt{a^{2}-1},\,0,\,0)$, valid only when $\varepsilon\rightarrow 0$. In this way, for each periodic orbit bifurcating from the equilibrium point, there can be four periodic orbits bifurcating simultaneously from it. In short, we have proved the next corollary.

###### Corollary 2.

Under the assumptions of Theorem 1 and for each periodic orbit bifurcating from the equilibrium $(0,\,\sqrt{a^{2}-1},\,0,\,0)$, if $\tilde{r}C\neq 0$ and $a>1$, then at each energy level $H=h$ of the Hamiltonian system (3) with $h>0$ sufficiently small, there are at least $4$ periodic orbits, $2$ near the equilibrium $(0,\,\sqrt{a^{2}-1},\,0,\,0)$, and the other two near the equilibrium $(0,-\sqrt{a^{2}-1},\,0,\,0)$.

Of course, both integrable and non-integrable Hamiltonian systems can have infinitely many periodic orbits. In general, it is not easy to find explicitly a whole family of analytical periodic orbits, especially when the Hamiltonian system is non-integrable. Here we find them in Theorem 1 and in Corollary 2. Once we have proved that at any sufficiently small positive energy level there exist analytic periodic orbits, we can use them to prove the next result about the non-integrability of the Hamiltonian system (3) in the sense of Liouville–Arnold.

###### Theorem 3.

Suppose that the Hamiltonian system (3) satisfies the hypotheses of Theorem 1 and Corollary 2. Then one of the following two statements holds:

* (a) If the determinant of the fundamental matrix associated to some of the periodic orbits of Corollary 2 is different from $1$, then the Hamiltonian system is not Liouville–Arnold integrable.

* (b) If all the determinants of the fundamental matrices associated to the periodic orbits of Corollary 2 are $1$, the Hamiltonian system can be Liouville–Arnold integrable.

The paper is structured as follows. In section 2 we show the Poincaré sections of the flow of the Hamiltonian system (3) for the values of the parameter $a=3/2,2,\sqrt{5},5$ and for some fixed values of the energy $H$. These Poincaré sections provide some information about the global dynamics of the Hamiltonian system (3). In section 3 we recall the basic results of the averaging theory of first order for computing periodic orbits that we shall need for proving Theorem 1. As already mentioned, in section 4 we prove Theorem 1. In section 5 we compute the periodic orbit of Theorem 1 for the values of the parameter $a=\sqrt{29}/2,\sqrt{5},5/4$. The case $a=\sqrt{5}$, which is excluded in Theorem 1, is addressed in this section, and the periodic orbits are explicitly exhibited and graphed. Finally, in section 6 we prove Theorem 3.

## 2\. Poincaré sections

Now for the Hamiltonian (2) we present the Poincaré sections on the $(y,p_{y})$ plane, considering the values $a=3/2,2,\sqrt{5},5$ as a function of the energy $E=H$. The configuration of equilibrium $r_{1}=r_{2}=a$ with $a=2$ corresponds to the equilateral triangle with sides $(2,2,2)$. The Poincaré sections are determined from the intersection of trajectories, associated with given initial conditions within the phase space, with a lower-dimensional subspace ($x=0$, $y$, $p_{x}(y,p_{y},E)$, $p_{y}$). They are planes transversal to the flow of the Hamiltonian system, and they can be regarded as a discretized version of the dynamical system, retaining relevant properties of the original continuous system but acting in a reduced phase space. A minimal numerical sketch of this construction is given below.
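The following Python sketch is our assumed re-implementation of the construction (the computations reported below used 120 random initial conditions integrated in MATLAB; here the parameter $a$, the sampling windows, the seed count and the integration time are illustrative choices). It samples initial conditions on the plane $x=0$ at a fixed energy $E$, completes $p_{x}$ from energy conservation, and records the oriented crossings of $x=0$.

```python
# Sketch of a Poincare section on the (y, p_y) plane at fixed energy E.
# All numerical parameters here are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

a = 1.5
E = 0.25 * (a - 1.0)**2          # e.g. E = E_s/4 with E_s = U(0,0) = (a-1)^2


def U(x, y):
    r1 = np.hypot(x + 1.0, y)
    r2 = np.hypot(x - 1.0, y)
    return 0.5 * ((r1 - a)**2 + (r2 - a)**2)


def rhs(t, s):
    x, y, px, py = s
    r1 = np.hypot(x + 1.0, y)
    r2 = np.hypot(x - 1.0, y)
    return [px, py,
            -(2.0 * x - a * ((x + 1.0) / r1 + (x - 1.0) / r2)),
            -(2.0 * y - a * y * (1.0 / r1 + 1.0 / r2))]


def crossing(t, s):              # section plane x = 0, oriented: p_x > 0
    return s[0]


crossing.direction = 1.0

rng = np.random.default_rng(0)
points = []
for _ in range(20):              # 20 random seeds here (120 in the text)
    y0, py0 = rng.uniform(-2.5, 2.5), rng.uniform(-1.0, 1.0)
    px2 = 2.0 * (E - U(0.0, y0)) - py0**2   # p_x^2 from H = E at x = 0
    if px2 <= 0.0:
        continue                 # (y0, py0) outside the accessible region
    sol = solve_ivp(rhs, (0.0, 2000.0), [0.0, y0, np.sqrt(px2), py0],
                    events=crossing, rtol=1e-9, atol=1e-11)
    points.append(sol.y_events[0][:, [1, 3]])   # (y, p_y) at crossings
section = np.vstack(points) if points else np.empty((0, 2))
```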
For the calculations of the Poincaré sections we take as a reference point of energy the value $E_{s}$ of the potential (2) evaluated at the saddle point $(0,0)$, namely $E_{s}\ =\ U(0,0)\ =\ {(a-1)}^{2}$. Figures 3, 4, 5 and 6 display the (oriented) Poincaré sections for the system (2) at $a=3/2,2,\sqrt{5},5$, respectively, as a function of the energy $E=H$ in units of $E_{s}$. They were obtained from numerical simulations for 120 random initial conditions with a simulation time of 6000. Computations were performed in MATLAB on a personal laptop.

(a) $E=\frac{1}{4}E_{s}$ (b) $E=\frac{1}{2}E_{s}$ (c) $E=\frac{3}{4}E_{s}$ (d) $E=\frac{99}{100}E_{s}$ (e) $E=4\,E_{s}$

Figure 3. Poincaré sections on the plane $(y,\,p_{y})$, for the Hamiltonian $H$ (2) with $a=\frac{3}{2}$, at different values of the energy $H=E$. In this case the equilibrium points are $(0,\,\pm\frac{1}{2}\sqrt{5},\,0,\,0)$ and $E_{s}=\frac{1}{4}\,.$

For $E<E_{s}$, the Poincaré sections consist of two symmetric disconnected regions which smoothly merge at $(y=0,p_{y}=0)$ when $E=E_{s}$. For low energy, $E\ll E_{s}$, the zones of regular dynamics dominate the accessible phase-space landscape. At fixed $a$, the presence of chaotic behaviour is prominent at $E\sim E_{s}$. Interestingly, at fixed energy $E$ (in units of $E_{s}$), additional numerical experiments indicate that the degree of chaoticity is not a monotonic function of the parameter $a$ in the interval $a\in(1,5]$ (see, for instance, Figures 3–6 for the case $E=\frac{1}{4}E_{s}$). Nevertheless, the presence of chaos tends to decrease as the value of $a\gtrsim 3$ grows. Notice that even at $E\sim E_{s}$, islands of stability (regular dynamics) persist. Hence, the coexistence of regularity and chaos exhibits the rich dynamics of the two-center problem with harmonic-like interactions. In particular, we highlight the complexity of the structure of the Poincaré sections for the case $a=\sqrt{5}$, see Fig. 5.

(a) $E=\frac{1}{4}E_{s}$ (b) $E=\frac{1}{2}E_{s}$ (c) $E=\frac{3}{4}E_{s}$ (d) $E=\frac{99}{100}E_{s}$ (e) $E=4\,E_{s}$

Figure 4. Poincaré sections on the plane $(y,\,p_{y})$, for the Hamiltonian $H$ (2) with $a=2$, at different values of the energy $H=E$. In this case the equilibrium points are $(0,\,\pm\sqrt{3},\,0,\,0)$ and $E_{s}=1\,.$

(a) $E=\frac{1}{4}E_{s}$ (b) $E=\frac{1}{2}E_{s}$ (c) $E=\frac{3}{4}E_{s}$ (d) $E=\frac{99}{100}E_{s}$ (e) $E=4\,E_{s}$

Figure 5. Poincaré sections on the plane $(y,\,p_{y})$, for the Hamiltonian $H$ (2) with $a=\sqrt{5}$, at different values of the energy $H=E$. The equilibrium points are $(0,\,\pm 2,\,0,\,0)$ and $E_{s}={(\sqrt{5}-1)}^{2}\,.$

(a) $E=\frac{1}{4}E_{s}$ (b) $E=\frac{1}{2}E_{s}$ (c) $E=\frac{3}{4}E_{s}$ (d) $E=\frac{99}{100}E_{s}$ (e) $E=4\,E_{s}$

Figure 6. Poincaré sections on the plane $(y,\,p_{y})$, for the Hamiltonian $H$ (2) with $a=5$, at different values of the energy $H=E$. The equilibrium points are $(0,\,\pm 2\,\sqrt{6},\,0,\,0)$ and $E_{s}=16\,.$

## 3\. The averaging theory of first order

It is worth recalling the basics of the averaging theory (periodic case) of first order. This tool will be used to derive the main results of the present study. Essentially, we deal with the problem of finding $T$-periodic solutions for a differential system whose vector field depends on a small parameter $\varepsilon$. For more details about the averaging theory of first order for finding periodic orbits see [20].
We consider the differential system

(6) ${\dot{\bf x}}(t)\ =\ \varepsilon\,F_{1}(t,{\bf x})\ +\ O(\varepsilon^{2}),$

where $\varepsilon\neq 0$ is a sufficiently small parameter, i.e. $|\varepsilon|\ll 1$, $F_{1}:\mathbb{R}\times\Omega\rightarrow\mathbb{R}^{n}$ is a continuous function $T$-periodic in the variable $t$, and $\Omega$ denotes an open subset of $\mathbb{R}^{n}$. Equations of this form often arise after an expansion in the neighborhood of an equilibrium point in convenient coordinates. Now we introduce the averaged function of first order $f_{1}:\Omega\rightarrow\mathbb{R}^{n}$ as follows

(7) $f_{1}({\bf z})\ =\ \frac{1}{T}\int_{0}^{T}F_{1}(s,\,{\bf z})\,ds\ ,$

and also assume that:

* • (i) $F_{1}$ is locally Lipschitz with respect to ${\bf x}$;

* • (ii) for ${\bf z}_{0}$ in $\Omega$ with $f_{1}({\bf z}_{0})=0$, there exists a neighborhood $U$ of ${\bf z}_{0}$ such that $f_{1}({\bf z})\neq 0$ for all ${\bf z}\in U\setminus\\{{\bf z}_{0}\\}$ and $d_{B}(f_{1},\,U,\,{\bf z}_{0})\neq 0$ (the Brouwer degree of $f_{1}$ at ${\bf z}_{0}$ is not zero).

Then for $|\varepsilon|$ sufficiently small, there exists a $T$-periodic solution ${\bf x}(t,\,\varepsilon)$ of system (6) such that ${\bf x}(0,\,\varepsilon)\rightarrow{\bf z}_{0}$ as $\varepsilon\rightarrow 0$. That is, the simple zeros of the averaged function (7) provide the initial conditions for isolated $T$-periodic solutions of the differential system (6). Here a simple zero ${\bf z}_{0}$ of the function $f_{1}$ means that the Jacobian of $f_{1}$ at ${\bf z}_{0}$ is not zero. We recall that if the Jacobian of $f_{1}$ at ${\bf z}_{0}$ is not zero, then the Brouwer degree $d_{B}(f_{1},\,U,\,{\bf z}_{0})\neq 0$; for details see [21].

## 4\. Proof of Theorem 1

In this section we address the problem of finding periodic orbits of the differential system (3) bifurcating from the equilibrium point ${\mathbf{x}}_{0}\,=\,(x,y,p_{x},p_{y})\ =\ (0,\,\sqrt{a^{2}-1},\,0,\,0)$. As a first step we translate the equilibrium ${\mathbf{x}}_{0}$ to the origin of coordinates. To this end, we introduce the canonical transformation

(8) $(x,y,p_{x},p_{y})=(X,Y+g,P,Q).$

In these new variables the Hamiltonian (2) reads

$\begin{array}[]{rl}H=&\frac{1}{2}\big{(}\,P^{2}\ +\ Q^{2}\,\big{)}\ +\ X^{2}\ +\ Y^{2}\ +\ 2\,a^{2}\ +\ 2\,Y\,g\\ &-\ a\,\bigg{(}\sqrt{\left(g+Y\right)^{2}+(X-1)^{2}}\ +\ \sqrt{\left(g+Y\right)^{2}+(X+1)^{2}}\bigg{)},\end{array}$

and its Hamilton’s equations are

(9) $\displaystyle\dot{X}=$ $\displaystyle P,$ $\displaystyle\dot{Y}=$ $\displaystyle Q,$ $\displaystyle\dot{P}=$ $\displaystyle-\frac{(X-1)\left(\sqrt{(g+Y)^{2}+(X-1)^{2}}-a\right)}{\sqrt{(g+Y)^{2}+(X-1)^{2}}}$ $\displaystyle-\frac{(X+1)\left(\sqrt{(g+Y)^{2}+(X+1)^{2}}-a\right)}{\sqrt{(g+Y)^{2}+(X+1)^{2}}},$ $\displaystyle\dot{Q}=$ $\displaystyle(g+Y)\left(a\left(\frac{1}{\sqrt{(g+Y)^{2}+(X+1)^{2}}}+\frac{1}{\sqrt{(g+Y)^{2}+(X-1)^{2}}}\right)-2\right).$

Now it is convenient to perform another change of variables such that the linear part at the origin of the differential system (9) is in its real Jordan normal form.
Direct calculations lead to the transformations

(10) $(X,Y,P,Q)=\left(U,W,-\frac{\sqrt{2}}{a}\,V,-\frac{\sqrt{2}\,g}{a}Z\right).$

From (9) and (10) we obtain the differential system

(11) $\displaystyle\dot{U}$ $\displaystyle=-\frac{\sqrt{2}}{a}\,V,$ $\displaystyle\dot{W}$ $\displaystyle=-\frac{\sqrt{2}\,g}{a}\,Z,$ $\displaystyle\dot{V}$ $\displaystyle=\ -\frac{a}{\sqrt{2}}\left[\left[\frac{a\,(U+1)}{\sqrt{(g+W)^{2}+(U+1)^{2}}}+\frac{a\,(U-1)}{\sqrt{(g+W)^{2}+(U-1)^{2}}}\right]-\ 2\,U\,\right],$ $\displaystyle\dot{Z}$ $\displaystyle=\frac{a\,(g+W)}{\sqrt{2}g}\,\left[2-a\left[\frac{1}{\sqrt{(g+W)^{2}+(U+1)^{2}}}+\frac{1}{\sqrt{(g+W)^{2}+(U-1)^{2}}}\right]\right],$

which, by (2), admits the first integral

(12) $\begin{array}[]{rl}H=&\dfrac{(V-Z)(V+Z)}{a^{2}}+a^{2}+(g+W)^{2}+U^{2}+Z^{2}+1\vspace{0.2cm}\\ &-a\left(\sqrt{(g+W)^{2}+(U-1)^{2}}+\sqrt{(g+W)^{2}+(U+1)^{2}}\right).\end{array}$

Next we rescale the variables for rewriting system (11) in a suitable form for applying the averaging theory. Let $\varepsilon$ be a small parameter, and we do the rescaling

(13) $(U,W,V,Z)=(\varepsilon\,u,\varepsilon\,w,\varepsilon\,v,\varepsilon\,z),$

in the differential system (11), and expanding the new differential system in powers of the small parameter $\varepsilon$ we obtain

(14) $\displaystyle\dot{u}$ $\displaystyle=\ -\frac{\sqrt{2}\,v}{a},$ $\displaystyle\dot{w}$ $\displaystyle=\ -\frac{\sqrt{2}\,g\,z}{a},$ $\displaystyle\dot{v}$ $\displaystyle=\ \frac{\sqrt{2}\,u}{a}+\frac{\sqrt{2}\left(a^{2}-3\right)\,g\,u\,w}{a^{3}}\varepsilon+O(\varepsilon^{2}),$ $\displaystyle\dot{z}$ $\displaystyle=\ \frac{\sqrt{2}\,g\,w}{a}\ +\ \frac{\left(\left(a^{2}-3\right)u^{2}+3w^{2}\right)}{\sqrt{2}\,a^{3}}\,\varepsilon+O(\varepsilon^{2}),$

whereas the first integral (12) becomes

$H\ =\ \frac{\left(g^{2}\left(w^{2}+z^{2}\right)+u^{2}+v^{2}\right)}{a^{2}}\varepsilon^{2}\ +\ O(\varepsilon^{3}).$

We introduce the following polar variables $(r,\theta,\rho,\,s)$

(15) $\begin{array}[]{ll}u\,=\,r\,\cos(\sqrt{2}\,\theta/a),&v\,=\,r\,\sin(\sqrt{2}\,\theta/a),\\ w\,=\,\rho\,\cos(\sqrt{2}\,g\,(\theta+s)/a),&z\,=\,\rho\,\sin(\sqrt{2}\,g\,(\theta+s)/a).\end{array}$

Now we take the angular variable $\theta$ as the new independent variable; in this way the differential system becomes periodic in the variable $\theta$.
So, from (14) and (15) we arrive at the differential system

(16) $\begin{array}[]{rl}\dfrac{dr}{d\theta}=&\dfrac{\varepsilon}{\sqrt{2}a^{3}}\left(a^{2}-3\right)\,g\,\rho\,r\,\sin\frac{2\sqrt{2}\theta}{a}\cos\frac{\sqrt{2}g(\theta+s)}{a}+O(\varepsilon^{2}),\vspace{0.2cm}\\ \dfrac{d\rho}{d\theta}=&\dfrac{\varepsilon}{\sqrt{2}a^{3}}\sin\frac{\sqrt{2}g(\theta+s)}{a}\left(\left(a^{2}-3\right)r^{2}\cos^{2}\frac{\sqrt{2}\theta}{a}+3\rho^{2}\cos^{2}\frac{\sqrt{2}g(\theta+s)}{a}\right)+O(\varepsilon^{2}),\vspace{0.2cm}\\ \dfrac{ds}{d\theta}=&\dfrac{\varepsilon}{4a^{2}g\rho}\cos\frac{\sqrt{2}g(\theta+s)}{a}\left(\left(a^{2}-3\right)\cos\frac{2\sqrt{2}\theta}{a}\left(r^{2}-2g^{2}\rho^{2}\right)+\left(a^{2}-3\right)r^{2}\right.\vspace{0.2cm}\\ &\left.+\left(8a^{2}-2a^{4}-3\right)\rho^{2}+3\rho^{2}\cos\frac{2\sqrt{2}g(\theta+s)}{a}\right)+O(\varepsilon^{2}),\end{array}$

possessing the first integral

$\displaystyle H\ =\ \dfrac{\left(g^{2}\,\rho^{2}\,+\,r^{2}\right)}{a^{2}}\,\varepsilon^{2}\ +\ O(\varepsilon^{3}).$

Since in Hamiltonian systems the periodic orbits generically appear in cylinders of periodic orbits parametrized by the values of the Hamiltonian $H$, and the averaging theory can only detect periodic orbits that are isolated, we restrict the differential system (16) to the energy level $H=\varepsilon^{2}\,h$ with $h>0$. We impose this restriction by computing $r$ as a function of $\rho$ and $s$ in the energy level $H=\varepsilon^{2}\,h$, namely

$r\ =\ \sqrt{a^{2}\left(h-\rho^{2}\right)\,+\,\rho^{2}}+O(\varepsilon).$

Therefore, up to first order in $\varepsilon$, we obtain from (16) the differential system

(17) $\begin{array}[]{rl}\dfrac{d\rho}{d\theta}=&\dfrac{\varepsilon}{\sqrt{2}a^{3}}\sin\frac{\sqrt{2}g(\theta+s)}{a}\left(\left(a^{2}-3\right)\left(a^{2}\left(h-\rho^{2}\right)+\rho^{2}\right)\cos^{2}\frac{\sqrt{2}\theta}{a}\right.\vspace{0.2cm}\\ &\left.+3\rho^{2}\cos^{2}\frac{\sqrt{2}g(\theta+s)}{a}\right)+O(\varepsilon^{2})\vspace{0.2cm}\\ =&F_{11}(\theta,\rho,s)+O(\varepsilon^{2}),\vspace{0.2cm}\\ \dfrac{ds}{d\theta}=&\dfrac{\varepsilon}{4a^{2}g\rho}\cos\frac{\sqrt{2}g(\theta+s)}{a}\left(\left(a^{2}-3\right)\left(a^{2}\left(h-3\rho^{2}\right)+3\rho^{2}\right)\cos\frac{2\sqrt{2}\theta}{a}\right.\vspace{0.2cm}\\ &\left.+\left(a^{2}-3\right)a^{2}h-3\left(a^{4}-4a^{2}+2\right)\rho^{2}+3\rho^{2}\cos\frac{2\sqrt{2}g(\theta+s)}{a}\right)+O(\varepsilon^{2})\vspace{0.2cm}\\ =&F_{12}(\theta,\rho,s)+O(\varepsilon^{2}).\end{array}$

Assuming $a\notin\\{0,1,\sqrt{3},\sqrt{5}\\}$, by direct integration we compute the first averaged function $f(\rho,s)=(f_{1}(\rho,s),f_{2}(\rho,s))$, i.e.
(18) $\begin{array}[]{rl}f_{1}(\rho,\,s)=&\dfrac{1}{2\pi}\displaystyle\int_{0}^{2\pi}F_{11}(\theta,\rho,s)d\theta\vspace{0.2cm}\\ =&\dfrac{1}{16\pi a^{2}(a^{2}-5)g}\Bigg{[}\bigg{[}4(a^{3}-3a)^{2}h+(21-57a^{2}+28a^{4}-4a^{6})\rho^{2}\bigg{]}\cos\frac{\sqrt{2}gs}{a}\vspace{0.2cm}\\ &+\left(a^{2}-5\right)\rho^{2}\,\cos\frac{3\,\sqrt{2}\,g\,s}{a}-\ \left(a^{2}-5\right)\rho^{2}\cos\frac{3\,\sqrt{2}\,g(s+2\pi)}{a}\vspace{0.2cm}\\ &+\left(a^{2}-5\right)\big{(}\left(2a^{4}-8a^{2}+3\right)\rho^{2}-2a^{2}\left(a^{2}-3\right)h\big{)}\cos\frac{\sqrt{2}g(s+2\pi)}{a}\vspace{0.2cm}\\ &-\left(a^{2}-3\right)\left(a^{2}+2g-1\right)\left(a^{2}\left(h-\rho^{2}\right)+\rho^{2}\right)\,\cos\frac{\sqrt{2}\,(g\,s+2\pi(g-2))}{a}\vspace{0.2cm}\\ &-\big{(}\left(a^{2}-3\right)\left(a^{2}-2g-1\right)\left(a^{2}\left(h-\rho^{2}\right)+\rho^{2}\right)\big{)}\cos\frac{\sqrt{2}\,(g\,s+2\pi(g+2))}{a}\Bigg{]},\vspace{0.2cm}\end{array}$

$\begin{array}[]{rl}f_{2}(\rho,\,s)=&\dfrac{1}{2\pi}\displaystyle\int_{0}^{2\pi}F_{12}(\theta,\rho,s)d\theta\vspace{0.2cm}\\ =&-\dfrac{1}{16\sqrt{2}\pi a(a+1)(a-1)(a^{2}-5)\rho}\vspace{0.2cm}\\ &\Bigg{[}\bigg{(}4(a^{3}-3a)^{2}h+3(21-57a^{2}+28a^{4}-4a^{6})\rho^{2}\bigg{)}\sin\frac{\sqrt{2}gs}{a}\vspace{0.2cm}\\ &+\left(a^{2}-5\right)\rho^{2}\,\sin\frac{3\,\sqrt{2}\,g\,s}{a}-\left(a^{2}-5\right)\rho^{2}\sin\frac{3\,\sqrt{2}\,g(s+2\pi)}{a}\vspace{0.2cm}\\ &+\ \left(a^{2}-5\right)\big{(}3\left(2a^{4}-8a^{2}+3\right)\rho^{2}-2a^{2}\left(a^{2}-3\right)h\big{)}\sin\frac{\sqrt{2}g(s+2\pi)}{a}\vspace{0.2cm}\\ &-\ \left(a^{2}-3\right)\left(a^{2}+2g-1\right)\left(a^{2}\left(h-3\rho^{2}\right)+3\rho^{2}\right)\,\sin\frac{\sqrt{2}\,(g\,s+2\pi(g-2))}{a}\vspace{0.2cm}\\ &-\ \big{(}(a^{2}-3)(a^{2}-2g-1)(a^{2}(h-3\rho^{2})+3\rho^{2})\big{)}\sin\frac{\sqrt{2}(g\,s+2\pi(g+2))}{a}\Bigg{]}.\end{array}$

We are interested in the zeros $(\tilde{\rho},\tilde{s})=(\tilde{\rho}(a,\,h),\tilde{s}(a,\,h))$ of the function $f(\rho,\,s)$. From the averaging theory described in section 3 such solutions must satisfy the requirements:

(19) $\tilde{\rho}>0,\quad 0\leq\tilde{s}<2\pi,\quad\tilde{r}=\sqrt{a^{2}(h-\tilde{\rho}^{2})+\tilde{\rho}^{2}}>0,\quad\dfrac{\partial(f_{1},\,f_{2})}{\partial(\rho,\,s)}|_{\rho=\tilde{\rho},s=\tilde{s}}\neq 0.$

Since the variable $\rho^{2}$ appears linearly in (18), we solve $f_{1}(\rho,s)=0$ for $\rho^{2}$ and then substitute its expression into the equation $f_{2}(\rho,s)=0$. In this way we arrive at a quadratic equation in the variable $\xi^{2}$, where $\xi=\cos\left(\sqrt{2}\,g\,s/a\right)$. Thus we have the equation

(20) $h\,(a^{2}-3)\,\big{[}\,4\,P_{1}^{2}\,\xi^{2}(\xi^{2}-1)\ +\ {(\,P_{2}\,+\,P_{3}\,\xi^{2}\,)}^{2}\,\big{]}\ =\ 0,$

where the constants $P_{1},P_{2},P_{3}$ depend on $a$ but not on the parameter $h$. These constants are given in the Appendix. From (20) we must compute the solutions $s=\tilde{s}$. If the solutions $\xi$ of the biquadratic equation (20) are complex, or real but outside the interval $[-1,1]$, then equation (20) has no solutions in the variable $s$. Assume that equation (20) has some solutions $\tilde{s}_{i}$ in the variable $s$. Then, substituting each of these solutions into equation (18), we obtain $\tilde{\rho}_{i}$; substituting $\tilde{\rho}_{i}$ into equation (19) we obtain $\tilde{r}_{i}$. (A numerical cross-check of this step is sketched below.)
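The zeros of the averaged function can also be located numerically, which is a useful sanity check of the lengthy analytic expressions. The sketch below is our illustration, not part of the original computation: it evaluates $f_{1}$ and $f_{2}$ by quadrature of the $\varepsilon$-coefficients $F_{11}$, $F_{12}$ of (17) (the overall factor $\varepsilon$ is dropped, since it does not affect the zeros), refines a zero with a standard root finder, and tests the simple-zero condition of (19). The values $a=\sqrt{29}/2$ and $h=1$, and the starting guess, are assumptions matching the case of subsection 5.1 below.

```python
# Sketch: zeros of the first averaged function f = (f1, f2) of (17)-(18),
# computed by numerical quadrature; a, h and the guess follow Section 5.1.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

a, h = np.sqrt(29.0) / 2.0, 1.0
g = np.sqrt(a**2 - 1.0)          # g = 5/2 for this a
c = np.sqrt(2.0)


def F(theta, rho, s):
    """epsilon-coefficients F11, F12 of the right-hand side of (17)."""
    F11 = np.sin(c * g * (theta + s) / a) * (
        (a**2 - 3.0) * (a**2 * (h - rho**2) + rho**2)
        * np.cos(c * theta / a)**2
        + 3.0 * rho**2 * np.cos(c * g * (theta + s) / a)**2) / (c * a**3)
    F12 = np.cos(c * g * (theta + s) / a) * (
        (a**2 - 3.0) * (a**2 * (h - 3.0 * rho**2) + 3.0 * rho**2)
        * np.cos(2.0 * c * theta / a)
        + (a**2 - 3.0) * a**2 * h
        - 3.0 * (a**4 - 4.0 * a**2 + 2.0) * rho**2
        + 3.0 * rho**2 * np.cos(2.0 * c * g * (theta + s) / a)
    ) / (4.0 * a**2 * g * rho)
    return F11, F12


def f(z):
    rho, s = z
    f1 = quad(lambda th: F(th, rho, s)[0], 0.0, 2.0 * np.pi, limit=200)[0]
    f2 = quad(lambda th: F(th, rho, s)[1], 0.0, 2.0 * np.pi, limit=200)[0]
    return [f1 / (2.0 * np.pi), f2 / (2.0 * np.pi)]


def jac(z, eps=1e-6):
    """Jacobian of f by central differences, for the condition (19)."""
    J = np.zeros((2, 2))
    for j in range(2):
        dz = np.zeros(2)
        dz[j] = eps
        J[:, j] = (np.array(f(z + dz)) - np.array(f(z - dz))) / (2.0 * eps)
    return J


zstar = fsolve(f, np.array([0.6, 1.7]))   # guess near the zero of Sec. 5.1
print(zstar, np.linalg.det(jac(zstar)))   # nonzero determinant: simple zero
```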
So, from the averaging theory of section 3, we have the initial conditions for a periodic orbit of the differential system (17) if the determinant of the matrix

$\left.\frac{\partial(f_{1}(\rho,s),f_{2}(\rho,s))}{\partial(\rho,s)}\right|_{\rho=\tilde{\rho}_{i},s=\tilde{s}_{i}}\neq 0.$

Going back through the polar coordinates (15), the scaling (13), the change of coordinates (10) and the translation (8), we obtain for $\varepsilon>0$ sufficiently small the initial conditions $(\,x(0,\varepsilon),y(0,\varepsilon),p_{x}(0,\varepsilon),p_{y}(0,\varepsilon))$ of a periodic orbit of the Hamiltonian system (3) in an energy level $H=\tilde{h}>0$ sufficiently small, because $\tilde{h}=h\varepsilon^{2}+O(\varepsilon^{3})$. More precisely, the initial conditions of such a periodic orbit are

(21) $\left(\varepsilon\tilde{r}_{i},\varepsilon\tilde{\rho}_{i}\cos\frac{\sqrt{2}g}{a}\tilde{s}_{i}+\sqrt{a^{2}-1},0,\varepsilon\dfrac{\sqrt{2}g}{a}\tilde{\rho}_{i}\sin\frac{\sqrt{2}g}{a}\tilde{s}_{i}\right),$

with an error of $O(\varepsilon^{2})$. Theorem 1 is proved.

## 5\. Numerical examples

In this section we present some examples of periodic orbits whose initial conditions are calculated by the averaging theory discussed in the previous section. The equilibrium point from which they bifurcate is ${\bf x}_{0}$, and their frequencies in the $x$ and $y$ directions are $\omega_{x}=\sqrt{2}/a$ and $\omega_{y}=\sqrt{2}g/a$, respectively. The frequency ratio of the periodic motion in the $(x,p_{x})$ and $(y,p_{y})$ planes, $\omega_{x}/\omega_{y}=1/g=1/\sqrt{a^{2}-1}$, is determined by the parameter $a$. Although we have found periodic solutions on both planes for $h>0$ sufficiently small, the motion in the phase space can be either periodic or quasi-periodic, depending on whether the frequency ratio $\omega_{x}/\omega_{y}=1/\sqrt{a^{2}-1}$ is a rational or an irrational number. If the commensurability condition $\omega_{x}/\omega_{y}=l/m\ \in\mathbb{Q}$ holds, with $l$, $m\,\in\mathbb{Z}$ relatively prime, the orbit will be periodic for $h>0$ sufficiently small, with $l$ oscillations in the $x$ direction and $m$ oscillations in the $y$ direction. In order to guarantee the periodicity of the solutions constructed from the averaging method, $a$ must be selected such that

(22) $a\ =\ \sqrt{\left(\frac{m}{l}\right)^{2}\,+\,1}\,,\qquad l\,,m\in\mathbb{Z}\,.$

### 5.1. Case $a=\sqrt{29}/2$

For any value of $h>0$ sufficiently small, we obtain one zero of the function (18) such that $\dfrac{\partial(f_{1},\,f_{2})}{\partial(\rho,\,s)}|_{\rho=\tilde{\rho},s=\tilde{s}}\,\neq\,0$, i.e.

$(\tilde{\rho}_{1},\tilde{s}_{1})=(0.6270241608257199..,1.7084698955985402..).$

(a) $x$ vs $y$ (b) $x$ vs $y$

Figure 7. Case $a=\frac{\sqrt{29}}{2}$. (a) Periodic orbit and its initial condition (23), marked by a black dot, obtained by the averaging theory. (b) The four periodic orbits bifurcating from the equilibrium point $(0,\sqrt{a^{2}-1},0,0)$ and their initial conditions marked by black dots.

By taking $\varepsilon=10^{-2}$ the above values provide the following initial conditions (21):

(23) $\begin{split}&(x(0,\varepsilon),y(0,\varepsilon),p_{x}(0,\varepsilon),p_{y}(0,\varepsilon))=\\ &\quad(0.02189236027905628...,2.496093823665088...,0,-0.00644040535596368...),\end{split}$

which correspond to the periodic orbit displayed in Fig. 7. In this case $\omega_{x}/\omega_{y}=1/\sqrt{a^{2}-1}=2/5$, and the periodic orbit has 2 oscillations in the $x$ direction and 5 oscillations in the $y$ direction.
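A direct way to validate such numbers (again a hedged sketch of ours, not taken from the original computation) is to integrate (3) from the initial condition (23) over the leading-order period $T=2\sqrt{2}\,\pi a$, which corresponds to two $x$-oscillations and five $y$-oscillations, and to measure the return distance to the initial point; for $\varepsilon=10^{-2}$ this distance is expected to be small, of order $\varepsilon^{2}$.

```python
# Sketch: approximate-closure test for the Section 5.1 orbit, a = sqrt(29)/2.
import numpy as np
from scipy.integrate import solve_ivp

a = np.sqrt(29.0) / 2.0


def rhs(t, s):
    """Right-hand side of system (3)."""
    x, y, px, py = s
    r1 = np.hypot(x + 1.0, y)
    r2 = np.hypot(x - 1.0, y)
    return [px, py,
            -(2.0 * x - a * ((x + 1.0) / r1 + (x - 1.0) / r2)),
            -(2.0 * y - a * y * (1.0 / r1 + 1.0 / r2))]


s0 = np.array([0.02189236027905628, 2.496093823665088,
               0.0, -0.00644040535596368])       # initial condition (23)
T = 2.0 * np.sqrt(2.0) * np.pi * a               # leading-order 2:5 period
sol = solve_ivp(rhs, (0.0, T), s0, rtol=1e-12, atol=1e-12)
print("return distance:", np.linalg.norm(sol.y[:, -1] - s0))
```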
The three periodic orbits related to (23) by reflection near the equilibrium point have the initial conditions

$\begin{split}&(-0.021892360279056278...,2.496093823665088...,0,-0.00644040535596368...)\,,\\ &(0.021892360279056278...,2.503906176334912...,0,0.00644040535596368...)\,,\\ &(-0.021892360279056278...,2.503906176334912...,0,0.00644040535596368...)\,.\end{split}$

### 5.2. Case $a=\sqrt{5}$

This case was excluded from the proof of Theorem 1, and we address this special value here. The averaged function (18) reduces to

(24) $\begin{array}[]{rl}f_{1}(\rho,\,s)=&\dfrac{1}{2\pi}\displaystyle\int_{0}^{2\pi}F_{11}(\theta,\rho,s)d\theta\vspace{0.2cm}\\ =&\frac{1}{800\pi}\Bigg{[}5\left(25h-17\rho^{2}\right)\cos\left(2\sqrt{\frac{2}{5}}s\right)\vspace{0.2cm}\\ &+5\left(4\rho^{2}-5h\right)\cos\left(2\sqrt{\frac{2}{5}}(s+4\pi)\right)+40\pi\sqrt{10}h\sin\left(2\sqrt{\frac{2}{5}}s\right)\vspace{0.2cm}\\ &-100h\cos\left(2\sqrt{\frac{2}{5}}(s+2\pi)\right)-32\sqrt{10}\pi\rho^{2}\sin\left(2\sqrt{\frac{2}{5}}s\right)\vspace{0.2cm}\\ &+5\rho^{2}\cos\left(6\sqrt{\frac{2}{5}}s\right)+65\rho^{2}\cos\left(2\sqrt{\frac{2}{5}}(s+2\pi)\right)\vspace{0.2cm}\\ &-5\rho^{2}\cos\left(6\sqrt{\frac{2}{5}}(s+2\pi)\right)\,\Bigg{]},\vspace{0.2cm}\end{array}$

$\begin{array}[]{rl}f_{2}(\rho,\,s)=&\dfrac{1}{2\pi}\displaystyle\int_{0}^{2\pi}F_{12}(\theta,\rho,s)d\theta\vspace{0.2cm}\\ =&\dfrac{1}{640\,\pi\,\rho}\Bigg{[}16\pi\left(5h-12\rho^{2}\right)\cos\left(2\sqrt{\frac{2}{5}}s\right)\vspace{0.2cm}\\ &+\sqrt{10}\,\Bigg{(}\left(51\rho^{2}-25h\right)\sin\left(2\sqrt{\frac{2}{5}}s\right)+\left(5h-12\rho^{2}\right)\sin\left(2\sqrt{\frac{2}{5}}(s+4\pi)\right)\vspace{0.2cm}\\ &+2\sin\left(2\sqrt{\frac{2}{5}}(s+2\pi)\right)\left(10h-19\rho^{2}+\rho^{2}\cos\left(4\sqrt{\frac{2}{5}}(s+2\pi)\right)\right)\vspace{0.2cm}\\ &-\ \rho^{2}\,\sin\left(6\sqrt{\frac{2}{5}}s\right)\Bigg{)}\Bigg{]}.\end{array}$

Eliminating the variable $\rho^{2}$ in the system of equations $f_{1}=0$ and $f_{2}=0$, we arrive at the equation

$\begin{array}[]{l}400\sin\left(4\sqrt{\frac{2}{5}}s\right)\ +\ 24\left(5\sin\left(4\sqrt{\frac{2}{5}}\pi\right)-4\sqrt{10}\pi\right)\sin^{2}\left(2\sqrt{\frac{2}{5}}\pi\right)\vspace{0.2cm}\\ +8\pi\Bigg{[}\sqrt{10}\left(29\cos\left(4\sqrt{\frac{2}{5}}(s+\pi)\right)-38\cos\left(4\sqrt{\frac{2}{5}}s\right)+8\cos\left(4\sqrt{\frac{2}{5}}(s+2\pi)\right)\right.\vspace{0.2cm}\\ \left.+\cos\left(4\sqrt{\frac{2}{5}}(s+3\pi)\right)\right)-64\pi\sin\left(4\sqrt{\frac{2}{5}}s\right)\Bigg{]}+5\Bigg{[}\sin\left(4\sqrt{\frac{2}{5}}(s-2\pi)\right)\vspace{0.2cm}\\ +4\sin\left(4\sqrt{\frac{2}{5}}(s-\pi)\right)-134\sin\left(4\sqrt{\frac{2}{5}}(s+\pi)\right)+11\,\sin\left(4\sqrt{\frac{2}{5}}(s+2\pi)\right)\vspace{0.2cm}\\ +34\sin\left(4\sqrt{\frac{2}{5}}(s+3\pi)\right)+4\sin\left(4\sqrt{\frac{2}{5}}(s+4\pi)\right)\Bigg{]}=0.\end{array}$

(a) $x$ vs $p_{x}$ (b) $p_{x}$ vs $p_{y}$ (c) $x$ vs $y$

Figure 8. Case $a=\sqrt{5}$. (a) and (b) Periodic orbit and its initial conditions (25), obtained using the averaging theory, on the $(x,p_{x})$ and $(p_{x},p_{y})$ planes, marked by black dots. (c) The two periodic orbits that bifurcate from the equilibrium point $(0,\sqrt{a^{2}-1},0,0)$. The initial conditions are marked by black dots.

For $a=\sqrt{5}$ and for any value of $h>0$ we only have the zero

$(\tilde{\rho},\tilde{s})=(0.66138231965282354...,2.21971887978497651...)$

of the averaged function (24).
By taking $\varepsilon=10^{-2}$, (21) provides the following initial conditions

(25) $\begin{split}&(x(0,\varepsilon),y(0,\varepsilon),p_{x}(0,\varepsilon),p_{y}(0,\varepsilon))=\\ &\quad(0.01802857096112335...,1.9937513313786317...,0,-0.002741327484343618...),\end{split}$

for the periodic orbit displayed in Fig. 8. In this case $\omega_{x}/\omega_{y}=1/\sqrt{a^{2}-1}=1/2$, and the periodic orbit exhibits 1 oscillation in the $x$ direction for every 2 oscillations in the $y$ direction. In this special case, only two periodic orbits bifurcate from the equilibrium point $(0,\sqrt{a^{2}-1},0,0)$, with the second orbit given by

(26) $\begin{split}&(x(0,\varepsilon),y(0,\varepsilon),p_{x}(0,\varepsilon),p_{y}(0,\varepsilon))=\\ &\quad(0.01802857096112335...,2.0062486686213683...,0,0.002741327484343618...).\end{split}$

This is due to the fact that the periodic orbit obtained from averaging, (25), is invariant under the symmetry $\mathcal{S}_{1}$. Of course, with the symmetry $\mathcal{S}_{2}$ we obtain two additional periodic orbits from the averaging theory, bringing the total to four.

### 5.3. Case $a=5/4$

In this case there exist two zeros of the averaged function (18) such that $\dfrac{\partial(f_{1},\,f_{2})}{\partial(\rho,\,s)}|_{\rho=\tilde{\rho},s=\tilde{s}}\,\neq\,0$. They are

$\begin{array}[]{l}(\tilde{\rho}_{1},\tilde{s}_{1})=(0.78394053965364..,2.85274914632239..),\\ (\tilde{\rho}_{2},\tilde{s}_{2})=(1.13713501897674..,2.22877470926716..),\end{array}$

which provide the initial conditions

$\begin{array}[]{l}(x_{1}(0,\varepsilon),y_{1}(0,\varepsilon),p_{x_{1}}(0,\varepsilon),p_{y_{1}}(0,\varepsilon))=\\ (0.011030904051965..,0.744111227990272..,0,-0.004390970468570..),\\ (x_{2}(0,\varepsilon),y_{2}(0,\varepsilon),p_{x_{2}}(0,\varepsilon),p_{y_{2}}(0,\varepsilon))=\\ (0.009138625285550..,0.7464188333578204..,0,-0.009157928392703...),\end{array}$

for two periodic orbits, respectively. These periodic orbits have 4 oscillations in the $x$ direction and 3 in the $y$ direction. They are displayed in Fig. 9.

(a) $x$ vs $y$ (b) $x$ vs $y$

Figure 9. Case $a=\frac{5}{4}=1.25$: two periodic orbits and their associated initial conditions (5.3) (marked by black dots), obtained using the averaging theory. The frequency ratio is $\omega_{1}/\omega_{2}=4/3$.

As we have mentioned, these orbits provide symmetric periodic orbits by applying $\mathcal{S}_{1}$ to the previous initial conditions. Of course, all the periodic orbits obtained in this section have Jacobian determinant (5) different from 0.

## 6\. On the non-integrability

We consider the autonomous differential system

(27) $\dot{x}=f(x),$

where $f\colon U\to\mathbb{R}^{n}$ is $C^{2}$, $U$ is an open subset of $\mathbb{R}^{n}$ and the dot denotes the derivative with respect to the time $t$. Let $x(t,x_{0})$ be a periodic solution of the differential system (27) of period $T$ such that $x(0,x_{0})=x_{0}$. The variational equation associated to the $T$-periodic solution $x(t,x_{0})$ is

(28) $\dot{M}=\bigg{(}\frac{\partial f(x)}{\partial x}\Big{|}_{x=x(t,x_{0})}\bigg{)}M,$

where $M$ is an $n\times n$ matrix. Of course $\partial f(x)/\partial x$ denotes the Jacobian matrix of $f$ with respect to $x$. The _monodromy matrix_ associated to the $T$-periodic solution $x(t,x_{0})$ is the solution $M(T,x_{0})$ of (28) satisfying that $M(0,x_{0})$ is the identity matrix. The eigenvalues of the monodromy matrix associated to the periodic solution $x(t,x_{0})$ are called the _multipliers_ of the periodic orbit.
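Numerically, the monodromy matrix can be obtained by integrating the variational equation (28) along the orbit. The sketch below is our illustration (with a finite-difference Jacobian of the vector field, an assumed shortcut rather than the analytic derivative): it returns $M(T,x_{0})$, whose eigenvalues are the multipliers entering the test used in Theorem 3.

```python
# Sketch: monodromy matrix M(T, x0) of a T-periodic solution of (27),
# via the variational equation (28); rhs(t, x) is the vector field f.
import numpy as np
from scipy.integrate import solve_ivp


def dfdx(rhs, x, eps=1e-7):
    """Finite-difference Jacobian of the vector field at the point x."""
    n = len(x)
    J = np.zeros((n, n))
    f0 = np.array(rhs(0.0, x))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (np.array(rhs(0.0, x + dx)) - f0) / eps
    return J


def monodromy(rhs, x0, T):
    n = len(x0)

    def aug(t, z):               # orbit and variational flow, stacked
        x, M = z[:n], z[n:].reshape(n, n)
        return np.concatenate([rhs(t, x), (dfdx(rhs, x) @ M).ravel()])

    z0 = np.concatenate([np.asarray(x0, float), np.eye(n).ravel()])
    zT = solve_ivp(aug, (0.0, T), z0, rtol=1e-11, atol=1e-11).y[:, -1]
    return zT[n:].reshape(n, n)


# usage, for one of the periodic orbits (x0, T) of Section 5:
#   M = monodromy(rhs, x0, T)
#   multipliers = np.linalg.eigvals(M); detM = np.linalg.det(M)
```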
We recall an important theorem due to Poincaré [23] (see also [22, p. 36] and [24]) on the integrability of a Hamiltonian system with two degrees of freedom.

Poincaré Theorem. If a Hamiltonian system with two degrees of freedom and Hamiltonian $H$ is Liouville–Arnold integrable, and $C$ is a second first integral such that the differentials of $H$ and $C$ are linearly independent at each point of a periodic orbit of the system, then all the multipliers of this periodic orbit are equal to $1$.

###### Proof of Theorem 3.

Under the hypotheses of Theorem 1 and Corollary 2, assume first that the determinant of the fundamental matrix associated to some of the periodic orbits of Corollary 2 is different from $1$. Then some multiplier of that orbit is distinct from $1$, and by the Poincaré Theorem the Hamiltonian system is not Liouville–Arnold integrable. If, on the contrary, all the determinants of the fundamental matrices associated to the periodic orbits of Corollary 2 are $1$, this obstruction does not apply, and the Hamiltonian system can be Liouville–Arnold integrable. ∎

## 7\. Conclusions

We have used the averaging theory for studying the periodic orbits of the Hamiltonian system modeling the two-center problem with harmonic-like interactions in some of its fixed Hamiltonian levels, see Theorem 1 and Corollary 2. This tool can be applied to Hamiltonian systems with an arbitrary number of degrees of freedom. Using a result due to Poincaré we have analyzed the non-integrability in the sense of Liouville–Arnold of the mentioned Hamiltonian systems, see Theorem 3. Again, this tool can be applied to Hamiltonian systems with an arbitrary number of degrees of freedom. We remark that these two tools can be applied only when analytic information on the periodic orbits of the Hamiltonian system is available; here this is the case thanks to the averaging theory for computing periodic orbits.

### Credit authorship contribution statement

All the authors have contributed equally to this paper.

### Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

### Data availability

No data was used for the research described in the article.
## 8\. Appendix

Here we present the constants $P_{1},P_{2}$ and $P_{3}$ appearing in (20):

$P_{1}\ =\ a^{8}\left(8c_{g-2}+16c_{g}-6c_{2g}+8c_{g+2}-c_{2g-4}-4\left(c_{2g-2}+c_{2g+2}+4\right)-c_{2g+4}\right)+2a^{6}\left(8(g-4)c_{g-2}-96c_{g}+38c_{2g}-8(g+4)c_{g+2}-2gc_{2g-4}+c_{2g-4}-4gc_{2g-2}+20c_{2g-2}+4gc_{2g+2}+20c_{2g+2}+(2g+1)c_{2g+4}+80\right)-4a^{4}\left((28g-43)c_{g-2}-195c_{g}+82c_{2g}+c_{3g}-(28g+43)c_{g+2}-5gc_{2g-4}-2c_{2g-4}-18gc_{2g-2}+31c_{2g-2}+18gc_{2g+2}+31c_{2g+2}+(5g-2)c_{2g+4}+140\right)+2a^{2}\left(4(29g-21)c_{g-2}-584c_{g}+258c_{2g}+16c_{3g}-4(29g+21)c_{g+2}-14gc_{2g-4}-9c_{2g-4}-90gc_{2g-2}+64c_{2g-2}+90gc_{2g+2}+64c_{2g+2}+(14g-9)c_{2g+4}+368\right)-104gc_{g-2}+52c_{g-2}+340c_{g}-130c_{2g}-60c_{3g}+104gc_{g+2}+52c_{g+2}+12gc_{2g-4}+9c_{2g-4}+100gc_{2g-2}-40c_{2g-2}-100gc_{2g+2}-40c_{2g+2}-12\,g\,c_{2g+4}\ +\ 9\,c_{2g+4}\ -\ 192\ .$

$P_{2}\ =\ -16a^{8}b_{g}+6a^{8}b_{2g}-8a^{8}b_{g+2}+a^{8}b_{2g-4}+4a^{8}b_{2g-2}+4a^{8}b_{2g+2}+192a^{6}b_{g}-76a^{6}b_{2g}+16a^{6}gb_{g+2}+64a^{6}b_{g+2}+4a^{6}gb_{2g-4}-2a^{6}b_{2g-4}+8a^{6}gb_{2g-2}-40a^{6}b_{2g-2}-8a^{6}gb_{2g+2}-40a^{6}b_{2g+2}-796a^{4}b_{g}+328a^{4}b_{2g}+4a^{4}b_{3g}-112a^{4}gb_{g+2}-168a^{4}b_{g+2}-20a^{4}gb_{2g-4}-8a^{4}b_{2g-4}-72a^{4}gb_{2g-2}+124a^{4}b_{2g-2}+72a^{4}gb_{2g+2}+124a^{4}b_{2g+2}+1280a^{2}b_{g}-516a^{2}b_{2g}-32a^{2}b_{3g}+224a^{2}gb_{g+2}+144a^{2}b_{g+2}+28a^{2}gb_{2g-4}+18a^{2}b_{2g-4}+180a^{2}gb_{2g-2}-128a^{2}b_{2g-2}-180a^{2}gb_{2g+2}-128a^{2}b_{2g+2}+24\left(a^{2}-5\right)b_{2}g+\left(a^{2}-3\right)\left(a^{2}-1\right)^{2}\left(a^{2}-4g+3\right)b_{2g+4}-8\left(a^{6}-7a^{4}+14a^{2}-4\right)\left(a^{2}+2g-1\right)b_{g-2}-500b_{g}+130b_{2g}+60b_{3g}-64gb_{g+2}-32b_{g+2}-12gb_{2g-4}-9b_{2g-4}-100gb_{2g-2}+40b_{2g-2}+\ 100\,g\,b_{2g+2}\ +\ 40\,b_{2g+2}\ .$

$P_{3}\ =\ -6a^{8}b_{2g}+8a^{8}b_{g+2}-a^{8}b_{2g-4}-4a^{8}b_{2g-2}-4a^{8}b_{2g+2}+76a^{6}b_{2g}-16a^{6}gb_{g+2}-64a^{6}b_{g+2}-4a^{6}gb_{2g-4}+2a^{6}b_{2g-4}-8a^{6}gb_{2g-2}+40a^{6}b_{2g-2}+8a^{6}gb_{2g+2}+40a^{6}b_{2g+2}-328a^{4}b_{2g}-4a^{4}b_{3g}+112a^{4}gb_{g+2}+174a^{4}b_{g+2}+20a^{4}gb_{2g-4}+8a^{4}b_{2g-4}+72a^{4}gb_{2g-2}-124a^{4}b_{2g-2}-72a^{4}gb_{2g+2}-124a^{4}b_{2g+2}+516a^{2}b_{2g}+32a^{2}b_{3g}-236a^{2}gb_{g+2}-180a^{2}b_{g+2}-28a^{2}gb_{2g-4}-18a^{2}b_{2g-4}-180a^{2}gb_{2g-2}+128a^{2}b_{2g-2}+180a^{2}gb_{2g+2}+128a^{2}b_{2g+2}-\left(a^{2}-3\right)\left(a^{2}-1\right)^{2}\left(a^{2}-4g+3\right)b_{2g+4}+8\left(a^{2}-5\right)\left(2a^{2}\left(a^{4}-7a^{2}+14\right)-11\right)b_{g}+2\left(4a^{6}-28a^{4}+59a^{2}-31\right)\left(a^{2}+2g-1\right)b_{g-2}-130b_{2g}-60b_{3g}+124gb_{g+2}+62b_{g+2}+12gb_{2g-4}+9b_{2g-4}+100gb_{2g-2}-40b_{2g-2}-100gb_{2g+2}-40b_{2g+2}\ ,$

where

$c_{x}\ \equiv\ \cos\left(\dfrac{2\,\sqrt{2}\,x\,\pi}{a}\right)\qquad\text{and}\qquad b_{x}\ \equiv\ \sin\left(\dfrac{2\,\sqrt{2}\,x\,\pi}{a}\right).$

## Acknowledgments

A.M. Escobar Ruiz gratefully acknowledges the support from Consejo Nacional de Humanidades, Ciencias y Tecnologías (CONAHCyT) of Mexico under Grant CF-2023-I-1496 and from UAM research grant 2024-CPIR-0. The third author is partially supported by the Agencia Estatal de Investigación of Spain grant PID2022-136613NB-100, AGAUR (Generalitat de Catalunya) grant 2021SGR00113, and by the Reial Acadèmia de Ciències i Arts de Barcelona.

## References

* [1] L. Landau and E. Lifshitz, Mechanics, 3rd ed., Pergamon Press, Vol. 1, 1976.

* [2] H. Goldstein, C. Poole and J. Safko, Classical Mechanics, third edition, Addison-Wesley, 2002.

* [3] V. Arnold, Mathematical Methods of Classical Mechanics, second edition, Springer, 1989.
* [4] V. Arnold, V. Kozlov and A. Neishtadt, Mathematical Aspects of Classical and Celestial Mechanics, third edition, Springer, 2006. * [5] Contopoulos G., Order and Chaos in Dynamical Astronomy, Springer Verlag, 2002. * [6] M. Hénon and C. Heiles, The applicability of the third integral of motion: Some numerical experiments, Astron, J. 69 (1964), 73–79. * [7] MacKay, R.S. and Meiss, J.D., Hamiltonian Dynamical Systems, Taylor & Francis, 1987. * [8] P. Fatou, Sur le mouvement d’un système soumis à des forces à courte période, Bull. Soc. Math. France 56 (1928), 98–139. * [9] N.N. Bogoliubov and N. Krylov, The application of methods of nonlinear mechanics in the theory of stationary oscillations, Publ. 8 of the Ukrainian Acad. Sci. Kiev, 1934. * [10] N.N. Bogoliubov, Mathematical On some statistical methods in mathematical physics, Izv. vo Akad. Nauk Ukr. SSR, Kiev, 1945. * [11] J.A. Sanders, F. Verhulst and J. Murdock, Averaging Methods in Nonlinear Dynamical Systems, Applied Mathematical Sciences, Second Edition , Springer New York, New York, 2007. * [12] F. Verhulst, Nonlinear Differential Equations and Dynamical Systems, Universitext, Springer, 1991. * [13] A. Buică, J.P. Françoise and J. Llibre, Periodic solutions of nonlinear periodic differential systems with a small parameter, Commun. Pure Appl. Anal. 6(1) (2007), 103–111. * [14] J. Llibre and L. Jiménez-Lara, Periodic orbits and non-integrability of Hénon–Heiles systems, J. Phys. A: Math. Theor. 44 (2011), 205103- * [15] O. Saporta Katz and E. Efrati, Self-driven fractional rotational diffusion of the harmonic three-mass system, Phys. Rev. Lett. 122 (2019), 024102. * [16] A.M. Escobar-Ruiz, M.A. Quiroz-Juarez and J.L. Del Rio-Correa, Classical harmonic three-body system: an experimental electronic realization, Sci Rep 12 (2022), 13346. * [17] O. Saporta Katz and E. Efrati, Regular regimes of the harmonic three-mass system, Phys. Rev. E 101 (2020), 032211. * [18] A.V. Turbiner, W. Miller and M.A. Escobar-Ruiz, Three-body closed chain of interactive (an)harmonic oscillators and the algebra, J. Phys. A 53 (2020), 055302. * [19] H. Olivares-Pilón, A.M. Escobar-Ruiz and F. Montoya Molina, Three-body harmonic molecule, J. Phys. B: At. Mol. Opt. Phys. 56 (2023), 075002. * [20] A. Buica and J. Llibre, Averaging methods for finding periodic orbits via Brouwer degree, Bull. Sci. Math. 128 (2004), 7–22. * [21] N.G. Lloyd, Degree Theory, Cambridge University Press, Cambridge, 1978. * [22] V.V. Kozlov, Integrability and non-integrability in Hamiltonian mechanics, Russian Math. Surveys 38 (1983), 1–76. * [23] H. Poincaré, Les méthodes nouvelles de la mécanique céleste, Vol. I, Gauthier-Villars, Paris, 1899. * [24] J. Llibre and C. Valls, On the $C^{1}$ non-integrability of differential systems via periodic orbits, European J. Appl. Math. 22 (2011), 381–391.
# An Efficient Simulation of Quantum Secret Sharing

Kartick Sutradhar · Hari Om

Indian Institute of Technology (ISM) Dhanbad

###### Abstract
In quantum cryptography, quantum secret sharing $(QSS)$ is a fundamental primitive. $QSS$ can be used to build complex and secure multiparty quantum protocols. Existing $QSS$ protocols are either $(n,n)$-threshold $2$-level schemes or $(t,n)$-threshold $d$-level schemes with a trusted player, where $n$ denotes the number of players and $t$ denotes the threshold number of players. Here, we propose a secure $d$-level $QSS$ protocol for sharing a secret, together with an efficient simulation. This protocol is more secure, flexible, and practical than the existing $QSS$ protocols, i.e., the $(n,n)$-threshold $2$-level schemes and the $(t,n)$-threshold $d$-level schemes with a trusted player. Further, it does not disclose any information about the secret to the players. Its security analysis shows that the intercept-resend, intercept, entangle-measure, forgery, collision, and collusion attacks are not possible in this protocol.

###### Keywords:
Secure Computation · Quantum Cryptography · Information Security · Quantum Secret Sharing

## 1 Introduction

In secret sharing $(SS)$, a dealer shares a secret with $n$ players, and when the secret needs to be reconstructed, a threshold number of players can do so collaboratively. Quantum secret sharing [hillery1999quantum; bao2009threshold; yang2013secret; Gang4; lu2018verifiable; lau2013quantum; Hao; mashhadi2016fairly; dehkordi2019proactive; mashhadi2017provably; mashhadi2016share; mashhadi2016analysis; mashhadi2020toward; mashhadi2020csa; mashhadi2017new; karimifard2016semiquantum; charoghchi2021three; mashhadi2020improvement; shi2010quantum; run2010efficient; shi2011multi; gyongyosi2019quantum; sutradhar2020efficient] is a fundamental primitive for sharing a secret in quantum cryptography and may be considered an extension of classical secret sharing. The $QSS$ protocol can be used to build complex multiparty quantum computing protocols that are secure. In the $(n,n)$-threshold $QSS$, a dealer shares a secret with $n$ players by dividing it into $n$ pieces, known as shares, which are distributed among the $n$ players, each of whom knows only his own share. The secret can be reconstructed by all $n$ players working together. Similarly, in the $(t,n)$-threshold $QSS$, a dealer shares a secret with $n$ players by dividing it into $n$ shares and distributing them to the $n$ players; any $t$ players can then work together to reconstruct the secret. Because it provides threshold protection, $QSS$ is commonly used in quantum threshold cryptography and secure quantum multiparty computation. Here, we propose a secure $d$-level $QSS$ protocol for sharing a secret, in which $t$ players can reconstruct the secret without a trusted player. In our protocol, each player knows only his own share; even the reconstructor knows only his own share. In this protocol, we use some basic operations, i.e., protocol-I of Shi et al.
[shi2016secure], the $CNOT$ gate [nielsen2002quantum], secure communication [Gang1; Gang3; shi2017quantum; shi2016efficient; sun2020toward; shi2018efficient; peng2018novel; zhang2018economic; luo2018novel; xu2017nearest; sutradhar2020hybrid; sutradhar2020generalized; sutradhar2021efficient], entangled states [Gang2; shi2016comment; shi2016quantum; dan2016efficient; shi2016data; shi2015quantum; shi2015comments; shi2013multi; shi2012multiparty; shi2012novel; shi2011efficient; run2011novel; shi2011multi; shi2011asymmetric], the Quantum Fourier Transform $(QFT)$ [Nielsen2002], and the Inverse Quantum Fourier Transform $(QFT^{-1})$ [Nielsen2002] to transform the particles. We use a quantum approach in classical secret sharing to combine the benefits of both classical and quantum secret sharing, preventing attacks such as Intercept-Resend (IR), Intercept, Entangle-Measure (EM), Forgery, Collision, and Collusion.

## 2 Related Work

There are numerous $QSS$ protocols for secret sharing in quantum cryptography [Mashhadi2019; mashhadi2012novel; hillery1999quantum; bao2009threshold; mashhadi2012analysis; yang2013secret; Gang4; lu2018verifiable; lau2013quantum; Hao; dehkordi2008new; dehkordi2008efficient; mashhadi2015two; dehkordi2008verifiable; mashhadi2017secure; mashhadi2015computationally; mashhadi2013novel]. In 1999, Hillery et al. proposed the first $QSS$ protocol [hillery1999quantum], based on the Greenberger–Horne–Zeilinger $(GHZ)$ state. In 2009, Li et al. introduced a $QSS$ protocol of secure direct communication [bao2009threshold]; it is a $(t,n)$-threshold scheme, but only $2$-level. In 2013, Yang et al. introduced a $QSS$ protocol based on the $QFT$ [yang2013secret]. It is a $d$-level $(t,n)$-threshold scheme, but it is not secure, because each player broadcasts his measurement results in the last step. Since the measurement results contain information about the secret, an attacker who intercepts them may expose the secret or execute an intercept-resend attack. In 2015, Qin et al. discussed a $QSS$ protocol based on the phase-shift operation [qin2015t], which is a $2$-level $(t,n)$-threshold scheme. The protocols [bao2009threshold] and [qin2015t] are not secure, because the unitary operation of player $P_{e}$ transforms the information received from player $P_{e-1}$, and the transformed information is then transmitted to player $P_{e+1}$; hence, players $P_{e-1}$ and $P_{e+1}$ can collaboratively retrieve the private information of player $P_{e}$. In 2017, Song et al. discussed a $(t,n)$-threshold $d$-level $QSS$ protocol [song2017t] based on some basic operations, i.e., the $d$-level $CNOT$ gate, $QFT$, generalized Pauli operator, and $QFT^{-1}$. In that protocol, Alice (the dealer) selects $Bob_{1}$ as a trusted reconstructor from the set of participants $\mathbb{B}=\\{Bob_{1},Bob_{2},\dots,Bob_{n}\\}$, selects the hash function $SHA1$ [eastlake2001us] to compute the hash value of the secret (which is to be shared), and sends this hash value to the trusted reconstructor $Bob_{1}$. Here, $Bob_{1}$ can perform a collision attack to reveal the secret, so the security of this protocol depends on the trusted reconstructor $Bob_{1}$. The main problem of Song et al.’s protocol is that the reconstructor $Bob_{1}$ cannot recover the original secret, because the $QFT^{-1}$ cannot be summed up over all the states [CommentKao2018]. In other words, the reconstructor $Bob_{1}$ needs the secret information of the other players to reconstruct the original secret. In 2018, Qin et al.
[Qin2018Multidimensional] discussed a $QSS$ protocol which can efficiently share a secret by using the $QFT$ and the Pauli operator, but it is an $(n,n)$-threshold scheme. In our protocol, any $t$ players can reconstruct the secret without a trusted player, and each player knows only his own share, nothing else. Furthermore, the reconstructor is unable to perform the collision attack because the secret’s hash value is shared among the players.

## 3 Preliminaries

The $QFT$, $QFT^{-1}$, Control-NOT $(CNOT)$ gate, and Shamir’s secret sharing, which will be used in the proposed $QSS$ protocol, are introduced here.

### 3.1 Quantum Fourier Transform

The $QFT$ [Nielsen2002], a unitary transform, is based on the quantum phenomenon and is an extension of the standard discrete Fourier transform. For $s\in\\{0,1,\dots d-1\\}$, the $QFT$ of a $d$-level quantum system is defined as follows:

$QFT:\ket{s}\rightarrow\frac{1}{\sqrt{d}}\sum_{q=0}^{d-1}e^{2\pi i\frac{s}{d}q}\ket{q}.$ (1)

The $QFT^{-1}$ is defined by

$QFT^{-1}:\ket{q}\rightarrow\frac{1}{\sqrt{d}}\sum_{s=0}^{d-1}e^{-2\pi i\frac{q}{d}s}\ket{s}.$ (2)

Further,

$\sum_{q=0}^{d-1}e^{2\pi i\frac{s}{d}q}=\begin{cases}0\leavevmode\nobreak\ \text{if}\leavevmode\nobreak\ s\neq 0\leavevmode\nobreak\ mod\leavevmode\nobreak\ d\\\ d\leavevmode\nobreak\ \text{if}\leavevmode\nobreak\ s=0\leavevmode\nobreak\ mod\leavevmode\nobreak\ d\end{cases}$ (3)

So,

$\begin{split}QFT^{-1}\Bigg{(}\frac{1}{\sqrt{d}}\sum_{q=0}^{d-1}e^{2\pi i\frac{s}{d}q}\ket{q}\Bigg{)}&=\frac{1}{\sqrt{d}}\sum_{q=0}^{d-1}e^{2\pi i\frac{s}{d}q}QFT^{-1}\ket{q}\\\ &=\frac{1}{d}\sum_{q=0}^{d-1}\ket{s}+\frac{1}{d}\sum_{k=0\wedge k\neq s}^{d-1}0.\ket{k}=\ket{s}\end{split}$ (4)

That is,

$QFT^{-1}(QFT\ket{s})=\ket{s}.$ (5)

### 3.2 Control-NOT $(CNOT)$ gate

The $CNOT$ gate [nielsen2002quantum] is a two-qubit gate with one control qubit and one target qubit. If the control qubit is set to $\ket{0}$, the $NOT$ gate is not applied to the target qubit; if the control qubit is set to $\ket{1}$, the $NOT$ gate is applied to the target qubit.

### 3.3 Shamir’s Secret Sharing

In Shamir’s secret sharing [shamir1979share], there are a dealer $\mathbb{D}$ and $n$ players $\mathcal{P}=\\{P_{1},P_{2},\dots P_{n}\\}$. The scheme consists of two phases:

#### 3.3.1 Secret Sharing Phase

In this phase, the dealer selects a polynomial $f(x)=S+a_{1}x+a_{2}x^{2}+\dots+a_{t-1}x^{t-1}$ of degree $(t-1)$, where $S$ is the secret and $a_{1},a_{2},\dots,a_{t-1}$ are the coefficients of the polynomial $f(x)$. The dealer computes $n$ shares and distributes them among the $n$ players; each player $P_{i}$ knows only $f(x_{i})$, where $i=1,2,\dots,n$.

#### 3.3.2 Secret Reconstruction Phase

Using $t$ shares of the secret and the Lagrange interpolation formula, $t$ players jointly reconstruct the secret in this phase:

$f(x)=\sum_{r=1}^{t}f(x_{r})\prod_{1\leq j\leq t,j\neq r}\frac{x-x_{j}}{x_{r}-x_{j}}$ (6)

To evaluate the polynomial at $x=0$, Eq. (6) can be simplified as

$\begin{split}f(0)&=\sum_{r=1}^{t}f(x_{r})\prod_{1\leq j\leq t,j\neq r}\frac{x_{j}}{x_{j}-x_{r}}\end{split}$ (7)

## 4 Proposed Method

We present a $d$-level $QSS$ protocol for sharing a secret that allows $t$ players to reconstruct the secret without the help of a trusted player.
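Before detailing the two phases, the classical machinery of Section 3.3, on which the protocol rests, can be made concrete. The following is a minimal Python sketch of share generation over $Z_{d}$ and reconstruction at $x=0$ via Eq. (7); the function names, the prime $d=7$, and the parameter choices are illustrative, not part of the protocol specification.

```python
import random

def make_shares(secret, t, n, d):
    """Sample f(x) = secret + a1*x + ... + a_{t-1}*x^{t-1} over Z_d and
    return the n points (x_i, f(x_i)) for x_i = 1..n (Section 3.3.1)."""
    coeffs = [secret] + [random.randrange(d) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, d) for k, c in enumerate(coeffs)) % d)
            for x in range(1, n + 1)]

def reconstruct(points, d):
    """Lagrange interpolation at x = 0 (Eq. (7)); d must be prime so that
    modular inverses exist in Z_d (pow(..., -1, d) needs Python 3.8+)."""
    total = 0
    for r, (xr, yr) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if j != r:
                num = (num * xj) % d
                den = (den * (xj - xr)) % d
        total = (total + yr * num * pow(den, -1, d)) % d
    return total

d = 7                                    # a prime, as required in Section 4.1
shares = make_shares(secret=5, t=3, n=5, d=d)
assert reconstruct(shares[:3], d) == 5   # any t = 3 of the n = 5 shares suffice
```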
In comparison to the existing $QSS$ protocols, i.e., the $(n,n)$-threshold $2$-level schemes and the $(t,n)$-threshold $d$-level schemes with a trusted player, this protocol is more secure, versatile, and practical. Furthermore, no information about the secret is revealed to any of the players. The $QSS$ protocol consists of two phases: secret sharing and secret reconstruction.

### 4.1 Secret Sharing Phase

In this phase, the dealer $\mathbb{D}$ shares the secret among the players $\mathcal{P}=\\{P_{1},P_{2},\dots P_{n}\\}$. Initially, the dealer $\mathbb{D}$ selects a prime $d$ such that $2\leq d\leq 2n$ and sets a finite field $Z_{d}$. Then, the dealer $\mathbb{D}$ selects a polynomial $f(x)=S+a_{1}x+a_{2}x^{2}+\dots+a_{t-1}x^{t-1}$ of degree $(t-1)$, where $S$ is the secret, $a_{1},a_{2},\dots,a_{t-1}$ are coefficients of the polynomial $f(x)\in Z_{d}$, and the symbol ${}^{\prime}+^{\prime}$ denotes addition modulo $d$. The dealer computes the classical shares $f(x_{i})$ and uses the BB84 protocol [bennett1984update] to encode these classical shares $f(x_{i})$ into qubit strings. The qubit string of $f(x_{i})$ is distributed to player $P_{i}$, who knows only his share $f(x_{i})$. In addition, the dealer $\mathbb{D}$ selects the $SHA1$ hash function [eastlake2001us] to compute the hash value $\mathcal{H}(S)$ and shares it among the $n$ players using the polynomial $g(x)=\mathcal{H}(S)+b_{1}x+b_{2}x^{2}+\dots+b_{t-1}x^{t-1}$. Player $P_{i}$ knows only the share $g(x_{i})$, where $i=1,2,\dots,n$.

### 4.2 Secret Reconstruction Phase

Suppose $\mathcal{Q}=\\{P_{1},P_{2}\dots P_{t}\\}$ is a qualified subset from all the qualified subsets, where the number of players in each qualified subset is $t$. The dealer $\mathbb{D}$ selects one player of the qualified subset $\mathcal{Q}$, here $P_{1}$, as the reconstructor. The reconstructor $P_{1}$ knows only his own share, nothing else. This reconstructor $P_{1}$ reconstructs the secret and the hash value. The process of reconstruction is as follows:

Step 1: Player $P_{r}$, $r=1,2,\dots,t$, calculates the shadow $(s_{r})$ of his share as follows:

$s_{r}=f(x_{r})\prod_{1\leq j\leq t,j\neq r}\frac{x_{j}}{x_{j}-x_{r}}\mod d$ (8)

Step 2: Player $P_{1}$ (the reconstructor) prepares the basis state $\ket{s_{1}}_{H}$ of $c$ qubits, where $s_{1}$ is his private shadow and $c=\lceil\log_{2}d\rceil$. Then, player $P_{1}$ applies the $QFT$ to the state $\ket{s_{1}}_{H}$, and the resultant state $\ket{\varphi_{1}}$ is

$\begin{split}\ket{\varphi_{1}}&=(QFT\ket{s_{1}}_{H})\\\ &=\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}e^{2\pi i\frac{s_{1}}{d}k}\ket{k}_{H}\end{split}$ (9)

Step 3: Player $P_{1}$ also prepares the $c$-qubit ancillary state $\ket{0}_{T}$ and then executes $CNOT^{\otimes c}$ operations on the combined state $\ket{\varphi_{1}}\ket{0}_{T}$, where the first $c$ qubits act as control and the second $c$ qubits as target. After the $CNOT^{\otimes c}$ operations, the state $\ket{\varphi_{1}}$ evolves into the entangled state $\ket{\varphi_{2}}$, where the subscript $H$ or $T$ denotes the home (non-transmitted) or transmitted register, respectively:

$\begin{split}\ket{\varphi_{2}}&=CNOT^{\otimes c}\ket{\varphi_{1}}\ket{0}_{T}\\\ &=\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}e^{2\pi i\frac{s_{1}}{d}k}\ket{k}_{H}\ket{k}_{T}\end{split}$ (10)
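Steps 1–3 can be mirrored numerically. A minimal sketch, assuming a fixed polynomial $f(x)=5+3x+2x^{2}$ over $Z_{7}$ and modelling the $c$-qubit registers $H$ and $T$ as one $d$-level system each (both assumptions of this sketch; `shadow` and `phi2` are illustrative names):

```python
import numpy as np

d = 7
points = [(1, 3), (2, 5), (3, 4)]   # t = 3 shares of f(x) = 5 + 3x + 2x^2 over Z_7

def shadow(points, r, d):
    """s_r of Eq. (8): the share times its Lagrange coefficient at x = 0, mod d."""
    xr, yr = points[r]
    num = den = 1
    for j, (xj, _) in enumerate(points):
        if j != r:
            num = (num * xj) % d
            den = (den * (xj - xr)) % d
    return (yr * num * pow(den, -1, d)) % d

shadows = [shadow(points, r, d) for r in range(3)]   # [2, 6, 4]; sum % 7 == 5

# Steps 2-3: |phi_2> of Eq. (10) carries amplitude e^{2 pi i s_1 k / d}/sqrt(d)
# on the diagonal branch |k>_H |k>_T
k = np.arange(d)
phi2 = np.zeros((d, d), complex)                     # axes: (H, T)
phi2[k, k] = np.exp(2j * np.pi * shadows[0] * k / d) / np.sqrt(d)
```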
Step 4: Player $P_{1}$ sends the ancillary state $\ket{k}_{T}$ (i.e., the second $c$ qubits) to player $P_{2}$ through the authenticated quantum channel.

Step 5: Player $P_{2}$ applies an oracle operator $C_{k}$ on $\ket{k}_{T}\ket{s_{2}}$, where $C_{k}$ is given by

$C_{k}:\ket{k}_{T}\ket{s_{2}}\rightarrow\ket{k}_{T}U^{k}\ket{s_{2}}$ (11)

with

$U\ket{s_{2}}=e^{2\pi i\frac{s_{2}}{d}}\ket{s_{2}}$ (12)

where $\ket{s_{2}}$ is an eigenvector of $U$ with eigenvalue $e^{2\pi i\frac{s_{2}}{d}}$. The combined quantum system of $P_{1}$ and $P_{2}$ is then

$\begin{split}\ket{\varphi_{3}}&=C_{k}\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}e^{2\pi i\frac{s_{1}}{d}k}\ket{k}_{H}\ket{k}_{T}\ket{s_{2}}\\\ &=\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}e^{2\pi i\frac{s_{1}+s_{2}}{d}k}\ket{k}_{H}\ket{k}_{T}\ket{s_{2}}\end{split}$ (13)

Step 6: Player $P_{2}$ sends the ancillary state $\ket{k}_{T}$ to player $P_{3}$ through an authenticated quantum channel and keeps $\ket{s_{2}}$ secret. Players $P_{3},\dots,P_{t}$ repeat the same process as $P_{2}$, so the oracle operation is performed $t-1$ times in total. If the $t$ players honestly perform the protocol, the combined quantum state becomes

$\ket{\varphi_{4}}=\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}e^{2\pi i\big{(}\frac{\sum_{r=1}^{t}s_{r}}{d}\big{)}k}\ket{k}_{H}\ket{k}_{T}\ket{s_{2}}\dots\ket{s_{t}}.$ (14)

Step 7: The ancillary state $\ket{k}_{T}$ is sent by $P_{t}$ back to $P_{1}$ through an authenticated quantum channel. Player $P_{1}$ again performs the $CNOT^{\otimes c}$ operation on his $2c$ qubits, where the first $c$ qubits act as control and the second $c$ qubits as target. The output state is

$\begin{split}\ket{\varphi_{5}}&=CNOT^{\otimes c}\ket{\varphi_{4}}=\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}e^{2\pi i\big{(}\frac{\sum_{r=1}^{t}s_{r}}{d}\big{)}k}\ket{k}_{H}\ket{0}_{T}\ket{s_{2}}\dots\ket{s_{t}}\end{split}$ (15)

Step 8: Player $P_{1}$ measures the second $c$ qubits (i.e., the ancillary state $\ket{0}_{T}$) in the computational basis. If the measurement outcome is $\ket{0}$, player $P_{1}$ continues the process; otherwise, he concludes that the protocol was executed with at least one corrupted player and aborts the protocol.

Step 9: Player $P_{1}$ applies $QFT^{-1}$ to the first $c$ qubits and measures the output to get the secret $f(0)^{\prime}=\sum_{r=1}^{t}s_{r}\mod d$.

Step 10: Finally, the $t$ players perform all the above nine steps again to obtain the hash value of the secret, and player $P_{1}$ gets $g(0)^{\prime}=\sum_{r=1}^{t}h_{r}\mod d$, where $h_{r}$ is the shadow of the hash-value share. Player $P_{1}$ uses the hash function $SHA1$ to compute the hash value $\mathcal{H}(f(0)^{\prime})$ and compares it with the hash value $g(0)^{\prime}$. If $\mathcal{H}(f(0)^{\prime})=g(0)^{\prime}$, player $P_{1}$ concludes that all $t$ players have performed the reconstruction phase honestly; otherwise, he concludes that there is at least one corrupted player.

## 5 Correctness Proof of the $(t,n)$ threshold $d$-level $QSS$

Here, we prove the correctness of the proposed $(t,n)$ threshold $d$-level $QSS$. We mainly focus on the correctness of Step 9 of the secret reconstruction phase.
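Before the formal argument, the phase bookkeeping of Steps 2–9 can be checked numerically. A minimal sketch, again modelling each $c$-qubit register as a single $d$-level system (an assumption of this sketch), with the shadows hard-coded from the earlier sketch; the noiseless sampling at the end, which mirrors the $8192$-shot runs reported in Section 6, is illustrative and not the paper’s implementation:

```python
import numpy as np

d, secret = 7, 5
shadows = [2, 6, 4]          # from the previous sketch; sum(shadows) % d == secret

k = np.arange(d)
# Steps 2-3: |phi_2> has amplitude e^{2 pi i s_1 k / d}/sqrt(d) on |k>_H |k>_T
state = np.zeros((d, d), complex)
state[k, k] = np.exp(2j * np.pi * shadows[0] * k / d) / np.sqrt(d)

# Steps 5-6: each remaining player's oracle C_k multiplies the |k>_T branch
# by the phase e^{2 pi i s_r k / d} (Eqs. (11)-(14))
for s_r in shadows[1:]:
    state[k, k] *= np.exp(2j * np.pi * s_r * k / d)

# Step 7: the second CNOT maps |k>_H |k>_T back to |k>_H |0>_T (Eq. (15)),
# leaving all amplitude on the home register
home = state[k, k].copy()

# Step 9: QFT^{-1} on the home register reveals sum(s_r) mod d (Eq. (17))
Finv = np.exp(-2j * np.pi * np.outer(k, k) / d) / np.sqrt(d)
probs = np.abs(Finv @ home) ** 2

rng = np.random.default_rng(0)
shots = rng.choice(d, size=8192, p=probs / probs.sum())
print(np.bincount(shots, minlength=d))   # all 8192 shots give outcome 5 = f(0)'
```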
###### Lemma 1
If $QFT^{-1}$ (as given in Equation 2) is applied to the first $c$ qubits, then the measurement of the output is the secret $(f(0)^{\prime})$.

###### Proof
Applying $QFT^{-1}$ to the first $c$ qubits yields the secret-recovery process given below. The original secret $f(0)^{\prime}$ can be calculated using Lagrange interpolation and Equation 7 as follows:

$\begin{split}f(0)^{\prime}&=f(x_{1})\prod_{1\leq j\leq t,j\neq 1}\frac{x_{j}}{x_{j}-x_{1}}+\dots+f(x_{t})\prod_{1\leq j\leq t,j\neq t}\frac{x_{j}}{x_{j}-x_{t}}\mod d\\\ &=(s_{1}+\dots+s_{t})\mod d\\\ &=(\sum_{r=1}^{t}s_{r}\mod d)\end{split}$ (16)

Player $P_{1}$ applies $QFT^{-1}$ to the first $c$ qubits:

$\begin{split}QFT^{-1}\Bigg{(}\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}e^{2\pi i\big{(}\frac{\sum_{r=1}^{t}s_{r}}{d}\big{)}k}\ket{k}_{H}\Bigg{)}&=\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}e^{2\pi i\big{(}\frac{\sum_{r=1}^{t}s_{r}}{d}\big{)}k}QFT^{-1}\ket{k}_{H}\\\ &=\Bigg{|}\sum_{r=1}^{t}s_{r}\leavevmode\nobreak\ mod\leavevmode\nobreak\ d\Bigg{>}_{H}+\frac{1}{d}\sum_{l=0}^{d-1}0.\ket{l}_{H}\\\ &=\Bigg{|}\sum_{r=1}^{t}s_{r}\leavevmode\nobreak\ mod\leavevmode\nobreak\ d\Bigg{>}_{H}=\ket{f(0)^{\prime}}_{H}\end{split}$ (17)

Therefore, if this protocol is honestly executed by the $t$ players, the reconstructor $P_{1}$ will get the original secret.

## 6 Simulation Results

In this protocol, the initiator $P_{1}$ applies the $QFT$ to the $c$-qubit state and executes the $CNOT$ gate. Then, the ancillary state $\ket{k}_{T}$ is sent to player $P_{2}$, who applies the oracle operator on $\ket{k}_{T}$. Thereafter, player $P_{2}$ sends $\ket{k}_{T}$ to $P_{3}$, who performs the same process. This process is performed $(t-1)$ times. After that, the ancillary state $\ket{k}_{T}$ is sent back to player $P_{1}$ by $P_{t}$. Player $P_{1}$ performs the $CNOT$ and $QFT^{-1}$ to get the multiplication. In the secure multiparty quantum multiplication, the Hadamard gate is taken to be the $QFT$. We have executed this quantum protocol for $(t,n)$-threshold secure multiparty multiplication with the following numbers of players and qubits:

* In simulations $1-3$, we considered three players with one qubit, three players with two qubits, and three players with three qubits, respectively, and obtained efficient results after taking $8192$ shots on average.
* In simulations $4-6$, we considered four players with one qubit, four players with two qubits, and four players with three qubits, respectively, and obtained efficient results after taking $8192$ shots on average.
* In simulations $7-9$, we considered fifteen players with one qubit, fifteen players with two qubits, and fifteen players with three qubits, respectively, and obtained efficient results after taking $8192$ shots on average.

In all cases, we obtained efficient results of multiplication after taking $8192$ shots on average.

## 7 Results and Discussion

In this section, we discuss the security and performance analysis of the proposed $(t,n)$ threshold $QSS$ protocol based on some properties.
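As a quantitative preface to the attack analysis, the claim used repeatedly below — that measuring the transmitted register reveals nothing about a shadow — can be checked numerically. A minimal sketch under the same $d$-level modelling assumption as the earlier sketches (`transmitted_probs` is an illustrative name):

```python
import numpy as np

def transmitted_probs(s1, d):
    """Outcome distribution when |k>_T of Eq. (10) is measured in the
    computational basis: the phases e^{2 pi i s_1 k / d} have modulus 1
    and therefore drop out of the probabilities."""
    k = np.arange(d)
    amps = np.exp(2j * np.pi * s1 * k / d) / np.sqrt(d)  # amplitude on |k>_H|k>_T
    return np.abs(amps) ** 2                             # marginal on the T register

d = 7
print(transmitted_probs(2, d))  # [1/7, 1/7, ..., 1/7]
print(transmitted_probs(5, d))  # identical: the outcome is independent of s_1
```

Every outcome $k$ occurs with probability $1/d$ regardless of $s_{1}$, which is exactly the $1/d$ figure quoted in the intercept-resend and intercept analyses below.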
### 7.1 Security Analysis

Here, we analyze outside attacks (an outside eavesdropper tries to steal the private information of all players) and participant attacks (attacks from one or more dishonest players). We discuss four types of outside attacks (Intercept-Resend $(IR)$, Intercept, Entangle-Measure $(EM)$, and Forgery) and two types of participant attacks (Collision and Collusion) [cai2019cryptanalysis; ting2009participant; Wang2008; wang2011security; wang2013cryptanalysis; wang2017security].

#### 7.1.1 Outside Attack

In this type of attack, an outside eavesdropper wants to steal the private information of all players. We discuss the Intercept-Resend $(IR)$, Intercept, Entangle-Measure $(EM)$, and Forgery attacks as follows.

##### Intercept-Resend $(IR)$ Attack:

In an intercept-resend attack, a player measures a quantum state sent by another player, replaces it with his own state, and then sends the replacement state on to the other players. In our proposed protocol, player $P_{1}$ sends the ancillary state $\ket{k}_{T}$ to the dishonest player $P_{2}$ through an authenticated quantum channel, and player $P_{2}$ wants to eavesdrop on $P_{1}$’s shadow $s_{1}$. If the dishonest player $P_{2}$ measures the ancillary state in the computational basis $\\{\ket{0},\ket{1},\dots,\ket{d-1}\\}$, he obtains some $\ket{l}_{T}$ with probability $1/d$, but the measurement outcome $k$ is totally independent of $P_{1}$’s shadow $s_{1}$. Further, player $P_{2}$ sends the state $\ket{k}_{T}$ to player $P_{3}$; however, $k$ does not carry any partial information about $P_{1}$’s shadow $s_{1}$. The dishonest player $P_{2}$ cannot get any information from the intercepted state, and similarly the dishonest player $P_{3}$ cannot get any information from the transmitted state $\ket{k}_{T}$. So, the intercept-resend attack is infeasible.

##### Intercept Attack:

In this attack, the dishonest player $P_{2}$ wants to eavesdrop on $P_{1}$’s shadow $s_{1}$. The dishonest player $P_{2}$ may measure the output of the unitary operator (the transformed state) because, based on the $QFT$, player $P_{2}$ knows that player $P_{1}$’s shadow state $\ket{s_{1}}$ has evolved into the ancillary state $\ket{k}_{T}$; so $P_{2}$ may perform $QFT^{-1}$ on the ancillary state $\ket{k}_{T}$ to try to reveal $s_{1}$. If the dishonest player $P_{2}$ measures the ancillary state in the computational basis $\\{\ket{0},\ket{1},\dots,\ket{d-1}\\}$, he obtains some $\ket{l}_{T}$ with probability $1/d$, but he cannot get $P_{1}$’s shadow, because the global information cannot be extracted from a limited number of qubits: the entangled system cannot be disentangled from a limited number of qubits. So, the attacker cannot get any information about $P_{1}$’s shadow.

##### Entangle-Measure $(EM)$ Attack:

The dishonest player $P_{2}$ may perform a more complicated entangle-measure attack. Player $P_{2}$ prepares an ancillary state $\ket{0}_{P_{2}}$ that gets entangled with the transmitted state $\ket{k}_{T}$ using local unitary operations. Then, player $P_{2}$ measures the entangled state to get partial information about player $P_{1}$’s shadow. After successful completion of the honesty test, it can easily be deduced that $\eta_{k}=1$.
After performing $\bar{U}_{TP_{2}}$, $P_{2}$ sends $\ket{k}_{T}$ back to $P_{1}$ and measures his ancillary system after player $P_{1}$ executes the $CNOT^{\otimes c}$ operation. If player $P_{2}$ measures the ancillary state $\ket{\phi(k)}_{P_{2}}$, he cannot get any information about $P_{1}$’s shadow $s_{1}$ because of the entanglement of $\ket{k}_{H}$ and $\ket{\phi(k)}_{P_{2}}$. So, this attack is also infeasible.

##### Forgery Attack:

In a forgery attack, participants execute the protocol with fake shares; preventing this is one of the important issues for a $QSS$ protocol. If any dishonest player performs the Pauli operator with a fake shadow, the original secret cannot be reconstructed correctly. In the proposed protocol, player $P_{1}$ uses the hash function $SHA1$ to compute the hash value $\mathcal{H}(f(0)^{\prime})$ and compares it with the hash value $g(0)^{\prime}$. If $\mathcal{H}(f(0)^{\prime})=g(0)^{\prime}$, then $P_{1}$ shares the secret with the other $t-1$ players; otherwise, $P_{1}$ realizes that at least one player performed the reconstruction phase dishonestly and terminates the reconstruction phase. So, the forgery attack is not possible in our quantum $(t,n)$ threshold $QSS$ protocol.

#### 7.1.2 Participant Attack

This type of attack is performed by one or more dishonest players to reveal the secret information.

##### Collision (attack from one dishonest player) Attack:

In a collision attack, the attacker attacks the hash function, exploiting two different inputs that produce the same hash value. Many existing $QSS$ protocols cannot prevent the collision attack. In [song2017t], Alice (the dealer) selects $Bob_{1}$ as a trusted reconstructor from the set of participants $\mathbb{B}=\\{Bob_{1},Bob_{2},\dots,Bob_{n}\\}$ and then selects the hash function $SHA1$ to compute the hash value of the secret (which is to be shared). Then, Alice sends this hash value to the trusted reconstructor $Bob_{1}$. At this point, $Bob_{1}$ can perform a collision attack to reveal the secret, so the security of their protocol depends on the trusted reconstructor $Bob_{1}$. In our protocol, the dealer $\mathbb{D}$ computes the hash value $\mathcal{H}(S)$ using the $SHA1$ hash function and shares it among the $n$ players. Therefore, the reconstructor $P_{1}$ does not have any information about the hash value, and he cannot perform the collision attack.

##### Collusion (attack from more than one dishonest player) Attack:

In a collusion attack, some players collude to obtain the shadow of another player’s share. In order to get the private information of $P_{e}$, players $P_{e-1}$ and $P_{e+1}$ perform the protocol dishonestly. In our proposed protocol, players $P_{e-1}$ and $P_{e+1}$ cannot perform the collusion attack, because the unitary operation is performed by each participant with his own private information. Moreover, this private information is not transmitted through a quantum channel.

### 7.2 Performance Analysis

We analyze and compare the performance of the proposed $QSS$ protocol with the existing $QSS$ protocols, i.e., Li et al.’s $QSS$ [bao2009threshold], Yang et al.’s $QSS$ [yang2013secret], Qin et al.’s $QSS$ [qin2015t], Song et al.’s $QSS$ [song2017t], and Qin et al.’s $QSS$ [Qin2018Multidimensional], in terms of three parameters: universality, cost, and attack. Li et al.’s $QSS$ protocol [bao2009threshold] is a $(t,n)$-threshold scheme, but it is not for $d$-level particles.
Yang et al.’s $QSS$ protocol [yang2013secret] is for $d$-level particles, but it is an $(n,n)$-threshold scheme. Qin et al.’s $QSS$ protocol [qin2015t] is a $(t,n)$-threshold scheme, but it is not for $d$-level particles. Song et al.’s $QSS$ protocol [song2017t] is for $d$-level particles and is also a $(t,n)$-threshold scheme; it can prevent the $IR$, $EM$, and forgery attacks, but it cannot prevent the collision attack. Qin et al.’s $QSS$ protocol [Qin2018Multidimensional] is for $d$-level particles and $cn$ qubits, but it is an $(n,n)$-threshold scheme.

We have also compared the proposed $QSS$ protocol with the existing protocols in terms of communication cost and computation cost. The communication cost is computed from the transmitted particles, i.e., message particles and decoy particles; the computation cost is computed from five parameters: $QFT$, $U$ operation, $QFT^{-1}$, measure operation, and hash operation. The costs are summarized below, where $z$ is the number of decoy particles:

| Protocol | $QFT$ | $U$ operations | $QFT^{-1}$ | Measurements | Hash operations | Message particles | Decoy particles |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Li et al. [bao2009threshold] | – | $t(2t-1)$ | – | $t$ | – | $t(t+1)$ | $z(t+1)$ |
| Yang et al. [yang2013secret] | $n$ | $n$ | – | $n$ | – | $n-1$ | – |
| Qin et al. [qin2015t] | – | $t(t+1)$ | – | – | – | $t(t+1)$ | $z(t+1)$ |
| Song et al. [song2017t] | $1$ | $t$ | $1$ | $1$ | $2$ | $t-1$ | – |
| Qin et al. [Qin2018Multidimensional] | $1$ | $t(t+1)+n$ | $1$ | $n$ | – | $n$ | $z(t+1)$ |
| Proposed | $1$ | $t-1$ | $1$ | – | $2$ | – | $t$ |

So, the complexity of our proposed protocol is lower than that of the existing $QSS$ protocols.

## 8 Conclusion

In this paper, we presented a secret-sharing protocol in which $t$ players can reconstruct the secret without the help of a trusted player. In comparison to existing $QSS$ protocols, our protocol is more secure, versatile, and practical. The reconstructor $P_{1}$ knows only his own share and nothing else; even the secret’s hash value is unknown to him. Since the reconstructor $P_{1}$ knows only his own share, he cannot perform the collision attack.

## Ethical Statement

This article does not contain any studies with human or animal subjects performed by any of the authors. The manuscript has been prepared following the instructions provided in the Authors Guidelines of the journal.

## Conflict of Interest

The authors declare that they have no conflict of interest.

## References

* (1) Bao-Kui, L., Yu-Guang, Y., Qiao-Yan, W.: Threshold quantum secret sharing of secure direct communication. Chinese Physics Letters 26(1), 010302 (2009)
* (2) Bennett, C.H., Brassard, G.: An update on quantum cryptography.
In: Workshop on the Theory and Application of Cryptographic Techniques, pp. 475–480. Springer (1984)
* (3) Cai, X.Q., Wang, T.Y., Wei, C.Y., Gao, F.: Cryptanalysis of multiparty quantum digital signatures. Quantum Information Processing 18(8), 252 (2019)
* (4) Charoghchi, S., Mashhadi, S.: Three (t, n)-secret image sharing schemes based on homogeneous linear recursion. Information Sciences 552, 220–243 (2021)
* (5) Chen, X.B., Sun, Y.R., Xu, G., Yang, Y.X.: Quantum homomorphic encryption scheme with flexible number of evaluator based on (k, n)-threshold quantum state sharing. Information Sciences (2019)
* (6) Chen, X.B., Wang, Y.L., Xu, G., Yang, Y.X.: Quantum network communication with a novel discrete-time quantum walk. IEEE Access 7, 13634–13642 (2019)
* (7) Dan, L., Shi, R.h., Zhang, S., Zhong, H.: Efficient anonymous roaming authentication scheme using certificateless aggregate signature in wireless network. Journal on Communications 37(7), 182 (2016)
* (8) Dehkordi, M.H., Mashhadi, S.: An efficient threshold verifiable multi-secret sharing. Computer Standards & Interfaces 30(3), 187–190 (2008)
* (9) Dehkordi, M.H., Mashhadi, S.: New efficient and practical verifiable multi-secret sharing schemes. Information Sciences 178(9), 2262–2274 (2008)
* (10) Dehkordi, M.H., Mashhadi, S.: Verifiable secret sharing schemes based on non-homogeneous linear recursions and elliptic curves. Computer Communications 31(9), 1777–1784 (2008)
* (11) Dehkordi, M.H., Mashhadi, S., Oraei, H.: A proactive multi stage secret sharing scheme for any given access structure. Wireless Personal Communications 104(1), 491–503 (2019)
* (12) Eastlake, D., Jones, P.: US Secure Hash Algorithm 1 (SHA1) (2001)
* (13) Gyongyosi, L., Imre, S.: Quantum circuit design for objective function maximization in gate-model quantum computers. Quantum Information Processing 18(7), 1–33 (2019)
* (14) Hao, C., Wenping, M.: (t, n) threshold quantum state sharing scheme based on linear equations and unitary operation. IEEE Photonics Journal 9(1), 1–7 (2017)
* (15) Hillery, M., Bužek, V., Berthiaume, A.: Quantum secret sharing. Physical Review A 59(3), 1829 (1999)
* (16) Kao, S.H., Hwang, T.: Comment on (t, n) threshold d-level quantum secret sharing. arXiv preprint arXiv:1803.00216 (2018)
* (17) Karimifard, Z., Mashhadi, S., Ebrahimi, B.D.: Semiquantum secret sharing using three particles without entanglement (2016)
* (18) Lau, H.K., Weedbrook, C.: Quantum secret sharing with continuous-variable cluster states. Physical Review A 88(4), 042313 (2013)
* (19) Lo, H.K., Spiller, T., Popescu, S.: Introduction to quantum computation and information. World Scientific (1998)
* (20) Lu, C., Miao, F., Hou, J., Meng, K.: Verifiable threshold quantum secret sharing with sequential communication. Quantum Information Processing 17(11), 310 (2018)
* (21) Luo, Z.y., Shi, R.h., Xu, M., Zhang, S.: A novel quantum solution to privacy-preserving nearest neighbor query in location-based services. International Journal of Theoretical Physics 57(4), 1049–1059 (2018)
* (22) Mashhadi, S.: Analysis of frame attack on Hsu et al.’s non-repudiable threshold multi-proxy multi-signature scheme with shared verification. Scientia Iranica 19(3), 674–679 (2012)
* (23) Mashhadi, S.: A novel secure self proxy signature scheme. IJ Network Security 14(1), 22–26 (2012)
* (24) Mashhadi, S.: A novel non-repudiable threshold proxy signature scheme with known signers.
IJ Network Security 15(4), 274–279 (2013)
* (25) Mashhadi, S.: Computationally secure multiple secret sharing: Models, schemes, and formal security analysis. ISeCure 7(2) (2015)
* (26) Mashhadi, S.: Analysis of warrant attacks on some threshold proxy signature schemes. Journal of Information Processing Systems 12(2), 249–262 (2016)
* (27) Mashhadi, S.: How to fairly share multiple secrets stage by stage. Wireless Personal Communications 90(1), 93–107 (2016)
* (28) Mashhadi, S.: Share secrets stage by stage with homogeneous linear feedback shift register in the standard model. Security and Communication Networks 9(17), 4495–4504 (2016)
* (29) Mashhadi, S.: New multi-stage secret sharing in the standard model. Information Processing Letters 127, 43–48 (2017)
* (30) Mashhadi, S.: Secure publicly verifiable and proactive secret sharing schemes with general access structure. Information Sciences 378, 99–108 (2017)
* (31) Mashhadi, S.: General secret sharing based on quantum Fourier transform. Quantum Information Processing 18(4), 114 (2019)
* (32) Mashhadi, S.: A CSA-secure multi-secret sharing scheme in the standard model. Journal of Applied Security Research 15(1), 84–95 (2020)
* (33) Mashhadi, S.: Improvement of a (t, n) threshold d-level quantum secret sharing scheme. Journal of Applied Security Research pp. 1–12 (2020)
* (34) Mashhadi, S.: Toward a formal proof for multi-secret sharing in the random oracle model. Information Security Journal: A Global Perspective 29(5), 244–249 (2020)
* (35) Mashhadi, S., Dehkordi, M.H.: Two verifiable multi secret sharing schemes based on nonhomogeneous linear recursion and LFSR public-key cryptosystem. Information Sciences 294, 31–40 (2015)
* (36) Mashhadi, S., Dehkordi, M.H., Kiamari, N.: Provably secure verifiable multi-stage secret sharing scheme based on monotone span program. IET Information Security 11(6), 326–331 (2017)
* (37) Nielsen, M.A., Chuang, I.: Quantum computation and quantum information (2002)
* (38) Peng, Z.w., Shi, R.h., Wang, P.h., Zhang, S.: A novel quantum solution to secure two-party distance computation. Quantum Information Processing 17(6), 1–12 (2018)
* (39) Qin, H., Tso, R., Dai, Y.: Multi-dimensional quantum state sharing based on quantum Fourier transform. Quantum Information Processing 17(3), 48 (2018)
* (40) Qin, H., Zhu, X., Dai, Y.: (t, n) threshold quantum secret sharing using the phase shift operation. Quantum Information Processing 14(8), 2997–3004 (2015)
* (41) Run-Hua, S., Liu-Sheng, H., Wei, Y., Hong, Z.: An efficient scheme for multiparty multi-particle state sharing. Communications in Theoretical Physics 54(1), 93 (2010)
* (42) Run-Hua, S., Liu-Sheng, H., Wei, Y., Hong, Z.: A novel multiparty quantum secret sharing scheme of secure direct communication based on Bell states and Bell measurements. Chinese Physics Letters 28(5), 050303 (2011)
* (43) Shamir, A.: How to share a secret. Communications of the ACM 22(11), 612–613 (1979)
* (44) Shi, R., Huang, L., Yang, W., Zhong, H.: Quantum secret sharing between multiparty and multiparty with Bell states and Bell measurements. SCIENCE CHINA Physics, Mechanics and Astronomy 53(12), 2238–2244 (2010)
* (45) Shi, R., Zhang, Y., Zhong, H., Cui, J., Zhang, S.: Data integrity checking protocol based on secure multiparty computation. In: Wireless Communications, Networking and Applications, pp. 873–882. Springer (2016)
* (46) Shi, R.H.: Efficient quantum protocol for private set intersection cardinality.
IEEE Access 6, 73102–73109 (2018)
* (47) Shi, R.h., Huang, L.s., Yang, W., Zhong, H.: Efficient symmetric five-party quantum state sharing of an arbitrary m-qubit state. International Journal of Theoretical Physics 50(11), 3329–3336 (2011)
* (48) Shi, R.h., Huang, L.s., Yang, W., Zhong, H.: Multi-party quantum state sharing of an arbitrary two-qubit state with Bell states. Quantum Information Processing 10(2), 231–239 (2011)
* (49) Shi, R.H., Huang, L.S., Yang, W., Zhong, H.: Novel and effective secret sharing scheme. Journal of China Institute of Communications 33(1), 10–16 (2012)
* (50) Shi, R.h., Mu, Y., Zhong, H., Cui, J., Zhang, S.: An efficient quantum scheme for private set intersection. Quantum Information Processing 15(1), 363–371 (2016)
* (51) Shi, R.h., Mu, Y., Zhong, H., Cui, J., Zhang, S.: Secure multiparty quantum computation for summation and multiplication. Scientific Reports 6(1), 1–9 (2016)
* (52) Shi, R.h., Mu, Y., Zhong, H., Zhang, S.: Quantum oblivious set-member decision protocol. Physical Review A 92(2), 022309 (2015)
* (53) Shi, R.h., Mu, Y., Zhong, H., Zhang, S.: Comment on “secure quantum private information retrieval using phase-encoded queries”. Physical Review A 94(6), 066301 (2016)
* (54) Shi, R.h., Mu, Y., Zhong, H., Zhang, S., Cui, J.: Quantum private set intersection cardinality and its application to anonymous authentication. Information Sciences 370, 147–158 (2016)
* (55) Shi, R.H., Zhang, S.: Quantum solution to a class of two-party private summation problems. Quantum Information Processing 16(9), 1–9 (2017)
* (56) Shi, R.H., Zhong, H.: Asymmetric multiparty-controlled teleportation of arbitrary n-qudit states using different quantum channels. In: International Conference on Theoretical and Mathematical Foundations of Computer Science, pp. 337–344. Springer (2011)
* (57) Shi, R.h., Zhong, H.: Multiparty quantum secret sharing with the pure entangled two-photon states. Quantum Information Processing 11(1), 161–169 (2012)
* (58) Shi, R.H., Zhong, H.: Multi-party quantum key agreement with Bell states and Bell measurements. Quantum Information Processing 12(2), 921–932 (2013)
* (59) Shi, R.h., Zhong, H., Zhang, S.: Comments on two schemes of identity-based user authentication and key agreement for mobile client–server networks. The Journal of Supercomputing 71(11), 4015–4018 (2015)
* (60) Song, X.L., Liu, Y.B., Deng, H.Y., Xiao, Y.G.: (t, n) threshold d-level quantum secret sharing. Scientific Reports 7(1), 6366 (2017)
* (61) Sun, Z., Song, L., Huang, Q., Yin, L., Long, G., Lu, J., Hanzo, L.: Toward practical quantum secure direct communication: A quantum-memory-free protocol and code design. IEEE Transactions on Communications 68(9), 5778–5792 (2020)
* (62) Sutradhar, K., Om, H.: Efficient quantum secret sharing without a trusted player. Quantum Information Processing 19(2), 1–15 (2020)
* (63) Sutradhar, K., Om, H.: A generalized quantum protocol for secure multiparty summation. IEEE Transactions on Circuits and Systems II: Express Briefs 67(12), 2978–2982 (2020)
* (64) Sutradhar, K., Om, H.: Hybrid quantum protocols for secure multiparty summation and multiplication. Scientific Reports 10(1), 1–9 (2020)
* (65) Sutradhar, K., Om, H.: An efficient simulation for quantum secure multiparty computation. Scientific Reports 11(1), 1–9 (2021)
* (66) Ting-Ting, S., Jie, Z., Fei, G., Qiao-Yan, W., Fu-Chen, Z.: Participant attack on quantum secret sharing based on entanglement swapping.
Chinese Physics B 18(4), 1333 (2009)
* (67) Wang, T.Y., Li, Y.P.: Cryptanalysis of dynamic quantum secret sharing. Quantum Information Processing 12(5), 1991–1997 (2013)
* (68) Wang, T.Y., Liu, Y.Z., Wei, C.Y., Cai, X.Q., Ma, J.F.: Security of a kind of quantum secret sharing with entangled states. Scientific Reports 7(1), 2485 (2017)
* (69) Wang, T.Y., Wen, Q.Y.: Security of a kind of quantum secret sharing with single photons. Quantum Information & Computation 11(5), 434–443 (2011)
* (70) Wang, T.y., Wen, Q.y., Gao, F., Lin, S., Zhu, F.c.: Cryptanalysis and improvement of multiparty quantum secret sharing schemes. Physics Letters A 373(1), 65–68 (2008)
* (71) Xu, G., Chen, X.B., Dou, Z., Li, J., Liu, X., Li, Z.: Novel criteria for deterministic remote state preparation via the entangled six-qubit state. Entropy 18(7), 267 (2016)
* (72) Xu, G., Xiao, K., Li, Z., Niu, X.X., Ryan, M.: Controlled secure direct communication protocol via the three-qubit partially entangled set of states. Comput. Mater. Continua 58(3), 809–827 (2019)
* (73) Xu, M., Shi, R.h., Luo, Z.y., Peng, Z.w.: Nearest private query based on quantum oblivious key distribution. Quantum Information Processing 16(12), 1–12 (2017)
* (74) Yang, W., Huang, L., Shi, R., He, L.: Secret sharing based on quantum Fourier transform. Quantum Information Processing 12(7), 2465–2474 (2013)
* (75) Zhang, R., Shi, R.h., Qin, J.q., Peng, Z.w.: An economic and feasible quantum sealed-bid auction protocol. Quantum Information Processing 17(2), 1–14 (2018)